# How random is `r/random`?
Reddit enforces a rate limit of 0.5 req/s (one request every 2 seconds).
## What a good response looks like (status code 302)
```
$ curl https://www.reddit.com/r/random
<html>
<head>
<title>302 Found</title>
</head>
<body>
<h1>302 Found</h1>
The resource was found at <a href="https://www.reddit.com/r/Amd/?utm_campaign=redirect&utm_medium=desktop&utm_source=reddit&utm_name=random_subreddit">https://www.reddit.com/r/Amd/?utm_campaign=redirect&utm_medium=desktop&utm_source=reddit&utm_name=random_subreddit</a>;
you should be redirected automatically.
</body>
</html>
```
## What a bad response looks like (status code 429)
```
$ curl https://www.reddit.com/r/random
<!doctype html>
<html>
<head>
<title>Too Many Requests</title>
<style>
body {
font: small verdana, arial, helvetica, sans-serif;
width: 600px;
margin: 0 auto;
}
h1 {
height: 40px;
background: transparent url(//www.redditstatic.com/reddit.com.header.png) no-repeat scroll top right;
}
</style>
</head>
<body>
<h1>whoa there, pardner!</h1>
<p>we're sorry, but you appear to be a bot and we've seen too many requests
from you lately. we enforce a hard speed limit on requests that appear to come
from bots to prevent abuse.</p>
<p>if you are not a bot but are spoofing one via your browser's user agent
string: please change your user agent string to avoid seeing this message
again.</p>
<p>please wait 4 second(s) and try again.</p>
<p>as a reminder to developers, we recommend that clients make no
more than <a href="http://github.com/reddit/reddit/wiki/API">one
request every two seconds</a> to avoid seeing this message.</p>
</body>
</html>
```
# What happens
GET --> 302 (redirect) --> 200 (subreddit)
I only want the name of the subreddit, so I don't need to follow the redirect.
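Since the name appears in the redirect target, one way to pull it out is from the redirect URL itself, sketched here against the example URL from the 302 response above (the code below scrapes the HTML body instead):

```python
# Hedged sketch: the subreddit name can also be read from the redirect URL
# (e.g. the Location header of the 302 response) instead of the HTML body.
location = ("https://www.reddit.com/r/Amd/?utm_campaign=redirect"
            "&utm_medium=desktop&utm_source=reddit&utm_name=random_subreddit")
name = location.split('/r/')[1].split('/')[0]
print(name)  # → Amd
```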
```
import pandas as pd
import requests
from time import sleep
from tqdm import tqdm
from random import random

def parse_http(req):
    """
    Returns the name of the subreddit from a request.
    If the status code isn't 302, returns "Error".
    """
    if req.status_code != 302:
        return "Error"
    start_idx = req.text.index('/r/') + len('/r/')
    end_idx = req.text.index('?utm_campaign=redirect') - 1
    return req.text[start_idx:end_idx]

sites = []
codes = []
headers = {
    'User-Agent': 'Mozilla/5.0'
}
# Works for 10, 100 @ 3 seconds / request
# Works for 10 @ 2 seconds / request
for _ in tqdm(range(1000), ascii=True):
    # Might have to mess with the User-Agent to look less like a bot
    # https://evanhahn.com/python-requests-library-useragent
    # Yeah the User-Agent says it's coming from python requests
    # Changing it fixed everything
    r = requests.get('https://www.reddit.com/r/random',
                     headers=headers,
                     allow_redirects=False)
    if r.status_code == 429:
        print("Got rate limit error")
    sites.append(parse_http(r))
    codes.append(r.status_code)
    # Jitter the sleep a bit to throw off bot detection
    sleep(2 + random())

# [print(code, site) for code, site in zip(codes, sites)];
for row in list(zip(codes, sites))[-10:]:
    print(row[0], row[1])

df = pd.DataFrame(list(zip(sites, codes)), columns=['subreddit', 'response_code'])
df.head()
df.info()

from time import time
fname = 'reddit_randomness_' + str(int(time())) + '.csv'
df.to_csv(fname, index=False)
```
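With the CSV saved, a quick way to eyeball how "random" the draws are is to count repeats. A minimal sketch on hypothetical data shaped like the DataFrame above (same column names):

```python
import pandas as pd

# Hypothetical sample of results, shaped like the notebook's DataFrame
df = pd.DataFrame({
    'subreddit': ['Amd', 'aww', 'Amd', 'pics', 'aww', 'Amd'],
    'response_code': [302] * 6,
})
# Count how often each subreddit was drawn among successful (302) requests
counts = df.loc[df.response_code == 302, 'subreddit'].value_counts()
print(counts)
```

If `r/random` drew uniformly over all of Reddit's subreddits, repeats in only 1000 draws would be rare; frequent repeats would point to a much smaller candidate pool.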
# A 🤗 tour of transformer applications
In this notebook we take a tour of transformer applications. The transformer architecture is very versatile and allows us to perform many NLP tasks with only minor modifications, which is why it has been applied to a wide range of tasks such as classification, named entity recognition, and translation.
## Pipeline
We experiment with models for these tasks using the high-level `pipeline` API. The pipeline takes care of all preprocessing and returns cleaned-up predictions. It is primarily used for inference, where we apply fine-tuned models to new examples.
<img src="https://github.com/huggingface/workshops/blob/main/machine-learning-tokyo/images/pipeline.png?raw=1" alt="Alt text that describes the graphic" title="Title text" width=800>
```
from IPython.display import YouTubeVideo
YouTubeVideo('1pedAIvTWXk')
```
## Setup
Before we start we need to make sure we have the transformers library installed as well as the sentencepiece tokenizer which we'll need for some models.
```
%%capture
!pip install transformers
!pip install sentencepiece
```
Furthermore, we create a textwrapper to format long texts nicely.
```
import textwrap
wrapper = textwrap.TextWrapper(width=80, break_long_words=False, break_on_hyphens=False)
```
## Classification
We start by setting up an example text that we would like to analyze with a transformer model. This looks like your standard customer feedback from a transformer:
```
text = """Dear Amazon, last week I ordered an Optimus Prime action figure \
from your online store in Germany. Unfortunately, when I opened the package, \
I discovered to my horror that I had been sent an action figure of Megatron \
instead! As a lifelong enemy of the Decepticons, I hope you can understand my \
dilemma. To resolve the issue, I demand an exchange of Megatron for the \
Optimus Prime figure I ordered. Enclosed are copies of my records concerning \
this purchase. I expect to hear from you soon. Sincerely, Bumblebee."""
print(wrapper.fill(text))
```
One of the most common tasks in NLP and especially when dealing with customer texts is _sentiment analysis_. We would like to know if a customer is satisfied with a service or product and potentially aggregate the feedback across all customers for reporting.
For text classification the model gets all the inputs and makes a single prediction as shown in the following example:
<img src="https://github.com/huggingface/workshops/blob/main/machine-learning-tokyo/images/clf_arch.png?raw=1" alt="Alt text that describes the graphic" title="Title text" width=600>
We can achieve this by setting up a `pipeline` object which wraps a transformer model. When initializing it we need to specify the task. Sentiment analysis is a subfield of text classification where a single label (e.g. positive or negative) is assigned to a whole text.
```
from transformers import pipeline
sentiment_pipeline = pipeline('text-classification')
```
You may see a warning message: we did not specify which model the pipeline should use, so it loads a default one. The `distilbert-base-uncased-finetuned-sst-2-english` model is a small BERT variant trained on [SST-2](https://paperswithcode.com/sota/sentiment-analysis-on-sst-2-binary), a sentiment analysis dataset.
You'll notice that the first time you run the pipeline, the model is downloaded from the 🤗 Hub! On subsequent runs the cached model is used.
Now we are ready to run our example through pipeline and look at some predictions:
```
sentiment_pipeline(text)
```
The model predicts negative sentiment with high confidence, which makes sense. You can see that the pipeline returns a list of dicts with the predictions. We can also pass several texts at the same time, in which case the list contains one dict per text.
## Named entity recognition
Let's see if we can do something a little more sophisticated. Instead of just finding the overall sentiment let's see if we can extract named entities such as organizations, locations, or individuals from the text. This task is called named entity recognition (NER). Instead of predicting just a class for the whole text a class is predicted for each token, thus this task belongs to the category of token classification:
<img src="https://github.com/huggingface/workshops/blob/main/machine-learning-tokyo/images/ner_arch.png?raw=1" alt="Alt text that describes the graphic" title="Title text" width=550>
Again, we just load a pipeline for the NER task without specifying a model. This loads a default BERT model that has been trained on the [CoNLL-2003](https://huggingface.co/datasets/conll2003) dataset.
```
ner_pipeline = pipeline('ner')
```
When we pass our text through the model we get a long list of dicts, each corresponding to one detected entity. Since multiple tokens can correspond to a single entity, we can apply an aggregation strategy that merges entities if the same class appears in consecutive tokens.
```
entities = ner_pipeline(text, aggregation_strategy="simple")
print(entities)
```
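The merging idea behind `aggregation_strategy="simple"` can be sketched in plain Python (a toy illustration, not the actual 🤗 implementation, which also handles sub-word tokens and scores):

```python
# Toy sketch: merge consecutive tokens that share the same entity class.
def merge_consecutive(tagged):
    groups = []
    for word, label in tagged:
        if groups and groups[-1][1] == label:
            # same class as the previous token: extend the current group
            groups[-1] = (groups[-1][0] + " " + word, label)
        else:
            groups.append((word, label))
    return groups

print(merge_consecutive([("Optimus", "MISC"), ("Prime", "MISC"), ("Germany", "LOC")]))
# → [('Optimus Prime', 'MISC'), ('Germany', 'LOC')]
```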
Let's clean up the outputs a bit:
```
for entity in entities:
    print(f"{entity['word']}: {entity['entity_group']} ({entity['score']:.2f})")
```
It seems that the model found most of the named entities but was confused about the class of the Transformers characters. This is no surprise since the original dataset probably did not contain many of them. For this reason it makes sense to further fine-tune a model on your own dataset!
## Question-answering
We have now seen an example of text and token classification using transformers. However, there are more interesting tasks we can use transformers for. One of them is question-answering. In this task the model is given a question and a context and needs to find the answer to the question within the context. This problem can be rephrased into a classification problem: For each token the model needs to predict whether it is the start or the end of the answer. In the end we can extract the answer by looking at the span between the token with the highest start probability and highest end probability:
<img src="https://github.com/huggingface/workshops/blob/main/machine-learning-tokyo/images/qa_arch.png?raw=1" alt="Alt text that describes the graphic" title="Title text" width=600>
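The span-extraction step can be sketched with toy start/end scores (the numbers below are hypothetical; the real pipeline works on logits over all tokens of the context):

```python
import numpy as np

# Toy scores: one start score and one end score per token (hypothetical values)
tokens = ["an", "exchange", "of", "Megatron"]
start_scores = np.array([0.1, 0.7, 0.1, 0.1])
end_scores = np.array([0.05, 0.1, 0.15, 0.7])

start = int(start_scores.argmax())  # most likely answer start
end = int(end_scores.argmax())      # most likely answer end
answer = " ".join(tokens[start:end + 1])
print(answer)  # → exchange of Megatron
```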
You can imagine that this requires quite a bit of pre- and post-processing logic. Good thing that the pipeline takes care of all that!
```
qa_pipeline = pipeline("question-answering")
```
This default model is trained on the canonical [SQuAD dataset](https://huggingface.co/datasets/squad). Let's see if we can ask it what the customer wants:
```
question = "What does the customer want?"
outputs = qa_pipeline(question=question, context=text)
outputs
question2 = "How much does the product cost?"
outputs2 = qa_pipeline(question=question2, context=text)
outputs2
```
Awesome, that sounds about right!
## Summarization
Let's see if we can go beyond these natural language understanding tasks (NLU) where BERT excels and delve into the generative domain. Note that generation is much more expensive since we usually generate one token at a time and need to run this several times.
<img src="https://github.com/huggingface/workshops/blob/main/machine-learning-tokyo/images/gen_steps.png?raw=1" alt="Alt text that describes the graphic" title="Title text" width=600>
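The token-by-token loop can be sketched with a stand-in for the model (the `fake_next_token` lookup below is purely illustrative):

```python
# Toy greedy-decoding sketch: each new token costs another "forward pass",
# which is why generation is much slower than single-pass classification.
def fake_next_token(prefix):
    # stand-in for a model call; maps a prefix to its next token
    continuation = {
        "": "I",
        "I": "want",
        "I want": "Optimus",
        "I want Optimus": "<eos>",
    }
    return continuation[" ".join(prefix)]

tokens = []
while True:
    nxt = fake_next_token(tokens)
    if nxt == "<eos>":  # stop token ends the generation loop
        break
    tokens.append(nxt)
print(" ".join(tokens))  # → I want Optimus
```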
A popular task involving generation is summarization. Let's see if we can use a transformer to generate a summary for us:
```
summarization_pipeline = pipeline("summarization")
```
This model was trained on the [CNN/Dailymail dataset](https://huggingface.co/datasets/cnn_dailymail) to summarize news articles.
```
outputs = summarization_pipeline(text, max_length=45, clean_up_tokenization_spaces=True)
print(wrapper.fill(outputs[0]['summary_text']))
```
## Translation
But what if there is no model in the language of my data? You can still try to translate the text. The Helsinki NLP team has provided over 1000 language pair models for translation. Here we load one that translates English to Japanese:
```
translator = pipeline("translation_en_to_ja", model="Helsinki-NLP/opus-tatoeba-en-ja")
```
Let's translate a text to Japanese:
```
text = 'At the MLT workshop in Tokyo we gave an introduction about Transformers.'
outputs = translator(text, clean_up_tokenization_spaces=True)
print(wrapper.fill(outputs[0]['translation_text']))
```
We can see that the text is clearly not perfectly translated, but the core meaning stays the same. Another cool application of translation models is data augmentation via backtranslation!
## Custom Model
As a last example let's have a look at a cool application showing the versatility of transformers: zero-shot classification. In zero-shot classification the model receives a text and a list of candidate labels and determines which labels are compatible with the text. Instead of having fixed classes this allows for flexible classification without any labelled data! Usually this is a good first baseline!
```
zero_shot_classifier = pipeline(
    "zero-shot-classification", model="vicgalle/xlm-roberta-large-xnli-anli"
)
```
Let's have a look at an example:
```
text = '東京のMLTワークショップで,トランスフォーマーについて紹介しました.'
classes = ['Japan', 'Switzerland', 'USA']
zero_shot_classifier(text, classes)
```
This seems to have worked really well on this short example. Naturally, for longer and more domain specific examples this approach might suffer.
## More pipelines
There are many more pipelines that you can experiment with. Look at the following list for an overview:
```
from transformers import pipelines
for task in pipelines.SUPPORTED_TASKS:
    print(task)
```
Transformers not only work for NLP but can also be applied to other modalities. Let's have a look at a few.
### Computer vision
Recently, transformer models have also entered computer vision. Check out the DETR model on the [Hub](https://huggingface.co/facebook/detr-resnet-101-dc5):
<img src="https://github.com/huggingface/workshops/blob/main/machine-learning-tokyo/images/object_detection.png?raw=1" alt="Alt text that describes the graphic" title="Title text" width=400>
### Audio
Another promising area is audio processing. In speech-to-text especially there have been some promising advancements recently. See for example the [wav2vec2 model](https://huggingface.co/facebook/wav2vec2-base-960h):
<img src="https://github.com/huggingface/workshops/blob/main/machine-learning-tokyo/images/speech2text.png?raw=1" alt="Alt text that describes the graphic" title="Title text" width=400>
### Table QA
Finally, a lot of real-world data is still in the form of tables. Being able to query tables is very useful, and with [TAPAS](https://huggingface.co/google/tapas-large-finetuned-wtq) you can do tabular question-answering:
<img src="https://github.com/huggingface/workshops/blob/main/machine-learning-tokyo/images/tapas.png?raw=1" alt="Alt text that describes the graphic" title="Title text" width=400>
## Cache
Whenever we load a new model from the Hub it is cached on the machine you are running on. If you run these examples on Colab this is not an issue, since the storage is not persistent and will be cleared after your session anyway. However, if you run this notebook on your laptop you might have just filled several GB of your hard drive. By default the cache is saved in the folder `~/.cache/huggingface/transformers`. Make sure to clear it from time to time if your hard drive starts to fill up.
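A quick way to check how much space the cache takes (assuming the default location; if you set `HF_HOME` or a custom cache directory the path will differ):

```python
from pathlib import Path

# Sum file sizes under the (assumed) default Hugging Face cache folder
cache_dir = Path.home() / ".cache" / "huggingface"
total = (
    sum(f.stat().st_size for f in cache_dir.rglob("*") if f.is_file())
    if cache_dir.exists()
    else 0
)
print(f"Cache size: {total / 1e9:.2f} GB")
```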
# Titania = CLERK MOTEL
On Bumble, the Queen of Fairies and the Queen of Bees got together to find some other queens.
* Given
* Queen of Fairies
* Queen of Bees
* Solutions
* C [Ellery Queen](https://en.wikipedia.org/wiki/Ellery_Queen) = TDDTNW M UPZTDO
* L Queen of Hearts = THE L OF HEARTS
* E Queen Elizabeth = E ELIZABETH II
* R Steve McQueen = STEVE MC R MOVIES
* K Queen Latifah = K LATIFAH ALBUMS
* meta
```
C/M L/O
E/T R/E
K/L
```
```
import forge
from puzzle.puzzlepedia import puzzlepedia
puzzle = puzzlepedia.parse("""
LIT NPGRU IRL GWOLTNW
LIT ENTTJ MPVVFU GWOLTNW
LIT TEWYLFRU MNPOO GWOLTNW
LIT OFRGTOT LCFU GWOLTNW
LIT PNFEFU PV TZFD
""", hint="cryptogram", threshold=1)
# LIT NPGRU IRL GWOLTNW
# THE ROMAN HAT MYSTERY
# LIT ENTTJ MPVVFU GWOLTNW
# THE GREEK COFFIN MYSTERY
# LIT TEWYLFRU MNPOO GWOLTNW
# THE EGYPTIAN CROSS MYSTERY
# LIT OFRGTOT LCFU GWOLTNW
# THE SIAMESE TWIN MYSTERY
# LIT PNFEFU PV TZFD
# THE ORIGIN OF EVIL
# TDDTNW M UPZTDO
# ELLERY C NOVELS
import forge
from puzzle.puzzlepedia import puzzlepedia
puzzle = puzzlepedia.parse("""
KQLECDP
NDWSDNLSI
ZOMXFUSLDI
LZZ BFPN PNDFQ NDMWI
YOMRFUS KMQW
""", hint="cryptogram")
# Queen of Hearts
# THELOFHEARTS
# PNDOLZNDMQPI
# CROQUET
# KQLECDP
# HEDGEHOGS
# NDWSDNLSI
# FLAMINGOES
# ZOMXFUSLDI
# OFF WITH THEIR HEADS
# LZZ BFPN PNDFQ NDMWI
# BLAZING CARD
# YOMRFUS KMQW
import forge
from puzzle.puzzlepedia import puzzlepedia
puzzle = puzzlepedia.parse("""
ZOXMNRBFGP DGQGXT
XYIBNK
DINRXT XFGIQTK
QYRBTKL ITNBRNRB PYRGIXF
YXTGR QNRTI
""", hint="cryptogram")
# TQN?GZTLF
# Queen Elizabeth
# EELIZABETHII
#
# BUCKINGHAM PALACE
# ZOXMNRBFGP DGQGXT
# CORGIS
# XYIBNK
# PRINCE CHARLES
# DINRXT XFGIQTK
# LONGEST-REIGNING MONARCH
# QYRBTKL ITNBRNRB PYRGIXF
# OCEAN LINER
# YXTGR QNRTI
import forge
from puzzle.puzzlepedia import puzzlepedia
puzzle = puzzlepedia.parse("""
LUF ZTYSWDWMFSL VFQFS
LUF YEFTL FVMTRF
LUF LPXFEWSY WSDFESP
RTRWJJPS
LUF MWSMWSSTLW OWC
""", hint="cryptogram", threshold=1)
# Steve McQueen
# STEVEMCRMOVIES
# VLFQFZMEZPQWFV
# THE MAGNIFICENT SEVEN
# LUF ZTYSWDWMFSL VFQFS
# THE GREAT ESCAPE
# LUF YEFTL FVMTRF
# THE TOWERING INFERNO
# LUF LPXFEWSY WSDFESP
# PAPILLON
# RTRWJJPS
# THE CINCINNATI KID
# LUF MWSMWSSTLW OWC
import forge
from puzzle.puzzlepedia import puzzlepedia
puzzle = puzzlepedia.parse("""
HZRWPO FY Z BDBRZ
IQZVL PODTH
FPGOP DH RNO VFWPR
RNO GZHZ FXOHB ZQIWU
SOPBFHZ
""", hint="cryptogram", threshold=1)
# Queen Latifah
# LQZRDYZNZQIWUB
# KLATIFAHALBUMS
# NATURE OF A SISTA
# HZRWPO FY Z BDBRZ
# BLACK REIGN
# IQZVL PODTH
# ORDER IN THE COURT
# FPGOP DH RNO VFWPR
# THE DANA OVENS ALBUM
# RNO GZHZ FXOHB ZQIWU
# PERSONA
# SOPBFHZ
import forge
from puzzle.puzzlepedia import puzzlepedia
puzzle = puzzlepedia.parse("""
LQZRDYZNZQIWUB
PNDOLZNDMQPI
TDDTNWMUPZTDO
TTQNJGZTLFNN
VLFQFZMEZPQWFV
""", hint="cryptogram")
################
# LQZRDYZNZQIWUB
# KLATIFAHALBUMS = K / L
################
# PNDOLZNDMQPI
# THELOFHEARTS = L / O
################
# TDDTNWMUPZTDO
# ELLERYCNOVELS = C / M
################
# TTQNJGZTLFNN
# EELIZABETHII = E / T
################
# VLFQFZMEZPQWFV
# STEVEMCRMOVIES = R / E
################
```
# Contrasts Overview
```
from __future__ import print_function
import numpy as np
import statsmodels.api as sm
```
This document is based heavily on this excellent resource from UCLA http://www.ats.ucla.edu/stat/r/library/contrast_coding.htm
A categorical variable of K categories, or levels, usually enters a regression as a sequence of K-1 dummy variables. This amounts to a linear hypothesis on the level means. That is, each test statistic for these variables amounts to testing whether the mean for that level is statistically significantly different from the mean of the base category. This dummy coding is called Treatment coding in R parlance, and we will follow this convention. There are, however, different coding methods that amount to different sets of linear hypotheses.
In fact, the dummy coding is not technically a contrast coding. This is because the dummy variables add to one and are not functionally independent of the model's intercept. On the other hand, a set of *contrasts* for a categorical variable with `k` levels is a set of `k-1` functionally independent linear combinations of the factor level means that are also independent of the sum of the dummy variables. The dummy coding isn't wrong *per se*. It captures all of the coefficients, but it complicates matters when the model assumes independence of the coefficients such as in ANOVA. Linear regression models do not assume independence of the coefficients and thus dummy coding is often the only coding that is taught in this context.
To have a look at the contrast matrices in Patsy, we will use data from UCLA ATS. First let's load the data.
#### Example Data
```
import pandas as pd
url = 'https://stats.idre.ucla.edu/stat/data/hsb2.csv'
hsb2 = pd.read_table(url, delimiter=",")
hsb2.head(10)
```
It will be instructive to look at the mean of the dependent variable, write, for each level of race (1 = Hispanic, 2 = Asian, 3 = African American, 4 = Caucasian).
```
hsb2.groupby('race')['write'].mean()
```
#### Treatment (Dummy) Coding
Dummy coding is likely the most well known coding scheme. It compares each level of the categorical variable to a base reference level. The base reference level is the value of the intercept. It is the default contrast in Patsy for unordered categorical factors. The Treatment contrast matrix for race would be
```
from patsy.contrasts import Treatment
levels = [1,2,3,4]
contrast = Treatment(reference=0).code_without_intercept(levels)
print(contrast.matrix)
```
Here we used `reference=0`, which implies that the first level, Hispanic, is the reference category against which the other level effects are measured. As mentioned above, the columns do not sum to zero and are thus not independent of the intercept. To be explicit, let's look at how this would encode the `race` variable.
```
hsb2.race.head(10)
print(contrast.matrix[hsb2.race-1, :][:20])
sm.categorical(hsb2.race.values)
```
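The "columns do not sum to zero" point can be checked numerically with hand-built 4-level coding matrices (a small NumPy sketch; Sum coding itself is covered further below):

```python
import numpy as np

# Treatment (dummy) coding: zero row for the reference level, identity elsewhere
treatment = np.vstack([np.zeros((1, 3)), np.eye(3)])
# Sum (deviation) coding: identity for the first k-1 levels, a -1 row for the last
sum_code = np.vstack([np.eye(3), -np.ones((1, 3))])

print(treatment.sum(axis=0))  # → [1. 1. 1.]  not independent of the intercept
print(sum_code.sum(axis=0))   # → [0. 0. 0.]  a true contrast
```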
This is a bit of a trick, as the `race` category conveniently maps to zero-based indices. If it does not, this conversion happens under the hood, so this won't work in general but is nonetheless a useful exercise to fix ideas. The cell below illustrates the regression output using the Treatment contrast defined above:
```
from statsmodels.formula.api import ols
mod = ols("write ~ C(race, Treatment)", data=hsb2)
res = mod.fit()
print(res.summary())
```
We explicitly gave the contrast for race; however, since Treatment is the default, we could have omitted this.
### Simple Coding
Like Treatment Coding, Simple Coding compares each level to a fixed reference level. However, with simple coding, the intercept is the grand mean of all the levels of the factors. Patsy doesn't have the Simple contrast included, but you can easily define your own contrasts. To do so, write a class that contains a code_with_intercept and a code_without_intercept method that returns a patsy.contrast.ContrastMatrix instance
```
from patsy.contrasts import ContrastMatrix

def _name_levels(prefix, levels):
    return ["[%s%s]" % (prefix, level) for level in levels]

class Simple(object):
    def _simple_contrast(self, levels):
        nlevels = len(levels)
        contr = -1. / nlevels * np.ones((nlevels, nlevels - 1))
        contr[1:][np.diag_indices(nlevels - 1)] = (nlevels - 1.) / nlevels
        return contr

    def code_with_intercept(self, levels):
        contrast = np.column_stack((np.ones(len(levels)),
                                    self._simple_contrast(levels)))
        return ContrastMatrix(contrast, _name_levels("Simp.", levels))

    def code_without_intercept(self, levels):
        contrast = self._simple_contrast(levels)
        return ContrastMatrix(contrast, _name_levels("Simp.", levels[:-1]))

hsb2.groupby('race')['write'].mean().mean()

contrast = Simple().code_without_intercept(levels)
print(contrast.matrix)

mod = ols("write ~ C(race, Simple)", data=hsb2)
res = mod.fit()
print(res.summary())
```
### Sum (Deviation) Coding
Sum coding compares the mean of the dependent variable for a given level to the overall mean of the dependent variable over all the levels. That is, it uses contrasts between each of the first k-1 levels and level k. In this example, level 1 is compared to all the others, level 2 to all the others, and level 3 to all the others.
```
from patsy.contrasts import Sum
contrast = Sum().code_without_intercept(levels)
print(contrast.matrix)
mod = ols("write ~ C(race, Sum)", data=hsb2)
res = mod.fit()
print(res.summary())
```
This corresponds to a parameterization that forces all the coefficients to sum to zero. Notice that the intercept here is the grand mean where the grand mean is the mean of means of the dependent variable by each level.
```
hsb2.groupby('race')['write'].mean().mean()
```
### Backward Difference Coding
In backward difference coding, the mean of the dependent variable for a level is compared with the mean of the dependent variable for the prior level. This type of coding may be useful for a nominal or an ordinal variable.
```
from patsy.contrasts import Diff
contrast = Diff().code_without_intercept(levels)
print(contrast.matrix)
mod = ols("write ~ C(race, Diff)", data=hsb2)
res = mod.fit()
print(res.summary())
```
For example, here the coefficient on level 1 is the mean of `write` at level 2 compared with the mean at level 1, i.e.,
```
res.params["C(race, Diff)[D.1]"]
hsb2.groupby('race').mean()["write"][2] - \
hsb2.groupby('race').mean()["write"][1]
```
### Helmert Coding
Our version of Helmert coding is sometimes referred to as reverse Helmert coding: the mean of the dependent variable for a level is compared to the mean of the dependent variable over all previous levels. Hence the name 'reverse', sometimes applied to differentiate it from forward Helmert coding. This comparison does not make much sense for a nominal variable such as race, but we would use the Helmert contrast like so:
```
from patsy.contrasts import Helmert
contrast = Helmert().code_without_intercept(levels)
print(contrast.matrix)
mod = ols("write ~ C(race, Helmert)", data=hsb2)
res = mod.fit()
print(res.summary())
```
To illustrate, the comparison on level 4 is the mean of the dependent variable at the previous three levels subtracted from the mean at level 4:
```
grouped = hsb2.groupby('race')
grouped.mean()["write"][4] - grouped.mean()["write"][:3].mean()
```
As you can see, these are only equal up to a constant. Other versions of the Helmert contrast give the actual difference in means. Regardless, the hypothesis tests are the same.
```
k = 4
1./k * (grouped.mean()["write"][k] - grouped.mean()["write"][:k-1].mean())
k = 3
1./k * (grouped.mean()["write"][k] - grouped.mean()["write"][:k-1].mean())
```
### Orthogonal Polynomial Coding
The coefficients taken on by polynomial coding for `k=4` levels are the linear, quadratic, and cubic trends in the categorical variable. The categorical variable here is assumed to be represented by an underlying, equally spaced numeric variable. Therefore, this type of encoding is used only for ordered categorical variables with equal spacing. In general, the polynomial contrast produces polynomials of order `k-1`. Since `race` is not an ordered factor variable let's use `read` as an example. First we need to create an ordered categorical from `read`.
```
hsb2['readcat'] = np.asarray(pd.cut(hsb2.read, bins=3))
hsb2.groupby('readcat').mean()['write']
from patsy.contrasts import Poly
levels = hsb2.readcat.unique().tolist()
contrast = Poly().code_without_intercept(levels)
print(contrast.matrix)
mod = ols("write ~ C(readcat, Poly)", data=hsb2)
res = mod.fit()
print(res.summary())
```
As you can see, readcat has a significant linear effect on the dependent variable `write` but not a significant quadratic or cubic effect.
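For intuition, orthogonal polynomial contrasts like the ones Patsy produces can be reconstructed by orthogonalizing a Vandermonde matrix of (assumed equally spaced) level scores — a sketch, up to sign and scaling conventions:

```python
import numpy as np

k = 4                                      # number of levels
x = np.arange(k, dtype=float)              # equally spaced level scores
vander = np.vander(x, k, increasing=True)  # columns: 1, x, x^2, x^3
q, _ = np.linalg.qr(vander)                # orthonormalize the columns
poly_contrasts = q[:, 1:]                  # drop the constant column
# The linear/quadratic/cubic columns are orthonormal:
print(np.round(poly_contrasts.T @ poly_contrasts, 8))
```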
# Gym environment with scikit-decide tutorial: Continuous Mountain Car
In this notebook we tackle the continuous mountain car problem taken from [OpenAI Gym](https://gym.openai.com/), a toolkit for developing environments, usually to be solved by Reinforcement Learning (RL) algorithms.
Continuous Mountain Car, a standard testing domain in RL, is a problem in which an under-powered car must drive up a steep hill.
<div align="middle">
<video controls autoplay preload
src="https://gym.openai.com/videos/2019-10-21--mqt8Qj1mwo/MountainCarContinuous-v0/original.mp4">
</video>
</div>
Note that we use here the *continuous* version of the mountain car because
it has a *shaped* or *dense* reward (i.e. not sparse), which solvers can exploit successfully, as opposed to the other "Mountain Car" environments.
For reminder, a sparse reward is a reward which is null almost everywhere, whereas a dense or shaped reward has more meaningful values for most transitions.
This problem has been chosen for two reasons:
- Show how scikit-decide can be used to solve Gym environments (the de-facto standard in the RL community),
- Highlight that by doing so, you will be able to use not only solvers from the RL community (like the ones in [stable_baselines3](https://github.com/DLR-RM/stable-baselines3) for example), but also other solvers coming from other communities like genetic programming and planning/search (use of an underlying search graph) that can be very efficient.
Therefore in this notebook we will go through the following steps:
- Wrap a Gym environment in a scikit-decide domain;
- Use a classical RL algorithm like PPO to solve our problem;
- Give CGP (Cartesian Genetic Programming) a try on the same problem;
- Finally use IW (Iterated Width) coming from the planning community on the same problem.
```
import os
from time import sleep
from typing import Callable, Optional
import gym
import matplotlib.pyplot as plt
from IPython.display import clear_output
from stable_baselines3 import PPO
from skdecide import Solver
from skdecide.hub.domain.gym import (
GymDiscreteActionDomain,
GymDomain,
GymPlanningDomain,
GymWidthDomain,
)
from skdecide.hub.solver.cgp import CGP
from skdecide.hub.solver.iw import IW
from skdecide.hub.solver.stable_baselines import StableBaseline
# choose standard matplotlib inline backend to render plots
%matplotlib inline
```
When running this notebook on remote servers like with Colab or Binder, rendering of gym environment will fail as no actual display device exists. Thus we need to start a virtual display to make it work.
```
if "DISPLAY" not in os.environ:
    import pyvirtualdisplay

    _display = pyvirtualdisplay.Display(visible=False, size=(1400, 900))
    _display.start()
```
## About Continuous Mountain Car problem
In this problem, an under-powered car must drive up a steep hill.
The agent (a car) is started at the bottom of a valley. For any given
state the agent may choose to accelerate to the left, right or cease
any acceleration.
### Observations
- Car Position [-1.2, 0.6]
- Car Velocity [-0.07, +0.07]
### Action
- the power coefficient [-1.0, 1.0]
### Goal
The car position is more than 0.45.
### Reward
A reward of 100 is awarded if the agent reaches the flag (position = 0.45) on top of the mountain.
The reward is decreased by the amount of energy consumed at each step.
### Starting State
The position of the car is assigned a uniform random value in [-0.6, -0.4].
The starting velocity of the car is always assigned to 0.
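The reward description above can be sketched as a function (constants based on the classic Gym implementation of `MountainCarContinuous-v0`; exact values may differ across Gym versions):

```python
def sketch_reward(action_power: float, goal_reached: bool) -> float:
    """Hedged sketch of the shaped reward: goal bonus minus an energy penalty."""
    r = 100.0 if goal_reached else 0.0
    r -= 0.1 * action_power ** 2  # energy penalty applied every step
    return r

print(sketch_reward(1.0, False))  # full throttle, goal not reached → -0.1
print(sketch_reward(0.5, True))   # half throttle on the goal step → 99.975
```

Because the penalty is dense, a solver gets a gradient signal on every step instead of only at the goal.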
## Wrap Gym environment in a scikit-decide domain
We choose the gym environment we would like to use.
```
ENV_NAME = "MountainCarContinuous-v0"
```
We define a domain factory using `GymDomain` proxy available in scikit-decide which will wrap the Gym environment.
```
domain_factory = lambda: GymDomain(gym.make(ENV_NAME))
```
Here is a screenshot of such an environment.
Note: We close the domain straight away to avoid leaving the OpenGL pop-up window open on local Jupyter sessions.
```
domain = domain_factory()
domain.reset()
plt.imshow(domain.render(mode="rgb_array"))
plt.axis("off")
domain.close()
```
## Solve with Reinforcement Learning (StableBaseline + PPO)
We first try a solver coming from the Reinforcement Learning community that makes use of [stable_baselines3](https://github.com/DLR-RM/stable-baselines3), which gives access to a lot of RL algorithms.
Here we choose [Proximal Policy Optimization (PPO)](https://stable-baselines3.readthedocs.io/en/master/modules/ppo.html) solver. It directly optimizes the weights of the policy network using stochastic gradient ascent. See more details in stable baselines [documentation](https://stable-baselines3.readthedocs.io/en/master/modules/ppo.html) and [original paper](https://arxiv.org/abs/1707.06347).
### Check compatibility
We check the compatibility of the domain with the chosen solver.
```
domain = domain_factory()
assert StableBaseline.check_domain(domain)
domain.close()
```
### Solver instantiation
```
solver = StableBaseline(
PPO, "MlpPolicy", learn_config={"total_timesteps": 10000}, verbose=True
)
```
### Training solver on domain
```
GymDomain.solve_with(solver, domain_factory)
```
### Rolling out a solution
We can use the trained solver to roll out an episode to see if this is actually solving the problem at hand.
For educational purposes, we define here our own rollout function (which will probably be needed if you want to actually use the solver in a real case). If you want to take a look at the (more complex) one already implemented in the library, see the `rollout()` function in the [utils.py](https://github.com/airbus/scikit-decide/blob/master/skdecide/utils.py) module.
By default we display the solution in a matplotlib figure. If you only need to check whether the goal is reached, you can specify `render=False`. In this case, the rollout is greatly sped up and a message is still printed at the end of the process indicating success or not, together with the number of steps required.
```
def rollout(
domain: GymDomain,
solver: Solver,
max_steps: int,
pause_between_steps: Optional[float] = 0.01,
render: bool = True,
):
"""Roll out one episode in a domain according to the policy of a trained solver.
Args:
domain: the maze domain to solve
solver: a trained solver
max_steps: maximum number of steps allowed to reach the goal
pause_between_steps: time (s) paused between agent movements.
No pause if None.
render: if True, the rollout is rendered in a matplotlib figure as an animation;
if False, speed up a lot the rollout.
"""
# Initialize episode
solver.reset()
observation = domain.reset()
# Initialize image
if render:
plt.ioff()
fig, ax = plt.subplots(1)
ax.axis("off")
plt.ion()
img = ax.imshow(domain.render(mode="rgb_array"))
display(fig)
# loop until max_steps or goal is reached
for i_step in range(1, max_steps + 1):
if pause_between_steps is not None:
sleep(pause_between_steps)
# choose action according to solver
action = solver.sample_action(observation)
        # get corresponding outcome
outcome = domain.step(action)
observation = outcome.observation
# update image
if render:
img.set_data(domain.render(mode="rgb_array"))
fig.canvas.draw()
clear_output(wait=True)
display(fig)
# final state reached?
if outcome.termination:
break
# close the figure to avoid jupyter duplicating the last image
if render:
plt.close(fig)
# goal reached?
is_goal_reached = observation[0] >= 0.45
if is_goal_reached:
print(f"Goal reached in {i_step} steps!")
else:
print(f"Goal not reached after {i_step} steps!")
return is_goal_reached, i_step
```
We create a domain for the rollout and close it at the end. Otherwise an OpenGL pop-up window stays open, at least on local Jupyter sessions.
```
domain = domain_factory()
try:
rollout(
domain=domain,
solver=solver,
max_steps=999,
pause_between_steps=None,
render=True,
)
finally:
domain.close()
```
We can see that PPO does not find a solution to the problem. This is mainly due to the way the reward is computed: negative reward accumulates as long as the goal is not reached, which encourages the agent to stop moving.
Even if we increase the training time, the problem persists. (You can test this by increasing the `total_timesteps` parameter in the solver definition.)
Actually, typical RL algorithms like PPO are a good fit for domains with "well-shaped" rewards (guiding towards the goal), but can struggle in sparse or "badly-shaped" reward environments like Mountain Car Continuous.
We will see in the next sections that non-RL methods can overcome this issue.
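To see concretely why this reward shape discourages exploration, here is a minimal sketch. It assumes the approximate MountainCarContinuous reward structure (a small energy penalty of roughly $-0.1 a^2$ per step and a large bonus only at the goal; the exact constants are an assumption, not taken from this notebook):

```python
import random

def step_reward(action, goal_reached=False):
    # Assumed reward structure: a small energy penalty each step,
    # a big bonus only when the goal is reached.
    return 100.0 if goal_reached else -0.1 * action ** 2

random.seed(0)
n_steps = 999

# A policy that never moves pays no energy penalty...
idle_return = sum(step_reward(0.0) for _ in range(n_steps))

# ...while a random exploratory policy accumulates negative reward
# unless it actually reaches the goal.
explore_return = sum(step_reward(random.uniform(-1, 1)) for _ in range(n_steps))

print(idle_return > explore_return)  # True: doing nothing looks better
```

Unless the agent stumbles on the goal during training, gradient updates push it towards the locally better "do nothing" behavior.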
### Cleaning up
Some solvers need proper cleaning before being deleted.
```
solver._cleanup()
```
Note that this is automatically done if you use the solver within a `with` statement. The syntax would look something like:
```python
with solver_factory() as solver:
MyDomain.solve_with(solver, domain_factory)
rollout(domain=domain, solver=solver)
```
## Solve with Cartesian Genetic Programming (CGP)
CGP (Cartesian Genetic Programming) is a form of genetic programming that uses a graph representation (2D grid of nodes) to encode computer programs.
See [Miller, Julian. (2003). Cartesian Genetic Programming. 10.1007/978-3-642-17310-3.](https://www.researchgate.net/publication/2859242_Cartesian_Genetic_Programming) for more details.
Pros:
+ ability to customize the set of atomic functions used by CGP (e.g. to inject some domain knowledge)
+ ability to inspect the final formula found by CGP (no black box)
Cons:
- the fitness function of CGP is defined by the rewards, so it can fail to solve problems with sparse rewards
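As a purely illustrative sketch (not the implementation used by scikit-decide), a CGP individual can be seen as a feed-forward graph of nodes drawn from a set of atomic functions. The function set, genome encoding, and indices below are made up for the example:

```python
import math

# A tiny, hypothetical CGP-style genome: a list of nodes, each node is
# (function_name, input_indices). Indices below n_inputs refer to program
# inputs; larger indices refer to earlier nodes (feed-forward graph).
FUNCTIONS = {
    "add": lambda a, b: a + b,
    "mul": lambda a, b: a * b,
    "asin": lambda a, b: math.asin(max(-1.0, min(1.0, a))),  # b unused
}

def evaluate(genome, inputs, output_index):
    values = list(inputs)
    for func_name, (i, j) in genome:
        values.append(FUNCTIONS[func_name](values[i], values[j]))
    return values[output_index]

# Encodes asin(x0 * x1) + x0 with 2 inputs:
genome = [
    ("mul", (0, 1)),   # node 2 = x0 * x1
    ("asin", (2, 2)),  # node 3 = asin(node 2)
    ("add", (3, 0)),   # node 4 = node 3 + x0
]
print(evaluate(genome, [0.5, 1.0], output_index=4))  # asin(0.5) + 0.5
```

Evolution then mutates the function and connection genes; because the final graph is a readable formula, the result can be inspected directly (the "no black box" advantage listed above).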
### Check compatibility
We check the compatibility of the domain with the chosen solver.
```
domain = domain_factory()
assert CGP.check_domain(domain)
domain.close()
```
### Solver instantiation
```
solver = CGP("TEMP_CGP", n_it=25, verbose=True)
```
### Training solver on domain
```
GymDomain.solve_with(solver, domain_factory)
```
### Rolling out a solution
We use the same rollout function as for the PPO solver.
```
domain = domain_factory()
try:
rollout(
domain=domain,
solver=solver,
max_steps=999,
pause_between_steps=None,
render=True,
)
finally:
domain.close()
```
CGP seems to do well on this problem. Indeed the presence of trigonometric functions ($asin$, $acos$, and $atan$) in its base set of atomic functions makes it suitable for modelling this kind of pendular motion.
***Warning***: In some cases, CGP does not actually find a solution; as there is randomness involved, this cannot be ruled out. Running multiple episodes can sometimes solve the problem. With bad luck, you may even have to train the solver again.
```
for i_episode in range(10):
print(f"Episode #{i_episode}")
domain = domain_factory()
try:
rollout(
domain=domain,
solver=solver,
max_steps=999,
pause_between_steps=None,
render=False,
)
finally:
domain.close()
```
### Cleaning up
```
solver._cleanup()
```
## Solve with Classical Planning (IW)
Iterated Width (IW) is a width-based search algorithm that builds a graph on demand while pruning non-novel nodes.
In order to handle continuous domains, a state encoding specific to continuous state variables dynamically and adaptively discretizes them, so as to build a compact graph based on intervals (rather than a naive grid of discrete point values).
The novelty measure discards intervals that are included in previously explored intervals, thus favoring the extension of the state variable intervals.
See https://www.ijcai.org/proceedings/2020/578 for more details.
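The core novelty test of IW(1) can be sketched as follows (illustrative only, not the scikit-decide implementation): a state is considered novel, and hence kept, if and only if at least one of its (feature index, feature value) atoms has never been seen before.

```python
# Minimal sketch of an IW(1) novelty check over discretized state features.
def make_novelty_checker():
    seen = set()

    def is_novel(features):
        # A state's atoms are its (feature_index, feature_value) pairs.
        atoms = {(i, v) for i, v in enumerate(features)}
        new_atoms = atoms - seen
        seen.update(new_atoms)
        return bool(new_atoms)

    return is_novel

is_novel = make_novelty_checker()
print(is_novel([0, 1]))  # True: every atom is new
print(is_novel([0, 2]))  # True: feature 1 takes a new value
print(is_novel([0, 1]))  # False: all atoms already seen, so the state is pruned
```

In the continuous setting described above, the atoms are built from dynamically refined intervals instead of exact values, which is what the BEE features provide.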
### Prepare the domain for IW
We need to wrap the Gym environment in a domain with finer characteristics so that IW can be used on it. More precisely, it needs the methods inherited from `GymPlanningDomain`, `GymDiscreteActionDomain` and `GymWidthDomain`. In addition, we will need to provide IW with a state features function to dynamically increase state variable intervals. For Gym domains, we use Boundary Extension Encoding (BEE) features as explained in the [paper](https://www.ijcai.org/proceedings/2020/578) mentioned above. This is implemented as the `bee2_features()` method in `GymWidthDomain`, which our domain class will inherit.
```
class D(GymPlanningDomain, GymWidthDomain, GymDiscreteActionDomain):
pass
class GymDomainForWidthSolvers(D):
def __init__(
self,
gym_env: gym.Env,
set_state: Callable[[gym.Env, D.T_memory[D.T_state]], None] = None,
get_state: Callable[[gym.Env], D.T_memory[D.T_state]] = None,
termination_is_goal: bool = True,
continuous_feature_fidelity: int = 5,
discretization_factor: int = 3,
branching_factor: int = None,
max_depth: int = 1000,
) -> None:
GymPlanningDomain.__init__(
self,
gym_env=gym_env,
set_state=set_state,
get_state=get_state,
termination_is_goal=termination_is_goal,
max_depth=max_depth,
)
GymDiscreteActionDomain.__init__(
self,
discretization_factor=discretization_factor,
branching_factor=branching_factor,
)
GymWidthDomain.__init__(
self, continuous_feature_fidelity=continuous_feature_fidelity
)
gym_env._max_episode_steps = max_depth
```
We redefine accordingly the domain factory.
```
domain4width_factory = lambda: GymDomainForWidthSolvers(gym.make(ENV_NAME))
```
### Check compatibility
We check the compatibility of the domain with the chosen solver.
```
domain = domain4width_factory()
assert IW.check_domain(domain)
domain.close()
```
### Solver instantiation
As explained earlier, we use the Boundary Extension Encoding state features `bee2_features` so that IW can dynamically increase state variable intervals. In other domains, other state features might be more suitable.
```
solver = IW(
state_features=lambda d, s: d.bee2_features(s),
node_ordering=lambda a_gscore, a_novelty, a_depth, b_gscore, b_novelty, b_depth: a_novelty
> b_novelty,
parallel=False,
debug_logs=False,
domain_factory=domain4width_factory,
)
```
### Training solver on domain
```
GymDomainForWidthSolvers.solve_with(solver, domain4width_factory)
```
### Rolling out a solution
**Disclaimer:** This rollout can be a bit painful to watch on local Jupyter sessions. Indeed, IW creates copies of the environment at each step, which makes a new OpenGL window pop up and then close each time.
We have to slightly modify the rollout function, as observations for the new domain are now wrapped in a `GymDomainProxyState` to make them serializable. To access the underlying numpy array, we need to look at `observation._state`.
```
def rollout_iw(
domain: GymDomain,
solver: Solver,
max_steps: int,
pause_between_steps: Optional[float] = 0.01,
render: bool = False,
):
"""Roll out one episode in a domain according to the policy of a trained solver.
Args:
domain: the maze domain to solve
solver: a trained solver
max_steps: maximum number of steps allowed to reach the goal
pause_between_steps: time (s) paused between agent movements.
No pause if None.
render: if True, the rollout is rendered in a matplotlib figure as an animation;
if False, speed up a lot the rollout.
"""
# Initialize episode
solver.reset()
observation = domain.reset()
# Initialize image
if render:
plt.ioff()
fig, ax = plt.subplots(1)
ax.axis("off")
plt.ion()
img = ax.imshow(domain.render(mode="rgb_array"))
display(fig)
# loop until max_steps or goal is reached
for i_step in range(1, max_steps + 1):
if pause_between_steps is not None:
sleep(pause_between_steps)
# choose action according to solver
action = solver.sample_action(observation)
        # get corresponding outcome
outcome = domain.step(action)
observation = outcome.observation
# update image
if render:
img.set_data(domain.render(mode="rgb_array"))
fig.canvas.draw()
clear_output(wait=True)
display(fig)
# final state reached?
if outcome.termination:
break
# close the figure to avoid jupyter duplicating the last image
if render:
plt.close(fig)
# goal reached?
is_goal_reached = observation._state[0] >= 0.45
if is_goal_reached:
print(f"Goal reached in {i_step} steps!")
else:
print(f"Goal not reached after {i_step} steps!")
return is_goal_reached, i_step
domain = domain4width_factory()
try:
rollout_iw(
domain=domain,
solver=solver,
max_steps=999,
pause_between_steps=None,
render=True,
)
finally:
domain.close()
```
IW works especially well on mountain car.
Indeed we need to increase the kinetic+potential energy to reach the goal, which amounts to increasing as much as possible the values of the state variables (position and velocity). This is exactly what IW is designed to do (trying to explore novel states, which here means states with higher position or velocity).
As a consequence, IW can find an optimal strategy in a few seconds (whereas in most cases PPO and CGP cannot find optimal strategies in the same computation time).
### Cleaning up
```
solver._cleanup()
```
## Conclusion
We saw that, thanks to scikit-decide, it is possible to apply solvers from different fields and communities (Reinforcement Learning, Genetic Programming, and Planning) to an OpenAI Gym environment.
Even though the domain used here is more classical for the RL community, the solvers from other communities performed far better. In particular, the IW algorithm was able to find an efficient solution in a very short time.
Timing
------
Quickly time a single line.
```
import math
import ubelt as ub
timer = ub.Timer('Timer demo!', verbose=1)
with timer:
math.factorial(100000)
```
Robust Timing and Benchmarking
------------------------------
Easily do robust timings on existing blocks of code by simply indenting
them. The quick and dirty way just requires one indent.
```
import math
import ubelt as ub
for _ in ub.Timerit(num=200, verbose=3):
math.factorial(10000)
```
Loop Progress
-------------
``ProgIter`` is a (mostly) drop-in alternative to
`tqdm <https://pypi.python.org/pypi/tqdm>`__.
*The advantage of ``ProgIter`` is that it does not use any python threading*,
and therefore can be safer with code that makes heavy use of multiprocessing.
(Note: ProgIter is now a standalone module: ``pip install progiter``)
```
import ubelt as ub
import math
for n in ub.ProgIter(range(7500)):
math.factorial(n)
import ubelt as ub
import math
for n in ub.ProgIter(range(7500), freq=2, adjust=False):
math.factorial(n)
# Note that forcing freq=2 all the time comes at a performance cost
# The default adjustment algorithm causes almost no overhead
>>> import ubelt as ub
>>> def is_prime(n):
... return n >= 2 and not any(n % i == 0 for i in range(2, n))
>>> for n in ub.ProgIter(range(1000), verbose=2):
>>> # do some work
>>> is_prime(n)
```
Caching
-------
Cache intermediate results in a script with minimal boilerplate.
```
import ubelt as ub
cfgstr = 'repr-of-params-that-uniquely-determine-the-process'
cacher = ub.Cacher('test_process', cfgstr)
data = cacher.tryload()
if data is None:
myvar1 = 'result of expensive process'
myvar2 = 'another result'
data = myvar1, myvar2
cacher.save(data)
myvar1, myvar2 = data
```
Hashing
-------
The ``ub.hash_data`` function constructs a hash corresponding to a (mostly)
arbitrary ordered python object. A common use case for this function is
to construct the ``cfgstr`` mentioned in the example for ``ub.Cacher``.
Instead of returning a hex string, ``ub.hash_data`` encodes the hash
digest using the 26 lowercase letters in the roman alphabet. This makes
the result easy to use as a filename suffix.
```
import ubelt as ub
data = [('arg1', 5), ('lr', .01), ('augmenters', ['flip', 'translate'])]
ub.hash_data(data)
import ubelt as ub
data = [('arg1', 5), ('lr', .01), ('augmenters', ['flip', 'translate'])]
ub.hash_data(data, hasher='sha512', base='abc')
```
Command Line Interaction
------------------------
The builtin Python ``subprocess.Popen`` module is great, but it can be a
bit clunky at times. The ``os.system`` command is easy to use, but it
doesn't have much flexibility. The ``ub.cmd`` function aims to fix this.
It is as simple to run as ``os.system``, but it returns a dictionary
containing the return code, standard out, standard error, and the
``Popen`` object used under the hood.
```
import ubelt as ub
info = ub.cmd('cmake --version')
# Quickly inspect and parse output of a command
print(info['out'])
# The info dict contains other useful data
print(ub.repr2({k: v for k, v in info.items() if 'out' != k}))
# Also possible to simultaneously capture and display output in realtime
info = ub.cmd('cmake --version', tee=1)
# tee=True is equivalent to using verbose=1, but there is also verbose=2
info = ub.cmd('cmake --version', verbose=2)
# and verbose=3
info = ub.cmd('cmake --version', verbose=3)
```
Cross-Platform Resource and Cache Directories
---------------------------------------------
If you have an application which writes configuration or cache files,
the standard place to dump those files differs depending if you are on
Windows, Linux, or Mac. UBelt offers unified functions for determining
what these paths are.
The ``ub.ensure_app_cache_dir`` and ``ub.ensure_app_resource_dir``
functions find the correct platform-specific location for these files
and ensure that the directories exist. (Note: replacing "ensure" with
"get" will simply return the path, but not ensure that it exists)
The resource root directory is ``~/AppData/Roaming`` on Windows,
``~/.config`` on Linux and ``~/Library/Application Support`` on Mac. The
cache root directory is ``~/AppData/Local`` on Windows, ``~/.cache`` on
Linux and ``~/Library/Caches`` on Mac.
```
import ubelt as ub
print(ub.shrinkuser(ub.ensure_app_cache_dir('my_app')))
```
Downloading Files
-----------------
The function ``ub.download`` provides a simple interface to download a
URL and save its data to a file.
The function ``ub.grabdata`` works similarly to ``ub.download``, but
whereas ``ub.download`` will always re-download the file,
``ub.grabdata`` will check if the file exists and only re-download it if
it needs to.
New in version 0.4.0: both functions now accept the ``hash_prefix`` keyword
argument, which if specified will check that the hash of the file matches the
provided value. The ``hasher`` keyword argument can be used to change which
hashing algorithm is used (it defaults to ``"sha512"``).
```
>>> import ubelt as ub
>>> url = 'http://i.imgur.com/rqwaDag.png'
>>> fpath = ub.download(url, verbose=0)
>>> print(ub.shrinkuser(fpath))
>>> import ubelt as ub
>>> url = 'http://i.imgur.com/rqwaDag.png'
>>> fpath = ub.grabdata(url, verbose=0, hash_prefix='944389a39')
>>> print(ub.shrinkuser(fpath))
try:
ub.grabdata(url, verbose=0, hash_prefix='not-the-right-hash')
except Exception as ex:
print('type(ex) = {!r}'.format(type(ex)))
```
# Dictionary Tools
```
import ubelt as ub
item_list = ['ham', 'jam', 'spam', 'eggs', 'cheese', 'bannana']
groupid_list = ['protein', 'fruit', 'protein', 'protein', 'dairy', 'fruit']
groups = ub.group_items(item_list, groupid_list)
print(ub.repr2(groups, nl=1))
import ubelt as ub
item_list = [1, 2, 39, 900, 1232, 900, 1232, 2, 2, 2, 900]
ub.dict_hist(item_list)
import ubelt as ub
items = [0, 0, 1, 2, 3, 3, 0, 12, 2, 9]
ub.find_duplicates(items, k=2)
import ubelt as ub
dict_ = {'K': 3, 'dcvs_clip_max': 0.2, 'p': 0.1}
subdict_ = ub.dict_subset(dict_, ['K', 'dcvs_clip_max'])
print(subdict_)
import ubelt as ub
dict_ = {1: 'a', 2: 'b', 3: 'c'}
print(list(ub.dict_take(dict_, [1, 2, 3, 4, 5], default=None)))
import ubelt as ub
dict_ = {'a': [1, 2, 3], 'b': []}
newdict = ub.map_vals(len, dict_)
print(newdict)
import ubelt as ub
mapping = {0: 'a', 1: 'b', 2: 'c', 3: 'd'}
ub.invert_dict(mapping)
import ubelt as ub
mapping = {'a': 0, 'A': 0, 'b': 1, 'c': 2, 'C': 2, 'd': 3}
ub.invert_dict(mapping, unique_vals=False)
```
AutoDict - Autovivification
---------------------------
While the ``collections.defaultdict`` is nice, it is sometimes more
convenient to have an infinitely nested dictionary of dictionaries.
(But be careful, you may start to write in Perl)
```
>>> import ubelt as ub
>>> auto = ub.AutoDict()
>>> print('auto = {!r}'.format(auto))
>>> auto[0][10][100] = None
>>> print('auto = {!r}'.format(auto))
>>> auto[0][1] = 'hello'
>>> print('auto = {!r}'.format(auto))
```
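A similar auto-vivifying behavior can be sketched with the standard library alone, although the resulting object is a ``defaultdict`` rather than a ``ub.AutoDict``:

```python
from collections import defaultdict

def autodict():
    # Each missing key creates another auto-vivifying dictionary.
    return defaultdict(autodict)

auto = autodict()
auto[0][10][100] = None   # intermediate levels spring into existence
auto[0][1] = 'hello'
print(auto[0][1])  # -> 'hello'
```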
String-based imports
--------------------
Ubelt contains functions to import modules dynamically without using the
python ``import`` statement. While ``importlib`` exists, the ``ubelt``
implementation is simpler to use and does not have the disadvantage of
breaking ``pytest``.
Note ``ubelt`` simply provides an interface to this functionality, the
core implementation is in ``xdoctest``.
```
>>> import ubelt as ub
>>> module = ub.import_module_from_path(ub.truepath('~/code/ubelt/ubelt'))
>>> print('module = {!r}'.format(module))
>>> module = ub.import_module_from_name('ubelt')
>>> print('module = {!r}'.format(module))
>>> modpath = ub.util_import.__file__
>>> print(ub.modpath_to_modname(modpath))
>>> modname = ub.util_import.__name__
>>> assert ub.truepath(ub.modname_to_modpath(modname)) == modpath
```
Horizontal String Concatenation
-------------------------------
Sometimes it's just prettier to horizontally concatenate two blocks of
text.
```
>>> import ubelt as ub
>>> B = ub.repr2([[1, 2], [3, 4]], nl=1, cbr=True, trailsep=False)
>>> C = ub.repr2([[5, 6], [7, 8]], nl=1, cbr=True, trailsep=False)
>>> print(ub.hzcat(['A = ', B, ' * ', C]))
```
```
%run ../Python_files/util_data_storage_and_load.py
%run ../Python_files/load_dicts.py
%run ../Python_files/util.py
import numpy as np
from numpy.linalg import inv
# load link flow data
import json
with open('../temp_files/link_day_minute_Jul_dict_JSON_adjusted.json', 'r') as json_file:
link_day_minute_Jul_dict_JSON = json.load(json_file)
# week_day_Jul_list = [2, 3, 4, 5, 6, 9, 10, 11, 12, 13, 16, 17, 18, 19, 20, 23, 24, 25, 26, 27, 30, 31]
# testing set 1
week_day_Jul_list_1 = [20, 23, 24, 25, 26, 27, 30, 31]
# testing set 2
week_day_Jul_list_2 = [11, 12, 13, 16, 17, 18, 19]
# testing set 3
week_day_Jul_list_3 = [2, 3, 4, 5, 6, 9, 10]
link_flow_testing_set_Jul_PM_1 = []
for link_idx in range(24):
    for day in week_day_Jul_list_1:
        key = 'link_' + str(link_idx) + '_' + str(day)
        link_flow_testing_set_Jul_PM_1.append(link_day_minute_Jul_dict_JSON[key]['PM_flow'])
link_flow_testing_set_Jul_PM_2 = []
for link_idx in range(24):
    for day in week_day_Jul_list_2:
        key = 'link_' + str(link_idx) + '_' + str(day)
        link_flow_testing_set_Jul_PM_2.append(link_day_minute_Jul_dict_JSON[key]['PM_flow'])
link_flow_testing_set_Jul_PM_3 = []
for link_idx in range(24):
    for day in week_day_Jul_list_3:
        key = 'link_' + str(link_idx) + '_' + str(day)
        link_flow_testing_set_Jul_PM_3.append(link_day_minute_Jul_dict_JSON[key]['PM_flow'])
len(link_flow_testing_set_Jul_PM_1)
testing_set_1 = np.matrix(link_flow_testing_set_Jul_PM_1)
testing_set_1 = np.matrix.reshape(testing_set_1, 24, 8)
testing_set_1 = np.nan_to_num(testing_set_1)
y = np.array(np.transpose(testing_set_1))
y = y[np.all(y != 0, axis=1)]
testing_set_1 = np.transpose(y)
testing_set_1 = np.matrix(testing_set_1)
testing_set_2 = np.matrix(link_flow_testing_set_Jul_PM_2)
testing_set_2 = np.matrix.reshape(testing_set_2, 24, 7)
testing_set_2 = np.nan_to_num(testing_set_2)
y = np.array(np.transpose(testing_set_2))
y = y[np.all(y != 0, axis=1)]
testing_set_2 = np.transpose(y)
testing_set_2 = np.matrix(testing_set_2)
testing_set_3 = np.matrix(link_flow_testing_set_Jul_PM_3)
testing_set_3 = np.matrix.reshape(testing_set_3, 24, 7)
testing_set_3 = np.nan_to_num(testing_set_3)
y = np.array(np.transpose(testing_set_3))
y = y[np.all(y != 0, axis=1)]
testing_set_3 = np.transpose(y)
testing_set_3 = np.matrix(testing_set_3)
np.size(testing_set_1, 1), np.size(testing_set_3, 0)
testing_set_3[:,:1]
# write testing sets to file
zdump([testing_set_1, testing_set_2, testing_set_3], '../temp_files/testing_sets_Jul_PM.pkz')
```
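The transpose/filter/transpose idiom used above keeps only the days (columns) for which every link (row) has a nonzero flow, dropping days with missing measurements. Here is the same idiom on a toy matrix:

```python
import numpy as np

# Toy flow matrix: rows are links, columns are days; a 0 marks missing data.
flows = np.array([[1.0, 0.0, 3.0],
                  [4.0, 5.0, 6.0]])

# Transpose so days become rows, keep only days where every link is nonzero,
# then transpose back.
y = flows.T
y = y[np.all(y != 0, axis=1)]
filtered = y.T
print(filtered)  # the middle day (with the 0) is dropped
```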
<a href="https://colab.research.google.com/github/Granero0011/AB-Demo/blob/master/Monte_Carlo_Simulation_Example.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a>
```
import pandas as pd
import numpy as np
import seaborn as sns
sns.set_style('whitegrid')
avg = 1
std_dev=.1
num_reps= 500
num_simulations= 1000
pct_to_target = np.random.normal(avg, std_dev, num_reps).round(2)
sales_target_values = [75_000, 100_000, 200_000, 300_000, 400_000, 500_000]
sales_target_prob = [.3, .3, .2, .1, .05, .05]
sales_target = np.random.choice(sales_target_values, num_reps, p=sales_target_prob)
df = pd.DataFrame(index=range(num_reps), data={'Pct_To_Target': pct_to_target,
'Sales_Target': sales_target})
df['Sales'] = df['Pct_To_Target'] * df['Sales_Target']
def calc_commission_rate(x):
""" Return the commission rate based on the table:
0-90% = 2%
91-99% = 3%
>= 100 = 4%
"""
if x <= .90:
return .02
if x <= .99:
return .03
else:
return .04
df['Commission_Rate'] = df['Pct_To_Target'].apply(calc_commission_rate)
df['Commission_Amount'] = df['Commission_Rate'] * df['Sales']
# Define a list to keep all the results from each simulation that we want to analyze
all_stats = []
# Loop through many simulations
for i in range(num_simulations):
# Choose random inputs for the sales targets and percent to target
sales_target = np.random.choice(sales_target_values, num_reps, p=sales_target_prob)
pct_to_target = np.random.normal(avg, std_dev, num_reps).round(2)
# Build the dataframe based on the inputs and number of reps
df = pd.DataFrame(index=range(num_reps), data={'Pct_To_Target': pct_to_target,
'Sales_Target': sales_target})
# Back into the sales number using the percent to target rate
df['Sales'] = df['Pct_To_Target'] * df['Sales_Target']
# Determine the commissions rate and calculate it
df['Commission_Rate'] = df['Pct_To_Target'].apply(calc_commission_rate)
df['Commission_Amount'] = df['Commission_Rate'] * df['Sales']
# We want to track sales,commission amounts and sales targets over all the simulations
all_stats.append([df['Sales'].sum().round(0),
df['Commission_Amount'].sum().round(0),
df['Sales_Target'].sum().round(0)])
results_df = pd.DataFrame.from_records(all_stats, columns=['Sales',
'Commission_Amount',
'Sales_Target'])
results_df.describe().style.format('{:,}')
```
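Once the per-simulation statistics are collected, the same data answers risk questions directly, for instance estimating how often total commissions exceed a budget. The numbers below are illustrative assumptions, not results from this notebook:

```python
import numpy as np

rng = np.random.default_rng(42)

# Hypothetical per-simulation commission totals, roughly normal
# around 2.85M (made-up parameters for illustration).
simulated_commissions = rng.normal(loc=2_850_000, scale=100_000, size=1_000)

budget = 3_000_000
prob_over_budget = float(np.mean(simulated_commissions > budget))
print(f"P(commissions > budget) ~ {prob_over_budget:.3f}")
```

In practice one would replace the synthetic normal draws with the `Commission_Amount` column of `results_df`.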
```
import random
import os
import sys
from time import sleep
from datetime import datetime
import requests as rt
import numpy as np
from bs4 import BeautifulSoup
from selenium import webdriver
from selenium.webdriver.chrome.options import Options
from selenium.webdriver.common.keys import Keys
from selenium.common.exceptions import NoSuchElementException,ElementNotInteractableException, ElementClickInterceptedException
import sqlalchemy as sa
from sqlalchemy.orm import sessionmaker
def get_browser(driver_path=r'chromedriver/chromedriver.exe', headless=False):
options = webdriver.ChromeOptions()
if headless:
options.add_argument('headless')
options.add_argument('window-size=1200x600')
browser = webdriver.Chrome(driver_path, options=options)
return browser
def get_vacancies_on_page(browser):
#close pop-up window with suggested region (if present)
try:
browser.find_element_by_class_name('bloko-icon_cancel').click()
except (NoSuchElementException, ElementNotInteractableException):
pass
vacancy_cards = browser.find_elements_by_class_name('vacancy-serp-item ')
return vacancy_cards
def get_vacancy_info(card, browser, keyword, verbose=True):
try:
card.find_element_by_class_name('vacancy-serp-item__info')\
.find_element_by_tag_name('a')\
.send_keys(Keys.CONTROL + Keys.RETURN) #open new tab in Chrome
sleep(2) #let it fully load
#go to the last opened tab
browser.switch_to.window(browser.window_handles[-1])
basic_info = False
while not basic_info:
try:
vacancy_title = browser.find_element_by_xpath('//div[@class="vacancy-title"]//h1').text
company_name = browser.find_element_by_xpath('//a[@class="vacancy-company-name"]').text
company_href_hh = browser.find_element_by_xpath('//a[@class="vacancy-company-name"]').get_attribute('href')
publish_time = browser.find_element_by_xpath('//p[@class="vacancy-creation-time"]').text
basic_info = True
except:
sleep(3)
if verbose:
print("Title: ", vacancy_title )
print("Company: ", company_name )
print("Company link: ", company_href_hh )
print("Publish time: ", publish_time )
try:
salary = browser.find_element_by_xpath('//div[@class="vacancy-title"]//p[@class="vacancy-salary"]').text
except NoSuchElementException :
salary = 'не указано'
try:
emp_mode = browser.find_element_by_xpath('//p[@data-qa="vacancy-view-employment-mode"]').text
except NoSuchElementException :
emp_mode = 'не указано'
finally:
emp_mode = emp_mode.strip().replace('\n', ' ')
try:
exp = browser.find_element_by_xpath('//span[@data-qa="vacancy-experience"]').text
except NoSuchElementException :
exp = 'не указано'
finally:
exp = exp.strip().replace('\n', ' ')
try:
company_address = browser.find_element_by_xpath('//span[@data-qa="vacancy-view-raw-address"]').text
except NoSuchElementException:
company_address = 'не указано'
try:
vacancy_description = browser.find_element_by_xpath('//div[@data-qa="vacancy-description"]').text
except NoSuchElementException:
vacancy_description = 'не указано'
finally:
vacancy_description = vacancy_description.replace('\n', ' ')
try:
vacancy_tags = browser.find_element_by_xpath('//div[@class="bloko-tag-list"]').text
except NoSuchElementException:
vacancy_tags = 'не указано'
finally:
vacancy_tags = vacancy_tags.replace('\n', ', ')
if verbose:
print("Salary: ", salary )
print("Company address: ", company_address )
print('Experience: ', exp)
print('Employment mode: ', emp_mode)
print("Vacancy description: ", vacancy_description[:50] )
print("Vacancy tags: ", vacancy_tags)
browser.close() #close tab
browser.switch_to.window(browser.window_handles[0]) #switch to the first tab
dt = str(datetime.now())
vacancy_info = {'dt': dt,
'keyword': keyword,
'vacancy_title': vacancy_title,
'vacancy_salary': salary,
'vacancy_tags': vacancy_tags,
'vacancy_description': vacancy_description,
'vacancy_experience' : exp,
'employment_mode': emp_mode,
'company_name':company_name,
'company_link':company_href_hh,
'company_address':company_address,
'publish_place_and_time':publish_time}
return vacancy_info
except Exception as ex:
        print('Exception while scraping info!')
print(str(ex))
return None
def insert_data(data, engine, table_name, schema=None):
metadata = sa.MetaData(bind=engine)
table = sa.Table(table_name, metadata, autoload=True, schema=schema)
con = engine.connect()
try:
con.execute(table.insert().values(data))
except Exception as ex:
print('Exception while inserting data!')
print(str(ex))
finally:
con.close()
def scrape_HH(browser, keyword='Python', pages2scrape=3, table2save='HH_vacancies', verbose=True):
url = f'https://hh.ru/search/vacancy?area=1&fromSearchLine=true&st=searchVacancy&text={keyword}&from=suggest_post'
browser.get(url)
while pages2scrape > 0:
vacancy_cards = get_vacancies_on_page(browser=browser)
for card in vacancy_cards:
vacancy_info = get_vacancy_info(card, browser=browser, keyword=keyword, verbose=verbose)
insert_data(data=vacancy_info, engine=engine, table_name=table2save)
if verbose:
print('Inserted row')
try:
#click to the "Next" button to load other vacancies
browser.find_element_by_xpath('//a[@data-qa="pager-next"]').click()
print('Go to the next page')
except (NoSuchElementException, ElementNotInteractableException):
browser.close()
break
finally:
pages2scrape -= 1
mysql_con = '' #add your connection to DB
engine = sa.create_engine(mysql_con)
browser = get_browser(driver_path=r'chromedriver/chromedriver.exe', headless=False)
scrape_HH(browser, keyword='Grafana', pages2scrape=15, verbose=False)
```
```
import tensorflow as tf
import numpy as np
import tsp_env
def attention(W_ref, W_q, v, enc_outputs, query):
with tf.variable_scope("attention_mask"):
u_i0s = tf.einsum('kl,itl->itk', W_ref, enc_outputs)
u_i1s = tf.expand_dims(tf.einsum('kl,il->ik', W_q, query), 1)
u_is = tf.einsum('k,itk->it', v, tf.tanh(u_i0s + u_i1s))
return tf.einsum('itk,it->ik', enc_outputs, tf.nn.softmax(u_is))
def critic_network(enc_inputs,
hidden_size = 128, embedding_size = 128,
max_time_steps = 5, input_size = 2,
batch_size = 128,
initialization_stddev = 0.1,
n_processing_steps = 5, d = 128):
# Embed inputs in larger dimensional tensors
W_embed = tf.Variable(tf.random_normal([embedding_size, input_size],
stddev=initialization_stddev))
embedded_inputs = tf.einsum('kl,itl->itk', W_embed, enc_inputs)
# Define encoder
with tf.variable_scope("encoder"):
enc_rnn_cell = tf.nn.rnn_cell.LSTMCell(hidden_size)
enc_outputs, enc_final_state = tf.nn.dynamic_rnn(cell=enc_rnn_cell,
inputs=embedded_inputs,
dtype=tf.float32)
# Define process block
with tf.variable_scope("process_block"):
process_cell = tf.nn.rnn_cell.LSTMCell(hidden_size)
first_process_block_input = tf.tile(tf.Variable(tf.random_normal([1, embedding_size]),
name='first_process_block_input'),
[batch_size, 1])
# Define attention weights
with tf.variable_scope("attention_weights", reuse=True):
W_ref = tf.Variable(tf.random_normal([embedding_size, embedding_size],
stddev=initialization_stddev),
name='W_ref')
W_q = tf.Variable(tf.random_normal([embedding_size, embedding_size],
stddev=initialization_stddev),
name='W_q')
v = tf.Variable(tf.random_normal([embedding_size], stddev=initialization_stddev),
name='v')
# Processing chain
processing_state = enc_final_state
processing_input = first_process_block_input
for t in range(n_processing_steps):
processing_cell_output, processing_state = process_cell(inputs=processing_input,
state=processing_state)
processing_input = attention(W_ref, W_q, v,
enc_outputs=enc_outputs, query=processing_cell_output)
# Apply 2 layers of ReLu for decoding the processed state
return tf.squeeze(tf.layers.dense(inputs=tf.layers.dense(inputs=processing_cell_output,
units=d, activation=tf.nn.relu),
units=1, activation=None))
batch_size = 128; max_time_steps = 5; input_size = 2
enc_inputs = tf.placeholder(tf.float32, [batch_size, max_time_steps, input_size])
bsln_value = critic_network(enc_inputs,
hidden_size = 128, embedding_size = 128,
max_time_steps = 5, input_size = 2,
batch_size = 128,
initialization_stddev = 0.1,
n_processing_steps = 5, d = 128)
tours_rewards_ph = tf.placeholder(tf.float32, [batch_size])
loss = tf.losses.mean_squared_error(labels=tours_rewards_ph,
predictions=bsln_value)
train_op = tf.train.AdamOptimizer(1e-2).minimize(loss)
##############################################################################
# Trying it out: can we learn the reward of the optimal policy for the TSP5? #
##############################################################################
def generate_batch(n_cities, batch_size):
inputs_list = []; labels_list = []
env = tsp_env.TSP_env(n_cities, use_alternative_state=True)
for i in range(batch_size):
s = env.reset()
coords = s.reshape([4, n_cities])[:2, ].T
inputs_list.append(coords)
labels_list.append(env.optimal_solution()[0])
return np.array(inputs_list), np.array(labels_list)
# Create tf session and initialize variables
sess = tf.InteractiveSession()
tf.global_variables_initializer().run()
# Training loop
loss_vals = []
for i in range(10000):
inputs_batch, labels_batch = generate_batch(max_time_steps, batch_size)
loss_val, _ = sess.run([loss, train_op],
feed_dict={enc_inputs: inputs_batch,
tours_rewards_ph: labels_batch})
loss_vals.append(loss_val)
if i % 50 == 0:
print(loss_val)
import matplotlib.pyplot as plt
%matplotlib inline
plt.plot(np.log(loss_vals))
plt.xlabel('Number of iterations')
plt.ylabel('Log of mean squared error')
len(loss_vals)
```
<div align="right"><i>COM418 - Computers and Music</i></div>
<div align="right"><a href="https://people.epfl.ch/paolo.prandoni">Lucie Perrotta</a>, <a href="https://www.epfl.ch/labs/lcav/">LCAV, EPFL</a></div>
<p style="font-size: 30pt; font-weight: bold; color: #B51F1F;">Channel Vocoder</p>
```
%matplotlib inline
import numpy as np
import matplotlib.pyplot as plt
from IPython.display import Audio
from IPython.display import IFrame
from scipy import signal
import import_ipynb
from Helpers import *
figsize=(10,5)
import matplotlib
matplotlib.rcParams.update({'font.size': 16});
fs=44100
```
In this notebook, we will implement and test a simple **channel vocoder**. A channel vocoder is a musical device that lets a performer sing while playing notes on a keyboard at the same time. The vocoder blends the voice (called the modulator) with the notes played on the keyboard (called the carrier) so that the resulting voice sings the notes played on the keyboard. The resulting voice has a robotic, artificial sound that is rather popular in electronic music, with notable uses by bands such as Daft Punk or Kraftwerk.
<img src="https://www.bhphotovideo.com/images/images2000x2000/waldorf_stvc_string_synthesizer_1382081.jpg" alt="Drawing" style="width: 35%;"/>
The implementation of a channel vocoder is in fact quite simple. It takes 2 inputs, the carrier and the modulator signals, which must be of the same length. It divides each signal into frequency bands called **channels** (hence the name) using many parallel bandpass filters. The width of each channel can be equal, or logarithmically sized to match the human ear's perception of frequency. For each channel, the envelope of the modulator signal is then computed, for instance using a rectifier and a moving average. This envelope is simply multiplied with the carrier signal for each channel, before all channels are added back together.
<img src="https://i.imgur.com/aIePutp.png" alt="Drawing" style="width: 65%;"/>
To improve the intelligibility of the speech, it is also possible to add AWGN to the carrier in each band, helping to produce unvoiced sounds such as "s" or "f".
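Before building the full vocoder below, the rectify-and-smooth idea can be sketched on a single channel with toy signals (the filter band, window length, and signals here are illustrative choices, not the notebook's helpers):

```python
import numpy as np
from scipy.signal import butter, sosfiltfilt

fs = 44100
t = np.arange(fs) / fs
modulator = np.random.randn(fs) * np.sin(2 * np.pi * 3 * t) ** 2  # toy "voice": noise with a slow envelope
carrier = np.sin(2 * np.pi * 440 * t)                             # toy "synth": a 440 Hz tone

# 1) Bandpass both signals to the same channel (one of many parallel bands)
sos = butter(5, [300, 3000], btype="band", fs=fs, output="sos")
mod_band = sosfiltfilt(sos, modulator)
car_band = sosfiltfilt(sos, carrier)

# 2) Envelope of the modulator: rectify, then moving average
win = 1000
envelope = np.convolve(np.abs(mod_band), np.ones(win) / win, mode="same")

# 3) Impose the modulator's envelope on the carrier for this channel;
#    a full vocoder sums this product over all channels
vocoded_band = envelope * car_band
```

A full vocoder simply repeats steps 1–3 for every band and sums the results, which is exactly what the `channeler` and `channel_vocoder` functions below do.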
As an example signal to test our vocoder with, we are going to use dry voice samples from the song "Nightcall" by French artist Kavinsky.

First, let's listen to the original song:
```
IFrame(src="https://www.youtube.com/embed/46qo_V1zcOM?start=30", width="560", height="315", frameborder="0", allow="accelerometer; autoplay; clipboard-write; encrypted-media; gyroscope; picture-in-picture")
```
## 1. The modulator and the carrier signals
We are now going to recreate the lead vocoder using 2 signals: we need a modulator signal, a voice pronouncing the lyrics, and a carrier signal, a synthesizer, containing the notes for the pitch.
### 1.1. The modulator
Let's first import the modulator signal. It is simply the lyrics spoken at the right rhythm. No need to sing or pay attention to the pitch; only the pronunciation and the rhythm of the text are going to matter. Note that the voice sample is available for free on **Splice**, an online resource for audio production.
```
nightcall_modulator = open_audio('snd/nightcall_modulator.wav')
Audio('snd/nightcall_modulator.wav', autoplay=False)
```
### 1.2. The carrier
Second, we import a carrier signal, which is simply a synthesizer playing the chords that are going to be used for the vocoder. Note that the carrier signal does not need to feature silent parts, since the modulator's silences will automatically mute the final vocoded track. The carrier and the modulator simply need to be in sync with each other.
```
nightcall_carrier = open_audio('snd/nightcall_carrier.wav')
Audio("snd/nightcall_carrier.wav", autoplay=False)
```
## 2. The channel vocoder
### 2.1. The channeler
Let's now start implementing the channel vocoder. The first tool we need is an efficient filter to decompose both the carrier and the modulator signals into channels (or bands). Let's call this function the **channeler**, since it decomposes the input signals into frequency channels. It takes as input a signal to be filtered, an integer representing the number of bands, and a boolean setting whether white noise should be added to each band (used for the carrier).
```
def channeler(x, n_bands, add_noise=False):
"""
Separate a signal into log-sized frequency channels.
x: the input signal
n_bands: the number of frequency channels
add_noise: whether or not to add white noise to each channel
"""
band_freqs = np.logspace(2, 14, n_bands+1, base=2) # get all the limits between the bands, in log space
x_bands = np.zeros((n_bands, x.size)) # Placeholder for all bands
for i in range(n_bands):
noise = 0.7*np.random.random(x.size) if add_noise else 0 # Add noise (uniform here, standing in for AWGN)
x_bands[i] = butter_pass_filter(x + noise, np.array((band_freqs[i], band_freqs[i+1])), fs, btype="band", order=5).astype(np.float32) # Carrier + uniform noise
return x_bands
# Example plot
plt.figure(figsize=figsize)
plt.magnitude_spectrum(nightcall_carrier)
plt.title("Carrier signal before channeling")
plt.xscale("log")
plt.xlim(1e-4)
plt.show()
carrier_bands = channeler(nightcall_carrier, 8, add_noise=True)
plt.figure(figsize=figsize)
for i in range(8):
plt.magnitude_spectrum(carrier_bands[i], alpha=.7)
plt.title("Carrier channels after channeling and noise addition")
plt.xscale("log")
plt.xlim(1e-4)
plt.show()
```
### 2.2. The envelope computer
Next, we can implement a simple envelope computer. Given a signal, this function computes its temporal envelope.
```
def envelope_computer(x):
"""
Envelope computation of one channels of the modulator
x: the input signal
"""
x = np.abs(x) # Rectify the signal to positive
x = moving_average(x, 1000) # Smooth the signal
return 3*x # Normalize
plt.figure(figsize=figsize)
plt.plot(np.abs(nightcall_modulator)[:150000] , label="Modulator")
plt.plot(envelope_computer(nightcall_modulator)[:150000], label="Modulator envelope")
plt.legend(loc="best")
plt.title("Modulator signal and its envelope")
plt.show()
```
### 2.3. The channel vocoder (itself)
We can now implement the channel vocoder itself! It takes as input both signals presented above, as well as an integer controlling the number of channels (bands) of the vocoder. A larger number of channels results in a finer-grained vocoded sound, but also takes more time to compute. Some artists may voluntarily use a lower number of bands to increase the artificial effect of the vocoder. Try playing with it!
```
def channel_vocoder(modulator, carrier, n_bands=32):
"""
Channel vocoder
modulator: the modulator signal
carrier: the carrier signal
n_bands: the number of bands of the vocoder (better to be a power of 2)
"""
# Decompose both modulation and carrier signals into frequency channels
modul_bands = channeler(modulator, n_bands, add_noise=False)
carrier_bands = channeler(carrier, n_bands, add_noise=True)
# Compute envelope of the modulator
modul_bands = np.array([envelope_computer(modul_bands[i]) for i in range(n_bands)])
# Multiply carrier and modulator
result_bands = np.prod([modul_bands, carrier_bands], axis=0)
# Merge back all channels together and normalize
result = np.sum(result_bands, axis=0)
return normalize(result) # Normalize
nightcall_vocoder = channel_vocoder(nightcall_modulator, nightcall_carrier, n_bands=32)
Audio(nightcall_vocoder, rate=fs)
```
The vocoded voice is still perfectly intelligible, and it's easy to understand the lyrics. However, the pitch of the voice is now the synthesizer playing chords! One can try to deactivate the AWGN and compare the results. We finally plot the STFT of all 3 signals. One can notice that the vocoded signal has kept the general shape of the voice (modulator) signal, but is using the frequency information from the carrier!
```
# Plot
f, t, Zxx = signal.stft(nightcall_modulator[:7*fs], fs, nperseg=1000)
plt.figure(figsize=figsize)
plt.pcolormesh(t, f[:100], np.abs(Zxx[:100,:]), cmap='nipy_spectral', shading='gouraud')
plt.title("Original voice (modulator)")
plt.ylabel('Frequency [Hz]')
plt.xlabel('Time [sec]')
plt.show()
f, t, Zxx = signal.stft(nightcall_vocoder[:7*fs], fs, nperseg=1000)
plt.figure(figsize=figsize)
plt.pcolormesh(t, f[:100], np.abs(Zxx[:100,:]), cmap='nipy_spectral', shading='gouraud')
plt.title("Vocoded voice")
plt.ylabel('Frequency [Hz]')
plt.xlabel('Time [sec]')
plt.show()
f, t, Zxx = signal.stft(nightcall_carrier[:7*fs], fs, nperseg=1000)
plt.figure(figsize=figsize)
plt.pcolormesh(t, f[:100], np.abs(Zxx[:100,:]), cmap='nipy_spectral', shading='gouraud')
plt.title("Carrier")
plt.ylabel('Frequency [Hz]')
plt.xlabel('Time [sec]')
plt.show()
```
## 3. Playing it together with the music
Finally, let's try to play it with the background music to see if it sounds like the original!
```
nightcall_instru = open_audio('snd/nightcall_instrumental.wav')
nightcall_final = nightcall_vocoder + 0.6*nightcall_instru
nightcall_final = normalize(nightcall_final) # Normalize
Audio(nightcall_final, rate=fs)
```
Authored by: Avani Gupta <br>
Roll: 2019121004
**Note: the dataset shape is version dependent, hence the final answers will also depend on the sklearn version installed on the machine.**
# Exercise: Eigen Face
Here, we will look into the ability of PCA to perform dimensionality reduction on the Labeled Faces in the Wild dataset made available by scikit-learn. Our images will be of shape (62, 47). This problem is also famously known as the eigenface problem. Mathematically, we would like to find the principal components (or eigenvectors) of the covariance matrix of the set of face images. These eigenvectors are essentially a set of orthonormal features that depict the amount of variation between face images. When plotted, these eigenvectors are called eigenfaces.
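In symbols (matching the `FindEigen` implementation below): if $X \in \mathbb{R}^{N \times d}$ is the mean-centered data matrix whose rows are the unrolled face images (here $d = 62 \times 47 = 2914$), the eigenfaces are the eigenvectors $v_i$ of the covariance matrix

$$ C = \frac{1}{N} X^\top X \in \mathbb{R}^{d \times d}, \qquad C\, v_i = \lambda_i v_i, $$

and, with the eigenvalues sorted in decreasing order, the fraction of variance preserved by the first $k$ components is $\sum_{i=1}^{k} \lambda_i \big/ \sum_{i=1}^{d} \lambda_i$.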
#### Imports
```
import numpy as np
import matplotlib.pyplot as plt
from numpy import pi
from sklearn.datasets import fetch_lfw_people
import seaborn as sns; sns.set()
import sklearn
print(sklearn.__version__)
```
#### Setup data
```
faces = fetch_lfw_people(min_faces_per_person=8)
X = faces.data
y = faces.target
print(faces.target_names)
print(faces.images.shape)
```
Note: **the number of images is version dependent.** <br>
I get (4822, 62, 47) with my version of sklearn, which is 0.22.2. <br>
Since each image is of shape (62, 47), we unroll it into a single row vector of shape (1, 2914). This means we have 2914 features defining each image. These 2914 features will result in 2914 principal components in the PCA projection space. Therefore, each image location contributes more or less to each principal component.
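As a quick sanity check of the unrolling step (using a small stand-in array rather than the actual dataset):

```python
import numpy as np

# Stand-in for faces.images: 12 fake "images" of shape (62, 47)
images = np.random.rand(12, 62, 47)

# Unroll each image into a single row vector of 62*47 = 2914 features
X = images.reshape(len(images), -1)
print(X.shape)  # (12, 2914)
```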
#### Implement Eigen Faces
```
print(faces.images.shape)
img_shape = faces.images.shape[1:]
print(img_shape)
def FindEigen(X_mat):
X_mat -= np.mean(X_mat, axis=0, keepdims=True)
temp = np.matmul(X_mat.T, X_mat)
cov_mat = 1/X_mat.shape[0]* temp
eigvals, eigvecs = np.linalg.eig(cov_mat)
ind = eigvals.argsort()[::-1]
return np.real(eigvals[ind]), np.real(eigvecs[:, ind])
def plotFace(faces, h=10, v=1):
fig, axes = plt.subplots(v, h, figsize=(10, 2.5),
subplot_kw={'xticks':[], 'yticks':[]},
gridspec_kw=dict(hspace=0.1, wspace=0.1))
for i, ax in enumerate(axes.flat):
ax.imshow(faces[i].reshape(*img_shape), cmap='gray')
def plotgraph(eigenvals):
plt.plot(range(1, eigenvals.shape[0]+1), np.cumsum(eigenvals / np.sum(eigenvals)))
plt.show()
def PrincipalComponentsNum(X, eigenvals, threshold=0.95):
num = np.argmax(np.cumsum(eigenvals / np.sum(eigenvals)) >= threshold) + 1
print(f"No. of principal components required to preserve {threshold*100} % variance is: {num}.")
```
### Q1
How many principal components are required such that 95% of the variance in the data is preserved?
```
eigenvals, eigenvecs = FindEigen(X)
plotgraph(eigenvals)
PrincipalComponentsNum(X, eigenvals)
```
### Q2
Show the reconstruction of the first 10 face images using only 100 principal components.
```
def reconstructMat(X, eigvecs, num_c):
return (np.matmul(X,np.matmul(eigvecs[:, :num_c], eigvecs[:, :num_c].T)))
faceNum = 10
print('original faces')
plotFace(X[:faceNum, :], faceNum)
recFace = reconstructMat(X[:faceNum, :], eigenvecs, 100)
print('reconstructed faces using only 100 principal components')
plotFace(recFace, faceNum)
```
# Adding noise to images
We now add gaussian noise to the images. Will PCA be able to effectively perform dimensionality reduction?
```
def plot_noisy_faces(noisy_faces):
fig, axes = plt.subplots(2, 10, figsize=(10, 2.5),
subplot_kw={'xticks':[], 'yticks':[]},
gridspec_kw=dict(hspace=0.1, wspace=0.1))
for i, ax in enumerate(axes.flat):
ax.imshow(noisy_faces[i].reshape(62, 47), cmap='binary_r')
```
Below we plot first twenty noisy input face images.
```
np.random.seed(42)
noisy_faces = np.random.normal(X, 15)
plot_noisy_faces(noisy_faces)
noisy_faces.shape
noisy_eigenvals, noisy_eigenvecs = FindEigen(noisy_faces)
```
### Q3.1
Show the above two results for a noisy face dataset.
How many principal components are required such that 95% of the variance in the data is preserved?
```
plotgraph(noisy_eigenvals)
PrincipalComponentsNum(noisy_faces, noisy_eigenvals, 0.95)
```
### Q3.2
Show the reconstruction of the first 10 face images using only 100 principal components.
```
face_num = 10  # avoid shadowing the `faces` dataset object
noisy_recons = reconstructMat(noisy_faces[:face_num, :], noisy_eigenvecs, 100)
print('reconstructed faces for noisy images using only 100 principal components')
plotFace(noisy_recons, face_num)
```
<div align="center">
<h1><img width="30" src="https://madewithml.com/static/images/rounded_logo.png"> <a href="https://madewithml.com/">Made With ML</a></h1>
Applied ML · MLOps · Production
<br>
Join 30K+ developers in learning how to responsibly <a href="https://madewithml.com/about/">deliver value</a> with ML.
<br>
</div>
<br>
<div align="center">
<a target="_blank" href="https://newsletter.madewithml.com"><img src="https://img.shields.io/badge/Subscribe-30K-brightgreen"></a>
<a target="_blank" href="https://github.com/GokuMohandas/MadeWithML"><img src="https://img.shields.io/github/stars/GokuMohandas/MadeWithML.svg?style=social&label=Star"></a>
<a target="_blank" href="https://www.linkedin.com/in/goku"><img src="https://img.shields.io/badge/style--5eba00.svg?label=LinkedIn&logo=linkedin&style=social"></a>
<a target="_blank" href="https://twitter.com/GokuMohandas"><img src="https://img.shields.io/twitter/follow/GokuMohandas.svg?label=Follow&style=social"></a>
<br>
🔥 Among the <a href="https://github.com/topics/mlops" target="_blank">top MLOps</a> repositories on GitHub
</div>
<br>
<hr>
# Optimize (GPU)
Use this notebook to run hyperparameter optimization on Google Colab and utilize its free GPUs.
## Clone repository
```
# Load repository
!git clone https://github.com/GokuMohandas/MLOps.git mlops
# Files
%cd mlops
!ls
```
## Setup
```
%%bash
pip install --upgrade pip
python -m pip install -e ".[dev]" --no-cache-dir
```
# Download data
We're going to download data directly from GitHub since our blob stores are local. But you can easily load the correct data versions from your cloud blob store using the *.json.dvc pointer files in the [data directory](https://github.com/GokuMohandas/MLOps/tree/main/data).
```
from app import cli
# Download data
cli.download_data()
# Check if data downloaded
!ls data
```
# Compute features
```
# Download data
cli.compute_features()
# Computed features
!ls data
```
## Optimize
Now we're going to perform hyperparameter optimization using the objective and parameter distributions defined in the [main script](https://github.com/GokuMohandas/MLOps/blob/main/tagifai/main.py). The best parameters will be written to [config/params.json](https://raw.githubusercontent.com/GokuMohandas/MLOps/main/config/params.json) which will be used to train the best model below.
```
# Optimize
cli.optimize(num_trials=100)
```
# Train
Once we've identified the best hyperparameters, we're ready to train our best model and save the corresponding artifacts (label encoder, tokenizer, etc.).
```
# Train best model
cli.train_model()
```
# Change metadata
In order to transfer our trained model and its artifacts to our local model registry, we should change the metadata to match.
```
from pathlib import Path
from config import config
import yaml
def change_artifact_metadata(fp):
with open(fp) as f:
metadata = yaml.safe_load(f)
for key in ["artifact_location", "artifact_uri"]:
if key in metadata:
metadata[key] = metadata[key].replace(
str(config.MODEL_REGISTRY), model_registry)
with open(fp, "w") as f:
yaml.dump(metadata, f)
# Change this as necessary
model_registry = "/Users/goku/Documents/madewithml/applied-ml/stores/model"
# Change metadata in all meta.yaml files
experiment_dir = Path(config.MODEL_REGISTRY, "1")
for fp in list(Path(experiment_dir).glob("**/meta.yaml")):
change_artifact_metadata(fp=fp)
```
## Download
Download and transfer the trained model's files to your local model registry. If you have existing runs, just transfer that run's directory.
```
from google.colab import files
# Download
!zip -r model.zip model
!zip -r run.zip stores/model/1
files.download("run.zip")
```
# Expressions and Arithmetic
**CS1302 Introduction to Computer Programming**
___
## Operators
The followings are common operators you can use to form an expression in Python:
| Operator | Operation | Example |
| --------: | :------------- | :-----: |
| unary `-` | Negation | `-y` |
| `+` | Addition | `x + y` |
| `-` | Subtraction | `x - y` |
| `*` | Multiplication | `x*y` |
| `/` | Division | `x/y` |
- `x` and `y` in the examples are called the *left and right operands* respectively.
- The first operator is a *unary operator*, which operates on just one operand.
(`+` can also be used as a unary operator, but that is not useful.)
- All other operators are *binary operators*, which operate on two operands.
Python also supports some more operators such as the followings:
| Operator | Operation | Example |
| -------: | :--------------- | :-----: |
| `//` | Integer division | `x//y` |
| `%` | Modulo | `x%y` |
| `**` | Exponentiation | `x**y` |
```
# ipywidgets to demonstrate the operations of binary operators
from ipywidgets import interact
binary_operators = {'+':' + ','-':' - ','*':'*','/':'/','//':'//','%':'%','**':'**'}
@interact(operand1=r'10',
operator=binary_operators,
operand2=r'3')
def binary_operation(operand1,operator,operand2):
expression = f"{operand1}{operator}{operand2}"
value = eval(expression)
print(f"""{'Expression:':>11} {expression}\n{'Value:':>11} {value}\n{'Type:':>11} {type(value)}""")
```
**Exercise** What is the difference between `/` and `//`?
- `/` is the usual division, and so `10/3` returns the floating-point number $3.\dot{3}$.
- `//` is integer division, and so `10//3` gives the integer quotient 3.
**What does the modulo operator `%` do?**
You can think of it as computing the remainder, but the [truth](https://docs.python.org/3/reference/expressions.html#binary-arithmetic-operations) is more complicated than required for the course.
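A quick check of what `%` computes in the common cases: for nonzero `y`, Python guarantees the identity `x == (x//y)*y + x%y`, and the sign of `x % y` follows the divisor `y`.

```python
# Verify the identity x == (x // y) * y + x % y for a few sign combinations
for x, y in [(10, 3), (-10, 3), (10, -3)]:
    q, r = x // y, x % y
    print(f"{x} = {q}*{y} + {r}")
    assert x == q * y + r  # the defining identity of // and %
```

Note in particular that `-10 % 3` is `2`, not `-1`, because `//` rounds toward negative infinity.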
**Exercise** What does `'abc' * 3` mean? What about `10 * 'a'`?
- The first expression means concatenating `'abc'` three times.
- The second means concatenating `'a'` ten times.
**Exercise** How can you change the default operands (`10` and `3`) for different operators so that the overall expression has type `float`.
Do you need to change all the operands to `float`?
- `/` already returns a `float`.
- For all other operators, changing at least one of the operands to `float` will return a `float`.
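For instance, only `/` always produces a `float`; the other arithmetic operators return a `float` as soon as one operand is a `float`:

```python
print(type(10 / 3))     # <class 'float'> -- true division always returns float
print(type(10.0 // 3))  # <class 'float'> -- one float operand is enough
print(type(10 % 3.0))   # <class 'float'>
print(type(10 ** 2))    # <class 'int'>   -- all-int operands stay int (except for /)
```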
## Operator Precedence and Associativity
An expression can consist of a sequence of operations performed in a row such as `x + y*z`.
**How to determine which operation should be performed first?**
Like arithmetics, the order of operations is decided based on the following rules applied sequentially:
1. *grouping* by parentheses: inner grouping first
1. operator *precedence/priority*: higher precedence first
1. operator *associativity*:
- left associativity: left operand first
- right associativity: right operand first
**What are the operator precedence and associativity?**
The following table gives a concise summary:
| Operators | Associativity |
| :--------------- | :-----------: |
| `**` | right |
| `-` (unary) | right |
| `*`,`/`,`//`,`%` | left |
| `+`,`-` | left |
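The table above can be checked directly in Python:

```python
# ** is right-associative: 2 ** 3 ** 2 is evaluated as 2 ** (3 ** 2)
assert 2 ** 3 ** 2 == 2 ** 9 == 512
# unary - has lower precedence than **: -3 ** 2 is -(3 ** 2)
assert -3 ** 2 == -9
# *, /, //, %, +, - are left-associative: evaluated left to right
assert 10 - 3 - 2 == (10 - 3) - 2 == 5
assert 100 / 10 / 2 == (100 / 10) / 2 == 5.0
print("all precedence/associativity checks passed")
```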
**Exercise** Play with the following widget to understand the precedence and associativity of different operators.
In particular, explain whether the expression `-10 ** 2*3` gives $(-10)^{2\times 3}= 10^6 = 1000000$.
```
from ipywidgets import fixed
@interact(operator1={'None':'','unary -':'-'},
operand1=fixed(r'10'),
operator2=binary_operators,
operand2=fixed(r'2'),
operator3=binary_operators,
operand3=fixed(r'3')
)
def three_operators(operator1,operand1,operator2,operand2,operator3,operand3):
expression = f"{operator1}{operand1}{operator2}{operand2}{operator3}{operand3}"
value = eval(expression)
print(f"""{'Expression:':>11} {expression}\n{'Value:':>11} {value}\n{'Type:':>11} {type(value)}""")
```
The expression evaluates to $(-(10^2))\times 3=-300$ instead because the exponentiation operator `**` has higher precedence than both the multiplication `*` and the negation operators `-`.
**Exercise** To avoid confusion in the order of operations, we should follow the [style guide](https://www.python.org/dev/peps/pep-0008/#other-recommendations) when writing expression.
What is the proper way to write `-10 ** 2*3`?
```
print(-10**2 * 3) # one can use the code-prettify extension to fix incorrect styles
print((-10)**2 * 3)
```
## Augmented Assignment Operators
- For convenience, Python defines the [augmented assignment operators](https://docs.python.org/3/reference/simple_stmts.html#grammar-token-augmented-assignment-stmt) such as `+=`, where
- `x += 1` means `x = x + 1`.
The following widgets demonstrate other augmented assignment operators.
```
from ipywidgets import interact, fixed
@interact(initial_value=fixed(r'10'),
operator=['+=','-=','*=','/=','//=','%=','**='],
operand=fixed(r'2'))
def binary_operation(initial_value,operator,operand):
assignment = f"x = {initial_value}\nx {operator} {operand}"
_locals = {}
exec(assignment,None,_locals)
print(f"""Assignments:\n{assignment:>10}\nx: {_locals['x']} ({type(_locals['x'])})""")
```
**Exercise** Can we create an expression using (augmented) assignment operators? Try running the code to see the effect.
```
3*(x = 15)
```
Assignment operators are used in assignment statements, which are not expressions because they cannot be evaluated.
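For completeness: since Python 3.8 there *is* an assignment form that can appear inside an expression, the walrus operator `:=`, which assigns and also evaluates to the assigned value:

```python
# The walrus operator := creates an assignment *expression* (Python 3.8+)
print(3 * (x := 15))  # assigns 15 to x; the inner expression evaluates to 15, so this prints 45
print(x)              # 15
```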
```
%matplotlib inline
from IPython import display
import matplotlib.pyplot as plt
import torch
from torch import nn
import torchvision
import torchvision.transforms as transforms
import time
import sys
sys.path.append("../")
import d2lzh1981 as d2l
from tqdm import tqdm
print(torch.__version__)
print(torchvision.__version__)
device = torch.device('cuda' if torch.cuda.is_available() else 'cpu')
mnist_train = torchvision.datasets.FashionMNIST(root='/Users/nick/Documents/dataset/FashionMNIST2065',
train=True, download=False)
mnist_test = torchvision.datasets.FashionMNIST(root='/Users/nick/Documents/dataset/FashionMNIST2065',
train=False, download=False)
num_id = 0
for x, y in mnist_train:
if num_id % 1000 == 0:
print(num_id)
x.save("/Users/nick/Documents/dataset/FashionMNIST_img/train/{}_{}.png".format(y, num_id))
num_id += 1
num_id = 0
for x, y in mnist_test:
if num_id % 1000 == 0:
print(num_id)
x.save("/Users/nick/Documents/dataset/FashionMNIST_img/test/{}_{}.png".format(y, num_id))
num_id += 1
mnist_train = torchvision.datasets.FashionMNIST(root='/Users/nick/Documents/dataset/FashionMNIST2065',
train=True, download=False, transform=transforms.ToTensor())
mnist_test = torchvision.datasets.FashionMNIST(root='/Users/nick/Documents/dataset/FashionMNIST2065',
train=False, download=False, transform=transforms.ToTensor())
def vgg_block(num_convs, in_channels, out_channels): # number of conv layers, input channels, output channels
blk = []
for i in range(num_convs):
if i == 0:
blk.append(nn.Conv2d(in_channels, out_channels, kernel_size=3, padding=1))
else:
blk.append(nn.Conv2d(out_channels, out_channels, kernel_size=3, padding=1))
blk.append(nn.ReLU())
blk.append(nn.MaxPool2d(kernel_size=2, stride=2)) # this halves the width and height
return nn.Sequential(*blk)
def vgg(conv_arch, fc_features, fc_hidden_units=4096):
net = nn.Sequential()
# convolutional layers
for i, (num_convs, in_channels, out_channels) in enumerate(conv_arch):
# each vgg_block halves the width and height
net.add_module("vgg_block_" + str(i+1), vgg_block(num_convs, in_channels, out_channels))
# fully connected layers
net.add_module("fc", nn.Sequential(d2l.FlattenLayer(),
nn.Linear(fc_features, fc_hidden_units),
nn.ReLU(),
nn.Dropout(0.5),
nn.Linear(fc_hidden_units, fc_hidden_units),
nn.ReLU(),
nn.Dropout(0.5),
nn.Linear(fc_hidden_units, 10)
))
return net
def evaluate_accuracy(data_iter, net, device=None):
if device is None and isinstance(net, torch.nn.Module):
# if no device is specified, use net's device
device = list(net.parameters())[0].device
acc_sum, n = 0.0, 0
with torch.no_grad():
for X, y in data_iter:
if isinstance(net, torch.nn.Module):
net.eval() # evaluation mode; this disables dropout
acc_sum += (net(X.to(device)).argmax(dim=1) == y.to(device)).float().sum().cpu().item()
net.train() # switch back to training mode
else: # custom model (not used after section 3.13; GPU not considered)
if('is_training' in net.__code__.co_varnames): # if it has an is_training parameter
# set is_training to False
acc_sum += (net(X, is_training=False).argmax(dim=1) == y).float().sum().item()
else:
acc_sum += (net(X).argmax(dim=1) == y).float().sum().item()
n += y.shape[0]
return acc_sum / n
batch_size = 100
if sys.platform.startswith('win'):
num_workers = 0
else:
num_workers = 4
train_iter = torch.utils.data.DataLoader(mnist_train, batch_size=batch_size,
shuffle=True, num_workers=num_workers)
test_iter = torch.utils.data.DataLoader(mnist_test, batch_size=batch_size,
shuffle=False, num_workers=num_workers)
conv_arch = ((1, 1, 64), (1, 64, 128))
# after the 2 vgg_blocks above, the 28x28 input is halved twice: 28/4 = 7
fc_features = 128 * 7 * 7 # c * w * h
fc_hidden_units = 4096 # arbitrary
# ratio = 8
# small_conv_arch = [(1, 1, 64//ratio), (1, 64//ratio, 128//ratio), (2, 128//ratio, 256//ratio),
# (2, 256//ratio, 512//ratio), (2, 512//ratio, 512//ratio)]
# net = vgg(small_conv_arch, fc_features // ratio, fc_hidden_units // ratio)
net = vgg(conv_arch, fc_features, fc_hidden_units)
lr, num_epochs = 0.001, 5
optimizer = torch.optim.Adam(net.parameters(), lr=lr)
net = net.to(device)
print("training on ", device)
loss = torch.nn.CrossEntropyLoss()
for epoch in range(num_epochs):
train_l_sum, train_acc_sum, n, batch_count, start = 0.0, 0.0, 0, 0, time.time()
for X, y in tqdm(train_iter):
X = X.to(device)
y = y.to(device)
y_hat = net(X)
l = loss(y_hat, y)
optimizer.zero_grad()
l.backward()
optimizer.step()
train_l_sum += l.cpu().item()
train_acc_sum += (y_hat.argmax(dim=1) == y).sum().cpu().item()
n += y.shape[0]
batch_count += 1
test_acc = evaluate_accuracy(test_iter, net)
print('epoch %d, loss %.4f, train acc %.3f, test acc %.3f, time %.1f sec'
% (epoch + 1, train_l_sum / batch_count, train_acc_sum / n, test_acc, time.time() - start))
test_acc = evaluate_accuracy(test_iter, net)
test_acc
for X, y in train_iter:
X = X.to(device)
predict_y = net(X)
print(y)
print(predict_y.argmax(dim=1))
break
# predict_y.argmax(dim=1)
```
```
import os
import numpy as np
np.random.seed(0)
import pandas as pd
import matplotlib.pyplot as plt
from sklearn import set_config
set_config(display="diagram")
DATA_PATH = os.path.abspath(
r"C:\Users\jan\Dropbox\_Coding\UdemyML\Chapter13_CaseStudies\CaseStudyIncome\adult.xlsx"
)
```
### Dataset
```
df = pd.read_excel(DATA_PATH)
idx = np.where(df["native-country"] == "Holand-Netherlands")[0]
data = df.to_numpy()
x = data[:, :-1]
x = np.delete(x, idx, axis=0)
y = data[:, -1]
y = np.delete(y, idx, axis=0)
categorical_features = [1, 2, 3, 4, 5, 6, 7, 9]
numerical_features = [0, 8]
print(f"x shape: {x.shape}")
print(f"y shape: {y.shape}")
```
### y-Data
```
def one_hot(y):
# despite its name, this is a binary label encoding: "<=50K" -> 0, otherwise 1
return np.array([0 if val == "<=50K" else 1 for val in y], dtype=np.int32)
y = one_hot(y)
```
### Helper
```
def print_grid_cv_results(grid_result):
print(
f"Best model score: {grid_result.best_score_} "
f"Best model params: {grid_result.best_params_} "
)
means = grid_result.cv_results_["mean_test_score"]
stds = grid_result.cv_results_["std_test_score"]
params = grid_result.cv_results_["params"]
for mean, std, param in zip(means, stds, params):
mean = round(mean, 4)
std = round(std, 4)
print(f"{mean} (+/- {2 * std}) with: {param}")
```
### Sklearn Imports
```
from sklearn.compose import ColumnTransformer
from sklearn.preprocessing import OrdinalEncoder
from sklearn.preprocessing import OneHotEncoder
from sklearn.preprocessing import StandardScaler
from sklearn.model_selection import train_test_split
from sklearn.ensemble import RandomForestClassifier
from sklearn.pipeline import Pipeline
from sklearn.model_selection import GridSearchCV
x_train, x_test, y_train, y_test = train_test_split(x, y, test_size=0.3)
```
### Classifier and Params
```
params = {
"classifier__n_estimators": [50, 100, 200],
"classifier__max_depth": [None, 100, 200]
}
clf = RandomForestClassifier()
```
### Ordinal Features
```
numeric_transformer = Pipeline(
steps=[
('scaler', StandardScaler())
]
)
categorical_transformer = Pipeline(
steps=[
('ordinal', OrdinalEncoder())
]
)
preprocessor_odinal = ColumnTransformer(
transformers=[
('numeric', numeric_transformer, numerical_features),
('categorical', categorical_transformer, categorical_features)
]
)
preprocessor_odinal
preprocessor_odinal.fit(x_train)
x_train_ordinal = preprocessor_odinal.transform(x_train)
x_test_ordinal = preprocessor_odinal.transform(x_test)
print(f"Shape of ordinal train data: {x_train_ordinal.shape}")
print(f"Shape of ordinal test data: {x_test_ordinal.shape}")
pipe_ordinal = Pipeline(
steps=[
('preprocessor_odinal', preprocessor_odinal),
('classifier', clf)
]
)
pipe_ordinal
grid_ordinal = GridSearchCV(pipe_ordinal, params, cv=3)
grid_results_ordinal = grid_ordinal.fit(x_train, y_train)
print_grid_cv_results(grid_results_ordinal)
```
### OneHot Features
```
numeric_transformer = Pipeline(
steps=[
('scaler', StandardScaler())
]
)
categorical_transformer = Pipeline(
steps=[
('onehot', OneHotEncoder(handle_unknown="ignore", sparse=False))
]
)
preprocessor_onehot = ColumnTransformer(
transformers=[
('numeric', numeric_transformer, numerical_features),
('categorical', categorical_transformer, categorical_features)
]
)
preprocessor_onehot
preprocessor_onehot.fit(x_train)
x_train_onehot = preprocessor_onehot.transform(x_train)
x_test_onehot = preprocessor_onehot.transform(x_test)
print(f"Shape of onehot train data: {x_train_onehot.shape}")
print(f"Shape of onehot test data: {x_test_onehot.shape}")
pipe_onehot = Pipeline(
steps=[
('preprocessor_onehot', preprocessor_onehot),
('classifier', clf)
]
)
pipe_onehot
grid_onehot = GridSearchCV(pipe_onehot, params, cv=3)
grid_results_onehot = grid_onehot.fit(x_train, y_train)
print_grid_cv_results(grid_results_onehot)
```
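The two preprocessors yield feature matrices of very different widths: `OrdinalEncoder` keeps one integer-coded column per categorical feature, while `OneHotEncoder` expands each feature into one column per category. A toy illustration with a single hypothetical feature:

```python
import numpy as np
from sklearn.preprocessing import OneHotEncoder, OrdinalEncoder

colors = np.array([['red'], ['blue'], ['red'], ['green']])
ordinal = OrdinalEncoder().fit_transform(colors)          # shape (4, 1): one integer code per row
onehot = OneHotEncoder().fit_transform(colors).toarray()  # shape (4, 3): one column per category
print(ordinal.shape, onehot.shape)
```

This width difference is why the two `print` statements above report different shapes for the same rows.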
### TensorFlow Model
```
from tensorflow.keras.layers import Activation
from tensorflow.keras.layers import Dense
from tensorflow.keras.models import Sequential
from tensorflow.keras.optimizers import SGD
y_train = y_train.reshape(-1, 1)
y_test = y_test.reshape(-1, 1)
def build_model(input_dim, output_dim):
model = Sequential()
model.add(Dense(units=128, input_dim=input_dim))
model.add(Activation("relu"))
model.add(Dense(units=64))
model.add(Activation("relu"))
model.add(Dense(units=output_dim))
model.add(Activation("sigmoid"))
return model
```
### Neural Network with Ordinal Features
```
model = build_model(
    input_dim=x_train_ordinal.shape[1],
output_dim=y_train.shape[1]
)
model.compile(
loss="binary_crossentropy",
optimizer=SGD(learning_rate=0.001),
metrics=["binary_accuracy"]
)
history_ordinal = model.fit(
x=x_train_ordinal,
y=y_train,
epochs=20,
validation_data=(x_test_ordinal, y_test)
)
val_binary_accuracy = history_ordinal.history["val_binary_accuracy"]
plt.plot(range(len(val_binary_accuracy)), val_binary_accuracy)
plt.show()
```
### Neural Network with OneHot Features
```
model = build_model(
input_dim=x_train_onehot.shape[1],
output_dim=y_train.shape[1]
)
model.compile(
loss="binary_crossentropy",
optimizer=SGD(learning_rate=0.001),
metrics=["binary_accuracy"]
)
history_onehot = model.fit(
x=x_train_onehot,
y=y_train,
epochs=20,
validation_data=(x_test_onehot, y_test)
)
val_binary_accuracy = history_onehot.history["val_binary_accuracy"]
plt.plot(range(len(val_binary_accuracy)), val_binary_accuracy)
plt.show()
```
### Pass in user-data
```
pipe_ordinal.fit(x_train, y_train)
score = pipe_ordinal.score(x_test, y_test)
print(f"Score: {score}")
x_sample = [
25,
"Private",
"11th",
"Never-married",
"Machine-op-inspct",
"Own-child",
"Black",
"Male",
40,
"United-States"
]
y_sample = 0
y_pred_sample = pipe_ordinal.predict([x_sample])
print(f"Pred: {y_pred_sample}")
```
# UK research networks with HoloViews+Bokeh+Datashader
[Datashader](http://datashader.readthedocs.org) makes it possible to plot very large datasets in a web browser, while [Bokeh](http://bokeh.pydata.org) makes those plots interactive, and [HoloViews](http://holoviews.org) provides a convenient interface for building these plots.
Here, let's use these three libraries to visualize an example dataset of 600,000 collaborations between 15,000 UK research institutions, previously laid out using a force-directed algorithm by [Ian Calvert](https://www.digital-science.com/people/ian-calvert).
First, we'll import the packages we are using and set up some defaults.
```
import pandas as pd
import holoviews as hv
import fastparquet as fp
from colorcet import fire
from datashader.bundling import directly_connect_edges, hammer_bundle
from holoviews.operation.datashader import datashade, dynspread
from holoviews.operation import decimate
from dask.distributed import Client
client = Client()
hv.notebook_extension('bokeh','matplotlib')
decimate.max_samples=20000
dynspread.threshold=0.01
datashade.cmap=fire[40:]
sz = dict(width=150,height=150)
%opts RGB [xaxis=None yaxis=None show_grid=False bgcolor="black"]
```
The files are stored in the efficient Parquet format:
```
r_nodes_file = '../data/calvert_uk_research2017_nodes.snappy.parq'
r_edges_file = '../data/calvert_uk_research2017_edges.snappy.parq'
r_nodes = hv.Points(fp.ParquetFile(r_nodes_file).to_pandas(index='id'), label="Nodes")
r_edges = hv.Curve( fp.ParquetFile(r_edges_file).to_pandas(index='id'), label="Edges")
len(r_nodes),len(r_edges)
```
We can render each collaboration as a single-line direct connection, but the result is a dense tangle:
```
%%opts RGB [tools=["hover"] width=400 height=400]
%time r_direct = hv.Curve(directly_connect_edges(r_nodes.data, r_edges.data),label="Direct")
dynspread(datashade(r_nodes,cmap=["cyan"])) + \
datashade(r_direct)
```
Detailed substructure of this graph becomes visible after bundling edges using a variant of [Hurter, Ersoy, & Telea (ECV-2012)](http://www.cs.rug.nl/~alext/PAPERS/EuroVis12/kdeeb.pdf), which takes several minutes even using multiple cores with [Dask](https://dask.pydata.org):
```
%time r_bundled = hv.Curve(hammer_bundle(r_nodes.data, r_edges.data),label="Bundled")
%%opts RGB [tools=["hover"] width=400 height=400]
dynspread(datashade(r_nodes,cmap=["cyan"])) + datashade(r_bundled)
```
Zooming into these plots reveals interesting patterns (if you are running a live Python server), but immediately one then wants to ask what the various groupings of nodes might represent. With a small number of nodes or a small number of categories one could color-code the dots (using datashader's categorical color coding support), but here we just have thousands of indistinguishable dots. Instead, let's use hover information so the viewer can at least see the identity of each node on inspection.
To do that, we'll first need to pull in something useful to hover, so let's load the names of each institution in the researcher list and merge that with our existing layout data:
```
node_names = pd.read_csv("../data/calvert_uk_research2017_nodes.csv", index_col="node_id", usecols=["node_id","name"])
node_names = node_names.rename(columns={"name": "Institution"})
node_names
r_nodes_named = pd.merge(r_nodes.data, node_names, left_index=True, right_index=True)
r_nodes_named.tail()
```
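The `left_index=True, right_index=True` merge above aligns the two frames on their node-id indexes rather than on a column. A toy sketch with hypothetical frames and ids:

```python
import pandas as pd

layout = pd.DataFrame({'x': [0.1, 0.2], 'y': [0.3, 0.4]}, index=[101, 102])
names = pd.DataFrame({'Institution': ['Univ A', 'Univ B']}, index=[101, 102])

# Rows are matched by index value, so the layout and the name table
# only need to share ids, not column layout
merged = pd.merge(layout, names, left_index=True, right_index=True)
print(merged)
```

Only nodes whose id appears in both frames survive the (inner) merge, which is the behavior we want for hover labels.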
We can now overlay a set of points on top of the datashaded edges, which will provide hover information for each node. Here, the entire set of 15000 nodes would be reasonably feasible to plot, but to show how to work with larger datasets we wrap the `hv.Points()` call with `decimate` so that only a finite subset of the points will be shown at any one time. If a node of interest is not visible in a particular zoom, then you can simply zoom in on that region; at some point the number of visible points will be below the specified decimate limit and the required point should be revealed.
```
%%opts Points (color="cyan") [tools=["hover"] width=900 height=650]
datashade(r_bundled, width=900, height=650) * \
decimate( hv.Points(r_nodes_named),max_samples=10000)
```
If you click around and hover, you should see interesting groups of nodes, and can then set up further interactive tools using [HoloViews' stream support](http://holoviews.org/user_guide/Responding_to_Events.html) to reveal aspects relevant to your research interests or questions.
As you can see, datashader lets you work with very large graph datasets, though there are a number of decisions to make by trial and error: you have to be careful when doing computationally expensive operations like edge bundling, and interactive information will only be available for a limited subset of the data at any one time due to the data-size limitations of current web browsers.
# Data Types in Python
## 1. Numeric
### int
```
x = 5
print (x)
print(type(x))
a = 4 + 5
b = 4 * 5
c = 5 // 4
print(a, b, c)
print -5 / 4
print -(5 / 4)
```
### long
```
x = 5 * 1000000 * 1000000 * 1000000 * 1000000 + 1
print x
print type(x)
y = 5
print type(y)
y = x
print type(y)
```
### float
```
y = 5.7
print y
print type(y)
a = 4.2 + 5.1
b = 4.2 * 5.1
c = 5.0 / 4.0
print a, b, c
a = 5
b = 4
print float(a) / float(b)
print 5.0 / 4
print 5 / 4.0
print float(a) / b
```
### bool
```
a = True
b = False
print a
print type(a)
print b
print type(b)
print a + b
print a + a
print b + b
print int(a), int(b)
print True and False
print True and True
print False and False
print True or False
print True or True
print False or False
```
## 2. None
```
z = None
print z
print type(z)
print int(z)
```
## 3. Strings
### str
```
x = "abc"
print x
print type(x)
a = 'Ivan'
b = "Ivanov"
s = a + " " + b
print s
print a.upper()
print a.lower()
print len(a)
print bool(a)
print bool("")
print int(a)
print a
print a[0]
print a[1]
print a[0:3]
print a[0:4:2]
```
### unicode
```
x = u"abc"
print x
print type(x)
x = u'Элеонора Михайловна'
print x, type(x)
y = x.encode('utf-8')
print y, type(y)
z = y.decode('utf-8')
print z, type(z)
q = y.decode('cp1251')
print q, type(q)
print str(x)
print y[1:]
print len(y), type(y)
print len(x), type(x)
y = u'Иван Иванович'.encode('utf-8')
print y.decode('utf-8')
print y.decode('cp1251')
splitted_line = "Ivanov Ivan Ivanovich".split(' ')
print splitted_line
print type(splitted_line)
print "Иванов Иван Иванович".split(" ")
print "\x98"
print u"Иванов Иван Иванович".split(" ")
```
## 4. Arrays
### list
```
saled_goods_count = [33450, 34010, 33990, 33200]
print saled_goods_count
print type(saled_goods_count)
income = [u'Высокий', u'Средний', u'Высокий']
names = [u'Элеонора Михайловна', u'Иван Иванович', u'Михаил Абрамович']
print income
print names
print "---".join(income)
features = ['Ivan Ivanovich', 'Medium', 500000, 12, True]
print features
print features[0]
print features[1]
print features[3]
print features[0:5]
print features[:5]
print features[1:]
print features[2:5]
print features[:-1]
features.append('One more element in list')
print features
del features[-2]
print features
```
### tuple
```
features_tuple = ('Ivan Ivanovich', 'Medium', 500000, 12, True)
print type(features_tuple)
features_tuple[2:5]
features_tuple.append('one more element')
```
## 5. Sets and Dictionaries
### set
```
names = {'Ivan', 'Petr', 'Konstantin'}
print type(names)
print 'Ivan' in names
print 'Mikhail' in names
names.add('Mikhail')
print names
names.add('Mikhail')
print names
names.remove('Mikhail')
print names
names.add(['Vladimir', 'Vladimirovich'])
names.add(('Vladimir', 'Vladimirovich'))
print names
a = range(10000)
b = range(10000)
b = set(b)
print a[:5]
print a[-5:]
%%time
print 9999 in a
%%time
print 9999 in b
```
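The `%%time` comparison above reflects the underlying data structures: `in` on a list is a linear scan, while `in` on a set is a hash lookup. A rough stand-alone sketch of the same measurement:

```python
import timeit

a = list(range(10000))
b = set(a)

# Membership test for the last element, repeated 1000 times
t_list = timeit.timeit(lambda: 9999 in a, number=1000)  # O(n) scan each time
t_set = timeit.timeit(lambda: 9999 in b, number=1000)   # ~O(1) hash lookup
print(t_list > t_set)  # the list version is dramatically slower
```

This is why converting a large collection to a `set` is worthwhile whenever you do many membership tests against it.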
### dict
```
words_frequencies = dict()
words_frequencies['I'] = 1
words_frequencies['am'] = 1
words_frequencies['I'] += 1
print words_frequencies
print words_frequencies['I']
words_frequencies = {'I': 2, 'am': 1}
print words_frequencies
yet_another_dict = {'abc': 3.4, 5: 7.8, u'123': None}
print yet_another_dict
yet_another_dict[(1,2,5)] = [4, 5, 7]
print yet_another_dict
yet_another_dict[[1,2,7]] = [4, 5]
```
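The last line above fails because dictionary keys (and set members) must be hashable: immutable types like tuples qualify, mutable lists do not. A minimal sketch of the resulting error:

```python
d = {}
d[(1, 2, 5)] = [4, 5, 7]        # a tuple is hashable, so it works as a key
try:
    d[[1, 2, 7]] = [4, 5]       # a list is mutable, hence unhashable
except TypeError as e:
    error_message = str(e)
print(error_message)  # unhashable type: 'list'
```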
## Where Else to Learn Python
* https://www.coursera.org/courses?query=Python
* https://www.codeacademy.com
* http://www.pythontutor.ru
* http://www.learnpythonthehardway.org
* http://snakify.org
* https://www.checkio.org
<img src="images/utfsm.png" alt="" width="100px" align="right"/>
# USM Numérica
## License and lab configuration
Run the following cell with *`Ctrl-S`*.
```
"""
IPython Notebook v4.0 for Python 3.0
Additional libraries:
Content under a CC-BY 4.0 license. Code under an MIT license.
(c) Sebastian Flores, Christopher Cooper, Alberto Rubio, Pablo Bunout.
"""
# Configuration to reload modules and libraries dynamically
%reload_ext autoreload
%autoreload 2
# Configuration for inline plots
%matplotlib inline
# Style configuration
from IPython.core.display import HTML
HTML(open("./style/style.css", "r").read())
```
## Introduction to BASH
Before starting, we should know that Bash is a program used as an interpreter of commands or instructions given by a user, which are typed into a graphical interface or, more commonly, a terminal. Bash interprets those instructions and then passes the resulting orders on to the kernel of the operating system.
Every operating system is built around a particular kernel, which is in charge of interacting with the computer: a kind of brain that organizes, manages and distributes its physical resources, such as memory, processor and storage, among others.
<img src="imbash.png" width="700px">
Bash (Bourne-Again SHell) is a programming language based on the Bourne shell, which was created for Unix systems in the 1970s. Bash has been its natural, freely available successor since 1987, and it is compatible with most Unix and GNU/Linux systems and, in some cases, with Microsoft Windows and Apple systems.
## Objectives
1. Basic operations to create, open and change directories
2. Operations to create a file, copy it and move it between directories
3. Graphical viewer for directories and files
4. Viewing and editing a text file
5. Practice exercise
### 1. Operations to create, open and change directories
These are the most basic operations a user performs in an operating system. The following commands let us move into a folder to access a specific file or resource, create folders or directories to store information, and so on.
The simplest action to start with is entering a desired directory or folder using the *`cd`* command:
```
cd <directory>
```
An extension of this is the possibility of writing a sequence of directories to reach the desired location, separating the names with slashes:
```
cd <directory_1>/<subdirectory_2>/<subdirectory_3>
```
We can list the contents of this directory in the terminal with the *`ls`* command, and then create a new sub-directory (or folder) inside the current directory with *`mkdir`*:
```
mkdir <subdirectory>
```
The same command can also create several sub-directories at once by writing their names one after another, separated by spaces:
```
mkdir <subdirectory_1> <subdirectory_2> ... <subdirectory_N>
```
As a detail, if the name of our directory consists of words separated by spaces, it should be written in quotes, since otherwise Bash will treat each space-separated word as a different subdirectory:
```
mkdir <"subdirectory name">
```
To go back to the previous location, use *`cd ..`* or *`cd -`*; to return to the directory from which the terminal was opened, use *`cd ~`*.
A directory can be deleted together with its contents using the following command:
```Bash
rm -r <directory>
```
Finally, a command that lets us quickly see our current location and its parent directories is *`pwd`*.
### 2. Operations to create, copy and delete files
The next step after the previous section is creating a file and performing basic operations such as copying it from one directory to another, moving it, or deleting it.
To create a file, enter the directory where you want to save it with *`cd`* and then create the file with the *`>`* operator:
```
> <file.ext>
```
By default the file is created in our current directory; remember that *`pwd`* shows the chain of directories and subdirectories down to the current location.
We should mention the *`echo`* command, a shell built-in that can perform more than one action when combined in different ways with other commands or variables. One of its most common uses is printing text to the terminal:
```
echo <text to print>
```
It also lets us print text into a specific file with *`echo <text to print> > <target file>`*, among many other options that *`echo`* offers and that practice will reveal, but we will postpone these to the next section.
We continue with the *`mv`* command, short for "move", which moves an already created file to a new directory:
```
mv <file.ext> <directory>
```
It also moves one directory inside another (moving *directory_1* into *directory_2*); for the command to work correctly, both directories must be in the same location:
```
mv <directory_1> <directory_2>
```
A similar operation is copying a file into a particular directory; the difference is that afterwards there are two copies of the same file, one in the original directory and one in the new directory:
```
cp <file.ext> <directory>
```
Suppose we want to copy a file that exists in another directory and use it to replace a file in the current directory; we can do this as follows:
```
cp ~/source_directory/<source_file> <local_file>
```
The above is run from the directory into which we want to copy the source file, and *~/source_directory/* refers to the directory where that file lives.
If, on the other hand, we are in the directory containing the source files and want to copy them into another directory, without necessarily replacing another file, we can use:
```
cp <source_file_1> <source_file_2> ~/destination_directory/
```
Just as for a directory, a created file can be deleted with the *`rm -r`* command:
```
rm -r <file.ext>
```
And a series of files can be deleted by listing them one after another:
```
rm -r <file_1.ext> <file_2.ext> ... <file_N.ext>
```
### 3. Viewing the structure of directories and files
The *`tree`* command is a quick and useful way to visualize the structure of directories and files graphically, clearly showing the relationship between them. Simply type the command and the information automatically appears on screen (inside the terminal) in alphabetical order; by default it is run from the desired directory and shows the structure beneath it.
If it is not installed on our operating system, as an exercise, we first need to run the following command:
```
sudo apt-get install tree
```
### 4. Viewing, editing and concatenating a text file
To view the contents of a previously created text file (which can be created with the command seen earlier, *`echo > file.ext`*), we use the *`cat`* command:
```
cat <file.ext>
```
To view several files in the terminal, list them one after another after the *`cat`* command:
```
cat <file_1.ext> <file_2.ext> ... <file_N.ext>
```
There are many flags that change how a file's contents are displayed in the terminal: for example, *`cat -n`* numbers every line of the text, while *`cat -b`* numbers only the lines that have content.
If we want to number only the lines with text, but the file has too many blank lines and we want to squeeze them into a single one to save space in the terminal, we can add the *-s* flag as follows:
```
cat -sb <file.ext>
```
Editing or printing text into a file is possible using *`echo`* as follows:
```
echo <text to print> > ./file.txt
```
*`less`* is a pager program used to view text files, run as a command interpreted from the terminal. It displays the whole text file and by default uses the arrow keys to move forward or backward in the viewer.
One advantage of a program like less is that quick actions can be triggered with single-key commands in the command mode that less starts in by default; some basic commands follow:
```
G: jump to the end of the text
```
```
g: jump to the beginning of the text
```
```
h: show help about the available commands
```
```
q: quit the less viewer
```
To modify the text, one option is to launch a text editor, for example:
```
v: open the text in an editor
```
### 5. Practice exercise
To round off this tutorial, the following instructions are left as an exercise:
* Create a main folder or directory
* Copy 2 text files into it from any location
* Create a text file named "Main Text" into which "text concatenation" is printed
* Create a second folder inside the main one
* Concatenate the 2 copied files with the created file
* Move the "Main Text" file to the new folder
* Delete the copies of the concatenated files
* Use tree to view the structure and relationship of the created files and directories
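One possible solution sketch, assuming hypothetical file names (`copy1.txt`, `copy2.txt`, `Main Text.txt`) and falling back to `ls -R` where `tree` is not installed:

```shell
# Work in a scratch directory so nothing outside it is touched
cd "$(mktemp -d)"
mkdir main_dir && cd main_dir                 # 1. create the main directory
echo "first text" > copy1.txt                 # 2. two copied text files (simulated here)
echo "second text" > copy2.txt
echo "text concatenation" > "Main Text.txt"   # 3. create the main file
mkdir inner_dir                               # 4. second directory inside the main one
cat copy1.txt copy2.txt >> "Main Text.txt"    # 5. concatenate the copies into it
mv "Main Text.txt" inner_dir/                 # 6. move the main file
rm -r copy1.txt copy2.txt                     # 7. delete the concatenated copies
ls -R                                         # 8. show the resulting structure
```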
```
%%bash
```
# Prudential Life Insurance Assessment
An example of the structured-data lessons from Lesson 4 applied to another dataset.
```
%reload_ext autoreload
%autoreload 2
%matplotlib inline
import os
from pathlib import Path
import pandas as pd
import numpy as np
import torch
from torch import nn
import torch.nn.functional as F
from fastai import structured
from fastai.column_data import ColumnarModelData
from fastai.dataset import get_cv_idxs
from sklearn.metrics import cohen_kappa_score
from ml_metrics import quadratic_weighted_kappa
from torch.nn.init import kaiming_uniform, kaiming_normal
PATH = Path('./data/prudential')
PATH.mkdir(exist_ok=True)
```
## Download dataset
```
!kaggle competitions download -c prudential-life-insurance-assessment --path={PATH}
for file in os.listdir(PATH):
if not file.endswith('zip'):
continue
!unzip -q -d {PATH} {PATH}/{file}
train_df = pd.read_csv(PATH/'train.csv')
train_df.head()
```
Extra feature engineering taken from the forum
```
train_df['Product_Info_2_char'] = train_df.Product_Info_2.str[0]
train_df['Product_Info_2_num'] = train_df.Product_Info_2.str[1]
train_df['BMI_Age'] = train_df['BMI'] * train_df['Ins_Age']
med_keyword_columns = train_df.columns[train_df.columns.str.startswith('Medical_Keyword_')]
train_df['Med_Keywords_Count'] = train_df[med_keyword_columns].sum(axis=1)
train_df['num_na'] = train_df.apply(lambda x: sum(x.isnull()), 1)
categorical_columns = 'Product_Info_1, Product_Info_2, Product_Info_3, Product_Info_5, Product_Info_6, Product_Info_7, Employment_Info_2, Employment_Info_3, Employment_Info_5, InsuredInfo_1, InsuredInfo_2, InsuredInfo_3, InsuredInfo_4, InsuredInfo_5, InsuredInfo_6, InsuredInfo_7, Insurance_History_1, Insurance_History_2, Insurance_History_3, Insurance_History_4, Insurance_History_7, Insurance_History_8, Insurance_History_9, Family_Hist_1, Medical_History_2, Medical_History_3, Medical_History_4, Medical_History_5, Medical_History_6, Medical_History_7, Medical_History_8, Medical_History_9, Medical_History_11, Medical_History_12, Medical_History_13, Medical_History_14, Medical_History_16, Medical_History_17, Medical_History_18, Medical_History_19, Medical_History_20, Medical_History_21, Medical_History_22, Medical_History_23, Medical_History_25, Medical_History_26, Medical_History_27, Medical_History_28, Medical_History_29, Medical_History_30, Medical_History_31, Medical_History_33, Medical_History_34, Medical_History_35, Medical_History_36, Medical_History_37, Medical_History_38, Medical_History_39, Medical_History_40, Medical_History_41'.split(', ')
categorical_columns += ['Product_Info_2_char', 'Product_Info_2_num']
cont_columns = 'Product_Info_4, Ins_Age, Ht, Wt, BMI, Employment_Info_1, Employment_Info_4, Employment_Info_6, Insurance_History_5, Family_Hist_2, Family_Hist_3, Family_Hist_4, Family_Hist_5, Medical_History_1, Medical_History_10, Medical_History_15, Medical_History_24, Medical_History_32'.split(', ')
cont_columns += [c for c in train_df.columns if c.startswith('Medical_Keyword_')] + ['BMI_Age', 'Med_Keywords_Count', 'num_na']
train_df[categorical_columns].head()
train_df[cont_columns].head()
train_df = train_df[categorical_columns + cont_columns + ['Response']]
len(train_df.columns)
```
### Convert to categorical
```
for col in categorical_columns:
train_df[col] = train_df[col].astype('category').cat.as_ordered()
train_df['Product_Info_1'].dtype
train_df.shape
```
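What `.astype('category').cat.as_ordered()` buys us: each value becomes an integer code against an ordered list of categories, which is exactly the representation the embedding layers will consume later. A toy sketch:

```python
import pandas as pd

s = pd.Series(['b', 'a', 'b', 'c']).astype('category').cat.as_ordered()
print(list(s.cat.categories))  # the ordered vocabulary
print(s.cat.codes.tolist())    # the integer code assigned to each row
```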
### Numericalise and process DataFrame
```
df, y, nas, mapper = structured.proc_df(train_df, 'Response', do_scale=True)
y = y.astype('float')
num_targets = len(set(y))
```
### Create ColumnData object (instead of ImageClassifierData)
```
cv_idx = get_cv_idxs(len(df))
cv_idx
model_data = ColumnarModelData.from_data_frame(
PATH, cv_idx, df, y, cat_flds=categorical_columns, is_reg=True)
model_data.trn_ds[0][0].shape[0] + model_data.trn_ds[0][1].shape[0]
model_data.trn_ds[0][1].shape
```
### Get embedding sizes
The formula Jeremy uses for getting embedding sizes is: cardinality / 2 (maxed out at 50).
We reproduce that below:
```
categorical_column_sizes = [
(c, len(train_df[c].cat.categories) + 1) for c in categorical_columns]
categorical_column_sizes[:5]
embedding_sizes = [(c, min(50, (c+1)//2)) for _, c in categorical_column_sizes]
embedding_sizes[:5]
def emb_init(x):
x = x.weight.data
sc = 2/(x.size(1)+1)
x.uniform_(-sc,sc)
class MixedInputModel(nn.Module):
def __init__(self, emb_sizes, num_cont):
super().__init__()
embedding_layers = []
for size, dim in emb_sizes:
embedding_layers.append(
nn.Embedding(
num_embeddings=size, embedding_dim=dim))
self.embeddings = nn.ModuleList(embedding_layers)
for emb in self.embeddings: emb_init(emb)
self.embedding_dropout = nn.Dropout(0.04)
self.batch_norm_cont = nn.BatchNorm1d(num_cont)
num_emb = sum(e.embedding_dim for e in self.embeddings)
self.fc1 = nn.Linear(
in_features=num_emb + num_cont,
out_features=1000)
kaiming_normal(self.fc1.weight.data)
self.dropout_fc1 = nn.Dropout(p=0.01)
self.batch_norm_fc1 = nn.BatchNorm1d(1000)
self.fc2 = nn.Linear(
in_features=1000,
out_features=500)
kaiming_normal(self.fc2.weight.data)
self.dropout_fc2 = nn.Dropout(p=0.01)
self.batch_norm_fc2 = nn.BatchNorm1d(500)
self.output_fc = nn.Linear(
in_features=500,
out_features=1
)
kaiming_normal(self.output_fc.weight.data)
self.sigmoid = nn.Sigmoid()
def forward(self, categorical_input, continuous_input):
# Concatenate the categorical embeddings
categorical_embeddings = [e(categorical_input[:,i]) for i, e in enumerate(self.embeddings)]
categorical_embeddings = torch.cat(categorical_embeddings, 1)
categorical_embeddings_dropout = self.embedding_dropout(categorical_embeddings)
# Batch normalise continuous vars
continuous_input_batch_norm = self.batch_norm_cont(continuous_input)
# Create a single vector
x = torch.cat([
categorical_embeddings_dropout, continuous_input_batch_norm
], dim=1)
# Fully-connected layer 1
fc1_output = self.fc1(x)
fc1_relu_output = F.relu(fc1_output)
fc1_dropout_output = self.dropout_fc1(fc1_relu_output)
fc1_batch_norm = self.batch_norm_fc1(fc1_dropout_output)
# Fully-connected layer 2
fc2_output = self.fc2(fc1_batch_norm)
fc2_relu_output = F.relu(fc2_output)
fc2_batch_norm = self.batch_norm_fc2(fc2_relu_output)
fc2_dropout_output = self.dropout_fc2(fc2_batch_norm)
output = self.output_fc(fc2_dropout_output)
output = self.sigmoid(output)
output = output * 7
output = output + 1
return output
num_cont = len(df.columns) - len(categorical_columns)
model = MixedInputModel(
embedding_sizes,
num_cont
)
model
from fastai.column_data import StructuredLearner
def weighted_kappa_metric(probs, y):
return quadratic_weighted_kappa(probs[:,0], y[:,0])
learner = StructuredLearner.from_model_data(model, model_data, metrics=[weighted_kappa_metric])
learner.lr_find()
learner.sched.plot()
learner.fit(0.0001, 3, use_wd_sched=True)
learner.fit(0.0001, 5, cycle_len=1, cycle_mult=2, use_wd_sched=True)
learner.fit(0.00001, 3, cycle_len=1, cycle_mult=2, use_wd_sched=True)
```
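The `output * 7 + 1` at the end of `forward` squashes the sigmoid's (0, 1) range into (1, 8), matching the eight ordinal `Response` levels of this competition. A dependency-free sketch of the mapping:

```python
import math

def scale_to_response(logit):
    """Map an unbounded score to the open interval (1, 8), as in forward() above."""
    sigmoid = 1.0 / (1.0 + math.exp(-logit))
    return 1.0 + 7.0 * sigmoid

print(scale_to_response(0.0))    # the midpoint of the response range
print(scale_to_response(-10.0))  # just above 1
print(scale_to_response(10.0))   # just below 8
```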
There's either a bug in my implementation, or a neural network simply doesn't do that well on this problem.
# **Quality Control (QC) and filtering**
This notebook serves for filtering the second human testis sample. It is analogous to the filtering of the other sample, so feel free to go through it faster, just skimming the text.
---------------------
**Motivation:**
Quality control and filtering are the most important steps of single cell data analysis. Allowing low-quality cells into your analysis will compromise or mislead your conclusions by adding hundreds of meaningless data points to your workflow.
The main sources of low quality cells are
- broken cells for which some of their transcripts get lost
- cells isolated together with too much ambient RNA
- no cell captured during isolation (e.g. an empty droplet in a microfluidic machine)
- multiple cells isolated together (multiplets, usually just two cells - doublets)
---------------------------
**Learning objectives:**
- Understand and discuss QC issues and measures from single cell data
- Explore QC graphs and set filtering tools and thresholds
- Analyze the results of QC filters and evaluate necessity for different filtering
----------------
**Execution time: 40 minutes**
------------------------------------
**Import the packages**
```
import scanpy as sc
import pandas as pd
import scvelo as scv
import numpy as np
import seaborn as sns
import matplotlib.pyplot as plt
import sklearn
import ipywidgets as widgets
sample_3 = sc.read_h5ad('../../Data/notebooks_data/sample_3.h5ad')
```
We calculate the percentage of mitochondrial transcripts in each cell. A high percentage indicates that material from broken cells may have been captured during cell isolation and then sequenced. Mitochondrial percentage is not calculated by `scanpy` by default, because it needs an identifier for mitochondrial genes, and there is no standard one. In our case, we look at genes whose ID starts with `MT-` and calculate the proportion of their transcripts in each cell. We save the result as an observation in `.obs['perc_mito']`.
```
MT = [i.startswith('MT-') for i in sample_3.var_names]
perc_mito = np.sum( sample_3[:,MT].X, 1 ) / np.sum( sample_3.X, 1 )
sample_3.obs['perc_mito'] = perc_mito.copy()
```
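As a sanity check of the fraction computed above, the same arithmetic on a toy count matrix (hypothetical genes, with gene 0 playing the mitochondrial role):

```python
import numpy as np

# Rows = cells, columns = genes; pretend only gene 0 is mitochondrial
X = np.array([[10, 5, 5],
              [20, 0, 0]])
is_mt = np.array([True, False, False])

# Mitochondrial counts per cell divided by total counts per cell
perc_mito = X[:, is_mt].sum(axis=1) / X.sum(axis=1)
print(perc_mito)  # cell 1 contains nothing but mitochondrial counts
```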
## Visualize and evaluate quality measures
We can make some plots to look at quality measures combined together.
**Counts vs Genes:** this is a typical plot, where you look at the total transcripts per cell (x axis) and the detected genes per cell (y axis). Usually these two measures grow together. Points with a lot of transcripts and genes might be multiplets (multiple cells sequenced together as one), while very few transcripts and genes denote the presence of only ambient RNA or very low quality sequencing of a cell. Below, the dots are coloured by the percentage of mitochondrial transcripts. Note how a high proportion often occurs in cells with very few transcripts and genes (bottom-left corner of the plot).
```
sc.pl.scatter(sample_3, x='total_counts', y='n_genes_by_counts', color='perc_mito',
              title='Nr of transcripts vs Nr detected genes, coloured by mitochondrial content',
              size=50)
```
**Transcripts and Genes distribution:** Here we simply look at the distribution of transcripts per cell and of detected genes per cell. Note how the distributions are bimodal. This usually denotes a cluster of low-quality cells alongside the viable ones. Sometimes filtering out the data points in the left-most modes of these graphs removes a lot of cells from a dataset, but this is quite normal and nothing to be worried about. The right side of the distributions shows a tail with a few cells having a lot of transcripts and genes. It is also good to filter out some of those extreme values - for technical reasons, it will also help in having a better normalization of the data later on.
```
# distplot was deprecated and then removed in recent seaborn versions;
# histplot with kde=True is the replacement
ax = sns.histplot(sample_3.obs['total_counts'], bins=50, kde=True)
ax.set_title('Cells Transcripts distribution')
ax = sns.histplot(sample_3.obs['n_genes_by_counts'], bins=50, kde=True)
ax.set_title('Distribution of detected genes per cell')
```
**Mitochondrial content**: In this dataset there are only a few cells with a high percentage of mitochondrial content: precisely 245 if we set 0.1 (that is, 10%) as the threshold. A value between 10% and 20% is the usual standard when filtering single cell datasets.
```
# subsetting to see how many cells have a percentage of mitochondrial genes above 10%
sample_3[ sample_3.obs['perc_mito']>0.1, : ].shape
ax = sns.histplot(sample_3.obs['perc_mito'], bins=50, kde=True)
ax.set_title('Distribution of mitochondrial content per cell')
```
## Choosing thresholds
Let's establish some filtering values by looking at the plots above, and then apply the filtering
```
MIN_COUNTS = 5000 #minimum number of transcripts per cell
MAX_COUNTS = 40000 #maximum number of transcripts per cell
MIN_GENES = 2000 #minimum number of genes per cell
MAX_GENES = 6000 #maximum number of genes per cell
MAX_MITO = .1 #mitochondrial percentage threshold
#plot cells filtered by max transcripts
a=sc.pl.scatter(sample_3[ sample_3.obs['total_counts']<MAX_COUNTS ],
x='total_counts', y='n_genes_by_counts', color='perc_mito', size=50,
              title =f'Nr of transcripts vs Nr detected genes, coloured by mitochondrial content\nsubsetting with threshold MAX_COUNTS={MAX_COUNTS}')
#plot cells filtered by min genes
b=sc.pl.scatter(sample_3[ sample_3.obs['n_genes_by_counts'] > MIN_GENES ],
x='total_counts', y='n_genes_by_counts', color='perc_mito', size=50,
              title =f'Nr of transcripts vs Nr detected genes, coloured by mitochondrial content\nsubsetting with threshold MIN_GENES={MIN_GENES}')
```
The following commands filter using the chosen thresholds.
Again, scanpy does not do the mitochondrial QC filtering,
so we do that on our own by subsetting the data.
Note for the gene filtering step: the parameter `min_cells`
removes all genes detected in fewer than 10 cells.
Standard values for this parameter are usually between 3 and 10,
and do not come from looking at the QC plots.
```
sc.preprocessing.filter_cells(sample_3, max_counts=MAX_COUNTS)
sc.preprocessing.filter_cells(sample_3, min_counts=MIN_COUNTS)
sc.preprocessing.filter_cells(sample_3, min_genes=MIN_GENES)
sc.preprocessing.filter_cells(sample_3, max_genes=MAX_GENES)
sc.preprocessing.filter_genes(sample_3, min_cells=10)
sample_3 = sample_3[sample_3.obs['perc_mito']<MAX_MITO].copy()
```
We have reduced the data quite a lot from the original >8000 cells. Often, even more aggressive filtering is applied. For example, one could have set the minimum number of detected genes to 3000, which would still fall in the area between the two modes of the QC plot.
```
print(f'Cells after filters: {sample_3.shape[0]}, Genes after filters: {sample_3.shape[1]}')
```
## Doublet filtering
Another important step consists of filtering out multiplets. In almost all cases these are doublets, because triplets and higher-order multiplets are extremely rare. Read [this more technical blog post](https://liorpachter.wordpress.com/2019/02/07/sub-poisson-loading-for-single-cell-rna-seq/) for more explanation.
The external tool `scrublet` simulates doublets by summing the transcripts of random pairs of cells from the dataset. It then assigns a score to each cell in the data, based on its similarity to the simulated doublets. An `expected_doublet_rate` of 0.06 (6%) is a typical value for single cell data, but if you have a better estimate from laboratory work, microscope imaging or a specific protocol/sequencing machine, you can tweak the value.
`random_state` is a number that determines how the simulations are done. Using a specific random state means that you will simulate the same doublets whenever you run this code. This allows you to reproduce exactly the same results every time and is great for reproducibility in your own research.
```
sc.external.pp.scrublet(sample_3,
expected_doublet_rate=0.06,
random_state=12345)
```
It seems that the doublet rate is likely lower than 6%, meaning that in this regard the data has been produced quite well. We now plot the doublet scores assigned to each cell by the algorithm. Most cells have a low score (the score is a value between 0 and 1). Datasets with many doublets show a more bimodal distribution, while here we just have a light tail beyond 0.1.
```
sns.histplot(sample_3.obs['doublet_score'], kde=True)
```
We could choose 0.1 as the filtering threshold for the few detected doublets, or alternatively use the doublets automatically selected by the algorithm. We will choose the latter option.
```
sample_3 = sample_3[np.invert(sample_3.obs['predicted_doublet'])].copy()
```
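For reference, the manual-threshold alternative mentioned above can be sketched on a toy set of scores. The column name `doublet_score` matches scrublet's output, but the values and the 0.1 cutoff here are made up for illustration:

```
import pandas as pd

# Hypothetical doublet scores standing in for sample_3.obs['doublet_score']
obs = pd.DataFrame({'doublet_score': [0.02, 0.05, 0.31, 0.08, 0.45]})

DOUBLET_THRESHOLD = 0.1  # manual cutoff read off the score distribution
singlets = obs['doublet_score'] < DOUBLET_THRESHOLD  # boolean mask of kept cells
# On the AnnData object this would be:
#     sample_3 = sample_3[(sample_3.obs['doublet_score'] < 0.1).values].copy()
print(int(singlets.sum()))  # number of cells kept
```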
## Evaluation of filtering
A basic but effective way to check the results of our filtering is to normalize the data and plot it on some projections. Here we use a standard normalization pipeline that consists of:
- **TPM normalization**: the transcripts of each cell are normalized so that their total amounts to the same value in each cell. This should make cells more comparable, independently of how many transcripts have been retained during cell isolation.
- **Logarithmization**: the logarithm of the normalized transcripts is calculated. This reduces the variability of transcript values and highlights variations due to biological factors.
- **Standardization**: Each gene is standardized across all cells. This is useful for example for projecting the data onto a PCA.
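The three steps above can be sketched in plain numpy on a toy counts matrix (the matrix values are made up for illustration):

```
import numpy as np

X = np.array([[2., 0., 8.],
              [2., 6., 12.]])                      # toy cells x genes counts
# 1) per-cell total-count normalization: every cell sums to the same value
Xn = X / X.sum(axis=1, keepdims=True) * X.sum(axis=1).mean()
# 2) logarithmization (log1p, as scanpy does)
Xl = np.log1p(Xn)
# 3) per-gene standardization (z-score of each gene across cells)
Xs = (Xl - Xl.mean(axis=0)) / Xl.std(axis=0)
```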
```
# TPM normalization and storage of the matrix
sc.pp.normalize_per_cell(sample_3)
sample_3.layers['umi_tpm'] = sample_3.X.copy()
# Logarithmization and storage
sc.pp.log1p(sample_3)
sample_3.layers['umi_log'] = sample_3.X.copy()
# Select some of the most meaningful genes to calculate the PCA plot later
# This must be done on logarithmized values
sc.pp.highly_variable_genes(sample_3, n_top_genes=15000)
# save the dataset
sample_3.write('../../Data/notebooks_data/sample_3.filt.h5ad')
# standardization and matrix storage
sc.pp.scale(sample_3)
sample_3.layers['umi_gauss'] = sample_3.X.copy()
```
Now we calculate the PCA projection
```
sc.preprocessing.pca(sample_3, svd_solver='arpack', random_state=12345)
```
We can look at the PCA plot and colour it by some quality measure and by gene expression. We can already see that the PCA has a clear structure with only a few dots scattered around. It seems the filtering has produced a good result.
```
sc.pl.pca(sample_3, color=['total_counts','SYCP1'])
```
We plot the variance ratio to see how much variability each PCA component explains. Small changes between consecutive components indicate that they are mostly modeling noise in the data. We can choose a cutoff (for example 15 PCA components) to be used in all algorithms that compute quantities from the PCA.
```
sc.plotting.pca_variance_ratio(sample_3)
```
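One way to turn the visual elbow inspection into a number is to take the smallest number of components that reaches a chosen fraction of the total variance. A minimal sketch on random data follows; the 0.90 cutoff is an arbitrary choice of ours, not a scanpy default:

```
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 50))            # stand-in for the scaled data matrix
Xc = X - X.mean(axis=0)                   # center before PCA
S = np.linalg.svd(Xc, compute_uv=False)   # singular values
var_ratio = S**2 / (S**2).sum()           # same quantity pca_variance_ratio plots
# smallest number of components explaining at least 90% of the variance
n_pcs = int(np.searchsorted(np.cumsum(var_ratio), 0.90)) + 1
print(n_pcs)
```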
We project the data using the UMAP algorithm, which is very good at preserving the structure of a dataset in low dimension, if any is present. We first calculate the neighbors of each cell (that is, its most similar cells); those are then used by UMAP. The neighbors are calculated on the PCA matrix instead of the full data matrix, so we can choose how many PCA components to use (parameter `n_pcs`). Many algorithms work on the PCA, so you will see this parameter again in other places.
```
sc.pp.neighbors(sample_3, n_pcs=15, random_state=12345)
sc.tools.umap(sample_3, random_state=54321)
```
The UMAP plot gives a pretty well-structured output for this dataset. We will keep working further with this filtering.
```
sc.plotting.umap(sample_3, color=['total_counts','SYCP1'])
```
-------------------------------
## Wrapping up
The second human testis dataset is now filtered and you can proceed to the normalization and integration part of the analysis (Notebook `Part03_normalize_and_integrate`).
Carbon Insight: Carbon Emissions Visualization
==============================================
This tutorial aims to showcase how to visualize anthropogenic CO2 emissions with a near-global coverage and track correlations between global carbon emissions and socioeconomic factors such as COVID-19 and GDP.
```
# Requirements
%pip install numpy
%pip install pandas
%pip install matplotlib
```
# A. Process Carbon Emission Data
This notebook helps you to process and visualize carbon emission data provided by [Carbon-Monitor](https://carbonmonitor.org/), which records human-caused carbon emissions from different countries, sources, and timeframes that are of interest to you.
Overview:
- [Process carbon emission data](#a1)
- [Download data from Carbon Monitor](#a11)
- [Calculate the rate of change](#a12)
- [Expand country regions](#a13)
- [Visualize carbon emission data](#a2)
- [Observe carbon emission data from the perspective of time](#a21)
- [Compare carbon emission data of different sectors](#a22)
- [Examples](#a3)
- [World carbon emission data](#a31)
- [US carbon emission data](#a32)
```
import io
from urllib.request import urlopen
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
from pandas.plotting import register_matplotlib_converters
register_matplotlib_converters()
import os
# Optional Function: Export Data
def export_data(file_name: str, df: pd.DataFrame):
# df = country_region_name_to_code(df)
export_path = os.path.join('export_data', file_name)
print(f'Export Data to {export_path}')
if not os.path.exists('export_data'):
os.mkdir('export_data')
with open(export_path, 'w', encoding='utf-8') as f:
        f.write(df.to_csv(index=None, lineterminator='\n', encoding='utf-8'))
```
## <a id="a1"></a> 1. Process Data
### <a id="a11"></a> 1.1. Download data from Carbon Monitor
We are going to download tabular carbon emission data and convert it to a pandas DataFrame.
Supported data types:
- ```carbon_global``` includes carbon emission data of 11 countries and regions worldwide.
- ```carbon_us``` includes carbon emission data of 51 states of the United States.
- ```carbon_eu``` includes carbon emission data of 27 countries of the European Union.
- ```carbon_china``` includes carbon emission data of 31 cities and provinces of China.
```
def get_data_from_carbon_monitor(data_type='carbon_global'):
assert data_type in ['carbon_global', 'carbon_us', 'carbon_eu', 'carbon_china']
data_url = f'https://datas.carbonmonitor.org/API/downloadFullDataset.php?source={data_type}'
data = urlopen(data_url).read().decode('utf-8-sig')
df = pd.read_csv(io.StringIO(data))
df = df.drop(columns=['timestamp'])
df = df.loc[pd.notna(df['date'])]
df = df.rename(columns={'country': 'country_region'})
df['date'] = pd.to_datetime(df['date'], format='%d/%m/%Y')
if data_type == 'carbon_us':
df = df.loc[df['state'] != 'United States']
return df
```
### <a id="a12"></a> 1.2. Calculate the rate of change
The rate of change $\Delta(s, r, t)$ is defined as the ratio of the current value to the moving average over a certain window size:
$$\begin{aligned}
\Delta(s, r, t) = \left\{\begin{aligned}
&\frac{T\,X(s, r, t)}{\sum_{\tau=t-T+1}^{t}X(s, r, \tau)}, &t\geq T\\
&1, &0<t<T
\end{aligned}\right.
\end{aligned}$$
Where $X(s, r, t)$ is the carbon emission value of sector $s$, region $r$ and date $t$; $T$ is the window size with default value $T=14$.
```
def calculate_rate_of_change(df, window_size=14):
    region_scope = 'state' if 'state' in df.columns else 'country_region'
    frames = []
    for sector in set(df['sector']):
        sector_mask = df['sector'] == sector
        for region, values in df.loc[sector_mask].pivot(index='date', columns=region_scope, values='value').items():
            values = values.fillna(0)  # fillna returns a copy, so assign it back
            rates = values / values.rolling(window_size).mean()
            rates = rates.fillna(value=1)
            tmp_df = pd.DataFrame(
                index=values.index,
                columns=['value', 'rate_of_change'],
                data=np.array([values.to_numpy(), rates.to_numpy()]).T
            )
            tmp_df['sector'] = sector
            tmp_df[region_scope] = region
            frames.append(tmp_df.reset_index())
    # DataFrame.append was removed in pandas 2.0; collect frames and concat once
    return pd.concat(frames, ignore_index=True)
```
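The definition can be checked on a toy series; the window size and values below are made up, and the computation matches the `values / values.rolling(...).mean()` line in the function:

```
import pandas as pd

T = 3                                           # toy window size (default above is 14)
x = pd.Series([10.0, 12.0, 11.0, 20.0, 15.0])
rates = (x / x.rolling(T).mean()).fillna(1)     # first T-1 entries default to 1
# e.g. at t=3: 20 / mean(12, 11, 20) = 20 / (43/3)
```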
### <a id="a13"></a> 1.3. Expand country regions
*Note: This step applies only to the ```carbon_global``` dataset.*
The dataset ```carbon_global``` does not list all the countries/regions in the world. Instead, there are two groups which contain multiple countries/regions: ```ROW``` (i.e. the rest of the world) and ```EU27 & UK```.
In order to obtain the carbon emission data of countries/regions in these two groups, we can refer to [the EDGAR dataset](https://edgar.jrc.ec.europa.eu/dataset_ghg60) and use [the table of CO2 emissions of all world countries in 2019](./static_data/Baseline.csv) as the baseline.
Assuming that the carbon emission of each non-listed country/region is linearly related to the carbon emission of the group it belongs to, we have:
$$\begin{aligned}
X(s, r, t) &= \frac{\sum_{r_i\in R(r)}X(s, r_i, t)}{\sum_{r_i\in R(r)}X(s, r_i, t_0)}X(s, r, t_0)\\
&= \frac{X_{Raw}(s, R(r), t)}{\sum_{r_i\in R(r)}X_{Baseline}(s, r_i)}X_{Baseline}(s, r)
\end{aligned}$$
Where
- $X(s, r, t)$ is the carbon emission value of sector $s$, country/region $r$ and date $t$.
- $t_0$ is the date of the baseline table.
- $R(r)$ is the group that contains country/region $r$.
- $X_{Raw}(s, R, t)$ is the carbon emission value of sector $s$, country/region group $R$ and date $t$ in the ```carbon_global``` dataset.
- $X_{Baseline}(s, r)$ is the carbon emission value of sector $s$ and country/region $r$ in the baseline table.
Note that the baseline table does not contain the ```International Aviation``` sector. Therefore, the data for ```International Aviation``` is only available to countries listed in the ```carbon_global``` dataset. When we expand the ```ROW``` and the ```EU27 & UK``` groups to other countries/regions of the world, only the other five sectors are considered.
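The allocation formula above boils down to splitting a group total according to baseline shares. A toy example with made-up numbers (two hypothetical members `A` and `B` of a group `R`):

```
# X_Raw(s, R, t): the group total to be split among non-listed members
group_total = 120.0
# X_Baseline(s, r): baseline emissions of the members of R
baseline = {'A': 30.0, 'B': 10.0}
share_sum = sum(baseline.values())
expanded = {r: group_total * b / share_sum for r, b in baseline.items()}
# 'A' gets 120 * 30/40 = 90.0, 'B' gets 120 * 10/40 = 30.0
```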
```
def expand_country_regions(df):
sectors = set(df['sector'])
assert 'country_region' in df.columns
df = df.replace('US', 'United States')
df = df.replace('UK', 'United Kingdom')
original_country_regions = set(df['country_region'])
country_region_df = pd.read_csv('static_data/CountryRegion.csv')
base = {}
name_to_code = {}
for _, (name, code, source) in country_region_df.loc[:, ['Name', 'Code', 'DataSource']].iterrows():
if source.startswith('Simulated') and name not in original_country_regions:
name_to_code[name] = code
base[code] = 'ROW' if source.endswith('ROW') else 'EU27 & UK'
baseline = pd.read_csv('static_data/Baseline.csv')
baseline = baseline.set_index('CountryRegionCode')
baseline = baseline.loc[:, [sector for sector in baseline.columns if sector in sectors]]
group_baseline = {}
for group in original_country_regions & set(['ROW', 'EU27 & UK']):
group_baseline[group] = baseline.loc[[code for code, base_group in base.items() if base_group == group], :].sum()
    frames = []
    sector_masks = {sector: df['sector'] == sector for sector in sectors}
    for country_region in set(country_region_df['Name']):
        if country_region in name_to_code:
            code = name_to_code[country_region]
            group = base[code]
            group_mask = df['country_region'] == group
            for sector, sum_value in group_baseline[group].items():
                tmp_df = df.loc[sector_masks[sector] & group_mask, :].copy()
                tmp_df['value'] = tmp_df['value'] / sum_value * baseline.loc[code, sector]
                tmp_df['country_region'] = country_region
                frames.append(tmp_df)
        elif country_region in original_country_regions:
            frames.append(df.loc[df['country_region'] == country_region])
    # DataFrame.append was removed in pandas 2.0; collect frames and concat once
    return pd.concat(frames, ignore_index=True)
```
## 2. <a id="a2"></a> Visualize Data
This is an auxiliary module for displaying data, which can be modified freely.
### <a id="a21"></a> 2.1. Plot by date
In this part we are going to create a line chart, where the emission value and rate of change for the given countries/regions during the given time range can be browsed.
```
def plot_by_date(df, start_date=None, end_date=None, sector=None, regions=None, title='Carbon Emission by Date'):
if start_date is None:
start_date = df['date'].min()
if end_date is None:
end_date = df['date'].max()
tmp_df = df.loc[(df['date'] >= start_date) & (df['date'] <= end_date)]
region_scope = 'state' if 'state' in tmp_df.columns else 'country_region'
if regions is None or type(regions) == int:
region_list = list(set(tmp_df[region_scope]))
sector_mask = True if sector is None else tmp_df['sector'] == sector
region_list.sort(key=lambda region: -tmp_df.loc[(tmp_df[region_scope] == region) & sector_mask, 'value'].sum())
regions = region_list[:3 if regions is None else regions]
tmp_df = pd.concat([tmp_df.loc[tmp_df[region_scope] == region] for region in regions])
if sector not in set(tmp_df['sector']):
tmp_df['rate_of_change'] = tmp_df['value'] / tmp_df['rate_of_change']
tmp_df = tmp_df.groupby(['date', region_scope]).sum().reset_index()
value_df = tmp_df.pivot(index='date', columns=region_scope, values='value')
rate_df = tmp_df.pivot(index='date', columns=region_scope, values='rate_of_change')
rate_df = value_df / rate_df
else:
tmp_df = tmp_df.loc[tmp_df['sector'] == sector, [region_scope, 'date', 'value', 'rate_of_change']]
value_df = tmp_df.pivot(index='date', columns=region_scope, values='value')
rate_df = tmp_df.pivot(index='date', columns=region_scope, values='rate_of_change')
value_df = value_df.loc[:, regions]
rate_df = rate_df.loc[:, regions]
fig = plt.figure(figsize=(10, 8))
fig.suptitle(title)
plt.subplot(2, 1, 1)
plt.plot(value_df)
plt.ylabel('Carbon Emission Value / Mt CO2')
plt.xticks(rotation=60)
plt.legend(regions, loc='upper right')
plt.subplot(2, 1, 2)
plt.plot(rate_df)
plt.ylabel('Rate of Change')
plt.xticks(rotation=60)
plt.legend(regions, loc='upper right')
plt.subplots_adjust(hspace=0.3)
```
### <a id="a22"></a> 2.2. Plot by sector
Generally, sources of emissions can be divided into five or six categories:
- Domestic Aviation
- Ground Transport
- Industry
- Power
- Residential
- International Aviation
The data for ```International Aviation``` are only available in the ```carbon_global``` and ```carbon_us``` datasets. For the ```carbon_global``` dataset, we cannot expand the International Aviation data to non-listed countries.
Let's create a pie chart and a stacked column chart, where you can focus on the details of a specific country/region's emission data, including quantity and percentage breakdown by the above sectors.
```
def plot_by_sector(df, start_date=None, end_date=None, sectors=None, region=None, title='Carbon Emission Data by Sector'):
if start_date is None:
start_date = df['date'].min()
if end_date is None:
end_date = df['date'].max()
tmp_df = df.loc[(df['date'] >= start_date) & (df['date'] <= end_date)]
region_scope = 'state' if 'state' in df.columns else 'country_region'
if region in set(tmp_df[region_scope]):
tmp_df = tmp_df.loc[tmp_df[region_scope] == region]
if sectors is None:
sectors = list(set(tmp_df['sector']))
sectors.sort(key=lambda sector: -tmp_df.loc[tmp_df['sector'] == sector, 'value'].sum())
tmp_df = tmp_df.loc[[sector in sectors for sector in tmp_df['sector']]]
fig = plt.figure(figsize=(10, 8))
fig.suptitle(title)
plt.subplot(2, 1, 1)
data = np.array([tmp_df.loc[tmp_df['sector'] == sector, 'value'].sum() for sector in sectors])
total = tmp_df['value'].sum()
bbox_props = dict(boxstyle="square,pad=0.3", fc="w", ec="k", lw=0.72)
kw = dict(arrowprops=dict(arrowstyle="-"),
bbox=bbox_props, zorder=0, va="center")
wedges, texts = plt.pie(data, wedgeprops=dict(width=0.5), startangle=90)
for i, p in enumerate(wedges):
factor = data[i] / total * 100
if factor > 5:
ang = (p.theta2 - p.theta1)/2. + p.theta1
y = np.sin(np.deg2rad(ang))
x = np.cos(np.deg2rad(ang))
horizontalalignment = {-1: "right", 1: "left"}[int(np.sign(x))]
connectionstyle = "angle,angleA=0,angleB={}".format(ang)
kw["arrowprops"].update({"connectionstyle": connectionstyle})
text = '{}\n{:.1f} Mt CO2 ({:.1f}%)'.format(sectors[i], data[i], factor)
plt.annotate(
text,
xy=(x, y),
xytext=(1.35 * np.sign(x), 1.4 * y),
horizontalalignment=horizontalalignment,
**kw
)
plt.axis('equal')
plt.subplot(2, 1, 2)
labels = []
data = [[] for _ in sectors]
date = pd.to_datetime(start_date)
delta = pd.DateOffset(months=1)
while date <= pd.to_datetime(end_date):
sub_df = tmp_df.loc[(tmp_df['date'] >= date) & (tmp_df['date'] < date + delta)]
for i, sector in enumerate(sectors):
data[i].append(sub_df.loc[sub_df['sector'] == sector, 'value'].sum())
labels.append(date.strftime('%Y-%m'))
date += delta
data = np.array(data)
for i, sector in enumerate(sectors):
plt.bar(labels, data[i], bottom=data[:i].sum(axis=0), label=sector)
plt.xticks(rotation=60)
plt.legend()
```
## <a id="a3"></a> 3. Examples
### <a id="a31"></a> 3.1. World carbon emission data
```
data_type = 'carbon_global'
print(f'Download {data_type} data')
global_df = get_data_from_carbon_monitor(data_type)
print('Calculate rate of change')
global_df = calculate_rate_of_change(global_df)
print('Expand country / regions')
global_df = expand_country_regions(global_df)
export_data('global_carbon_emission_data.csv', global_df)
global_df
plot_by_date(
global_df,
start_date='2019-01-01',
end_date='2020-12-31',
sector='Residential',
regions=['China', 'United States'],
title='Residential Carbon Emission, China v.s. United States, 2019-2020'
)
plot_by_sector(
global_df,
start_date='2019-01-01',
end_date='2020-12-31',
sectors=None,
region=None,
title='World Carbon Emission by Sectors, 2019-2020',
)
```
### <a id="a32"></a> 3.2. US carbon emission data
```
data_type = 'carbon_us'
print(f'Download {data_type} data')
us_df = get_data_from_carbon_monitor(data_type)
print('Calculate rate of change')
us_df = calculate_rate_of_change(us_df)
export_data('us_carbon_emission_data.csv', us_df)
us_df
plot_by_date(
us_df,
start_date='2019-01-01',
end_date='2020-12-31',
sector=None,
regions=3,
title='US Carbon Emission, Top 3 States, 2019-2020'
)
plot_by_sector(
us_df,
start_date='2019-01-01',
end_date='2020-12-31',
sectors = None,
region='California',
title='California Carbon Emission by Sectors, 2019-2020',
)
```
# B. Co-Analysis of Carbon Emission Data v.s. COVID-19 Data
This section will help you visualize the relationship between carbon emissions in different countries and the trends of the COVID-19 pandemic since January 2020, using data provided by the [Oxford COVID-19 Government Response Tracker](https://covidtracker.bsg.ox.ac.uk/). The severity of the epidemic is shown in three aspects: the number of new diagnoses, the number of deaths, and the government stringency and policy indices.
Overview:
- [Download data from Oxford COVID-19 Government Response Tracker](#b1)
- [Visualize COVID-19 data and carbon emission data](#b2)
- [Example: COVID-19 cases and stringency index v.s. carbon emission in US](#b3)
```
import json
import datetime
from urllib.request import urlopen
```
## 1. <a id="b1"></a> Download COVID-19 Data
We are going to download JSON-formatted COVID-19 data and convert it to a pandas DataFrame. The Oxford COVID-19 Government Response Tracker dataset provides confirmed cases, deaths and stringency index data for all countries/regions since January 2020.
- The ```confirmed``` measurement records the total number of confirmed COVID-19 cases since January 2020. We will convert it into incremental data.
- The ```deaths``` measurement records the total number of patients who died due to infection with COVID-19 since January 2020. We will convert it into incremental data.
- The ```stringency``` measurement means the Stringency Index, which is a float number from 0 to 100 that reflects how strict a country’s measures were, including lockdown, school closures, travel bans, etc. A higher score indicates a stricter response (i.e. 100 = strictest response).
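The conversion from cumulative to incremental counts (done below with `sum_df - last_df`) is equivalent to a first difference. On a toy cumulative series with made-up values:

```
import pandas as pd

cum = pd.Series([0, 3, 10, 10, 18], name='confirmed')  # toy cumulative counts
daily = cum.diff().fillna(cum)   # first day keeps its cumulative value
# daily: 0, 3, 7, 0, 8
```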
```
def get_covid_data_from_oxford_covid_tracker(data_type='carbon_global'):
data = json.loads(urlopen("https://covidtrackerapi.bsg.ox.ac.uk/api/v2/stringency/date-range/{}/{}".format(
"2020-01-22",
datetime.datetime.now().strftime("%Y-%m-%d")
)).read().decode('utf-8-sig'))
country_region_df = pd.read_csv('static_data/CountryRegion.csv')
code_to_name = {code: name for _, (name, code) in country_region_df.loc[:, ['Name', 'Code']].iterrows()}
    last_df = 0
    frames = []
    for date in sorted(data['data'].keys()):
        sum_df = pd.DataFrame({name: data['data'][date][code] for code, name in code_to_name.items() if code in data['data'][date]})
        sum_df = sum_df.T[['confirmed', 'deaths', 'stringency']].fillna(last_df).astype(np.float32)
        tmp_df = sum_df - last_df
        last_df = sum_df[['confirmed', 'deaths']]
        last_df['stringency'] = 0
        tmp_df = tmp_df.reset_index().rename(columns={'index': 'country_region'})
        tmp_df['date'] = pd.to_datetime(date)
        frames.append(tmp_df)
    # DataFrame.append was removed in pandas 2.0; collect frames and concat once
    return pd.concat(frames, ignore_index=True)
```
## <a id="b2"></a> 2. Visualize COVID-19 Data and Carbon Emission Data
This part will guide you to create a line-column chart, where you can view the specified COVID-19 measurement (```confirmed```, ```deaths``` or ```stringency```) and carbon emissions in the specified country/region throughout time.
```
def plot_covid_data_vs_carbon_emission_data(
covid_df, carbon_df, start_date=None, end_date=None,
country_region=None, sector=None, covid_measurement='confirmed',
title='Carbon Emission v.s. COVID-19 Confirmed Cases'
):
if start_date is None:
start_date = max(covid_df['date'].min(), carbon_df['date'].min())
if end_date is None:
end_date = min(covid_df['date'].max(), carbon_df['date'].max())
x = pd.to_datetime(start_date)
dates = [x]
while x <= pd.to_datetime(end_date):
x = x.replace(year=x.year+1, month=1) if x.month == 12 else x.replace(month=x.month+1)
dates.append(x)
dates = [f'{x.year}-{x.month}' for x in dates]
plt.figure(figsize=(10, 6))
plt.title(title)
plt.xticks(rotation=60)
if sector in set(carbon_df['sector']):
carbon_df = carbon_df[carbon_df['sector'] == sector]
else:
sector = 'All Sectors'
if 'country_region' not in carbon_df.columns:
raise ValueError('The carbon emission data need to be disaggregated by countries/regions.')
if country_region in set(carbon_df['country_region']):
carbon_df = carbon_df.loc[carbon_df['country_region'] == country_region]
else:
country_region = 'World'
carbon_df = carbon_df[['date', 'value']]
carbon_df = carbon_df.loc[(carbon_df['date'] >= f'{dates[0]}-01') & (carbon_df['date'] < f'{dates[-1]}-01')].set_index('date')
carbon_df = carbon_df.groupby(carbon_df.index.year * 12 + carbon_df.index.month).sum()
plt.bar(dates[:-1], carbon_df['value'], color='C1')
plt.ylim(0)
plt.legend([f'{country_region} {sector}\nCarbon Emission / Mt CO2'], loc='upper left')
plt.twinx()
if country_region in set(covid_df['country_region']):
covid_df = covid_df.loc[covid_df['country_region'] == country_region]
covid_df = covid_df[['date', covid_measurement]]
covid_df = covid_df.loc[(covid_df['date'] >= f'{dates[0]}-01') & (covid_df['date'] < f'{dates[-1]}-01')].set_index('date')
covid_df = covid_df.groupby(covid_df.index.year * 12 + covid_df.index.month)
covid_df = covid_df.mean() if covid_measurement == 'stringency' else covid_df.sum()
plt.plot(dates[:-1], covid_df[covid_measurement])
plt.ylim(0, 100 if covid_measurement == 'stringency' else None)
plt.legend([f'COVID-19\n{covid_measurement}'], loc='upper right')
```
## <a id="b3"></a> 3. Examples
```
print(f'Download COVID-19 data')
covid_df = get_covid_data_from_oxford_covid_tracker(data_type)
export_data('covid_data.csv', covid_df)
covid_df
plot_covid_data_vs_carbon_emission_data(
covid_df,
global_df,
start_date=None,
end_date=None,
country_region='United States',
sector=None,
covid_measurement='confirmed',
title = 'US Carbon Emission v.s. COVID-19 Confirmed Cases'
)
plot_covid_data_vs_carbon_emission_data(
covid_df,
global_df,
start_date=None,
end_date=None,
country_region='United States',
sector=None,
covid_measurement='stringency',
title = 'US Carbon Emission v.s. COVID-19 Stringency Index'
)
```
# C. Co-Analysis of Historical Carbon Emission Data v.s. Population & GDP Data
This section illustrates how to compare the carbon intensity and per capita emissions of different countries/regions. From [the EDGAR dataset](https://edgar.jrc.ec.europa.eu/dataset_ghg60) and [World Bank Open Data](https://data.worldbank.org/), carbon emission, population and GDP data of countries/regions worldwide from 1970 to 2018 are available.
Overview:
- [Process carbon emission & social economy data](#c1)
- [Download data from EDGAR](#c11)
- [Download data from World Bank](#c12)
- [Merge datasets](#c13)
- [Visualize carbon emission & social economy data](#c2)
- [See how per capita emissions change over time in different countries/regions](#c21)
- [Observe how *carbon intensity* reduced over time](#c22)
- [Example: relationships of carbon emission and social economy in huge countries](#c3)
*Carbon intensity* is the measure of CO2 produced per US dollar of GDP. In other words, it's a measure of how much CO2 we emit when we generate one dollar of domestic economy. A rapidly decreasing carbon intensity is beneficial for both the environment and the economy.
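As a toy calculation of this ratio (the numbers below are made up, not real country figures):

```
emissions_t = 5.0e9    # hypothetical annual emissions, tonnes of CO2
gdp_usd = 2.0e13       # hypothetical GDP, current USD
intensity = emissions_t / gdp_usd              # tCO2 per dollar of GDP
print(f'{intensity * 1000:.2f} kg CO2 per USD')  # prints "0.25 kg CO2 per USD"
```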
```
import zipfile
```
## <a id="c1"></a> 1. Process Carbon Emission & Social Economy Data
### <a id="c11"></a> 1.1. Download 1970-2018 yearly carbon emission data from the EDGAR dataset
```
def get_historical_carbon_emission_data_from_edgar():
if not os.path.exists('download_data'):
os.mkdir('download_data')
site = 'https://cidportal.jrc.ec.europa.eu/ftp/jrc-opendata/EDGAR/datasets'
dataset = 'v60_GHG/CO2_excl_short-cycle_org_C/v60_GHG_CO2_excl_short-cycle_org_C_1970_2018.zip'
with open('download_data/historical_carbon_emission.zip', 'wb') as f:
f.write(urlopen(f'{site}/{dataset}').read())
with zipfile.ZipFile('download_data/historical_carbon_emission.zip', 'r') as zip_ref:
zip_ref.extractall('download_data/historical_carbon_emission')
hist_carbon_df = pd.read_excel(
'download_data/historical_carbon_emission/v60_CO2_excl_short-cycle_org_C_1970_2018.xls',
sheet_name='TOTALS BY COUNTRY',
index_col=2,
header=9,
).iloc[:, 4:]
hist_carbon_df.columns = hist_carbon_df.columns.map(lambda x: pd.to_datetime(f'{x[-4:]}-01-01'))
hist_carbon_df.index = hist_carbon_df.index.rename('country_region')
hist_carbon_df *= 1000
return hist_carbon_df
```
### <a id="c12"></a> 1.2. Download 1960-present yearly population and GDP data from World Bank
```
def read_worldbank_data(data_id):
tmp_df = pd.read_excel(
f'https://api.worldbank.org/v2/en/indicator/{data_id}?downloadformat=excel',
sheet_name='Data',
index_col=1,
header=3,
).iloc[:, 3:]
tmp_df.columns = tmp_df.columns.map(lambda x: pd.to_datetime(x, format='%Y'))
tmp_df.index = tmp_df.index.rename('country_region')
return tmp_df
def get_population_and_gdp_data_from_worldbank():
return read_worldbank_data('SP.POP.TOTL'), read_worldbank_data('NY.GDP.MKTP.CD')
```
### <a id="c13"></a> 1.3. Merge the three datasets
```
def melt_table_by_years(df, value_name, country_region_codes, code_to_name, years):
return df.loc[country_region_codes, years].rename(index=code_to_name).reset_index().melt(
id_vars=['country_region'],
value_vars=years,
var_name='date',
value_name=value_name
)
def merge_historical_data(hist_carbon_df, pop_df, gdp_df):
country_region_df = pd.read_csv('static_data/CountryRegion.csv')
code_to_name = {code: name for _, (name, code) in country_region_df.loc[:, ['Name', 'Code']].iterrows()}
country_region_codes = sorted(set(pop_df.index) & set(gdp_df.index) & set(hist_carbon_df.index) & set(code_to_name.keys()))
years = sorted(set(pop_df.columns) & set(gdp_df.columns) & set(hist_carbon_df.columns))
pop_df = melt_table_by_years(pop_df, 'population', country_region_codes, code_to_name, years)
gdp_df = melt_table_by_years(gdp_df, 'gdp', country_region_codes, code_to_name, years)
hist_carbon_df = melt_table_by_years(hist_carbon_df, 'carbon_emission', country_region_codes, code_to_name, years)
hist_carbon_df['population'] = pop_df['population']
hist_carbon_df['gdp'] = gdp_df['gdp']
return hist_carbon_df.fillna(0)
```
## <a id="c2"></a> 2. Visualize Carbon Emission & Social Economy Data
## <a id="c21"></a> 2.1. Plot changes in per capita emissions
We now will walk you through how to plot a bubble chart of per capita GDP and per capita emissions of different countries/regions for a given year.
```
def plot_carbon_emission_data_vs_gdp(df, year=None, countries_regions=None, title='Carbon Emission per Capita v.s. GDP per Capita'):
if year is None:
date = df['date'].max()
else:
date = min(max(pd.to_datetime(year, format='%Y'), df['date'].min()), df['date'].max())
df = df[df['date'] == date]
if countries_regions is None or isinstance(countries_regions, int):
country_region_list = list(set(df['country_region']))
country_region_list.sort(key=lambda country_region: -df.loc[df['country_region'] == country_region, 'population'].to_numpy())
countries_regions = country_region_list[:10 if countries_regions is None else countries_regions]
plt.figure(figsize=(10, 6))
plt.title(title)
max_pop = df['population'].max()
for country_region in countries_regions:
row = df.loc[df['country_region'] == country_region]
plt.scatter(
x=row['gdp'] / row['population'],
y=row['carbon_emission'] / row['population'],
s=row['population'] / max_pop * 1000,
)
for lgnd in plt.legend(countries_regions).legendHandles:
lgnd._sizes = [50]
plt.xlabel('GDP per Capita (USD)')
plt.ylabel('Carbon Emission per Capita (tCO2)')
```
## <a id="c22"></a> 2.2. Plot changes in carbon intensity
To see how the carbon intensity of different countries changes over time, let's plot a line chart.
```
def plot_carbon_indensity_data(df, start_year=None, end_year=None, countries_regions=None, title='Carbon Intensity'):
start_date = df['date'].min() if start_year is None else pd.to_datetime(start_year, format='%Y')
end_date = df['date'].max() if end_year is None else pd.to_datetime(end_year, format='%Y')
df = df[(df['date'] >= start_date) & (df['date'] <= end_date)]
if countries_regions is None or isinstance(countries_regions, int):
country_region_list = list(set(df['country_region']))
country_region_list.sort(key=lambda country_region: -df.loc[df['country_region'] == country_region, 'population'].sum())
countries_regions = country_region_list[:3 if countries_regions is None else countries_regions]
df = pd.concat([df[df['country_region'] == country_region] for country_region in countries_regions])
df['carbon_indensity'] = df['carbon_emission'] / df['gdp']
indensity_df = df.pivot(index='date', columns='country_region', values='carbon_indensity')[countries_regions]
emission_df = df.pivot(index='date', columns='country_region', values='carbon_emission')[countries_regions]
plt.figure(figsize=(10, 8))
plt.subplot(211)
plt.title(title)
plt.plot(indensity_df)
plt.legend(countries_regions)
plt.ylabel('Carbon Emission (tCO2) per Dollar GDP')
plt.subplot(212)
plt.plot(emission_df)
plt.legend(countries_regions)
plt.ylabel('Carbon Emission (tCO2)')
```
## <a id="c3"></a> 3. Examples
```
print('Download historical carbon emission data')
hist_carbon_df = get_historical_carbon_emission_data_from_edgar()
print('Download population & GDP data')
pop_df, gdp_df = get_population_and_gdp_data_from_worldbank()
print('Merge data')
hist_carbon_df = merge_historical_data(hist_carbon_df, pop_df, gdp_df)
export_data('historical_carbon_emission_data.csv', hist_carbon_df)
hist_carbon_df
plot_carbon_emission_data_vs_gdp(
hist_carbon_df,
year=2018,
countries_regions=10,
title='Carbon Emission per Capita vs. GDP per Capita, Top 10 Populous Countries/Regions, 2018'
)
plot_carbon_indensity_data(
hist_carbon_df,
start_year=None,
end_year=None,
countries_regions=['United States', 'China'],
title='Carbon Intensity & Carbon Emission, US vs. China, 1970-2018'
)
```
```
from sklearn.model_selection import train_test_split
import pandas as pd
import tensorflow as tf
import tensorflow_hub as hub
from datetime import datetime
import bert
from bert import run_classifier
from bert import optimization
from bert import tokenization
from tensorflow import keras
import os
import re
# Set the output directory for saving model file
# Optionally, set a GCP bucket location
OUTPUT_DIR = '../models'
DO_DELETE = False
USE_BUCKET = False
BUCKET = 'BUCKET_NAME'
if USE_BUCKET:
OUTPUT_DIR = 'gs://{}/{}'.format(BUCKET, OUTPUT_DIR)
from google.colab import auth
auth.authenticate_user()
if DO_DELETE:
try:
tf.gfile.DeleteRecursively(OUTPUT_DIR)
except:
pass
tf.gfile.MakeDirs(OUTPUT_DIR)
print('***** Model output directory: {} *****'.format(OUTPUT_DIR))
# Load all files from a directory in a DataFrame.
def load_directory_data(directory):
data = {}
data["sentence"] = []
data["sentiment"] = []
for file_path in os.listdir(directory):
with tf.gfile.GFile(os.path.join(directory, file_path), "r") as f:
data["sentence"].append(f.read())
data["sentiment"].append(re.match(r"\d+_(\d+)\.txt", file_path).group(1))
return pd.DataFrame.from_dict(data)
# Merge positive and negative examples, add a polarity column and shuffle.
def load_dataset(directory):
pos_df = load_directory_data(os.path.join(directory, "pos"))
neg_df = load_directory_data(os.path.join(directory, "neg"))
pos_df["polarity"] = 1
neg_df["polarity"] = 0
return pd.concat([pos_df, neg_df]).sample(frac=1).reset_index(drop=True)
train = load_dataset(os.path.join("../data/", "aclImdb", "train"))
test = load_dataset(os.path.join("../data/", "aclImdb", "test"))
train = train.sample(5000)
test = test.sample(5000)
DATA_COLUMN = 'sentence'
LABEL_COLUMN = 'polarity'
label_list = [0, 1]
# Use the InputExample class from BERT's run_classifier code to create examples from the data
train_InputExamples = train.apply(lambda x: bert.run_classifier.InputExample(guid=None,
text_a = x[DATA_COLUMN],
text_b = None,
label = x[LABEL_COLUMN]), axis = 1)
test_InputExamples = test.apply(lambda x: bert.run_classifier.InputExample(guid=None,
text_a = x[DATA_COLUMN],
text_b = None,
label = x[LABEL_COLUMN]), axis = 1)
# This is a path to an uncased (all lowercase) version of BERT
BERT_MODEL_HUB = "https://tfhub.dev/google/bert_uncased_L-12_H-768_A-12/1"
def create_tokenizer_from_hub_module():
"""Get the vocab file and casing info from the Hub module."""
with tf.Graph().as_default():
bert_module = hub.Module(BERT_MODEL_HUB)
tokenization_info = bert_module(signature="tokenization_info", as_dict=True)
with tf.Session() as sess:
vocab_file, do_lower_case = sess.run([tokenization_info["vocab_file"],
tokenization_info["do_lower_case"]])
return bert.tokenization.FullTokenizer(vocab_file=vocab_file, do_lower_case=do_lower_case)
tokenizer = create_tokenizer_from_hub_module()
tokenizer.tokenize("This here's an example of using the BERT tokenizer")
# We'll set sequences to be at most 128 tokens long.
MAX_SEQ_LENGTH = 128
# Convert our train and test features to InputFeatures that BERT understands.
train_features = bert.run_classifier.convert_examples_to_features(train_InputExamples,
label_list,
MAX_SEQ_LENGTH,
tokenizer)
test_features = bert.run_classifier.convert_examples_to_features(test_InputExamples,
label_list,
MAX_SEQ_LENGTH,
tokenizer)
```
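`convert_examples_to_features` tokenizes each example, then pads or truncates it to `MAX_SEQ_LENGTH` and builds an attention mask alongside. The padding logic can be sketched in plain Python (the token ids below are illustrative, not real BERT vocabulary entries):

```python
MAX_SEQ_LENGTH = 8  # shortened for illustration; the notebook uses 128

def pad_or_truncate(token_ids, max_len):
    """Trim to max_len and build the attention mask (1 = real token, 0 = padding)."""
    ids = token_ids[:max_len]
    mask = [1] * len(ids)
    while len(ids) < max_len:
        ids.append(0)   # 0 is the [PAD] id in BERT vocabularies
        mask.append(0)
    return ids, mask

ids, mask = pad_or_truncate([101, 2023, 2003, 102], MAX_SEQ_LENGTH)
print(ids)   # [101, 2023, 2003, 102, 0, 0, 0, 0]
print(mask)  # [1, 1, 1, 1, 0, 0, 0, 0]
```

The real `InputFeatures` objects also carry segment ids and the label, but the fixed-length id/mask pair above is the core of what the model consumes.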
<a href="https://cognitiveclass.ai"><img src = "https://ibm.box.com/shared/static/9gegpsmnsoo25ikkbl4qzlvlyjbgxs5x.png" width = 400> </a>
<h1 align=center><font size = 5>From Requirements to Collection</font></h1>
## Introduction
In this lab, we will continue learning about the data science methodology, and focus on the **Data Requirements** and the **Data Collection** stages.
## Table of Contents
<div class="alert alert-block alert-info" style="margin-top: 20px">
1. [Data Requirements](#0)<br>
2. [Data Collection](#2)<br>
</div>
<hr>
# Data Requirements <a id="0"></a>
<img src="https://s3-api.us-geo.objectstorage.softlayer.net/cf-courses-data/CognitiveClass/DS0103EN/labs/images/lab2_fig1_flowchart_data_requirements.png" width=500>
In the videos, we learned that the chosen analytic approach determines the data requirements. Specifically, the analytic methods to be used require certain data content, formats and representations, guided by domain knowledge.
In the **From Problem to Approach Lab**, we determined that automating the process of determining the cuisine of a given recipe or dish is potentially possible using the ingredients of the recipe or the dish. In order to build a model, we need extensive data of different cuisines and recipes.
Identifying the required data fulfills the data requirements stage of the data science methodology.
-----------
# Data Collection <a id="2"></a>
<img src = "https://s3-api.us-geo.objectstorage.softlayer.net/cf-courses-data/CognitiveClass/DS0103EN/labs/images/lab2_fig2_flowchart_data_collection.png" width=500>
In the initial data collection stage, data scientists identify and gather the available data resources. These can be in the form of structured, unstructured, and even semi-structured data relevant to the problem domain.
#### Web Scraping of Online Food Recipes
A researcher named Yong-Yeol Ahn scraped tens of thousands of food recipes (cuisines and ingredients) from three different websites, namely:
<img src = "https://s3-api.us-geo.objectstorage.softlayer.net/cf-courses-data/CognitiveClass/DS0103EN/labs/images/lab2_fig3_allrecipes.png" width=500>
www.allrecipes.com
<img src = "https://s3-api.us-geo.objectstorage.softlayer.net/cf-courses-data/CognitiveClass/DS0103EN/labs/images/lab2_fig4_epicurious.png" width=500>
www.epicurious.com
<img src = "https://s3-api.us-geo.objectstorage.softlayer.net/cf-courses-data/CognitiveClass/DS0103EN/labs/images/lab2_fig5_menupan.png" width=500>
www.menupan.com
For more information on Yong-Yeol Ahn and his research, you can read his paper on [Flavor Network and the Principles of Food Pairing](http://yongyeol.com/papers/ahn-flavornet-2011.pdf).
Luckily, we will not need to carry out any data collection as the data that we need to meet the goal defined in the business understanding stage is readily available.
#### We have already acquired the data and placed it on an IBM server. Let's download the data and take a look at it.
<strong>Important note:</strong> Please note that you are not expected to know how to program in Python. The following code is meant to illustrate the stage of data collection, so it is totally fine if you do not understand the individual lines of code. We have a full course on programming in Python, <a href="http://cocl.us/PY0101EN_DS0103EN_LAB2_PYTHON_Coursera"><strong>Python for Data Science</strong></a>, which is also offered on Coursera. So make sure to complete the Python course if you are interested in learning how to program in Python.
### Using this notebook:
To run any of the following cells of code, you can type **Shift + Enter** to execute the code in a cell.
Get the version of Python installed.
```
# check Python version
!python -V
```
Read the data from the IBM server into a *pandas* dataframe.
```
import pandas as pd # download library to read data into dataframe
pd.set_option('display.max_columns', None)
recipes = pd.read_csv("https://s3-api.us-geo.objectstorage.softlayer.net/cf-courses-data/CognitiveClass/DS0103EN/labs/data/recipes.csv")
print("Data read into dataframe!") # takes about 30 seconds
```
Show the first few rows.
```
recipes.head()
```
Get the dimensions of the dataframe.
```
recipes.shape
```
So our dataset consists of 57,691 recipes. Each row represents a recipe, and for each recipe the corresponding cuisine is documented, as well as whether each of 384 ingredients (beginning with almond and ending with zucchini) appears in it.
-----------
Now that the data collection stage is complete, data scientists typically use descriptive statistics and visualization techniques to better understand the data and get acquainted with it. Data scientists, essentially, explore the data to:
* understand its content,
* assess its quality,
* discover any interesting preliminary insights, and,
* determine whether additional data is necessary to fill any gaps in the data.
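As a sketch of that exploration step, here is what it might look like on a toy frame standing in for the real `recipes` data (the columns below are hypothetical miniatures of the actual dataset):

```python
import pandas as pd

# Hypothetical miniature of the recipes table: cuisine + binary ingredient flags
toy = pd.DataFrame({
    'cuisine':  ['italian', 'italian', 'korean'],
    'almond':   [0, 1, 0],
    'zucchini': [1, 0, 0],
})

print(toy['cuisine'].value_counts())       # how many recipes per cuisine
print(toy.drop(columns='cuisine').sum())   # how often each ingredient appears
```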
### Thank you for completing this lab!
This notebook was created by [Alex Aklson](https://www.linkedin.com/in/aklson/). I hope you found this lab session interesting. Feel free to contact me if you have any questions!
This notebook is part of a course on **Coursera** called *Data Science Methodology*. If you accessed this notebook outside the course, you can take this course, online by clicking [here](http://cocl.us/DS0103EN_Coursera_LAB2).
<hr>
Copyright © 2019 [Cognitive Class](https://cognitiveclass.ai/?utm_source=bducopyrightlink&utm_medium=dswb&utm_campaign=bdu). This notebook and its source code are released under the terms of the [MIT License](https://bigdatauniversity.com/mit-license/).
# Lists
### A list is similar to an array in C++, but it can also store multiple data types at the same time
```
# creating lists
a = [1,2,3]
print(type(a))
a1 = list()
print(a1)
a2 = list(a)
print(a2)
a4 = [ i for i in range(10)] ## list comprehension: collect i for each i in range(10), i.e. 0 to 9
print(a4)
a5 = [ i*i for i in range(10)]
print(a5)
a6 = [1,2,"as",True]
print(a6)
#how to access data
print(a[1])
print(a[-1])
# len of array
print("len : ",len(a))
```
## 1) slicing and fast iteration in lists
```
# slicing of array
print(a[1:2])
# fast iteration
for i in a :
print(i)
```
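Slicing also accepts an optional step, which the example above does not show; a quick sketch:

```python
a = [10, 20, 30, 40, 50]
print(a[1:4])    # [20, 30, 40] -> indices 1, 2, 3 (the stop index is excluded)
print(a[:3])     # [10, 20, 30] -> an omitted start defaults to 0
print(a[::2])    # [10, 30, 50] -> every second element
print(a[::-1])   # [50, 40, 30, 20, 10] -> a reversed copy of the list
```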
## 2) string splitting
```
## split() returns a list after splitting the string on the given separator
str = " a abc d ef ghf "  # note: `str` shadows the built-in type name; avoid this in real code
print(str.split(" "))
str = " a df d dsds "
print(str.split())
str = "a,bcd,dsd,dwd"
print(str.split(","))
```
## 3) user input
```
########## 1st method ###########
list1 = input().strip().split()
print(list1)
for i in range(len(list1)):
list1[i] = int(list1[i])
print(list1)
print()
########## 2nd method ###########
list = input().split()  # note: this shadows the built-in `list` name
print(list)
for i in range(len(list)):
list[i] = int(list[i])
print(list)
print()
########## 3rd method ###########
### one line for taking array input
arr = [int(x) for x in input().split()]
print(arr)
```
## 4) add elements in a lists ( append , insert , extend )
```
l = [1,2,3]
l.append(9)
print(l)
l.insert(1,243)
print(l)
l2 = [2,3,4]
l.extend(l2)
print(l)
```
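The key difference between `append` and `extend` shows up when the argument is itself a list; a short sketch:

```python
l = [1, 2]
l.append([3, 4])   # appends the list itself as a single element
print(l)           # [1, 2, [3, 4]]

l = [1, 2]
l.extend([3, 4])   # appends each element of the argument individually
print(l)           # [1, 2, 3, 4]
```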
## 5) deleting elements ( pop , remove , del )
```
print(l)
l.pop() ## without an argument, removes and returns the last element
print(l)
l.pop(2) ## remove the element at index 2
print(l)
l.remove(2) ## remove the first occurrence of the value 2
print(l)
del l[0:2] ## delete this slice from the list
print(l)
```
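Note that `pop` does not just delete: it returns the removed element, which is handy when you need the value. A small sketch:

```python
l = [10, 20, 30]
last = l.pop()     # pop() returns the removed element
print(last)        # 30
print(l)           # [10, 20]
second = l.pop(0)  # remove by index and capture the value
print(second)      # 10
```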
## 6) concatenation of two lists
```
l = [1,2,3]
l = l + l
# l = l - l ## error: lists do not support the - operator
print(l)
l*=3
print(l)
```
## 7) some useful built-in functions (sort, count, index, in, reverse, max, min, len)
```
#sorting
l = [2,5,1,3,64,13,0,1]
l.sort()
print(l)
print(len(l))
# count , index , reverse
print(l.count(1)) ## count occurrences of the value 1 in the list
print(l.index(64)) ## find the index of an element
l.reverse() ## reverse the list in place
print(l)
if 64 in l: ## membership test: checks whether the element is present
print("found")
else:
print("not_found")
print(max(l)) ## find the maximum element in the list
print(min(l)) ## find the minimum element in the list
```
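`l.sort()` sorts in place, but the built-in `sorted()` returns a new list and leaves the original untouched; both accept `key` and `reverse` arguments. A quick sketch:

```python
l = [2, 5, 1, 3]
print(sorted(l))               # [1, 2, 3, 5] -> a new sorted list
print(l)                       # [2, 5, 1, 3] -> l itself is unchanged
words = ['bb', 'a', 'ccc']
print(sorted(words, key=len))  # ['a', 'bb', 'ccc'] -> sort by a key function
print(sorted(l, reverse=True)) # [5, 3, 2, 1] -> descending order
```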
## 8) bubble sort
```
## bubble sort
l = [int(x) for x in input().split()]
print(l)
n = len(l)
for j in range(n-1):
for i in range(0,n-1-j):
if(l[i] > l[i+1]):
l[i],l[i+1] = l[i+1],l[i]
print(l)
```
# Dictionaries
### A dictionary is similar to a `map` in C++, but a single dictionary can store different types of keys and values. In C++ you must declare a separate map for each key/value type — e.g. `map<pair<int,int>, int> mp` always stores a pair as the key and an int as the value — whereas in Python one dict can hold anything. Keys must be immutable, so a key can be a string, int, or float, but not a list.
```
d = {}
d[23] = 34
d[3 , 4] = 32
d["str"] = 31
print(d[23])
print(type(d[23]))
d = {23 : 34 , "str" : 31}
print(d)
### fast iteration on dictionaries
print("\n" , "traversing on map")
for i in d: ### unlike C++, where iteration yields a (key, value) pair, here `i` is just the key
print(i , ":" , d[i])
print("over")
### delete elements
del d[23]
print(d)
# common functions in dictionaries
d1 = {}
d1[1] = 1
d2 = {}
d2[2] = 3
print(d1 == d2)
print(len(d1))
d1.clear()
print(d1)
print(d2.keys()) ## behave like a list
print(d2.values()) ## behave like a list
print(23 in d2) ## is this key is present or not
```
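Two more dictionary idioms worth knowing: `get` avoids a `KeyError` for missing keys, and `items()` iterates key and value together. A short sketch:

```python
d = {'a': 1, 'b': 2}
print(d.get('c'))        # None -> no KeyError, unlike d['c']
print(d.get('c', 0))     # 0 -> a default value for missing keys
for k, v in d.items():   # iterate key and value together
    print(k, v)
```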
```
# default_exp data.unwindowed
```
# Unwindowed datasets
> This functionality will allow you to create a dataset that applies sliding windows to the input data on the fly. This heavily reduces the size of the input data files, as only the original, unwindowed data needs to be stored.
```
#export
from tsai.imports import *
from tsai.utils import *
from tsai.data.validation import *
from tsai.data.core import *
#export
class TSUnwindowedDataset():
_types = TSTensor, TSLabelTensor
def __init__(self, X, y=None, y_func=None, window_size=1, stride=1, drop_start=0, drop_end=0, seq_first=True, **kwargs):
store_attr()
if X.ndim == 1: X = np.expand_dims(X, 1)
shape = X.shape
assert len(shape) == 2
if seq_first:
seq_len = shape[0]
else:
seq_len = shape[-1]
max_time = seq_len - window_size + 1 - drop_end
assert max_time > 0, 'you need to modify either window_size or drop_end as they are larger than seq_len'
self.all_idxs = np.expand_dims(np.arange(drop_start, max_time, step=stride), 0).T
self.window_idxs = np.expand_dims(np.arange(window_size), 0)
if 'split' in kwargs: self.split = kwargs['split']
else: self.split = None
self.n_inp = 1
if y is None: self.loss_func = MSELossFlat()
else:
_,yb=self[:2]
if (is_listy(yb[0]) and isinstance(yb[0][0], Integral)) or isinstance(yb[0], Integral): self.loss_func = CrossEntropyLossFlat()
else: self.loss_func = MSELossFlat()
def __len__(self):
if self.split is not None:
return len(self.split)
else:
return len(self.all_idxs)
def __getitem__(self, idxs):
if self.split is not None:
idxs = self.split[idxs]
widxs = self.all_idxs[idxs] + self.window_idxs
if self.seq_first:
xb = self.X[widxs]
if xb.ndim == 3: xb = xb.transpose(0,2,1)
else: xb = np.expand_dims(xb, 1)
else:
xb = self.X[:, widxs].transpose(1,0,2)
if self.y is None:
return (self._types[0](xb),)
else:
yb = self.y[widxs]
if self.y_func is not None:
yb = self.y_func(yb)
return (self._types[0](xb), self._types[1](yb))
@property
def vars(self):
s = self[0][0] if not isinstance(self[0][0], tuple) else self[0][0][0]
return s.shape[-2]
@property
def len(self):
s = self[0][0] if not isinstance(self[0][0], tuple) else self[0][0][0]
return s.shape[-1]
class TSUnwindowedDatasets(FilteredBase):
def __init__(self, dataset, splits):
store_attr()
def subset(self, i):
return type(self.dataset)(self.dataset.X, y=self.dataset.y, y_func=self.dataset.y_func, window_size=self.dataset.window_size,
stride=self.dataset.stride, drop_start=self.dataset.drop_start, drop_end=self.dataset.drop_end,
seq_first=self.dataset.seq_first, split=self.splits[i])
@property
def train(self):
return self.subset(0)
@property
def valid(self):
return self.subset(1)
def __getitem__(self, i): return self.subset(i)
def y_func(y): return y.astype('float').mean(1)
```
This approach works with both univariate and multivariate data.
* Univariate: we'll use a simple array with 20 values, one with the seq_len first (X0), the other with seq_len second (X1).
* Multivariate: we'll use 2 time series arrays, one with the seq_len first (X2), the other with seq_len second (X3). No sliding window has been applied to them yet.
```
# Univariate
X0 = np.arange(20)
X1 = np.arange(20).reshape(1, -1)
X0.shape, X0, X1.shape, X1
# Multivariate
X2 = np.arange(20).reshape(-1,1)*np.array([1, 10, 100]).reshape(1,-1)
X3 = np.arange(20).reshape(1,-1)*np.array([1, 10, 100]).reshape(-1,1)
X2.shape, X3.shape, X2, X3
```
Now, instead of applying SlidingWindow to create and save the time series files that a model would consume, we can use a dataset that creates the samples on the fly. In this way we avoid the need to create and save large files. This approach is also useful when you want to test different sliding window sizes, as otherwise you would need to create files for every size you want to test. The dataset will create the samples correctly formatted and ready to be passed on to a time series architecture.
```
wds0 = TSUnwindowedDataset(X0, window_size=5, stride=2, seq_first=True)[:][0]
wds1 = TSUnwindowedDataset(X1, window_size=5, stride=2, seq_first=False)[:][0]
test_eq(wds0, wds1)
wds0, wds0.data, wds1, wds1.data
wds2 = TSUnwindowedDataset(X2, window_size=5, stride=2, seq_first=True)[:][0]
wds3 = TSUnwindowedDataset(X3, window_size=5, stride=2, seq_first=False)[:][0]
test_eq(wds2, wds3)
wds2, wds3, wds2.data, wds3.data
#hide
out = create_scripts(); beep(out)
```
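The trick `TSUnwindowedDataset` relies on is NumPy broadcasting: a column vector of window start indices plus a row vector of within-window offsets yields a 2-D index matrix, so every window is gathered in a single fancy-indexing call. A stripped-down sketch of that index arithmetic:

```python
import numpy as np

X = np.arange(20)
window_size, stride = 5, 2

starts = np.arange(0, len(X) - window_size + 1, stride).reshape(-1, 1)  # column vector
offsets = np.arange(window_size).reshape(1, -1)                         # row vector
widxs = starts + offsets   # broadcasting -> (n_windows, window_size) index matrix

windows = X[widxs]         # one fancy-indexing call gathers every window
print(windows.shape)       # (8, 5)
print(windows[0], windows[1])  # [0 1 2 3 4] [2 3 4 5 6]
```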
# Ungraded Lab: Build a Multi-output Model
In this lab, we'll show how you can build models with more than one output. The dataset we will be working on is available from the [UCI Machine Learning Repository](https://archive.ics.uci.edu/ml/datasets/Energy+efficiency). It is an Energy Efficiency dataset which uses the building features (e.g. wall area, roof area) as inputs and has two outputs: Cooling Load and Heating Load. Let's see how we can build a model to train on this data.
## Imports
```
try:
# %tensorflow_version only exists in Colab.
%tensorflow_version 2.x
except Exception:
pass
import tensorflow as tf
import numpy as np
import matplotlib.pyplot as plt
import pandas as pd
from tensorflow.keras.models import Model
from tensorflow.keras.layers import Dense, Input
from sklearn.model_selection import train_test_split
```
## Utilities
We define a few utilities for data conversion and visualization to make our code more neat.
```
def format_output(data):
y1 = data.pop('Y1')
y1 = np.array(y1)
y2 = data.pop('Y2')
y2 = np.array(y2)
return y1, y2
def norm(x):
return (x - train_stats['mean']) / train_stats['std']
def plot_diff(y_true, y_pred, title=''):
plt.scatter(y_true, y_pred)
plt.title(title)
plt.xlabel('True Values')
plt.ylabel('Predictions')
plt.axis('equal')
plt.axis('square')
plt.xlim(plt.xlim())
plt.ylim(plt.ylim())
plt.plot([-100, 100], [-100, 100])
plt.show()
def plot_metrics(metric_name, title, ylim=5):
plt.title(title)
plt.ylim(0, ylim)
plt.plot(history.history[metric_name], color='blue', label=metric_name)
plt.plot(history.history['val_' + metric_name], color='green', label='val_' + metric_name)
plt.show()
```
## Prepare the Data
We download the dataset and format it for training.
```
# Specify data URI
URI = 'local_data/ENB2012_data.xls'
# link for dataset excel: https://archive.ics.uci.edu/ml/machine-learning-databases/00242/ENB2012_data.xlsx
# Use pandas excel reader
df = pd.read_excel(URI)
# df.drop(columns=['Unnamed: 10', 'Unnamed: 11'], inplace=True)
df.head()
df = df.sample(frac=1).reset_index(drop=True)
# Split the data into train and test with 80 train / 20 test
train, test = train_test_split(df, test_size=0.2)
train_stats = train.describe()
# Get Y1 and Y2 as the 2 outputs and format them as np arrays
train_stats.pop('Y1')
train_stats.pop('Y2')
train_stats = train_stats.transpose()
train_Y = format_output(train)
test_Y = format_output(test)
# Normalize the training and test data
norm_train_X = norm(train)
norm_test_X = norm(test)
train
```
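The `norm` helper above is plain z-score normalization against the training statistics. A toy sketch of the same computation, using a hypothetical two-column frame in place of the real data:

```python
import pandas as pd

train = pd.DataFrame({'x1': [1.0, 2.0, 3.0], 'x2': [10.0, 20.0, 30.0]})
stats = train.describe().transpose()   # rows = feature columns, incl. 'mean' and 'std'

# Subtracting a Series aligns on column names, so each feature is standardized
normed = (train - stats['mean']) / stats['std']
print(normed['x1'].tolist())  # [-1.0, 0.0, 1.0]
```

Fitting the statistics on the training split only (as the lab does via `train_stats`) avoids leaking information from the test set into preprocessing.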
## Build the Model
Here is how we'll build the model using the functional syntax. Notice that we can specify a list of outputs (i.e. `[y1_output, y2_output]`) when we instantiate the `Model()` class.
```
# Define model layers.
input_layer = Input(shape=(len(train.columns),))
first_dense = Dense(units=128, activation='relu')(input_layer)
second_dense = Dense(units=128, activation='relu')(first_dense)
# Y1 output will be fed directly from the second dense
y1_output = Dense(units=1, name='y1_output')(second_dense)
third_dense = Dense(units=64, activation='relu')(second_dense)
# Y2 output will come via the third dense
y2_output = Dense(units=1, name='y2_output')(third_dense)
# Define the model with the input layer and a list of output layers
model = Model(inputs=input_layer, outputs=[y1_output, y2_output])
print(model.summary())
```
## Configure parameters
We specify the optimizer as well as the loss and metrics for each output.
```
# Specify the optimizer, and compile the model with loss functions for both outputs
optimizer = tf.keras.optimizers.SGD(learning_rate=0.001)
model.compile(optimizer=optimizer,
loss={'y1_output': 'mse', 'y2_output': 'mse'},
metrics={'y1_output': tf.keras.metrics.RootMeanSquaredError(),
'y2_output': tf.keras.metrics.RootMeanSquaredError()})
```
## Train the Model
```
# Train the model for 500 epochs
history = model.fit(norm_train_X, train_Y,
epochs=10, batch_size=10, validation_data=(norm_test_X, test_Y))
```
## Evaluate the Model and Plot Metrics
```
# Test the model and print loss and mse for both outputs
loss, Y1_loss, Y2_loss, Y1_rmse, Y2_rmse = model.evaluate(x=norm_test_X, y=test_Y)
print("Loss = {}, Y1_loss = {}, Y1_rmse = {}, Y2_loss = {}, Y2_rmse = {}".format(loss, Y1_loss, Y1_rmse, Y2_loss, Y2_rmse))
# Plot the predictions and RMSE for both outputs
Y_pred = model.predict(norm_test_X)
plot_diff(test_Y[0], Y_pred[0], title='Y1')
plot_diff(test_Y[1], Y_pred[1], title='Y2')
plot_metrics(metric_name='y1_output_root_mean_squared_error', title='Y1 RMSE', ylim=6)
plot_metrics(metric_name='y2_output_root_mean_squared_error', title='Y2 RMSE', ylim=7)
```
# Fine-tuning a Pretrained Network for Style Recognition
In this example, we'll explore a common approach that is particularly useful in real-world applications: take a pre-trained Caffe network and fine-tune the parameters on your custom data.
The advantage of this approach is that, since pre-trained networks are learned on a large set of images, the intermediate layers capture the "semantics" of the general visual appearance. Think of it as a very powerful generic visual feature that you can treat as a black box. On top of that, only a relatively small amount of data is needed for good performance on the target task.
First, we will need to prepare the data. This involves the following parts:
(1) Get the ImageNet ilsvrc pretrained model with the provided shell scripts.
(2) Download a subset of the overall Flickr style dataset for this demo.
(3) Compile the downloaded Flickr dataset into a database that Caffe can then consume.
```
caffe_root = '../' # this file should be run from {caffe_root}/examples (otherwise change this line)
import sys
sys.path.insert(0, caffe_root + 'python')
import caffe
caffe.set_device(0)
caffe.set_mode_gpu()
import numpy as np
from pylab import *
%matplotlib inline
import tempfile
# Helper function for deprocessing preprocessed images, e.g., for display.
def deprocess_net_image(image):
image = image.copy() # don't modify destructively
image = image[::-1] # BGR -> RGB
image = image.transpose(1, 2, 0) # CHW -> HWC
image += [123, 117, 104] # (approximately) undo mean subtraction
# clamp values in [0, 255]
image[image < 0], image[image > 255] = 0, 255
# round and cast from float32 to uint8
image = np.round(image)
image = np.require(image, dtype=np.uint8)
return image
```
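The deprocessing steps above can be exercised on a dummy array, with no Caffe required, to see what each transform does (the shapes and mean values below are just the ones assumed by `deprocess_net_image`):

```python
import numpy as np

# Dummy preprocessed image: 3 channels (BGR), 2x2 pixels, float32, mean-subtracted
image = np.zeros((3, 2, 2), dtype=np.float32)
image[0] -= 200.0                # push the B channel below 0 to exercise clamping

image = image[::-1]              # BGR -> RGB (reverse the channel axis)
image = image.transpose(1, 2, 0) # CHW -> HWC
image += [123, 117, 104]         # (approximately) undo mean subtraction, per RGB channel
image[image < 0], image[image > 255] = 0, 255   # clamp values to [0, 255]
image = np.require(np.round(image), dtype=np.uint8)

print(image.shape)   # (2, 2, 3)
print(image[0, 0])   # [123 117   0]
```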
### 1. Setup and dataset download
Download data required for this exercise.
- `get_ilsvrc_aux.sh` to download the ImageNet data mean, labels, etc.
- `download_model_binary.py` to download the pretrained reference model
- `finetune_flickr_style/assemble_data.py` downloads the style training and testing data
We'll download just a small subset of the full dataset for this exercise: just 2000 of the 80K images, from 5 of the 20 style categories. (To download the full dataset, set `full_dataset = True` in the cell below.)
```
# Download just a small subset of the data for this exercise.
# (2000 of 80K images, 5 of 20 labels.)
# To download the entire dataset, set `full_dataset = True`.
full_dataset = False
if full_dataset:
NUM_STYLE_IMAGES = NUM_STYLE_LABELS = -1
else:
NUM_STYLE_IMAGES = 2000
NUM_STYLE_LABELS = 5
# This downloads the ilsvrc auxiliary data (mean file, etc),
# and a subset of 2000 images for the style recognition task.
import os
os.chdir(caffe_root) # run scripts from caffe root
!data/ilsvrc12/get_ilsvrc_aux.sh
!scripts/download_model_binary.py models/bvlc_reference_caffenet
!python examples/finetune_flickr_style/assemble_data.py \
--workers=-1 --seed=1701 \
--images=$NUM_STYLE_IMAGES --label=$NUM_STYLE_LABELS
# back to examples
os.chdir('examples')
```
Define `weights`, the path to the ImageNet pretrained weights we just downloaded, and make sure it exists.
```
import os
weights = os.path.join(caffe_root, 'models/bvlc_reference_caffenet/bvlc_reference_caffenet.caffemodel')
assert os.path.exists(weights)
```
Load the 1000 ImageNet labels from `ilsvrc12/synset_words.txt`, and the 5 style labels from `finetune_flickr_style/style_names.txt`.
```
# Load ImageNet labels to imagenet_labels
imagenet_label_file = caffe_root + 'data/ilsvrc12/synset_words.txt'
imagenet_labels = list(np.loadtxt(imagenet_label_file, str, delimiter='\t'))
assert len(imagenet_labels) == 1000
print 'Loaded ImageNet labels:\n', '\n'.join(imagenet_labels[:10] + ['...'])
# Load style labels to style_labels
style_label_file = caffe_root + 'examples/finetune_flickr_style/style_names.txt'
style_labels = list(np.loadtxt(style_label_file, str, delimiter='\n'))
if NUM_STYLE_LABELS > 0:
style_labels = style_labels[:NUM_STYLE_LABELS]
print '\nLoaded style labels:\n', ', '.join(style_labels)
```
### 2. Defining and running the nets
We'll start by defining `caffenet`, a function which initializes the *CaffeNet* architecture (a minor variant on *AlexNet*), taking arguments specifying the data and number of output classes.
```
from caffe import layers as L
from caffe import params as P
weight_param = dict(lr_mult=1, decay_mult=1)
bias_param = dict(lr_mult=2, decay_mult=0)
learned_param = [weight_param, bias_param]
frozen_param = [dict(lr_mult=0)] * 2
def conv_relu(bottom, ks, nout, stride=1, pad=0, group=1,
param=learned_param,
weight_filler=dict(type='gaussian', std=0.01),
bias_filler=dict(type='constant', value=0.1)):
conv = L.Convolution(bottom, kernel_size=ks, stride=stride,
num_output=nout, pad=pad, group=group,
param=param, weight_filler=weight_filler,
bias_filler=bias_filler)
return conv, L.ReLU(conv, in_place=True)
def fc_relu(bottom, nout, param=learned_param,
weight_filler=dict(type='gaussian', std=0.005),
bias_filler=dict(type='constant', value=0.1)):
fc = L.InnerProduct(bottom, num_output=nout, param=param,
weight_filler=weight_filler,
bias_filler=bias_filler)
return fc, L.ReLU(fc, in_place=True)
def max_pool(bottom, ks, stride=1):
return L.Pooling(bottom, pool=P.Pooling.MAX, kernel_size=ks, stride=stride)
def caffenet(data, label=None, train=True, num_classes=1000,
classifier_name='fc8', learn_all=False):
"""Returns a NetSpec specifying CaffeNet, following the original proto text
specification (./models/bvlc_reference_caffenet/train_val.prototxt)."""
n = caffe.NetSpec()
n.data = data
param = learned_param if learn_all else frozen_param
n.conv1, n.relu1 = conv_relu(n.data, 11, 96, stride=4, param=param)
n.pool1 = max_pool(n.relu1, 3, stride=2)
n.norm1 = L.LRN(n.pool1, local_size=5, alpha=1e-4, beta=0.75)
n.conv2, n.relu2 = conv_relu(n.norm1, 5, 256, pad=2, group=2, param=param)
n.pool2 = max_pool(n.relu2, 3, stride=2)
n.norm2 = L.LRN(n.pool2, local_size=5, alpha=1e-4, beta=0.75)
n.conv3, n.relu3 = conv_relu(n.norm2, 3, 384, pad=1, param=param)
n.conv4, n.relu4 = conv_relu(n.relu3, 3, 384, pad=1, group=2, param=param)
n.conv5, n.relu5 = conv_relu(n.relu4, 3, 256, pad=1, group=2, param=param)
n.pool5 = max_pool(n.relu5, 3, stride=2)
n.fc6, n.relu6 = fc_relu(n.pool5, 4096, param=param)
if train:
n.drop6 = fc7input = L.Dropout(n.relu6, in_place=True)
else:
fc7input = n.relu6
n.fc7, n.relu7 = fc_relu(fc7input, 4096, param=param)
if train:
n.drop7 = fc8input = L.Dropout(n.relu7, in_place=True)
else:
fc8input = n.relu7
# always learn fc8 (param=learned_param)
fc8 = L.InnerProduct(fc8input, num_output=num_classes, param=learned_param)
# give fc8 the name specified by argument `classifier_name`
n.__setattr__(classifier_name, fc8)
if not train:
n.probs = L.Softmax(fc8)
if label is not None:
n.label = label
n.loss = L.SoftmaxWithLoss(fc8, n.label)
n.acc = L.Accuracy(fc8, n.label)
# write the net to a temporary file and return its filename
with tempfile.NamedTemporaryFile(delete=False) as f:
f.write(str(n.to_proto()))
return f.name
```
Now, let's create a *CaffeNet* that takes unlabeled "dummy data" as input, allowing us to set its input images externally and see what ImageNet classes it predicts.
```
dummy_data = L.DummyData(shape=dict(dim=[1, 3, 227, 227]))
imagenet_net_filename = caffenet(data=dummy_data, train=False)
imagenet_net = caffe.Net(imagenet_net_filename, weights, caffe.TEST)
```
Define a function `style_net` which calls `caffenet` on data from the Flickr style dataset.
The new network will also have the *CaffeNet* architecture, with differences in the input and output:
- the input is the Flickr style data we downloaded, provided by an `ImageData` layer
- the output is a distribution over 20 classes rather than the original 1000 ImageNet classes
- the classification layer is renamed from `fc8` to `fc8_flickr` to tell Caffe not to load the original classifier (`fc8`) weights from the ImageNet-pretrained model
```
def style_net(train=True, learn_all=False, subset=None):
if subset is None:
subset = 'train' if train else 'test'
source = caffe_root + 'data/flickr_style/%s.txt' % subset
transform_param = dict(mirror=train, crop_size=227,
mean_file=caffe_root + 'data/ilsvrc12/imagenet_mean.binaryproto')
style_data, style_label = L.ImageData(
transform_param=transform_param, source=source,
batch_size=50, new_height=256, new_width=256, ntop=2)
return caffenet(data=style_data, label=style_label, train=train,
num_classes=NUM_STYLE_LABELS,
classifier_name='fc8_flickr',
learn_all=learn_all)
```
Use the `style_net` function defined above to initialize `untrained_style_net`, a *CaffeNet* with input images from the style dataset and weights from the pretrained ImageNet model.
Call `forward` on `untrained_style_net` to get a batch of style training data.
```
untrained_style_net = caffe.Net(style_net(train=False, subset='train'),
weights, caffe.TEST)
untrained_style_net.forward()
style_data_batch = untrained_style_net.blobs['data'].data.copy()
style_label_batch = np.array(untrained_style_net.blobs['label'].data, dtype=np.int32)
```
Pick one of the style net training images from the batch of 50 (we'll arbitrarily choose #8 here). Display it, then run it through `imagenet_net`, the ImageNet-pretrained network, to view its top 5 predicted classes from the 1000 ImageNet classes.
Below we chose an image where the network's predictions happen to be reasonable, as the image is of a beach, and "sandbar" and "seashore" both happen to be ImageNet-1000 categories. For other images, the predictions won't be this good, sometimes due to the network actually failing to recognize the object(s) present in the image, but perhaps even more often due to the fact that not all images contain an object from the (somewhat arbitrarily chosen) 1000 ImageNet categories. Modify the `batch_index` variable by changing its default setting of 8 to another value from 0-49 (since the batch size is 50) to see predictions for other images in the batch. (To go beyond this batch of 50 images, first rerun the *above* cell to load a fresh batch of data into `style_net`.)
```
def disp_preds(net, image, labels, k=5, name='ImageNet'):
input_blob = net.blobs['data']
net.blobs['data'].data[0, ...] = image
probs = net.forward(start='conv1')['probs'][0]
top_k = (-probs).argsort()[:k]
print 'top %d predicted %s labels =' % (k, name)
print '\n'.join('\t(%d) %5.2f%% %s' % (i+1, 100*probs[p], labels[p])
for i, p in enumerate(top_k))
def disp_imagenet_preds(net, image):
disp_preds(net, image, imagenet_labels, name='ImageNet')
def disp_style_preds(net, image):
disp_preds(net, image, style_labels, name='style')
batch_index = 8
image = style_data_batch[batch_index]
plt.imshow(deprocess_net_image(image))
print 'actual label =', style_labels[style_label_batch[batch_index]]
disp_imagenet_preds(imagenet_net, image)
```
We can also look at `untrained_style_net`'s predictions, but we won't see anything interesting as its classifier hasn't been trained yet.
In fact, since we zero-initialized the classifier (see `caffenet` definition -- no `weight_filler` is passed to the final `InnerProduct` layer), the softmax inputs should be all zero and we should therefore see a predicted probability of 1/N for each label (for N labels). Since we set N = 5, we get a predicted probability of 20% for each class.
```
disp_style_preds(untrained_style_net, image)
```
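As a quick sanity check of the 1/N claim above, here is a small NumPy sketch (independent of Caffe) showing that a softmax over all-zero inputs yields a uniform distribution:

```python
import numpy as np

def softmax(z):
    # subtract the max for numerical stability
    e = np.exp(z - z.max())
    return e / e.sum()

# a zero-initialized classifier produces all-zero inputs to the softmax,
# so every one of the N = 5 classes gets probability 1/5 = 20%
probs = softmax(np.zeros(5))
print(probs)
```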
We can also verify that the activations in layer `fc7` immediately before the classification layer are the same as (or very close to) those in the ImageNet-pretrained model, since both models are using the same pretrained weights in the `conv1` through `fc7` layers.
```
diff = untrained_style_net.blobs['fc7'].data[0] - imagenet_net.blobs['fc7'].data[0]
error = (diff ** 2).sum()
assert error < 1e-8
```
Delete `untrained_style_net` to save memory. (Hang on to `imagenet_net` as we'll use it again later.)
```
del untrained_style_net
```
### 3. Training the style classifier
Now, we'll define a function `solver` to create our Caffe solvers, which are used to train the network (learn its weights). In this function we'll set values for various parameters used for learning, display, and "snapshotting" -- see the inline comments for explanations of what they mean. You may want to play with some of the learning parameters to see if you can improve on the results here!
```
from caffe.proto import caffe_pb2
def solver(train_net_path, test_net_path=None, base_lr=0.001):
s = caffe_pb2.SolverParameter()
# Specify locations of the train and (maybe) test networks.
s.train_net = train_net_path
if test_net_path is not None:
s.test_net.append(test_net_path)
s.test_interval = 1000 # Test after every 1000 training iterations.
s.test_iter.append(100) # Test on 100 batches each time we test.
# The number of iterations over which to average the gradient.
# Effectively boosts the training batch size by the given factor, without
# affecting memory utilization.
s.iter_size = 1
s.max_iter = 100000 # # of times to update the net (training iterations)
# Solve using the stochastic gradient descent (SGD) algorithm.
# Other choices include 'Adam' and 'RMSProp'.
s.type = 'SGD'
# Set the initial learning rate for SGD.
s.base_lr = base_lr
# Set `lr_policy` to define how the learning rate changes during training.
# Here, we 'step' the learning rate by multiplying it by a factor `gamma`
# every `stepsize` iterations.
s.lr_policy = 'step'
s.gamma = 0.1
s.stepsize = 20000
# Set other SGD hyperparameters. Setting a non-zero `momentum` takes a
# weighted average of the current gradient and previous gradients to make
# learning more stable. L2 weight decay regularizes learning, to help prevent
# the model from overfitting.
s.momentum = 0.9
s.weight_decay = 5e-4
# Display the current training loss and accuracy every 1000 iterations.
s.display = 1000
# Snapshots are files used to store networks we've trained. Here, we'll
# snapshot every 10K iterations -- ten times during training.
s.snapshot = 10000
s.snapshot_prefix = caffe_root + 'models/finetune_flickr_style/finetune_flickr_style'
# Train on the GPU. Using the CPU to train large networks is very slow.
s.solver_mode = caffe_pb2.SolverParameter.GPU
# Write the solver to a temporary file and return its filename.
with tempfile.NamedTemporaryFile(delete=False) as f:
f.write(str(s))
return f.name
```
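To make the `'step'` learning rate policy concrete, here is a small standalone sketch (not part of the notebook itself) of the effective learning rate it produces: the rate is multiplied by `gamma` once every `stepsize` iterations.

```python
def step_lr(base_lr, gamma, stepsize, it):
    # lr = base_lr * gamma ^ floor(it / stepsize)
    return base_lr * gamma ** (it // stepsize)

# with base_lr=0.001, gamma=0.1, stepsize=20000 as in `solver` above,
# the learning rate drops by 10x at iterations 20K, 40K, 60K, 80K
for it in (0, 19999, 20000, 40000, 99999):
    print(it, step_lr(0.001, 0.1, 20000, it))
```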
Now we'll invoke the solver to train the style net's classification layer.
For the record, if you want to train the network using only the command line tool, this is the command:
<code>
build/tools/caffe train \
-solver models/finetune_flickr_style/solver.prototxt \
-weights models/bvlc_reference_caffenet/bvlc_reference_caffenet.caffemodel \
-gpu 0
</code>
However, we will train using Python in this example.
We'll first define `run_solvers`, a function that takes a list of solvers and steps each one in a round robin manner, recording the accuracy and loss values each iteration. At the end, the learned weights are saved to a file.
```
def run_solvers(niter, solvers, disp_interval=10):
"""Run solvers for niter iterations,
returning the loss and accuracy recorded each iteration.
`solvers` is a list of (name, solver) tuples."""
blobs = ('loss', 'acc')
loss, acc = ({name: np.zeros(niter) for name, _ in solvers}
for _ in blobs)
for it in range(niter):
for name, s in solvers:
s.step(1) # run a single SGD step in Caffe
loss[name][it], acc[name][it] = (s.net.blobs[b].data.copy()
for b in blobs)
if it % disp_interval == 0 or it + 1 == niter:
loss_disp = '; '.join('%s: loss=%.3f, acc=%2d%%' %
(n, loss[n][it], np.round(100*acc[n][it]))
for n, _ in solvers)
print '%3d) %s' % (it, loss_disp)
# Save the learned weights from both nets.
weight_dir = tempfile.mkdtemp()
weights = {}
for name, s in solvers:
filename = 'weights.%s.caffemodel' % name
weights[name] = os.path.join(weight_dir, filename)
s.net.save(weights[name])
return loss, acc, weights
```
Let's create and run solvers to train nets for the style recognition task. We'll create two solvers -- one (`style_solver`) will have its train net initialized to the ImageNet-pretrained weights (this is done by the call to the `copy_from` method), and the other (`scratch_style_solver`) will start from a *randomly* initialized net.
During training, we should see that the ImageNet pretrained net is learning faster and attaining better accuracies than the scratch net.
```
niter = 200 # number of iterations to train
# Reset style_solver as before.
style_solver_filename = solver(style_net(train=True))
style_solver = caffe.get_solver(style_solver_filename)
style_solver.net.copy_from(weights)
# For reference, we also create a solver that isn't initialized from
# the pretrained ImageNet weights.
scratch_style_solver_filename = solver(style_net(train=True))
scratch_style_solver = caffe.get_solver(scratch_style_solver_filename)
print 'Running solvers for %d iterations...' % niter
solvers = [('pretrained', style_solver),
('scratch', scratch_style_solver)]
loss, acc, weights = run_solvers(niter, solvers)
print 'Done.'
train_loss, scratch_train_loss = loss['pretrained'], loss['scratch']
train_acc, scratch_train_acc = acc['pretrained'], acc['scratch']
style_weights, scratch_style_weights = weights['pretrained'], weights['scratch']
# Delete solvers to save memory.
del style_solver, scratch_style_solver, solvers
```
Let's look at the training loss and accuracy produced by the two training procedures. Notice how quickly the ImageNet pretrained model's loss value (blue) drops, and that the randomly initialized model's loss value (green) barely (if at all) improves from training only the classifier layer.
```
plot(np.vstack([train_loss, scratch_train_loss]).T)
xlabel('Iteration #')
ylabel('Loss')
plot(np.vstack([train_acc, scratch_train_acc]).T)
xlabel('Iteration #')
ylabel('Accuracy')
```
Let's take a look at the testing accuracy after running 200 iterations of training. Note that we're classifying among 5 classes, giving chance accuracy of 20%. We expect both results to be better than chance accuracy (20%), and we further expect the result from training using the ImageNet pretraining initialization to be much better than the one from training from scratch. Let's see.
```
def eval_style_net(weights, test_iters=10):
test_net = caffe.Net(style_net(train=False), weights, caffe.TEST)
accuracy = 0
for it in xrange(test_iters):
accuracy += test_net.forward()['acc']
accuracy /= test_iters
return test_net, accuracy
test_net, accuracy = eval_style_net(style_weights)
print 'Accuracy, trained from ImageNet initialization: %3.1f%%' % (100*accuracy, )
scratch_test_net, scratch_accuracy = eval_style_net(scratch_style_weights)
print 'Accuracy, trained from random initialization: %3.1f%%' % (100*scratch_accuracy, )
```
### 4. End-to-end finetuning for style
Finally, we'll train both nets again, starting from the weights we just learned. The only difference this time is that we'll be learning the weights "end-to-end" by turning on learning in *all* layers of the network, starting from the RGB `conv1` filters directly applied to the input image. We pass the argument `learn_all=True` to the `style_net` function defined earlier in this notebook, which tells the function to apply a positive (non-zero) `lr_mult` value for all parameters. Under the default, `learn_all=False`, all parameters in the pretrained layers (`conv1` through `fc7`) are frozen (`lr_mult = 0`), and we learn only the classifier layer `fc8_flickr`.
Note that both networks start at roughly the accuracy achieved at the end of the previous training session, and improve significantly with end-to-end training. To be more scientific, we'd also want to follow the same additional training procedure *without* the end-to-end training, to ensure that our results aren't better simply because we trained for twice as long. Feel free to try this yourself!
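For reference, the `lr_mult` mechanism works roughly as follows (a sketch of the `learned_param`/`frozen_param` dictionaries defined earlier in the notebook; the exact multiplier values are assumptions here). Each layer receives one dict for its weights and one for its biases, and a multiplier of zero freezes that parameter:

```python
# per-parameter learning-rate multipliers: lr_mult scales the solver's
# base learning rate for that parameter, so lr_mult=0 freezes it
weight_param = dict(lr_mult=1, decay_mult=1)
bias_param = dict(lr_mult=2, decay_mult=0)
learned_param = [weight_param, bias_param]  # layer gets fine-tuned
frozen_param = [dict(lr_mult=0)] * 2        # layer keeps its pretrained weights

def is_frozen(param):
    # a layer is frozen if every one of its parameters has lr_mult == 0
    return all(p.get('lr_mult', 1) == 0 for p in param)

print(is_frozen(frozen_param))   # True
print(is_frozen(learned_param))  # False
```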
```
end_to_end_net = style_net(train=True, learn_all=True)
# Set base_lr to 1e-3, the same as last time when learning only the classifier.
# You may want to play around with different values of this or other
# optimization parameters when fine-tuning. For example, if learning diverges
# (e.g., the loss gets very large or goes to infinity/NaN), you should try
# decreasing base_lr (e.g., to 1e-4, then 1e-5, etc., until you find a value
# for which learning does not diverge).
base_lr = 0.001
style_solver_filename = solver(end_to_end_net, base_lr=base_lr)
style_solver = caffe.get_solver(style_solver_filename)
style_solver.net.copy_from(style_weights)
scratch_style_solver_filename = solver(end_to_end_net, base_lr=base_lr)
scratch_style_solver = caffe.get_solver(scratch_style_solver_filename)
scratch_style_solver.net.copy_from(scratch_style_weights)
print 'Running solvers for %d iterations...' % niter
solvers = [('pretrained, end-to-end', style_solver),
('scratch, end-to-end', scratch_style_solver)]
_, _, finetuned_weights = run_solvers(niter, solvers)
print 'Done.'
style_weights_ft = finetuned_weights['pretrained, end-to-end']
scratch_style_weights_ft = finetuned_weights['scratch, end-to-end']
# Delete solvers to save memory.
del style_solver, scratch_style_solver, solvers
```
Let's now test the end-to-end finetuned models. Since all layers have been optimized for the style recognition task at hand, we expect both nets to get better results than the ones above, which were achieved by nets with only their classifier layers trained for the style task (on top of either ImageNet pretrained or randomly initialized weights).
```
test_net, accuracy = eval_style_net(style_weights_ft)
print 'Accuracy, finetuned from ImageNet initialization: %3.1f%%' % (100*accuracy, )
scratch_test_net, scratch_accuracy = eval_style_net(scratch_style_weights_ft)
print 'Accuracy, finetuned from random initialization: %3.1f%%' % (100*scratch_accuracy, )
```
We'll first look back at the image we started with and check our end-to-end trained model's predictions.
```
plt.imshow(deprocess_net_image(image))
disp_style_preds(test_net, image)
```
Whew, that looks a lot better than before! But note that this image was from the training set, so the net got to see its label at training time.
Finally, we'll pick an image from the test set (an image the model hasn't seen) and look at our end-to-end finetuned style model's predictions for it.
```
batch_index = 1
image = test_net.blobs['data'].data[batch_index]
plt.imshow(deprocess_net_image(image))
print 'actual label =', style_labels[int(test_net.blobs['label'].data[batch_index])]
disp_style_preds(test_net, image)
```
We can also look at the predictions of the network trained from scratch. We see that in this case, the scratch network also predicts the correct label for the image (*Pastel*), but is much less confident in its prediction than the pretrained net.
```
disp_style_preds(scratch_test_net, image)
```
Of course, we can again look at the ImageNet model's predictions for the above image:
```
disp_imagenet_preds(imagenet_net, image)
```
So we did finetuning and it is awesome. Let's take a look at what kind of results we are able to get with a longer, more complete run of the style recognition dataset. Note: the below URL might be occasionally down because it is run on a research machine.
http://demo.vislab.berkeleyvision.org/
# <img style="float: left; padding-right: 10px; width: 45px" src="https://raw.githubusercontent.com/Harvard-IACS/2018-CS109A/master/content/styles/iacs.png"> CS109A Introduction to Data Science
## Homework 4: Logistic Regression
**Harvard University**<br/>
**Fall 2019**<br/>
**Instructors**: Pavlos Protopapas, Kevin Rader, and Chris Tanner
<hr style="height:2pt">
```
#RUN THIS CELL
import requests
from IPython.core.display import HTML
styles = requests.get("https://raw.githubusercontent.com/Harvard-IACS/2018-CS109A/master/content/styles/cs109.css").text
HTML(styles)
```
### INSTRUCTIONS
- **This is an individual homework. No group collaboration.**
- To submit your assignment, follow the instructions given in Canvas.
- Restart the kernel and run the whole notebook again before you submit.
- As much as possible, try and stick to the hints and functions we import at the top of the homework, as those are the ideas and tools the class supports and are aiming to teach. And if a problem specifies a particular library, you're required to use that library, and possibly others from the import list.
- Please use .head() when viewing data. Do not submit a notebook that is excessively long because output was not suppressed or otherwise limited.
```
import numpy as np
import pandas as pd
from sklearn.linear_model import LinearRegression
from sklearn.linear_model import LogisticRegression
from sklearn.linear_model import LogisticRegressionCV
from sklearn.linear_model import LassoCV
from sklearn.neighbors import KNeighborsClassifier
from sklearn.model_selection import cross_val_score
from sklearn.metrics import accuracy_score
from sklearn.model_selection import KFold
from sklearn.preprocessing import MinMaxScaler
from sklearn.model_selection import train_test_split
import matplotlib
import matplotlib.pyplot as plt
%matplotlib inline
import zipfile
import seaborn as sns
sns.set()
from scipy.stats import ttest_ind
```
<div class='theme'> Cancer Classification from Gene Expressions </div>
In this problem, we will build a classification model to distinguish between two related classes of cancer, acute lymphoblastic leukemia (ALL) and acute myeloid leukemia (AML), using gene expression measurements. The dataset is provided in the file `data/dataset_hw4.csv`. Each row in this file corresponds to a tumor tissue sample from a patient with one of the two forms of leukemia. The first column contains the cancer type, with **0 indicating the ALL** class and **1 indicating the AML** class. Columns 2-7130 contain expression levels of 7129 genes recorded from each tissue sample.
In the following questions, we will use linear and logistic regression to build classification models for this data set.
<div class='exercise'><b> Question 1 [20 pts]: Data Exploration </b></div>
The first step is to split the observations into an approximate 80-20 train-test split. Below is some code to do this for you (we want to make sure everyone has the same splits). Print dataset shape before splitting and after splitting. `Cancer_type` is our target column.
**1.1** Take a peek at your training set: you should notice the severe differences in the measurements from one gene to the next (some are negative, some hover around zero, and some are well into the thousands). To account for these differences in scale and variability, normalize each predictor to vary between 0 and 1. **NOTE: for the entirety of this homework assignment, you will use these normalized values, not the original, raw values**.
**1.2** The training set contains more predictors than observations. What problem(s) can this lead to in fitting a classification model to such a dataset? Explain in 3 or fewer sentences.
**1.3** Determine which 10 genes individually discriminate between the two cancer classes the best (consider every gene in the dataset).
Plot two histograms of best predictor -- one using the training set and another using the testing set. Each histogram should clearly distinguish two different `Cancer_type` classes.
**Hint:** You may use t-testing to make this determination: https://en.wikipedia.org/wiki/Welch%27s_t-test.
**1.4** Using your most useful gene from the previous part, create a classification model by simply eye-balling a value for this gene that would discriminate the two classes the best (do not use an algorithm to determine for you the optimal coefficient or threshold; we are asking you to provide a rough estimate / model by manual inspection). Justify your choice in 1-2 sentences. Report the accuracy of your hand-chosen model on the test set (write code to implement and evaluate your hand-created model).
<hr> <hr>
<hr>
### Solutions
**The first step is to split the observations into an approximate 80-20 train-test split. Below is some code to do this for you (we want to make sure everyone has the same splits). Print dataset shape before splitting and after splitting. `Cancer_type` is our target column.**
```
np.random.seed(10)
df = pd.read_csv('data/hw4_enhance.csv', index_col=0)
X_train, X_test, y_train, y_test = train_test_split(df.loc[:, df.columns != 'Cancer_type'],
df.Cancer_type, test_size=0.2,
random_state = 109,
stratify = df.Cancer_type)
print(df.shape)
print(X_train.shape, X_test.shape, y_train.shape, y_test.shape)
print(df.Cancer_type.value_counts(normalize=True))
```
**1.1 Take a peek at your training set: you should notice the severe differences in the measurements from one gene to the next (some are negative, some hover around zero, and some are well into the thousands). To account for these differences in scale and variability, normalize each predictor to vary between 0 and 1. NOTE: for the entirety of this homework assignment, you will use these normalized values, not the original, raw values.**
```
#your code here
X_train.describe()
#your code here
min_vals = X_train.min()
max_vals = X_train.max()
X_train = (X_train - min_vals)/(max_vals - min_vals)
X_test = (X_test - min_vals)/(max_vals - min_vals)
```
**1.2 The training set contains more predictors than observations. What problem(s) can this lead to in fitting a classification model to such a dataset? Explain in 3 or fewer sentences.**
*your answer here*
With p >> n, ordinary linear and logistic regression are ill-posed: infinitely many coefficient vectors fit the training data exactly, so we need to regularize or reduce the dimensionality.
Because the dataset contains many more predictors than observations, any model fit to it will be prone to severe overfitting (a symptom of the curse of dimensionality).
The predictors are also likely to be highly multicollinear, which makes the estimated coefficients unstable and hard to interpret.
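A quick illustration of the overfitting risk (a hypothetical sketch using pure random noise as "genes", not the homework data): with far more predictors than observations, an unregularized classifier can fit the training labels almost perfectly while having learned nothing generalizable.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.RandomState(0)
n, p = 40, 2000                       # many more predictors than samples
X = rng.randn(n, p)                   # meaningless noise features
y = rng.randint(0, 2, n)              # random labels
clf = LogisticRegression(C=100000, solver='lbfgs', max_iter=1000).fit(X, y)
# training accuracy is essentially perfect despite the features carrying
# no information about the labels -- a hallmark of p >> n overfitting
print(clf.score(X, y))
```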
**1.3 Determine which 10 genes individually discriminate between the two cancer classes the best (consider every gene in the dataset).**
**Plot two histograms of best predictor -- one using the training set and another using the testing set. Each histogram should clearly distinguish two different `Cancer_type` classes.**
**Hint:** You may use t-testing to make this determination: https://en.wikipedia.org/wiki/Welch%27s_t-test.
```
#your code here
predictors = df.columns
predictors = predictors.drop('Cancer_type');
print(predictors.shape)
means_0 = X_train[y_train==0][predictors].mean()
means_1 = X_train[y_train==1][predictors].mean()
stds_0 = X_train[y_train==0][predictors].std()
stds_1 = X_train[y_train==1][predictors].std()
n1 = X_train[y_train==0].shape[0]
n2 = X_train[y_train==1].shape[0]
t_tests = np.abs(means_0-means_1)/np.sqrt( stds_0**2/n1 + stds_1**2/n2)
#your code here
best_preds_idx = np.argsort(-t_tests.values)
best_preds = t_tests.index[best_preds_idx]
print(t_tests[best_preds_idx[0:10]])
print(t_tests.index[best_preds_idx[0:10]])
best_pred = t_tests.index[best_preds_idx[0]]
print(best_pred)
#your code here
plt.figure(figsize=(12,8))
plt.subplot(211)
plt.hist( X_train[y_train==0][best_pred], bins=10, label='Class 0')
plt.hist( X_train[y_train==1][best_pred],bins=30, label='Class 1')
plt.title(best_pred + " train")
plt.legend()
plt.subplot(212)
plt.hist( X_test[y_test==0][best_pred], bins=30,label='Class 0')
plt.hist( X_test[y_test==1][best_pred], bins=30, label='Class 1')
plt.title(best_pred + " test")
plt.legend();
# #your code here
# from scipy.stats import ttest_ind
# predictors = df.columns
# predictors = predictors.drop('Cancer_type');
# print(predictors.shape)
# t_tests = ttest_ind(X_train[y_train==0],X_train[y_train==1],equal_var=False)
# best_preds_idx_t_tests = np.argsort(t_tests.pvalue)
# predictors[best_preds_idx_t_tests][0:15]
# # (7129,)
# # Index(['M31523_at', 'X95735_at', 'M84526_at', 'X61587_at', 'U50136_rna1_at',
# # 'X17042_at', 'U29175_at', 'Y08612_at', 'Z11793_at', 'J04615_at',
# # 'X76648_at', 'U72936_s_at', 'M80254_at', 'M29551_at', 'X62320_at'],
# # dtype='object')
```
**1.4 Using your most useful gene from the previous part, create a classification model by simply eye-balling a value for this gene that would discriminate the two classes the best (do not use an algorithm to determine for you the optimal coefficient or threshold; we are asking you to provide a rough estimate / model by manual inspection). Justify your choice in 1-2 sentences. Report the accuracy of your hand-chosen model on the test set (write code to implement and evaluate your hand-created model)**
```
#your code here
threshold = 0.45
train_score = accuracy_score(y_train.values, X_train[best_pred]<=threshold)  # predict class 1 when best_pred <= threshold
test_score = accuracy_score(y_test.values, X_test[best_pred]<=threshold)
results = [['naive train', train_score], ['naive test', test_score]]
df_res = pd.DataFrame.from_dict(results)
df_res
```
By observing the distribution of 'M31523_at' in the training histogram above, we roughly estimate that 0.45 distinguishes the two classes, so we use the threshold of 0.45.
<div class='exercise'><b> Question 2 [25 pts]: Linear and Logistic Regression </b></div>
In class, we discussed how to use both linear regression and logistic regression for classification. For this question, you will explore these two models by working with the single gene that you identified above as being the best predictor.
**2.1** Fit a simple linear regression model to the training set using the single gene predictor "best_predictor" to predict cancer type (use the normalized values of the gene). We could interpret the scores predicted by the regression model for a patient as being an estimate of the probability that the patient has Cancer_type=1 (AML). Is this a reasonable interpretation? If not, what is the problem with such?
Create a figure with the following items displayed on the same plot (Use training data):
- the model's predicted value (the quantitative response from your linear regression model as a function of the normalized value of the best gene predictor)
- the true binary response.
**2.2** Use your estimated linear regression model to classify observations into 0 and 1 using the standard Bayes classifier. Evaluate the classification accuracy of this classification model on both the training and testing sets.
**2.3** Next, fit a simple logistic regression model to the training set. How do the training and test classification accuracies of this model compare with the linear regression model?
Remember, you need to set the regularization parameter for sklearn's logistic regression function to be a very large value in order to **not** regularize (use 'C=100000').
**2.4**
Print and interpret Logistic regression coefficient and intercept.
Create 2 plots (with training and testing data) with 4 items displayed on each plot.
- the quantitative response from the linear regression model as a function of the best gene predictor.
- the predicted probabilities of the logistic regression model as a function of the best gene predictor.
- the true binary response.
- a horizontal line at $y=0.5$.
Based on these plots, does one of the models appear better suited for binary classification than the other? Explain in 3 sentences or fewer.
<hr>
### Solutions
**2.1 Fit a simple linear regression model to the training set using the single gene predictor "best_predictor" to predict cancer type (use the normalized values of the gene). We could interpret the scores predicted by the regression model for a patient as being an estimate of the probability that the patient has Cancer_type=1 (AML). Is this a reasonable interpretation? If not, what is the problem with such?**
**Create a figure with the following items displayed on the same plot (Use training data):**
- the model's predicted value (the quantitative response from your linear regression model as a function of the normalized value of the best gene predictor)
- the true binary response.
```
# your code here
print(best_pred)
linreg = LinearRegression()
linreg.fit(X_train[best_pred].values.reshape(-1,1), y_train)
y_train_pred = linreg.predict(X_train[best_pred].values.reshape(-1,1))
y_test_pred = linreg.predict(X_test[best_pred].values.reshape(-1,1))
# your code here
fig = plt.figure();
host = fig.add_subplot(111)
par1 = host.twinx()
host.set_ylabel("Probability")
par1.set_ylabel("Class")
host.plot(X_train[best_pred], y_train_pred, '-');
host.plot(X_train[best_pred], y_train, 's');
host.set_xlabel('Normalized best_pred')
host.set_ylabel('Probability of being ALM')
labels = ['ALL', 'ALM'];
# You can specify a rotation for the tick labels in degrees or with keywords.
par1.set_yticks( [0.082, 0.81]);
par1.set_yticklabels(labels);
```
*your answer here*
There is a problem with this interpretation: as the plot shows, some of the linear regression predictions fall below 0 and others exceed 1, so they cannot be valid probabilities.
**2.2 Use your estimated linear regression model to classify observations into 0 and 1 using the standard Bayes classifier. Evaluate the classification accuracy of this classification model on both the training and testing sets.**
```
# your code here
train_score = accuracy_score(y_train, y_train_pred>0.5)
test_score = accuracy_score(y_test, y_test_pred>0.5)
print("train score:", train_score, "test score:", test_score)
df_res = df_res.append([['Linear Regression train', train_score], ['Linear Regression test', test_score]] )
df_res
```
**2.3** **Next, fit a simple logistic regression model to the training set. How do the training and test classification accuracies of this model compare with the linear regression model? Are the classifications substantially different? Explain why this is the case.**
**Remember, you need to set the regularization parameter for sklearn's logistic regression function to be a very large value in order to *not* regularize (use `C=100000`).**
```
# your code here
logreg = LogisticRegression(C=100000, solver='lbfgs')
logreg.fit(X_train[[best_pred]], y_train)
y_train_pred_logreg = logreg.predict(X_train[[best_pred]])
y_test_pred_logreg = logreg.predict(X_test[[best_pred]])
y_train_pred_logreg_prob = logreg.predict_proba(X_train[[best_pred]])[:,1]
y_test_pred_logreg_prob = logreg.predict_proba(X_test[[best_pred]])[:,1]
train_score_logreg = accuracy_score(y_train, y_train_pred_logreg)
test_score_logreg = accuracy_score(y_test, y_test_pred_logreg)
print("train score:", train_score_logreg, "test score:", test_score_logreg)
df_res = df_res.append([['Logistic Regression train', train_score_logreg], ['Logistic Regression test', test_score_logreg]] )
df_res
```
*your answer here*
Results are not significantly different.
**2.4 Print and interpret Logistic regression coefficient and intercept.**
**Create 2 plots (with training and testing data) with 4 items displayed on each plot.**
- the quantitative response from the linear regression model as a function of the best gene predictor.
- the predicted probabilities of the logistic regression model as a function of the best gene predictor.
- the true binary response.
- a horizontal line at $y=0.5$.
**Based on these plots, does one of the models appear better suited for binary classification than the other? Explain in 3 sentences or fewer.**
$ \hat{p}(X) = \frac{e^{\hat{\beta_0}+\hat{\beta_1}X_1 } }{1 + e^{\hat{\beta_0}+\hat{\beta_1}X_1 }} $
```
# your code here
logreg.intercept_, logreg.coef_, -logreg.intercept_/logreg.coef_
```
The slope controls how steep the sigmoid function is. A negative slope indicates that the probability of predicting y = 1 decreases as X gets larger. The ratio -intercept/slope gives the location of the curve's inflection point, i.e. how far the curve is shifted left or right: here the curve is shifted approximately 0.4656 to the right.
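To illustrate the inflection-point claim, here is a standalone sketch (with made-up coefficient values for illustration, not the fitted ones): the predicted probability crosses 0.5 exactly at x = -intercept/slope.

```python
import numpy as np

def sigmoid(t):
    return 1.0 / (1.0 + np.exp(-t))

intercept, coef = 4.0, -8.6                # hypothetical values
x0 = -intercept / coef                     # inflection point, about 0.465
print(x0, sigmoid(intercept + coef * x0))  # probability is exactly 0.5 at x0
```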
```
print("Intercept:",logreg.intercept_)
prob = logreg.predict_proba(np.array([0]).reshape(-1,1))[0,1] #Predictions when best_pred = 0
print("When %s is 0, log odds are %.5f "%(best_pred,logreg.intercept_))
print("In other words, we predict `cancer_type` with %.5f probability "%(prob))
#np.exp(4.07730445)/(1+np.exp(4.07730445)) = 0.98333
print("Coefficient: ",logreg.coef_)
print("A one-unit increase in %s multiplies the odds of `cancer_type` by a factor of %.5f"%(best_pred,np.exp(logreg.coef_)))
#print("A one-unit increase in coefficient (%s) is associated with an increase in the log odds of `cancer_type` by %.5f"%(best_pred,logreg.coef_))
#Explanation
# #Assume best_pred = 0.48
# prob = logreg.predict_proba(np.array([0.48]).reshape(-1,1))[0,1]
# print("Prob. when best_pred is 0.48 = ",prob)
# print("Log odds when best_pred is 0.48 = ", np.log(prob/(1-prob)))
# #Increase best_pred by 1, best_pred = 1.48
# prob1 = logreg.predict_proba(np.array([1.48]).reshape(-1,1))[0,1]
# print("Prob. when best_pred is 1.48 = ",prob1)
# print("Log odds when best_pred is 1.48 = ", np.log(prob1/(1-prob1)))
# np.log(prob1/(1-prob1)) - (np.log(prob/(1-prob))) #coefficient
# your code here
fig, ax = plt.subplots(1,2, figsize=(16,5))
sort_index = np.argsort(X_train[best_pred].values)
# plotting true binary response
ax[0].scatter(X_train[best_pred].iloc[sort_index].values, y_train.iloc[sort_index].values, color='red', label = 'Train True Response')
# plotting ols output
ax[0].plot(X_train[best_pred].iloc[sort_index].values, y_train_pred[sort_index], color='red', alpha=0.3, \
label = 'Linear Regression Predictions')
# plotting logreg prob output
ax[0].plot(X_train[best_pred].iloc[sort_index].values, y_train_pred_logreg_prob[sort_index], alpha=0.3, \
color='green', label = 'Logistic Regression Predictions Prob')
ax[0].axhline(0.5, c='c')
ax[0].legend()
ax[0].set_title('Train - True response v/s obtained responses')
ax[0].set_xlabel('Gene predictor value')
ax[0].set_ylabel('Cancer type response');
# Test
sort_index = np.argsort(X_test[best_pred].values)
# plotting true binary response
ax[1].scatter(X_test[best_pred].iloc[sort_index].values, y_test.iloc[sort_index].values, color='black', label = 'Test True Response')
# plotting ols output
ax[1].plot(X_test[best_pred].iloc[sort_index].values, y_test_pred[sort_index], color='red', alpha=0.3, \
label = 'Linear Regression Predictions')
# plotting logreg prob output
ax[1].plot(X_test[best_pred].iloc[sort_index].values, y_test_pred_logreg_prob[sort_index], alpha=0.3, \
color='green', label = 'Logistic Regression Predictions Prob')
ax[1].axhline(0.5, c='c')
ax[1].legend()
ax[1].set_title('Test - True response v/s obtained responses')
ax[1].set_xlabel('Gene predictor value')
ax[1].set_ylabel('Cancer type response');
```
Logistic regression is better suited for this problem: its predicted probabilities stay within the [0, 1] range, as expected for binary classification.
<div class='exercise'> <b> Question 3 [20pts]: Multiple Logistic Regression </b> </div>
**3.1** Next, fit a multiple logistic regression model with **all** the gene predictors from the data set (reminder: for this assignment, we are always using the normalized values). How does the classification accuracy of this model compare with the models fitted in question 2 with a single gene (on both the training and test sets)?
**3.2** How many of the coefficients estimated by this multiple logistic regression in the previous part (P3.1) are significantly different from zero at a *significance level of 5%*? Use the same value of C=100000 as before.
**Hint:** To answer this question, use *bootstrapping* with 100 bootstrap samples/iterations.
**3.3** Comment on the classification accuracy of both the training and testing set. Given the results above, how would you assess the generalization capacity of your trained model? What other tests would you suggest to better guard against possibly having a false sense of the overall efficacy/accuracy of the model as a whole?
**3.4** Now let's use regularization to improve the predictions from the multiple logistic regression model. Specifically, use LASSO-like regularization and cross-validation to train the model on the training set. Report the classification accuracy on both the training and testing set.
**3.5** Do the 10 best predictors from Q1 hold up as important features in this regularized model? If not, explain why this is the case (feel free to use the data to support your explanation).
<hr>
### Solutions
**3.1 Next, fit a multiple logistic regression model with all the gene predictors from the data set (reminder: for this assignment, we are always using the normalized values). How does the classification accuracy of this model compare with the models fitted in question 2 with a single gene (on both the training and test sets)?**
```
# your code here
# fitting multi regression model
multi_regr = LogisticRegression(C=100000, solver = "lbfgs", max_iter=10000, random_state=109)
multi_regr.fit(X_train, y_train)
# predictions
y_train_pred_multi = multi_regr.predict(X_train)
y_test_pred_multi = multi_regr.predict(X_test)
# accuracy
train_score_multi = accuracy_score(y_train, y_train_pred_multi)
test_score_multi = accuracy_score(y_test, y_test_pred_multi)
print('Training set accuracy for multiple logistic regression = ', train_score_multi)
print('Test set accuracy for multiple logistic regression = ', test_score_multi)
df_res = df_res.append([['Multiple Logistic Regression train', train_score_multi],
['Multiple Logistic Regression test', test_score_multi]] )
df_res
```
*your answer here*
The multiple-gene model achieves better accuracy, but the gap between the training and test scores suggests it is overfitted.
**3.2** **How many of the coefficients estimated by this multiple logistic regression in the previous part (P3.1) are significantly different from zero at a *significance level of 5%*? Use the same value of C=100000 as before.**
**Hint:** To answer this question, use *bootstrapping* with 100 bootstrap samples/iterations.
```
# your code here
# bootstrapping code
n = 100 # Number of iterations
boot_coefs = np.zeros((X_train.shape[1],n)) # Create empty storage array for later use
# iteration for each sample
for i in range(n):
# Sampling WITH replacement the indices of a resampled dataset
sample_index = np.random.choice(range(y_train.shape[0]), size=y_train.shape[0], replace=True)
# finding subset
x_train_samples = X_train.values[sample_index]
y_train_samples = y_train.values[sample_index]
# finding logreg coefficient
logistic_mod_boot = LogisticRegression(C=100000, fit_intercept=True, solver = "lbfgs", max_iter=10000)
logistic_mod_boot.fit(x_train_samples, y_train_samples)
boot_coefs[:,i] = logistic_mod_boot.coef_
# your code here
ci_upper = np.percentile(boot_coefs, 97.5, axis=1)
ci_lower = np.percentile(boot_coefs, 2.5, axis=1)
# ct significant predictors
sig_b_ct = 0
sig_preds = []
cols = list(X_train.columns)
# if ci contains 0, then insignificant
for i in range(len(ci_upper)):
if ci_upper[i]<0 or ci_lower[i]>0:
sig_b_ct += 1
sig_preds.append(cols[i])
print("Significant coefficients at 5pct level = %i / %i" % (sig_b_ct, len(ci_upper)))
# print('Number of significant columns: ', len(sig_preds))
```
**3.3 Comment on the classification accuracy of both the training and testing set. Given the results above, how would you assess the generalization capacity of your trained model? What other tests would you suggest to better guard against possibly having a false sense of the overall efficacy/accuracy of the model as a whole?**
*your answer here*
The gap between training and test accuracy suggests limited generalization capacity; proper cross-validation and/or regularization would better guard against a false sense of the model's overall efficacy.
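As one concrete illustration of that suggestion, a k-fold cross-validation estimate can be sketched as follows. This is a hedged sketch on synthetic stand-in data (the `make_classification` toy set is an assumption, not the gene-expression data used above), not part of the graded solution:

```python
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

# Synthetic stand-in for the gene-expression training set (illustrative only)
X_toy, y_toy = make_classification(n_samples=60, n_features=20, random_state=0)

# 5-fold CV gives a less optimistic accuracy estimate than the raw training score
cv_model = LogisticRegression(C=100000, solver="lbfgs", max_iter=10000)
scores = cross_val_score(cv_model, X_toy, y_toy, cv=5)
print("CV accuracy: %.3f +/- %.3f" % (scores.mean(), scores.std()))
```

Comparing the mean CV accuracy against the training accuracy makes an optimistic training score visible before ever touching the test set.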
**3.4 Now let's use regularization to improve the predictions from the multiple logistic regression model. Specifically, use LASSO-like regularization and cross-validation to train the model on the training set. Report the classification accuracy on both the training and testing set.**
```
# your code here
# fitting regularized multi regression model - L1 penalty
# liblinear is used because it supports the L1 penalty - use 5-fold CV
multi_regr = LogisticRegressionCV( solver='liblinear', penalty='l1', cv=5)
multi_regr.fit(X_train, y_train)
# predictions
y_train_pred_multi = multi_regr.predict(X_train)
y_test_pred_multi = multi_regr.predict(X_test)
# accuracy
train_score_multi = accuracy_score(y_train, y_train_pred_multi)
test_score_multi = accuracy_score(y_test, y_test_pred_multi)
print('Training set accuracy for multiple logistic regression = ', train_score_multi)
print('Test set accuracy for multiple logistic regression = ', test_score_multi)
df_res = df_res.append([['Reg-loR train', train_score_multi], ['Reg-loR val', test_score_multi]] )
df_res
```
**3.5 Do the 10 best predictors from Q1 hold up as important features in this regularized model? If not, explain why this is the case (feel free to use the data to support your explanation).**
```
# your code here
best_pred_1_3 = set(t_tests.index[best_preds_idx[0:10]])
print(best_pred_1_3)
# your code here
multi_regr_coefs =multi_regr.coef_!=0
# Following are the predictors with nonzero L1 (lasso-like) coefficients, and their count
predictors[multi_regr_coefs[0]] , np.sum(multi_regr_coefs[0])
# your code here
best_pred_1_3.difference(predictors[multi_regr_coefs[0]])
# The following predictors were important under the t-test, but not for the L1-regularized logistic regression.
# your code here
#checking correlation between above list and best predictor
df[['X17042_at', 'X76648_at', 'Y08612_at','M31523_at']].corr().style.background_gradient(cmap='Blues')
```
*your answer here*
The idea here is that the predictors that did not make it into the regularized model are the ones strongly correlated with the best predictor, so the L1 penalty drops them as redundant. Notice the high (absolute) correlation values in the last row / last column.
<div class='exercise'> <b> Question 4 [25pts]: Multiclass Logistic Regression </b> </div>
**4.1** Load the data `hw4_mc_enhance.csv.zip` and examine its structure. How many instances of each class are there in our dataset?
**4.2** Split the dataset into train and test, 80-20 split, random_state = 8.
We are going to use two particular features/predictors -- 'M31523_at', 'X95735_at'. Create a scatter plot of these two features using training set. We should be able to discern from the plot which sample belongs to which `cancer_type`.
**4.3** Fit the following two models using cross-validation:
- Logistic Regression Multiclass model with linear features.
- Logistic Regression Multiclass model with Polynomial features, degree = 2.
**4.4** Plot the decision boundary and interpret results. **Hint:** You may utilize the function `overlay_decision_boundary`
**4.5** Report and plot the CV scores for the two models and interpret the results.
<hr>
### Solutions
**4.1 Load the data `hw4_mc_enhance.csv.zip` and examine its structure. How many instances of each class are there in our dataset?**
```
#your code here
zf = zipfile.ZipFile('data/hw4_mc_enhance.csv.zip')
df = pd.read_csv(zf.open('hw4_mc_enhance.csv'))
display(df.describe())
display(df.head())
#your code here
print(df.columns)
#How many instances of each class are there in our dataset ?
print(df.cancer_type.value_counts())
```
**4.2 Split the dataset into train and test, 80-20 split, random_state = 8.**
**We are going to utilize these two features - 'M31523_at', 'X95735_at'. Create a scatter plot of these two features using training dataset. We should be able to discern from the plot which sample belongs to which `cancer_type`.**
```
# your code here
# Split data
from sklearn.model_selection import train_test_split
random_state = 8
data_train, data_test = train_test_split(df, test_size=.2, random_state=random_state)
data_train_X = data_train[best_preds[0:2]]
data_train_Y = data_train['cancer_type']
# your code here
print(best_preds[0:2])
# your code here
X = data_train_X.values
y = data_train_Y.values
pal = sns.utils.get_color_cycle()
class_colors = {0: pal[0], 1: pal[1], 2: pal[2]}
class_markers = {0: 'o', 1: '^', 2: 'v'}
class_names = {"ClassA": 0, "ClassB": 1, "ClassC": 2}
def plot_cancer_data(ax, X, y):
for class_name, response in class_names.items():
subset = X[y == response]
ax.scatter(
subset[:, 0],
subset[:, 1],
label=class_name,
alpha=.9, color=class_colors[response],
lw=.5, edgecolor='k', marker=class_markers[response])
ax.set(xlabel='Biomarker 1', ylabel='Biomarker 2')
ax.legend(loc="lower right")
fig, ax = plt.subplots(figsize=(10,6))
ax.set_title( 'M31523_at vs. X95735_at')
plot_cancer_data(ax, X, y)
```
**4.3 Fit the following two models using cross-validation:**
**Logistic Regression Multiclass model with linear features.**
**Logistic Regression Multiclass model with Polynomial features, degree = 2.**
```
# your code here
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import PolynomialFeatures
from sklearn.preprocessing import StandardScaler
polynomial_logreg_estimator = make_pipeline(
PolynomialFeatures(degree=2, include_bias=False),
LogisticRegressionCV(multi_class="ovr"))
# Since this is a Pipeline, you can call `.fit` and `.predict` just as if it were any other estimator.
#
# Note that you can access the logistic regression classifier itself by
# polynomial_logreg_estimator.named_steps['logisticregressioncv']
# your code here
standardize_before_logreg = True
if not standardize_before_logreg:
# without standardizing...
logreg_ovr = LogisticRegressionCV(multi_class="ovr", cv=5, max_iter=300).fit(X, y)
polynomial_logreg_estimator = make_pipeline(
PolynomialFeatures(degree=2, include_bias=False),
LogisticRegressionCV(multi_class="ovr", cv=5, max_iter=300)).fit(X, y);
else:
# with standardizing... since we want to standardize all features, it's really this easy:
logreg_ovr = make_pipeline(
StandardScaler(),
LogisticRegressionCV(multi_class="ovr", cv=5, max_iter=300)).fit(X, y)
polynomial_logreg_estimator = make_pipeline(
PolynomialFeatures(degree=2, include_bias=False),
StandardScaler(),
LogisticRegressionCV(multi_class="ovr", cv=5)).fit(X, y);
```
**4.4 Plot the decision boundary and interpret results. Hint: You may utilize the function `overlay_decision_boundary`**
```
def overlay_decision_boundary(ax, model, colors=None, nx=200, ny=200, desaturate=.5, xlim=None, ylim=None):
"""
A function that visualizes the decision boundaries of a classifier.
ax: Matplotlib Axes to plot on
model: Classifier to use.
- if `model` has a `.predict` method, like an sklearn classifier, we call `model.predict(X)`
- otherwise, we simply call `model(X)`
colors: list or dict of colors to use. Use color `colors[i]` for class i.
- If colors is not provided, uses the current color cycle
nx, ny: number of mesh points to evaluated the classifier on
desaturate: how much to desaturate each of the colors (for better contrast with the sample points)
xlim, ylim: range to plot on. (If the default, None, is passed, the limits will be taken from `ax`.)
"""
# Create mesh.
xmin, xmax = ax.get_xlim() if xlim is None else xlim
ymin, ymax = ax.get_ylim() if ylim is None else ylim
xx, yy = np.meshgrid(
np.linspace(xmin, xmax, nx),
np.linspace(ymin, ymax, ny))
X = np.c_[xx.flatten(), yy.flatten()]
# Predict on mesh of points.
model = getattr(model, 'predict', model)
y = model(X)
#print("Do I predict" , y)
# y[np.where(y=='aml')]=3
# y[np.where(y=='allT')]=2
# y[np.where(y=='allB')]=1
y = y.astype(int) # This may be necessary for 32-bit Python.
y = y.reshape((nx, ny))
# Generate colormap.
if colors is None:
# If colors not provided, use the current color cycle.
# Shift the indices so that the lowest class actually predicted gets the first color.
# ^ This is a bit magic, consider removing for next year.
colors = (['white'] * np.min(y)) + sns.utils.get_color_cycle()
if isinstance(colors, dict):
missing_colors = [idx for idx in np.unique(y) if idx not in colors]
assert len(missing_colors) == 0, f"Color not specified for predictions {missing_colors}."
# Make a list of colors, filling in items from the dict.
color_list = ['white'] * (np.max(y) + 1)
for idx, val in colors.items():
color_list[idx] = val
else:
assert len(colors) >= np.max(y) + 1, "Insufficient colors passed for all predictions."
color_list = colors
color_list = [sns.utils.desaturate(color, desaturate) for color in color_list]
cmap = matplotlib.colors.ListedColormap(color_list)
# Plot decision surface
ax.pcolormesh(xx, yy, y, zorder=-2, cmap=cmap, norm=matplotlib.colors.NoNorm(), vmin=0, vmax=y.max() + 1)
xx = xx.reshape(nx, ny)
yy = yy.reshape(nx, ny)
if len(np.unique(y)) > 1:
ax.contour(xx, yy, y, colors="black", linewidths=1, zorder=-1)
else:
print("Warning: only one class predicted, so not plotting contour lines.")
# Your code here
def plot_decision_boundary(x, y, model, title, ax):
plot_cancer_data(ax, x, y)
overlay_decision_boundary(ax, model, colors=class_colors)
ax.set_title(title)
# your code here
fig, axs = plt.subplots(1, 2, figsize=(12, 5))
named_classifiers = [
("Linear", logreg_ovr),
("Polynomial", polynomial_logreg_estimator)
]
for ax, (name, clf) in zip(axs, named_classifiers):
plot_decision_boundary(X, y, clf, name, ax)
```
**4.5 Report and plot the CV scores for the two models and interpret the results.**
```
# your code here
cv_scores = [
cross_val_score(model, X, y, cv=3)
for name, model in named_classifiers]
plt.boxplot(cv_scores);
plt.xticks(np.arange(1, 3), [name for name, model in named_classifiers])
plt.xlabel("Logistic Regression variant")
plt.ylabel("Validation-Set Accuracy");
# your code here
print("Cross-validation accuracy:")
pd.DataFrame(cv_scores, index=[name for name, model in named_classifiers]).T.aggregate(['mean', 'std']).T
```
We are looking for low standard deviations in the cross-validation scores. If the standard deviation is low (as in this case), we expect the accuracy on an unseen test dataset to be roughly equal to the mean cross-validation accuracy.
<div class='exercise'><b> Question 5: [10 pts] Including an 'abstain' option </b></div>
One of the reasons a hospital might be hesitant to use your cancer classification model is that a misdiagnosis by the model on a patient can sometimes prove to be very costly (e.g., missing a diagnosis or wrongly diagnosing a condition, and subsequently, one may file a lawsuit seeking a compensation for damages). One way to mitigate this concern is to allow the model to 'abstain' from making a prediction whenever it is uncertain about the diagnosis for a patient. However, when the model abstains from making a prediction, the hospital will have to forward the patient to a specialist, which would incur additional cost. How could one design a cancer classification model with an abstain option, such that the cost to the hospital is minimized?
**Hint:** Think of ways to build on top of the logistic regression model and have it abstain on patients who are difficult to classify.
**5.1** More specifically, suppose the cost incurred by a hospital when a model mis-predicts on a patient is $\$5000$ , and the cost incurred when the model abstains from making a prediction is \$1000. What is the average cost per patient for the OvR logistic regression model (without quadratic or interaction terms) from **Question 4**. Note that this needs to be evaluated on the patients in the testing set.
**5.2** Design a classification strategy (into the 3 groups plus the *abstain* group) that has as low a cost per patient as possible (certainly lower than the logistic regression model's). Give a justification for your approach.
<hr>
### Solutions
**5.1 More specifically, suppose the cost incurred by a hospital when a model mis-predicts on a patient is $\$5000$ , and the cost incurred when the model abstains from making a prediction is \$1000. What is the average cost per patient for the OvR logistic regression model (without quadratic or interaction terms) from Question 4.** <br><bR> Note that this needs to be evaluated on the patients in the testing set.
*your answer here*
**Philosophy:** Assuming the OvR logistic regression model, we estimate $p_j$ for $j\in \{1,2,3\}$, the marginal probability of being in each class. `sklearn` handles the normalization for us, although the normalization step is not necessary for the multinomial model since the softmax function is already constrained to sum to 1.
Following the hint, we proceed by using the trained OvR logistic regression model to estimate $\hat{p}_j$ and then use the misclassifications to estimate their cost.
```
data_test.head()
# predict only in two best predictors
dec = logreg_ovr.predict(data_test.loc[:,best_preds[0:2]].values)
dec = pd.Series(dec).astype('category').cat.codes
# true values in test, our y_test
vl = np.array(data_test.cancer_type.astype('category').cat.codes)
# your code here
def cost(predictions, truth):
    ''' Counts the cost when we have misclassifications in the predictions vs. the truth set.
Option = -1 is the abstain option and is only relevant when the values include the abstain option,
otherwise initial cost defaults to 0 (for question 5.1).
Arguments: prediction values and true values
Returns: the numerical cost
'''
cost = 1000 * len(predictions[predictions == -1]) # defaults to 0 for 5.1
true_vals = truth[predictions != -1] # defaults to truth for 5.1
predicted_vals = predictions[predictions != -1] # defaults to predictions for 5.1
cost += 5000 * np.sum(true_vals != predicted_vals)
return cost
print("Cost incurred for OvR Logistic Regression Model without abstain: $", cost(dec,vl)/len(vl))
```
**5.2 Design a classification strategy (into the 3 groups plus the *abstain* group) that has as low a cost per patient as possible (certainly lower than the logistic regression model's). Give a justification for your approach.**
Following 5.1, we make the decision to abstain or not based on minimizing the expected cost.
<br><br>
The expected cost for abstaining is $\$1000$. The expected cost for predicting is $ \$5000 * P(\text{misdiagnosis}) = 5000 * (1 - \hat{p}_k)$ where $k$ is the label of the predicted class.
So our decision rule is: if the expected cost of a possible misdiagnosis is less than the cost of abstaining (expressed by the inequality $5000 \cdot (1 - \hat{p}_k) < 1000$), then attempt a prediction. Otherwise, abstain.
```
# your code here
def decision_rule(lrm_mod,input_data):
probs = lrm_mod.predict_proba(input_data)
predicted_class = np.argmax(probs,axis = 1)
conf = 1.0 - np.max(probs,axis = 1)
predicted_class[5000*conf > 1000.0] = -1 #Abstain
return predicted_class
inp = data_test.loc[:,best_preds[0:2]].values
dec2 = decision_rule(logreg_ovr,inp)
print("Cost incurred for new model: $", cost(dec2,vl)/len(vl))
```
# Homework 2 - Deep Learning
## Liberatori Benedetta
```
import torch
import numpy as np
# A class defining the model for the Multi Layer Perceptron
class MLP(torch.nn.Module):
def __init__(self):
super().__init__()
self.layer1 = torch.nn.Linear(in_features=6, out_features=2, bias= True)
self.layer2 = torch.nn.Linear(in_features=2, out_features=1, bias= True)
def forward(self, X):
out = self.layer1(X)
out = self.layer2(out)
        out = torch.sigmoid(out)
return out
# Initialization of weights: uniformly distributed between -0.3 and 0.3
W = (0.3 + 0.3) * torch.rand(6, 1 ) - 0.3
# Initialization of data: 50% symmetric randomly generated tensors
# 50% not necessarily symmetric
firsthalf= torch.rand([32,3])
secondhalf=torch.zeros([32,3])
secondhalf[:, 2:3 ]=firsthalf[:, 0:1]
secondhalf[:, 1:2 ]=firsthalf[:, 1:2]
secondhalf[:, 0:1 ]=firsthalf[:, 2:3]
y1=torch.ones([32,1])
y0=torch.zeros([32,1])
simmetric = torch.cat((firsthalf, secondhalf, y1), dim=1)
notsimmetric = torch.rand([32,6])
notsimmetric= torch.cat((notsimmetric, y0), dim=1)
data= torch.cat((notsimmetric, simmetric), dim=0)
# Permutation of the concatenated dataset
data= data[torch.randperm(data.size()[0])]
def train_epoch(model, data, loss_fn, optimizer):
X=data[:,0:6]
y=data[:,6]
# 1. reset the gradients previously accumulated by the optimizer
# this will avoid re-using gradients from previous loops
optimizer.zero_grad()
# 2. get the predictions from the current state of the model
# this is the forward pass
y_hat = model(X)
# 3. calculate the loss on the current mini-batch
loss = loss_fn(y_hat, y.unsqueeze(1))
# 4. execute the backward pass given the current loss
loss.backward()
# 5. update the value of the params
optimizer.step()
return model
def train_model(model, data, loss_fn, optimizer, num_epochs):
model.train()
for epoch in range(num_epochs):
model=train_epoch(model, data, loss_fn, optimizer)
for i in model.state_dict():
print(model.state_dict()[i])
# Parameters set as defined in the paper
learn_rate = 0.1
num_epochs = 1425
beta= 0.9
model = MLP()
# I have judged the loss function (3) reported in the paper to be a general one for the discussion
# Since the problem of interest is a binary classification and that loss is mostly suited for
# regression problems I have used instead a Binary Cross Entropy loss
loss_fn = torch.nn.BCELoss()
# Gradient descent optimizer with momentum
optimizer = torch.optim.SGD(model.parameters(), lr=learn_rate, momentum=beta)
train_model(model, data, loss_fn, optimizer, num_epochs)
```
## Some conclusions:
Even though the original protocol has been followed as closely as possible, the results obtained in the same number of epochs are far from the ones stated in the paper. Not only do the numbers differ, the learned weights are not even close to being symmetric. I assume this could depend on the initialization of the data, which was not reported and was thus a completely autonomous choice.
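As a side note, the `W` tensor defined above is built but never copied into the model, so the layers keep PyTorch's default initialization. A minimal sketch of actually applying the paper's uniform $[-0.3, 0.3]$ initialization to every parameter might look like this (the in-place `uniform_` call is one of several ways to do it; the redefined `MLP` mirrors the architecture above):

```python
import torch

class MLP(torch.nn.Module):
    """Same architecture as above: 6 -> 2 -> 1 with a sigmoid output."""
    def __init__(self):
        super().__init__()
        self.layer1 = torch.nn.Linear(in_features=6, out_features=2, bias=True)
        self.layer2 = torch.nn.Linear(in_features=2, out_features=1, bias=True)

    def forward(self, X):
        out = self.layer1(X)
        out = self.layer2(out)
        return torch.sigmoid(out)

model = MLP()
# Re-initialize every weight and bias uniformly in [-0.3, 0.3],
# as the paper's protocol describes
with torch.no_grad():
    for p in model.parameters():
        p.uniform_(-0.3, 0.3)
```

Whether this closes the gap with the paper's reported weights is an open question, but it at least removes one undocumented difference in the setup.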
# Logistic Regression with a Neural Network mindset
Welcome to your first (required) programming assignment! You will build a logistic regression classifier to recognize cats. This assignment will step you through how to do this with a Neural Network mindset, and so will also hone your intuitions about deep learning.
**Instructions:**
- Do not use loops (for/while) in your code, unless the instructions explicitly ask you to do so.
**You will learn to:**
- Build the general architecture of a learning algorithm, including:
- Initializing parameters
- Calculating the cost function and its gradient
- Using an optimization algorithm (gradient descent)
- Gather all three functions above into a main model function, in the right order.
## 1 - Packages ##
First, let's run the cell below to import all the packages that you will need during this assignment.
- [numpy](https://www.numpy.org/) is the fundamental package for scientific computing with Python.
- [h5py](http://www.h5py.org) is a common package to interact with a dataset that is stored on an H5 file.
- [matplotlib](http://matplotlib.org) is a famous library to plot graphs in Python.
- [PIL](http://www.pythonware.com/products/pil/) and [scipy](https://www.scipy.org/) are used here to test your model with your own picture at the end.
```
import numpy as np
import matplotlib.pyplot as plt
import h5py
import scipy
from PIL import Image
from scipy import ndimage
from lr_utils import load_dataset
%matplotlib inline
```
## 2 - Overview of the Problem set ##
**Problem Statement**: You are given a dataset ("data.h5") containing:
- a training set of m_train images labeled as cat (y=1) or non-cat (y=0)
- a test set of m_test images labeled as cat or non-cat
- each image is of shape (num_px, num_px, 3) where 3 is for the 3 channels (RGB). Thus, each image is square (height = num_px) and (width = num_px).
You will build a simple image-recognition algorithm that can correctly classify pictures as cat or non-cat.
Let's get more familiar with the dataset. Load the data by running the following code.
```
# Loading the data (cat/non-cat)
train_set_x_orig, train_set_y, test_set_x_orig, test_set_y, classes = load_dataset()
```
We added "_orig" at the end of image datasets (train and test) because we are going to preprocess them. After preprocessing, we will end up with train_set_x and test_set_x (the labels train_set_y and test_set_y don't need any preprocessing).
Each line of your train_set_x_orig and test_set_x_orig is an array representing an image. You can visualize an example by running the following code. Feel free also to change the `index` value and re-run to see other images.
```
# Example of a picture
index = 25
plt.imshow(train_set_x_orig[index])
print ("y = " + str(train_set_y[:, index]) + ", it's a '" + classes[np.squeeze(train_set_y[:, index])].decode("utf-8") + "' picture.")
```
Many software bugs in deep learning come from having matrix/vector dimensions that don't fit. If you can keep your matrix/vector dimensions straight you will go a long way toward eliminating many bugs.
**Exercise:** Find the values for:
- m_train (number of training examples)
- m_test (number of test examples)
- num_px (= height = width of a training image)
Remember that `train_set_x_orig` is a numpy-array of shape (m_train, num_px, num_px, 3). For instance, you can access `m_train` by writing `train_set_x_orig.shape[0]`.
```
### START CODE HERE ### (≈ 3 lines of code)
m_train = train_set_x_orig.shape[0]
m_test = test_set_x_orig.shape[0]
num_px = train_set_x_orig.shape[1]
### END CODE HERE ###
print ("Number of training examples: m_train = " + str(m_train))
print ("Number of testing examples: m_test = " + str(m_test))
print ("Height/Width of each image: num_px = " + str(num_px))
print ("Each image is of size: (" + str(num_px) + ", " + str(num_px) + ", 3)")
print ("train_set_x shape: " + str(train_set_x_orig.shape))
print ("train_set_y shape: " + str(train_set_y.shape))
print ("test_set_x shape: " + str(test_set_x_orig.shape))
print ("test_set_y shape: " + str(test_set_y.shape))
```
**Expected Output for m_train, m_test and num_px**:
<table style="width:15%">
<tr>
<td>**m_train**</td>
<td> 209 </td>
</tr>
<tr>
<td>**m_test**</td>
<td> 50 </td>
</tr>
<tr>
<td>**num_px**</td>
<td> 64 </td>
</tr>
</table>
For convenience, you should now reshape images of shape (num_px, num_px, 3) in a numpy-array of shape (num_px $*$ num_px $*$ 3, 1). After this, our training (and test) dataset is a numpy-array where each column represents a flattened image. There should be m_train (respectively m_test) columns.
**Exercise:** Reshape the training and test data sets so that images of size (num_px, num_px, 3) are flattened into single vectors of shape (num\_px $*$ num\_px $*$ 3, 1).
A trick when you want to flatten a matrix X of shape (a,b,c,d) to a matrix X_flatten of shape (b$*$c$*$d, a) is to use:
```python
X_flatten = X.reshape(X.shape[0], -1).T # X.T is the transpose of X
```
```
# Reshape the training and test examples
### START CODE HERE ### (≈ 2 lines of code)
train_set_x_flatten = train_set_x_orig.reshape(train_set_x_orig.shape[0],-1).T
test_set_x_flatten = test_set_x_orig.reshape(test_set_x_orig.shape[0], -1).T
### END CODE HERE ###
print ("train_set_x_flatten shape: " + str(train_set_x_flatten.shape))
print ("train_set_y shape: " + str(train_set_y.shape))
print ("test_set_x_flatten shape: " + str(test_set_x_flatten.shape))
print ("test_set_y shape: " + str(test_set_y.shape))
print ("sanity check after reshaping: " + str(train_set_x_flatten[0:5,0]))
```
**Expected Output**:
<table style="width:35%">
<tr>
<td>**train_set_x_flatten shape**</td>
<td> (12288, 209)</td>
</tr>
<tr>
<td>**train_set_y shape**</td>
<td>(1, 209)</td>
</tr>
<tr>
<td>**test_set_x_flatten shape**</td>
<td>(12288, 50)</td>
</tr>
<tr>
<td>**test_set_y shape**</td>
<td>(1, 50)</td>
</tr>
<tr>
<td>**sanity check after reshaping**</td>
<td>[17 31 56 22 33]</td>
</tr>
</table>
To represent color images, the red, green and blue channels (RGB) must be specified for each pixel, and so the pixel value is actually a vector of three numbers ranging from 0 to 255.
One common preprocessing step in machine learning is to center and standardize your dataset, meaning that you subtract the mean of the whole numpy array from each example, and then divide each example by the standard deviation of the whole numpy array. But for picture datasets, it is simpler and more convenient and works almost as well to just divide every row of the dataset by 255 (the maximum value of a pixel channel).
<!-- During the training of your model, you're going to multiply weights and add biases to some initial inputs in order to observe neuron activations. Then you backpropogate with the gradients to train the model. But, it is extremely important for each feature to have a similar range such that our gradients don't explode. You will see that more in detail later in the lectures. !-->
Let's standardize our dataset.
```
train_set_x = train_set_x_flatten/255.
test_set_x = test_set_x_flatten/255.
```
<font color='blue'>
**What you need to remember:**
Common steps for pre-processing a new dataset are:
- Figure out the dimensions and shapes of the problem (m_train, m_test, num_px, ...)
- Reshape the datasets such that each example is now a vector of size (num_px \* num_px \* 3, 1)
- "Standardize" the data
## 3 - General Architecture of the learning algorithm ##
It's time to design a simple algorithm to distinguish cat images from non-cat images.
You will build a Logistic Regression, using a Neural Network mindset. The following Figure explains why **Logistic Regression is actually a very simple Neural Network!**
<img src="images/LogReg_kiank.png" style="width:650px;height:400px;">
**Mathematical expression of the algorithm**:
For one example $x^{(i)}$:
$$z^{(i)} = w^T x^{(i)} + b \tag{1}$$
$$\hat{y}^{(i)} = a^{(i)} = sigmoid(z^{(i)})\tag{2}$$
$$ \mathcal{L}(a^{(i)}, y^{(i)}) = - y^{(i)} \log(a^{(i)}) - (1-y^{(i)} ) \log(1-a^{(i)})\tag{3}$$
The cost is then computed by summing over all training examples:
$$ J = \frac{1}{m} \sum_{i=1}^m \mathcal{L}(a^{(i)}, y^{(i)})\tag{6}$$
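As a quick numeric sanity check (the activations and labels below are made up for illustration), the per-example loss and the cost $J$ can be computed directly with NumPy:

```python
import numpy as np

# Hypothetical activations and labels for m = 3 examples (illustrative values only)
A = np.array([[0.9, 0.2, 0.6]])  # predicted probabilities a^(i)
Y = np.array([[1, 0, 1]])        # true labels y^(i)

# Per-example loss: L(a, y) = -y*log(a) - (1 - y)*log(1 - a)
losses = -(Y * np.log(A) + (1 - Y) * np.log(1 - A))

# Cost J: average of the per-example losses over the m examples
J = np.mean(losses)
```

Examples where the activation is close to the label contribute a small loss (here the second example, a = 0.2 against y = 0, contributes -log(0.8)), and the cost is just the mean of the losses.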
**Key steps**:
In this exercise, you will carry out the following steps:
- Initialize the parameters of the model
- Learn the parameters for the model by minimizing the cost
- Use the learned parameters to make predictions (on the test set)
- Analyse the results and conclude
## 4 - Building the parts of our algorithm ##
The main steps for building a Neural Network are:
1. Define the model structure (such as number of input features)
2. Initialize the model's parameters
3. Loop:
- Calculate current loss (forward propagation)
- Calculate current gradient (backward propagation)
- Update parameters (gradient descent)
You often build 1-3 separately and integrate them into one function we call `model()`.
### 4.1 - Helper functions
**Exercise**: Using your code from "Python Basics", implement `sigmoid()`. As you've seen in the figure above, you need to compute $sigmoid( w^T x + b) = \frac{1}{1 + e^{-(w^T x + b)}}$ to make predictions. Use np.exp().
```
# GRADED FUNCTION: sigmoid
def sigmoid(z):
"""
Compute the sigmoid of z
Arguments:
z -- A scalar or numpy array of any size.
Return:
s -- sigmoid(z)
"""
### START CODE HERE ### (≈ 1 line of code)
s = 1 / (1 + np.exp(-z))
### END CODE HERE ###
return s
print ("sigmoid([0, 2]) = " + str(sigmoid(np.array([0,2]))))
```
**Expected Output**:
<table>
<tr>
<td>**sigmoid([0, 2])**</td>
<td> [ 0.5 0.88079708]</td>
</tr>
</table>
### 4.2 - Initializing parameters
**Exercise:** Implement parameter initialization in the cell below. You have to initialize w as a vector of zeros. If you don't know what numpy function to use, look up np.zeros() in the Numpy library's documentation.
```
# GRADED FUNCTION: initialize_with_zeros
def initialize_with_zeros(dim):
"""
This function creates a vector of zeros of shape (dim, 1) for w and initializes b to 0.
Argument:
dim -- size of the w vector we want (or number of parameters in this case)
Returns:
w -- initialized vector of shape (dim, 1)
b -- initialized scalar (corresponds to the bias)
"""
### START CODE HERE ### (≈ 1 line of code)
w = np.zeros((dim,1))
b = 0
### END CODE HERE ###
assert(w.shape == (dim, 1))
assert(isinstance(b, float) or isinstance(b, int))
return w, b
dim = 2
w, b = initialize_with_zeros(dim)
print ("w = " + str(w))
print ("b = " + str(b))
```
**Expected Output**:
<table style="width:15%">
<tr>
<td> ** w ** </td>
<td> [[ 0.]
[ 0.]] </td>
</tr>
<tr>
<td> ** b ** </td>
<td> 0 </td>
</tr>
</table>
For image inputs, w will be of shape (num_px $\times$ num_px $\times$ 3, 1).
### 4.3 - Forward and Backward propagation
Now that your parameters are initialized, you can do the "forward" and "backward" propagation steps for learning the parameters.
**Exercise:** Implement a function `propagate()` that computes the cost function and its gradient.
**Hints**:
Forward Propagation:
- You get X
- You compute $A = \sigma(w^T X + b) = (a^{(1)}, a^{(2)}, ..., a^{(m-1)}, a^{(m)})$
- You calculate the cost function: $J = -\frac{1}{m}\sum_{i=1}^{m}y^{(i)}\log(a^{(i)})+(1-y^{(i)})\log(1-a^{(i)})$
Here are the two formulas you will be using:
$$ \frac{\partial J}{\partial w} = \frac{1}{m}X(A-Y)^T\tag{7}$$
$$ \frac{\partial J}{\partial b} = \frac{1}{m} \sum_{i=1}^m (a^{(i)}-y^{(i)})\tag{8}$$
```
# GRADED FUNCTION: propagate
def propagate(w, b, X, Y):
"""
Implement the cost function and its gradient for the propagation explained above
Arguments:
w -- weights, a numpy array of size (num_px * num_px * 3, 1)
b -- bias, a scalar
X -- data of size (num_px * num_px * 3, number of examples)
Y -- true "label" vector (containing 0 if non-cat, 1 if cat) of size (1, number of examples)
Return:
cost -- negative log-likelihood cost for logistic regression
dw -- gradient of the loss with respect to w, thus same shape as w
db -- gradient of the loss with respect to b, thus same shape as b
Tips:
- Write your code step by step for the propagation. np.log(), np.dot()
"""
m = X.shape[1]
# FORWARD PROPAGATION (FROM X TO COST)
### START CODE HERE ### (≈ 2 lines of code)
A = 1 / (1 + np.exp(-(np.dot(w.T, X) + b))) # compute activation
cost = -1 / m * np.sum(np.multiply(Y, np.log(A)) + np.multiply(1 - Y,np.log(1 - A))) # compute cost
### END CODE HERE ###
# BACKWARD PROPAGATION (TO FIND GRAD)
### START CODE HERE ### (≈ 2 lines of code)
dw = 1 / m * np.dot(X, (A - Y).T)
db = 1 / m * np.sum(A - Y)
### END CODE HERE ###
assert(dw.shape == w.shape)
assert(db.dtype == float)
cost = np.squeeze(cost)
assert(cost.shape == ())
grads = {"dw": dw,
"db": db}
return grads, cost
w, b, X, Y = np.array([[1.],[2.]]), 2., np.array([[1.,2.,-1.],[3.,4.,-3.2]]), np.array([[1,0,1]])
grads, cost = propagate(w, b, X, Y)
print ("dw = " + str(grads["dw"]))
print ("db = " + str(grads["db"]))
print ("cost = " + str(cost))
```
**Expected Output**:
<table style="width:50%">
<tr>
<td> ** dw ** </td>
<td> [[ 0.99845601]
[ 2.39507239]]</td>
</tr>
<tr>
<td> ** db ** </td>
<td> 0.00145557813678 </td>
</tr>
<tr>
<td> ** cost ** </td>
<td> 5.801545319394553 </td>
</tr>
</table>
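A common way to gain confidence in gradients like these (a standard sanity check, not part of the graded exercise) is to compare the analytic derivative against a centered finite-difference estimate of the cost. A minimal sketch on the same toy inputs, checking $\frac{\partial J}{\partial b}$:

```python
import numpy as np

def cost_fn(w, b, X, Y):
    # Forward pass: sigmoid activations, then mean cross-entropy cost
    A = 1 / (1 + np.exp(-(np.dot(w.T, X) + b)))
    m = X.shape[1]
    return -np.sum(Y * np.log(A) + (1 - Y) * np.log(1 - A)) / m

w = np.array([[1.], [2.]])
b = 2.
X = np.array([[1., 2., -1.], [3., 4., -3.2]])
Y = np.array([[1, 0, 1]])

# Analytic gradient: db = (1/m) * sum(A - Y)
A = 1 / (1 + np.exp(-(np.dot(w.T, X) + b)))
db = np.sum(A - Y) / X.shape[1]

# Centered finite difference: (J(b + eps) - J(b - eps)) / (2 * eps)
eps = 1e-6
db_numeric = (cost_fn(w, b + eps, X, Y) - cost_fn(w, b - eps, X, Y)) / (2 * eps)
print(abs(db - db_numeric))  # the gap should be tiny if the gradient is correct
```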
### 4.4 - Optimization
- You have initialized your parameters.
- You are also able to compute a cost function and its gradient.
- Now, you want to update the parameters using gradient descent.
**Exercise:** Write down the optimization function. The goal is to learn $w$ and $b$ by minimizing the cost function $J$. For a parameter $\theta$, the update rule is $ \theta = \theta - \alpha \text{ } d\theta$, where $\alpha$ is the learning rate.
```
# GRADED FUNCTION: optimize
def optimize(w, b, X, Y, num_iterations, learning_rate, print_cost = False):
"""
This function optimizes w and b by running a gradient descent algorithm
Arguments:
w -- weights, a numpy array of size (num_px * num_px * 3, 1)
b -- bias, a scalar
X -- data of shape (num_px * num_px * 3, number of examples)
Y -- true "label" vector (containing 0 if non-cat, 1 if cat), of shape (1, number of examples)
num_iterations -- number of iterations of the optimization loop
learning_rate -- learning rate of the gradient descent update rule
print_cost -- True to print the loss every 100 steps
Returns:
params -- dictionary containing the weights w and bias b
grads -- dictionary containing the gradients of the weights and bias with respect to the cost function
costs -- list of all the costs computed during the optimization, this will be used to plot the learning curve.
Tips:
You basically need to write down two steps and iterate through them:
1) Calculate the cost and the gradient for the current parameters. Use propagate().
2) Update the parameters using gradient descent rule for w and b.
"""
costs = []
for i in range(num_iterations):
# Cost and gradient calculation (≈ 1-4 lines of code)
### START CODE HERE ###
grads, cost = propagate(w, b, X, Y)
### END CODE HERE ###
# Retrieve derivatives from grads
dw = grads["dw"]
db = grads["db"]
# update rule (≈ 2 lines of code)
### START CODE HERE ###
w = w - learning_rate * dw
b = b - learning_rate * db
### END CODE HERE ###
# Record the costs
if i % 100 == 0:
costs.append(cost)
# Print the cost every 100 training iterations
if print_cost and i % 100 == 0:
print ("Cost after iteration %i: %f" %(i, cost))
params = {"w": w,
"b": b}
grads = {"dw": dw,
"db": db}
return params, grads, costs
params, grads, costs = optimize(w, b, X, Y, num_iterations= 100, learning_rate = 0.009, print_cost = False)
print ("w = " + str(params["w"]))
print ("b = " + str(params["b"]))
print ("dw = " + str(grads["dw"]))
print ("db = " + str(grads["db"]))
```
**Expected Output**:
<table style="width:40%">
<tr>
<td> **w** </td>
<td>[[ 0.19033591]
[ 0.12259159]] </td>
</tr>
<tr>
<td> **b** </td>
<td> 1.92535983008 </td>
</tr>
<tr>
<td> **dw** </td>
<td> [[ 0.67752042]
[ 1.41625495]] </td>
</tr>
<tr>
<td> **db** </td>
<td> 0.219194504541 </td>
</tr>
</table>
**Exercise:** The previous function will output the learned w and b. We are able to use w and b to predict the labels for a dataset X. Implement the `predict()` function. There are two steps to computing predictions:
1. Calculate $\hat{Y} = A = \sigma(w^T X + b)$
2. Convert the entries of A into 0 (if activation <= 0.5) or 1 (if activation > 0.5), and store the predictions in a vector `Y_prediction`. If you wish, you can use an `if`/`else` statement in a `for` loop (though there is also a way to vectorize this).
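As a sketch of the vectorized alternative mentioned above (using a made-up activation vector), the thresholding can be done in a single NumPy expression:

```python
import numpy as np

A = np.array([[0.7, 0.3, 0.5, 0.9]])  # hypothetical activations

# (A > 0.5) gives a boolean array; casting to float yields the 0/1 predictions
Y_prediction = (A > 0.5).astype(float)
```

Note that an activation exactly equal to 0.5 maps to 0, matching the convention above.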
```
# GRADED FUNCTION: predict
def predict(w, b, X):
'''
Predict whether the label is 0 or 1 using learned logistic regression parameters (w, b)
Arguments:
w -- weights, a numpy array of size (num_px * num_px * 3, 1)
b -- bias, a scalar
X -- data of size (num_px * num_px * 3, number of examples)
Returns:
Y_prediction -- a numpy array (vector) containing all predictions (0/1) for the examples in X
'''
m = X.shape[1]
Y_prediction = np.zeros((1,m))
w = w.reshape(X.shape[0], 1)
# Compute vector "A" predicting the probabilities of a cat being present in the picture
### START CODE HERE ### (≈ 1 line of code)
A = sigmoid(np.dot(w.T, X) + b)
### END CODE HERE ###
for i in range(A.shape[1]):
# Convert probabilities A[0,i] to actual predictions p[0,i]
### START CODE HERE ### (≈ 4 lines of code)
Y_prediction[0][i] = 1 if A[0][i] > 0.5 else 0
### END CODE HERE ###
assert(Y_prediction.shape == (1, m))
return Y_prediction
w = np.array([[0.1124579],[0.23106775]])
b = -0.3
X = np.array([[1.,-1.1,-3.2],[1.2,2.,0.1]])
print ("predictions = " + str(predict(w, b, X)))
```
**Expected Output**:
<table style="width:30%">
<tr>
<td>
**predictions**
</td>
<td>
[[ 1. 1. 0.]]
</td>
</tr>
</table>
<font color='blue'>
**What to remember:**
You've implemented several functions that:
- Initialize (w,b)
- Optimize the loss iteratively to learn parameters (w,b):
- computing the cost and its gradient
- updating the parameters using gradient descent
- Use the learned (w,b) to predict the labels for a given set of examples
## 5 - Merge all functions into a model ##
You will now see how the overall model is structured by putting all the building blocks (functions implemented in the previous parts) together, in the right order.
**Exercise:** Implement the model function. Use the following notation:
- Y_prediction_test for your predictions on the test set
- Y_prediction_train for your predictions on the train set
- w, costs, grads for the outputs of optimize()
```
# GRADED FUNCTION: model
def model(X_train, Y_train, X_test, Y_test, num_iterations = 2000, learning_rate = 0.5, print_cost = False):
"""
Builds the logistic regression model by calling the function you've implemented previously
Arguments:
X_train -- training set represented by a numpy array of shape (num_px * num_px * 3, m_train)
Y_train -- training labels represented by a numpy array (vector) of shape (1, m_train)
X_test -- test set represented by a numpy array of shape (num_px * num_px * 3, m_test)
Y_test -- test labels represented by a numpy array (vector) of shape (1, m_test)
num_iterations -- hyperparameter representing the number of iterations to optimize the parameters
learning_rate -- hyperparameter representing the learning rate used in the update rule of optimize()
print_cost -- Set to true to print the cost every 100 iterations
Returns:
d -- dictionary containing information about the model.
"""
### START CODE HERE ###
# initialize parameters with zeros (≈ 1 line of code)
w, b = initialize_with_zeros(X_train.shape[0])
# Gradient descent (≈ 1 line of code)
parameters, grads, costs = optimize(w, b, X_train, Y_train, num_iterations, learning_rate, print_cost)
# Retrieve parameters w and b from dictionary "parameters"
w = parameters["w"]
b = parameters["b"]
# Predict test/train set examples (≈ 2 lines of code)
Y_prediction_test = predict(w, b, X_test)
Y_prediction_train = predict(w, b, X_train)
### END CODE HERE ###
# Print train/test Errors
print("train accuracy: {} %".format(100 - np.mean(np.abs(Y_prediction_train - Y_train)) * 100))
print("test accuracy: {} %".format(100 - np.mean(np.abs(Y_prediction_test - Y_test)) * 100))
d = {"costs": costs,
"Y_prediction_test": Y_prediction_test,
"Y_prediction_train" : Y_prediction_train,
"w" : w,
"b" : b,
"learning_rate" : learning_rate,
"num_iterations": num_iterations}
return d
```
Run the following cell to train your model.
```
d = model(train_set_x, train_set_y, test_set_x, test_set_y, num_iterations = 2000, learning_rate = 0.005, print_cost = True)
```
**Expected Output**:
<table style="width:40%">
<tr>
<td> **Cost after iteration 0 ** </td>
<td> 0.693147 </td>
</tr>
<tr>
<td> <center> $\vdots$ </center> </td>
<td> <center> $\vdots$ </center> </td>
</tr>
<tr>
<td> **Train Accuracy** </td>
<td> 99.04306220095694 % </td>
</tr>
<tr>
<td>**Test Accuracy** </td>
<td> 70.0 % </td>
</tr>
</table>
**Comment**: Training accuracy is close to 100%. This is a good sanity check: your model is working and has high enough capacity to fit the training data. Test accuracy is 70%. That is actually not bad for this simple model, given the small dataset we used and that logistic regression is a linear classifier. But no worries, you'll build an even better classifier next week!
Also, you see that the model is clearly overfitting the training data. Later in this specialization you will learn how to reduce overfitting, for example by using regularization. Using the code below (and changing the `index` variable) you can look at predictions on pictures of the test set.
```
# Example of a picture that was wrongly classified.
index = 1
plt.imshow(test_set_x[:,index].reshape((num_px, num_px, 3)))
print ("y = " + str(test_set_y[0,index]) + ", you predicted that it is a \"" + classes[d["Y_prediction_test"][0,index]].decode("utf-8") + "\" picture.")
```
Let's also plot the cost function and the gradients.
```
# Plot learning curve (with costs)
costs = np.squeeze(d['costs'])
plt.plot(costs)
plt.ylabel('cost')
plt.xlabel('iterations (per hundreds)')
plt.title("Learning rate =" + str(d["learning_rate"]))
plt.show()
```
**Interpretation**:
You can see the cost decreasing. It shows that the parameters are being learned. However, you see that you could train the model even more on the training set. Try to increase the number of iterations in the cell above and rerun the cells. You might see that the training set accuracy goes up, but the test set accuracy goes down. This is called overfitting.
## 6 - Further analysis (optional/ungraded exercise) ##
Congratulations on building your first image classification model. Let's analyze it further, and examine possible choices for the learning rate $\alpha$.
#### Choice of learning rate ####
**Reminder**:
In order for Gradient Descent to work you must choose the learning rate wisely. The learning rate $\alpha$ determines how rapidly we update the parameters. If the learning rate is too large we may "overshoot" the optimal value. Similarly, if it is too small we will need too many iterations to converge to the best values. That's why it is crucial to use a well-tuned learning rate.
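The overshooting effect is easy to reproduce on a toy one-dimensional problem (illustrative only, not part of the assignment): when minimizing $J(\theta) = \theta^2$, whose gradient is $2\theta$, a small learning rate converges toward the minimum at 0 while a large one diverges.

```python
def gradient_descent(theta, alpha, steps):
    # Minimize J(theta) = theta**2 by repeatedly stepping against the gradient 2*theta
    for _ in range(steps):
        theta = theta - alpha * 2 * theta
    return theta

small = gradient_descent(theta=1.0, alpha=0.1, steps=50)  # converges toward 0
large = gradient_descent(theta=1.0, alpha=1.1, steps=50)  # overshoots and diverges
```

Each step multiplies $\theta$ by $(1 - 2\alpha)$, so the iteration converges only when $|1 - 2\alpha| < 1$, i.e. $0 < \alpha < 1$.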
Let's compare the learning curve of our model for several choices of learning rate. Run the cell below. This should take about 1 minute. Feel free to try values other than the three we have initialized the `learning_rates` variable to contain, and see what happens.
```
learning_rates = [0.01, 0.001, 0.0001]
models = {}
for i in learning_rates:
print ("learning rate is: " + str(i))
models[str(i)] = model(train_set_x, train_set_y, test_set_x, test_set_y, num_iterations = 1500, learning_rate = i, print_cost = False)
print ('\n' + "-------------------------------------------------------" + '\n')
for i in learning_rates:
plt.plot(np.squeeze(models[str(i)]["costs"]), label= str(models[str(i)]["learning_rate"]))
plt.ylabel('cost')
plt.xlabel('iterations (hundreds)')
legend = plt.legend(loc='upper center', shadow=True)
frame = legend.get_frame()
frame.set_facecolor('0.90')
plt.show()
```
**Interpretation**:
- Different learning rates give different costs and thus different predictions results.
- If the learning rate is too large (0.01), the cost may oscillate up and down. It may even diverge (though in this example, using 0.01 still eventually ends up at a good value for the cost).
- A lower cost doesn't mean a better model. You have to check if there is possibly overfitting. It happens when the training accuracy is a lot higher than the test accuracy.
- In deep learning, we usually recommend that you:
- Choose the learning rate that better minimizes the cost function.
- If your model overfits, use other techniques to reduce overfitting. (We'll talk about this in later videos.)
## 7 - Test with your own image (optional/ungraded exercise) ##
Congratulations on finishing this assignment. You can use your own image and see the output of your model. To do that:
1. Click on "File" in the upper bar of this notebook, then click "Open" to go on your Coursera Hub.
2. Add your image to this Jupyter Notebook's directory, in the "images" folder
3. Change your image's name in the following code
4. Run the code and check if the algorithm is right (1 = cat, 0 = non-cat)!
```
## START CODE HERE ## (PUT YOUR IMAGE NAME)
my_image = "my_image.jpg" # change this to the name of your image file
## END CODE HERE ##
# We preprocess the image to fit your algorithm.
fname = "images/" + my_image
image = np.array(ndimage.imread(fname, flatten=False))
my_image = scipy.misc.imresize(image, size=(num_px,num_px)).reshape((1, num_px*num_px*3)).T
my_image = my_image/255.  # scale pixel values after resizing, since imresize expects uint8 input
my_predicted_image = predict(d["w"], d["b"], my_image)
plt.imshow(image)
print("y = " + str(np.squeeze(my_predicted_image)) + ", your algorithm predicts a \"" + classes[int(np.squeeze(my_predicted_image)),].decode("utf-8") + "\" picture.")
```
<font color='blue'>
**What to remember from this assignment:**
1. Preprocessing the dataset is important.
2. You implemented each function separately: initialize(), propagate(), optimize(). Then you built a model().
3. Tuning the learning rate (which is an example of a "hyperparameter") can make a big difference to the algorithm. You will see more examples of this later in this course!
Finally, if you'd like, we invite you to try different things on this Notebook. Make sure you submit before trying anything. Once you submit, things you can play with include:
- Play with the learning rate and the number of iterations
- Try different initialization methods and compare the results
- Test other preprocessings (center the data, or divide each row by its standard deviation)
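The alternative preprocessing in the last bullet could be sketched as follows (assuming a flattened data matrix `X` of shape (features, m); the toy values are made up):

```python
import numpy as np

X = np.array([[10., 20., 30.],
              [ 1.,  2.,  3.]])  # toy flattened dataset: 2 features x 3 examples

# Center each feature (row) by subtracting its mean across examples
X_centered = X - X.mean(axis=1, keepdims=True)

# Then scale each row by its standard deviation across examples
X_standardized = X_centered / X_centered.std(axis=1, keepdims=True)
```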
Bibliography:
- http://www.wildml.com/2015/09/implementing-a-neural-network-from-scratch/
- https://stats.stackexchange.com/questions/211436/why-do-we-normalize-images-by-subtracting-the-datasets-image-mean-and-not-the-c
# 40 kotlin-dataframe puzzles
inspired by [100 pandas puzzles](https://github.com/ajcr/100-pandas-puzzles)
## Importing kotlin-dataframe
### Getting started
Difficulty: easy
**1.** Import kotlin-dataframe
```
%use dataframe(0.8.0-dev-595-0.11.0.13)
```
## DataFrame Basics
### A few of the fundamental routines for selecting, sorting, adding and aggregating data in DataFrames
Difficulty: easy
Consider the following columns:
```kotlin
val animal by columnOf("cat", "cat", "snake", "dog", "dog", "cat", "snake", "cat", "dog", "dog")
val age by columnOf(2.5, 3.0, 0.5, Double.NaN, 5.0, 2.0, 4.5, Double.NaN, 7.0, 3.0)
val visits by columnOf(1, 3, 2, 3, 2, 3, 1, 1, 2, 1)
val priority by columnOf("yes", "yes", "no", "yes", "no", "no", "no", "yes", "no", "no")
```
**2.** Create a DataFrame df from these columns.
```
val animal by columnOf("cat", "cat", "snake", "dog", "dog", "cat", "snake", "cat", "dog", "dog")
val age by columnOf(2.5, 3.0, 0.5, Double.NaN, 5.0, 2.0, 4.5, Double.NaN, 7.0, 3.0)
val visits by columnOf(1, 3, 2, 3, 2, 3, 1, 1, 2, 1)
val priority by columnOf("yes", "yes", "no", "yes", "no", "no", "no", "yes", "no", "no")
var df = dataFrameOf(animal, age, visits, priority)
df
```
**3.** Display a summary of the basic information about this DataFrame and its data.
```
df.schema()
df.describe()
```
**4.** Return the first 3 rows of the DataFrame df.
```
df[0 until 3] // df[0..2]
// or equivalently
df.head(3)
// or
df.take(3)
```
**5.** Select "animal" and "age" columns from the DataFrame df.
```
df[animal, age]
```
**6.** Select the data in rows [3, 4, 8] and in columns ["animal", "age"].
```
df[3, 4, 8][animal, age]
```
**7.** Select only the rows where the number of visits is greater than 2.
```
df.filter { visits > 2 }
```
**8.** Select the rows where the age is missing, i.e. it is NaN.
```
df.filter { age.isNaN() }
```
**9.** Select the rows where the animal is a cat and the age is less than 3.
```
df.filter { animal == "cat" && age < 3 }
```
**10.** Select the rows where age is between 2 and 4 (inclusive).
```
df.filter { age in 2.0..4.0 }
```
**11.** Change the age in row 5 to 1.5.
```
df.update { age }.at(5).withValue(1.5)
```
**12.** Calculate the sum of all visits in df (i.e. the total number of visits).
```
df.visits.sum()
```
**13.** Calculate the mean age for each different animal in df.
```
df.groupBy { animal }.mean { age }
```
**14.** Append a new row to df with your choice of values for each column. Then delete that row to return the original DataFrame.
```
val modifiedDf = df.append("dog", 5.5, 2, "no")
modifiedDf.dropLast()
```
**15.** Count the number of each type of animal in df.
```
df.groupBy { animal }.count()
```
**16.** Sort df first by the values in the 'age' column in descending order, then by the values in the 'visits' column in ascending order.
```
df.sortBy { age.desc() and visits }
```
**17.** The 'priority' column contains the values 'yes' and 'no'. Replace this column with a column of boolean values: 'yes' should be `true` and 'no' should be `false`.
```
df.convert { priority }.with { it == "yes" }
```
**18.** In the 'animal' column, change the 'dog' entries to 'corgi'.
```
df.update { animal }.where { it == "dog" }.with { "corgi" }
```
**19.** For each animal type and each number of visits, find the mean age. In other words, each row is an animal, each column is a number of visits and the values are the mean ages (hint: use a pivot table).
```
df.pivot(inward = false) { visits }.groupBy { animal }.mean(skipNA = true) { age }
```
## DataFrame: beyond the basics
### Slightly trickier: you may need to combine two or more methods to get the right answer
Difficulty: medium
The previous section was a tour through some basic but essential DataFrame operations. Below are some ways that you might need to cut your data, but for which there is no single "out of the box" method.
**20.** You have a DataFrame df with a column 'A' of integers. For example:
```kotlin
val df = dataFrameOf("A")(1, 2, 2, 3, 4, 5, 5, 5, 6, 7, 7)
```
How do you filter out rows which contain the same integer as the row immediately above?
You should be left with a column containing the following values:
```
1, 2, 3, 4, 5, 6, 7
```
```
val df = dataFrameOf("A")(1, 2, 2, 3, 4, 5, 5, 5, 6, 7, 7)
df
df.filter { prev()?.A != A }
df.filter { diff { A } != 0 }
```
We could use `distinct()` here but it won't work as desired if A is [1, 1, 2, 2, 1, 1] for example.
```
df.distinct()
```
**21.** Given a DataFrame of random numeric values:
```kotlin
val df = dataFrameOf("a", "b", "c").randomDouble(5) // this is a 5x3 DataFrame of double values
```
how do you subtract the row mean from each element in the row?
```
val df = dataFrameOf("a", "b", "c").randomDouble(5)
df
df.update { colsOf<Double>() }
.with { it - rowMean() }
```
**22.** Suppose you have DataFrame with 10 columns of real numbers, for example:
```kotlin
val names = ('a'..'j').map { it.toString() }
val df = dataFrameOf(names).randomDouble(5)
```
Which column of number has the smallest sum? Return that column's label.
```
val names = ('a'..'j').map { it.toString() }
val df = dataFrameOf(names).randomDouble(5)
df
df.sum().transpose().minBy("value")["name"]
```
**23.** How do you count how many unique rows a DataFrame has (i.e. ignore all rows that are duplicates)?
```
val df = dataFrameOf("a", "b", "c").randomInt(30, 0..2)
df.distinct().nrow()
```
**24.** In the cell below, you have a DataFrame `df` that consists of 10 columns of floating-point numbers. Exactly 5 entries in each row are NaN values.
For each row of the DataFrame, find the *column* which contains the *third* NaN value.
You should return a column of labels: `e, c, d, h, d`
```
val nan = Double.NaN
val names = ('a'..'j').map { it.toString() }
val data = listOf(
0.04, nan, nan, 0.25, nan, 0.43, 0.71, 0.51, nan, nan,
nan, nan, nan, 0.04, 0.76, nan, nan, 0.67, 0.76, 0.16,
nan, nan, 0.5 , nan, 0.31, 0.4 , nan, nan, 0.24, 0.01,
0.49, nan, nan, 0.62, 0.73, 0.26, 0.85, nan, nan, nan,
nan, nan, 0.41, nan, 0.05, nan, 0.61, nan, 0.48, 0.68
)
val df = dataFrameOf(names)(*data.toTypedArray())
df
df.map("res") {
namedValuesOf<Double>()
.filter { it.value.isNaN() }.drop(2)
.firstOrNull()?.name
}
```
**25.** A DataFrame has a column of groups 'grps' and a column of integer values 'vals':
```kotlin
val grps by column("a", "a", "a", "b", "b", "c", "a", "a", "b", "c", "c", "c", "b", "b", "c")
val vals by column(12, 345, 3, 1, 45, 14, 4, 52, 54, 23, 235, 21, 57, 3, 87)
val df = dataFrameOf(grps, vals)
```
For each group, find the sum of the three greatest values. You should end up with the answer as follows:
```
grps
a 409
b 156
c 345
```
```
val grps by columnOf("a", "a", "a", "b", "b", "c", "a", "a", "b", "c", "c", "c", "b", "b", "c")
val vals by columnOf(12, 345, 3, 1, 45, 14, 4, 52, 54, 23, 235, 21, 57, 3, 87)
val df = dataFrameOf(grps, vals)
df
df.groupBy { grps }.aggregate {
vals.sortDesc().take(3).sum() into "res"
}
```
**26.** The DataFrame `df` constructed below has two integer columns 'A' and 'B'. The values in 'A' are between 1 and 100 (inclusive).
For each group of 10 consecutive integers in 'A' (i.e. `(0, 10]`, `(10, 20]`, ...), calculate the sum of the corresponding values in column 'B'.
The answer as follows:
```
A
(0, 10] 635
(10, 20] 360
(20, 30] 315
(30, 40] 306
(40, 50] 750
(50, 60] 284
(60, 70] 424
(70, 80] 526
(80, 90] 835
(90, 100] 852
```
```
import kotlin.random.Random
val random = Random(42)
val list = List(200) { random.nextInt(1, 101)}
val df = dataFrameOf("A", "B")(*list.toTypedArray())
df
df.groupBy { A.map { (it - 1) / 10 } }.sum { B }
.sortBy { A }
.convert { A }.with { "(${it * 10}, ${it * 10 + 10}]" }
```
## DataFrames: harder problems
### These might require a bit of thinking outside the box...
Difficulty: hard
**27.** Consider a DataFrame `df` where there is an integer column 'X':
```kotlin
val df = dataFrameOf("X")(7, 2, 0, 3, 4, 2, 5, 0, 3 , 4)
```
For each value, count the difference back to the previous zero (or the start of the column, whichever is closer). These values should therefore be
```
[1, 2, 0, 1, 2, 3, 4, 0, 1, 2]
```
Make this a new column 'Y'.
```
val df = dataFrameOf("X")(7, 2, 0, 3, 4, 2, 5, 0, 3 , 4)
df
df.map("Y") {
if(it.X == 0) 0 else (prev()?.new() ?: 0) + 1
}
```
**28.** Consider the DataFrame constructed below which contains rows and columns of numerical data.
Create a list of the column-row index locations of the 3 largest values in this DataFrame. In this case, the answer should be:
```
[(0, d), (2, c), (3, f)]
```
```
val names = ('a'..'h').map { it.toString() } // val names = (0..7).map { it.toString() }
val random = Random(30)
val list = List(64) { random.nextInt(1, 101) }
val df = dataFrameOf(names)(*list.toTypedArray())
df
df.add("index") { index() }
.gather { dropLast() }.into("name", "vals")
.sortByDesc("vals").take(3)["index", "name"]
```
**29.** You are given the DataFrame below with a column of group IDs, 'grps', and a column of corresponding integer values, 'vals'.
```kotlin
val random = Random(31)
val lab = listOf("A", "B")
val vals by columnOf(List(15) { random.nextInt(-30, 30) })
val grps by columnOf(List(15) { lab[random.nextInt(0, 2)] })
val df = dataFrameOf(vals, grps)
```
Create a new column 'patched_values' which contains the same values as 'vals', but with every negative value replaced by the mean of the non-negative values in its group:
```
vals grps patched_vals
-17 B 21.000000
-7 B 21.000000
28 B 28.000000
16 B 16.000000
-21 B 21.000000
19 B 19.000000
-2 B 21.000000
-19 B 21.000000
16 A 16.000000
9 A 9.000000
-14 A 16.000000
-19 A 16.000000
-22 A 16.000000
-1 A 16.000000
23 A 23.000000
```
```
val random = Random(31)
val lab = listOf("A", "B")
val vals by columnOf(*Array(15) { random.nextInt(-30, 30) })
val grps by columnOf(*Array(15) { lab[random.nextInt(0, 2)] })
val df = dataFrameOf(vals, grps)
df
val means = df.filter { vals >= 0 }
.groupBy { grps }.mean()
.pivot { grps }.values { vals }
df.add("patched_values") {
if(vals < 0) means[grps] else vals
}
```
**30.** Implement a rolling mean over groups with window size 3, which ignores NaN values. For example, consider the following DataFrame:
```kotlin
val group by columnOf("a", "a", "b", "b", "a", "b", "b", "b", "a", "b", "a", "b")
val value by columnOf(1.0, 2.0, 3.0, Double.NaN, 2.0, 3.0, Double.NaN, 1.0, 7.0, 3.0, Double.NaN, 8.0)
val df = dataFrameOf(group, value)
df
```
```
group value
a 1.0
a 2.0
b 3.0
b NaN
a 2.0
b 3.0
b NaN
b 1.0
a 7.0
b 3.0
a NaN
b 8.0
```
The goal is:
```
1.000000
1.500000
3.000000
3.000000
1.666667
3.000000
3.000000
2.000000
3.666667
2.000000
4.500000
4.000000
```
E.g. the first window of size three for group 'b' has the values 3.0, NaN and 3.0 and occurs at row index 5. Instead of being NaN, the value in the new column at this row index should be 3.0 (just the two non-NaN values are used to compute the mean: (3+3)/2 = 3).
```
val groups by columnOf("a", "a", "b", "b", "a", "b", "b", "b", "a", "b", "a", "b")
val value by columnOf(1.0, 2.0, 3.0, Double.NaN, 2.0, 3.0, Double.NaN, 1.0, 7.0, 3.0, Double.NaN, 8.0)
val df = dataFrameOf(groups, value)
df
df.add("id"){ index() }
.groupBy { groups }.add("res") {
near(-2..0).map { it.value }.filter { !it.isNaN() }.average()
}.concat()
.sortBy("id")
.remove("id")
```
## Date
Difficulty: easy/medium
**31.** Create a column of LocalDate values that contains each day of 2015, and a column of random numbers.
```
@file:DependsOn("org.jetbrains.kotlinx:kotlinx-datetime-jvm:0.3.1")
import kotlinx.datetime.*
class DateRangeIterator(first: LocalDate, last: LocalDate, val step: Int) : Iterator<LocalDate> {
private val finalElement: LocalDate = last
private var hasNext: Boolean = if (step > 0) first <= last else first >= last
private var next: LocalDate = if (hasNext) first else finalElement
override fun hasNext(): Boolean = hasNext
override fun next(): LocalDate {
val value = next
if (value == finalElement) {
if (!hasNext) throw kotlin.NoSuchElementException()
hasNext = false
}
else {
next = next.plus(step, DateTimeUnit.DayBased(1))
}
return value
}
}
operator fun ClosedRange<LocalDate>.iterator() = DateRangeIterator(this.start, this.endInclusive, 1)
fun ClosedRange<LocalDate>.toList(): List<LocalDate> {
return when (val size = this.start.daysUntil(this.endInclusive)) {
0 -> emptyList()
1 -> listOf(iterator().next())
else -> {
val dest = ArrayList<LocalDate>(size)
for (item in this) {
dest.add(item)
}
dest
}
}
}
val start = LocalDate(2015, 1, 1)
val end = LocalDate(2015, 12, 31)
val days = (start..end).toList()
val dti = days.toColumn("dti")
val s = List(dti.size()) { Random.nextDouble() }.toColumn("s")
val df = dataFrameOf(dti, s)
df.head()
```
**32.** Find the sum of the values in s for every Wednesday.
```
df.filter { dti.dayOfWeek.ordinal == 2 }.sum { s }
```
**33.** For each calendar month in s, find the mean of values.
```
df.groupBy { dti.map { it.month } named "month" }.mean()
```
**34.** For each group of four consecutive calendar months in s, find the date on which the highest value occurred.
```
df.add("month4") {
when(dti.monthNumber) {
in 1..4 -> 1
in 5..8 -> 2
else -> 3
}
}.groupBy("month4").aggregate { maxBy(s) into "max" }.flatten()
```
**35.** Create a column consisting of the third Thursday in each month for the years 2015 and 2016.
```
val start = LocalDate(2015, 1, 1)
val end = LocalDate(2016, 12, 31)
// the third Thursday of a month always falls on day 15..21
(start..end).toList().toColumn("3thu").filter {
    it.dayOfWeek == DayOfWeek.THURSDAY && it.dayOfMonth in 15..21
}
```
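pandas ships a frequency alias for exactly this: `WOM-3THU`, the third Thursday ("week of month") of each month. A quick cross-check for the same two-year span:

```python
import pandas as pd

# 'WOM-3THU' anchors each generated date to the third Thursday of its month.
thirds = pd.date_range("2015-01-01", "2016-12-31", freq="WOM-3THU")
print(len(thirds))        # 24 (one per month over two years)
print(thirds[0].date())   # 2015-01-15
```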
## Cleaning Data
### Making a DataFrame easier to work with
Difficulty: *easy/medium*
It happens all the time: someone gives you data containing malformed strings, lists and missing data. How do you tidy it up so you can get on with the analysis?
Take this monstrosity as the DataFrame to use in the following puzzles:
```kotlin
val fromTo = listOf("LoNDon_paris", "MAdrid_miLAN", "londON_StockhOlm", "Budapest_PaRis", "Brussels_londOn").toColumn("From_To")
val flightNumber = listOf(10045.0, Double.NaN, 10065.0, Double.NaN, 10085.0).toColumn("FlightNumber")
val recentDelays = listOf(listOf(23, 47), listOf(), listOf(24, 43, 87), listOf(13), listOf(67, 32)).toColumn("RecentDelays")
val airline = listOf("KLM(!)", "{Air France} (12)", "(British Airways. )", "12. Air France", "'Swiss Air'").toColumn("Airline")
val df = dataFrameOf(fromTo, flightNumber, recentDelays, airline)
```
It looks like this:
```
From_To FlightNumber RecentDelays Airline
LoNDon_paris 10045.000000 [23, 47] KLM(!)
MAdrid_miLAN NaN [] {Air France} (12)
londON_StockhOlm 10065.000000 [24, 43, 87] (British Airways. )
Budapest_PaRis NaN [13] 12. Air France
Brussels_londOn 10085.000000 [67, 32] 'Swiss Air'
```
```
val fromTo = listOf("LoNDon_paris", "MAdrid_miLAN", "londON_StockhOlm", "Budapest_PaRis", "Brussels_londOn").toColumn("From_To")
val flightNumber = listOf(10045.0, Double.NaN, 10065.0, Double.NaN, 10085.0).toColumn("FlightNumber")
val recentDelays = listOf(listOf(23, 47), listOf(), listOf(24, 43, 87), listOf(13), listOf(67, 32)).toColumn("RecentDelays")
val airline = listOf("KLM(!)", "{Air France} (12)", "(British Airways. )", "12. Air France", "'Swiss Air'").toColumn("Airline")
var df = dataFrameOf(fromTo, flightNumber, recentDelays, airline)
df
```
**36.** Some values in the FlightNumber column are missing (they are NaN). These numbers are meant to increase by 10 with each row, so 10055 and 10075 need to be put in place. Modify df to fill in these missing numbers and make the column an integer column (instead of a float column).
```
val df1 = df.update { FlightNumber }
.where { it.isNaN() }.with { prev()!!.FlightNumber + (next()!!.FlightNumber - prev()!!.FlightNumber) / 2 }
.convert { FlightNumber }.toInt()
df1
```
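In pandas the same gap-filling reduces to linear interpolation, since the flight numbers step evenly by 10. A sketch with just the FlightNumber values:

```python
import numpy as np
import pandas as pd

fn = pd.Series([10045.0, np.nan, 10065.0, np.nan, 10085.0], name="FlightNumber")
filled = fn.interpolate().astype(int)   # linear fill, then integer dtype
print(filled.tolist())  # [10045, 10055, 10065, 10075, 10085]
```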
**37.** The **From_To** column would be better as two separate columns! Split each string on the underscore delimiter **_** to give two new columns. Assign the correct names 'From' and 'To' to these columns.
```
var df2 = df.split { From_To }.by("_").into("From", "To")
df2
```
**38.** Notice how the capitalisation of the city names is all mixed up in the new From and To columns. Standardise the strings so that only the first letter is uppercase (e.g. "londON" should become "London").
```
df2 = df2.update { From and To }.with { it.lowercase().replaceFirstChar(Char::uppercase) }
df2
```
**39.** In the **Airline** column, you can see some extra punctuation and symbols have appeared around the airline names. Pull out just the airline name. E.g. `'(British Airways. )'` should become `'British Airways'`.
```
df2 = df2.update { Airline }.with {
"([a-zA-Z\\s]+)".toRegex().find(it)?.value ?: ""
}
df2
```
**40.** In the **RecentDelays** column, the values have been entered into the DataFrame as a list. We would like each first value in its own column, each second value in its own column, and so on. If there isn't an Nth value, the value should be NaN.
Expand the column of lists into columns named 'delay_1', 'delay_2', etc., and replace the unwanted RecentDelays column in `df` with these delay columns.
```
val prep_df = df2
.convert { RecentDelays }.with { it.map { it.toDouble() } }
.split { RecentDelays }.default(Double.NaN).into { "delay_$it" }
prep_df
```
The DataFrame should look much better now:
|From |To |FlightNumber |delay_1 |delay_2 |delay_3 |Airline |
|------------|----------|-----------------|---------------|---------------|---------------|----------------|
|London |Paris |10045 |23.000000 |47.000000 |NaN |KLM |
|Madrid |Milan |10055 |NaN |NaN |NaN |Air France |
|London |Stockholm |10065 |24.000000 |43.000000 |87.000000 |British Airways |
|Budapest |Paris |10075 |13.000000 |NaN |NaN | Air France |
|Brussels |London |10085 |67.000000 |32.000000 |NaN |Swiss Air |
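The list-expansion step of puzzle 40 can be cross-checked in pandas, where building a DataFrame from ragged lists pads missing positions with NaN automatically:

```python
import numpy as np
import pandas as pd

delays = [[23, 47], [], [24, 43, 87], [13], [67, 32]]
expanded = pd.DataFrame(delays)                  # ragged rows -> NaN padding
expanded.columns = [f"delay_{n + 1}" for n in range(expanded.shape[1])]
print(expanded.shape)             # (5, 3)
print(expanded.iloc[0].tolist())  # [23.0, 47.0, nan]
```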
# Veg ET validation
```
import pandas as pd
from time import time
import xarray as xr
import numpy as np
def _get_year_month(product, tif):
fn = tif.split('/')[-1]
fn = fn.replace(product,'')
fn = fn.replace('.tif','')
fn = fn.replace('_','')
print(fn)
return fn
def _file_object(bucket_prefix,product_name,year,day):
if product_name == 'NDVI':
decade = str(year)[:3]+'0'
variable_prefix = bucket_prefix + 'NDVI_FORE_SCE_MED/delaware_basin_FS_'
file_object = variable_prefix + str(decade) + '/' + 'FS_{0}_{1}_med_{2}.tif'.format(str(decade), product_name, day)
elif product_name == 'ETo':
decade = str(year)[:3]+'0'
variable_prefix = bucket_prefix +'ETo_Moving_Average_byDOY/'
file_object = variable_prefix + '{0}_{1}/'.format(str(decade), str(int(decade)+10)) + '{0}_DOY{1}.tif'.format(product_name,day)
elif product_name == 'Tasavg' or product_name == 'Tasmax' or product_name == 'Tasmin':
variable_prefix = bucket_prefix + 'Temp/' + product_name + '/'
#variable_prefix = bucket_prefix + 'TempCelsius/' + product_name + '/'
file_object = variable_prefix + str(year) + '/' + '{}_'.format(product_name) + str(year) + day + '.tif'
elif product_name == 'PPT':
variable_prefix = bucket_prefix + product_name + '/'
file_object = variable_prefix + str(year) + '/' + '{}_'.format(product_name) + str(year) + day + '.tif'
else:
file_object = bucket_prefix + str(year) + '/' + f'{product_name}_' + str(year) + day + '.tif'
return file_object
def create_s3_list_of_days_start_end(main_bucket_prefix, start_year,start_day, end_year, end_day, product_name):
the_list = []
years = []
for year in (range(int(start_year),int(end_year)+1)):
years.append(year)
if len(years) == 1:
for i in range(int(start_day),int(end_day)):
day = f'{i:03d}'
file_object = _file_object(main_bucket_prefix,product_name,start_year,day)
the_list.append(file_object)
elif len(years) == 2:
for i in range(int(start_day),366):
day = f'{i:03d}'
file_object = _file_object(main_bucket_prefix,product_name,start_year,day)
the_list.append(file_object)
for i in range(1,int(end_day)):
day = f'{i:03d}'
file_object = _file_object(main_bucket_prefix,product_name,end_year,day)
the_list.append(file_object)
else:
for i in range(int(start_day),366):
day = f'{i:03d}'
file_object = _file_object(main_bucket_prefix,product_name,start_year,day)
the_list.append(file_object)
for year in years[1:-1]:
for i in range(1,366):
day = f'{i:03d}'
file_object = _file_object(main_bucket_prefix,product_name,year,day)
the_list.append(file_object)
for i in range(1,int(end_day)):
day = f'{i:03d}'
file_object = _file_object(main_bucket_prefix,product_name,end_year,day)
the_list.append(file_object)
return the_list
def xr_build_cube_concat_ds_one(tif_list, product, x, y):
start = time()
my_da_list =[]
year_month_list = []
for tif in tif_list:
#tiffile = 's3://dev-et-data/' + tif
tiffile = tif
print(tiffile)
da = xr.open_rasterio(tiffile)
daSub = da.sel(x=x, y=y, method='nearest')
#da = da.squeeze().drop(labels='band')
#da.name=product
my_da_list.append(daSub)
tnow = time()
elapsed = tnow - start
print(tif, elapsed)
year_month_list.append(_get_year_month(product, tif))
da = xr.concat(my_da_list, dim='band')
da = da.rename({'band':'year_month'})
da = da.assign_coords(year_month=year_month_list)
DS = da.to_dataset(name=product)
return(DS)
main_bucket_prefix='s3://dev-et-data/in/DelawareRiverBasin/'
start_year = '1950'
start_day = '1'
end_year = '1950'
end_day = '11'
x=-75
y =41
```
## Step 1: Get pixel values for input variables
```
df_list=[]
for product in ['PPT','Tasavg', 'Tasmin', 'Tasmax', 'NDVI', 'ETo']:
print("==="*30)
print("processing product",product)
tif_list = create_s3_list_of_days_start_end(main_bucket_prefix, start_year,start_day, end_year, end_day, product)
print (tif_list)
ds_pix=xr_build_cube_concat_ds_one(tif_list, product, x, y)
my_index = ds_pix['year_month'].values
my_array = ds_pix[product].values
df = pd.DataFrame(my_array, columns=[product,], index=my_index)
df_list.append(df)
df_reset_list = []
for dframe in df_list:
print (dframe)
df_reset = dframe.set_index(df_list[0].index)
print (df_reset)
df_reset_list.append(df_reset)
df_veget = pd.concat(df_reset_list, axis=1)
df_veget['NDVI'] *= 0.0001
df_veget['Tasavg'] -= 273.15
df_veget['Tasmin'] -= 273.15
df_veget['Tasmax'] -= 273.15
df_veget
for static_product in ['awc', 'por', 'fc', 'intercept', 'water']:
if static_product == 'awc' or static_product == 'por' or static_product == 'fc':
file_object = ['s3://dev-et-data/in/NorthAmerica/Soil/' + '{}_NA_mosaic.tif'.format(static_product)]
elif static_product == 'intercept':
file_object = ['s3://dev-et-data/in/NorthAmerica/Soil/' + 'Intercept2016_nowater_int.tif']
else:
file_object = ['s3://dev-et-data/in/DelawareRiverBasin/' + 'DRB_water_mask_inland.tif']
ds_pix=xr_build_cube_concat_ds_one(file_object, static_product, x, y)
df_veget['{}'.format(static_product)] = ds_pix[static_product].values[0]
print (df_veget)
df_veget
```
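The re-indexing loop above exists because each per-product frame carries its own `year_month` index string; `pd.concat(axis=1)` only lines rows up when the indices agree. A toy sketch (the index labels here are made up):

```python
import pandas as pd

a = pd.DataFrame({"PPT": [1.0, 2.0]}, index=["1950001", "1950002"])
b = pd.DataFrame({"ETo": [3.0, 4.0]}, index=["ETo_1950001", "ETo_1950002"])

# Without alignment, concat would produce 4 rows riddled with NaN;
# forcing a shared index keeps one row per day.
merged = pd.concat([a, b.set_index(a.index)], axis=1)
print(merged.shape)  # (2, 2)
```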
## Step 2: Run Veg ET model for a selected pixel
```
pptcorr = 1
rf_value = 0.167
rf_low_thresh_temp = 0
rf_high_thresh_temp = 6
melt_factor = 0.06
dc_coeff = 0.65
rf_coeff = 0.35
k_factor = 1.25
ndvi_factor = 0.2
water_factor = 0.7
bias_corr = 0.85
alfa_factor = 1.25
df_veget['PPTcorr'] = df_veget['PPT']*pptcorr
df_veget['PPTeff'] = df_veget['PPTcorr']*(1-df_veget['intercept']/100)
df_veget['PPTinter'] = df_veget['PPTcorr']*(df_veget['intercept']/100)
df_veget['Tmin0'] = np.where(df_veget['Tasmin']<0,0,df_veget['Tasmin'])
df_veget['Tmax0'] = np.where(df_veget['Tasmax']<0,0,df_veget['Tasmax'])
rain_frac_conditions = [(df_veget['Tasavg']<=rf_low_thresh_temp),
(df_veget['Tasavg']>=rf_low_thresh_temp)&(df_veget['Tasavg']<=rf_high_thresh_temp),
(df_veget['Tasavg']>=rf_high_thresh_temp)]
rain_frac_values = [0,df_veget['Tasavg']*rf_value,1]
df_veget['rain_frac'] = np.select(rain_frac_conditions,rain_frac_values)
df_veget['melt_rate'] = melt_factor*(df_veget['Tmax0']**2 - df_veget['Tmax0']*df_veget['Tmin0'])
df_veget['snow_melt_rate'] = np.where(df_veget['Tasavg']<0,0,df_veget['melt_rate'])
df_veget['rain']=df_veget['PPTeff']*df_veget['rain_frac']
def _snow_water_equivalent(rain_frac, PPTeff):
swe_value = (1-rain_frac)*PPTeff
return swe_value
def _snow_melt(melt_rate, swe, snowpack):
    if melt_rate <= (swe + snowpack):
        snowmelt_value = melt_rate
    else:
        snowmelt_value = swe + snowpack  # swe, not the undefined swe_value
    return snowmelt_value
def _snow_pack(snowpack_prev,swe,snow_melt):
if (snowpack_prev + swe - snow_melt) < 0:
SNOW_pack_value = 0
else:
SNOW_pack_value = snowpack_prev + swe - snow_melt
return SNOW_pack_value
def _runoff(snow_melt, awc, swi):
    # runoff is the soil water input in excess of the available water capacity
    if swi < awc:
        rf_value = 0
    else:
        rf_value = swi - awc
    return rf_value
def _surface_runoff(rf, por,fc,rf_coeff):
if rf <= por - fc:
srf_value = rf*rf_coeff
else:
srf_value = (rf - (por - fc)) + rf_coeff*(por - fc)
return srf_value
def _etasw_calc(k_factor, ndvi, ndvi_factor, eto, bias_corr, swi, awc, water, water_factor, alfa_factor):
etasw1A_value = (k_factor*ndvi+ndvi_factor)*eto*bias_corr
etasw1B_value = (k_factor*ndvi)*eto*bias_corr
if ndvi > 0.4:
etasw1_value = etasw1A_value
else:
etasw1_value = etasw1B_value
etasw2_value = swi/(0.5*awc)*etasw1_value
if swi>0.5*awc:
etasw3_value = etasw1_value
else:
etasw3_value = etasw2_value
if etasw3_value>swi:
etasw4_value = swi
else:
etasw4_value = etasw3_value
if etasw4_value> awc:
etasw5_value = awc
else:
etasw5_value = etasw4_value
etc_value = etasw1A_value
if water == 0:
etasw_value = etasw5_value
else:
etasw_value = water_factor*alfa_factor*bias_corr*eto
if (etc_value - etasw_value)<0:
netet_value = 0
else:
netet_value = etc_value - etasw_value
return [etasw1A_value, etasw1B_value, etasw1_value, etasw2_value, etasw3_value, etasw4_value, etasw5_value, etasw_value, etc_value, netet_value]
def _soil_water_final(swi, awc, etasw5):
    if swi > awc:
        swf_value = awc - etasw5
    elif (swi - etasw5) < 0:
        swf_value = 0
    else:
        swf_value = swi - etasw5
    return swf_value
swe_list = []
snowmelt_list = []
snwpk_list = []
swi_list = []
rf_list = []
srf_list = []
dd_list = []
etasw1A_list = []
etasw1B_list = []
etasw1_list = []
etasw2_list = []
etasw3_list = []
etasw4_list = []
etasw5_list = []
etasw_list = []
etc_list = []
netet_list = []
swf_list = []
for index, row in df_veget.iterrows():
if index == df_veget.index[0]:
swe_value = 0
swe_list.append(swe_value)
snowmelt_value = swe_value
snowmelt_list.append(snowmelt_value)
snwpk_value = 0
snwpk_list.append(snwpk_value)
swi_value = 0.5*row['awc']+ row['PPTeff'] + snowmelt_value
swi_list.append(swi_value)
rf_value = _runoff(snowmelt_value,row['awc'],swi_value)
rf_list.append(rf_value)
srf_value = _surface_runoff(rf_value, row['por'],row['fc'],rf_coeff)
srf_list.append(srf_value)
dd_value = rf_value - srf_value
dd_list.append(dd_value)
eta_variables = _etasw_calc(k_factor, row['NDVI'], ndvi_factor, row['ETo'], bias_corr, swi_value, row['awc'], row['water'], water_factor, alfa_factor)
etasw1A_list.append(eta_variables[0])
etasw1B_list.append(eta_variables[1])
etasw1_list.append(eta_variables[2])
etasw2_list.append(eta_variables[3])
etasw3_list.append(eta_variables[4])
etasw4_list.append(eta_variables[5])
etasw5_list.append(eta_variables[6])
etasw_list.append(eta_variables[7])
etc_list.append(eta_variables[8])
netet_list.append(eta_variables[9])
swf_value = _soil_water_final(swi_value,row['awc'],eta_variables[7])
swf_list.append(swf_value)
else:
swe_value = _snow_water_equivalent(row['rain_frac'],row['PPTeff'])
swe_list.append(swe_value)
snowmelt_value = _snow_melt(row['melt_rate'],swe_value,snwpk_list[-1])
snowmelt_list.append(snowmelt_value)
snwpk_value = _snow_pack(snwpk_list[-1],swe_value,snowmelt_value)
snwpk_list.append(snwpk_value)
swi_value = swf_list[-1] + row['rain'] + snowmelt_value
swi_list.append(swi_value)
rf_value = _runoff(snowmelt_value,row['awc'],swi_value)
rf_list.append(rf_value)
srf_value = _surface_runoff(rf_value, row['por'],row['fc'],rf_coeff)
srf_list.append(srf_value)
dd_value = rf_value - srf_value
dd_list.append(dd_value)
eta_variables = _etasw_calc(k_factor, row['NDVI'], ndvi_factor, row['ETo'], bias_corr, swi_value, row['awc'], row['water'], water_factor, alfa_factor)
etasw1A_list.append(eta_variables[0])
etasw1B_list.append(eta_variables[1])
etasw1_list.append(eta_variables[2])
etasw2_list.append(eta_variables[3])
etasw3_list.append(eta_variables[4])
etasw4_list.append(eta_variables[5])
etasw5_list.append(eta_variables[6])
etasw_list.append(eta_variables[7])
etc_list.append(eta_variables[8])
netet_list.append(eta_variables[9])
swf_value = _soil_water_final(swi_value,row['awc'],eta_variables[7])
swf_list.append(swf_value)
df_veget['swe'] = swe_list
df_veget['snowmelt'] = snowmelt_list
df_veget['snwpk'] = snwpk_list
df_veget['swi'] = swi_list
df_veget['rf'] = rf_list
df_veget['srf'] = srf_list
df_veget['dd'] = dd_list
df_veget['etasw1A'] = etasw1A_list
df_veget['etasw1B'] = etasw1B_list
df_veget['etasw1'] = etasw1_list
df_veget['etasw2'] = etasw2_list
df_veget['etasw3'] = etasw3_list
df_veget['etasw4'] = etasw4_list
df_veget['etasw5'] = etasw5_list
df_veget['etasw'] = etasw_list
df_veget['etc'] = etc_list
df_veget['netet'] = netet_list
df_veget['swf'] = swf_list
pd.set_option('display.max_columns', None)
df_veget
```
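A note on the `np.select` rain-fraction step above: the listed conditions overlap at the thresholds (`<=` and `>=`), and `np.select` takes the first condition that matches. A standalone sketch of the piecewise rule with non-overlapping conditions (the temperatures are made-up values):

```python
import numpy as np
import pandas as pd

rf_value, lo, hi = 0.167, 0, 6
tavg = pd.Series([-5.0, 0.0, 3.0, 6.0, 10.0])  # hypothetical daily means, deg C

# 0 below 0 deg C, linear ramp (0.167 per deg C) between 0 and 6, 1 above 6.
conditions = [tavg <= lo, (tavg > lo) & (tavg < hi), tavg >= hi]
choices = [0.0, tavg * rf_value, 1.0]
rain_frac = np.select(conditions, choices)
print(rain_frac)
```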
## Step 3: Sample output data computed in the cloud
```
output_bucket_prefix='s3://dev-et-data/enduser/DelawareRiverBasin/r_01_29_2021_drb35pct/'
#output_bucket_prefix = 's3://dev-et-data/out/DelawareRiverBasin/Run03_11_2021/run_drbcelsius_5yr_0311_chip39.84N-73.72E_o/'
df_list_cloud=[]
for product_out in ['rain', 'swe', 'snowmelt', 'snwpk','srf', 'dd', 'etasw5', 'etasw', 'netet', 'swf', 'etc']:
print("==="*30)
print("processing product",product_out)
tif_list = create_s3_list_of_days_start_end(output_bucket_prefix, start_year,start_day, end_year, end_day, product_out)
ds_pix=xr_build_cube_concat_ds_one(tif_list, product_out, x, y)
my_index = ds_pix['year_month'].values
my_array = ds_pix[product_out].values
df = pd.DataFrame(my_array, columns=['{}_cloud'.format(product_out),], index=my_index)
df_list_cloud.append(df)
for dframe in df_list_cloud:
print(dframe)
df_veget_cloud = pd.concat(df_list_cloud, axis=1)
df_veget_cloud
df_validation = pd.concat([df_veget,df_veget_cloud], axis=1)
df_validation
```
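Before plotting, the agreement between the data-frame run and the cloud run can be quantified column by column. A sketch (the `*_cloud` suffix follows the naming used in Step 3; the toy frame stands in for `df_validation`):

```python
import pandas as pd

def max_abs_diff(df: pd.DataFrame, name: str) -> float:
    """Largest absolute disagreement between a manual column and its cloud twin."""
    return float((df[name] - df[f"{name}_cloud"]).abs().max())

toy = pd.DataFrame({"etasw": [1.0, 2.0, 3.0], "etasw_cloud": [1.0, 2.1, 3.0]})
print(round(max_abs_diff(toy, "etasw"), 6))  # 0.1
```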
## Step 4: Visualization of validation results
### Import Visualization libraries
```
import os
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
import seaborn as sns
import matplotlib.ticker as mtick
from scipy import stats
import matplotlib.patches as mpatches
```
### Visualize Veg ET input variables
```
fig, axs = plt.subplots(3, 1, figsize=(15,12))
axs[0].bar(df_validation.index, df_validation["PPT"], color = 'lightskyblue', width = 0.1)
ax0 = axs[0].twinx()
ax0.plot(df_validation.index, df_validation["NDVI"], color = 'seagreen')
axs[0].set_ylabel("PPT, mm")
ax0.set_ylabel("NDVI")
ax0.set_ylim([0,1])
low_threshold = np.array([0 for i in range(len(df_validation))])
axs[1].plot(df_validation.index, low_threshold, '--', color = 'dimgray', linewidth=0.8)
high_threshold = np.array([6 for i in range(len(df_validation))])
axs[1].plot(df_validation.index, high_threshold, '--', color = 'dimgray', linewidth=0.8)
axs[1].plot(df_validation.index, df_validation["Tasmin"], color = 'navy', linewidth=2.5)
axs[1].plot(df_validation.index, df_validation["Tasavg"], color = 'slategray', linewidth=2.5)
axs[1].plot(df_validation.index, df_validation["Tasmax"], color = 'red', linewidth=2.5)
axs[1].set_ylabel("T, deg C")
axs[2].plot(df_validation.index, df_validation["ETo"], color = 'goldenrod')
axs[2].plot(df_validation.index, df_validation["etasw"], color = 'royalblue')
axs[2].set_ylabel("ET, mm")
ppt = mpatches.Patch(color='lightskyblue', label='PPT')
ndvi = mpatches.Patch(color='seagreen', label='NDVI')
tmax = mpatches.Patch(color='red', label='Tmax')
tavg = mpatches.Patch(color='slategray', label='Tavg')
tmin = mpatches.Patch(color='navy', label='Tmin')
eto = mpatches.Patch(color='goldenrod', label='ETo')
eta = mpatches.Patch(color='royalblue', label='ETa')
plt.legend(handles=[ppt, ndvi, tmax, tavg, tmin, eto,eta])
```
### Compare Veg ET output variables computed with data frames vs output variables computed in the cloud
```
fig, axs = plt.subplots(5, 2, figsize=(20,25))
axs[0, 0].bar(df_validation.index, df_validation["rain"], color = 'skyblue')
axs[0, 0].plot(df_validation.index, df_validation["rain_cloud"], 'ro', color = 'crimson')
axs[0, 0].set_title("Rain amount from precipitation (rain)")
axs[0, 0].set_ylabel("rain, mm/day")
axs[0, 1].bar(df_validation.index, df_validation["swe"], color = 'skyblue')
axs[0, 1].plot(df_validation.index, df_validation["swe_cloud"], 'ro', color = 'crimson')
axs[0, 1].set_title("Snow water equivalent from precipitation (swe)")
axs[0, 1].set_ylabel("swe, mm/day")
axs[1, 0].bar(df_validation.index, df_validation["snowmelt"], color = 'skyblue')
axs[1, 0].plot(df_validation.index, df_validation["snowmelt_cloud"], 'ro', color = 'crimson')
axs[1, 0].set_title("Amount of melted snow (snowmelt)")
axs[1, 0].set_ylabel("snowmelt, mm/day")
axs[1, 1].bar(df_validation.index, df_validation["snwpk"], color = 'skyblue')
axs[1, 1].plot(df_validation.index, df_validation["snwpk_cloud"], 'ro', color = 'crimson')
axs[1, 1].set_title("Snow pack amount (snwpk)")
axs[1, 1].set_ylabel("snpk, mm/day")
axs[2, 0].bar(df_validation.index, df_validation["srf"], color = 'skyblue')
axs[2, 0].plot(df_validation.index, df_validation["srf_cloud"], 'ro', color = 'crimson')
axs[2, 0].set_title("Surface runoff (srf)")
axs[2, 0].set_ylabel("srf, mm/day")
axs[2, 1].bar(df_validation.index, df_validation["dd"], color = 'skyblue')
axs[2, 1].plot(df_validation.index, df_validation["dd_cloud"], 'ro', color = 'crimson')
axs[2, 1].set_title("Deep drainage (dd)")
axs[2, 1].set_ylabel("dd, mm/day")
axs[3, 0].bar(df_validation.index, df_validation["etasw"], color = 'skyblue')
axs[3, 0].plot(df_validation.index, df_validation["etasw_cloud"], 'ro', color = 'crimson')
axs[3, 0].set_title("ETa value (etasw)")
axs[3, 0].set_ylabel("etasw, mm/day")
axs[3, 1].bar(df_validation.index, df_validation["etc"], color = 'skyblue')
axs[3, 1].plot(df_validation.index, df_validation["etc_cloud"], 'ro', color = 'crimson')
axs[3, 1].set_title("Optimal crop ETa value (etc)")
axs[3, 1].set_ylabel("etc, mm/day")
axs[4, 0].bar(df_validation.index, df_validation["netet"], color = 'skyblue')
axs[4, 0].plot(df_validation.index, df_validation["netet_cloud"], 'ro', color = 'crimson')
axs[4, 0].set_title("Additional ETa requirement for optimal crop condition (netet)")
axs[4, 0].set_ylabel("netet, mm/day")
axs[4, 1].plot(df_validation.index, df_validation["swf"], color = 'skyblue')
axs[4, 1].plot(df_validation.index, df_validation["swf_cloud"], 'ro', color = 'crimson')
axs[4, 1].set_title("Final soil water amount at the end of the day (swf)")
axs[4, 1].set_ylabel("swf, mm/m")
manual = mpatches.Patch(color='skyblue', label='manual')
cloud = mpatches.Patch(color='crimson', label='cloud')
plt.legend(handles=[manual,cloud])
```
```
#!/usr/bin/env python
# -*- coding: utf-8 -*-
import sys
sys.path.append('../')
from loglizer.models import SVM
from loglizer import dataloader, preprocessing
import numpy as np
struct_log = '../data/HDFS/HDFS_100k.log_structured.csv' # The structured log file
label_file = '../data/HDFS/anomaly_label.csv' # The anomaly label file
if __name__ == '__main__':
(x_train, y_train), (x_test, y_test) = dataloader.load_HDFS(struct_log,
label_file=label_file,
window='session',
train_ratio=0.5,
split_type='uniform')
feature_extractor = preprocessing.FeatureExtractor()
x_train = feature_extractor.fit_transform(x_train, term_weighting='tf-idf')
x_test = feature_extractor.transform(x_test)
print(np.array(x_train).shape)
model = SVM()
model.fit(x_train, y_train)
print(np.array(x_train).shape)
# print('Train validation:')
# precision, recall, f1 = model.evaluate(x_train, y_train)
# print('Test validation:')
# precision, recall, f1 = model.evaluate(x_test, y_test)
#!/usr/bin/env python
# -*- coding: utf-8 -*-
import sys
sys.path.append('../')
from loglizer.models import PCA
from loglizer import dataloader, preprocessing
struct_log = '../data/HDFS/HDFS_100k.log_structured.csv' # The structured log file
label_file = '../data/HDFS/anomaly_label.csv' # The anomaly label file
if __name__ == '__main__':
(x_train, y_train), (x_test, y_test) = dataloader.load_HDFS(struct_log,
label_file=label_file,
window='session',
train_ratio=0.5,
split_type='uniform')
feature_extractor = preprocessing.FeatureExtractor()
x_train = feature_extractor.fit_transform(x_train, term_weighting='tf-idf',
normalization='zero-mean')
x_test = feature_extractor.transform(x_test)
# print("输入后的训练数据:",x_train)
# print("尺寸:",x_train.shape)
# print("输入后的测试数据:",x_test)
# print("尺寸:",x_test.shape)
model = PCA()
model.fit(x_train)
# print('Train validation:')
# precision, recall, f1 = model.evaluate(x_train, y_train)
# print('Test validation:')
# precision, recall, f1 = model.evaluate(x_test, y_test)
help(model.fit)
```
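The `term_weighting='tf-idf'` step reweights the event-count matrix so that events appearing in every session contribute little. A minimal sketch of one common formulation, `x * log(N / df)` (loglizer's exact smoothing may differ):

```python
import numpy as np

# Rows are log sessions, columns are event types (raw counts).
X = np.array([[2.0, 0.0, 1.0],
              [0.0, 1.0, 1.0]])
N = X.shape[0]
df_term = np.count_nonzero(X > 0, axis=0)  # sessions containing each event
idf = np.log(N / df_term)                  # 0 for events present everywhere
X_tfidf = X * idf
print(X_tfidf.round(4))
```

Note how the third event, present in both sessions, is zeroed out entirely.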
```
import pandas as pd
import numpy as np
import seaborn as sns
import matplotlib.pyplot as plt
data=pd.read_csv('F:\\bank-additional-full.csv',sep=';')
data.shape
tot=len(set(data.index))
last=data.shape[0]-tot
last
data.isnull().sum()
print(data.y.value_counts())
sns.countplot(x='y', data=data)
plt.show()
cat=data.select_dtypes(include=['object']).columns
cat
for c in cat:
print(c)
print("-"*50)
print(data[c].value_counts())
print("-"*50)
from sklearn.preprocessing import LabelEncoder,OneHotEncoder
le=LabelEncoder()
data['y']=le.fit_transform(data['y'])
data.drop('poutcome',axis=1,inplace=True)
print( data['age'].quantile(q = 0.75) +
1.5*(data['age'].quantile(q = 0.75) - data['age'].quantile(q = 0.25)))
data['age'] = data['age'].where(data['age'] < 69.6)  # mask ages above the upper fence as NaN
data['age'].fillna(int(data['age'].mean()),inplace=True)
data['age'].values
data[['age','y']].groupby(['age'],as_index=False).mean().sort_values(by='y', ascending=False)
# for x in data:
# x['Sex'] = x['Sex'].map( {'female': 1, 'male': 0}).astype(int)
data['age_slice'] = pd.cut(data['age'],5)
data[['age_slice', 'y']].groupby(['age_slice'], as_index=False).mean().sort_values(by='age_slice', ascending=True)
data['age'] = data['age'].astype(int)
data.loc[(data['age'] >= 16) & (data['age'] <= 28), 'age'] = 1
data.loc[(data['age'] > 28) & (data['age'] <= 38), 'age'] = 2
data.loc[(data['age'] > 38) & (data['age'] <= 49), 'age'] = 3
data.loc[ (data['age'] > 49) & (data['age'] <= 59), 'age'] = 4
data.loc[ (data['age'] > 59 )& (data['age'] <= 69), 'age'] = 5
data.drop('age_slice',axis=1,inplace=True)
data['marital'].replace(['divorced' ,'married' , 'unknown' , 'single'] ,['single','married','unknown','single'], inplace=True)
data['marital']=le.fit_transform(data['marital'])
data
data['job'].replace(['student'] ,['unemployed'], inplace=True)
data[['education', 'y']].groupby(['education'], as_index=False).mean().sort_values(by='education', ascending=True)
fig, ax = plt.subplots()
fig.set_size_inches(20, 5)
sns.countplot(x = 'education', hue = 'loan', data = data)
ax.set_xlabel('Education', fontsize=15)
ax.set_ylabel('y', fontsize=15)
ax.set_title('Education Count Distribution', fontsize=15)
ax.tick_params(labelsize=15)
sns.despine()
fig, ax = plt.subplots()
fig.set_size_inches(20, 5)
sns.countplot(x = 'job', hue = 'loan', data = data)
ax.set_xlabel('job', fontsize=17)
ax.set_ylabel('y', fontsize=17)
ax.set_title('Education Count Distribution', fontsize=17)
ax.tick_params(labelsize=17)
sns.despine()
data['education'].replace(['basic.4y','basic.6y','basic.9y','professional.course'] ,['not_reach_highschool','not_reach_highschool','not_reach_highschool','university.degree'], inplace=True)
ohe=OneHotEncoder()
data['default']=le.fit_transform(data['default'])
data['housing']=le.fit_transform(data['housing'])
data['loan']=le.fit_transform(data['loan'])
data['month']=le.fit_transform(data['month'])
# ohe=OneHotEncoder(categorical_features=data['month'])  # categorical_features was removed from scikit-learn; this encoder is never used
data['contact']=le.fit_transform(data['contact'])
data['day_of_week']=le.fit_transform(data['day_of_week'])
data['job']=le.fit_transform(data['job'])
data['education']=le.fit_transform(data['education'])
cat=data.select_dtypes(include=['object']).columns
cat
def outlier_detect(data,feature):
q1 = data[feature].quantile(0.25)
q3 = data[feature].quantile(0.75)
iqr = q3-q1 #Interquartile range
lower = q1-1.5*iqr
upper = q3+1.5*iqr
data = data.loc[(data[feature] > lower) & (data[feature] < upper)]
print('lower IQR and upper IQR of',feature,"are:", lower, 'and', upper, 'respectively')
return data
data.columns
data['pdays'].unique()
data['pdays'].replace([999] ,[0], inplace=True)
data['previous'].unique()
fig, ax = plt.subplots()
fig.set_size_inches(15, 5)
sns.countplot(x = 'campaign', palette="rocket", data = data)
ax.set_xlabel('campaign', fontsize=25)
ax.set_ylabel('y', fontsize=25)
ax.set_title('campaign', fontsize=25)
sns.despine()
sns.countplot(x = 'pdays', palette="rocket", data = data)
ax.set_xlabel('pdays', fontsize=25)
ax.set_ylabel('y', fontsize=25)
ax.set_title('pdays', fontsize=25)
sns.despine()
data[['pdays', 'y']].groupby(['pdays'], as_index=False).mean().sort_values(by='pdays', ascending=True)
sns.countplot(x = 'emp.var.rate', palette="rocket", data = data)
ax.set_xlabel('emp.var.rate', fontsize=25)
ax.set_ylabel('y', fontsize=25)
ax.set_title('emp.var.rate', fontsize=25)
sns.despine()
data = outlier_detect(data,'duration')
#data = outlier_detect(data,'emp.var.rate')
data = outlier_detect(data,'nr.employed')
#data = outlier_detect(data,'euribor3m')
X = data.iloc[:,:-1]
X = X.values
y = data['y'].values
from sklearn.linear_model import LogisticRegression
from sklearn.tree import DecisionTreeClassifier
from sklearn.ensemble import RandomForestClassifier
from sklearn.svm import SVC
from sklearn.neighbors import KNeighborsClassifier
from sklearn.model_selection import train_test_split
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=0)
algo = {'LR': LogisticRegression(),
'DT':DecisionTreeClassifier(),
'RFC':RandomForestClassifier(n_estimators=100),
'SVM':SVC(gamma=0.01),
'KNN':KNeighborsClassifier(n_neighbors=10)
}
for k, v in algo.items():
model = v
model.fit(X_train, y_train)
print('Accuracy of ' + k + ' is {0:.2f}'.format(model.score(X_test, y_test)*100))
```
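The `outlier_detect` helper above implements the standard 1.5×IQR fence, and since it returns a filtered copy its result must be assigned back to take effect. On a toy column (made-up values) it behaves like this:

```python
import pandas as pd

df = pd.DataFrame({"v": [1, 2, 3, 4, 100]})
q1, q3 = df["v"].quantile(0.25), df["v"].quantile(0.75)
iqr = q3 - q1
kept = df[(df["v"] > q1 - 1.5 * iqr) & (df["v"] < q3 + 1.5 * iqr)]
print(kept["v"].tolist())  # [1, 2, 3, 4] -- 100 falls outside the fence
```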
# Requirements Documentation and Notes
# SQL Samples
2. Total monthly commits
```sql
SELECT
date_trunc( 'month', commits.cmt_author_timestamp AT TIME ZONE'America/Chicago' ) AS DATE,
repo_name,
rg_name,
cmt_author_name,
cmt_author_email,
COUNT ( cmt_author_email ) AS author_count
FROM
commits,
repo,
repo_groups
WHERE
commits.repo_id = repo.repo_id
AND repo.repo_group_id = repo_groups.repo_group_id
AND commits.cmt_author_timestamp AT TIME ZONE'America/Chicago' BETWEEN '2019-11-01'
AND '2019-11-30'
GROUP BY
DATE,
repo_name,
rg_name,
cmt_author_name,
cmt_author_email
ORDER BY
DATE,
cmt_author_name,
cmt_author_email;
```
### Metrics: Lines of Code and Commit Summaries by Week, Month and Year
There are six summary tables:
1. dm_repo_annual
2. dm_repo_monthly
3. dm_repo_weekly
4. dm_repo_group_annual
5. dm_repo_group_monthly
6. dm_repo_group_weekly
```sql
SELECT
repo.repo_id,
repo.repo_name,
repo_groups.rg_name,
dm_repo_annual.YEAR,
SUM ( dm_repo_annual.added ) AS lines_added,
SUM ( dm_repo_annual.whitespace ) AS whitespace_added,
SUM ( dm_repo_annual.removed ) AS lines_removed,
SUM ( dm_repo_annual.files ) AS files,
SUM ( dm_repo_annual.patches ) AS commits
FROM
dm_repo_annual,
repo,
repo_groups
WHERE
dm_repo_annual.repo_id = repo.repo_id
AND repo.repo_group_id = repo_groups.repo_group_id
GROUP BY
repo.repo_id,
repo.repo_name,
repo_groups.rg_name,
YEAR
ORDER BY
YEAR,
rg_name,
repo_name
```
### Metrics: Value / Labor / Lines of Code (Total, NOT Commits)
1. Total lines in a repository by language and line type. This is like treating software as an asset: its lines of code at a point in time.
```sql
SELECT
repo.repo_id,
repo.repo_name,
programming_language,
SUM ( total_lines ) AS repo_total_lines,
SUM ( code_lines ) AS repo_code_lines,
SUM ( comment_lines ) AS repo_comment_lines,
SUM ( blank_lines ) AS repo_blank_lines,
AVG ( code_complexity ) AS repo_lang_avg_code_complexity
FROM
repo_labor,
repo,
repo_groups
WHERE
repo.repo_group_id = repo_groups.repo_group_id
and
repo.repo_id = repo_labor.repo_id
GROUP BY
repo.repo_id,
programming_language
ORDER BY
repo_id
```
#### Estimated Labor Hours by Repository
```sql
SELECT
	C.repo_id,
C.repo_name,
SUM ( estimated_labor_hours )
FROM
(
SELECT
	A.repo_id,
b.repo_name,
programming_language,
SUM ( total_lines ) AS repo_total_lines,
SUM ( code_lines ) AS repo_code_lines,
SUM ( comment_lines ) AS repo_comment_lines,
SUM ( blank_lines ) AS repo_blank_lines,
AVG ( code_complexity ) AS repo_lang_avg_code_complexity,
AVG ( code_complexity ) * SUM ( code_lines ) + 20 AS estimated_labor_hours
FROM
repo_labor A,
repo b
WHERE
A.repo_id = b.repo_id
GROUP BY
A.repo_id,
programming_language,
repo_name
ORDER BY
repo_name,
A.repo_id,
programming_language
) C
GROUP BY
repo_id,
repo_name;
```
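The heuristic in this query — average code complexity times summed code lines, plus a flat 20 hours — can be sanity-checked outside the database. A minimal pandas sketch of the same two-level aggregation (toy rows with hypothetical repo names and values, not Augur data):

```python
import pandas as pd

# Toy rows mirroring repo_labor joined to repo (hypothetical values).
labor = pd.DataFrame({
    "repo_id": [1, 1, 2],
    "repo_name": ["augur", "augur", "grimoire"],
    "programming_language": ["Python", "SQL", "Python"],
    "code_lines": [1000, 200, 500],
    "code_complexity": [2.0, 1.0, 3.0],
})

# Inner query: per repo/language, hours = avg(complexity) * sum(code_lines) + 20
per_lang = labor.groupby(["repo_id", "repo_name", "programming_language"]).agg(
    code_lines=("code_lines", "sum"),
    avg_complexity=("code_complexity", "mean"),
).reset_index()
per_lang["estimated_labor_hours"] = (
    per_lang["avg_complexity"] * per_lang["code_lines"] + 20
)

# Outer query: roll the per-language estimates up to one total per repository.
per_repo = per_lang.groupby(["repo_id", "repo_name"])["estimated_labor_hours"].sum()
print(per_repo.to_dict())
```

With these toy numbers, repo 1 gets 2.0 × 1000 + 20 = 2020 hours for Python plus 1.0 × 200 + 20 = 220 for SQL, i.e. 2240 total; repo 2 gets 1520.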
#### Estimated Labor Hours by Language
```sql
SELECT
	C.repo_id,
C.repo_name,
programming_language,
SUM ( estimated_labor_hours )
FROM
(
	SELECT
		A.repo_id,
b.repo_name,
programming_language,
SUM ( total_lines ) AS repo_total_lines,
SUM ( code_lines ) AS repo_code_lines,
SUM ( comment_lines ) AS repo_comment_lines,
SUM ( blank_lines ) AS repo_blank_lines,
AVG ( code_complexity ) AS repo_lang_avg_code_complexity,
AVG ( code_complexity ) * SUM ( code_lines ) + 20 AS estimated_labor_hours
FROM
repo_labor A,
repo b
WHERE
A.repo_id = b.repo_id
GROUP BY
A.repo_id,
programming_language,
repo_name
ORDER BY
repo_name,
A.repo_id,
programming_language
) C
GROUP BY
repo_id,
repo_name,
programming_language
ORDER BY
programming_language;
```
## Issues
### Issue Collection Status
1. Currently 100% Complete
```sql
SELECT a.repo_id, a.repo_name, a.repo_git,
b.issues_count,
d.repo_id AS issue_repo_id,
e.last_collected,
COUNT ( * ) AS issues_collected_count,
	( b.issues_count - COUNT ( * )) AS issues_missing,
	ABS ( CAST (( COUNT ( * )) AS DOUBLE PRECISION ) / CAST ( b.issues_count AS DOUBLE PRECISION )) AS ratio_abs,
	( CAST (( COUNT ( * )) AS DOUBLE PRECISION ) / CAST ( b.issues_count AS DOUBLE PRECISION )) AS ratio_issues
FROM
augur_data.repo a,
augur_data.issues d,
augur_data.repo_info b,
( SELECT repo_id, MAX ( data_collection_date ) AS last_collected FROM augur_data.repo_info GROUP BY repo_id ORDER BY repo_id ) e
WHERE
a.repo_id = b.repo_id
AND a.repo_id = d.repo_id
AND b.repo_id = d.repo_id
AND e.repo_id = a.repo_id
AND b.data_collection_date = e.last_collected
AND d.pull_request_id IS NULL
GROUP BY
a.repo_id,
d.repo_id,
b.issues_count,
e.last_collected,
a.repo_git
ORDER BY
repo_name, repo_id, ratio_abs;
```
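The bookkeeping in the status query — issues collected versus the count GitHub reports, excluding rows that are really pull requests — can be illustrated with an in-memory toy database. A sketch using sqlite3 (hypothetical counts, simplified schema with just the two tables the ratio needs):

```python
import sqlite3

# Toy schema mirroring the Augur tables used above (hypothetical numbers).
con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE repo_info (repo_id INTEGER, issues_count INTEGER);
CREATE TABLE issues (repo_id INTEGER, pull_request_id INTEGER);
INSERT INTO repo_info VALUES (1, 4);
INSERT INTO issues VALUES (1, NULL), (1, NULL), (1, NULL), (1, 7);
""")

# Collected vs reported counts; the IS NULL filter drops pull requests,
# so 3 of the 4 rows count as collected issues.
row = con.execute("""
SELECT b.issues_count,
       COUNT(*) AS collected,
       b.issues_count - COUNT(*) AS missing,
       CAST(COUNT(*) AS REAL) / b.issues_count AS ratio
FROM repo_info b JOIN issues d ON b.repo_id = d.repo_id
WHERE d.pull_request_id IS NULL
GROUP BY b.repo_id, b.issues_count
""").fetchone()
print(row)  # (4, 3, 1, 0.75)
```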
### Repositories with GitHub Issue Tracking
```sql
SELECT repo_id, COUNT ( * ) FROM repo_info WHERE issues_count > 0
GROUP BY repo_id;
```
```
# import all packages and set plots to be embedded inline
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
import seaborn as sb
%matplotlib inline
# load in the dataset into a pandas dataframe
diamonds = pd.read_csv('./data/diamonds.csv')
# convert cut, color, and clarity into ordered categorical types
ordinal_var_dict = {'cut': ['Fair','Good','Very Good','Premium','Ideal'],
'color': ['J', 'I', 'H', 'G', 'F', 'E', 'D'],
'clarity': ['I1', 'SI2', 'SI1', 'VS2', 'VS1', 'VVS2', 'VVS1', 'IF']}
for var in ordinal_var_dict:
pd_ver = pd.__version__.split(".")
if (int(pd_ver[0]) > 0) or (int(pd_ver[1]) >= 21): # v0.21 or later
ordered_var = pd.api.types.CategoricalDtype(ordered = True,
categories = ordinal_var_dict[var])
diamonds[var] = diamonds[var].astype(ordered_var)
else: # pre-v0.21
diamonds[var] = diamonds[var].astype('category', ordered = True,
categories = ordinal_var_dict[var])
```
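On any recent pandas (0.21 or later) the version branch above always takes the `CategoricalDtype` path. A standalone sketch of that conversion, using a toy column with the notebook's cut ordering, shows why the ordered dtype matters:

```python
import pandas as pd

# Ordered categorical for diamond cut, using the notebook's quality ordering.
cut_dtype = pd.api.types.CategoricalDtype(
    categories=["Fair", "Good", "Very Good", "Premium", "Ideal"], ordered=True
)
cut = pd.Series(["Ideal", "Fair", "Premium"]).astype(cut_dtype)

# Ordered categoricals sort and compare by quality, not alphabetically.
print(cut.sort_values().tolist())  # ['Fair', 'Premium', 'Ideal']
print(cut.min())                   # 'Fair'
```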
## Multivariate Exploration
In the previous workspace, you looked at various bivariate relationships. You saw that the log of price was approximately linearly related to the cube root of carat weight, analogous to its length, width, and depth. You also saw an unintuitive relationship between price and the categorical quality measures of cut, color, and clarity: the median price decreased with increasing quality. Investigating the distributions more closely, and looking at the relationship between carat weight and the three categorical variables, showed that this was because carat size tends to be smaller for diamonds with higher categorical grades.
The goal of this workspace will be to depict these interaction effects through the use of multivariate plots.
To start off with, create a plot of the relationship between price, carat, and clarity. In the previous workspace, you saw that clarity had the clearest interactions with price and carat. How clearly does this show up in a multivariate visualization?
```
def cube_trans(x, inverse=False):
if not inverse:
return np.cbrt(x)
else:
return x**3
diamonds['carat_cube'] = diamonds['carat'].apply(cube_trans)
# multivariate plot of price by carat weight, and clarity
g = sb.FacetGrid(data=diamonds, hue='clarity', height=5)
g = g.map(sb.regplot, 'carat_cube', 'price', fit_reg=False)
plt.yscale('log')
y_ticks = [300, 800, 2000, 4000, 10000, 20000]
plt.yticks(y_ticks, y_ticks)
g.add_legend();
```
Price by Carat and Clarity Comment 1: <span style="color:black">With two numeric variables and one categorical variable, there are two main plot types that make sense. A scatterplot with points colored by clarity level makes sense on paper, but the sheer number of points causes overplotting that suggests a different plot type. A faceted scatterplot or heat map is a better choice in this case.</span>
```
g = sb.FacetGrid(data=diamonds, col='clarity')
g.map(plt.scatter, 'carat_cube', 'price', alpha=1/5)
plt.yscale('log');
```
Price by Carat and Clarity Comment 2: <span style="color:black">You should see across facets the general movement of the points upwards and to the left, corresponding with smaller diamond sizes, but higher value for their sizes. As a final comment, did you remember to apply transformation functions to the price and carat values?</span>
Let's try a different plot, for diamond price against cut and color quality features. To avoid the trap of higher quality grades being associated with smaller diamonds, and thus lower prices, we should focus our visualization on only a small range of diamond weights. For this plot, select diamonds in a small range around 1 carat weight. Try to make it so that your plot shows the effect of each of these categorical variables on the price of diamonds.
```
diamonds_flag = (diamonds['carat'] >= 0.99) & (diamonds['carat'] <= 1.03)
diamonds_sub = diamonds.loc[diamonds_flag,:]
diamonds_sub['cut'].unique()
diamonds_sub['color'].unique()
sb.pointplot(data=diamonds_sub, x='color', y='price', hue='cut', palette='mako')
plt.yscale('log')
plt.yticks([3000, 4000, 6000], ['3K', '4K', '6K']);
# multivariate plot of price by cut and color, for approx. 1 carat diamonds
plt.figure(figsize=(10,6))
sb.boxplot(data=diamonds_sub, x='color', y='price', hue='cut', palette='mako')
plt.yscale('log')
plt.yticks([3000, 4000, 6000, 10000], ['3K', '4K', '6K', '10K']);
```
Price by Cut and Color Comment 1: <span style="color:black">There's a lot of ways that you could plot one numeric variable against two categorical variables. I think that the clustered box plot or the clustered point plot are the best choices in this case. With the number of category combinations to be plotted (7x5 = 35), it's hard to make full sense of a violin plot's narrow areas; simplicity is better. A clustered bar chart could work, but considering that price should be on a log scale, there isn't really a nice baseline that would work well.</span>
Price by Cut and Color Comment 2: <span style="color:black">Assuming you went with a clustered plot approach, you should see a gradual increase in price across the main x-value clusters, as well as generally upwards trends within each cluster for the third variable. Aesthetically, did you remember to choose a sequential color scheme for whichever variable you chose for your third variable, to override the default qualitative scheme? If you chose a point plot, did you set a dodge parameter to spread the clusters out? </span>
# Revisiting Lambert's problem in Python
```
import numpy as np
import matplotlib.pyplot as plt
from cycler import cycler
from poliastro.core import iod
from poliastro.iod import izzo
plt.ion()
plt.rc('text', usetex=True)
```
## Part 1: Reproducing the original figure
```
x = np.linspace(-1, 2, num=1000)
M_list = 0, 1, 2, 3
ll_list = 1, 0.9, 0.7, 0, -0.7, -0.9, -1
fig, ax = plt.subplots(figsize=(10, 8))
ax.set_prop_cycle(cycler('linestyle', ['-', '--']) *
(cycler('color', ['black']) * len(ll_list)))
for M in M_list:
for ll in ll_list:
T_x0 = np.zeros_like(x)
for ii in range(len(x)):
y = iod._compute_y(x[ii], ll)
T_x0[ii] = iod._tof_equation(x[ii], y, 0.0, ll, M)
if M == 0 and ll == 1:
T_x0[x > 0] = np.nan
elif M > 0:
# Mask meaningless solutions
T_x0[x > 1] = np.nan
l, = ax.plot(x, T_x0)
ax.set_ylim(0, 10)
ax.set_xticks((-1, 0, 1, 2))
ax.set_yticks((0, np.pi, 2 * np.pi, 3 * np.pi))
ax.set_yticklabels(('$0$', '$\pi$', '$2 \pi$', '$3 \pi$'))
ax.vlines(1, 0, 10)
ax.text(0.65, 4.0, "elliptic")
ax.text(1.16, 4.0, "hyperbolic")
ax.text(0.05, 1.5, "$M = 0$", bbox=dict(facecolor='white'))
ax.text(0.05, 5, "$M = 1$", bbox=dict(facecolor='white'))
ax.text(0.05, 8, "$M = 2$", bbox=dict(facecolor='white'))
ax.annotate(r"$\lambda = 1$", xy=(-0.3, 1), xytext=(-0.75, 0.25), arrowprops=dict(arrowstyle="simple", facecolor="black"))
ax.annotate(r"$\lambda = -1$", xy=(0.3, 2.5), xytext=(0.65, 2.75), arrowprops=dict(arrowstyle="simple", facecolor="black"))
ax.grid()
ax.set_xlabel("$x$")
ax.set_ylabel("$T$");
```
## Part 2: Locating $T_{min}$
```
for M in M_list:
for ll in ll_list:
x_T_min, T_min = iod._compute_T_min(ll, M, 10, 1e-8)
ax.plot(x_T_min, T_min, 'kx', mew=2)
fig
```
## Part 3: Try out solution
```
T_ref = 1
ll_ref = 0
(x_ref, _), = iod._find_xy(ll_ref, T_ref, 0, 10, 1e-8)
x_ref
ax.plot(x_ref, T_ref, 'o', mew=2, mec='red', mfc='none')
fig
```
## Part 4: Run some examples
```
from astropy import units as u
from poliastro.bodies import Earth
```
### Single revolution
```
k = Earth.k
r0 = [15945.34, 0.0, 0.0] * u.km
r = [12214.83399, 10249.46731, 0.0] * u.km
tof = 76.0 * u.min
expected_va = [2.058925, 2.915956, 0.0] * u.km / u.s
expected_vb = [-3.451569, 0.910301, 0.0] * u.km / u.s
(v0, v), = izzo.lambert(k, r0, r, tof)
v
k = Earth.k
r0 = [5000.0, 10000.0, 2100.0] * u.km
r = [-14600.0, 2500.0, 7000.0] * u.km
tof = 1.0 * u.h
expected_va = [-5.9925, 1.9254, 3.2456] * u.km / u.s
expected_vb = [-3.3125, -4.1966, -0.38529] * u.km / u.s
(v0, v), = izzo.lambert(k, r0, r, tof)
v
```
### Multiple revolutions
```
k = Earth.k
r0 = [22592.145603, -1599.915239, -19783.950506] * u.km
r = [1922.067697, 4054.157051, -8925.727465] * u.km
tof = 10 * u.h
expected_va = [2.000652697, 0.387688615, -2.666947760] * u.km / u.s
expected_vb = [-3.79246619, -1.77707641, 6.856814395] * u.km / u.s
expected_va_l = [0.50335770, 0.61869408, -1.57176904] * u.km / u.s
expected_vb_l = [-4.18334626, -1.13262727, 6.13307091] * u.km / u.s
expected_va_r = [-2.45759553, 1.16945801, 0.43161258] * u.km / u.s
expected_vb_r = [-5.53841370, 0.01822220, 5.49641054] * u.km / u.s
(v0, v), = izzo.lambert(k, r0, r, tof, M=0)
v
(_, v_l), (_, v_r) = izzo.lambert(k, r0, r, tof, M=1)
v_l
v_r
```
```
# DATAFRAMES INITIALISATION
import os
os.chdir('C:\\Users\\asus\\OneDrive\\Documenti\\University Docs\\MSc Computing\\Final Project\\RainbowFood(JN)\\Rainbow-Food-Collaborative-Filtering-')
import pandas as pd
# vegetables file
col_list_veg = ["Vegetables", "Serving", "Calories"]
df_veg = pd.read_csv("Vegetables.csv", usecols = col_list_veg)
# allergies file
col_list_all = ["Class", "Type", "Group", "Food", "Allergy"]
df_all = pd.read_csv("FoodAllergies.csv", usecols = col_list_all)
# drop NaN values: pandas reads them as floats rather than strings, so lower-casing them would fail
df_all.dropna(inplace = True)
# recipe file
col_list_rec = ['Link', 'Title', 'Total Time', 'Servings', 'Ingredients', 'Instructions']
df_rec = pd.read_csv("Recipes.csv", usecols = col_list_rec)
# ratings
col_list_rat = ["userId", "recipeId", "rating"]
df_rat = pd.read_csv("Ratings_small.csv", usecols = col_list_rat)
# NLP FOR CLEANING UP INPUTS
# lemmatising
import nltk
nltk.download('wordnet')
from nltk.stem import WordNetLemmatizer
lemmatizer = WordNetLemmatizer()
# FUNCTIONS
# function to extract lists from pandas columns
def list_maker(column):
return column.tolist()
# function to make lower case in lists
def lower_case(column_list):
for x in range(len(column_list)):
column_list[x] = column_list[x].lower()
return column_list
# function to cut duplicates from a list
def no_duplicates(column_list):
no_duplicates_list = []
for x in column_list:
if x not in no_duplicates_list:
no_duplicates_list.append(x)
return no_duplicates_list
# function to make dictionaries
def dictionary_maker(list1, list2):
zip_iterator = zip(list1, list2)
dictionary = dict(zip_iterator)
return dictionary
# function to lemmatise words in lists
def lemmatise(list_of_words):
lemmatised_words = []
for word in list_of_words:
lemmatised_words.append(lemmatizer.lemmatize(word))
return lemmatised_words
# function user inputs veggies
mylist = []
mybasket = []
def user_inputs_veggies():
print("Enter 3 veggies: ")
for x in range(1,4):
basket = input("%d " % x)
mylist.append(basket.lower())
for veg in lemmatise(mylist):
if veg in veg_list:
print(veg, "= got it")
mybasket.append(veg)
else:
print(veg, "= we don't have it")
# function user inputs quantities (NOT USED FOR GETTING RECIPES YET)
veg_quantity = {}
def user_input_quantity():
for x in mybasket:
# Ask for the quantity, until it's correct
while True:
# Quantity?
quantity = input("%s grams " % x)
# Is it an integer?
try:
int(quantity)
break
# No...
except ValueError:
# Is it a float?
try:
float(quantity)
break
# No...
except ValueError:
print("Please, use numbers in grams only")
# If it's valid, add it
veg_quantity[x] = quantity
return veg_quantity
# CODE
# Extracting lists from pandas columns
veg_list = list_maker(df_veg['Vegetables'])
food_list = list_maker(df_all["Food"])
allergy_list = list_maker(df_all["Allergy"])
recipe_titles_list = list_maker(df_rec['Title'])
ingredients_list = list_maker(df_rec['Ingredients'])
users_id_list = list_maker(df_rat["userId"])
recipes_id_list = list_maker(df_rat["recipeId"])
ratings_list = list_maker(df_rat["rating"])
# Lower case in lists
veg_list = lower_case(veg_list)
food_list = lower_case(food_list)
allergy_list = lower_case(allergy_list)
#recipe_titles_list = lower_case(recipe_titles_list)
ingredients_list = lower_case(ingredients_list)
# Dictionaries
food_allergy_dictionary = dictionary_maker(food_list, allergy_list)
recipe_titles_ingredients_dictionary = dictionary_maker(recipe_titles_list, ingredients_list)
recipes_id_ratings_dictionary = dictionary_maker(recipes_id_list, ratings_list)
# User inputs veggies
user_inputs_veggies()
if mybasket == []:
print("Your basket is empty")
else:
print("Here's what we have", mybasket)
# User inputs quantities
user_input_quantity()
# REST OF THE CODE (Still to change...)
# USER INPUTS ALLERGIES (NOT USED FOR GETTING RECIPES YET)
print("Any allergies or intolerances? Please enter them here or leave it blank. \n")
print("Please, specify if you have allergy or intolerance for generic terms \n")
print("(e.g. 'nut allergy', 'gluten allergy', but not for 'strawberry' or 'strawberries'): ")
# add allergies in the list
myallergies = []
# empty basket to break
basket = " "
# indefinite iteration over not empty basket
while basket != "":
# over input
basket = input()
# if input = num
if basket.isnumeric() == True:
# then print you don't want num
print("No numbers, please")
# otherwise if it's a word
elif basket.isnumeric() == False:
# and the basket is not empty
if basket != "":
# append allergies to my list
myallergies.append(basket)
my_allergies = lower_case(myallergies)
my_allergies = lemmatise(myallergies)
my_allergies = no_duplicates(my_allergies)
for al in my_allergies:
if al in food_allergy_dictionary.keys() or al in food_allergy_dictionary.values():
print("You said: ", al)
else:
print(al, ", got it, I will update my database")
# OUTPUT = RECIPES BASED ON USER'S VEGGIES
# RegEx to find matches
import re
recipe_titles_list = []
recipe_title_to_matched_ingredient_list_dict_with_duplicates = {}
recipes_ingredients = {}
recipes = []
input_vegetable_list = mybasket
recipe_title_to_ingredient_list_dict = recipe_titles_ingredients_dictionary
for input_vegetable in input_vegetable_list:
for recipe_title in recipe_title_to_ingredient_list_dict:
ingredient_list_string = recipe_title_to_ingredient_list_dict[ recipe_title ]
        # the dataframe stores each ingredient list as its string representation;
        # eval parses it back into a list of lists (ast.literal_eval would be safer)
ingredient_list = eval(ingredient_list_string)
for ingredient in ingredient_list:
find = re.search(input_vegetable, ingredient)
if find:
recipe_titles_list.append( recipe_title )
if recipe_title in recipe_title_to_matched_ingredient_list_dict_with_duplicates:
recipe_title_to_matched_ingredient_list_dict_with_duplicates[recipe_title].append(input_vegetable)
else:
recipe_title_to_matched_ingredient_list_dict_with_duplicates[recipe_title] = [input_vegetable]
# duplicates removed
for key, value in recipe_title_to_matched_ingredient_list_dict_with_duplicates.items():
recipes_ingredients[key] = list(set(value))
print("\n")
for recipe_title in recipe_titles_list:
if recipe_title not in recipes:
recipes.append(recipe_title)
print("These are all the recipes that contain : ", mybasket)
print("\n")
index = 1
for recipe in recipes:
print(index, recipe)
index += 1
import random
recipes_ingredients_items = list(recipes_ingredients.items())
random.shuffle(recipes_ingredients_items)
recipes_ingredients = dict(recipes_ingredients_items)
index = 1
for recipe in recipes_ingredients.items():
index += 1
recipes = {}
i_have_processed_these_already = []
for vegetable in input_vegetable_list:
for key, values in recipes_ingredients.items():
if vegetable in values:
if not key in i_have_processed_these_already:
if not vegetable in recipes:
i_have_processed_these_already.append(key)
recipes[vegetable] = key
import pprint
print("\n")
print("I would recommend you to try these recipes: \n")
pprint.pprint(recipes)
print("\n")
print("Here you can see the ingredients for the recipes selected: \n")
for recipe in recipes.values():
for recipeT, ingredient in recipe_title_to_ingredient_list_dict.items():
if recipe in recipeT:
print(recipeT, "\n",ingredient, "\n")
```
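The `no_duplicates` helper above removes repeats while preserving first-seen order. A shorter idiomatic equivalent with the same behaviour (standalone sketch, hypothetical inputs):

```python
def no_duplicates(items):
    # dict.fromkeys keeps insertion order (Python 3.7+), so this
    # deduplicates while preserving the first occurrence, like the loop version.
    return list(dict.fromkeys(items))

print(no_duplicates(["carrot", "pea", "carrot", "leek"]))  # ['carrot', 'pea', 'leek']
```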
# Download data for a functional layer of Spatial Signatures
This notebook downloads and prepares data for a functional layer of Spatial Signatures.
```
from download import download
import geopandas as gpd
import pandas as pd
import osmnx as ox
from tqdm import tqdm
from glob import glob
import rioxarray as ra
import pyproj
import zipfile
import tarfile
from shapely.geometry import box, mapping
import requests
import datetime
```
## Population estimates
Population estimates for England, Scotland and Wales. England is split into regions.
### ONS data
```
download('https://www.ons.gov.uk/file?uri=%2fpeoplepopulationandcommunity%2fpopulationandmigration%2fpopulationestimates%2fdatasets%2fcensusoutputareaestimatesinthesouthwestregionofengland%2fmid2019sape22dt10g/sape22dt10gmid2019southwest.zip',
'../../urbangrammar_samba/functional_data/population_estimates/south_west_england', kind='zip')
download('https://www.ons.gov.uk/file?uri=%2fpeoplepopulationandcommunity%2fpopulationandmigration%2fpopulationestimates%2fdatasets%2fcensusoutputareaestimatesintheyorkshireandthehumberregionofengland%2fmid2019sape22dt10c/sape22dt10cmid2019yorkshireandthehumber.zip',
'../../urbangrammar_samba/functional_data/population_estimates/yorkshire_humber_england', kind='zip')
download('https://www.ons.gov.uk/file?uri=%2fpeoplepopulationandcommunity%2fpopulationandmigration%2fpopulationestimates%2fdatasets%2fcensusoutputareaestimatesinthesoutheastregionofengland%2fmid2019sape22dt10i/sape22dt10imid2019southeast.zip',
'../../urbangrammar_samba/functional_data/population_estimates/south_east_england', kind='zip')
download('https://www.ons.gov.uk/file?uri=%2fpeoplepopulationandcommunity%2fpopulationandmigration%2fpopulationestimates%2fdatasets%2fcensusoutputareaestimatesintheeastmidlandsregionofengland%2fmid2019sape22dt10f/sape22dt10fmid2019eastmidlands.zip',
'../../urbangrammar_samba/functional_data/population_estimates/east_midlands_england', kind='zip')
download('https://www.ons.gov.uk/file?uri=%2fpeoplepopulationandcommunity%2fpopulationandmigration%2fpopulationestimates%2fdatasets%2fcensusoutputareaestimatesinthenorthwestregionofengland%2fmid2019sape22dt10b/sape22dt10bmid2019northwest.zip',
'../../urbangrammar_samba/functional_data/population_estimates/north_west_england', kind='zip')
download('https://www.ons.gov.uk/file?uri=%2fpeoplepopulationandcommunity%2fpopulationandmigration%2fpopulationestimates%2fdatasets%2fcensusoutputareaestimatesintheeastregionofengland%2fmid2019sape22dt10h/sape22dt10hmid2019east.zip',
'../../urbangrammar_samba/functional_data/population_estimates/east_england', kind='zip')
download('https://www.ons.gov.uk/file?uri=%2fpeoplepopulationandcommunity%2fpopulationandmigration%2fpopulationestimates%2fdatasets%2fcensusoutputareaestimatesinwales%2fmid2019sape22dt10j/sape22dt10jmid2019wales.zip',
'../../urbangrammar_samba/functional_data/population_estimates/wales', kind='zip')
download('https://www.ons.gov.uk/file?uri=%2fpeoplepopulationandcommunity%2fpopulationandmigration%2fpopulationestimates%2fdatasets%2fcensusoutputareaestimatesinthenortheastregionofengland%2fmid2019sape22dt10d/sape22dt10dmid2019northeast.zip',
'../../urbangrammar_samba/functional_data/population_estimates/north_east_england', kind='zip')
download('https://www.ons.gov.uk/file?uri=%2fpeoplepopulationandcommunity%2fpopulationandmigration%2fpopulationestimates%2fdatasets%2fcensusoutputareaestimatesinthewestmidlandsregionofengland%2fmid2019sape22dt10e/sape22dt10emid2019westmidlands.zip',
'../../urbangrammar_samba/functional_data/population_estimates/west_midlands_england', kind='zip')
```
### Geometries
```
download('https://borders.ukdataservice.ac.uk/ukborders/easy_download/prebuilt/shape/England_oa_2011.zip', '../../urbangrammar_samba/functional_data/population_estimates/oa_geometry_england', kind='zip')
download('https://borders.ukdataservice.ac.uk/ukborders/easy_download/prebuilt/shape/Wales_oac_2011.zip', '../../urbangrammar_samba/functional_data/population_estimates/oa_geometry_wales', kind='zip')
```
### Data cleaning and processing
```
england = gpd.read_file('../../urbangrammar_samba/functional_data/population_estimates/oa_geometry_england/england_oa_2011.shp')
wales = gpd.read_file('../../urbangrammar_samba/functional_data/population_estimates/oa_geometry_wales/wales_oac_2011.shp')
oa = england.append(wales[['code', 'label', 'name', 'geometry']])
files = glob('../../urbangrammar_samba/functional_data/population_estimates/*/*.xlsx', recursive=True)
%time merged = pd.concat([pd.read_excel(f, sheet_name='Mid-2019 Persons', header=0, skiprows=4) for f in files])
population_est = oa.merge(merged, left_on='code', right_on='OA11CD', how='left')
```
### Add Scotland
Scottish data are shipped differently.
#### Data
```
download('http://statistics.gov.scot/downloads/file?id=438c9dc6-dca0-48d5-995c-e3bb1d34e29e%2FSAPE_2011DZ_2001-2019_Five_and_broad_age_groups.zip', '../../urbangrammar_samba/functional_data/population_estimates/scotland', kind='zip')
pop_scot = pd.read_csv('../../urbangrammar_samba/functional_data/population_estimates/scotland/data - statistics.gov.scot - SAPE_2011DZ_2019_Five.csv')
pop_scot = pop_scot[pop_scot.Sex == 'All']
counts = pop_scot[['GeographyCode', 'Value']].groupby('GeographyCode').sum()
```
#### Geometry
```
download('http://sedsh127.sedsh.gov.uk/Atom_data/ScotGov/ZippedShapefiles/SG_DataZoneBdry_2011.zip', '../../urbangrammar_samba/functional_data/population_estimates/dz_geometry_scotland', kind='zip')
data_zones = gpd.read_file('../../urbangrammar_samba/functional_data/population_estimates/dz_geometry_scotland')
scotland = data_zones.merge(counts, left_on='DataZone', right_index=True)
scotland = scotland[['DataZone', 'Value', 'geometry']].rename(columns={'DataZone': 'code', 'Value': 'population'})
population_est = population_est[['code', 'All Ages', 'geometry']].rename(columns={'All Ages': 'population'}).append(scotland)
population_est.to_parquet('../../urbangrammar_samba/functional_data/population_estimates/gb_population_estimates.pq')
```
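A portability note on the cells above: `DataFrame.append` was removed in pandas 2.0, so on current pandas the England/Wales/Scotland stacking would use `pd.concat` instead. A minimal sketch (toy frames with hypothetical codes):

```python
import pandas as pd

# pd.concat replaces the removed DataFrame.append for row-wise stacking.
england = pd.DataFrame({"code": ["E1"], "population": [100]})
scotland = pd.DataFrame({"code": ["S1"], "population": [50]})
combined = pd.concat([england, scotland], ignore_index=True)
print(combined["code"].tolist())  # ['E1', 'S1']
```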
## WorldPop
Data is downloaded clipped to GB, so we only have to reproject to OSGB.
```
download('ftp://ftp.worldpop.org.uk/GIS/Population/Global_2000_2020_Constrained/2020/BSGM/GBR/gbr_ppp_2020_constrained.tif', '../../urbangrammar_samba/functional_data/population_estimates/world_pop/gbr_ppp_2020_constrained.tif')
```
### Reproject to OSGB
```
wp = ra.open_rasterio("../../urbangrammar_samba/functional_data/population_estimates/world_pop/gbr_ppp_2020_constrained.tif")
wp.rio.crs
%time wp_osgb = wp.rio.reproject(pyproj.CRS(27700).to_wkt())
wp_osgb.rio.crs
wp_osgb.rio.to_raster("../../urbangrammar_samba/functional_data/population_estimates/world_pop/gbr_ppp_2020_constrained_osgb.tif")
```
## POIs
### Geolytix retail
Geolytix retail POIs: https://drive.google.com/u/0/uc?id=1B8M7m86rQg2sx2TsHhFa2d-x-dZ1DbSy (no obvious way to get them programmatically, so they were downloaded manually)
```
geolytix = pd.read_csv('../../urbangrammar_samba/functional_data/pois/GEOLYTIX - RetailPoints/geolytix_retailpoints_v17_202008.csv')
geolytix.head(2)
```
We already have coordinates in OSGB, no need to preprocess.
### Listed buildings
We have to merge English, Scottish and Welsh data.
England downloaded manually from https://services.historicengland.org.uk/NMRDataDownload/OpenPages/Download.aspx
```
download('https://inspire.hes.scot/AtomService/DATA/lb_scotland.zip', '../../urbangrammar_samba/functional_data/pois/listed_buildings/scotland', kind='zip')
download('http://lle.gov.wales/catalogue/item/ListedBuildings.zip', '../../urbangrammar_samba/functional_data/pois/listed_buildings/wales', kind='zip')
```
#### Processing
```
with zipfile.ZipFile("../../urbangrammar_samba/functional_data/pois/listed_buildings/Listed Buildings.zip", 'r') as zip_ref:
zip_ref.extractall("../../urbangrammar_samba/functional_data/pois/listed_buildings/england")
england = gpd.read_file('../../urbangrammar_samba/functional_data/pois/listed_buildings/england/ListedBuildings_23Oct2020.shp')
england.head(2)
scotland = gpd.read_file('../../urbangrammar_samba/functional_data/pois/listed_buildings/scotland/Listed_Buildings.shp')
scotland.head(2)
wales = gpd.read_file('../../urbangrammar_samba/functional_data/pois/listed_buildings/wales/Cadw_ListedBuildingsMPoint.shp')
wales.head(2)
listed = pd.concat([england[['geometry']], scotland[['geometry']], wales[['geometry']]])
listed.reset_index(drop=True).to_parquet("../../urbangrammar_samba/functional_data/pois/listed_buildings/listed_buildings_gb.pq")
```
## Night lights
We need to clip it to the extent of GB (dataset has a global coverage) and reproject to OSGB.
```
with open('../../urbangrammar_samba/functional_data/employment/SVDNB_npp_20190301-20190331_75N060W_vcmcfg_v10_c201904071900.tgz', "wb") as down:
    # the with-block closes the file, so no explicit close() is needed
    down.write(requests.get('https://data.ngdc.noaa.gov/instruments/remote-sensing/passive/spectrometers-radiometers/imaging/viirs/dnb_composites/v10//201903/vcmcfg/SVDNB_npp_20190301-20190331_75N060W_vcmcfg_v10_c201904071900.tgz').content)
with tarfile.open('../../urbangrammar_samba/functional_data/employment/SVDNB_npp_20190301-20190331_75N060W_vcmcfg_v10_c201904071900.tgz', 'r') as zip_ref:
zip_ref.extractall("../../urbangrammar_samba/functional_data/employment")
```
### Clip and reproject
```
nl = ra.open_rasterio('../../urbangrammar_samba/functional_data/employment/SVDNB_npp_20190301-20190331_75N060W_vcmcfg_v10_c201904071900.avg_rade9h.tif')
nl.rio.crs
extent = gpd.read_parquet("../../urbangrammar_samba/spatial_signatures/local_auth_chunks.pq")
extent = extent.to_crs(4326)
%time nl_clipped = nl.rio.clip([mapping(box(*extent.total_bounds))], all_touched=True)
%time nl_osgb = nl_clipped.rio.reproject(pyproj.CRS(27700).to_wkt())
nl_osgb.rio.to_raster("../../urbangrammar_samba/functional_data/employment/night_lights_osgb.tif")
nl_osgb.plot(figsize=(12, 12), vmin=0, vmax=7)
```
## Postcodes
Keep only the active postcodes and relevant columns, and determine each postcode's age.
```
download('https://www.arcgis.com/sharing/rest/content/items/b6e6715fa1984648b5e690b6a8519e53/data', '../../urbangrammar_samba/functional_data/postcode/nhspd', kind='zip')
postcodes = pd.read_csv("../../urbangrammar_samba/functional_data/postcode/nhspd/Data/nhg20aug.csv", header=None)
postcodes = postcodes.iloc[:, :6]
existing = postcodes[postcodes[3].isna()]
located = existing[existing[4].notna()]
located = located.rename(columns={0: 'postcode', 1: 'postcode2', 2:'introduced', 3:'terminated', 4:'x', 5:'y'})
located.introduced = pd.to_datetime(located.introduced, format="%Y%m")
located['age'] = (pd.to_datetime('today') - located.introduced).dt.days
located.drop(columns=['postcode2', 'terminated']).to_parquet('../../urbangrammar_samba/functional_data/postcode/postcodes_gb.pq')
```
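The positional columns above come from the headerless NHSPD CSV; the meanings assumed here, taken from the renames in the cell, are column 2 = introduced (YYYYMM), column 3 = terminated (YYYYMM), with a missing termination meaning the postcode is still active. A toy sketch of the same filter-and-age step (synthetic postcodes, not real NHSPD rows):

```python
import pandas as pd

# Two synthetic NHSPD-style rows: one active, one terminated in 2018.
postcodes = pd.DataFrame({
    0: ["L1 1AA", "L1 1AB"],
    2: ["199001", "200506"],
    3: [None, "201803"],
})

# Keep active postcodes (no termination date) and derive age in days.
active = postcodes[postcodes[3].isna()].copy()
active["introduced"] = pd.to_datetime(active[2], format="%Y%m")
active["age"] = (pd.to_datetime("today") - active["introduced"]).dt.days
print(active[[0, "age"]])
```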
## Food hygiene rating scheme
FHRS https://data.cdrc.ac.uk/dataset/food-hygiene-rating-scheme-fhrs-ratings (requires login)
```
fhrs = pd.read_csv('../../urbangrammar_samba/functional_data/fhrs/Data/fhrs_location_20200528.csv')
fhrs
```
No need to preprocess at the moment. Contains OSGB coordinates for each point.
## Business census
https://data.cdrc.ac.uk/dataset/business-census (requires login)
`encoding = "ISO-8859-1"`
- get geometries
- either geocode addresses (could be expensive)
- or link to postcode points
## Workplace density
Download workplace population data from the Scottish and English censuses, combine them, and link to geometry.
```
download('http://www.scotlandscensus.gov.uk/documents/additional_tables/WP605SCwz.csv', '../../urbangrammar_samba/functional_data/employment/workplace/scotland_industry.csv')
download('https://www.nomisweb.co.uk/api/v01/dataset/nm_1314_1.bulk.csv?time=latest&measures=20100&geography=TYPE262', '../../urbangrammar_samba/functional_data/employment/workplace/england_wales_industry.csv', timeout=60)
download('https://www.nomisweb.co.uk/api/v01/dataset/nm_1300_1.bulk.csv?time=latest&measures=20100&geography=2013265922TYPE299', '../../urbangrammar_samba/functional_data/employment/workplace/north_west.csv')
download('https://www.nomisweb.co.uk/api/v01/dataset/nm_1300_1.bulk.csv?time=latest&measures=20100&geography=2013265926TYPE299', '../../urbangrammar_samba/functional_data/employment/workplace/east.csv')
download('https://www.nomisweb.co.uk/api/v01/dataset/nm_1300_1.bulk.csv?time=latest&measures=20100&geography=2013265924TYPE299', '../../urbangrammar_samba/functional_data/employment/workplace/east_midlands.csv')
download('https://www.nomisweb.co.uk/api/v01/dataset/nm_1300_1.bulk.csv?time=latest&measures=20100&geography=2013265927TYPE299', '../../urbangrammar_samba/functional_data/employment/workplace/london.csv')
download('https://www.nomisweb.co.uk/api/v01/dataset/nm_1300_1.bulk.csv?time=latest&measures=20100&geography=2013265921TYPE299', '../../urbangrammar_samba/functional_data/employment/workplace/north_east.csv')
download('https://www.nomisweb.co.uk/api/v01/dataset/nm_1300_1.bulk.csv?time=latest&measures=20100&geography=2013265928TYPE299', '../../urbangrammar_samba/functional_data/employment/workplace/south_east.csv', timeout=30)
download('https://www.nomisweb.co.uk/api/v01/dataset/nm_1300_1.bulk.csv?time=latest&measures=20100&geography=2013265929TYPE299', '../../urbangrammar_samba/functional_data/employment/workplace/south_west.csv')
download('https://www.nomisweb.co.uk/api/v01/dataset/nm_1300_1.bulk.csv?time=latest&measures=20100&geography=2013265925TYPE299', '../../urbangrammar_samba/functional_data/employment/workplace/west_midlands.csv')
download('https://www.nomisweb.co.uk/api/v01/dataset/nm_1300_1.bulk.csv?time=latest&measures=20100&geography=2013265923TYPE299', '../../urbangrammar_samba/functional_data/employment/workplace/yorkshire.csv')
download('https://www.nrscotland.gov.uk/files/geography/output-area-2011-mhw.zip', '../../urbangrammar_samba/functional_data/employment/workplace/scotland_oa', kind='zip')
download('https://www.nomisweb.co.uk/api/v01/dataset/nm_155_1.bulk.csv?time=latest&measures=20100&geography=TYPE262', '../../urbangrammar_samba/functional_data/employment/workplace/wp_density_ew.csv', timeout=30)
download('https://www.nrscotland.gov.uk/files//geography/products/workplacezones2011scotland.zip', '../../urbangrammar_samba/functional_data/employment/workplace/wpz_scotland', kind='zip')
download('http://www.scotlandscensus.gov.uk/documents/additional_tables/WP102SCca.csv', '../../urbangrammar_samba/functional_data/employment/workplace/wp_density_scotland.csv')
download('http://www.scotlandscensus.gov.uk/documents/additional_tables/WP103SCwz.csv', '../../urbangrammar_samba/functional_data/employment/workplace/wp_pop_scotland.csv')
with zipfile.ZipFile("../../urbangrammar_samba/functional_data/employment/workplace/wz2011ukbgcv2.zip", 'r') as zip_ref:
zip_ref.extractall("../../urbangrammar_samba/functional_data/employment/workplace/")
wpz_geom = gpd.read_file('../../urbangrammar_samba/functional_data/employment/workplace/WZ_2011_UK_BGC_V2.shp')
wpz_geom
wpz_ew = pd.read_csv("../../urbangrammar_samba/functional_data/employment/workplace/wp_density_ew.csv")
wpz_ew
wpz = wpz_geom[['WZ11CD', 'LAD_DCACD', 'geometry']].merge(wpz_ew[['geography code', 'Area/Population Density: All usual residents; measures: Value']], left_on='WZ11CD', right_on='geography code', how='left')
scot = pd.read_csv("../../urbangrammar_samba/functional_data/employment/workplace/wp_pop_scotland.csv", header=5)
wpz = wpz.merge(scot[['Unnamed: 0', 'Total']], left_on='WZ11CD', right_on='Unnamed: 0', how='left')
wpz.Total = wpz.Total.astype(str).apply(lambda x: x.replace(',', '') if ',' in x else x).astype(float)
wpz['count'] = wpz['Area/Population Density: All usual residents; measures: Value'].astype(float).fillna(0) + wpz.Total.fillna(0)
wpz = wpz[~wpz.WZ11CD.str.startswith('N')]
wpz[['geography code', 'count', 'geometry']].to_parquet('../../urbangrammar_samba/functional_data/employment/workplace/workplace_population_gb.pq')
wpz_ind_s = pd.read_csv('../../urbangrammar_samba/functional_data/employment/workplace/scotland_industry.csv', skiprows=4)
wpz_ind_s = wpz_ind_s.loc[4:5378].drop(columns=[c for c in wpz_ind_s.columns if 'Unnamed' in c])
wpz_ind_s
wpz_ind_s.columns
wpz_ind_ew = pd.read_csv('../../urbangrammar_samba/functional_data/employment/workplace/england_wales_industry.csv')
wpz_ind_ew.columns
wpz_ind_ew['A, B, D, E. Agriculture, energy and water'] = wpz_ind_ew[[c for c in wpz_ind_ew.columns[4:] if c[10] in ['A', 'B', 'D', 'E']]].sum(axis=1)
wpz_ind_ew['C. Manufacturing'] = wpz_ind_ew[[c for c in wpz_ind_ew.columns[4:] if c[10] in ['C']]].sum(axis=1)
wpz_ind_ew['F. Construction'] = wpz_ind_ew[[c for c in wpz_ind_ew.columns[4:] if c[10] in ['F']]].sum(axis=1)
wpz_ind_ew['G, I. Distribution, hotels and restaurants'] = wpz_ind_ew[[c for c in wpz_ind_ew.columns[4:] if c[10] in ['G', 'I']]].sum(axis=1)
wpz_ind_ew['H, J. Transport and communication'] = wpz_ind_ew[[c for c in wpz_ind_ew.columns[4:] if c[10] in ['H', 'J']]].sum(axis=1)
wpz_ind_ew['K, L, M, N. Financial, real estate, professional and administrative activities'] = wpz_ind_ew[[c for c in wpz_ind_ew.columns[4:] if c[10] in ['K', 'L', 'M', 'N']]].sum(axis=1)
wpz_ind_ew['O,P,Q. Public administration, education and health'] = wpz_ind_ew[[c for c in wpz_ind_ew.columns[4:] if c[10] in ['O', 'P', 'Q']]].sum(axis=1)
wpz_ind_ew['R, S, T, U. Other'] = wpz_ind_ew[[c for c in wpz_ind_ew.columns[4:] if c[10] in ['R', 'S', 'T', 'U']]].sum(axis=1)
wpz = wpz_ind_ew[['geography code'] + list(wpz_ind_ew.columns[-8:])].append(wpz_ind_s.rename(columns={'2011 Workplace Zone': 'geography code'}).drop(columns='All workplace population aged 16 to 74'))
wpz_merged = wpz_geom.merge(wpz, left_on='WZ11CD', right_on='geography code', how='left')
wpz_merged = wpz_merged[~wpz_merged.WZ11CD.str.startswith('N')]
wpz_merged = wpz_merged.reset_index(drop=True)[list(wpz.columns) + ['geometry']]
wpz_merged.columns
for c in wpz_merged.columns[1:-1]:
wpz_merged[c] = wpz_merged[c].astype(str).apply(lambda x: x.replace(',', '') if ',' in x else x).astype(float)
wpz_merged
wpz_merged.to_parquet('../../urbangrammar_samba/functional_data/employment/workplace/workplace_by_industry_gb.pq')
%%time
pois = []
for i in tqdm(range(103), total=103):
nodes = gpd.read_parquet(f'../../urbangrammar_samba/spatial_signatures/morphometrics/nodes/nodes_{i}.pq')
poly = nodes.to_crs(4326).unary_union.convex_hull
tags = {'amenity': ['cinema', 'theatre']}
pois.append(ox.geometries.geometries_from_polygon(poly, tags))
pois_merged = pd.concat(pois)
pois_merged
pois_merged.drop_duplicates(subset='unique_id')[['amenity', 'name', 'geometry']].to_crs(27700).to_parquet('../../urbangrammar_samba/functional_data/pois/culture_gb.pq')
```
## Corine land cover
Corine - get link from https://land.copernicus.eu/pan-european/corine-land-cover
We need to extract data, clip to GB and reproject to OSGB.
```
download('https://land.copernicus.eu/land-files/afd643e4508e9dd7af7659c1fb1d75017ba6d9f4.zip', '../../urbangrammar_samba/functional_data/land_use/corine', kind='zip')
with zipfile.ZipFile("../../urbangrammar_samba/functional_data/land_use/corine/u2018_clc2018_v2020_20u1_geoPackage.zip", 'r') as zip_ref:
zip_ref.extractall("../../urbangrammar_samba/functional_data/land_use/corine")
extent = gpd.read_parquet("../../urbangrammar_samba/spatial_signatures/local_auth_chunks.pq")
corine_gdf = gpd.read_file("../../urbangrammar_samba/functional_data/land_use/corine/u2018_clc2018_v2020_20u1_geoPackage/DATA/U2018_CLC2018_V2020_20u1.gpkg", mask=extent)
corine_gdf.to_crs(27700).to_parquet("../../urbangrammar_samba/functional_data/land_use/corine/corine_gb.pq")
```
## Land cover classification
Land cover classification - get link from https://cds.climate.copernicus.eu/cdsapp#!/dataset/satellite-land-cover?tab=form
We need to clip it to the extent of GB (dataset has a global coverage) and reproject to OSGB.
```
download('http://136.156.133.37/cache-compute-0011/cache/data0/dataset-satellite-land-cover-c20f5b30-2bdb-4f69-a21e-c8f2e696e715.zip', '../../urbangrammar_samba/functional_data/land_use/lcc', kind='zip' )
lcc = ra.open_rasterio("../../urbangrammar_samba/functional_data/land_use/lcc/C3S-LC-L4-LCCS-Map-300m-P1Y-2018-v2.1.1.nc")
lccs = lcc[0].lccs_class
extent.total_bounds
lccs_gb = lccs.sel(x=slice(-9, 2), y=slice(61, 49))
lccs_gb = lccs_gb.rio.set_crs(4326)
lccs_osgb = lccs_gb.rio.reproject(pyproj.CRS(27700).to_wkt())
lccs_osgb.rio.to_raster("../../urbangrammar_samba/functional_data/land_use/lcc/lccs_osgb.tif")
```
# GLM: Negative Binomial Regression
```
%matplotlib inline
import numpy as np
import pandas as pd
import pymc3 as pm
from scipy import stats
import matplotlib.pyplot as plt
plt.style.use('seaborn-darkgrid')
import seaborn as sns
import re
print('Running on PyMC3 v{}'.format(pm.__version__))
```
This notebook demos negative binomial regression using the `glm` submodule. It closely follows the GLM Poisson regression example by [Jonathan Sedar](https://github.com/jonsedar) (which is in turn inspired by [a project by Ian Osvald](http://ianozsvald.com/2016/05/07/statistically-solving-sneezes-and-sniffles-a-work-in-progress-report-at-pydatalondon-2016/)) except the data here is negative binomially distributed instead of Poisson distributed.
Negative binomial regression is used to model count data for which the variance is higher than the mean. The [negative binomial distribution](https://en.wikipedia.org/wiki/Negative_binomial_distribution) can be thought of as a Poisson distribution whose rate parameter is gamma distributed, so that rate parameter can be adjusted to account for the increased variance.
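For reference (an added sketch, not part of the original example): this gamma-Poisson mixture is the same distribution as scipy's `nbinom` with `n = alpha` and `p = alpha / (alpha + mu)`, which makes the overdispersion explicit as variance `mu + mu**2 / alpha`:

```python
from scipy import stats

mu, alpha = 6.0, 10.0  # mean and gamma shape, echoing the example below
dist = stats.nbinom(alpha, alpha / (alpha + mu))

print(dist.mean())  # 6.0  -> equals mu
print(dist.var())   # 9.6  -> mu + mu**2 / alpha, larger than the mean
```

The extra `mu**2 / alpha` term is what distinguishes negative binomial data from Poisson data, where the variance equals the mean.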
### Convenience Functions
Taken from the Poisson regression example.
```
def plot_traces(trcs, varnames=None):
'''Plot traces with overlaid means and values'''
nrows = len(trcs.varnames)
if varnames is not None:
nrows = len(varnames)
ax = pm.traceplot(trcs, varnames=varnames, figsize=(12,nrows*1.4),
lines={k: v['mean'] for k, v in
pm.summary(trcs,varnames=varnames).iterrows()})
for i, mn in enumerate(pm.summary(trcs, varnames=varnames)['mean']):
ax[i,0].annotate('{:.2f}'.format(mn), xy=(mn,0), xycoords='data',
xytext=(5,10), textcoords='offset points', rotation=90,
va='bottom', fontsize='large', color='#AA0022')
def strip_derived_rvs(rvs):
'''Remove PyMC3-generated RVs from a list'''
ret_rvs = []
for rv in rvs:
if not (re.search('_log',rv.name) or re.search('_interval',rv.name)):
ret_rvs.append(rv)
return ret_rvs
```
### Generate Data
As in the Poisson regression example, we assume that sneezing occurs at some baseline rate, and that consuming alcohol, not taking antihistamines, or doing both, increase its frequency.
#### Poisson Data
First, let's look at some Poisson distributed data from the Poisson regression example.
```
np.random.seed(123)
# Mean Poisson values
theta_noalcohol_meds = 1 # no alcohol, took an antihist
theta_alcohol_meds = 3 # alcohol, took an antihist
theta_noalcohol_nomeds = 6 # no alcohol, no antihist
theta_alcohol_nomeds = 36 # alcohol, no antihist
# Create samples
q = 1000
df_pois = pd.DataFrame({
'nsneeze': np.concatenate((np.random.poisson(theta_noalcohol_meds, q),
np.random.poisson(theta_alcohol_meds, q),
np.random.poisson(theta_noalcohol_nomeds, q),
np.random.poisson(theta_alcohol_nomeds, q))),
'alcohol': np.concatenate((np.repeat(False, q),
np.repeat(True, q),
np.repeat(False, q),
np.repeat(True, q))),
'nomeds': np.concatenate((np.repeat(False, q),
np.repeat(False, q),
np.repeat(True, q),
np.repeat(True, q)))})
df_pois.groupby(['nomeds', 'alcohol'])['nsneeze'].agg(['mean', 'var'])
```
Since the mean and variance of a Poisson distributed random variable are equal, the sample means and variances are very close.
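A quick numerical check of that property (an added sketch, not from the original notebook):

```python
import numpy as np

rng = np.random.default_rng(0)
samples = rng.poisson(lam=6, size=100_000)

# For a Poisson variable the mean and variance are both lambda
print(samples.mean(), samples.var())  # both close to 6
```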
#### Negative Binomial Data
Now, suppose every subject in the dataset had the flu, increasing the variance of their sneezing (and causing an unfortunate few to sneeze over 70 times a day). If the mean number of sneezes stays the same but variance increases, the data might follow a negative binomial distribution.
```
# Gamma shape parameter
alpha = 10
def get_nb_vals(mu, alpha, size):
"""Generate negative binomially distributed samples by
drawing a sample from a gamma distribution with mean `mu` and
shape parameter `alpha`, then drawing from a Poisson
distribution whose rate parameter is given by the sampled
gamma variable.
"""
g = stats.gamma.rvs(alpha, scale=mu / alpha, size=size)
return stats.poisson.rvs(g)
# Create samples
n = 1000
df = pd.DataFrame({
'nsneeze': np.concatenate((get_nb_vals(theta_noalcohol_meds, alpha, n),
get_nb_vals(theta_alcohol_meds, alpha, n),
get_nb_vals(theta_noalcohol_nomeds, alpha, n),
get_nb_vals(theta_alcohol_nomeds, alpha, n))),
'alcohol': np.concatenate((np.repeat(False, n),
np.repeat(True, n),
np.repeat(False, n),
np.repeat(True, n))),
'nomeds': np.concatenate((np.repeat(False, n),
np.repeat(False, n),
np.repeat(True, n),
np.repeat(True, n)))})
df.groupby(['nomeds', 'alcohol'])['nsneeze'].agg(['mean', 'var'])
```
As in the Poisson regression example, we see that drinking alcohol and/or not taking antihistamines increase the sneezing rate to varying degrees. Unlike in that example, for each combination of `alcohol` and `nomeds`, the variance of `nsneeze` is higher than the mean. This suggests that a Poisson distribution would be a poor fit for the data, since the mean and variance of a Poisson distribution are equal.
### Visualize the Data
```
g = sns.factorplot(x='nsneeze', row='nomeds', col='alcohol', data=df, kind='count', aspect=1.5)
# Make x-axis ticklabels less crowded
ax = g.axes[1, 0]
labels = range(len(ax.get_xticklabels(which='both')))
ax.set_xticks(labels[::5])
ax.set_xticklabels(labels[::5]);
```
## Negative Binomial Regression
### Create GLM Model
```
fml = 'nsneeze ~ alcohol + nomeds + alcohol:nomeds'
with pm.Model() as model:
pm.glm.GLM.from_formula(formula=fml, data=df, family=pm.glm.families.NegativeBinomial())
# Old initialization
# start = pm.find_MAP(fmin=optimize.fmin_powell)
# C = pm.approx_hessian(start)
# trace = pm.sample(4000, step=pm.NUTS(scaling=C))
trace = pm.sample(1000, tune=2000, cores=2)
```
### View Results
```
rvs = [rv.name for rv in strip_derived_rvs(model.unobserved_RVs)]
plot_traces(trace, varnames=rvs);
# Transform coefficients to recover parameter values
np.exp(pm.summary(trace, varnames=rvs)[['mean','hpd_2.5','hpd_97.5']])
```
The mean values are close to the values we specified when generating the data:
- The base rate is a constant 1.
- Drinking alcohol triples the base rate.
- Not taking antihistamines increases the base rate by 6 times.
- Drinking alcohol and not taking antihistamines doubles the rate that would be expected if their effects were independent. If they were independent, doing both would increase the base rate by 3\*6=18 times, but instead the base rate is increased by 3\*6\*2=36 times.
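Because the GLM uses a log link, coefficients add on the log scale and multiply on the rate scale. With hypothetical coefficients chosen to match the generating rates (illustrative values, not the fitted ones):

```python
import numpy as np

# Hypothetical log-scale coefficients (chosen to match the generating rates)
b_intercept = np.log(1)    # base rate 1
b_alcohol = np.log(3)      # alcohol triples the rate
b_nomeds = np.log(6)       # no antihistamines: 6x the rate
b_interaction = np.log(2)  # doing both adds a further 2x

# Expected rate for alcohol + no meds: exponentiate the summed coefficients
rate = np.exp(b_intercept + b_alcohol + b_nomeds + b_interaction)
print(rate)  # 3 * 6 * 2 = 36
```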
Finally, even though the sample for `mu` is highly skewed, its median value is close to the sample mean, and the mean of `alpha` is also quite close to its actual value of 10.
```
np.percentile(trace['mu'], [25,50,75])
df.nsneeze.mean()
trace['alpha'].mean()
```
# Multi-qubit quantum circuit
In this exercise we create a two-qubit circuit, with both qubits in superposition, and then measure the individual qubits, producing two coin-toss results with the following possible outcomes at equal probability: $|00\rangle$, $|01\rangle$, $|10\rangle$, and $|11\rangle$. This is like tossing two coins.
Import the required libraries, including the IBM Q library for working with IBM Q hardware.
```
import numpy as np
from qiskit import QuantumCircuit, execute, Aer
from qiskit.tools.monitor import job_monitor
# Import visualization
from qiskit.visualization import plot_histogram, plot_bloch_multivector, iplot_bloch_multivector, plot_state_qsphere, iplot_state_qsphere
# Add the state vector calculation function
def get_psi(circuit, vis):
global psi
backend = Aer.get_backend('statevector_simulator')
psi = execute(circuit, backend).result().get_statevector(circuit)
if vis=="IQ":
display(iplot_state_qsphere(psi))
elif vis=="Q":
display(plot_state_qsphere(psi))
elif vis=="M":
print(psi)
elif vis=="B":
display(plot_bloch_multivector(psi))
else: # vis="IB"
display(iplot_bloch_multivector(psi))
vis=""
```
How many qubits do we want to use? The notebook lets you set up multi-qubit circuits of various sizes. Keep in mind that the biggest publicly available IBM quantum computer is 14 qubits in size.
```
#n_qubits=int(input("Enter number of qubits:"))
n_qubits=2
```
Create quantum circuit that includes the quantum register and the classic register. Then add a Hadamard (super position) gate to all the qubits. Add measurement gates.
```
qc1 = QuantumCircuit(n_qubits,n_qubits)
qc_measure = QuantumCircuit(n_qubits,n_qubits)
for qubit in range (0,n_qubits):
qc1.h(qubit) #A Hadamard gate that creates a superposition
for qubit in range (0,n_qubits):
qc_measure.measure(qubit,qubit)
display(qc1.draw(output="mpl"))
```
Now that we have more than one qubit, it is starting to become a bit difficult to visualize the outcomes when running the circuit. To alleviate this we can instead have `get_psi` return the statevector itself by calling it with the vis parameter set to `"M"`. We can also have it display a Qiskit-unique visualization called a Q sphere by passing `"IQ"` for an interactive version or `"Q"` for a static one.
```
get_psi(qc1,"M")
print (abs(np.square(psi)))
get_psi(qc1,"B")
```
Now we see the statevector for multiple qubits, and can calculate the probabilities for the different outcomes by squaring the complex parameters in the vector.
The Q sphere visualization provides the same information in a visual form, with |0..0> at the north pole, |1..1> at the bottom, and other combinations on latitude circles. In the interactive version, you can hover over the tips of the vectors to see the state, probability, and phase data. In the static version, the size of the vector tip represents the relative probability of getting that specific result, and the color represents the phase angle for that specific output. More on that later!
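For example, for two qubits both placed in superposition by Hadamards, the statevector has four equal amplitudes, and squaring their magnitudes recovers the equal outcome probabilities (a hand-built sketch, not the notebook's `get_psi` output):

```python
import numpy as np

# Statevector of H|0> on each of two qubits: all four amplitudes equal
psi = np.full(4, 0.5 + 0j)

probs = np.abs(psi) ** 2
print(probs)        # [0.25 0.25 0.25 0.25]
print(probs.sum())  # 1.0
```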
Now combine your circuit with the measurement circuit and run 1,000 shots to get statistics on the possible outcomes.
```
backend = Aer.get_backend('qasm_simulator')
qc_final=qc1+qc_measure
job = execute(qc_final, backend, shots=1000)
counts1 = job.result().get_counts(qc_final)
print(counts1)
plot_histogram(counts1)
```
As you might expect, with two independent qubits each in a superposition, the resulting outcomes should be spread evenly across the possible outcomes: all the combinations of 0 and 1.
**Time for you to do some work!** To get an understanding of the probable outcomes and how these are displayed on the interactive (or static) Q Sphere, change the `n_qubits=2` value in the cell above, and run the cells again for a different number of qubits.
When you are done, set the value back to 2, and continue on.
```
n_qubits=2
```
# Entangled-qubit quantum circuit - The Bell state
Now we are going to do something different. We will entangle the qubits.
Create quantum circuit that includes the quantum register and the classic register. Then add a Hadamard (super position) gate to the first qubit. Then add a controlled-NOT gate (cx) between the first and second qubit, entangling them. Add measurement gates.
We then take a look at using the CX (controlled-NOT) gate to entangle the two qubits in a so-called Bell state. This surprisingly results in only the following possible outcomes, with equal probability: $|00\rangle$ and $|11\rangle$. Two entangled qubits do not at all behave like two tossed coins.
We then run the circuit a large number of times to see what the statistical behavior of the qubits are.
Finally, we run the circuit on real IBM Q hardware to see how real physical qubits behave.
In this exercise we introduce the CX gate, which creates entanglement between two qubits, by flipping the controlled qubit (q_1) if the controlling qubit (q_0) is 1.

```
qc2 = QuantumCircuit(n_qubits,n_qubits)
qc2_measure = QuantumCircuit(n_qubits, n_qubits)
for qubit in range (0,n_qubits):
qc2_measure.measure(qubit,qubit)
qc2.h(0) # A Hadamard gate that puts the first qubit in superposition
display(qc2.draw(output="mpl"))
get_psi(qc2,"M")
get_psi(qc2,"B")
for qubit in range (1,n_qubits):
qc2.cx(0,qubit) #A controlled NOT gate that entangles the qubits.
display(qc2.draw(output="mpl"))
get_psi(qc2, "B")
```
Now we notice something peculiar: after we add the CX gate, entangling the qubits, the Bloch spheres display nonsense. Why is that? It turns out that once your qubits are entangled they can no longer be described individually, but only as a combined object. Let's take a look at the state vector and Q sphere.
```
get_psi(qc2,"M")
print (abs(np.square(psi)))
get_psi(qc2,"Q")
```
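The claim that entangled qubits have no individual description can be checked numerically: tracing out either qubit of the Bell state leaves the maximally mixed state $I/2$, which points in no direction on the Bloch sphere (a hand-built sketch, not notebook output):

```python
import numpy as np

# Bell state (|00> + |11>) / sqrt(2) as a 4-component statevector
psi = np.array([1.0, 0.0, 0.0, 1.0]) / np.sqrt(2)

# Density matrix, reshaped so each qubit gets its own pair of indices
rho = np.outer(psi, psi.conj()).reshape(2, 2, 2, 2)

# Partial trace over one qubit's index pair leaves the other qubit's state
rho_reduced = np.trace(rho, axis1=0, axis2=2)
print(rho_reduced)                          # [[0.5 0. ] [0.  0.5]] == I/2
print(np.trace(rho_reduced @ rho_reduced))  # purity 0.5: maximally mixed
```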
Set the backend to a local simulator. Then create a quantum job for the circuit on the selected backend that runs just one shot, simulating two simultaneously tossed coins, and run the job. Display the result, either 0 for up (base state) or 1 for down (excited state) for each qubit, and show it as a histogram: a single bar at |00> or |11> with 100% probability.
```
backend = Aer.get_backend('qasm_simulator')
qc2_final=qc2+qc2_measure
job = execute(qc2_final, backend, shots=1)
counts2 = job.result().get_counts(qc2_final)
print(counts2)
plot_histogram(counts2)
```
Note how the qubits completely agree. They are entangled.
**Do some work..** Run the cell above a few times to verify that you only get the results 00 or 11.
Now, let's run quite a few more shots and display the statistics for the two results. This time we are no longer looking at just two qubits, but at the amassed results of thousands of runs on them.
```
job = execute(qc2_final, backend, shots=1000)
result = job.result()
counts = result.get_counts()
print(counts)
plot_histogram(counts)
```
And look at that, we are back at our coin toss results: fifty-fifty. Every time one of the coins comes up heads (|0>), the other one follows suit. Tossing one coin, we immediately know what the other one will come up as; the coins (qubits) are entangled.
# Run your entangled circuit on an IBM quantum computer
**Important:** With the simulator we get perfect results, only |00> or |11>. On a real NISQ (Noisy Intermediate Scale Quantum computer) we do not expect perfect results like this. Let's run the Bell state once more, but on an actual IBM Q quantum computer.
**Time for some work!** Before you can run your program on IBM Q you must load your API key. If you are running this notebook in an IBM Qx environment, your API key is already stored in the system, but if you are running on your own machine you [must first store the key](https://qiskit.org/documentation/install.html#access-ibm-q-systems).
```
#Save and store API key locally.
from qiskit import IBMQ
#IBMQ.save_account('MY_API_TOKEN') <- Uncomment this line if you need to store your API key
#Load account information
IBMQ.load_account()
provider = IBMQ.get_provider()
```
Grab the least busy IBM Q backend.
```
from qiskit.providers.ibmq import least_busy
backend = least_busy(provider.backends(operational=True, simulator=False))
#backend = provider.get_backend('ibmqx2')
print("Selected backend:",backend.status().backend_name)
print("Number of qubits(n_qubits):", backend.configuration().n_qubits)
print("Pending jobs:", backend.status().pending_jobs)
```
Let's run a large number of shots and display the statistics for the two results, $|00\rangle$ and $|11\rangle$, on the real hardware. Monitor the job and display our place in the queue.
```
if n_qubits > backend.configuration().n_qubits:
print("Your circuit contains too many qubits (",n_qubits,"). Start over!")
else:
job = execute(qc2_final, backend, shots=1000)
job_monitor(job)
```
Get the results, and display in a histogram. Notice how we no longer just get the perfect entangled results, but also a few results that include non-entangled qubit results. At this stage, quantum computers are not perfect calculating machines, but pretty noisy.
```
result = job.result()
counts = result.get_counts(qc2_final)
print(counts)
plot_histogram(counts)
```
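One simple way to quantify the noise is the fraction of shots that landed in the two expected Bell outcomes. The counts below are made up for illustration, since real-device results vary from run to run:

```python
# Hypothetical hardware counts for the Bell circuit
counts = {'00': 472, '11': 459, '01': 38, '10': 31}

shots = sum(counts.values())
good = counts.get('00', 0) + counts.get('11', 0)
print(f"{good / shots:.1%} of shots in the expected Bell outcomes")  # 93.1%
```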
That was the simple readout. Let's take a look at the whole returned results:
```
print(result)
```
# Twitter Mining Function & Scatter Plots
---------------------------------------------------------------
```
# Import Dependencies
%matplotlib notebook
import os
import csv
import json
import requests
from pprint import pprint
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
from twython import Twython
import simplejson
import sys
import string
import glob
from pathlib import Path
# Import Twitter 'Keys' - MUST SET UP YOUR OWN 'config_twt.py' file
# You will need to create your own "config_twt.py" file using each of the Twitter authentication codes
# they provide you when you sign up for a developer account with your Twitter handle
from config_twt import (app_key_twt, app_secret_twt, oauth_token_twt, oauth_token_secret_twt)
# Set Up Consumer Keys And Secret with Twitter Keys
APP_KEY = app_key_twt
APP_SECRET = app_secret_twt
# Set up OAUTH Token and Secret With Twitter Keys
OAUTH_TOKEN = oauth_token_twt
OAUTH_TOKEN_SECRET = oauth_token_secret_twt
# Load Keys In To a Twython Function And Call It "twitter"
twitter = Twython(APP_KEY, APP_SECRET, OAUTH_TOKEN, OAUTH_TOKEN_SECRET)
# Setup Batch Counter For Phase 2
batch_counter = 0
```
___________________________
## Twitter Mining Function (TMF)
___________________________
### INSTRUCTIONS:
This Twitter Query Function will:
- Perform searches for hashtags (#)
- Search for "@twitter_user_acct"
- Provide mixed results of popular and most recent tweets from the last 7 days
- The 'remaining' search/tweets rate-limit allowance (180) regenerates 15 minutes after depletion
### Final outputs are aggregated queries in both:
- Pandas DataFrame of queried tweets
- CSV files saved in the same folder as this Jupyter Notebook
### Phase 1 - Run Query and Store The Dictionary Into a List
- Step 1) Run the 'Twitter Mining Function' cell below to begin program
Note:
- Limits have not been fully tested due to time constraints
- Search up to 180 queries; each query yields up to 100 tweets max
- Run the TLC to see how many queries you have left after each CSV output
- Step 2) When prompted, input in EITHER: #hashtag or @Twitter_user_account
Examples: "#thuglife" or "@beyonce"
- Step 3) The TMF will query Twitter and store the tweet data in a list called "all_data"
- Step 4) Upon query search completion, it will prompt: "Perform another search query:' ('y'/'n') "
- Input 'y' to query, and the program will append the results
- Tip: Keep count of how many 'search tweets' you have; each query deducts 1 from 'remaining'
and can return up to 100 tweets of data
- Step 5) End program by entering 'n' when prompted for 'search again'
Output: printed list of all appended query data
### Phase 2 - Convert to Pandas DataFrame and Produce a CSV Output
- Step 6) Loop Through Queried Data
- Step 7) Convert to Pandas DataFrame
- Step 8) Convert from DataFrame to CSV
### Addtional Considerations:
- Current setup uses standard search API keys, not premium
- The TMF returns up to 100 tweets at a time, pulled from the last 7 days in random order
- You will likely have to run multiple searches and track the line-item counts
in each of the CSV files created in the same folder
### Tweet Limit Counter (TLC)
- Run cell to see how many search queries you have available
- Your 'remaining' search tweets regenerates over 15 minutes.
```
# TLC - Run to Query Current Rate Limit on API Keys
twitter.get_application_rate_limit_status()['resources']['search']
#Twitter Mining Function (TMF)
#RUN THIS CELL TO BEGIN PROGRAM!
print('-'*80)
print("TWITTER QUERY FUNCTION - BETA")
print('-'*80)
print("INPUT PARAMETERS:")
print("- @Twitter_handle e.g. @clashofclans")
print("- Hashtags (#) e.g. #THUGLIFE")
print("NOTE: SCROLL DOWN AFTER EACH QUERY TO ENTER INPUT")
print('-'*80)
def twitter_search(app_search):
# Store the following Twython function and parameters into variable 't'
t = Twython(app_key=APP_KEY,
app_secret=APP_SECRET,
oauth_token=OAUTH_TOKEN,
oauth_token_secret=OAUTH_TOKEN_SECRET)
# The Twitter Mining Function we will use to run searches is below
# and we're asking for it to pull 100 tweets
search = t.search(q=app_search, count=100)
tweets = search['statuses']
# This will be a list of dictionaries of each tweet where the loop below will append to
all_data = []
# From the tweets, go into each individual tweet and extract the following into a 'dictionary'
# and append it to big bucket called 'all_data'
for tweet in tweets:
try:
tweets_data = {
"Created At":tweet['created_at'],
"Text (Tweet)":tweet['text'],
"User ID":tweet['user']['id'],
"User Followers Count":tweet['user']['followers_count'],
"Screen Name":tweet['user']['name'],
"ReTweet Count":tweet['retweet_count'],
"Favorite Count":tweet['favorite_count']}
all_data.append(tweets_data)
#print(tweets_data)
except (KeyError, NameError, TypeError, AttributeError) as err:
print(f"{err} Skipping...")
#functions need to return something...
return all_data
# The On and Off Mechanisms:
search_again = 'y'
final_all_data = []
# initialize the query counter
query_counter = 0
while search_again == 'y':
query_counter += 1
start_program = str(input('Type the EXACT @twitter_acct or #hashtag to query: '))
all_data = twitter_search(start_program)
final_all_data += all_data
#print(all_data)
print(f"Completed Collecting Search Results for {start_program} . Queries Completed: {query_counter} ")
print('-'*80)
search_again = input("Would you like to run another query? Enter 'y'. Otherwise, 'n' or another response will end query mode. ")
print('-'*80)
# When you exit the program, set the query counter back to zero
query_counter = 0
print()
print(f"Phase 1 of 2 Queries Completed . Proceed to Phase 2 - Convert Collection to DF and CSV formats .")
#print("final Data", final_all_data)
#####################################################################################################
# TIPS!: # If you're searching for the same hastag or twitter_handle,
# consider copying and pasting it (e.g. @fruitninja)
# Display the total tweets the TMF successfully pulled:
print(len(final_all_data))
```
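To stay clear of the limit, you can also space queries evenly across the 15-minute window instead of watching the counter. The helper below is our own sketch, not part of the TMF; the numbers mirror the limits described above:

```python
import time

SEARCH_LIMIT = 180        # queries allowed per window (standard search API)
WINDOW_SECONDS = 15 * 60  # the window over which the allowance regenerates

def min_delay(limit=SEARCH_LIMIT, window=WINDOW_SECONDS):
    """Smallest even spacing between queries that never exhausts the window."""
    return window / limit

print(min_delay())  # 5.0 seconds
# In the query loop: time.sleep(min_delay()) between searches
```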
### Tweet Limit Counter (TLC)
- Run cell to see how many search queries you have available
- Your 'remaining' search tweets regenerates over 15 minutes.
```
# Run to view current rate limit status
twitter.get_application_rate_limit_status()['resources']['search']
#df = pd.DataFrame(final_all_data[0])
#df
final_all_data
```
### Step 6) Loop through the stored list of queried tweets from final_all_data and store them in designated lists
```
# Loop thru finall_all_data (list of dictionaries) and extract each item and store them into
# the respective lists
# BUCKETS
created_at = []
tweet_text = []
user_id = []
user_followers_count = []
screen_name = []
retweet_count = []
likes_count = []
# append tweets data to the buckets for each tweet
#change to final_all_data
for data in final_all_data:
#print(keys, data[keys])
created_at.append(data["Created At"]),
tweet_text.append(data['Text (Tweet)']),
user_id.append(data['User ID']),
user_followers_count.append(data['User Followers Count']),
screen_name.append(data['Screen Name']),
retweet_count.append(data['ReTweet Count']),
likes_count.append(data['Favorite Count'])
#print(created_at, tweet_text, user_id, user_followers_count, screen_name, retweet_count, likes_count)
print("Run complete. Proceed to next cell.")
```
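As an aside, the per-field buckets above can be skipped entirely: pandas builds a DataFrame straight from a list of dicts like `final_all_data`. A sketch with made-up records:

```python
import pandas as pd

# Stand-in records with the same shape as the TMF's tweet dictionaries
records = [
    {"Created At": "Mon Jan 01", "Text (Tweet)": "hello", "ReTweet Count": 2},
    {"Created At": "Tue Jan 02", "Text (Tweet)": "world", "ReTweet Count": 5},
]

tweets_df = pd.DataFrame(records)  # one column per dict key
print(tweets_df.shape)  # (2, 3)
```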
### Step 7) Convert to Pandas DataFrame
```
# Setup DataFrame and run tweets_data_df
tweets_data_df = pd.DataFrame({
"Created At": created_at,
"Screen Name": screen_name,
"User ID": user_id,
"User Follower Count": user_followers_count,
"Likes Counts": likes_count,
"ReTweet Count": retweet_count,
"Tweet Text" : tweet_text
})
tweets_data_df.head()
```
### Step 8) Load into MySQL Database - later added this piece to display ETL potential of this project
```
# This section was added later after I reviewed and wanted to briefly reiterate on it
tweets_data_df2 = tweets_data_df.copy()
# Dropped Screen Name and Tweet Text because I would need to clean the 'Screen Name' and 'Tweet Text' columns
tweets_data_df2 = tweets_data_df2.drop(["Screen Name", "Tweet Text"], axis=1).sort_values(by="User Follower Count")
# Import Dependencies 2/2:
from sqlalchemy import create_engine
from sqlalchemy.sql import select
from sqlalchemy_utils import database_exists, create_database, drop_database, has_index
import pymysql
rds_connection_string = "root:PASSWORD_HERE@127.0.0.1/"
#db_name = input("What database would you like to search for?")
db_name = 'twitterAPI_data_2019_db'
# Setup engine connection string
engine = create_engine(f'mysql://{rds_connection_string}{db_name}?charset=utf8', echo=True)
# Create a function incorporating SQLAlchemy to search, create, and/or drop a database:
def search_create_drop_db(db_name):
db_exist = database_exists(f'mysql://{rds_connection_string}{db_name}')
db_url = f'mysql://{rds_connection_string}{db_name}'
if db_exist == True:
drop_table_y_or_n = input(f'"{db_name}" database already exists in MySQL. Do you want to drop the table? Enter exactly: "y" or "n". ')
if drop_table_y_or_n == 'y':
drop_database(db_url)
print(f"Database {db_name} was dropped")
create_new_db = input(f"Do you want to create another database called: {db_name}? ")
if create_new_db == 'y':
create_database(db_url)
return(f"The database {db_name} was created. Next You will need to create tables for this database. ")
else:
return("No database was created. Goodbye! ")
else:
return("The database exists. No action was taken. Goodbye! ")
else:
create_database(db_url)
return(f"The queried database did not exist, and was created as: {db_name} . ")
search_create_drop_db(db_name)
tweets_data_df2.to_sql('tweets', con=engine, if_exists='append')
```
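Hardcoding the MySQL password in the connection string is risky if the notebook is shared. A minimal sketch of building the URL from an environment variable instead (`MYSQL_PASSWORD` and `build_mysql_url` are names assumed for this example, not part of the original project):

```python
import os

# Sketch: build the SQLAlchemy connection URL without a hardcoded password.
# MYSQL_PASSWORD is an assumed environment variable name for this example.
def build_mysql_url(db_name, user="root", host="127.0.0.1"):
    password = os.environ.get("MYSQL_PASSWORD", "")
    return f"mysql://{user}:{password}@{host}/{db_name}?charset=utf8"

os.environ["MYSQL_PASSWORD"] = "example-password"  # demonstration only
print(build_mysql_url("twitterAPI_data_2019_db"))
```

The printed URL can then be passed straight to `create_engine`, exactly as in the cell above.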
### Step 9) Convert DataFrame to CSV File and save on local drive
```
# Save Tweets Data to a CSV File (Run Cell to input filename)
# Streamline the saving of multiple queries (1 query = up to 100 tweets) into a csv file.
# E.g. the input "#fruit_ninja" will save the result as "fruit_ninja_batch1.csv"
# Note: the first character will be sliced off, so you can copy and paste
# the #hashtag / @twitter_handle from the steps above
batch_name = input("Enter the batch name: ")
# If the kernel restarts, batch_counter must be re-initialized; guard against that here
if 'batch_counter' not in globals():
    batch_counter = 0
batch_counter = batch_counter + 1
# Check if the #hashtag / @twitter_handle folder exists and create the folder if it does not
Path(f"./resources/{batch_name[1:]}").mkdir(parents=True, exist_ok=True)
# Save the dataframe of all queries as a csv file in the matching resources subfolder
tweets_data_df.to_csv(f"./resources/{batch_name[1:]}/{batch_name[1:]}_batch{batch_counter}.csv", encoding='utf-8')
print(f"Output saved in current folder as: {batch_name[1:]}_batch{batch_counter}.csv ")
```
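Since `batch_counter` resets when the kernel restarts, one workaround (a sketch, not part of the original notebook; `next_batch_number` is a hypothetical helper) is to derive the next batch number from the files already saved on disk:

```python
import glob
import os
import re

# Hypothetical helper: scan the folder for "<name>_batch<N>.csv" files (the
# naming scheme used above) and return the next unused batch number, so the
# counter survives kernel restarts.
def next_batch_number(folder, name):
    numbers = []
    for path in glob.glob(os.path.join(folder, f"{name}_batch*.csv")):
        match = re.search(r"_batch(\d+)\.csv$", path)
        if match:
            numbers.append(int(match.group(1)))
    return max(numbers, default=0) + 1
```

With this, `batch_counter = next_batch_number(f"./resources/{batch_name[1:]}", batch_name[1:])` would replace the manual counter.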
# PHASE 3 - CALCULATIONS USING API DATA
```
# This prints out all of the folder titles in "resources" folder
path = './resources/*' # use your path
resources = glob.glob(path)
all_folders = []
print("All folders in the 'resources' folder:")
print("="*40)
for foldername in resources:
    foldername = foldername[12:]  # strip the leading './resources/' (12 characters)
    all_folders.append(foldername)
print("")
print(F"Total Folders: {len(all_folders)}")
print(all_folders)
all_TopApps_df_list = []
for foldername in all_folders:
    path = f'./resources/{foldername}'
    all_files = glob.glob(path + "/*.csv")
    app_dataframes = []
    for filename in all_files:
        df = pd.read_csv(filename, index_col=None, header=0)
        app_dataframes.append(df)
    output = pd.concat(app_dataframes, axis=0, ignore_index=True)
    all_TopApps_df_list.append(output)
```
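Every per-app block below repeats the same load → dedupe → aggregate pattern. A sketch of a reusable helper that could replace most of that repetition (the `summarize` and `app_stats` names are mine, not from the original notebook):

```python
import glob
import pandas as pd

# Columns produced in Step 7; duplicates are dropped on exactly these.
COLUMNS = ['Created At', 'Screen Name', 'User ID', 'User Follower Count',
           'Likes Counts', 'ReTweet Count', 'Tweet Text']

def summarize(df):
    """Dedupe a raw tweets DataFrame and return the four per-app metrics."""
    deduped = df.drop_duplicates(COLUMNS)
    return {
        'total_tweets': len(deduped),
        'avg_followers': deduped['User Follower Count'].mean(),
        'total_likes': deduped['Likes Counts'].sum(),
        'avg_retweets': deduped['ReTweet Count'].mean(),
    }

def app_stats(folder):
    """Load every CSV in ./resources/<folder>/ and summarize it."""
    files = glob.glob(f'./resources/{folder}/*.csv')
    frames = [pd.read_csv(f, index_col=None, header=0) for f in files]
    return summarize(pd.concat(frames, axis=0, ignore_index=True))
```

For example, `app_stats('facebook')` would yield the same four numbers computed by hand in the Facebook block below.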
### Facebook Calculations
```
# Example Template of looping thru csvfiles, and concatenate all of the csv files we collected in each folder
plug = 'facebook'
path = f'./resources/{plug}'
all_files = glob.glob(path + "/*.csv")
li = []
for filename in all_files:
    df = pd.read_csv(filename, index_col=None, header=0)
    li.append(df)
fb_frame = pd.concat(li, axis=0, ignore_index=True)
fb_frame
fb_frame.describe()
# Sort to set up removal of duplicates
fb_frame.sort_values(by=['User ID','Created At'], ascending=False)
# Drop Duplicates only for matching columns omitting the index
facebook_filtered_df = fb_frame.drop_duplicates(['Created At', 'Screen Name', 'User ID', 'User Follower Count', 'Likes Counts', 'ReTweet Count', 'Tweet Text']).sort_values(by=['User ID','Created At'], ascending=False)
# Get New Snap Shot Statistics
facebook_filtered_df.describe()
facebook_filtered_df.head()
# Count total out of Unique Tweets
facebook_total_tweets = len(facebook_filtered_df['Tweet Text'])
facebook_total_tweets
# Calculate Facebook Avg Followers - doesn't make sense to sum.
facebook_avg_followers_ct = facebook_filtered_df['User Follower Count'].mean()
facebook_avg_followers_ct
# Total Likes of all tweets
facebook_total_likes = facebook_filtered_df['Likes Counts'].sum()
#facebook_avg_likes = facebook_filtered_df['Likes Counts'].mean()
facebook_total_likes
#facebook_avg_likes
# Facebook Retweets Stats:
#facebook_sum_retweets = facebook_filtered_df['ReTweet Count'].sum()
facebook_avg_retweets = facebook_filtered_df['ReTweet Count'].mean()
#facebook_sum_retweets
facebook_avg_retweets
```
### Instagram Calculations
```
plug = 'instagram'
path = f'./resources/{plug}'
all_files = glob.glob(path + "/*.csv")
li = []
for filename in all_files:
    df = pd.read_csv(filename, index_col=None, header=0)
    li.append(df)
instagram_source_df = pd.concat(li, axis=0, ignore_index=True)
instagram_source_df
# Snapshot Statistics
instagram_source_df.describe()
instagram_source_df.head()
# Sort to set up removal of duplicates
instagram_source_df.sort_values(by=['User ID','Created At'], ascending=False)
# Drop Duplicates only for matching columns omitting the index
instagram_filtered_df = instagram_source_df.drop_duplicates(['Created At', 'Screen Name', 'User ID', 'User Follower Count', 'Likes Counts', 'ReTweet Count', 'Tweet Text']).sort_values(by=['User ID','Created At'], ascending=False)
instagram_filtered_df
# Get New Snap Shot Statistics
instagram_filtered_df.describe()
# Count total out of Unique Tweets
instagram_total_tweets = len(instagram_filtered_df['Tweet Text'])
instagram_total_tweets
# Calculate Instagram Avg Followers - doesn't make sense to sum.
instagram_avg_followers_ct = instagram_filtered_df['User Follower Count'].mean()
instagram_avg_followers_ct
# Total Likes of all tweets
instagram_total_likes = instagram_filtered_df['Likes Counts'].sum()
#instagram_avg_likes = instagram_filtered_df['Likes Counts'].mean()
instagram_total_likes
#instagram_avg_likes
# Retweets Stats:
#instagram_sum_retweets = instagram_filtered_df['ReTweet Count'].sum()
instagram_avg_retweets = instagram_filtered_df['ReTweet Count'].mean()
#instagram_sum_retweets
instagram_avg_retweets
```
### Clash of Clans Calculations
```
plug = 'clashofclans'
path = f'./resources/{plug}'
all_files = glob.glob(path + "/*.csv")
li = []
for filename in all_files:
    df = pd.read_csv(filename, index_col=None, header=0)
    li.append(df)
coc_source_df = pd.concat(li, axis=0, ignore_index=True)
coc_source_df
# Snapshot Statistics
coc_source_df.describe()
coc_source_df.head()
# Sort to set up removal of duplicates
coc_source_df.sort_values(by=['User ID','Created At'], ascending=False)
# Drop Duplicates only for matching columns omitting the index
coc_filtered_df = coc_source_df.drop_duplicates(['Created At', 'Screen Name', 'User ID', 'User Follower Count', 'Likes Counts', 'ReTweet Count', 'Tweet Text']).sort_values(by=['User ID','Created At'], ascending=False)
coc_filtered_df.head()
# Get New Snap Shot Statistics
coc_filtered_df.describe()
# Count total out of Unique Tweets
coc_total_tweets = len(coc_filtered_df['Tweet Text'])
coc_total_tweets
# Calculate Clash of Clans Avg Followers - doesn't make sense to sum.
coc_avg_followers_ct = coc_filtered_df['User Follower Count'].mean()
coc_avg_followers_ct
# Total Likes of all tweets
coc_total_likes = coc_filtered_df['Likes Counts'].sum()
#coc_avg_likes = coc_filtered_df['Likes Counts'].mean()
coc_total_likes
#coc_avg_likes
# Retweets Stats:
#coc_sum_retweets = coc_filtered_df['ReTweet Count'].sum()
coc_avg_retweets = coc_filtered_df['ReTweet Count'].mean()
#coc_sum_retweets
coc_avg_retweets
```
### Temple Run Calculations
```
plug = 'templerun'
path = f'./resources/{plug}'
all_files = glob.glob(path + "/*.csv")
li = []
for filename in all_files:
    df = pd.read_csv(filename, index_col=None, header=0)
    li.append(df)
templerun_source_df = pd.concat(li, axis=0, ignore_index=True)
templerun_source_df
# Snapshot Statistics
templerun_source_df.describe()
#templerun_source_df.head()
# Sort to set up removal of duplicates
templerun_source_df.sort_values(by=['User ID','Created At'], ascending=False)
# Drop Duplicates only for matching columns omitting the index
templerun_filtered_df = templerun_source_df.drop_duplicates(['Created At', 'Screen Name', 'User ID', 'User Follower Count', 'Likes Counts', 'ReTweet Count', 'Tweet Text']).sort_values(by=['User ID','Created At'], ascending=False)
#templerun_filtered_df
#templerun_filtered_df.describe()
# Count total out of Unique Tweets
templerun_total_tweets = len(templerun_filtered_df['Tweet Text'])
templerun_total_tweets
# Calculate Temple Run Avg Followers - doesn't make sense to sum.
templerun_avg_followers_ct = templerun_filtered_df['User Follower Count'].mean()
templerun_avg_followers_ct
# Total Likes of all tweets
templerun_total_likes = templerun_filtered_df['Likes Counts'].sum()
#templerun_avg_likes = templerun_filtered_df['Likes Counts'].mean()
templerun_total_likes
#templerun_avg_likes
# Retweets Stats:
#templerun_sum_retweets = templerun_filtered_df['ReTweet Count'].sum()
templerun_avg_retweets = templerun_filtered_df['ReTweet Count'].mean()
#templerun_sum_retweets
templerun_avg_retweets
templerun_total_tweets
templerun_avg_retweets
templerun_avg_followers_ct
templerun_total_likes
```
### Pandora Calculations
```
plug = 'pandora'
path = f'./resources/{plug}'
all_files = glob.glob(path + "/*.csv")
li = []
for filename in all_files:
    df = pd.read_csv(filename, index_col=None, header=0)
    li.append(df)
pandora_source_df = pd.concat(li, axis=0, ignore_index=True)
# Snapshot Statistics
pandora_source_df.describe()
#pandora_source_df.head()
# Sort to set up removal of duplicates
pandora_source_df.sort_values(by=['User ID','Created At'], ascending=False)
# Drop Duplicates only for matching columns omitting the index
pandora_filtered_df = pandora_source_df.drop_duplicates(['Created At', 'Screen Name', 'User ID', 'User Follower Count', 'Likes Counts', 'ReTweet Count', 'Tweet Text']).sort_values(by=['User ID','Created At'], ascending=False)
pandora_filtered_df
pandora_filtered_df.describe()
# Count total out of Unique Tweets
pandora_total_tweets = len(pandora_filtered_df['Tweet Text'])
pandora_total_tweets
# Calculate Pandora Avg Followers - doesn't make sense to sum.
pandora_avg_followers_ct = pandora_filtered_df['User Follower Count'].mean()
pandora_avg_followers_ct
# Total Likes of all tweets
# use sum of likes.
pandora_total_likes = pandora_filtered_df['Likes Counts'].sum()
#pandora_avg_likes = pandora_filtered_df['Likes Counts'].mean()
pandora_total_likes
#pandora_avg_likes
# Retweets Stats:
#pandora_sum_retweets = pandora_filtered_df['ReTweet Count'].sum()
pandora_avg_retweets = pandora_filtered_df['ReTweet Count'].mean()
#pandora_sum_retweets
pandora_avg_retweets
```
### Pinterest Calculations
```
# Concatenate them
plug = 'pinterest'
path = f'./resources/{plug}'
all_files = glob.glob(path + "/*.csv")
li = []
for filename in all_files:
    df = pd.read_csv(filename, index_col=None, header=0)
    li.append(df)
pinterest_source_df = pd.concat(li, axis=0, ignore_index=True)
# Snapshot Statistics
pinterest_source_df.describe()
pinterest_source_df.head()
# Sort to set up removal of duplicates
pinterest_source_df.sort_values(by=['User ID','Created At'], ascending=False)
# Drop Duplicates only for matching columns omitting the index
pinterest_filtered_df = pinterest_source_df.drop_duplicates(['Created At', 'Screen Name', 'User ID', 'User Follower Count', 'Likes Counts', 'ReTweet Count', 'Tweet Text']).sort_values(by=['User ID','Created At'], ascending=False)
pinterest_filtered_df
pinterest_filtered_df.describe()
# Count total out of Unique Tweets
pinterest_total_tweets = len(pinterest_filtered_df['Tweet Text'])
pinterest_total_tweets
# Calculate Pinterest Avg Followers - doesn't make sense to sum.
pinterest_avg_followers_ct = pinterest_filtered_df['User Follower Count'].mean()
pinterest_avg_followers_ct
# Total Likes of all tweets
pinterest_total_likes = pinterest_filtered_df['Likes Counts'].sum()
#pinterest_avg_likes = pinterest_filtered_df['Likes Counts'].mean()
pinterest_total_likes
#pinterest_avg_likes
# Retweets Stats:
#pinterest_sum_retweets = pinterest_filtered_df['ReTweet Count'].sum()
pinterest_avg_retweets = pinterest_filtered_df['ReTweet Count'].mean()
#pinterest_sum_retweets
pinterest_avg_retweets
```
### Bible (You Version) Calculations
```
plug = 'bible'
path = f'./resources/{plug}'
all_files = glob.glob(path + "/*.csv")
li = []
for filename in all_files:
    df = pd.read_csv(filename, index_col=None, header=0)
    li.append(df)
bible_source_df = pd.concat(li, axis=0, ignore_index=True)
bible_source_df
# Snapshot Statistics
bible_source_df.describe()
bible_source_df.head()
# Sort to set up removal of duplicates
bible_source_df.sort_values(by=['User ID','Created At'], ascending=False)
# Drop Duplicates only for matching columns omitting the index
bible_filtered_df = bible_source_df.drop_duplicates(['Created At', 'Screen Name', 'User ID', 'User Follower Count', 'Likes Counts', 'ReTweet Count', 'Tweet Text']).sort_values(by=['User ID','Created At'], ascending=False)
bible_filtered_df
bible_filtered_df.describe()
# Count total out of Unique Tweets
bible_total_tweets = len(bible_filtered_df['Tweet Text'])
bible_total_tweets
# Calculate Avg Followers - doesn't make sense to sum.
bible_avg_followers_ct = bible_filtered_df['User Follower Count'].mean()
bible_avg_followers_ct
# Total Likes of all tweets
bible_total_likes = bible_filtered_df['Likes Counts'].sum()
#bible_avg_likes = bible_filtered_df['Likes Counts'].mean()
bible_total_likes
#bible_avg_likes
# Retweets Stats:
#bible_sum_retweets = bible_filtered_df['ReTweet Count'].sum()
bible_avg_retweets = bible_filtered_df['ReTweet Count'].mean()
#bible_sum_retweets
bible_avg_retweets
```
### Candy Crush Saga Calculations
```
plug = 'candycrushsaga'
path = f'./resources/{plug}'
all_files = glob.glob(path + "/*.csv")
li = []
for filename in all_files:
    df = pd.read_csv(filename, index_col=None, header=0)
    li.append(df)
CandyCrushSaga_source_df = pd.concat(li, axis=0, ignore_index=True)
# Snapshot Statistics
CandyCrushSaga_source_df.describe()
# has duplicates
CandyCrushSaga_source_df.sort_values(by=['User ID','Created At'], ascending=False)
CandyCrushSaga_source_df.head()
# Drop Duplicates only for matching columns omitting the index
CandyCrushSaga_filtered_df = CandyCrushSaga_source_df.drop_duplicates(['Created At', 'Screen Name', 'User ID', 'User Follower Count', 'Likes Counts', 'ReTweet Count', 'Tweet Text']).sort_values(by=['User ID','Created At'], ascending=False)
# Get New Snap Shot Statistics
CandyCrushSaga_filtered_df.describe()
CandyCrushSaga_filtered_df.head()
# Count total out of Unique Tweets
candycrushsaga_total_tweets = len(CandyCrushSaga_filtered_df['Tweet Text'])
candycrushsaga_total_tweets
# Calculate Avg Followers - doesn't make sense to sum.
candycrushsaga_avg_followers_ct = CandyCrushSaga_filtered_df['User Follower Count'].mean()
candycrushsaga_avg_followers_ct
# Total Likes of all tweets
candycrushsaga_total_likes = CandyCrushSaga_filtered_df['Likes Counts'].sum()
#candycrushsaga_avg_likes = CandyCrushSaga_filtered_df['Likes Counts'].mean()
candycrushsaga_total_likes
#candycrushsaga_avg_likes
# Retweets Stats:
#candycrushsaga_sum_retweets = CandyCrushSaga_filtered_df['ReTweet Count'].sum()
candycrushsaga_avg_retweets = CandyCrushSaga_filtered_df['ReTweet Count'].mean()
#candycrushsaga_sum_retweets
candycrushsaga_avg_retweets
```
### Spotify Music Calculations
```
plug = 'spotify'
path = f'./resources/{plug}'
all_files = glob.glob(path + "/*.csv")
li = []
for filename in all_files:
    df = pd.read_csv(filename, index_col=None, header=0)
    li.append(df)
spotify_source_df = pd.concat(li, axis=0, ignore_index=True)
# Snapshot Statistics
spotify_source_df.describe()
spotify_source_df.head()
# Sort to set up removal of duplicates
spotify_source_df.sort_values(by=['User ID','Created At'], ascending=False)
# Drop Duplicates only for matching columns omitting the index
spotify_filtered_df = spotify_source_df.drop_duplicates(['Created At', 'Screen Name', 'User ID', 'User Follower Count', 'Likes Counts', 'ReTweet Count', 'Tweet Text']).sort_values(by=['User ID','Created At'], ascending=False)
spotify_filtered_df
spotify_filtered_df.describe()
# Count total out of Unique Tweets
spotify_total_tweets = len(spotify_filtered_df['Tweet Text'])
spotify_total_tweets
# Calculate Spotify Avg Followers - doesn't make sense to sum.
spotify_avg_followers_ct = spotify_filtered_df['User Follower Count'].mean()
spotify_avg_followers_ct
# Total Likes of all tweets
spotify_total_likes = spotify_filtered_df['Likes Counts'].sum()
#spotify_avg_likes = spotify_filtered_df['Likes Counts'].mean()
spotify_total_likes
#spotify_avg_likes
# Retweets Stats:
#spotify_sum_retweets = spotify_filtered_df['ReTweet Count'].sum()
spotify_avg_retweets = spotify_filtered_df['ReTweet Count'].mean()
#spotify_sum_retweets
spotify_avg_retweets
```
### Angry Birds Calculations
```
plug = 'angrybirds'
path = f'./resources/{plug}'
all_files = glob.glob(path + "/*.csv")
li = []
for filename in all_files:
    df = pd.read_csv(filename, index_col=None, header=0)
    li.append(df)
angrybirds_source_df = pd.concat(li, axis=0, ignore_index=True)
# Snapshot Statistics
angrybirds_source_df.describe()
angrybirds_source_df.head()
# Sort to set up removal of duplicates
angrybirds_source_df.sort_values(by=['User ID','Created At'], ascending=False)
# Drop Duplicates only for matching columns omitting the index
angrybirds_filtered_df = angrybirds_source_df.drop_duplicates(['Created At', 'Screen Name', 'User ID', 'User Follower Count', 'Likes Counts', 'ReTweet Count', 'Tweet Text']).sort_values(by=['User ID','Created At'], ascending=False)
angrybirds_filtered_df
angrybirds_filtered_df.describe()
# Count total out of Unique Tweets
angrybirds_total_tweets = len(angrybirds_filtered_df['Tweet Text'])
angrybirds_total_tweets
# Calculate angrybirds Avg Followers - doesn't make sense to sum.
angrybirds_avg_followers_ct = angrybirds_filtered_df['User Follower Count'].mean()
angrybirds_avg_followers_ct
# Total Likes of all tweets
angrybirds_total_likes = angrybirds_filtered_df['Likes Counts'].sum()
#angrybirds_avg_likes = angrybirds_filtered_df['Likes Counts'].mean()
angrybirds_total_likes
#angrybirds_avg_likes
# Retweets Stats:
#angrybirds_sum_retweets = angrybirds_filtered_df['ReTweet Count'].sum()
angrybirds_avg_retweets = angrybirds_filtered_df['ReTweet Count'].mean()
#angrybirds_sum_retweets
angrybirds_avg_retweets
```
### YouTube Calculations
```
plug = 'youtube'
path = f'./resources/{plug}'
all_files = glob.glob(path + "/*.csv")
li = []
for filename in all_files:
    df = pd.read_csv(filename, index_col=None, header=0)
    li.append(df)
youtube_source_df = pd.concat(li, axis=0, ignore_index=True)
# Snapshot Statistics
youtube_source_df.describe()
youtube_source_df.head()
# Sort
youtube_source_df.sort_values(by=['User ID','Created At'], ascending=False)
# Drop Duplicates only for matching columns omitting the index
youtube_filtered_df = youtube_source_df.drop_duplicates(['Created At', 'Screen Name', 'User ID', 'User Follower Count', 'Likes Counts', 'ReTweet Count', 'Tweet Text']).sort_values(by=['User ID','Created At'], ascending=False)
# Get New Snap Shot Statistics
youtube_filtered_df.describe()
youtube_filtered_df.head()
# Count total out of Unique Tweets
youtube_total_tweets = len(youtube_filtered_df['Tweet Text'])
youtube_total_tweets
# Calculate YouTube Avg Followers - doesn't make sense to sum.
youtube_avg_followers_ct = youtube_filtered_df['User Follower Count'].mean()
youtube_avg_followers_ct
# Total Likes of all tweets
# use sum of likes.
youtube_total_likes = youtube_filtered_df['Likes Counts'].sum()
#youtube_avg_likes = youtube_filtered_df['Likes Counts'].mean()
youtube_total_likes
#youtube_avg_likes
# You Tube Retweets Stats:
#youtube_sum_retweets = youtube_filtered_df['ReTweet Count'].sum()
youtube_avg_retweets = youtube_filtered_df['ReTweet Count'].mean()
#youtube_sum_retweets
youtube_avg_retweets
```
### Subway Surfers
```
plug = 'subwaysurfer'
path = f'./resources/{plug}'
all_files = glob.glob(path + "/*.csv")
li = []
for filename in all_files:
    df = pd.read_csv(filename, index_col=None, header=0)
    li.append(df)
SubwaySurfers_source_df = pd.concat(li, axis=0, ignore_index=True)
# Snapshot Statistics
SubwaySurfers_source_df.describe()
SubwaySurfers_source_df.head()
# Sort
SubwaySurfers_source_df.sort_values(by=['User ID','Created At'], ascending=False)
SubwaySurfers_source_df.head()
# Drop Duplicates only for matching columns omitting the index
SubwaySurfers_filtered_df = SubwaySurfers_source_df.drop_duplicates(['Created At', 'Screen Name', 'User ID', 'User Follower Count', 'Likes Counts', 'ReTweet Count', 'Tweet Text']).sort_values(by=['User ID','Created At'], ascending=False)
# Get New Snap Shot Statistics
SubwaySurfers_filtered_df.describe()
SubwaySurfers_filtered_df.head()
# Count total out of Unique Tweets
SubwaySurfers_total_tweets = len(SubwaySurfers_filtered_df['Tweet Text'])
SubwaySurfers_total_tweets
# Calculate Avg Followers - doesn't make sense to sum.
SubwaySurfers_avg_followers_ct = SubwaySurfers_filtered_df['User Follower Count'].mean()
SubwaySurfers_avg_followers_ct
# Total Likes of all tweets
SubwaySurfers_total_likes = SubwaySurfers_filtered_df['Likes Counts'].sum()
#SubwaySurfers_avg_likes = SubwaySurfers_filtered_df['Likes Counts'].mean()
SubwaySurfers_total_likes
#SubwaySurfers_avg_likes
# Subway Surfer Retweets Stats:
#SubwaySurfers_sum_retweets = SubwaySurfers_filtered_df['ReTweet Count'].sum()
SubwaySurfers_avg_retweets = SubwaySurfers_filtered_df['ReTweet Count'].mean()
#SubwaySurfers_sum_retweets
SubwaySurfers_avg_retweets
```
### Security Master - Antivirus, VPN
```
# Cheetah Mobile owns Security Master
plug = 'cheetah'
path = f'./resources/{plug}'
all_files = glob.glob(path + "/*.csv")
li = []
for filename in all_files:
    df = pd.read_csv(filename, index_col=None, header=0)
    li.append(df)
SecurityMaster_source_df = pd.concat(li, axis=0, ignore_index=True)
# Snapshot Statistics
SecurityMaster_source_df.describe()
SecurityMaster_source_df.head()
# has duplicates
SecurityMaster_source_df.sort_values(by=['User ID','Created At'], ascending=False)
SecurityMaster_source_df.head()
# Drop Duplicates only for matching columns omitting the index
SecurityMaster_filtered_df = SecurityMaster_source_df.drop_duplicates(['Created At', 'Screen Name', 'User ID', 'User Follower Count', 'Likes Counts', 'ReTweet Count', 'Tweet Text']).sort_values(by=['User ID','Created At'], ascending=False)
# Get New Snap Shot Statistics
SecurityMaster_filtered_df.describe()
SecurityMaster_filtered_df.head()
# Count total out of Unique Tweets
SecurityMaster_total_tweets = len(SecurityMaster_filtered_df['Tweet Text'])
SecurityMaster_total_tweets
# Calculate Avg Followers - doesn't make sense to sum.
SecurityMaster_avg_followers_ct = SecurityMaster_filtered_df['User Follower Count'].mean()
SecurityMaster_avg_followers_ct
# Total Likes of all tweets
SecurityMaster_total_likes = SecurityMaster_filtered_df['Likes Counts'].sum()
#SecurityMaster_avg_likes = SecurityMaster_filtered_df['Likes Counts'].mean()
SecurityMaster_total_likes
#SecurityMaster_avg_likes
# Security Master Retweets Stats:
#SecurityMaster_sum_retweets = SecurityMaster_filtered_df['ReTweet Count'].sum()
SecurityMaster_avg_retweets = SecurityMaster_filtered_df['ReTweet Count'].mean()
#SecurityMaster_sum_retweets
SecurityMaster_avg_retweets
```
### Clash Royale
```
plug = 'clashroyale'
path = f'./resources/{plug}'
all_files = glob.glob(path + "/*.csv")
li = []
for filename in all_files:
    df = pd.read_csv(filename, index_col=None, header=0)
    li.append(df)
ClashRoyale_source_df = pd.concat(li, axis=0, ignore_index=True)
# Snapshot Statistics
ClashRoyale_source_df.describe()
ClashRoyale_source_df.head()
# has duplicates
ClashRoyale_source_df.sort_values(by=['User ID','Created At'], ascending=False)
# Drop Duplicates only for matching columns omitting the index
ClashRoyale_filtered_df = ClashRoyale_source_df.drop_duplicates(['Created At', 'Screen Name', 'User ID', 'User Follower Count', 'Likes Counts', 'ReTweet Count', 'Tweet Text']).sort_values(by=['User ID','Created At'], ascending=False)
# Get New Snap Shot Statistics
ClashRoyale_filtered_df.describe()
ClashRoyale_filtered_df.head()
# Count total out of Unique Tweets
ClashRoyale_total_tweets = len(ClashRoyale_filtered_df['Tweet Text'])
ClashRoyale_total_tweets
# Calculate Avg Followers - doesn't make sense to sum.
ClashRoyale_avg_followers_ct = ClashRoyale_filtered_df['User Follower Count'].mean()
ClashRoyale_avg_followers_ct
# Total Likes of all tweets
ClashRoyale_total_likes = ClashRoyale_filtered_df['Likes Counts'].sum()
#ClashRoyale_avg_likes = ClashRoyale_filtered_df['Likes Counts'].mean()
ClashRoyale_total_likes
#ClashRoyale_avg_likes
# ClashRoyale Retweets Stats:
#ClashRoyale_sum_retweets = ClashRoyale_filtered_df['ReTweet Count'].sum()
ClashRoyale_avg_retweets = ClashRoyale_filtered_df['ReTweet Count'].mean()
#ClashRoyale_sum_retweets
ClashRoyale_avg_retweets
```
### Clean Master - Space Cleaner
```
plug = 'cleanmaster'
path = f'./resources/{plug}'
all_files = glob.glob(path + "/*.csv")
li = []
for filename in all_files:
    df = pd.read_csv(filename, index_col=None, header=0)
    li.append(df)
CleanMaster_source_df = pd.concat(li, axis=0, ignore_index=True)
# Snapshot Statistics
CleanMaster_source_df.describe()
CleanMaster_source_df.head()
# has duplicates
CleanMaster_source_df.sort_values(by=['User ID','Created At'], ascending=False)
CleanMaster_source_df.head()
# Drop Duplicates only for matching columns omitting the index
CleanMaster_filtered_df = CleanMaster_source_df.drop_duplicates(['Created At', 'Screen Name', 'User ID', 'User Follower Count', 'Likes Counts', 'ReTweet Count', 'Tweet Text']).sort_values(by=['User ID','Created At'], ascending=False)
# Get New Snap Shot Statistics
CleanMaster_filtered_df.describe()
CleanMaster_filtered_df.head()
# Count total out of Unique Tweets
CleanMaster_total_tweets = len(CleanMaster_filtered_df['Tweet Text'])
CleanMaster_total_tweets
# Calculate Avg Followers - doesn't make sense to sum.
CleanMaster_avg_followers_ct = CleanMaster_filtered_df['User Follower Count'].mean()
CleanMaster_avg_followers_ct
# Total Likes of all tweets
CleanMaster_total_likes = CleanMaster_filtered_df['Likes Counts'].sum()
#CleanMaster_avg_likes = CleanMaster_filtered_df['Likes Counts'].mean()
CleanMaster_total_likes
#CleanMaster_avg_likes
# Clean Master Retweets Stats:
#CleanMaster_sum_retweets = CleanMaster_filtered_df['ReTweet Count'].sum()
CleanMaster_avg_retweets = CleanMaster_filtered_df['ReTweet Count'].mean()
#CleanMaster_sum_retweets
CleanMaster_avg_retweets
```
### WhatsApp
```
plug = 'whatsapp'
path = f'./resources/{plug}'
all_files = glob.glob(path + "/*.csv")
li = []
for filename in all_files:
    df = pd.read_csv(filename, index_col=None, header=0)
    li.append(df)
whatsapp_source_df = pd.concat(li, axis=0, ignore_index=True)
# Snapshot Statistics
whatsapp_source_df.describe()
whatsapp_source_df.head()
# has duplicates
whatsapp_source_df.sort_values(by=['User ID','Created At'], ascending=False)
whatsapp_source_df.head()
# Drop Duplicates only for matching columns omitting the index
whatsapp_filtered_df = whatsapp_source_df.drop_duplicates(['Created At', 'Screen Name', 'User ID', 'User Follower Count', 'Likes Counts', 'ReTweet Count', 'Tweet Text']).sort_values(by=['User ID','Created At'], ascending=False)
# Get New Snap Shot Statistics
whatsapp_filtered_df.describe()
whatsapp_filtered_df.head()
# Count total out of Unique Tweets
whatsapp_total_tweets = len(whatsapp_filtered_df['Tweet Text'])
whatsapp_total_tweets
# Calculate WhatsApp Avg Followers - doesn't make sense to sum.
whatsapp_avg_followers_ct = whatsapp_filtered_df['User Follower Count'].mean()
whatsapp_avg_followers_ct
# Total Likes of all tweets.
whatsapp_total_likes = whatsapp_filtered_df['Likes Counts'].sum()
#whatsapp_avg_likes = whatsapp_filtered_df['Likes Counts'].mean()
whatsapp_total_likes
#whatsapp_avg_likes
# Whatsapp Retweets Stats:
#whatsapp_sum_retweets = whatsapp_filtered_df['ReTweet Count'].sum()
whatsapp_avg_retweets = whatsapp_filtered_df['ReTweet Count'].mean()
#whatsapp_sum_retweets
whatsapp_avg_retweets
```
# Charts and Plots
#### Scatter plot - Twitter Average Followers to Tweets
```
# Scatter Plot 1 - Tweets vs Average Followers vs Total Likes of the Top 10 Apps for both Google and Apple App Stores
fig, ax = plt.subplots(figsize=(11,11))
# Apps both on Google Play Store and Apple - 4 apps
facebook_plot = ax.scatter(facebook_total_tweets, facebook_avg_followers_ct, s=facebook_total_likes*15, color='sandybrown', label='Facebook', edgecolors='black', alpha=0.75)
instagram_plot= ax.scatter(instagram_total_tweets, instagram_avg_followers_ct, s=instagram_total_likes*15, color='saddlebrown', label='Instagram', edgecolors='black', alpha=0.5)
coc_plot= ax.scatter(coc_total_tweets, coc_avg_followers_ct, s=coc_total_likes*10, color='springgreen', label='Clash Of Clans', edgecolors='black', alpha=0.75)
candycrushsaga_plot= ax.scatter(candycrushsaga_total_tweets, candycrushsaga_avg_followers_ct, s=candycrushsaga_total_likes*5, color='limegreen', label='Candy Crush Saga', edgecolors='black')#, alpha=0.75)
# Google Play Store - 6 apps:
CleanMaster_plot= ax.scatter(CleanMaster_total_tweets, CleanMaster_avg_followers_ct, s=CleanMaster_total_likes*5, color='m', label='Clean Master Space Cleaner', edgecolors='black', alpha=0.75)
SubwaySurfers_plot= ax.scatter(SubwaySurfers_total_tweets, SubwaySurfers_avg_followers_ct, s=SubwaySurfers_total_likes*5, color='lime', label='Subway Surfers', edgecolors='black', alpha=0.75)
youtube_plot= ax.scatter(youtube_total_tweets, youtube_avg_followers_ct, s=youtube_total_likes*5, color='red', label='You Tube', edgecolors='black', alpha=0.75)
SecurityMaster_plot= ax.scatter(SecurityMaster_total_tweets, SecurityMaster_avg_followers_ct, s=SecurityMaster_total_likes*5, color='blueviolet', label='Security Master, Antivirus VPN', edgecolors='black', alpha=0.75)
ClashRoyale_plot= ax.scatter(ClashRoyale_total_tweets, ClashRoyale_avg_followers_ct, s=ClashRoyale_total_likes*5, color='darkolivegreen', label='Clash Royale', edgecolors='black', alpha=0.75)
whatsapp_plot= ax.scatter(whatsapp_total_tweets, whatsapp_avg_followers_ct, s=whatsapp_total_likes*5, color='tan', label='Whats App', edgecolors='black', alpha=0.75)
# Apple Apps Store - 6 apps
templerun_plot= ax.scatter(templerun_total_tweets, templerun_avg_followers_ct, s=templerun_total_likes*5, color='lawngreen', label='Temple Run', edgecolors='black', alpha=0.75)
pandora_plot= ax.scatter(pandora_total_tweets, pandora_avg_followers_ct, s=pandora_total_likes*5, color='coral', label='Pandora', edgecolors='black', alpha=0.75)
pinterest_plot= ax.scatter(pinterest_total_tweets, pinterest_avg_followers_ct, s=pinterest_total_likes*5, color='firebrick', label='Pinterest', edgecolors='black', alpha=0.75)
bible_plot= ax.scatter(bible_total_tweets, bible_avg_followers_ct, s=bible_total_likes*5, color='tomato', label='Bible', edgecolors='black', alpha=0.75)
spotify_plot= ax.scatter(spotify_total_tweets, spotify_avg_followers_ct, s=spotify_total_likes*5, color='orangered', label='Spotify', edgecolors='black', alpha=0.75)
angrybirds_plot= ax.scatter(angrybirds_total_tweets, angrybirds_avg_followers_ct, s=angrybirds_total_likes*5, color='forestgreen', label='Angry Birds', edgecolors='black', alpha=0.75)
# title and labels
plt.title("Tweets vs Average Followers (Mar 27 - Apr 3, 2019) \n")
plt.xlabel("Total Tweets \n Note: Circle sizes correlate with Total Likes" )
plt.ylabel("Average Number of Followers per Twitter User \n")
# set and format the legend
lgnd = plt.legend(title='Legend', loc="best")
for handle in lgnd.legendHandles:
    handle._sizes = [30]
#grid lines and show
plt.grid()
plt.show()
#plt.savefig("./TWEETS_vs__AVG_followers_Scatter.png")
# Test Cell: Tried to automate plot, but was unable to beause for the size (s), JG wanted to scale the
# the size by multiplying by a unique scale depending on the number of likes to emphasize data points
# Conclusion: was to stick with brute force method
x = [facebook_total_tweets, instagram_total_tweets, coc_total_tweets,
     candycrushsaga_total_tweets, CleanMaster_total_tweets, SubwaySurfers_total_tweets,
     youtube_total_tweets, SecurityMaster_total_tweets,
     ClashRoyale_total_tweets, whatsapp_total_tweets, templerun_total_tweets,
     pandora_total_tweets, pinterest_total_tweets, bible_total_tweets, spotify_total_tweets,
     angrybirds_total_tweets]
y = [facebook_avg_followers_ct, instagram_avg_followers_ct, coc_avg_followers_ct,
     candycrushsaga_avg_followers_ct, CleanMaster_avg_followers_ct, SubwaySurfers_avg_followers_ct,
     youtube_avg_followers_ct, SecurityMaster_avg_followers_ct,
     ClashRoyale_avg_followers_ct, whatsapp_avg_followers_ct, templerun_avg_followers_ct,
     pandora_avg_followers_ct, pinterest_avg_followers_ct, bible_avg_followers_ct, spotify_avg_followers_ct,
     angrybirds_avg_followers_ct]
"""
# Alternative with per-app scale factors; the brute-force method was used instead.
s = [(facebook_total_likes*15), (instagram_total_likes*15), (coc_total_likes*10), (candycrushsaga_total_likes*5),
     (CleanMaster_total_likes*5), (SubwaySurfers_total_likes*5), (youtube_total_likes*5), (SecurityMaster_total_likes*5),
     (ClashRoyale_total_likes*5), (whatsapp_total_likes*5), (templerun_total_likes*5), (pandora_total_likes*5),
     (pinterest_total_likes*5), (bible_total_likes*5), (spotify_total_likes*5), (angrybirds_total_likes*5)]
"""
s = [facebook_total_likes, instagram_total_likes, coc_total_likes, candycrushsaga_total_likes,
CleanMaster_total_likes, SubwaySurfers_total_likes, youtube_total_likes, SecurityMaster_total_likes,
ClashRoyale_total_likes, whatsapp_total_likes, templerun_total_likes, pandora_total_likes,
pinterest_total_likes, bible_total_likes, spotify_total_likes, angrybirds_total_likes]
colors = np.random.rand(16)
label = []
edgecolors = []
alpha = []
fig, ax = plt.subplots(figsize=(11,11))
ax.scatter(x, y, s)
plt.grid()
plt.show()
"""# Apps both on Google Play Store and Apple - 4 apps
facebook_plot = ax.scatter(, , , color='sandybrown', label='Facebook', edgecolors='black', alpha=0.75)
instagram_plot= ax.scatter(, , , color='saddlebrown', label='Instagram', edgecolors='black', alpha=0.5)
coc_plot= ax.scatter(,, , color='springgreen', label='Clash Of Clans', edgecolors='black', alpha=0.75)
candycrushsaga_plot= ax.scatter(,, , color='limegreen', label='Candy Crush Saga', edgecolors='black')#, alpha=0.75)
# Google Play Store - 6 apps:
CleanMaster_plot= ax.scatter(,, , color='m', label='Clean Master Space Cleaner', edgecolors='black', alpha=0.75)
SubwaySurfers_plot= ax.scatter(,, , color='lime', label='Subway Surfers', edgecolors='black', alpha=0.75)
youtube_plot= ax.scatter(,, , color='red', label='You Tube', edgecolors='black', alpha=0.75)
SecurityMaster_plot= ax.scatter(,, , color='blueviolet', label='Security Master, Antivirus VPN', edgecolors='black', alpha=0.75)
ClashRoyale_plot= ax.scatter(,, , color='darkolivegreen', label='Clash Royale', edgecolors='black', alpha=0.75)
whatsapp_plot= ax.scatter(,, , color='tan', label='Whats App', edgecolors='black', alpha=0.75)
# Apple Apps Store - 6 apps
templerun_plot= ax.scatter(, , , color='lawngreen', label='Temple Run', edgecolors='black', alpha=0.75)
pandora_plot= ax.scatter(, ,, color='coral', label='Pandora', edgecolors='black', alpha=0.75)
pinterest_plot= ax.scatter(, ,, color='firebrick', label='Pinterest', edgecolors='black', alpha=0.75)
bible_plot= ax.scatter(, , color='tomato', label='Bible', edgecolors='black', alpha=0.75)
spotify_plot= ax.scatter(, , color='orangered', label='Spotify', edgecolors='black', alpha=0.75)
angrybirds_plot= ax.scatter(,,, color='forestgreen', label='Angry Birds', edgecolors='black', alpha=0.75)
# title and labels
plt.title("Tweets vs Average Followers (Mar 27 - Apr 3, 2019) \n")
plt.xlabel("Total Tweets \n Note: Circle sizes correlate with Total Likes" )
plt.ylabel("Average Number of Followers per Twitter User \n")
"""
# Scatter Plot 2 - Tweets vs ReTweets vs Likes
fig, ax = plt.subplots(figsize=(11,11))
# Apps both on Google Play Store and Apple - 4 apps
facebook_plot = ax.scatter(facebook_total_tweets, facebook_avg_retweets, s=facebook_total_likes*5, color='sandybrown', label='Facebook', edgecolors='black', alpha=0.75)
instagram_plot= ax.scatter(instagram_total_tweets, instagram_avg_retweets, s=instagram_total_likes*5, color='saddlebrown', label='Instagram', edgecolors='black', alpha=0.75)
coc_plot= ax.scatter(coc_total_tweets, coc_avg_retweets, s=coc_total_likes*5, color='springgreen', label='Clash Of Clans', edgecolors='black', alpha=0.75)
candycrushsaga_plot= ax.scatter(candycrushsaga_total_tweets, candycrushsaga_avg_retweets, s=candycrushsaga_total_likes*5, color='limegreen', label='Candy Crush Saga', edgecolors='black')#, alpha=0.75)
# Google Play Store - 6 apps:
CleanMaster_plot= ax.scatter(CleanMaster_total_tweets, CleanMaster_avg_retweets, s=CleanMaster_total_likes*5, color='m', label='Clean Master Space Cleaner', edgecolors='black', alpha=0.75)
SubwaySurfers_plot= ax.scatter(SubwaySurfers_total_tweets, SubwaySurfers_avg_retweets, s=SubwaySurfers_total_likes*5, color='lime', label='Subway Surfers', edgecolors='black', alpha=0.75)
youtube_plot= ax.scatter(youtube_total_tweets, youtube_avg_retweets, s=youtube_total_likes*5, color='red', label='You Tube', edgecolors='black', alpha=0.75)
SecurityMaster_plot= ax.scatter(SecurityMaster_total_tweets, SecurityMaster_avg_retweets, s=SecurityMaster_total_likes*5, color='blueviolet', label='Security Master, Antivirus VPN', edgecolors='black', alpha=0.75)
ClashRoyale_plot= ax.scatter(ClashRoyale_total_tweets, ClashRoyale_avg_retweets, s=ClashRoyale_total_likes*5, color='darkolivegreen', label='Clash Royale', edgecolors='black', alpha=0.75)
whatsapp_plot= ax.scatter(whatsapp_total_tweets, whatsapp_avg_retweets, s=whatsapp_total_likes*5, color='tan', label='Whats App', edgecolors='black', alpha=0.75)
# Apple Apps Store - 6 apps
templerun_plot= ax.scatter(templerun_total_tweets, templerun_avg_retweets, s=templerun_total_likes*5, color='lawngreen', label='Temple Run', edgecolors='black', alpha=0.75)
pandora_plot= ax.scatter(pandora_total_tweets, pandora_avg_retweets, s=pandora_total_likes*5, color='coral', label='Pandora', edgecolors='black', alpha=0.75)
pinterest_plot= ax.scatter(pinterest_total_tweets, pinterest_avg_retweets, s=pinterest_total_likes*5, color='firebrick', label='Pinterest', edgecolors='black', alpha=0.75)
bible_plot= ax.scatter(bible_total_tweets, bible_avg_retweets, s=bible_total_likes*5, color='tomato', label='Bible', edgecolors='black', alpha=0.75)
spotify_plot= ax.scatter(spotify_total_tweets, spotify_avg_retweets, s=spotify_total_likes*5, color='orangered', label='Spotify', edgecolors='black', alpha=0.75)
angrybirds_plot= ax.scatter(angrybirds_total_tweets, angrybirds_avg_retweets, s=angrybirds_total_likes*5, color='forestgreen', label='Angry Birds', edgecolors='black', alpha=0.75)
# title and labels
plt.title("Tweets vs ReTweets (Mar 27 - Apr 3, 2019) \n")
plt.xlabel("Total Tweets \n Note: Circle sizes correlate with Total Likes \n" )
plt.ylabel("Average Number of ReTweets per Twitter User \n")
# set and format the legend
lgnd = plt.legend(title='Legend', loc="best")
for handle in lgnd.legendHandles:
    handle._sizes = [30]
#grid lines and show
plt.grid()
plt.show()
#plt.savefig('./TWEETS_VS_RETWEETS_vs_LIKES_Scatter.png')
# Scatter Plot 3 - Will not use this plot
fig, ax = plt.subplots(figsize=(8,8))
# Apps both on Google Play Store and Apple - 4 apps
facebook_plot = ax.scatter(facebook_avg_retweets, facebook_total_tweets, s=facebook_total_likes*5, color='blue', label='Facebook', edgecolors='red', alpha=0.75)
instagram_plot= ax.scatter(instagram_avg_retweets, instagram_total_tweets, s=instagram_total_likes*5, color='fuchsia', label='Instagram', edgecolors='red', alpha=0.75)
coc_plot= ax.scatter(coc_avg_retweets, coc_total_tweets, s=coc_total_likes*5, color='springgreen', label='Clash Of Clans', edgecolors='red', alpha=0.75)
candycrushsaga_plot= ax.scatter(candycrushsaga_avg_retweets, candycrushsaga_total_tweets, s=candycrushsaga_total_likes*5, color='black', label='Candy Crush Saga', edgecolors='red')#, alpha=0.75)
# Google Play Store - 6 apps:
CleanMaster_plot= ax.scatter(CleanMaster_avg_retweets, CleanMaster_total_tweets, s=CleanMaster_total_likes*5, color='olive', label='Clean Master Space Cleaner', edgecolors='lime', alpha=0.75)
SubwaySurfers_plot= ax.scatter(SubwaySurfers_avg_retweets, SubwaySurfers_total_tweets, s=SubwaySurfers_total_likes*5, color='plum', label='Subway Surfers', edgecolors='lime', alpha=0.75)
youtube_plot= ax.scatter(youtube_avg_retweets, youtube_total_tweets, s=youtube_total_likes*5, color='grey', label='You Tube', edgecolors='lime', alpha=0.75)
SecurityMaster_plot= ax.scatter(SecurityMaster_avg_retweets, SecurityMaster_total_tweets, s=SecurityMaster_total_likes*5, color='coral', label='Security Master, Antivirus VPN', edgecolors='lime', alpha=0.75)
ClashRoyale_plot= ax.scatter(ClashRoyale_avg_retweets, ClashRoyale_total_tweets, s=ClashRoyale_total_likes*5, color='orange', label='Clash Royale', edgecolors='lime', alpha=0.75)
whatsapp_plot= ax.scatter(whatsapp_avg_retweets, whatsapp_total_tweets, s=whatsapp_total_likes*5, color='green', label='Whats App', edgecolors='lime', alpha=0.75)
# Apple Apps Store - 6 apps
templerun_plot= ax.scatter(templerun_avg_retweets, templerun_total_tweets, s=templerun_total_likes*5, color='lawngreen', label='Temple Run', edgecolors='black', alpha=0.75)
pandora_plot= ax.scatter(pandora_avg_retweets, pandora_total_tweets, s=pandora_total_likes*5, color='cornflowerblue', label='Pandora', edgecolors='black', alpha=0.75)
pinterest_plot= ax.scatter(pinterest_avg_retweets, pinterest_total_tweets, s=pinterest_total_likes*5, color='firebrick', label='Pinterest', edgecolors='black', alpha=0.75)
bible_plot= ax.scatter(bible_avg_retweets, bible_total_tweets, s=bible_total_likes*5, color='brown', label='Bible', edgecolors='black', alpha=0.75)
spotify_plot= ax.scatter(spotify_avg_retweets, spotify_total_tweets, s=spotify_total_likes*5, color='darkgreen', label='Spotify', edgecolors='black', alpha=0.75)
angrybirds_plot= ax.scatter(angrybirds_avg_retweets, angrybirds_total_tweets, s=angrybirds_total_likes*5, color='salmon', label='Angry Birds', edgecolors='black', alpha=0.75)
# title and labels
plt.title("Tweets vs ReTweets (Mar 27 - Apr 3, 2019) \n")
plt.xlabel("Average Number of ReTweets per Twitter User \n Note: Circle sizes correlate with Total Likes \n" )
plt.ylabel("Total Tweets \n")
# set and format the legend
lgnd = plt.legend(title='Legend', loc="best")
for handle in lgnd.legendHandles:
    handle._sizes = [30]
#grid lines and show
plt.grid()
plt.show()
#plt.savefig('./tweets_vs__avgfollowers_Scatter.png')
# Hardcoding numbers from analysis done in Apple and Google Play Store Final Code Notebooks
# Average Apple, Google Ratings
facebook_avg_rating = (3.5 + 4.1)/2
instagram_avg_rating = (4.5 + 4.5)/2
coc_avg_rating = (4.5 + 4.6)/2
candycrushsaga_avg_rating = (4.5 + 4.4)/2
# Average Apple, Google Reviews
facebook_reviews = (2974676 + 78158306)/2
instagram_reviews = (2161558 + 66577446)/2
coc_reviews = (2130805 + 44893888)/2
candycrushsaga_reviews = (961794 + 22430188)/2
# Apple App Ratings
templerun_rating = 4.5
pandora_rating = 4.5
pinterest_rating = 4.5
bible_rating = 4.5
spotify_rating = 4.5
angrybirds_rating = 4.5
# Apple App Reviews
templerun_reviews = 1724546
pandora_reviews = 1126879
pinterest_reviews = 1061624
bible_reviews = 985920
spotify_reviews = 878563
angrybirds_reviews = 824451
# Google App Ratings
whatsapp_rating = 4.4
clean_master_rating = 4.7
subway_surfers_rating = 4.5
you_tube_rating = 4.3
security_master_rating = 4.7
clash_royale_rating = 4.6
# Google App Reviews
whatsapp_reviews = 69119316
clean_master_reviews = 42916526
subway_surfers_reviews = 27725352
you_tube_reviews = 25655305
security_master_reviews = 24900999
clash_royale_reviews = 23136735
# Scatter Plot 4 - Tweets vs Ratings vs Likes - USE THIS ONE
fig, ax = plt.subplots(figsize=(11,11))
# Apps both on Google Play Store and Apple - 4 apps
facebook_plot = ax.scatter(facebook_total_tweets, facebook_avg_rating, s=facebook_total_likes*5, color='sandybrown', label='Facebook', edgecolors='black', alpha=0.75)
instagram_plot= ax.scatter(instagram_total_tweets, instagram_avg_rating, s=instagram_total_likes*5, color='saddlebrown', label='Instagram', edgecolors='black', alpha=0.75)
coc_plot= ax.scatter(coc_total_tweets, coc_avg_rating, s=coc_total_likes*5, color='springgreen', label='Clash Of Clans', edgecolors='black', alpha=0.75)
candycrushsaga_plot= ax.scatter(candycrushsaga_total_tweets, candycrushsaga_avg_rating, s=candycrushsaga_total_likes*5, color='limegreen', label='Candy Crush Saga', edgecolors='black')#, alpha=0.75)
# Google Play Store - 6 apps:
CleanMaster_plot= ax.scatter(CleanMaster_total_tweets, clean_master_rating, s=CleanMaster_total_likes*5, color='m', label='Clean Master Space Cleaner', edgecolors='black', alpha=0.75)
SubwaySurfers_plot= ax.scatter(SubwaySurfers_total_tweets, subway_surfers_rating, s=SubwaySurfers_total_likes*5, color='lime', label='Subway Surfers', edgecolors='black', alpha=0.75)
youtube_plot= ax.scatter(youtube_total_tweets, you_tube_rating, s=youtube_total_likes*5, color='red', label='You Tube', edgecolors='black', alpha=0.75)
SecurityMaster_plot= ax.scatter(SecurityMaster_total_tweets, security_master_rating, s=SecurityMaster_total_likes*5, color='blueviolet', label='Security Master, Antivirus VPN', edgecolors='black', alpha=0.75)
ClashRoyale_plot= ax.scatter(ClashRoyale_total_tweets, clash_royale_rating, s=ClashRoyale_total_likes*5, color='darkolivegreen', label='Clash Royale', edgecolors='black', alpha=0.75)
whatsapp_plot= ax.scatter(whatsapp_total_tweets, whatsapp_rating, s=whatsapp_total_likes*5, color='tan', label='Whats App', edgecolors='black', alpha=0.75)
# Apple Apps Store - 6 apps
templerun_plot= ax.scatter(templerun_total_tweets,templerun_rating, s=templerun_total_likes*5, color='lawngreen', label='Temple Run', edgecolors='black', alpha=0.75)
pandora_plot= ax.scatter(pandora_total_tweets, pandora_rating, s=pandora_total_likes*5, color='coral', label='Pandora', edgecolors='black', alpha=0.75)
pinterest_plot= ax.scatter(pinterest_total_tweets, pinterest_rating, s=pinterest_total_likes*5, color='firebrick', label='Pinterest', edgecolors='black', alpha=0.75)
bible_plot= ax.scatter(bible_total_tweets, bible_rating, s=bible_total_likes*5, color='tomato', label='Bible', edgecolors='black', alpha=0.75)
spotify_plot= ax.scatter(spotify_total_tweets, spotify_rating, s=spotify_total_likes*5, color='orangered', label='Spotify', edgecolors='black', alpha=0.75)
angrybirds_plot= ax.scatter(angrybirds_total_tweets, angrybirds_rating, s=angrybirds_total_likes*5, color='forestgreen', label='Angry Birds', edgecolors='black', alpha=0.75)
# title and labels
plt.title("Tweets vs Ratings (Mar 27 - Apr 3, 2019) \n")
plt.xlabel("Total Tweets \n Note: Circle sizes correlate with Total Likes \n" )
plt.ylabel("App Store User Ratings (Out of 5) \n")
# set and format the legend
lgnd = plt.legend(title='Legend', loc="best")
for handle in lgnd.legendHandles:
    handle._sizes = [30]
#grid lines and show
plt.grid()
plt.show()
#plt.savefig('./TWEETS_VS_RATINGSVS LIKES_Scatter.png')
# Scatter Plot 5 - Tweets vs Reviews vs Ratings (size) - DO NOT USE
fig, ax = plt.subplots(figsize=(11,11))
# Apps both on Google Play Store and Apple - 4 apps
facebook_plot = ax.scatter(facebook_total_tweets, facebook_reviews, s=facebook_avg_rating*105, color='sandybrown', label='Facebook', edgecolors='black', alpha=0.75)
instagram_plot= ax.scatter(instagram_total_tweets, instagram_reviews, s=instagram_avg_rating*105, color='saddlebrown', label='Instagram', edgecolors='black', alpha=0.75)
coc_plot= ax.scatter(coc_total_tweets, coc_reviews, s=coc_avg_rating*105, color='springgreen', label='Clash Of Clans', edgecolors='black', alpha=0.75)
candycrushsaga_plot= ax.scatter(candycrushsaga_total_tweets, candycrushsaga_reviews, s=candycrushsaga_avg_rating*105, color='limegreen', label='Candy Crush Saga', edgecolors='black', alpha=0.75)
# Google Play Store - 6 apps:
CleanMaster_plot= ax.scatter(CleanMaster_total_tweets, clean_master_reviews, s=clean_master_rating*105, color='m', label='Clean Master Space Cleaner', edgecolors='black', alpha=0.75)
SubwaySurfers_plot= ax.scatter(SubwaySurfers_total_tweets, subway_surfers_reviews, s=subway_surfers_rating*105, color='lime', label='Subway Surfers', edgecolors='black', alpha=0.75)
youtube_plot= ax.scatter(youtube_total_tweets, you_tube_reviews, s=you_tube_rating*105, color='red', label='You Tube', edgecolors='black', alpha=0.75)
SecurityMaster_plot= ax.scatter(SecurityMaster_total_tweets, security_master_reviews, s=security_master_rating*105, color='blueviolet', label='Security Master, Antivirus VPN', edgecolors='black', alpha=0.75)
ClashRoyale_plot= ax.scatter(ClashRoyale_total_tweets, clash_royale_reviews, s=clash_royale_rating*105, color='darkolivegreen', label='Clash Royale', edgecolors='black', alpha=0.75)
whatsapp_plot= ax.scatter(whatsapp_total_tweets, whatsapp_reviews, s=whatsapp_rating*105, color='tan', label='Whats App', edgecolors='lime', alpha=0.75)
# Apple Apps Store - 6 apps
templerun_plot= ax.scatter(templerun_total_tweets,templerun_reviews, s=templerun_rating*105, color='lawngreen', label='Temple Run', edgecolors='black', alpha=0.75)
pandora_plot= ax.scatter(pandora_total_tweets, pandora_reviews, s=pandora_rating*105, color='coral', label='Pandora', edgecolors='black', alpha=0.75)
pinterest_plot= ax.scatter(pinterest_total_tweets, pinterest_reviews, s=pinterest_rating*105, color='firebrick', label='Pinterest', edgecolors='black', alpha=0.75)
bible_plot= ax.scatter(bible_total_tweets, bible_reviews, s=bible_rating*105, color='tomato', label='Bible', edgecolors='black', alpha=0.75)
spotify_plot= ax.scatter(spotify_total_tweets, spotify_reviews, s=spotify_rating*105, color='orangered', label='Spotify', edgecolors='black', alpha=0.75)
angrybirds_plot= ax.scatter(angrybirds_total_tweets, angrybirds_reviews, s=angrybirds_rating*105, color='forestgreen', label='Angry Birds', edgecolors='black', alpha=0.75)
# title and labels
plt.title("Tweets vs Reviews (Mar 27 - Apr 3, 2019) \n")
plt.xlabel("Total Tweets \n Note: Circle sizes correlate with App Ratings \n" )
plt.ylabel("App Store Reviews in Millions \n")
# set and format the legend
lgnd = plt.legend(title='Legend', loc="best")
for handle in lgnd.legendHandles:
    handle._sizes = [30]
#grid lines and show
plt.grid()
plt.show()
#plt.savefig('./tweets_vs__avgfollowers_Scatter.png')
# Scatter Plot 6 - Tweets vs Reviews vs Likes (size) -USE THIS ONE
fig, ax = plt.subplots(figsize=(11,11))
# Apps both on Google Play Store and Apple - 4 apps
facebook_plot = ax.scatter(facebook_total_tweets, facebook_reviews, s=facebook_total_likes*5, color='sandybrown', label='Facebook', edgecolors='black', alpha=0.75)
instagram_plot= ax.scatter(instagram_total_tweets, instagram_reviews, s=instagram_total_likes*5, color='saddlebrown', label='Instagram', edgecolors='black', alpha=0.75)
coc_plot= ax.scatter(coc_total_tweets, coc_reviews, s=coc_total_likes*5, color='springgreen', label='Clash Of Clans', edgecolors='black', alpha=0.75)
candycrushsaga_plot= ax.scatter(candycrushsaga_total_tweets, candycrushsaga_reviews, s=candycrushsaga_total_likes*5, color='limegreen', label='Candy Crush Saga', edgecolors='black', alpha=0.75)
# Google Play Store - 6 apps:
CleanMaster_plot= ax.scatter(CleanMaster_total_tweets, clean_master_reviews, s=CleanMaster_total_likes*5, color='m', label='Clean Master Space Cleaner', edgecolors='black', alpha=0.75)
SubwaySurfers_plot= ax.scatter(SubwaySurfers_total_tweets, subway_surfers_reviews, s=SubwaySurfers_total_likes*5, color='lime', label='Subway Surfers', edgecolors='black', alpha=0.75)
youtube_plot= ax.scatter(youtube_total_tweets, you_tube_reviews, s=youtube_total_likes*5, color='red', label='You Tube', edgecolors='black', alpha=0.75)
SecurityMaster_plot= ax.scatter(SecurityMaster_total_tweets, security_master_reviews, s=SecurityMaster_total_likes*5, color='blueviolet', label='Security Master, Antivirus VPN', edgecolors='black', alpha=0.75)
ClashRoyale_plot= ax.scatter(ClashRoyale_total_tweets, clash_royale_reviews, s=ClashRoyale_total_likes*5, color='darkolivegreen', label='Clash Royale', edgecolors='black', alpha=0.75)
whatsapp_plot= ax.scatter(whatsapp_total_tweets, whatsapp_reviews, s=whatsapp_total_likes*5, color='tan', label='Whats App', edgecolors='black', alpha=0.75)
# Apple Apps Store - 6 apps
templerun_plot= ax.scatter(templerun_total_tweets, templerun_reviews, s=templerun_total_likes*5, color='lawngreen', label='Temple Run', edgecolors='black', alpha=0.75)
pandora_plot= ax.scatter(pandora_total_tweets, pandora_reviews, s=pandora_total_likes*5, color='coral', label='Pandora', edgecolors='black', alpha=0.75)
pinterest_plot= ax.scatter(pinterest_total_tweets, pinterest_reviews, s=pinterest_total_likes*5, color='firebrick', label='Pinterest', edgecolors='black', alpha=0.75)
bible_plot= ax.scatter(bible_total_tweets, bible_reviews, s=bible_total_likes*5, color='tomato', label='Bible', edgecolors='black', alpha=0.75)
spotify_plot= ax.scatter(spotify_total_tweets, spotify_reviews, s=spotify_total_likes*5, color='orangered', label='Spotify', edgecolors='black', alpha=0.75)
angrybirds_plot= ax.scatter(angrybirds_total_tweets, angrybirds_reviews, s=angrybirds_total_likes*5, color='forestgreen', label='Angry Birds', edgecolors='black', alpha=0.75)
# title and labels
plt.title("Tweets vs Reviews (Mar 27 - Apr 3, 2019) \n")
plt.xlabel("Total Tweets \n Note: Circle sizes correlate with Likes \n" )
plt.ylabel("App Store Reviews in Millions \n")
# set and format the legend
lgnd = plt.legend(title='Legend', loc="best")
for handle in lgnd.legendHandles:
    handle._sizes = [30]
#grid lines and show
plt.grid()
plt.show()
#plt.savefig('./TWEETS_VS_REVIEWS_VSLIKES_Scatter.png')
# Scatter Plot 7 - Avg ReTweets vs Total Tweets vs Likes (size) - Need to do
fig, ax = plt.subplots(figsize=(8,8))
# Apps both on Google Play Store and Apple - 4 apps
facebook_plot = ax.scatter(facebook_avg_retweets, facebook_total_tweets, s=facebook_total_likes*5, color='blue', label='Facebook', edgecolors='red', alpha=0.75)
instagram_plot= ax.scatter(instagram_avg_retweets, instagram_total_tweets, s=instagram_total_likes*5, color='fuchsia', label='Instagram', edgecolors='red', alpha=0.75)
coc_plot= ax.scatter(coc_avg_retweets, coc_total_tweets, s=coc_total_likes*5, color='springgreen', label='Clash Of Clans', edgecolors='red', alpha=0.75)
candycrushsaga_plot= ax.scatter(candycrushsaga_avg_retweets, candycrushsaga_total_tweets, s=candycrushsaga_total_likes*5, color='black', label='Candy Crush Saga', edgecolors='red')#, alpha=0.75)
# Google Play Store - 6 apps:
CleanMaster_plot= ax.scatter(CleanMaster_avg_retweets, CleanMaster_total_tweets, s=CleanMaster_total_likes*5, color='olive', label='Clean Master Space Cleaner', edgecolors='lime', alpha=0.75)
SubwaySurfers_plot= ax.scatter(SubwaySurfers_avg_retweets, SubwaySurfers_total_tweets, s=SubwaySurfers_total_likes*5, color='plum', label='Subway Surfers', edgecolors='lime', alpha=0.75)
youtube_plot= ax.scatter(youtube_avg_retweets, youtube_total_tweets, s=youtube_total_likes*5, color='grey', label='You Tube', edgecolors='lime', alpha=0.75)
SecurityMaster_plot= ax.scatter(SecurityMaster_avg_retweets, SecurityMaster_total_tweets, s=SecurityMaster_total_likes*5, color='coral', label='Security Master, Antivirus VPN', edgecolors='lime', alpha=0.75)
ClashRoyale_plot= ax.scatter(ClashRoyale_avg_retweets, ClashRoyale_total_tweets, s=ClashRoyale_total_likes*5, color='orange', label='Clash Royale', edgecolors='lime', alpha=0.75)
whatsapp_plot= ax.scatter(whatsapp_avg_retweets, whatsapp_total_tweets, s=whatsapp_total_likes*5, color='green', label='Whats App', edgecolors='lime', alpha=0.75)
# Apple Apps Store - 6 apps
templerun_plot= ax.scatter(templerun_avg_retweets, templerun_total_tweets, s=templerun_total_likes*5, color='lawngreen', label='Temple Run', edgecolors='black', alpha=0.75)
pandora_plot= ax.scatter(pandora_avg_retweets, pandora_total_tweets, s=pandora_total_likes*5, color='cornflowerblue', label='Pandora', edgecolors='black', alpha=0.75)
pinterest_plot= ax.scatter(pinterest_avg_retweets, pinterest_total_tweets, s=pinterest_total_likes*5, color='firebrick', label='Pinterest', edgecolors='black', alpha=0.75)
bible_plot= ax.scatter(bible_avg_retweets, bible_total_tweets, s=bible_total_likes*5, color='brown', label='Bible', edgecolors='black', alpha=0.75)
spotify_plot= ax.scatter(spotify_avg_retweets, spotify_total_tweets, s=spotify_total_likes*5, color='darkgreen', label='Spotify', edgecolors='black', alpha=0.75)
angrybirds_plot= ax.scatter(angrybirds_avg_retweets, angrybirds_total_tweets, s=angrybirds_total_likes*5, color='salmon', label='Angry Birds', edgecolors='black', alpha=0.75)
# title and labels
plt.title("Tweets vs ReTweets (Mar 27 - Apr 3, 2019) \n")
plt.xlabel("Avg ReTweets \n Note: Circle sizes correlate with Total Likes \n" )
plt.ylabel("Total Tweets \n")
# set and format the legend
lgnd = plt.legend(title='Legend', loc="best")
for handle in lgnd.legendHandles:
    handle._sizes = [30]
#grid lines and show
plt.grid()
plt.show()
#plt.savefig('./tweets_vs__avgfollowers_Scatter.png')
```
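The per-app size scaling that the test cell above tried (and failed) to automate can be done by pairing each app's values with its own multiplier via `zip`, instead of one hand-written `ax.scatter` call per app. A minimal sketch with made-up stand-in numbers (the real `x`, `y`, `s` lists and a matching-order `scales` list would be substituted in):

```python
# Made-up stand-ins for three apps' (total tweets, avg followers, total likes)
demo_x = [120, 80, 60]
demo_y = [3400, 2100, 900]
demo_likes = [40, 25, 10]
# Hypothetical per-app size multipliers, in the same order as the lists above
demo_scales = [15, 10, 5]

# One size per app, computed in a loop instead of 16 separate scatter calls
demo_sizes = [likes * scale for likes, scale in zip(demo_likes, demo_scales)]

# Each (x, y, size) triple could then be plotted in a loop, e.g.:
# for xi, yi, size in zip(demo_x, demo_y, demo_sizes):
#     ax.scatter(xi, yi, s=size, edgecolors='black', alpha=0.75)
```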
# Simulating Power Spectra
In this notebook we will explore how to simulate the data that we will use to investigate how different spectral parameters can influence band ratios.
Simulated power spectra will be created with varying aperiodic and periodic parameters, and are created using the [FOOOF](https://github.com/fooof-tools/fooof) tool.
In the first set of simulations, each set of simulated spectra will vary across a single parameter while the remaining parameters are held constant. In a second set of simulated power spectra, we will simulate pairs of parameters changing together.
For this part of the project, this notebook demonstrates the simulations with some examples, but does not create the actual set of simulations used in the project. The full set of simulations for the project is created by the standalone scripts, available in the `scripts` folder.
```
%matplotlib inline
import matplotlib.pyplot as plt
import numpy as np
from fooof.sim import *
from fooof.plts import plot_spectra
# Import custom project code
import sys
sys.path.append('../bratios')
from settings import *
from paths import DATA_PATHS as dp
# Settings
FREQ_RANGE = [1, 40]
LO_BAND = [4, 8]
HI_BAND = [13, 30]
# Define default parameters
EXP_DEF = [0, 1]
CF_LO_DEF = np.mean(LO_BAND)
CF_HI_DEF = np.mean(HI_BAND)
PW_DEF = 0.4
BW_DEF = 1
# Set a range of values for the band power to take
PW_START = 0
PW_END = 1
PW_INC = .1
# Set a range of values for the aperiodic exponent to take
EXP_START = .25
EXP_END = 3
EXP_INC = .25
```
## Simulate power spectra with one parameter varying
First we will make several power spectra with varying band power.
To do so, we will continue to use the example of the theta / beta ratio, and vary the power of the higher (beta) band.
```
# The Stepper object iterates through a range of values
pw_step = Stepper(PW_START, PW_END, PW_INC)
num_spectra = len(pw_step)
# `param_iter` creates a generator that can be used to step across ranges of parameters
pw_iter = param_iter([[CF_LO_DEF, PW_DEF, BW_DEF], [CF_HI_DEF, pw_step, BW_DEF]])
# Simulate power spectra
pw_fs, pw_ps, pw_syns = gen_group_power_spectra(num_spectra, FREQ_RANGE, EXP_DEF, pw_iter)
# Collect together simulated data
pw_data = [pw_fs, pw_ps, pw_syns]
# Save out data, to access from other notebooks
np.save(dp.make_file_path(dp.demo, 'PW_DEMO', 'npy'), pw_data)
# Plot our series of generated power spectra, with varying high-band power
plot_spectra(pw_fs, pw_ps, log_powers=True)
```
Above, we can see each of the spectra we generated plotted, with the same properties for all parameters, except for beta power.
The same approach can be used to simulate data that vary only in one parameter, for each isolated spectral feature.
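For intuition about what these simulations contain: each spectrum is, in log10 power, an aperiodic 1/f component plus Gaussian peaks parameterized as `[CF, PW, BW]`, matching the parameter lists used above. Below is a simplified pure-Python sketch of that model — FOOOF's actual implementation differs in details (vectorization, an optional knee parameter, and its exact bandwidth convention):

```python
import math

def sim_log_power(freq, offset, exponent, peaks):
    """Log10 power at one frequency: a 1/f aperiodic component plus
    Gaussian peaks, given as (center_freq, power, bandwidth) tuples."""
    aperiodic = offset - exponent * math.log10(freq)
    periodic = sum(pw * math.exp(-((freq - cf) ** 2) / (2 * bw ** 2))
                   for cf, pw, bw in peaks)
    return aperiodic + periodic

# Exponent 1 with a theta peak at 6 Hz and a beta peak at 21.5 Hz
val = sim_log_power(6, 0, 1, [(6, 0.4, 1), (21.5, 0.4, 1)])
```

At a peak's center frequency, the Gaussian contributes exactly its `PW` value on top of the aperiodic component.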
## Simulate power spectra with two parameters varying
In this section we will explore generating data in which two parameters vary simultaneously.
Specifically, we will simulate the case in which the aperiodic exponent varies while power for a higher band oscillation also varies.
The total number of trials will be: `(n_pw_changes) * (n_exp_changes)`.
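Concretely, the trial count follows from the step settings defined earlier — assuming, as with Python's `range`, that the end value is excluded (worth confirming against `len(Stepper(...))`):

```python
def n_steps(start, end, inc):
    """Number of values stepping from start (inclusive) to end (exclusive)."""
    return round((end - start) / inc)

n_pw_changes = n_steps(0, 1, 0.1)        # power values: 0.0, 0.1, ..., 0.9
n_exp_changes = n_steps(0.25, 3, 0.25)   # exponent values: 0.25, 0.5, ..., 2.75
total_trials = n_pw_changes * n_exp_changes
```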
```
data = []
exp_step = Stepper(EXP_START, EXP_END, EXP_INC)
for exp in exp_step:
# Low band sweeps through power range
pw_step = Stepper(PW_START, PW_END, PW_INC)
pw_iter = param_iter([[CF_LO_DEF, PW_DEF, BW_DEF],
                          [CF_HI_DEF, pw_step, BW_DEF]])
# Generates data
pw_apc_fs, pw_apc_ps, pw_apc_syns = gen_group_power_spectra(
len(pw_step), FREQ_RANGE, [0, exp], pw_iter)
# Collect together all simulated data
data.append(np.array([exp, pw_apc_fs, pw_apc_ps], dtype=object))
# Save out data, to access from other notebooks
np.save(dp.make_file_path(dp.demo, 'EXP_PW_DEMO', 'npy'), data)
# Extract some example power spectra, sub-sampling ones that vary in both exp & power
# Note: this is just a shortcut to step across the diagonal of the matrix of simulated spectra
plot_psds = [data[ii][2][ii, :] for ii in range(min(len(exp_step), len(pw_step)))]
# Plot a selection of power spectra in the paired parameter simulations
plot_spectra(pw_apc_fs, plot_psds, log_powers=True)
```
In the plot above, we can see a selection of the data we just simulated, selecting a group of power spectra that vary across both exponent and beta power.
In the next notebook we will calculate band ratios and see how changing these parameters affects ratio measures.
### Simulating the full set of data
Here we just simulated example data, to show how the simulations work.
The full set of simulations for this project are re-created with scripts, available in the `scripts` folder.
To simulate the full set of single-parameter simulations for this project, run this script:
`python gen_single_param_sims.py`
To simulate the full set of interacting-parameter simulations for this project, run this script:
`python gen_interacting_param_sims.py`
These scripts will automatically save all the regenerated data into the `data` folder.
```
# Check all the available data files for the single parameter simulations
dp.list_files('sims_single')
# Check all the available data files for the interacting parameter simulations
dp.list_files('sims_interacting')
```
# Symbolic System
Create a symbolic three-state system:
```
import markoviandynamics as md
sym_system = md.SymbolicDiscreteSystem(3)
```
Get the symbolic equilibrium distribution:
```
sym_system.equilibrium()
```
Create a symbolic three-state system with potential energy barriers:
```
sym_system = md.SymbolicDiscreteSystemArrhenius(3)
```
It's the same object as the previous one, only with additional symbolic barriers:
```
sym_system.B_ij
```
We can assign values to the free parameters in the equilibrium distribution:
```
sym_system.equilibrium(energies=[0, 0.1, 1])
sym_system.equilibrium(energies=[0, 0.1, 1], temperature=1.5)
```
and create multiple equilibrium points by assigning a temperature sequence:
```
import numpy as np
temperature_range = np.linspace(0.01, 10, 300)
equilibrium_line = sym_system.equilibrium([0, 0.1, 1], temperature_range)
equilibrium_line.shape
```
# Symbolic rate matrix
Create a symbolic rate matrix with Arrhenius process transitions:
```
sym_rate_matrix = md.SymbolicRateMatrixArrhenius(3)
sym_rate_matrix
```
Energies and barriers can be substituted at once:
```
energies = [0, 0.1, 1]
barriers = [[0, 0.11, 1.1],
[0.11, 0, 10],
[1.1, 10, 0]]
sym_rate_matrix.subs_symbols(energies, barriers)
sym_rate_matrix.subs_symbols(energies, barriers, temperature=2.5)
```
A symbolic rate matrix can also be lambdified (transformed into a lambda function):
```
rate_matrix_lambdified = sym_rate_matrix.lambdify()
```
The parameters of this function are the free symbols in the rate matrix:
```
rate_matrix_lambdified.__code__.co_varnames
```
They are positioned in ascending order: first the temperature, then the energies, then the barriers. A sequence of rate matrices can be created by calling this function with a sequence for each parameter.
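As a concrete illustration, an Arrhenius-style rate matrix can be built directly in NumPy. The transition form `exp(-(B_ij - E_j) / T)` and the column-sum-zero diagonal are assumptions of this sketch; the library's exact convention may differ.

```python
import numpy as np

def rate_matrix_arrhenius(energies, barriers, temperature):
    """Sketch of an Arrhenius rate matrix: the rate from state j to i
    is assumed to be exp(-(B_ij - E_j) / T); the diagonal is set so
    that each column sums to zero (probability conservation)."""
    E = np.asarray(energies, dtype=float)
    B = np.asarray(barriers, dtype=float)
    R = np.exp(-(B - E[np.newaxis, :]) / temperature)
    np.fill_diagonal(R, 0.0)
    np.fill_diagonal(R, -R.sum(axis=0))
    return R

R = rate_matrix_arrhenius([0, 0.1, 1],
                          [[0, 0.11, 1.1], [0.11, 0, 10], [1.1, 10, 0]],
                          temperature=2.5)
print(R.sum(axis=0))   # each column sums to ~0
```

With a column-sum-zero generator, `dp/dt = R p` preserves the normalization of any probability vector it evolves.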
# Dynamics
We start by computing an initial probability distribution by assigning the energies and temperature:
```
p_initial = sym_system.equilibrium(energies, 0.5)
p_initial
```
## Trajectory - evolve by a fixed rate matrix
Compute the rate matrix by substituting free symbols:
```
rate_matrix = md.rate_matrix_arrhenius(energies, barriers, 1.2)
rate_matrix
```
Create trajectory of probability distributions in time:
```
import numpy as np
# Create time sequence
t_range = np.linspace(0, 5, 100)
trajectory = md.evolve(p_initial, rate_matrix, t_range)
trajectory.shape
import matplotlib.pyplot as plt
%matplotlib inline
for i in [0, 1, 2]:
plt.plot(t_range, trajectory[i,0,:], label='$p_{}(t)$'.format(i + 1))
plt.xlabel('$t$')
plt.legend()
```
## Trajectory - evolve by a time-dependent rate matrix
Create a temperature sequence in time:
```
temperature_time = 1.4 + np.sin(4. * t_range)
```
Create a rate matrix as a function of the temperature sequence:
```
# Array of stacked rate matrices that corresponds to ``temperature_time``
rate_matrix_time = md.rate_matrix_arrhenius(energies, barriers, temperature_time)
rate_matrix_time.shape
crazy_trajectory = md.evolve(p_initial, rate_matrix_time, t_range)
crazy_trajectory.shape
for i in [0, 1, 2]:
plt.plot(t_range, crazy_trajectory[i,0,:], label='$p_{}(t)$'.format(i + 1))
plt.xlabel('$t$')
plt.legend()
```
# Diagonalize the rate matrix
Calculate the eigenvalues, left and right eigenvectors:
```
U, eigenvalues, V = md.eigensystem(rate_matrix)
U.shape, eigenvalues.shape, V.shape
```
The eigenvalues are in descending order (the eigenvectors are ordered accordingly):
```
eigenvalues
```
We can also compute the eigensystem for multiple rate matrices at once (or for the time evolution of a rate matrix, e.g., `rate_matrix_time`):
```
U, eigenvalues, V = md.eigensystem(rate_matrix_time)
U.shape, eigenvalues.shape, V.shape
```
# Decompose into rate matrix eigenvectors
A probability distribution can, in general, be decomposed into the right eigenvectors of the rate matrix:
$$\left|p\right\rangle = a_1\left|v_1\right\rangle + a_2\left|v_2\right\rangle + a_3\left|v_3\right\rangle$$
where $a_i$ is the coefficient of the i'th right eigenvector $\left|v_i\right\rangle$. A rate matrix that satisfies detailed balance has the equilibrium distribution $\left|\pi\right\rangle$ as its first eigenvector. Therefore, *markovian-dynamics* normalizes $a_1$ to $1$ and decomposes a probability distribution as
$$\left|p\right\rangle = \left|\pi\right\rangle + a_2\left|v_2\right\rangle + a_3\left|v_3\right\rangle$$
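In plain NumPy, this decomposition can be sketched as follows (an illustration with a hypothetical rate matrix, not the library's implementation):

```python
import numpy as np

# A small 3-state rate matrix (columns sum to zero) -- hypothetical values
R = np.array([[-1.0, 0.5, 0.2],
              [ 0.6, -1.0, 0.3],
              [ 0.4, 0.5, -0.5]])

evals, V = np.linalg.eig(R)        # columns of V are right eigenvectors
order = np.argsort(evals)[::-1]    # descending: the first eigenvalue is ~0
evals, V = evals[order], V[:, order]
U = np.linalg.inv(V)               # rows of U are the left eigenvectors

p = np.array([0.5, 0.3, 0.2])
a = U @ p                          # raw coefficients of p on the eigenbasis
a = a / a[0]                       # normalize the equilibrium coefficient to 1
print(a)
```

The coefficients come from the left eigenvectors, which are biorthogonal to the right ones, and `a` is normalized so the equilibrium coefficient equals 1, matching the convention above.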
Decompose ``p_initial``:
```
md.decompose(p_initial, rate_matrix)
```
We can also decompose multiple points and/or use multiple rate matrices. For example, decompose multiple points:
```
first_decomposition = md.decompose(equilibrium_line, rate_matrix)
first_decomposition.shape
for i in [0, 1, 2]:
plt.plot(temperature_range, first_decomposition[i,:], label='$a_{}(T)$'.format(i + 1))
plt.xlabel('$T$')
plt.legend()
```
or decompose a trajectory:
```
second_decomposition = md.decompose(trajectory, rate_matrix)
second_decomposition.shape
for i in [0, 1, 2]:
plt.plot(t_range, second_decomposition[i,0,:], label='$a_{}(t)$'.format(i + 1))
plt.xlabel('$t$')
plt.legend()
```
Decompose a single point using multiple rate matrices:
```
third_decomposition = md.decompose(p_initial, rate_matrix_time)
third_decomposition.shape
for i in [0, 1, 2]:
plt.plot(t_range, third_decomposition[i,0,:], label='$a_{}(t)$'.format(i + 1))
plt.legend()
```
Decompose, for every time $t$, the corresponding point $\left|p(t)\right\rangle$ using the temporal rate matrix $R(t)$:
```
fourth_decomposition = md.decompose(trajectory, rate_matrix_time)
fourth_decomposition.shape
for i in [0, 1, 2]:
plt.plot(t_range, fourth_decomposition[i,0,:], label='$a_{}(t)$'.format(i + 1))
plt.legend()
```
# Plotting the 2D probability simplex for three-state system
The probability space of a three-state system is a three dimensional space. However, the normalization constraint $\sum_{i}p_i=1$ together with $0 < p_i \le 1$, form a 2D triangular plane in which all of the possible probability points reside.
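A common embedding maps each distribution onto that triangle; the particular vertex placement below is one possible convention, not necessarily the one used by the plotting module:

```python
from math import sqrt

def simplex_to_2d(p):
    """Map a 3-state distribution (p1, p2, p3), summing to 1, onto an
    equilateral triangle with vertices (0, 0), (1, 0), (1/2, sqrt(3)/2)."""
    p1, p2, p3 = (float(v) for v in p)
    x = p2 + 0.5 * p3
    y = (sqrt(3) / 2) * p3
    return x, y

# The three pure states land exactly on the triangle's corners
print(simplex_to_2d([1, 0, 0]))   # first corner
print(simplex_to_2d([0, 1, 0]))   # second corner
print(simplex_to_2d([0, 0, 1]))   # top corner
```

Any valid probability vector maps to a point inside (or on the edges of) this triangle, which is why a 2D plot can show the whole state space of a three-state system.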
We'll start by importing the plotting module:
```
import markoviandynamics.plotting.plotting2d as plt2d
# Use latex rendering
plt2d.latex()
```
Plot the probability plane:
```
plt2d.figure(figsize=(7, 5.5))
plt2d.equilibrium_line(equilibrium_line)
plt2d.legend()
```
We can plot many objects on the probability plane, such as trajectories, points, and eigenvectors of the rate matrix:
```
# Final equilibrium point
p_final = sym_system.equilibrium(energies, 1.2)
plt2d.figure(focus=True, figsize=(7, 5.5))
plt2d.equilibrium_line(equilibrium_line)
# Plot trajectory
plt2d.plot(trajectory, c='r', label=r'$\left|p(t)\right>$')
# Initial & final points
plt2d.point(p_initial, c='k', label=r'$\left|p_0\right>$')
plt2d.point(p_final, c='r', label=r'$\left|\pi\right>$')
# Eigenvectors
plt2d.eigenvectors(md.eigensystem(rate_matrix), kwargs_arrow={'zorder': 1})
plt2d.legend()
```
Plot multiple trajectories at once:
```
# Create temperature sequence
temperature_range = np.logspace(np.log10(0.01), np.log10(10), 50)
# Create the equilibrium line points
equilibrium_line = sym_system.equilibrium(energies, temperature_range)
# Create a trajectory for every point on ``equilibrium_line``
equilibrium_line_trajectory = md.evolve(equilibrium_line, rate_matrix, t_range)
plt2d.figure(focus=True, figsize=(7, 5))
plt2d.equilibrium_line(equilibrium_line)
plt2d.plot(equilibrium_line_trajectory, c='g', alpha=0.2)
plt2d.point(p_final, c='r', label=r'$\left|\pi\right>$')
plt2d.legend()
# Create a trajectory for every point on ``equilibrium_line``
equilibrium_line_crazy_trajectory = md.evolve(equilibrium_line, rate_matrix_time, t_range)
plt2d.figure(focus=True, figsize=(7, 5))
plt2d.equilibrium_line(equilibrium_line)
plt2d.plot(equilibrium_line_crazy_trajectory, c='r', alpha=0.1)
plt2d.text(p_final, r'Text $\alpha$', delta_x=0.05)
plt2d.legend()
```
# Logistic Regression
Modules
```
import torch
import torch.nn as nn
import torchvision
import torchvision.transforms as transforms
```
Hyper-parameters
```
input_size = 784
num_classes = 10
num_epochs = 5
batch_size = 100
learning_rate = 0.001
```
MNIST dataset (images and labels)
```
train_dataset = torchvision.datasets.MNIST(root='../../data',
train=True,
transform=transforms.ToTensor(),
download=True)
test_dataset = torchvision.datasets.MNIST(root='../../data',
train=False,
transform=transforms.ToTensor())
```
Data loader (input pipeline)
```
train_loader = torch.utils.data.DataLoader(dataset=train_dataset,
batch_size=batch_size,
shuffle=True)
test_loader = torch.utils.data.DataLoader(dataset=test_dataset,
batch_size=batch_size,
shuffle=False)
```
Logistic regression model
```
model = nn.Linear(input_size, num_classes)
```
Loss and optimizer
`nn.CrossEntropyLoss()` computes softmax internally
```
criterion = nn.CrossEntropyLoss()
optimizer = torch.optim.SGD(model.parameters(), lr=learning_rate)
```
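To see what "computes softmax internally" means, the same loss can be sketched in NumPy (illustrative only, not PyTorch's implementation): a log-softmax over the logits followed by the mean negative log-probability of the true classes.

```python
import numpy as np

def cross_entropy(logits, labels):
    """Mean negative log-likelihood of `labels` under softmax(logits)."""
    z = logits - logits.max(axis=1, keepdims=True)   # for numerical stability
    log_probs = z - np.log(np.exp(z).sum(axis=1, keepdims=True))
    return -log_probs[np.arange(len(labels)), labels].mean()

logits = np.array([[2.0, 0.5, -1.0],
                   [0.1, 0.2, 0.3]])
labels = np.array([0, 2])
print(cross_entropy(logits, labels))
```

Because the softmax is folded into the loss, the model above outputs raw logits and no explicit softmax layer is needed.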
Train the model
```
total_step = len(train_loader)
for epoch in range(num_epochs):
for i, (images, labels) in enumerate(train_loader):
# Reshape images to (batch_size, input_size)
images = images.reshape(-1, 28*28)
# Forward pass
outputs = model(images)
loss = criterion(outputs, labels)
# Backward and optimize
optimizer.zero_grad()
loss.backward()
optimizer.step()
if (i+1) % 100 == 0:
print ('Epoch [{}/{}], Step [{}/{}], Loss: {:.4f}'
.format(epoch+1, num_epochs, i+1, total_step, loss.item()))
```
Test the model
In the test phase, we don't need to compute gradients (for memory efficiency)
```
with torch.no_grad():
correct = 0
total = 0
for images, labels in test_loader:
images = images.reshape(-1, 28*28)
outputs = model(images)
_, predicted = torch.max(outputs.data, 1)
total += labels.size(0)
correct += (predicted == labels).sum()
print('Accuracy of the model on the 10000 test images: {} %'.format(100 * correct / total))
```
Save the model checkpoint
```
torch.save(model.state_dict(), 'model.ckpt')
```
# Version information
```
from datetime import date
print("Running date:", date.today().strftime("%B %d, %Y"))
import pyleecan
print("Pyleecan version:" + pyleecan.__version__)
import SciDataTool
print("SciDataTool version:" + SciDataTool.__version__)
```
# How to define a machine
This tutorial shows the different ways to define an electrical machine. To do so, it presents the definition of the **Toyota Prius 2004** interior permanent magnet machine with distributed winding \[1\].
The notebook related to this tutorial is available on [GitHub](https://github.com/Eomys/pyleecan/tree/master/Tutorials/tuto_Machine.ipynb).
## Type of machines Pyleecan can model
Pyleecan handles the geometrical modelling of the main 2D radial flux machines, such as:
- surface or interior permanent magnet machines (SPMSM, IPMSM)
- synchronous reluctance machines (SynRM)
- squirrel-cage induction machines and doubly-fed induction machines (SCIM, DFIM)
- wound rotor synchronous machines and salient pole synchronous machines (WRSM)
- switched reluctance machines (SRM)
The architecture of Pyleecan also makes it possible to define other kinds of machines (with more than two laminations, for instance). More information is available in the ICEM 2020 publication \[2\].
Every machine can be defined by using the **Graphical User Interface** or directly in **Python script**.
## Defining machine with Pyleecan GUI
The GUI is the easiest way to define a machine in Pyleecan. Its purpose is to create or load a machine and save it in JSON format to be loaded in a python script. The interface lets you define, step by step and in a user-friendly way, every characteristic of the machine, such as:
- topology
- dimensions
- materials
- winding
Each parameter is explained by a tooltip and the machine can be previewed at each stage of the design.
## Start the GUI
The GUI can be started by running the following command in a notebook:
```python
# Start Pyleecan GUI from the Jupyter Notebook
%run -m pyleecan
```
To use it on Anaconda you may need to create the system variable:
QT_QPA_PLATFORM_PLUGIN_PATH : path\to\anaconda3\Lib\site-packages\PySide2\plugins\platforms
The GUI can also be launched in a terminal by calling one of the following commands in a terminal:
```
Path/to/python.exe -m pyleecan
Path/to/python3.exe -m pyleecan
```
## Load a machine
Once the machine is defined in the GUI, it can be loaded with the following commands:
```
%matplotlib notebook
# Load the machine
from os.path import join
from pyleecan.Functions.load import load
from pyleecan.definitions import DATA_DIR
IPMSM_A = load(join(DATA_DIR, "Machine", "Toyota_Prius.json"))
IPMSM_A.plot()
```
## Defining Machine in scripting mode
Pyleecan also makes it possible to define the machine in scripting mode, using different classes. Each class is defined from a csv file in the folder _pyleecan/Generator/ClasseRef_ and the documentation of every class is available on the dedicated [webpage](https://www.pyleecan.org/pyleecan.Classes.html).
The following image shows the machine classes organization:

Every rotor and stator can be created with the **Lamination** class or one of its daughter classes.

Scripting makes it possible to define complex and exotic machines that can't be defined in the GUI, such as this one:
```
from pyleecan.Classes.MachineUD import MachineUD
from pyleecan.Classes.LamSlotWind import LamSlotWind
from pyleecan.Classes.LamSlot import LamSlot
from pyleecan.Classes.WindingCW2LT import WindingCW2LT
from pyleecan.Classes.SlotW10 import SlotW10
from pyleecan.Classes.SlotW22 import SlotW22
from numpy import pi
machine = MachineUD(name="4 laminations machine")
# Main geometry parameter
Rext = 170e-3 # Exterior radius of outer lamination
W1 = 30e-3 # Width of first lamination
A1 = 2.5e-3 # Width of the first airgap
W2 = 20e-3
A2 = 10e-3
W3 = 20e-3
A3 = 2.5e-3
W4 = 60e-3
# Outer stator
lam1 = LamSlotWind(Rext=Rext, Rint=Rext - W1, is_internal=False, is_stator=True)
lam1.slot = SlotW22(
Zs=12, W0=2 * pi / 12 * 0.75, W2=2 * pi / 12 * 0.75, H0=0, H2=W1 * 0.65
)
lam1.winding = WindingCW2LT(qs=3, p=3)
# Outer rotor
lam2 = LamSlot(
Rext=lam1.Rint - A1, Rint=lam1.Rint - A1 - W2, is_internal=True, is_stator=False
)
lam2.slot = SlotW10(Zs=22, W0=25e-3, W1=25e-3, W2=15e-3, H0=0, H1=0, H2=W2 * 0.75)
# Inner rotor
lam3 = LamSlot(
Rext=lam2.Rint - A2,
Rint=lam2.Rint - A2 - W3,
is_internal=False,
is_stator=False,
)
lam3.slot = SlotW10(
Zs=22, W0=17.5e-3, W1=17.5e-3, W2=12.5e-3, H0=0, H1=0, H2=W3 * 0.75
)
# Inner stator
lam4 = LamSlotWind(
Rext=lam3.Rint - A3, Rint=lam3.Rint - A3 - W4, is_internal=True, is_stator=True
)
lam4.slot = SlotW10(Zs=12, W0=25e-3, W1=25e-3, W2=1e-3, H0=0, H1=0, H2=W4 * 0.75)
lam4.winding = WindingCW2LT(qs=3, p=3)
# Machine definition
machine.lam_list = [lam1, lam2, lam3, lam4]
# Plot, check and save
machine.plot()
```
## Stator definition
To define the stator, we initialize a [**LamSlotWind**](http://pyleecan.org/pyleecan.Classes.LamSlotWind.html) object with the different parameters. In pyleecan, all the parameters must be set in SI units.
```
from pyleecan.Classes.LamSlotWind import LamSlotWind
mm = 1e-3 # Millimeter
# Lamination setup
stator = LamSlotWind(
Rint=80.95 * mm, # internal radius [m]
Rext=134.62 * mm, # external radius [m]
L1=83.82 * mm, # Lamination stack active length [m] without radial ventilation airducts
# but including insulation layers between lamination sheets
Nrvd=0, # Number of radial air ventilation duct
Kf1=0.95, # Lamination stacking / packing factor
is_internal=False,
is_stator=True,
)
```
Then we add 48 slots using [**SlotW11**](http://pyleecan.org/pyleecan.Classes.SlotW11.html), which is one of the 25 slot classes:
```
from pyleecan.Classes.SlotW11 import SlotW11
# Slot setup
stator.slot = SlotW11(
Zs=48, # Slot number
H0=1.0 * mm, # Slot isthmus height
H1=0, # Height
H2=33.3 * mm, # Slot height below wedge
W0=1.93 * mm, # Slot isthmus width
W1=5 * mm, # Slot top width
W2=8 * mm, # Slot bottom width
R1=4 * mm # Slot bottom radius
)
```
As for the slot, we can define the winding and its conductor with [**WindingDW1L**](http://pyleecan.org/pyleecan.Classes.WindingDW1L.html) and [**CondType11**](http://pyleecan.org/pyleecan.Classes.CondType11.html). The winding conventions are further explained on the [pyleecan website](https://pyleecan.org/winding.convention.html).
```
from pyleecan.Classes.WindingDW1L import WindingDW1L
from pyleecan.Classes.CondType11 import CondType11
# Winding setup
stator.winding = WindingDW1L(
qs=3, # number of phases
    Lewout=0, # straight length of conductor outside lamination before EW-bend
p=4, # number of pole pairs
Ntcoil=9, # number of turns per coil
Npcp=1, # number of parallel circuits per phase
Nslot_shift_wind=0, # 0 not to change the stator winding connection matrix built by pyleecan number
# of slots to shift the coils obtained with pyleecan winding algorithm
# (a, b, c becomes b, c, a with Nslot_shift_wind1=1)
is_reverse_wind=False # True to reverse the default winding algorithm along the airgap
# (c, b, a instead of a, b, c along the trigonometric direction)
)
# Conductor setup
stator.winding.conductor = CondType11(
Nwppc_tan=1, # stator winding number of preformed wires (strands)
# in parallel per coil along tangential (horizontal) direction
Nwppc_rad=1, # stator winding number of preformed wires (strands)
# in parallel per coil along radial (vertical) direction
Wwire=0.000912, # single wire width without insulation [m]
Hwire=2e-3, # single wire height without insulation [m]
Wins_wire=1e-6, # winding strand insulation thickness [m]
type_winding_shape=0, # type of winding shape for end winding length calculation
# 0 for hairpin windings
# 1 for normal windings
)
```
## Rotor definition
For this example, we use the [**LamHole**](http://www.pyleecan.org/pyleecan.Classes.LamHole.html) class to define the rotor as a lamination with holes to contain the magnets.
In the same way as for the stator, we start by defining the lamination:
```
from pyleecan.Classes.LamHole import LamHole
# Rotor setup
rotor = LamHole(
Rint=55.32 * mm, # Internal radius
Rext=80.2 * mm, # external radius
is_internal=True,
is_stator=False,
L1=stator.L1 # Lamination stack active length [m]
# without radial ventilation airducts but including insulation layers between lamination sheets
)
```
After that, we can add holes with magnets to the rotor using the class [**HoleM50**](http://www.pyleecan.org/pyleecan.Classes.HoleM50.html):
```
from pyleecan.Classes.HoleM50 import HoleM50
rotor.hole = list()
rotor.hole.append(
HoleM50(
Zh=8, # Number of Hole around the circumference
W0=42.0 * mm, # Slot opening
W1=0, # Tooth width (at V bottom)
W2=0, # Distance Magnet to bottom of the V
W3=14.0 * mm, # Tooth width (at V top)
W4=18.9 * mm, # Magnet Width
H0=10.96 * mm, # Slot Depth
H1=1.5 * mm, # Distance from the lamination Bore
H2=1 * mm, # Additional depth for the magnet
H3=6.5 * mm, # Magnet Height
H4=0, # Slot top height
)
)
```
The holes are defined as a list, which makes it possible to create several layers of holes and/or to combine different kinds of holes.
## Create a shaft and a frame
The classes [**Shaft**](http://www.pyleecan.org/pyleecan.Classes.Shaft.html) and [**Frame**](http://www.pyleecan.org/pyleecan.Classes.Frame.html) make it possible to add a shaft and a frame to the machine. In this example there is no frame:
```
from pyleecan.Classes.Shaft import Shaft
from pyleecan.Classes.Frame import Frame
# Set shaft
shaft = Shaft(Drsh=rotor.Rint * 2, # Diameter of the rotor shaft [m]
# used to estimate bearing diameter for friction losses
Lshaft=1.2 # length of the rotor shaft [m]
)
frame = None
```
## Set materials and magnets
Every Pyleecan object can be saved in JSON using the method `save` and can be loaded with the `load` function.
In this example, the materials *M400_50A* and *Copper1* are loaded and set in the corresponding properties.
```
# Loading Materials
M400_50A = load(join(DATA_DIR, "Material", "M400-50A.json"))
Copper1 = load(join(DATA_DIR, "Material", "Copper1.json"))
# Set Materials
stator.mat_type = M400_50A # Stator Lamination material
rotor.mat_type = M400_50A # Rotor Lamination material
stator.winding.conductor.cond_mat = Copper1 # Stator winding conductor material
```
A material can also be defined by scripting, like any other Pyleecan object. The material *Magnet_prius* is created with the classes [**Material**](http://www.pyleecan.org/pyleecan.Classes.Material.html) and [**MatMagnetics**](http://www.pyleecan.org/pyleecan.Classes.MatMagnetics.html).
```
from pyleecan.Classes.Material import Material
from pyleecan.Classes.MatMagnetics import MatMagnetics
# Defining magnets
Magnet_prius = Material(name="Magnet_prius")
# Definition of the magnetic properties of the material
Magnet_prius.mag = MatMagnetics(
mur_lin = 1.05, # Relative magnetic permeability
    Hc = 902181.163126629, # Coercivity field [A/m]
alpha_Br = -0.001, # temperature coefficient for remanent flux density /°C compared to 20°C
Brm20 = 1.24, # magnet remanence induction at 20°C [T]
Wlam = 0, # lamination sheet width without insulation [m] (0 == not laminated)
)
# Definition of the electric properties of the material
Magnet_prius.elec.rho = 1.6e-06 # Resistivity at 20°C
# Definition of the structural properties of the material
Magnet_prius.struct.rho = 7500.0 # mass per unit volume [kg/m3]
```
The magnet materials are set with the "magnet_X" property. Pyleecan makes it possible to define a different magnetization or material for each magnet of the holes. Here both magnets are defined identically.
```
# Set magnets in the rotor hole
rotor.hole[0].magnet_0.mat_type = Magnet_prius
rotor.hole[0].magnet_1.mat_type = Magnet_prius
rotor.hole[0].magnet_0.type_magnetization = 1
rotor.hole[0].magnet_1.type_magnetization = 1
```
## Create, save and plot the machine
Finally, the Machine object can be created with [**MachineIPMSM**](http://www.pyleecan.org/pyleecan.Classes.MachineIPMSM.html) and saved using the `save` method.
```
from pyleecan.Classes.MachineIPMSM import MachineIPMSM
%matplotlib notebook
IPMSM_Prius_2004 = MachineIPMSM(
name="Toyota Prius 2004",
stator=stator,
rotor=rotor,
shaft=shaft,
frame=frame # None
)
IPMSM_Prius_2004.save('IPMSM_Toyota_Prius_2004.json')
im=IPMSM_Prius_2004.plot()
```
Note that Pyleecan also handles ventilation ducts thanks to the classes:
- [**VentilationCirc**](http://www.pyleecan.org/pyleecan.Classes.VentilationCirc.html)
- [**VentilationPolar**](http://www.pyleecan.org/pyleecan.Classes.VentilationPolar.html)
- [**VentilationTrap**](http://www.pyleecan.org/pyleecan.Classes.VentilationTrap.html)
[1] Z. Yang, M. Krishnamurthy and I. P. Brown, "Electromagnetic and vibrational characteristic of IPM over full torque-speed range", *2013 International Electric Machines & Drives Conference*, Chicago, IL, 2013, pp. 295-302.
[2] P. Bonneel, J. Le Besnerais, E. Devillers, C. Marinel, and R. Pile, “Design Optimization of Innovative Electrical Machines Topologies Based on Pyleecan Opensource Object-Oriented Software,” in 24th International Conference on Electrical Machines (ICEM), 2020.
## Exercise 1: Write a program that reads the user's name from the keyboard (e.g. Mr. Right), asks for the birth month and day, and determines the user's zodiac sign. For example, if the user is a Taurus, print: Mr. Right, you are a very strong-willed Taurus!
```
name = input('Please enter your name: ')
print('Hello,', name)
print('Please enter your birth month and day')
month = int(input('Month: '))
date = int(input('Day: '))
if month == 4:
    if date < 20:
        print(name, 'you are an Aries')
    else:
        print(name, 'you are a very strong-willed Taurus')
if month == 5:
    if date < 21:
        print(name, 'you are a very strong-willed Taurus')
    else:
        print(name, 'you are a Gemini')
if month == 6:
    if date < 22:
        print(name, 'you are a Gemini')
    else:
        print(name, 'you are a Cancer')
if month == 7:
    if date < 23:
        print(name, 'you are a Cancer')
    else:
        print(name, 'you are a Leo')
if month == 8:
    if date < 23:
        print(name, 'you are a Leo')
    else:
        print(name, 'you are a Virgo')
if month == 9:
    if date < 24:
        print(name, 'you are a Virgo')
    else:
        print(name, 'you are a Libra')
if month == 10:
    if date < 24:
        print(name, 'you are a Libra')
    else:
        print(name, 'you are a Scorpio')
if month == 11:
    if date < 23:
        print(name, 'you are a Scorpio')
    else:
        print(name, 'you are a Sagittarius')
if month == 12:
    if date < 22:
        print(name, 'you are a Sagittarius')
    else:
        print(name, 'you are a Capricorn')
if month == 1:
    if date < 20:
        print(name, 'you are a Capricorn')
    else:
        print(name, 'you are an Aquarius')
if month == 2:
    if date < 19:
        print(name, 'you are an Aquarius')
    else:
        print(name, 'you are a Pisces')
if month == 3:
    if date < 22:
        print(name, 'you are a Pisces')
    else:
        print(name, 'you are an Aries')
```
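The chain of `if` blocks above can also be collapsed into a lookup table (an alternative sketch using the same date boundaries as the code above):

```python
# (month, day, sign): the date on which each sign begins, matching the
# boundaries used in the if-chain above
SIGNS = [(1, 20, 'Aquarius'), (2, 19, 'Pisces'), (3, 22, 'Aries'),
         (4, 20, 'Taurus'), (5, 21, 'Gemini'), (6, 22, 'Cancer'),
         (7, 23, 'Leo'), (8, 23, 'Virgo'), (9, 24, 'Libra'),
         (10, 24, 'Scorpio'), (11, 23, 'Sagittarius'), (12, 22, 'Capricorn')]

def zodiac(month, day):
    # The latest boundary at or before (month, day) wins; dates before
    # Jan 20 wrap around to Capricorn.
    for m, d, sign in reversed(SIGNS):
        if (month, day) >= (m, d):
            return sign
    return 'Capricorn'

print(zodiac(5, 1))   # Taurus
print(zodiac(1, 5))   # Capricorn
```

Storing the boundaries as data makes the rules easier to check and change than twelve separate `if` statements.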
## Exercise 2: Write a program that reads two integers m and n (n ≠ 0) from the keyboard and asks what the user wants: for a sum, compute and print the sum from m to n; for a product, compute and print the product from m to n; for a remainder, compute and print m mod n; otherwise compute and print m integer-divided by n.
```
m = int(input('Please enter an integer, press Enter to finish: '))
n = int(input('Please enter a non-zero integer: '))
intend = input('Please enter the desired operation, e.g. + * %: ')
if m < n:
    min_number = m
else:
    min_number = n
total = min_number
if intend == '+':
    if m < n:
        while m < n:
            m = m + 1
            total = total + m
        print(total)
    else:
        while m > n:
            n = n + 1
            total = total + n
        print(total)
elif intend == '*':
    if m < n:
        while m < n:
            m = m + 1
            total = total * m
        print(total)
    else:
        while m > n:
            n = n + 1
            total = total * n
        print(total)
elif intend == '%':
    print(m % n)
else:
    print(m // n)
```
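The loop-based sum and product above can be cross-checked against Python's built-ins (a compact alternative over the inclusive range between m and n):

```python
from math import prod

def range_sum(m, n):
    lo, hi = min(m, n), max(m, n)
    return sum(range(lo, hi + 1))

def range_product(m, n):
    lo, hi = min(m, n), max(m, n)
    return prod(range(lo, hi + 1))

print(range_sum(3, 7))      # 3+4+5+6+7 = 25
print(range_product(7, 3))  # 3*4*5*6*7 = 2520
```

Taking `min`/`max` first makes the functions symmetric in their arguments, so the order of m and n no longer matters.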
## Exercise 3: Write a program that gives protective advice based on Beijing's PM2.5 smog reading. For example, when the PM2.5 value is above 500, you should turn on an air purifier, wear an anti-smog mask, and so on.
```
number = int(input('What is the current PM2.5 index in Beijing? Please enter an integer: '))
if number > 500:
    print('Turn on an air purifier and wear an anti-smog mask')
elif number > 300:
    print('Stay indoors as much as possible; wear an anti-smog mask when going out')
elif number > 200:
    print('Avoid outdoor activities')
elif number > 100:
    print('Light pollution: outdoor activities are fine, a mask is optional')
else:
```
## Warm-up exercise: write a program that displays a blank line on the screen.
```
print('the blank line is me')
print('the blank line is me')
print('the blank line is me')
print()
print('I am the blank line')
```
## Exercise 4: Convert an English word from singular to plural. Given an English word in singular form, output its plural form or give advice on how to pluralize it.
```
word = input('Please enter a word, press Enter to finish: ')
if word.endswith('s') or word.endswith('sh') or word.endswith('ch') or word.endswith('x'):
print(word,'es',sep = '')
elif word.endswith('y'):
if word.endswith('ay') or word.endswith('ey') or word.endswith('iy') or word.endswith('oy') or word.endswith('uy'):
print(word,'s',sep = '')
else:
word = word[:-1]
print(word,'ies',sep = '')
elif word.endswith('f'):
word = word[:-1]
print(word,'ves',sep = '')
elif word.endswith('fe'):
word = word[:-2]
print(word,'ves',sep = '')
elif word.endswith('o'):
    print('add s or es to the end of the word')
else:
print(word,'s',sep = '')
```
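The same rules can be packed into a single function that returns the plural instead of printing it (a sketch covering the same cases):

```python
def pluralize(word):
    """Return the plural of `word`, following the rules above."""
    vowels = 'aeiou'
    if word.endswith(('s', 'sh', 'ch', 'x')):
        return word + 'es'
    if word.endswith('y') and word[-2:-1] not in vowels:
        return word[:-1] + 'ies'
    if word.endswith('fe'):
        return word[:-2] + 'ves'
    if word.endswith('f'):
        return word[:-1] + 'ves'
    if word.endswith('o'):
        return word + 's'   # many '-o' words take 's'; some take 'es'
    return word + 's'

print(pluralize('city'))   # cities
print(pluralize('knife'))  # knives
print(pluralize('box'))    # boxes
```

Returning a value makes the rules testable, whereas the print-based version above can only be checked by reading its output.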
# Supervised searches
## Imports
```
# required imports
from search import *
from notebook import psource, heatmap, gaussian_kernel, show_map, final_path_colors, display_visual, plot_NQueens
import networkx as nx
import numpy as np
import matplotlib.pyplot as plt
from matplotlib.ticker import MultipleLocator
import time
from statistics import mean, stdev
from math import sqrt
from memory_profiler import memory_usage
# Needed to hide warnings in the matplotlib sections
import warnings
warnings.filterwarnings("ignore")
```
## Creating the map and the graph
```
# build the dict where each key is associated with its neighbors
mapa = {}
for i in range(0,60):
for j in range(0,60):
mapa[(i,j)] = {(i+1,j):1, (i-1,j):1, (i,j+1):1, (i,j-1):1}
grafo = UndirectedGraph(mapa)
```
## Modeling the problem class
```
class RobotProblem(Problem):
"""Problema para encontrar o goal saindo de uma posicao (x,y) com um robo."""
def __init__(self, initial, goal, mapa, graph):
Problem.__init__(self, initial, goal)
self.mapa = mapa
self.graph = graph
def actions(self, actual_pos):
"""The actions at a graph node are just its neighbors."""
neighbors = list(self.graph.get(actual_pos).keys())
valid_actions = []
        for act in neighbors:
            # Skip boundary cells and the two interior walls
            if act[0] == 0 or act[0] == 60 or act[1] == 0 or act[1] == 60:
                continue
            elif act[0] == 20 and (0 <= act[1] <= 40):
                continue
            elif act[0] == 40 and (20 <= act[1] <= 60):
                continue
            else:
                valid_actions.append(act)
return valid_actions
def result(self, state, action):
"""The result of going to a neighbor is just that neighbor."""
return action
def path_cost(self, cost_so_far, state1, action, state2):
return cost_so_far + 1
def goal_test(self, state):
if state[0] == self.goal[0] and state[1] == self.goal[1]:
return True
else:
return False
def heuristic_1(self, node):
"""h function is straight-line distance from a node's state to goal."""
locs = getattr(self.graph, 'locations', None)
if locs:
if type(node) is str:
return int(distance(locs[node], locs[self.goal]))
return int(distance(locs[node.state], locs[self.goal]))
else:
return infinity
def heuristic_2(self,node):
""" Manhattan Heuristic Function """
x1,y1 = node.state[0], node.state[1]
x2,y2 = self.goal[0], self.goal[1]
return abs(x2 - x1) + abs(y2 - y1)
```
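Before running the searches, it is worth comparing the two heuristics on their own (a standalone sketch, independent of the classes above): on a 4-connected grid with unit step cost, the true path length is at least the Manhattan distance, which in turn is at least the Euclidean distance, so both heuristics are admissible and Manhattan dominates.

```python
from math import sqrt

def euclidean(a, b):
    # Straight-line distance (heuristic 1)
    return sqrt((a[0] - b[0]) ** 2 + (a[1] - b[1]) ** 2)

def manhattan(a, b):
    # Manhattan distance (heuristic 2)
    return abs(a[0] - b[0]) + abs(a[1] - b[1])

start, goal = (10, 10), (50, 50)
print(euclidean(start, goal))   # sqrt(3200), about 56.57
print(manhattan(start, goal))   # 80
```

Because the Manhattan heuristic is larger while still admissible, A* with it should expand no more nodes than with the straight-line heuristic.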
## Supervised A* search: heuristic 1
### Measuring memory usage
```
def calc_memory_a_h1():
init_pos = (10,10)
goal_pos = (50,50)
robot_problem = RobotProblem(init_pos, goal_pos, mapa, grafo)
node = astar_search(robot_problem, h=robot_problem.heuristic_1)
mem_usage = memory_usage(calc_memory_a_h1)
print('Memory used (sampled every .1 seconds): %s' % mem_usage)
print('Peak memory used: %s' % max(mem_usage))
```
### Computing the search cost and the path taken
```
init_pos = (10,10)
goal_pos = (50,50)
robot_problem = RobotProblem(init_pos, goal_pos, mapa, grafo)
node = astar_search(robot_problem, h=robot_problem.heuristic_1)
print("Custo da busca A* com a primeira heuristica: " + str(node.path_cost))
list_nodes = []
for n in node.path():
list_nodes.append(n.state)
x = []
y = []
for nod in list_nodes:
x.append(nod[0])
y.append(nod[1])
fig = plt.figure()
plt.xlim(0,60)
plt.ylim(0,60)
plt.title('Path taken by the robot in the A* search with the first heuristic')
plt.annotate("",
xy=(0,0), xycoords='data',
xytext=(0, 60), textcoords='data',
arrowprops=dict(arrowstyle="-",
edgecolor = "black",
linewidth=5,
alpha=0.65,
connectionstyle="arc3,rad=0."),
)
plt.annotate("",
xy=(0,0), xycoords='data',
xytext=(60, 0), textcoords='data',
arrowprops=dict(arrowstyle="-",
edgecolor = "black",
linewidth=5,
alpha=0.65,
connectionstyle="arc3,rad=0."),
)
plt.annotate("",
xy=(60,0), xycoords='data',
xytext=(60, 60), textcoords='data',
arrowprops=dict(arrowstyle="-",
edgecolor = "black",
linewidth=5,
alpha=0.65,
connectionstyle="arc3,rad=0."),
)
plt.annotate("",
xy=(0,60), xycoords='data',
xytext=(60, 60), textcoords='data',
arrowprops=dict(arrowstyle="-",
edgecolor = "black",
linewidth=5,
alpha=0.65,
connectionstyle="arc3,rad=0."),
)
plt.annotate("",
xy=(40,20), xycoords='data',
xytext=(40, 60), textcoords='data',
arrowprops=dict(arrowstyle="-",
edgecolor = "black",
linewidth=5,
alpha=0.65,
connectionstyle="arc3,rad=0."),
)
plt.annotate("",
xy=(20,0), xycoords='data',
xytext=(20, 40), textcoords='data',
arrowprops=dict(arrowstyle="-",
edgecolor = "black",
linewidth=5,
alpha=0.65,
connectionstyle="arc3,rad=0."),
)
plt.scatter(x,y)
plt.scatter(10,10,color='r')
plt.scatter(50,50,color='r')
plt.show()
```
### Timing A* from (10,10) to (50,50) using heuristic 1
```
init_pos = (10,10)
goal_pos = (50,50)
robot_problem = RobotProblem(init_pos, goal_pos, mapa, grafo)
times = []
for i in range(0,1000):
    start = time.time()
    node = astar_search(robot_problem, h=robot_problem.heuristic_1)
    end = time.time()
    times.append(end - start)
media_a_1 = mean(times)
desvio_a_1 = stdev(times)
half_width = 1.96 * desvio_a_1 / sqrt(len(times))
intervalo_conf = '(' + str(media_a_1 - half_width) + ',' + str(media_a_1 + half_width) + ')'
print("Media do tempo gasto para a busca A* com a primeira heuristica: " + str(media_a_1))
print("Desvio padrao do tempo gasto para a busca A* com a primeira heuristica: " + str(desvio_a_1))
print("Intervalo de confiança para a busca A* com a primeira heuristica: " + intervalo_conf)
fig = plt.figure()
plt.hist(times,bins=50)
plt.title('Histograma para o tempo de execucao do A* com a primeira heuristica')
plt.show()
```
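The interval printed above is the usual normal-approximation 95% confidence interval, $\bar{x} \pm 1.96\,s/\sqrt{n}$. A small helper (a sketch; the name `ci95` and the sample values are ours) makes the formula explicit and reusable for both heuristics:

```python
from math import sqrt
from statistics import mean, stdev

def ci95(samples):
    # Normal-approximation 95% CI for the mean: mean +/- 1.96 * s / sqrt(n)
    m = mean(samples)
    half_width = 1.96 * stdev(samples) / sqrt(len(samples))
    return (m - half_width, m + half_width)

# Made-up timing samples, just to exercise the helper.
lo, hi = ci95([0.10, 0.12, 0.11, 0.13, 0.09])
```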
### Plotting the relationship between straight-line distance and time for A* with the first heuristic
```
goal_pos = (50,50)
x = []
y = []
for i in range(5,50):
    for j in range(5,50):
        if i != 20 and i != 40:
            init_pos = (i,i)
            distancia_linha_reta = sqrt( (goal_pos[0] - init_pos[0]) ** 2 + (goal_pos[1] - init_pos[1]) ** 2)
            robot_problem = RobotProblem(init_pos, goal_pos, mapa, grafo)
            start = time.time()
            node = astar_search(robot_problem, h=robot_problem.heuristic_1)
            end = time.time()
            x.append(distancia_linha_reta)
            y.append(end - start)
import pandas as pd
df = pd.DataFrame({'x': x, 'y': y})
df
fig = plt.figure()
plt.scatter(x,y)
plt.ylim(0.2, 1)
plt.title("Distancia em linha reta x Tempo A*-heuristica1")
plt.xlabel("Distancia em linha reta entre os pontos inicial e final")
plt.ylabel("Tempo da busca A* com a primeira heuristica")
plt.show()
```
## Informed A* search: Heuristic 2
### Computing memory usage
```
def calc_memory_a_h2():
    init_pos = (10,10)
    goal_pos = (50,50)
    robot_problem = RobotProblem(init_pos, goal_pos, mapa, grafo)
    node = astar_search(robot_problem, h=robot_problem.heuristic_2)
mem_usage = memory_usage(calc_memory_a_h2)
print('Memória usada (em intervalos de .1 segundos): %s' % mem_usage)
print('Maximo de memoria usada: %s' % max(mem_usage))
```
### Computing the search cost and the path taken
```
init_pos = (10,10)
goal_pos = (50,50)
robot_problem = RobotProblem(init_pos, goal_pos, mapa, grafo)
node = astar_search(robot_problem, h=robot_problem.heuristic_2)
print("Custo da busca A* com a segunda heuristica: " + str(node.path_cost))
list_nodes = []
for n in node.path():
    list_nodes.append(n.state)
x = []
y = []
for nod in list_nodes:
    x.append(nod[0])
    y.append(nod[1])
fig = plt.figure()
plt.xlim(0,60)
plt.ylim(0,60)
plt.title('Caminho percorrido pelo robo na busca A* com a segunda heuristica')
plt.annotate("",
xy=(0,0), xycoords='data',
xytext=(0, 60), textcoords='data',
arrowprops=dict(arrowstyle="-",
edgecolor = "black",
linewidth=5,
alpha=0.65,
connectionstyle="arc3,rad=0."),
)
plt.annotate("",
xy=(0,0), xycoords='data',
xytext=(60, 0), textcoords='data',
arrowprops=dict(arrowstyle="-",
edgecolor = "black",
linewidth=5,
alpha=0.65,
connectionstyle="arc3,rad=0."),
)
plt.annotate("",
xy=(60,0), xycoords='data',
xytext=(60, 60), textcoords='data',
arrowprops=dict(arrowstyle="-",
edgecolor = "black",
linewidth=5,
alpha=0.65,
connectionstyle="arc3,rad=0."),
)
plt.annotate("",
xy=(0,60), xycoords='data',
xytext=(60, 60), textcoords='data',
arrowprops=dict(arrowstyle="-",
edgecolor = "black",
linewidth=5,
alpha=0.65,
connectionstyle="arc3,rad=0."),
)
plt.annotate("",
xy=(40,20), xycoords='data',
xytext=(40, 60), textcoords='data',
arrowprops=dict(arrowstyle="-",
edgecolor = "black",
linewidth=5,
alpha=0.65,
connectionstyle="arc3,rad=0."),
)
plt.annotate("",
xy=(20,0), xycoords='data',
xytext=(20, 40), textcoords='data',
arrowprops=dict(arrowstyle="-",
edgecolor = "black",
linewidth=5,
alpha=0.65,
connectionstyle="arc3,rad=0."),
)
plt.scatter(x,y)
plt.scatter(10,10,color='r')
plt.scatter(50,50,color='r')
plt.show()
```
### Computing the time taken by A* from (10,10) to (50,50) using heuristic 2
```
init_pos = (10,10)
goal_pos = (50,50)
robot_problem = RobotProblem(init_pos, goal_pos, mapa, grafo)
times = []
for i in range(0,1000):
    start = time.time()
    node = astar_search(robot_problem, h=robot_problem.heuristic_2)
    end = time.time()
    times.append(end - start)
media_a_2 = mean(times)
desvio_a_2 = stdev(times)
half_width = 1.96 * desvio_a_2 / sqrt(len(times))
intervalo_conf = '(' + str(media_a_2 - half_width) + ',' + str(media_a_2 + half_width) + ')'
print("Media do tempo gasto para a busca A* com a segunda heuristica: " + str(media_a_2))
print("Desvio padrao do tempo gasto para a busca A* com a segunda heuristica: " + str(desvio_a_2))
print("Intervalo de confiança para a busca A* com a segunda heuristica: " + intervalo_conf)
fig = plt.figure()
plt.hist(times,bins=50)
plt.title('Histograma para o tempo de execucao do A* com a segunda heuristica')
plt.show()
```
### Plotting the relationship between straight-line distance and time for A* with the second heuristic
```
goal_pos = (50,50)
x = []
y = []
for i in range(5,50):
    for j in range(5,50):
        if i != 20 and i != 40:
            init_pos = (i,i)
            distancia_linha_reta = sqrt( (goal_pos[0] - init_pos[0]) ** 2 + (goal_pos[1] - init_pos[1]) ** 2)
            robot_problem = RobotProblem(init_pos, goal_pos, mapa, grafo)
            start = time.time()
            node = astar_search(robot_problem, h=robot_problem.heuristic_2)
            end = time.time()
            x.append(distancia_linha_reta)
            y.append(end - start)
import pandas as pd
df = pd.DataFrame({'x': x, 'y': y})
df
fig = plt.figure()
plt.scatter(x,y)
plt.ylim(-0.05, 0.45)
plt.title("Distancia em linha reta x Tempo A*-heuristica2")
plt.xlabel("Distancia em linha reta entre os pontos inicial e final")
plt.ylabel("Tempo da busca A* com a segunda heuristica")
plt.show()
```
| github_jupyter |
```
"""The file needed to run this notebook can be accessed from the following folder using a UTS email account:
https://drive.google.com/drive/folders/1y6e1Z2SbLDKkmvK3-tyQ6INO5rrzT3jp
"""
```
# Object Detection Using RFCN
## Tutorial:
1. Image annotation using LabelImg
2. Conversion of annotation & images into tfrecords
3. Configuration of SSD config model file
4. Training the model
5. Using trained model for inference
## Tasks for this week:
1. Installation of Google Object Detection API and required packages
2. Conversion of images and XML annotations into tfrecords (train and test)
3. Training: transfer learning from already trained models
4. Freezing a trained model and exporting it for inference
## Task-1: Installation of Google Object Detection API and required packages
### Step 1: Import packages
```
%tensorflow_version 1.x
!pip install numpy==1.17.4
import os
import re
import tensorflow as tf
print(tf.__version__)
!pip install --upgrade tf_slim
```
### Step 2: Initial configuration — select the model config file and other hyperparameters
```
# If you forked the repository, you can replace the link.
repo_url = 'https://github.com/Tony607/object_detection_demo'
# Number of training steps.
num_steps = 7000
# Number of evaluation steps.
num_eval_steps = 100
MODELS_CONFIG = {
'rfcn_resnet101': {
'model_name': 'rfcn_resnet101_coco_2018_01_28',
'pipeline_file': 'rfcn_resnet101_pets.config',
'batch_size': 8
}
}
# Pick the model you want to use
selected_model = 'rfcn_resnet101'
# Name of the object detection model to use.
MODEL = MODELS_CONFIG[selected_model]['model_name']
# Name of the pipeline file in the TensorFlow Object Detection API.
pipeline_file = MODELS_CONFIG[selected_model]['pipeline_file']
# Training batch size that fits in Colab's Tesla K80 GPU memory for the selected model.
batch_size = MODELS_CONFIG[selected_model]['batch_size']
%cd /content
repo_dir_path = os.path.abspath(os.path.join('.', os.path.basename(repo_url)))
!git clone {repo_url}
%cd {repo_dir_path}
!git pull
```
### Step 3: Download Google Object Detection API and other dependencies
```
%cd /content
#!git clone --quiet https://github.com/tensorflow/models.git
!git clone --branch r1.13.0 --depth 1 https://github.com/tensorflow/models.git
!apt-get install -qq protobuf-compiler python-pil python-lxml python-tk
!pip install -q Cython contextlib2 pillow lxml matplotlib
!pip install -q pycocotools
%cd /content/models/research
!protoc object_detection/protos/*.proto --python_out=.
import os
os.environ['PYTHONPATH'] += ':/content/models/research/:/content/models/research/slim/'
!python object_detection/builders/model_builder_test.py
```
```
from google.colab import drive
drive.mount('/content/gdrive')
```
## Task-2: Conversion of XML annotations and images into tfrecords for training and testing datasets
### Step 4: Prepare `tfrecord` files
Use the following scripts to generate the `tfrecord` files.
```bash
# Convert train folder annotation xml files to a single csv file,
# generate the `label_map.pbtxt` file to `data/` directory as well.
python xml_to_csv.py -i data/images/train -o data/annotations/train_labels.csv -l data/annotations
# Convert test folder annotation xml files to a single csv.
python xml_to_csv.py -i data/images/test -o data/annotations/test_labels.csv
# Generate `train.record`
python generate_tfrecord.py --csv_input=data/annotations/train_labels.csv --output_path=data/annotations/train.record --img_path=data/images/train --label_map data/annotations/label_map.pbtxt
# Generate `test.record`
python generate_tfrecord.py --csv_input=data/annotations/test_labels.csv --output_path=data/annotations/test.record --img_path=data/images/test --label_map data/annotations/label_map.pbtxt
```
```
#create the annotation directory
%cd /content/object_detection_demo/data
annotation_dir = 'annotations/'
os.makedirs(annotation_dir, exist_ok=True)
"""Need to manually upload the label_pbtxt file and the train_labels.csv and test_labels.csv
into the annotation folder using the link here
https://drive.google.com/drive/folders/1NqKz2tC8I5eL5Qo4YzZiEph8W-dtI44d
"""
%cd /content/gdrive/My Drive/A3 test
# Generate `train.record`
!python generate_tfrecord.py --csv_input=train_csv/train_labels.csv --output_path=annotations/train.record --img_path=train --label_map annotations/label_map.pbtxt
%cd /content/gdrive/My Drive/A3 test
# Generate `test.record`
!python generate_tfrecord.py --csv_input=test_csv/test_labels.csv --output_path=annotations/test.record --img_path=test --label_map annotations/label_map.pbtxt
test_record_fname = '/content/gdrive/My Drive/A3 test/annotations/test.record'
train_record_fname = '/content/gdrive/My Drive/A3 test/annotations/train.record'
label_map_pbtxt_fname = '/content/gdrive/My Drive/A3 test/annotations/label_map.pbtxt'
```
### Step 5. Download the base model for transfer learning
```
%cd /content/models/research
import os
import shutil
import glob
import urllib.request
import tarfile
MODEL_FILE = MODEL + '.tar.gz'
DOWNLOAD_BASE = 'http://download.tensorflow.org/models/object_detection/'
DEST_DIR = '/content/models/research/pretrained_model'
if not os.path.exists(MODEL_FILE):
    urllib.request.urlretrieve(DOWNLOAD_BASE + MODEL_FILE, MODEL_FILE)
tar = tarfile.open(MODEL_FILE)
tar.extractall()
tar.close()
os.remove(MODEL_FILE)
if os.path.exists(DEST_DIR):
    shutil.rmtree(DEST_DIR)
os.rename(MODEL, DEST_DIR)
!echo {DEST_DIR}
!ls -alh {DEST_DIR}
fine_tune_checkpoint = os.path.join(DEST_DIR, "model.ckpt")
fine_tune_checkpoint
```
## Task-3: Training: Transfer learning from already trained models
### Step 6: Configuring a training pipeline
```
import os
pipeline_fname = os.path.join('/content/models/research/object_detection/samples/configs/', pipeline_file)
assert os.path.isfile(pipeline_fname), '`{}` does not exist'.format(pipeline_fname)
def get_num_classes(pbtxt_fname):
    from object_detection.utils import label_map_util
    label_map = label_map_util.load_labelmap(pbtxt_fname)
    categories = label_map_util.convert_label_map_to_categories(
        label_map, max_num_classes=90, use_display_name=True)
    category_index = label_map_util.create_category_index(categories)
    return len(category_index.keys())
num_classes = get_num_classes(label_map_pbtxt_fname)
with open(pipeline_fname) as f:
    s = f.read()
with open(pipeline_fname, 'w') as f:
    # fine_tune_checkpoint
    s = re.sub('fine_tune_checkpoint: ".*?"',
               'fine_tune_checkpoint: "{}"'.format(fine_tune_checkpoint), s)
    # tfrecord files train and test.
    s = re.sub(
        '(input_path: ".*?)(train.record)(.*?")', 'input_path: "{}"'.format(train_record_fname), s)
    s = re.sub(
        '(input_path: ".*?)(val.record)(.*?")', 'input_path: "{}"'.format(test_record_fname), s)
    # label_map_path
    s = re.sub(
        'label_map_path: ".*?"', 'label_map_path: "{}"'.format(label_map_pbtxt_fname), s)
    # Set training batch_size.
    s = re.sub('batch_size: [0-9]+',
               'batch_size: {}'.format(batch_size), s)
    # Set training steps, num_steps
    s = re.sub('num_steps: [0-9]+',
               'num_steps: {}'.format(num_steps), s)
    # Set number of classes num_classes.
    s = re.sub('num_classes: [0-9]+',
               'num_classes: {}'.format(num_classes), s)
    f.write(s)
!cat {pipeline_fname}
model_dir = 'training/'
# Optionally remove content in output model directory to fresh start.
!rm -rf {model_dir}
os.makedirs(model_dir, exist_ok=True)
```
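The `re.sub` patching above can be sanity-checked on a toy config string (the string below is made up; the patterns and replacement values mirror the cell above):

```python
import re

# A made-up fragment in the same shape as a TF Object Detection pipeline config.
config = 'batch_size: 24\nnum_steps: 200000\nfine_tune_checkpoint: "old/path"'

# Apply the same substitutions the training cell performs.
config = re.sub(r'batch_size: [0-9]+', 'batch_size: 8', config)
config = re.sub(r'num_steps: [0-9]+', 'num_steps: 7000', config)
config = re.sub(r'fine_tune_checkpoint: ".*?"',
                'fine_tune_checkpoint: "pretrained_model/model.ckpt"', config)
```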
### Step 7. Install TensorBoard to visualize the progress of the training process
```
!wget https://bin.equinox.io/c/4VmDzA7iaHb/ngrok-stable-linux-amd64.zip
!unzip -o ngrok-stable-linux-amd64.zip
LOG_DIR = model_dir
get_ipython().system_raw(
'tensorboard --logdir {} --host 0.0.0.0 --port 6006 &'
.format(LOG_DIR)
)
get_ipython().system_raw('./ngrok http 6006 &')
```
### Step 8: Get TensorBoard link
```
! curl -s http://localhost:4040/api/tunnels | python3 -c \
"import sys, json; print(json.load(sys.stdin)['tunnels'][0]['public_url'])"
```
### Step 9. Training the model
```
!python /content/models/research/object_detection/model_main.py \
--pipeline_config_path={pipeline_fname} \
--model_dir={model_dir} \
--alsologtostderr \
--num_train_steps={num_steps} \
--num_eval_steps={num_eval_steps}
!ls {model_dir}
```
## Task-4: Freezing a trained model and exporting it for inference
### Step 10: Exporting a trained inference graph
```
import re
import numpy as np
output_directory = './fine_tuned_model'
lst = os.listdir(model_dir)
lst = [l for l in lst if 'model.ckpt-' in l and '.meta' in l]
steps = np.array([int(re.findall(r'\d+', l)[0]) for l in lst])
last_model = lst[steps.argmax()].replace('.meta', '')
last_model_path = os.path.join(model_dir, last_model)
print(last_model_path)
!python /content/models/research/object_detection/export_inference_graph.py \
--input_type=image_tensor \
--pipeline_config_path={pipeline_fname} \
--output_directory={output_directory} \
--trained_checkpoint_prefix={last_model_path}
!ls {output_directory}
```
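The checkpoint-selection logic above can be factored into a small function and exercised on made-up filenames (a sketch; `latest_checkpoint` is our own name):

```python
import re

def latest_checkpoint(filenames):
    # Keep only checkpoint .meta files, then pick the one with the highest step number.
    metas = [f for f in filenames if 'model.ckpt-' in f and '.meta' in f]
    best = max(metas, key=lambda f: int(re.findall(r'\d+', f)[0]))
    return best.replace('.meta', '')

# Hypothetical directory listing.
last = latest_checkpoint(['checkpoint', 'model.ckpt-1000.meta',
                          'model.ckpt-7000.meta', 'model.ckpt-7000.index'])
```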
### Step 11: Use frozen model for inference.
```
import os
pb_fname = os.path.join(os.path.abspath(output_directory), "frozen_inference_graph.pb")
assert os.path.isfile(pb_fname), '`{}` does not exist'.format(pb_fname)
!ls -alh {pb_fname}
import os
import glob
# Path to frozen detection graph. This is the actual model that is used for the object detection.
PATH_TO_CKPT = pb_fname
# List of the strings that is used to add correct label for each box.
PATH_TO_LABELS = label_map_pbtxt_fname
# If you want to test the code with your images, just add images files to the PATH_TO_TEST_IMAGES_DIR.
PATH_TO_TEST_IMAGES_DIR = '/content/gdrive/My Drive/A3 test/img'
assert os.path.isfile(pb_fname)
assert os.path.isfile(PATH_TO_LABELS)
TEST_IMAGE_PATHS = glob.glob(os.path.join(PATH_TO_TEST_IMAGES_DIR, "*.*"))
assert len(TEST_IMAGE_PATHS) > 0, 'No image found in `{}`.'.format(PATH_TO_TEST_IMAGES_DIR)
print(TEST_IMAGE_PATHS)
%cd /content/models/research/object_detection
import numpy as np
import os
import six.moves.urllib as urllib
import sys
import tarfile
import tensorflow as tf
import zipfile
from collections import defaultdict
from io import StringIO
from matplotlib import pyplot as plt
from PIL import Image
# This is needed since the notebook is stored in the object_detection folder.
sys.path.append("..")
from object_detection.utils import ops as utils_ops
# This is needed to display the images.
%matplotlib inline
from object_detection.utils import label_map_util
from object_detection.utils import visualization_utils as vis_util
num_classes = 1
detection_graph = tf.Graph()
with detection_graph.as_default():
    od_graph_def = tf.GraphDef()
    with tf.gfile.GFile(PATH_TO_CKPT, 'rb') as fid:
        serialized_graph = fid.read()
        od_graph_def.ParseFromString(serialized_graph)
        tf.import_graph_def(od_graph_def, name='')
label_map = label_map_util.load_labelmap(PATH_TO_LABELS)
categories = label_map_util.convert_label_map_to_categories(
label_map, max_num_classes=num_classes, use_display_name=True)
category_index = label_map_util.create_category_index(categories)
def load_image_into_numpy_array(image):
    (im_width, im_height) = image.size
    return np.array(image.getdata()).reshape(
        (im_height, im_width, 3)).astype(np.uint8)
# Size, in inches, of the output images.
IMAGE_SIZE = (12, 8)
def run_inference_for_single_image(image, graph):
    with graph.as_default():
        with tf.Session() as sess:
            # Get handles to input and output tensors
            ops = tf.get_default_graph().get_operations()
            all_tensor_names = {output.name for op in ops for output in op.outputs}
            tensor_dict = {}
            for key in ['num_detections', 'detection_boxes', 'detection_scores',
                        'detection_classes', 'detection_masks']:
                tensor_name = key + ':0'
                if tensor_name in all_tensor_names:
                    tensor_dict[key] = tf.get_default_graph().get_tensor_by_name(tensor_name)
            if 'detection_masks' in tensor_dict:
                # The following processing is only for a single image.
                detection_boxes = tf.squeeze(tensor_dict['detection_boxes'], [0])
                detection_masks = tf.squeeze(tensor_dict['detection_masks'], [0])
                # Reframing is required to translate the masks from box coordinates
                # to image coordinates and fit the image size.
                real_num_detection = tf.cast(tensor_dict['num_detections'][0], tf.int32)
                detection_boxes = tf.slice(detection_boxes, [0, 0],
                                           [real_num_detection, -1])
                detection_masks = tf.slice(detection_masks, [0, 0, 0],
                                           [real_num_detection, -1, -1])
                detection_masks_reframed = utils_ops.reframe_box_masks_to_image_masks(
                    detection_masks, detection_boxes, image.shape[0], image.shape[1])
                detection_masks_reframed = tf.cast(
                    tf.greater(detection_masks_reframed, 0.5), tf.uint8)
                # Follow the convention by adding back the batch dimension.
                tensor_dict['detection_masks'] = tf.expand_dims(detection_masks_reframed, 0)
            image_tensor = tf.get_default_graph().get_tensor_by_name('image_tensor:0')
            # Run inference
            output_dict = sess.run(tensor_dict,
                                   feed_dict={image_tensor: np.expand_dims(image, 0)})
            # All outputs are float32 numpy arrays, so convert types as appropriate.
            output_dict['num_detections'] = int(output_dict['num_detections'][0])
            output_dict['detection_classes'] = output_dict['detection_classes'][0].astype(np.uint8)
            output_dict['detection_boxes'] = output_dict['detection_boxes'][0]
            output_dict['detection_scores'] = output_dict['detection_scores'][0]
            if 'detection_masks' in output_dict:
                output_dict['detection_masks'] = output_dict['detection_masks'][0]
    return output_dict
for image_path in TEST_IMAGE_PATHS:
    # Load the image.
    image = Image.open(image_path)
    # The array-based representation of the image will be used later to prepare
    # the result image with boxes and labels on it.
    image_np = load_image_into_numpy_array(image)
    # Expand dimensions since the model expects images to have shape: [1, None, None, 3]
    image_np_expanded = np.expand_dims(image_np, axis=0)
    # Actual detection.
    output_dict = run_inference_for_single_image(image_np, detection_graph)
    # Visualization of the results of a detection.
    vis_util.visualize_boxes_and_labels_on_image_array(
        image_np,
        output_dict['detection_boxes'],
        output_dict['detection_classes'],
        output_dict['detection_scores'],
        category_index,
        instance_masks=output_dict.get('detection_masks'),
        use_normalized_coordinates=True,
        line_thickness=8)
    plt.figure(figsize=IMAGE_SIZE)
    plt.imshow(image_np)
```
| github_jupyter |
# 1. Enumerate sentence
Create a function that prints the words of a sentence, each preceded by its index.
For example, if we give the function the argument "This is a sentence", it should print
```
1 This
2 is
3 a
4 sentence
```
```
def enumWords(sentence):
    pass  # Complete this method.
```
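One possible solution (a sketch — any equivalent loop is fine; `enumerate` with `start=1` yields the 1-based indices the example output shows):

```python
def enumWords(sentence):
    # Print each word preceded by its 1-based position in the sentence.
    for i, word in enumerate(sentence.split(), start=1):
        print(i, word)

enumWords("This is a sentence")
```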
# 2. Fibonacci
Create a function `fibonacci()` which takes an integer `num` as an input and returns the first `num` fibonacci numbers.
Eg.
Input: `8`
Output: `[1, 1, 2, 3, 5, 8, 13, 21]`
*Hint: You might want to recall [Fibonacci numbers](https://en.wikipedia.org/wiki/Fibonacci_number)*
```
def fibonacci(num):
    pass  # Complete this method.
################ Checking code ########################
# Please don't edit this code
newList = fibonacci(10)
if newList == [1, 1, 2, 3, 5, 8, 13, 21, 34, 55]:
    print("Success!")
else:
    print("Error! Your function returned")
    print(newList)
```
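For reference, one iterative implementation (a sketch) that passes the check above:

```python
def fibonacci(num):
    # Return the first `num` Fibonacci numbers, starting 1, 1, 2, ...
    fibs = []
    a, b = 1, 1
    for _ in range(num):
        fibs.append(a)
        a, b = b, a + b
    return fibs

print(fibonacci(8))  # -> [1, 1, 2, 3, 5, 8, 13, 21]
```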
# 3. Guessing game 2
Ask the user to input a number and then have the program guess it. After each guess, the user must say whether the guess was too high, too low, or correct. The program must always arrive at the user's number, and it must print the number of guesses it needed.
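Because each answer shrinks the remaining range, binary search guarantees the program finds the number. A sketch with the user I/O replaced by a feedback function so it can run non-interactively (`guess_number` and the feedback protocol are our own illustration, not part of the exercise):

```python
def guess_number(feedback, low=1, high=100):
    # feedback(guess) returns 'higher', 'lower' or 'correct'.
    guesses = 0
    while True:
        guess = (low + high) // 2
        guesses += 1
        answer = feedback(guess)
        if answer == 'correct':
            return guess, guesses
        elif answer == 'higher':
            low = guess + 1   # the secret is above our guess
        else:
            high = guess - 1  # the secret is below our guess

secret = 37  # stands in for the user's number
found, n = guess_number(lambda g: 'correct' if g == secret
                        else ('higher' if g < secret else 'lower'))
```

In the interactive version, `feedback` would be replaced by `input()` prompts; the halving logic is identical.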
# 4. Find word
Create a function that searches for a word within a provided list of words. Inputs to the function should be a list of words and a word to search for.
The function should return `True` if the word is contained within the list and `False` otherwise.
```
fruits = ["banana", "orange", "grapefruit", "lime", "lemon"]
def findWord(wordList, word):
    pass  # Complete this method.
################ Checking code ########################
# Please don't edit this code
if findWord(fruits, "lime"):
    print("Success!")
else:
    print("Try again!")
```
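For reference, the search reduces to Python's `in` operator (a sketch):

```python
def findWord(wordList, word):
    # True if `word` appears in `wordList`, False otherwise.
    return word in wordList

fruits = ["banana", "orange", "grapefruit", "lime", "lemon"]
```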
# 5. Powers of 2
Use a while loop to find the largest power of 2 which is less than 30 million.
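One way to sketch the loop: keep doubling while the next power still fits under the limit.

```python
power = 1
while power * 2 < 30_000_000:
    power *= 2
print(power)  # -> 16777216 (2**24; the next power, 2**25 = 33554432, exceeds 30 million)
```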
# 6. Making a better school
This exercise is on defining classes. This topic is covered in the optional notebook python-intro-3-extra-classes.
Below is a copy of the `School`, `Student` and `Exam` classes, together with a copy of the code needed to populate an object of that class with students and exam results. Edit the `School` class to add in the following functions:
* `.resits()` : this should return the list of exams that each student should resit if they get a "F" or "U" grade.
* `.prizeStudent()` : this should return the name of the student who scored the highest average percent across all of the exams.
* `.reviseCourse(threshold)` : this should return the name of the exam that gets the lowest average score across all students, if the average score is below `threshold`.
Use these functions to find out which students need to resit which exams, which student should be awarded the annual school prize, and which courses should be revised as the average mark is less than 50%.
```
class School:
    def __init__(self):
        self._students = {}
        self._exams = []

    def addStudent(self, name):
        self._students[name] = Student(name)

    def addExam(self, exam, max_score):
        self._exams.append(exam)
        for key in self._students.keys():
            self._students[key].addExam(exam, Exam(max_score))

    def addResult(self, name, exam, score):
        self._students[name].addResult(exam, score)

    def grades(self):
        grades = {}
        for name in self._students.keys():
            grades[name] = self._students[name].grades()
        return grades

# NOTE: This is not a class method
def addResults(school, exam, results):
    for student in results.keys():
        school.addResult(student, exam, results[student])

class Student:
    def __init__(self, name):
        self._exams = {}
        self._name = name

    def addExam(self, name, exam):
        self._exams[name] = exam

    def addResult(self, name, score):
        self._exams[name].setResult(score)

    def result(self, exam):
        return self._exams[exam].percent()

    def grade(self, exam):
        return self._exams[exam].grade()

    def grades(self):
        g = {}
        for exam in self._exams.keys():
            g[exam] = self.grade(exam)
        return g

class Exam:
    def __init__(self, max_score=100):
        self._max_score = max_score
        self._actual_score = 0

    def percent(self):
        return 100.0 * self._actual_score / self._max_score

    def setResult(self, score):
        if score < 0:
            self._actual_score = 0
        elif score > self._max_score:
            self._actual_score = self._max_score
        else:
            self._actual_score = score

    def grade(self):
        if self._actual_score == 0:
            return "U"
        elif self.percent() > 70.0:
            return "A"
        elif self.percent() > 60.0:
            return "B"
        elif self.percent() > 50.0:
            return "C"
        else:
            return "F"
school = School()
school.grades()
students = ["Andrew", "James", "Laura"]
exams = { "Maths" : 20, "Physics" : 50, "English": 30}
results = {"Maths" : {"Andrew" : 13, "James" : 17, "Laura" : 14},
"Physics" : {"Andrew" : 34, "James" : 44, "Laura" : 27},
"English" : {"Andrew" : 26, "James" : 14, "Laura" : 29}}
for student in students:
    school.addStudent(student)
for exam in exams.keys():
    school.addExam(exam, exams[exam])
for result in results.keys():
    addResults(school, result, results[result])
school.grades()
```
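As a starting point, `resits()` can be derived from the dictionary that `grades()` already returns. Here is a standalone sketch over a made-up grades dictionary of that shape (the function and sample data are ours, not part of the exercise's class):

```python
def resits(grades):
    # Map each student to the exams they must retake (grade 'F' or 'U').
    return {name: [exam for exam, g in exams.items() if g in ('F', 'U')]
            for name, exams in grades.items()}

# Hypothetical output of School.grades().
sample = {"Andrew": {"Maths": "B", "Physics": "F", "English": "A"},
          "James": {"Maths": "A", "Physics": "A", "English": "U"}}
```

`prizeStudent()` and `reviseCourse(threshold)` follow the same pattern, averaging `Student.result(exam)` percentages per student or per exam instead.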
| github_jupyter |
```
import pandas as pd
import numpy as np
import matplotlib as mpl
import matplotlib.pyplot as plt
import scipy.stats as sts
import seaborn as sns
sns.set()
%matplotlib inline
```
# 01. Smooth function optimization
Consider the same function from the linear algebra assignment:
$ f(x) = \sin{\frac{x}{5}} \cdot e^{\frac{x}{10}} + 5 \cdot e^{-\frac{x}{2}} $,
but now on the interval `[1, 30]`.
In the first task we will search for the minimum of this function on the given interval using `scipy.optimize`. Later on you will, of course, apply optimization methods to far more complex functions; `f(x)` is simply a convenient teaching example.
Write a Python function that computes `f(x)` for a given `x`. Be careful: remember that in Python 2 dividing two integers performs integer division by default, and that `sin` and `exp` must be imported from the `math` module.
```
from math import sin, exp, sqrt
def f(x):
    return sin(x / 5) * exp(x / 10) + 5 * exp(-x / 2)
f(10)
xs = np.arange(41, 60, 0.1)
ys = np.array([f(x) for x in xs])
plt.plot(xs, ys)
```
Study the usage examples for `scipy.optimize.minimize` in the `Scipy` documentation (see "Materials").
Try to find the minimum using the default parameters of `scipy.optimize.minimize` (i.e. specifying only the function and the initial guess). Try changing the initial guess and see whether the result changes.
```
from scipy.optimize import minimize, rosen, rosen_der, differential_evolution
x0 = 60
minimize(f, x0)
# experiment with the Rosenbrock function
x0 = [1., 10.]
minimize(rosen, x0, method='BFGS')
```
___
## Submission #1
In `scipy.optimize.minimize`, specify `BFGS` as the method (in most cases one of the most accurate gradient-based optimization methods) and run it from the initial guess $ x = 2 $. The gradient does not need to be supplied — it will be estimated numerically. The resulting function value at the minimum is your first answer for task 1; record it to 2 decimal places.
Now change the initial guess to x=30. The function value at the minimum is your second answer for task 1; record it after the first one, separated by a space, to 2 decimal places.
It is worth thinking about this result. Why does the answer depend on the initial guess? If you plot the function (for example, as was done in the video introducing Numpy, Scipy and Matplotlib), you can see exactly which minima we landed in. Indeed, gradient methods generally do not solve the global optimization problem, so these results are expected and perfectly valid.
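The dependence on the starting point is easy to confirm without BFGS at all: a fine grid scan over [1, 30] reveals exactly two local minima (a pure-Python sketch; the grid step is arbitrary). The shallow one near x ≈ 4 and the global one near x ≈ 26 are the two basins the two starting points fall into.

```python
from math import sin, exp

def f(x):
    return sin(x / 5) * exp(x / 10) + 5 * exp(-x / 2)

# Sample f on a fine grid and collect interior points that beat both neighbours.
xs = [1 + i * 0.01 for i in range(2901)]  # grid over [1, 30]
ys = [f(x) for x in xs]
local_minima = [(xs[i], ys[i]) for i in range(1, len(ys) - 1)
                if ys[i] < ys[i - 1] and ys[i] < ys[i + 1]]
```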
```
# 1. x0 = 2
x0 = 2
res1 = minimize(f, x0, method='BFGS')
# 2. x0 = 30
x0 = 30
res2 = minimize(f, x0, method='BFGS')
with open('out/06. submission1.txt', 'w') as f_out:
    output = '{0:.2f} {1:.2f}'.format(res1.fun, res2.fun)
    print(output)
    f_out.write(output)
```
# 02. Global optimization
Now let us apply a global optimization method — differential evolution — to the same function $ f(x) $.
Study the documentation and usage examples for `scipy.optimize.differential_evolution`.
Note that the bounds on the function's arguments are given as a list of tuples. Even if your function takes a single argument, wrap its bounds in square brackets so that this parameter is a list containing one tuple: in the `scipy.optimize.differential_evolution` implementation, the length of this list determines the number of function arguments.
Run a search for the minimum of f(x) using differential evolution on the interval [1, 30]. The function value at the minimum is the answer to task 2. Record it to two decimal places. In this task the answer is a single number.
Note that differential evolution handled finding the global minimum on the interval: by design it guards against getting stuck in local minima.
Compare the number of iterations BFGS needed to find the minimum from a good initial guess with the number needed by differential evolution. On repeated runs of differential evolution the iteration count will vary, but in this example it will most likely remain comparable to that of BFGS. However, each iteration of differential evolution does far more work than one of BFGS. For instance, compare the number of function evaluations (nfev): for BFGS it is significantly smaller. Moreover, the running time of differential evolution grows very quickly with the number of function arguments.
```
res = differential_evolution(f, [(1, 30)])
res
```
___
## Submission #2
```
res = differential_evolution(f, [(1, 30)])
with open('out/06. submission2.txt', 'w') as f_out:
    output = '{0:.2f}'.format(res.fun)
    print(output)
    f_out.write(output)
```
# 03. Minimizing a non-smooth function
Now consider the function $ h(x) = int(f(x)) $ on the same interval `[1, 30]`, i.e. each value of $ f(x) $ is cast to int, so the function takes only integer values.
Such a function is non-smooth and even discontinuous, and its graph has a step-like shape. Verify this by plotting $ h(x) $ with `matplotlib`.
```
def h(x):
    return int(f(x))
xs = np.arange(0, 70, 1)
ys = [h(x) for x in xs]
plt.plot(xs, ys)
minimize(h, 40.3)
```
Try to find the minimum of $ h(x) $ with BFGS, taking $ x = 30 $ as the initial guess. The resulting function value is your first answer in this task.
```
res_bfgs = minimize(h, 30)
res_bfgs
```
Now try to find the minimum of $ h(x) $ on the interval `[1, 30]` using differential evolution. The value of $ h(x) $ at the minimum is your second answer in this task. Record it after the previous one, separated by a space.
```
res_diff_evol = differential_evolution(h, [(1, 30)])
res_diff_evol
```
Note that the two answers differ. This is expected: BFGS uses the gradient (in the one-dimensional case, the derivative) and is clearly unsuitable for minimizing the discontinuous function considered here. Try to understand why the minimum found by BFGS is what it is (experimenting with different initial guesses may help).
Having completed this task, you have seen in practice how finding a local minimum differs from global optimization, and when a gradient-free method can be preferable to a gradient-based one. You have also practiced using the SciPy library for optimization problems, and now know how simple and convenient it is.
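The failure mode is visible in a finite-difference estimate of the derivative, which is what BFGS falls back on when no gradient is supplied: for the step function the estimate is zero almost everywhere, so the method has no direction to move in and stops at the starting point (a sketch):

```python
from math import sin, exp

def f(x):
    return sin(x / 5) * exp(x / 10) + 5 * exp(-x / 2)

def h(x):
    return int(f(x))

# Finite-difference derivative estimate at x = 30 with a small step,
# similar to what BFGS computes internally.
eps = 1e-8
grad_estimate = (h(30 + eps) - h(30)) / eps
```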
___
## Submission #3
```
with open('out/06. submission3.txt', 'w') as f_out:
    output = '{0:.2f} {1:.2f}'.format(res_bfgs.fun, res_diff_evol.fun)
    print(output)
    f_out.write(output)
```
___
Below I experiment with visualizing the Rosenbrock function
```
# Contour plot of the Rosenbrock function over a grid
xs = np.meshgrid(np.arange(-2, 2, 0.1), np.arange(-10, 10, 0.1))
ys = rosen(xs)
print(xs[0].shape, xs[1].shape, ys.shape)

cmap = sns.cubehelix_palette(light=1, as_cmap=True)
plt.contour(xs[0], xs[1], ys, 30, cmap=cmap)
plt.show()

# Surface plot of the same grid
from mpl_toolkits.mplot3d import Axes3D
fig = plt.figure()
ax = fig.add_subplot(projection='3d')
surf = ax.plot_surface(xs[0], xs[1], ys, cmap=cmap, linewidth=0, antialiased=False)
plt.show()

# Minimizing the 5-dimensional Rosenbrock function with Nelder-Mead
x0 = [1.3, 0.7, 0.8, 1.9, 1.2]
res = minimize(rosen, x0, method='Nelder-Mead', tol=1e-6)
res.x
```
# Mount Drive
```
from google.colab import drive
drive.mount('/content/drive')
!pip install -U -q PyDrive
!pip install httplib2==0.15.0
import os
from pydrive.auth import GoogleAuth
from pydrive.drive import GoogleDrive
from pydrive.files import GoogleDriveFileList
from google.colab import auth
from oauth2client.client import GoogleCredentials
from getpass import getpass
import urllib
# 1. Authenticate and create the PyDrive client.
auth.authenticate_user()
gauth = GoogleAuth()
gauth.credentials = GoogleCredentials.get_application_default()
drive = GoogleDrive(gauth)
# Cloning PAL_2021 to access modules.
# Need password to access private repo.
if 'CLIPPER' not in os.listdir():
cmd_string = 'git clone https://github.com/PAL-ML/CLIPPER.git'
os.system(cmd_string)
```
# Installation
## Install multi label metrics dependencies
```
! pip install scikit-learn==0.24
```
## Install CLIP dependencies
```
import subprocess
CUDA_version = [s for s in subprocess.check_output(["nvcc", "--version"]).decode("UTF-8").split(", ") if s.startswith("release")][0].split(" ")[-1]
print("CUDA version:", CUDA_version)
if CUDA_version == "10.0":
torch_version_suffix = "+cu100"
elif CUDA_version == "10.1":
torch_version_suffix = "+cu101"
elif CUDA_version == "10.2":
torch_version_suffix = ""
else:
torch_version_suffix = "+cu110"
! pip install torch==1.7.1{torch_version_suffix} torchvision==0.8.2{torch_version_suffix} -f https://download.pytorch.org/whl/torch_stable.html ftfy regex
! pip install ftfy regex
! wget https://openaipublic.azureedge.net/clip/bpe_simple_vocab_16e6.txt.gz -O bpe_simple_vocab_16e6.txt.gz
!pip install git+https://github.com/Sri-vatsa/CLIP # using this fork because of visualization capabilities
```
## Install clustering dependencies
```
!pip -q install "umap-learn>=0.3.7"
```
## Install dataset manager dependencies
```
!pip install wget
```
# Imports
```
# ML Libraries
import tensorflow as tf
import tensorflow_hub as hub
import torch
import torch.nn as nn
import torchvision.models as models
import torchvision.transforms as transforms
import keras
# Data processing
import PIL
import base64
import imageio
import pandas as pd
import numpy as np
import json
from PIL import Image
import cv2
from sklearn.feature_extraction.image import extract_patches_2d
# Plotting
import seaborn as sns
import matplotlib.pyplot as plt
import matplotlib.patches as patches
from IPython.core.display import display, HTML
from matplotlib import cm
import matplotlib.image as mpimg
# Models
import clip
# Datasets
import tensorflow_datasets as tfds
# Clustering
# import umap
from sklearn import metrics
from sklearn.cluster import KMeans
#from yellowbrick.cluster import KElbowVisualizer
# Misc
import progressbar
import logging
from abc import ABC, abstractmethod
import time
import urllib.request
import os
from sklearn.metrics import jaccard_score, hamming_loss, accuracy_score, f1_score
from sklearn.preprocessing import MultiLabelBinarizer
# Modules
from CLIPPER.code.ExperimentModules import embedding_models
from CLIPPER.code.ExperimentModules.dataset_manager import DatasetManager
from CLIPPER.code.ExperimentModules.weight_imprinting_classifier import WeightImprintingClassifier
from CLIPPER.code.ExperimentModules import simclr_data_augmentations
from CLIPPER.code.ExperimentModules.utils import (save_npy, load_npy,
get_folder_id,
create_expt_dir,
save_to_drive,
load_all_from_drive_folder,
download_file_by_name,
delete_file_by_name)
logging.getLogger('googleapiclient.discovery_cache').setLevel(logging.ERROR)
```
# Initialization & Constants
## Dataset details
```
dataset_name = "LFW"
folder_name = "LFW-Embeddings-28-02-21"
# Change parentid to match that of experiments root folder in gdrive
parentid = '1bK72W-Um20EQDEyChNhNJthUNbmoSEjD'
# Filepaths
train_labels_filename = "train_labels.npz"
train_embeddings_filename_suffix = "_embeddings_train.npz"
# Initialize specific experiment folder in drive
folderid = create_expt_dir(drive, parentid, folder_name)
```
## Few shot learning parameters
```
num_ways = 5 # [5, 20]
num_shot = 5 # [5, 1]
num_eval = 15 # [5, 10, 15, 19]
num_episodes = 100
shuffle = False
```
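As a quick sanity check on these parameters (simple arithmetic, using local names so nothing above is shadowed): each episode draws `num_ways` classes, with `num_shot` support images for imprinting and `num_eval` query images for evaluation per class.

```python
n_ways, n_shot, n_eval = 5, 5, 15

support = n_ways * n_shot  # images used to imprint the classifier weights
query = n_ways * n_eval    # images held out for evaluating the episode
print(support, query)      # 25 75
```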
## Image embedding and augmentations
```
embedding_model = embedding_models.CLIPEmbeddingWrapper()
num_augmentations = 0 # [0, 5, 10]
trivial=False # [True, False]
```
## Training parameters
```
# List of number of epochs to train over, e.g. [5, 10, 15, 20]. [0] indicates no training.
train_epochs_arr = [0]
# Single label (softmax) parameters
multi_label= False # [True, False] i.e. sigmoid or softmax
metrics_val = ['accuracy', 'ap', 'map', 'c_f1', 'o_f1', 'c_precision', 'o_precision', 'c_recall', 'o_recall', 'top1_accuracy', 'top5_accuracy', 'classwise_accuracy', 'c_accuracy']
```
# Load data
```
def get_ndarray_from_drive(drive, folderid, filename):
download_file_by_name(drive, folderid, filename)
return np.load(filename)['data']
train_labels = get_ndarray_from_drive(drive, folderid, train_labels_filename)
train_labels = train_labels.astype(str)
dm = DatasetManager()
test_data_generator = dm.load_dataset('lfw', split='train')
class_names = dm.get_class_names()
print(class_names)
```
# Create label dictionary
```
unique_labels = np.unique(train_labels)
print(len(unique_labels))
label_dictionary = {la:[] for la in unique_labels}
for i in range(len(train_labels)):
la = train_labels[i]
label_dictionary[la].append(i)
```
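As a toy illustration of the mapping built above (made-up labels, kept under separate names so the real `label_dictionary` is not clobbered): every position in the label array is grouped under its label.

```python
import numpy as np

toy_labels = np.array(['cat', 'dog', 'cat', 'bird'])
# np.unique returns the sorted distinct labels; str() keeps keys plain strings
toy_index = {str(la): [] for la in np.unique(toy_labels)}
for i, la in enumerate(toy_labels):
    toy_index[la].append(i)

print(toy_index)  # {'bird': [3], 'cat': [0, 2], 'dog': [1]}
```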
# Weight Imprinting models on train data embeddings
## Function definitions
```
def start_progress_bar(bar_len):
widgets = [
' [',
progressbar.Timer(format= 'elapsed time: %(elapsed)s'),
'] ',
progressbar.Bar('*'),' (',
progressbar.ETA(), ') ',
]
pbar = progressbar.ProgressBar(
max_value=bar_len, widgets=widgets
).start()
return pbar
def prepare_indices(
num_ways,
num_shot,
num_eval,
num_episodes,
label_dictionary,
labels,
shuffle=False
):
eval_indices = []
train_indices = []
wi_y = []
eval_y = []
label_dictionary = {la:label_dictionary[la] for la in label_dictionary if len(label_dictionary[la]) >= (num_shot+num_eval)}
unique_labels = list(label_dictionary.keys())
pbar = start_progress_bar(num_episodes)
for s in range(num_episodes):
# Setting random seed for replicability
np.random.seed(s)
_train_indices = []
_eval_indices = []
selected_labels = np.random.choice(unique_labels, size=num_ways, replace=False)
for la in selected_labels:
la_indices = label_dictionary[la]
select = np.random.choice(la_indices, size = num_shot+num_eval, replace=False)
tr_idx = list(select[:num_shot])
ev_idx = list(select[num_shot:])
_train_indices = _train_indices + tr_idx
_eval_indices = _eval_indices + ev_idx
if shuffle:
np.random.shuffle(_train_indices)
np.random.shuffle(_eval_indices)
train_indices.append(_train_indices)
eval_indices.append(_eval_indices)
_wi_y = labels[_train_indices]
_eval_y = labels[_eval_indices]
wi_y.append(_wi_y)
eval_y.append(_eval_y)
pbar.update(s+1)
return train_indices, eval_indices, wi_y, eval_y
def embed_images(
embedding_model,
train_indices,
num_augmentations,
trivial=False
):
def augment_image(image, num_augmentations, trivial):
""" Perform SimCLR augmentations on the image
"""
if np.max(image) > 1:
image = image/255
augmented_images = [image]
def _run_filters(image):
width = image.shape[1]
height = image.shape[0]
image_aug = simclr_data_augmentations.random_crop_with_resize(
image,
height,
width
)
image_aug = tf.image.random_flip_left_right(image_aug)
image_aug = simclr_data_augmentations.random_color_jitter(image_aug)
image_aug = simclr_data_augmentations.random_blur(
image_aug,
height,
width
)
image_aug = tf.reshape(image_aug, [image.shape[0], image.shape[1], 3])
image_aug = tf.clip_by_value(image_aug, 0., 1.)
return image_aug.numpy()
for _ in range(num_augmentations):
if trivial:
aug_image = image
else:
aug_image = _run_filters(image)
augmented_images.append(aug_image)
augmented_images = np.stack(augmented_images)
return augmented_images
embedding_model.load_model()
unique_indices = np.unique(np.array(train_indices))
ds = dm.load_dataset('lfw', split='train')
embeddings = []
IMAGE_IDX = 'image'
pbar = start_progress_bar(unique_indices.size+1)
num_done=0
for idx, item in enumerate(ds):
if idx in unique_indices:
image = item[IMAGE_IDX]
if num_augmentations > 0:
aug_images = augment_image(image, num_augmentations, trivial)
else:
aug_images = image
processed_images = embedding_model.preprocess_data(aug_images)
embedding = embedding_model.embed_images(processed_images)
embeddings.append(embedding)
num_done += 1
pbar.update(num_done+1)
if idx == unique_indices[-1]:
break
embeddings = np.stack(embeddings)
return unique_indices, embeddings
def train_model_for_episode(
indices_and_embeddings,
train_indices,
wi_y,
num_augmentations,
train_epochs=None,
train_batch_size=5,
multi_label=True
):
train_embeddings = []
train_labels = []
ind = indices_and_embeddings[0]
emb = indices_and_embeddings[1]
for idx, tr_idx in enumerate(train_indices):
train_embeddings.append(emb[np.argwhere(ind==tr_idx)[0][0]])
train_labels += [wi_y[idx] for _ in range(num_augmentations+1)]
train_embeddings = np.concatenate(train_embeddings)
train_labels = np.array(train_labels)
train_embeddings = WeightImprintingClassifier.preprocess_input(train_embeddings)
wi_weights, label_mapping = WeightImprintingClassifier.get_imprinting_weights(
train_embeddings, train_labels, False, multi_label
)
wi_parameters = {
"num_classes": num_ways,
"input_dims": train_embeddings.shape[-1],
"scale": False,
"dense_layer_weights": wi_weights,
"multi_label": multi_label
}
wi_cls = WeightImprintingClassifier(wi_parameters)
if train_epochs:
# ep_y = train_labels
rev_label_mapping = {label_mapping[val]:val for val in label_mapping}
train_y = np.zeros((len(train_labels), num_ways))
for idx_y, l in enumerate(train_labels):
if multi_label:
for _l in l:
train_y[idx_y, rev_label_mapping[_l]] = 1
else:
train_y[idx_y, rev_label_mapping[l]] = 1
wi_cls.train(train_embeddings, train_y, train_epochs, train_batch_size)
return wi_cls, label_mapping
def evaluate_model_for_episode(
model,
eval_x,
eval_y,
label_mapping,
metrics=['hamming', 'jaccard', 'subset_accuracy', 'ap', 'map', 'c_f1', 'o_f1', 'c_precision', 'o_precision', 'c_recall', 'o_recall', 'top1_accuracy', 'top5_accuracy', 'classwise_accuracy', 'c_accuracy'],
threshold=0.7,
multi_label=True
):
eval_x = WeightImprintingClassifier.preprocess_input(eval_x)
logits = model.predict_scores(eval_x).tolist()
if multi_label:
pred_y = model.predict_multi_label(eval_x, threshold)
pred_y = [[label_mapping[v] for v in l] for l in pred_y]
met = model.evaluate_multi_label_metrics(
eval_x, eval_y, label_mapping, threshold, metrics
)
else:
pred_y = model.predict_single_label(eval_x)
pred_y = [label_mapping[l] for l in pred_y]
met = model.evaluate_single_label_metrics(
eval_x, eval_y, label_mapping, metrics
)
return pred_y, met, logits
def run_episode_through_model(
indices_and_embeddings,
train_indices,
eval_indices,
wi_y,
eval_y,
thresholds=None,
num_augmentations=0,
train_epochs=None,
train_batch_size=5,
metrics=['hamming', 'jaccard', 'subset_accuracy', 'ap', 'map', 'c_f1', 'o_f1', 'c_precision', 'o_precision', 'c_recall', 'o_recall', 'top1_accuracy', 'top5_accuracy', 'classwise_accuracy', 'c_accuracy'],
embeddings=None,
multi_label=True
):
metrics_values = {m:[] for m in metrics}
wi_cls, label_mapping = train_model_for_episode(
indices_and_embeddings,
train_indices,
wi_y,
num_augmentations,
train_epochs,
train_batch_size,
multi_label=multi_label
)
eval_x = embeddings[eval_indices]
ep_logits = []
if multi_label:
for t in thresholds:
pred_labels, met, logits = evaluate_model_for_episode(
wi_cls,
eval_x,
eval_y,
label_mapping,
threshold=t,
metrics=metrics,
multi_label=True
)
ep_logits.append(logits)
for m in metrics:
metrics_values[m].append(met[m])
else:
pred_labels, metrics_values, logits = evaluate_model_for_episode(
wi_cls,
eval_x,
eval_y,
label_mapping,
metrics=metrics,
multi_label=False
)
ep_logits = logits
return metrics_values, ep_logits
def run_evaluations(
indices_and_embeddings,
train_indices,
eval_indices,
wi_y,
eval_y,
num_episodes,
num_ways,
thresholds,
verbose=True,
normalize=True,
train_epochs=None,
train_batch_size=5,
metrics=['hamming', 'jaccard', 'subset_accuracy', 'ap', 'map', 'c_f1', 'o_f1', 'c_precision', 'o_precision', 'c_recall', 'o_recall', 'top1_accuracy', 'top5_accuracy', 'classwise_accuracy', 'c_accuracy'],
embeddings=None,
num_augmentations=0,
multi_label=True
):
metrics_values = {m:[] for m in metrics}
all_logits = []
if verbose:
pbar = start_progress_bar(num_episodes)
for idx_ep in range(num_episodes):
_train_indices = train_indices[idx_ep]
_eval_indices = eval_indices[idx_ep]
if multi_label:
_wi_y = [[label] for label in wi_y[idx_ep]]
_eval_y = [[label] for label in eval_y[idx_ep]]
else:
_wi_y = wi_y[idx_ep]
_eval_y = eval_y[idx_ep]
met, ep_logits = run_episode_through_model(
indices_and_embeddings,
_train_indices,
_eval_indices,
_wi_y,
_eval_y,
num_augmentations=num_augmentations,
train_epochs=train_epochs,
train_batch_size=train_batch_size,
embeddings=embeddings,
thresholds=thresholds,
metrics=metrics,
multi_label=multi_label
)
all_logits.append(ep_logits)
for m in metrics:
metrics_values[m].append(met[m])
if verbose:
pbar.update(idx_ep+1)
return metrics_values, all_logits
def get_max_mean_jaccard_index_by_threshold(metrics_thresholds):
max_mean_jaccard = np.max([np.mean(mt['jaccard']) for mt in metrics_thresholds])
return max_mean_jaccard
def get_max_mean_jaccard_index_with_threshold(metrics_thresholds):
arr = np.array(metrics_thresholds['jaccard'])
max_mean_jaccard = np.max(np.mean(arr, 0))
threshold = np.argmax(np.mean(arr, 0))
return max_mean_jaccard, threshold
def get_max_mean_f1_score_with_threshold(metrics_thresholds):
arr = np.array(metrics_thresholds['f1_score'])
max_mean_jaccard = np.max(np.mean(arr, 0))
threshold = np.argmax(np.mean(arr, 0))
return max_mean_jaccard, threshold
def get_mean_max_jaccard_index_by_episode(metrics_thresholds):
mean_max_jaccard = np.mean(np.max(np.array([mt['jaccard'] for mt in metrics_thresholds]), axis=0))
return mean_max_jaccard
def plot_metrics_by_threshold(
metrics_thresholds,
thresholds,
metrics=['hamming', 'jaccard', 'subset_accuracy', 'ap', 'map', 'c_f1', 'o_f1', 'c_precision', 'o_precision', 'c_recall', 'o_recall', 'top1_accuracy', 'top5_accuracy', 'classwise_accuracy', 'c_accuracy'],
title_suffix=""
):
legend = []
fig = plt.figure(figsize=(10,10))
if 'jaccard' in metrics:
mean_jaccard_threshold = np.mean(np.array(metrics_thresholds['jaccard']), axis=0)
opt_threshold_jaccard = thresholds[np.argmax(mean_jaccard_threshold)]
plt.plot(thresholds, mean_jaccard_threshold, c='blue')
plt.axvline(opt_threshold_jaccard, ls="--", c='blue')
legend.append("Jaccard Index")
legend.append(opt_threshold_jaccard)
if 'hamming' in metrics:
mean_hamming_threshold = np.mean(np.array(metrics_thresholds['hamming']), axis=0)
opt_threshold_hamming = thresholds[np.argmin(mean_hamming_threshold)]
plt.plot(thresholds, mean_hamming_threshold, c='green')
plt.axvline(opt_threshold_hamming, ls="--", c='green')
legend.append("Hamming Score")
legend.append(opt_threshold_hamming)
if 'map' in metrics:
mean_f1_score_threshold = np.mean(np.array(metrics_thresholds['map']), axis=0)
opt_threshold_f1_score = thresholds[np.argmax(mean_f1_score_threshold)]
plt.plot(thresholds, mean_f1_score_threshold, c='red')
plt.axvline(opt_threshold_f1_score, ls="--", c='red')
legend.append("mAP")
legend.append(opt_threshold_f1_score)
if 'o_f1' in metrics:
mean_f1_score_threshold = np.mean(np.array(metrics_thresholds['o_f1']), axis=0)
opt_threshold_f1_score = thresholds[np.argmax(mean_f1_score_threshold)]
plt.plot(thresholds, mean_f1_score_threshold, c='yellow')
plt.axvline(opt_threshold_f1_score, ls="--", c='yellow')
legend.append("OF1")
legend.append(opt_threshold_f1_score)
if 'c_f1' in metrics:
mean_f1_score_threshold = np.mean(np.array(metrics_thresholds['c_f1']), axis=0)
opt_threshold_f1_score = thresholds[np.argmax(mean_f1_score_threshold)]
plt.plot(thresholds, mean_f1_score_threshold, c='orange')
plt.axvline(opt_threshold_f1_score, ls="--", c='orange')
legend.append("CF1")
legend.append(opt_threshold_f1_score)
if 'o_precision' in metrics:
mean_f1_score_threshold = np.mean(np.array(metrics_thresholds['o_precision']), axis=0)
opt_threshold_f1_score = thresholds[np.argmax(mean_f1_score_threshold)]
plt.plot(thresholds, mean_f1_score_threshold, c='purple')
plt.axvline(opt_threshold_f1_score, ls="--", c='purple')
legend.append("OP")
legend.append(opt_threshold_f1_score)
if 'c_precision' in metrics:
mean_f1_score_threshold = np.mean(np.array(metrics_thresholds['c_precision']), axis=0)
opt_threshold_f1_score = thresholds[np.argmax(mean_f1_score_threshold)]
plt.plot(thresholds, mean_f1_score_threshold, c='cyan')
plt.axvline(opt_threshold_f1_score, ls="--", c='cyan')
legend.append("CP")
legend.append(opt_threshold_f1_score)
if 'o_recall' in metrics:
mean_f1_score_threshold = np.mean(np.array(metrics_thresholds['o_recall']), axis=0)
opt_threshold_f1_score = thresholds[np.argmax(mean_f1_score_threshold)]
plt.plot(thresholds, mean_f1_score_threshold, c='brown')
plt.axvline(opt_threshold_f1_score, ls="--", c='brown')
legend.append("OR")
legend.append(opt_threshold_f1_score)
if 'c_recall' in metrics:
mean_f1_score_threshold = np.mean(np.array(metrics_thresholds['c_recall']), axis=0)
opt_threshold_f1_score = thresholds[np.argmax(mean_f1_score_threshold)]
plt.plot(thresholds, mean_f1_score_threshold, c='pink')
plt.axvline(opt_threshold_f1_score, ls="--", c='pink')
legend.append("CR")
legend.append(opt_threshold_f1_score)
if 'c_accuracy' in metrics:
mean_f1_score_threshold = np.mean(np.array(metrics_thresholds['c_accuracy']), axis=0)
opt_threshold_f1_score = thresholds[np.argmax(mean_f1_score_threshold)]
plt.plot(thresholds, mean_f1_score_threshold, c='maroon')
plt.axvline(opt_threshold_f1_score, ls="--", c='maroon')
legend.append("CACC")
legend.append(opt_threshold_f1_score)
if 'top1_accuracy' in metrics:
mean_f1_score_threshold = np.mean(np.array(metrics_thresholds['top1_accuracy']), axis=0)
opt_threshold_f1_score = thresholds[np.argmax(mean_f1_score_threshold)]
plt.plot(thresholds, mean_f1_score_threshold, c='magenta')
plt.axvline(opt_threshold_f1_score, ls="--", c='magenta')
legend.append("TOP1")
legend.append(opt_threshold_f1_score)
if 'top5_accuracy' in metrics:
mean_f1_score_threshold = np.mean(np.array(metrics_thresholds['top5_accuracy']), axis=0)
opt_threshold_f1_score = thresholds[np.argmax(mean_f1_score_threshold)]
plt.plot(thresholds, mean_f1_score_threshold, c='slategray')
plt.axvline(opt_threshold_f1_score, ls="--", c='slategray')
legend.append("TOP5")
legend.append(opt_threshold_f1_score)
plt.xlabel('Threshold')
plt.ylabel('Value')
plt.legend(legend)
title = title_suffix+"\nMulti label metrics by threshold"
plt.title(title)
plt.grid()
fname = os.path.join(PLOT_DIR, title_suffix)
plt.savefig(fname)
plt.show()
```
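For reference, the core idea behind `get_imprinting_weights` can be sketched in plain NumPy (a hedged illustration of weight imprinting in general, not the actual `WeightImprintingClassifier` implementation): each class's dense-layer weight vector is the L2-normalized mean of that class's normalized embeddings, and class scores are cosine similarities.

```python
import numpy as np

def imprint_weights(embeddings, labels):
    """One weight vector per class: normalized mean of normalized embeddings."""
    classes = sorted(set(labels))
    normed = embeddings / np.linalg.norm(embeddings, axis=1, keepdims=True)
    weights = []
    for c in classes:
        mean = normed[np.array(labels) == c].mean(axis=0)
        weights.append(mean / np.linalg.norm(mean))
    return np.stack(weights), classes

rng = np.random.default_rng(0)
emb = rng.normal(size=(6, 4))          # 3 toy classes x 2 shots of 4-d embeddings
labs = ['a', 'a', 'b', 'b', 'c', 'c']
W, classes = imprint_weights(emb, labs)

# Scoring a query is a dot product (cosine similarity) with every imprinted weight
query = emb[0] / np.linalg.norm(emb[0])
scores = W @ query
print(classes, scores.shape)
```

This is why no gradient training is needed for the 0-epoch runs: the weights come directly from the support-set embeddings.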
## Setting multiple thresholds
```
# No threshold for softmax
thresholds = None
```
# Main
## Picking indices
```
train_indices, eval_indices, wi_y, eval_y = prepare_indices(
num_ways, num_shot, num_eval, num_episodes, label_dictionary, train_labels, shuffle
)
indices, embeddings = embed_images(
embedding_model,
train_indices,
num_augmentations,
trivial=trivial
)
```
## CLIP
```
clip_embeddings_train_fn = "clip" + train_embeddings_filename_suffix
clip_embeddings_train = get_ndarray_from_drive(drive, folderid, clip_embeddings_train_fn)
import warnings
warnings.filterwarnings('ignore')
if train_epochs_arr == [0]:
if trivial:
results_filename = "new_metrics"+dataset_name+"_softmax_0t"+str(num_ways)+"w"+str(num_shot)+"s"+str(num_augmentations)+"a_trivial_metrics_with_logits.json"
else:
results_filename = "new_metrics"+dataset_name+"_softmax_0t"+str(num_ways)+"w"+str(num_shot)+"s"+str(num_augmentations)+"a_metrics_with_logits.json"
else:
if trivial:
results_filename = "new_metrics"+dataset_name+"_softmax_"+str(num_ways)+"w"+str(num_shot)+"s"+str(num_augmentations)+"a_trivial_metrics_with_logits.json"
else:
results_filename = "new_metrics"+dataset_name+"_softmax_"+str(num_ways)+"w"+str(num_shot)+"s"+str(num_augmentations)+"a_metrics_with_logits.json"
auth.authenticate_user()
gauth = GoogleAuth()
gauth.credentials = GoogleCredentials.get_application_default()
drive = GoogleDrive(gauth)
download_file_by_name(drive, folderid, results_filename)
if results_filename in os.listdir():
with open(results_filename, 'r') as f:
json_loaded = json.load(f)
clip_metrics_over_train_epochs = json_loaded['metrics']
logits_over_train_epochs = json_loaded["logits"]
else:
clip_metrics_over_train_epochs = []
logits_over_train_epochs = []
for idx, train_epochs in enumerate(train_epochs_arr):
if idx < len(clip_metrics_over_train_epochs):
continue
print(train_epochs)
clip_metrics_thresholds, all_logits = run_evaluations(
(indices, embeddings),
train_indices,
eval_indices,
wi_y,
eval_y,
num_episodes,
num_ways,
thresholds,
train_epochs=train_epochs,
num_augmentations=num_augmentations,
embeddings=clip_embeddings_train,
multi_label=multi_label,
metrics=metrics_val
)
clip_metrics_over_train_epochs.append(clip_metrics_thresholds)
logits_over_train_epochs.append(all_logits)
fin_list = []
for a1 in wi_y:
fin_a1_list = []
for a2 in a1:
new_val = a2.decode("utf-8") if isinstance(a2, bytes) else str(a2)
fin_a1_list.append(new_val)
fin_list.append(fin_a1_list)
with open(results_filename, 'w') as f:
results = {'metrics': clip_metrics_over_train_epochs,
"logits": logits_over_train_epochs,
"true_labels": fin_list}
json.dump(results, f)
auth.authenticate_user()
gauth = GoogleAuth()
gauth.credentials = GoogleCredentials.get_application_default()
drive = GoogleDrive(gauth)
delete_file_by_name(drive, folderid, results_filename)
save_to_drive(drive, folderid, results_filename)
def get_best_metric_and_threshold(mt, metric_name, thresholds, optimal='max'):
if optimal=='max':
opt_value = np.max(np.mean(np.array(mt[metric_name]), axis=0))
opt_threshold = thresholds[np.argmax(np.mean(np.array(mt[metric_name]), axis=0))]
if optimal=='min':
opt_value = np.min(np.mean(np.array(mt[metric_name]), axis=0))
opt_threshold = thresholds[np.argmin(np.mean(np.array(mt[metric_name]), axis=0))]
return opt_value, opt_threshold
def get_avg_metric(mt, metric_name):
opt_value = np.mean(np.array(mt[metric_name]), axis=0)
return opt_value
all_metrics = ['accuracy', 'map', 'c_f1', 'o_f1', 'c_precision', 'o_precision', 'c_recall', 'o_recall', 'top1_accuracy', 'top5_accuracy', 'c_accuracy']
f1_vals = []
f1_t_vals = []
jaccard_vals = []
jaccard_t_vals = []
final_dict = {}
for ind_metric in all_metrics:
vals = []
t_vals = []
final_array = []
for mt in clip_metrics_over_train_epochs:
ret_val = get_avg_metric(mt,ind_metric)
vals.append(ret_val)
final_array.append(vals)
final_dict[ind_metric] = final_array
if train_epochs_arr == [0]:
if trivial:
graph_filename = "new_metrics"+dataset_name+"_softmax_0t"+str(num_ways)+"w"+str(num_shot)+"s"+str(num_augmentations)+"a_trivial_metrics_graphs.json"
else:
graph_filename = "new_metrics"+dataset_name+"_softmax_0t"+str(num_ways)+"w"+str(num_shot)+"s"+str(num_augmentations)+"a_metrics_graphs.json"
else:
if trivial:
graph_filename = "new_metrics"+dataset_name+"_softmax_"+str(num_ways)+"w"+str(num_shot)+"s"+str(num_augmentations)+"a_trivial_metrics_graphs.json"
else:
graph_filename = "new_metrics"+dataset_name+"_softmax_"+str(num_ways)+"w"+str(num_shot)+"s"+str(num_augmentations)+"a_metrics_graphs.json"
with open(graph_filename, 'w') as f:
json.dump(final_dict, f)
auth.authenticate_user()
gauth = GoogleAuth()
gauth.credentials = GoogleCredentials.get_application_default()
drive = GoogleDrive(gauth)
delete_file_by_name(drive, folderid, graph_filename)
save_to_drive(drive, folderid, graph_filename)
final_dict
```
```
import pandas as pd
import numpy as np
data = pd.read_csv('features_30_sec.csv')
data.head()
dataset = data[data['label'].isin(['blues', 'classical', 'jazz', 'metal', 'pop'])].drop(['filename','length'],axis=1)
dataset.iloc[:, :-15].head()
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import LabelEncoder, StandardScaler ,MinMaxScaler
from sklearn import preprocessing
encode = LabelEncoder().fit(dataset.iloc[:,-1])
y= LabelEncoder().fit_transform(dataset.iloc[:,-1])
scaler1 = MinMaxScaler().fit(np.array(dataset.iloc[:, :-15], dtype = float))
scaler2 = StandardScaler().fit(np.array(dataset.iloc[:, :-15], dtype = float))
X = StandardScaler().fit_transform(np.array(dataset.iloc[:, :-15], dtype = float))
X.shape
X_train, X_test, y_train, y_test = train_test_split(X,y,test_size=0.30,random_state=42)
print ('Train set:', X_train.shape, y_train.shape)
print ('Test set:', X_test.shape, y_test.shape)
import tensorflow.keras as keras
from keras.models import Sequential
from keras.layers import Dense
from keras.layers import Dropout, BatchNormalization
# defining our regression model
n_cols = dataset.iloc[:, :-15].shape[1]
def regression_model_1():
# structure of our model
model = Sequential()
model.add(Dense(256, activation='relu', input_shape=(n_cols,)))
model.add(BatchNormalization())
model.add(Dropout(0.2))
model.add(Dense(128, activation='relu'))
model.add(BatchNormalization())
model.add(Dropout(0.2))
model.add(Dense(64, activation='relu',))
model.add(BatchNormalization())
model.add(Dropout(0.2))
model.add(Dense(32, activation='relu'))
model.add(BatchNormalization())
model.add(Dropout(0.2))
model.add(Dense(5,activation='softmax'))
# compile model
model.compile(optimizer='adam', loss='sparse_categorical_crossentropy', metrics=['accuracy'])
return model
model_1 = regression_model_1()
model_1.summary()
from keras.callbacks import EarlyStopping, ReduceLROnPlateau
earlystop = EarlyStopping(patience=10)
learning_rate_reduction = ReduceLROnPlateau(monitor='val_accuracy',
patience=10,
verbose=1,
)
Callbacks = [earlystop, learning_rate_reduction]
#build the model
# model_1 = regression_model_1()
#fit the model
# model_1.fit(X_train,y_train, callbacks=Callbacks , validation_data=(X_test,y_test) ,epochs=100,batch_size=150)
# model_1.save('Keras_reg_30sec_5.h5')
from keras.models import load_model
model = load_model('Keras_reg_30sec_5.h5')
predictions = model.predict_classes(X_test)
score = model.evaluate(X_test,y_test, verbose=0)
print("%s: %.2f%%" % (model.metrics_names[1], score[1]*100))
print(y_test)
print(predictions)
from sklearn.metrics import confusion_matrix
cf_matrix = confusion_matrix(y_test,predictions)
print(cf_matrix)
import seaborn as sns
%matplotlib inline
classes=['blues', 'classical', 'jazz', 'metal', 'pop']
sns.heatmap(cf_matrix, annot=True , cmap='Blues',xticklabels=classes,yticklabels=classes)
data_test = pd.read_csv('test_30_sec.csv')
dataset_test= data_test.drop(['filename','length'],axis=1)
X__test=scaler2.transform(np.array(dataset_test.iloc[:, :-15], dtype = float))
actual = encode.transform(dataset_test.iloc[:,-1])
pred = model.predict_classes(X__test)
# blues[0],classical[1],jazz[2],metal[3],pop[4]
print(actual)
print(pred)
cf_matrix_test = confusion_matrix(actual,pred)
classes=['blues', 'classical','jazz', 'metal', 'pop']
sns.heatmap(cf_matrix_test, annot=True , cmap='Blues',xticklabels=classes,yticklabels=classes)
```
## Importing Packages
```
import pandas as pd
import numpy as np
import tqdm
import pickle
from pprint import pprint
import os
import warnings
warnings.filterwarnings('ignore', category=DeprecationWarning)
#sklearn
from sklearn.manifold import TSNE
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.decomposition import LatentDirichletAllocation
from sklearn.model_selection import train_test_split
import gensim
from gensim import corpora, models
from gensim.corpora import Dictionary
from gensim.models.coherencemodel import CoherenceModel
from gensim.models.ldamodel import LdaModel
! pip install pyLDAvis
import pyLDAvis
import pyLDAvis.sklearn
import pyLDAvis.gensim_models as gensimvis
from google.colab import drive
drive.mount('/content/drive')
with open('processed_tweets.pickle', 'rb') as read_file:
df = pickle.load(read_file)
```
## Train-Test Split
```
X_train, X_test = train_test_split(df.tweet, test_size=0.2, random_state=42)
X_train
X_test
train_list_of_lists = list(X_train.values)
```
## Bigram-Trigram Models
(I did not incorporate bigrams and trigrams into the model yet)
```
# Build the bigram and trigram models
bigram = gensim.models.Phrases(train_list_of_lists, min_count=5, threshold=100) # higher threshold fewer phrases.
trigram = gensim.models.Phrases(bigram[train_list_of_lists], threshold=100)
# Faster way to get a sentence clubbed as a trigram/bigram
bigram_mod = gensim.models.phrases.Phraser(bigram)
trigram_mod = gensim.models.phrases.Phraser(trigram)
def make_bigrams(texts):
return [bigram_mod[doc] for doc in texts]
data_words_bigrams = make_bigrams(train_list_of_lists)
```
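What the `Phraser` does to a token list can be illustrated with a small pure-Python stand-in (a sketch, not gensim's actual scoring logic; the `phrases` set here is hand-picked rather than learned from the tweets):

```python
# Assumed bigrams, standing in for what gensim.models.Phrases would learn
phrases = {("new", "york"), ("social", "media")}

def apply_bigrams(tokens):
    out, i = [], 0
    while i < len(tokens):
        # merge a frequent adjacent pair into a single underscore-joined token
        if i + 1 < len(tokens) and (tokens[i], tokens[i + 1]) in phrases:
            out.append(tokens[i] + "_" + tokens[i + 1])
            i += 2
        else:
            out.append(tokens[i])
            i += 1
    return out

print(apply_bigrams(["i", "use", "social", "media", "in", "new", "york"]))
# → ['i', 'use', 'social_media', 'in', 'new_york']
```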
## Bag of Words
```
id2word = Dictionary(train_list_of_lists)
corpus = [id2word.doc2bow(text) for text in train_list_of_lists]
sample = corpus[3000]
for i in range(len(sample)):
print("Word {} (\"{}\") appears {} time(s).".format(sample[i][0],
id2word[sample[i][0]],
sample[i][1]))
```
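The `doc2bow` representation printed above can be mimicked in a few lines of pure Python (a sketch of the idea only; gensim's `Dictionary` also handles filtering and persistence): each document becomes a sorted list of `(token_id, count)` pairs.

```python
from collections import Counter

docs = [["good", "movie"], ["not", "a", "good", "movie"], ["good", "good", "film"]]

# Assign ids in order of first appearance (gensim assigns its own ids)
token2id = {}
for doc in docs:
    for tok in doc:
        token2id.setdefault(tok, len(token2id))

bow_corpus = [sorted(Counter(token2id[t] for t in doc).items()) for doc in docs]
print(bow_corpus[2])  # "good" (id 0) appears twice in the third document
```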
## LDA with Bag of Words
```
# Build LDA model
lda_model = LdaModel(corpus=corpus,
id2word=id2word,
num_topics=4,
random_state=42,
chunksize=100,
passes=100,
update_every=5,
alpha='auto',
per_word_topics=True)
pprint(lda_model.print_topics())
doc_lda = lda_model[corpus]
pyLDAvis.enable_notebook()
LDAvis_prepared = gensimvis.prepare(lda_model, corpus, id2word)
LDAvis_prepared
# Compute Coherence Score
coherence_model_lda = CoherenceModel(model=lda_model, texts=train_list_of_lists, dictionary=id2word, coherence='c_v')
coherence_lda = coherence_model_lda.get_coherence()
print('Coherence Score: ', coherence_lda)
lda_model_bow = gensim.models.LdaMulticore(corpus, num_topics=4, id2word=id2word, passes=100, workers=2)
for idx, topic in lda_model_bow.print_topics(-1):
print('Topic: {} \nWords: {}'.format(idx, topic))
LDAvis_prepared_2 = gensimvis.prepare(lda_model_bow, corpus, id2word)
LDAvis_prepared_2
for index, score in sorted(lda_model_bow[corpus[3000]], key=lambda tup: -1*tup[1]):
print("\nScore: {}\t \nTopic: {}".format(score, lda_model_bow.print_topic(index, 4)))
# Compute Coherence Score
coherence_model_lda_2 = CoherenceModel(model=lda_model_bow, texts=train_list_of_lists, dictionary=id2word, coherence='c_v')
coherence_lda_2 = coherence_model_lda_2.get_coherence()
print('Coherence Score: ', coherence_lda_2)
```
## LDA with TF-IDF
```
tfidf = models.TfidfModel(corpus)
corpus_tfidf = tfidf[corpus]
for doc in corpus_tfidf:
pprint(doc)
break
lda_model_tfidf = gensim.models.LdaMulticore(corpus_tfidf, num_topics=4, id2word=id2word, passes=100, workers=4)
for idx, topic in lda_model_tfidf.print_topics(-1):
print('Topic: {} Word: {}'.format(idx, topic))
LDAvis_prepared_3 = gensimvis.prepare(lda_model_tfidf, corpus_tfidf, id2word)
LDAvis_prepared_3
for index, score in sorted(lda_model_tfidf[corpus[3000]], key=lambda tup: -1*tup[1]):
print("\nScore: {}\t \nTopic: {}".format(score, lda_model_tfidf.print_topic(index, 4)))
# Compute Coherence Score
coherence_model_lda_3 = CoherenceModel(model=lda_model_tfidf, texts=train_list_of_lists, dictionary=id2word, coherence='c_v')
coherence_lda_3 = coherence_model_lda_3.get_coherence()
print('Coherence Score: ', coherence_lda_3)
# supporting function
def compute_coherence_values(corpus, dictionary, k, a, b):
lda_model = gensim.models.LdaMulticore(corpus=corpus,
id2word=dictionary,
num_topics=k,
random_state=42,
chunksize=100,
passes=10,
alpha=a,
eta=b)
coherence_model_lda = CoherenceModel(model=lda_model, texts=train_list_of_lists, dictionary=id2word, coherence='c_v')
return coherence_model_lda.get_coherence()
grid = {}
grid['Validation_Set'] = {}
# Topics range
min_topics = 2
max_topics = 11
step_size = 1
topics_range = range(min_topics, max_topics, step_size)
# Alpha parameter
alpha = list(np.arange(0.01, 1, 0.1))
alpha.append('symmetric')
alpha.append('asymmetric')
# Beta parameter
beta = list(np.arange(0.01, 1, 0.1))
beta.append('symmetric')
# Validation sets
num_of_docs = len(corpus)
corpus_sets = [# gensim.utils.ClippedCorpus(corpus, num_of_docs*0.25),
# gensim.utils.ClippedCorpus(corpus, num_of_docs*0.5),
gensim.utils.ClippedCorpus(corpus, int(num_of_docs*0.75)),
corpus]
corpus_title = ['75% Corpus', '100% Corpus']
model_results = {'Validation_Set': [],
'Topics': [],
'Alpha': [],
'Beta': [],
'Coherence': []
}
# Can take a long time to run; set run_sweep = False to skip the grid search
run_sweep = True
if run_sweep:
    pbar = tqdm.tqdm(total=len(corpus_sets) * len(topics_range) * len(alpha) * len(beta))
# iterate through validation corpuses
for i in range(len(corpus_sets)):
# iterate through number of topics
for k in topics_range:
# iterate through alpha values
for a in alpha:
                # iterate through beta values
for b in beta:
# get the coherence score for the given parameters
cv = compute_coherence_values(corpus=corpus_sets[i], dictionary=id2word,
k=k, a=a, b=b)
# Save the model results
model_results['Validation_Set'].append(corpus_title[i])
model_results['Topics'].append(k)
model_results['Alpha'].append(a)
model_results['Beta'].append(b)
model_results['Coherence'].append(cv)
pbar.update(1)
pd.DataFrame(model_results).to_csv('lda_tuning_results_2.csv', index=False)
pbar.close()
pd.set_option('display.max_columns', None)
pd.set_option('display.max_rows', None)
pd.set_option('display.max_colwidth', None)
results = pd.read_csv('lda_tuning_results.csv')
results.head(20)
results.sort_values('Coherence').tail(1)
results.plot(kind='scatter', x='Topics', y='Coherence')
```
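Once the sweep has run, the best configuration can be read straight off the results table. A minimal sketch of that selection step, using toy data with the same `Topics`/`Alpha`/`Beta`/`Coherence` columns the loop above writes (the values here are illustrative, not real sweep output):

```python
import pandas as pd

# Toy stand-in for the tuning-results CSV (same columns as the sweep above)
results = pd.DataFrame({
    "Topics":    [2, 3, 4, 5],
    "Alpha":     [0.01, 0.31, "symmetric", 0.61],
    "Beta":      [0.01, "symmetric", 0.31, 0.91],
    "Coherence": [0.41, 0.47, 0.52, 0.44],
})

# The row with the highest coherence score wins
best = results.loc[results["Coherence"].idxmax()]
print(best["Topics"], best["Alpha"], best["Beta"], best["Coherence"])
```

Sorting by `Coherence` instead of taking `idxmax` gives the full ranking, which is what the scatter plot visualizes.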
# Analyzing data with Dask, SQL, and Coiled
In this notebook, we look at using [Dask-SQL](https://dask-sql.readthedocs.io/en/latest/), an exciting new open-source library which adds a SQL query layer on top of Dask. This allows you to query and transform Dask DataFrames using common SQL operations.
## Launch a cluster
Let's start by creating a Coiled cluster that uses the `examples/dask-sql` software environment, which has `dask`, `pandas`, `s3fs`, and a few other libraries installed.
```
import coiled
cluster = coiled.Cluster(
n_workers=10,
worker_cpu=4,
worker_memory="30GiB",
software="examples/dask-sql",
)
cluster
```
and then connect Dask to our remote Coiled cluster
```
from dask.distributed import Client
client = Client(cluster)
client.wait_for_workers(10)
client
```
## Getting started with Dask-SQL
Internally, Dask-SQL uses a well-established Java library, Apache Calcite, to parse SQL and perform some initial work on your query. To help Dask-SQL locate JVM shared libraries, we set the `JAVA_HOME` environment variable.
```
import os
os.environ["JAVA_HOME"] = os.environ["CONDA_DIR"]
```
The main interface for interacting with Dask-SQL is the `dask_sql.Context` object. It allows you to register Dask DataFrames as data sources and converts SQL queries into Dask DataFrame operations.
```
from dask_sql import Context
c = Context()
```
For this notebook, we'll use the NYC taxi dataset, which is publicly accessible on AWS S3, as our data source
```
import dask.dataframe as dd
from distributed import wait
df = dd.read_csv(
"s3://nyc-tlc/trip data/yellow_tripdata_2019-*.csv",
dtype={
"payment_type": "UInt8",
"VendorID": "UInt8",
"passenger_count": "UInt8",
"RatecodeID": "UInt8",
},
storage_options={"anon": True}
)
# Load the dataset into the cluster's distributed memory.
# This isn't strictly necessary, but does allow us to
# avoid repeatedly running the same I/O operations.
df = df.persist()
wait(df);
```
We can then use our `dask_sql.Context` to assign a table name to this DataFrame, and then use that table name within SQL queries
```
# Registers our Dask DataFrame df as a table with the name "taxi"
c.register_dask_table(df, "taxi")
# Perform a SQL operation on the "taxi" table
result = c.sql("SELECT count(1) FROM taxi")
result
```
Note that this returned another Dask DataFrame and no computation has been run yet. This is similar to other Dask DataFrame operations, which are lazily evaluated. We can call `.compute()` to run the computation on our cluster.
```
result.compute()
```
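This build-then-compute split is the heart of lazy evaluation. A tiny pure-Python sketch of the idea (not Dask's actual machinery):

```python
class Lazy:
    """Records a computation; runs it only when .compute() is called."""
    def __init__(self, fn, *args):
        self.fn, self.args = fn, args
        self.ran = False

    def compute(self):
        self.ran = True
        return self.fn(*self.args)

# Building the expression does no work yet
task = Lazy(sum, range(5))
assert not task.ran   # nothing has executed so far

result = task.compute()
print(result)  # 10
```

Dask generalizes this to whole task graphs, which is why it can optimize and distribute the work before anything runs.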
Hooray, we've run our first SQL query with Dask-SQL! Let's try out some more complex queries.
## More complex SQL examples
With Dask-SQL we can run more complex SQL statements like, for example, a groupby-aggregation:
```
c.sql('SELECT avg(tip_amount) FROM taxi GROUP BY passenger_count').compute()
```
Note that the equivalent operation using the Dask DataFrame API would be:
```python
df.groupby("passenger_count").tip_amount.mean().compute()
```
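Both forms compute a per-group mean. A toy pandas check of that equivalence (synthetic data, not the taxi set):

```python
import pandas as pd

toy = pd.DataFrame({
    "passenger_count": [1, 1, 2, 2, 2],
    "tip_amount":      [1.0, 3.0, 2.0, 4.0, 6.0],
})

# Equivalent of: SELECT avg(tip_amount) FROM taxi GROUP BY passenger_count
avg_tip = toy.groupby("passenger_count").tip_amount.mean()
print(avg_tip.to_dict())  # {1: 2.0, 2: 4.0}
```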
We can even make plots of our SQL query results for near-real-time interactive data exploration and visualization.
```
c.sql("""
SELECT floor(trip_distance) AS dist, avg(fare_amount) as fare
FROM taxi
WHERE trip_distance < 50 AND trip_distance >= 0
GROUP BY floor(trip_distance)
""").compute().plot(x="dist", y="fare");
```
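The `floor(trip_distance)` trick buckets trips into 1-mile bins before averaging. The same binning in plain NumPy, on synthetic distances:

```python
import numpy as np

trip_distance = np.array([0.4, 0.9, 1.2, 1.8, 2.5])
fare_amount   = np.array([4.0, 6.0, 7.0, 9.0, 12.0])

bins = np.floor(trip_distance).astype(int)      # 1-mile buckets
mean_fare = {b: fare_amount[bins == b].mean() for b in np.unique(bins)}
print(mean_fare)  # {0: 5.0, 1: 8.0, 2: 12.0}
```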
If you would like to learn more about Dask-SQL check out the [Dask-SQL docs](https://dask-sql.readthedocs.io/) or [source code](https://github.com/nils-braun/dask-sql) on GitHub.
```
# Copyright 2019 Google LLC.
#
# Licensed under the Apache License, Version 2.0 (the "License")
import tensorflow as tf
import tensorflow.keras as keras
from tensorflow.keras import layers
import tensorflow.keras.backend as keras_backend
tf.keras.backend.set_floatx('float32')
import tensorflow_probability as tfp
from tensorflow_probability.python.layers import util as tfp_layers_util
import random
import sys
import time
import os
import numpy as np
import matplotlib.pyplot as plt
import seaborn as sns
print(tf.__version__) # use tensorflow version >= 2.0.0
# pip install tensorflow==2.0.0
# pip install --upgrade tensorflow-probability
exp_type = 'MAML' # choose from 'MAML', 'MR-MAML-W', 'MR-MAML-A'
class SinusoidGenerator():
def __init__(self, K=10, width=5, K_amp=20, phase=0, amps = None, amp_ind=None, amplitude =None, seed = None):
'''
Args:
K: batch size. Number of values sampled at every batch.
amplitude: Sine wave amplitude.
            phase: Sine wave phase.
'''
self.K = K
self.width = width
self.K_amp = K_amp
self.phase = phase
self.seed = seed
self.x = self._sample_x()
self.amp_ind = amp_ind if amp_ind is not None else random.randint(0,self.K_amp-5)
self.amps = amps if amps is not None else np.linspace(0.1,4,self.K_amp)
self.amplitude = amplitude if amplitude is not None else self.amps[self.amp_ind]
def _sample_x(self):
if self.seed is not None:
np.random.seed(self.seed)
return np.random.uniform(-self.width, self.width, self.K)
def batch(self, noise_scale, x = None):
'''return xa is [K, d_x+d_a], y is [K, d_y]'''
if x is None:
x = self._sample_x()
x = x[:, None]
amp = np.zeros([1, self.K_amp])
amp[0,self.amp_ind] = 1
amp = np.tile(amp, x.shape)
xa = np.concatenate([x, amp], axis = 1)
y = self.amplitude * np.sin(x - self.phase) + np.random.normal(scale = noise_scale, size = x.shape)
return xa, y
def equally_spaced_samples(self, K=None, width=None):
'''Returns K equally spaced samples.'''
if K is None:
K = self.K
if width is None:
width = self.width
return self.batch(noise_scale = 0, x=np.linspace(-width+0.5, width-0.5, K))
noise_scale = 0.1 #@param {type:"number"}
n_obs = 20 #@param {type:"number"}
n_context = 10 #@param {type:"number"}
K_amp = 20 #@param {type:"number"}
x_width = 5 #@param {type:"number"}
n_iter = 20000 #@param {type:"number"}
amps = np.linspace(0.1,4,K_amp)
lr_inner = 0.01 #@param {type:"number"}
dim_w = 5 #@param {type:"number"}
train_ds = [SinusoidGenerator(K=n_context, width = x_width, \
K_amp = K_amp, amps = amps) \
for _ in range(n_iter)]
class SineModel(keras.Model):
def __init__(self):
super(SineModel, self).__init__() # python 2 syntax
# super().__init__() # python 3 syntax
self.hidden1 = keras.layers.Dense(40)
self.hidden2 = keras.layers.Dense(40)
self.out = keras.layers.Dense(1)
def call(self, x):
x = keras.activations.relu(self.hidden1(x))
x = keras.activations.relu(self.hidden2(x))
x = self.out(x)
return x
def kl_qp_gaussian(mu_q, sigma_q, mu_p, sigma_p):
"""Kullback-Leibler KL(N(mu_q), Diag(sigma_q^2) || N(mu_p), Diag(sigma_p^2))"""
sigma2_q = tf.square(sigma_q) + 1e-16
sigma2_p = tf.square(sigma_p) + 1e-16
temp = tf.math.log(sigma2_p) - tf.math.log(sigma2_q) - 1.0 + \
sigma2_q / sigma2_p + tf.square(mu_q - mu_p) / sigma2_p #n_target * d_w
kl = 0.5 * tf.reduce_mean(temp, axis = 1)
return tf.reduce_mean(kl)
def copy_model(model, x=None, input_shape=None):
'''
Copy model weights to a new model.
Args:
model: model to be copied.
x: An input example.
'''
copied_model = SineModel()
if x is not None:
copied_model.call(tf.convert_to_tensor(x))
if input_shape is not None:
copied_model.build(tf.TensorShape([None,input_shape]))
copied_model.set_weights(model.get_weights())
return copied_model
def np_to_tensor(list_of_numpy_objs):
return (tf.convert_to_tensor(obj, dtype=tf.float32) for obj in list_of_numpy_objs)
def compute_loss(model, xa, y):
y_hat = model.call(xa)
loss = keras_backend.mean(keras.losses.mean_squared_error(y, y_hat))
return loss, y_hat
def train_batch(xa, y, model, optimizer, encoder=None):
tensor_xa, tensor_y = np_to_tensor((xa, y))
if exp_type == 'MAML':
with tf.GradientTape() as tape:
loss, _ = compute_loss(model, tensor_xa, tensor_y)
if exp_type == 'MR-MAML-W':
w = encoder(tensor_xa)
with tf.GradientTape() as tape:
y_hat = model.call(w)
loss = keras_backend.mean(keras.losses.mean_squared_error(tensor_y, y_hat))
if exp_type == 'MR-MAML-A':
_, w, _ = encoder(tensor_xa)
with tf.GradientTape() as tape:
y_hat = model.call(w)
loss = keras_backend.mean(keras.losses.mean_squared_error(y, y_hat))
gradients = tape.gradient(loss, model.trainable_variables)
optimizer.apply_gradients(zip(gradients, model.trainable_variables))
return loss
def test_inner_loop(model, optimizer, xa_context, y_context, xa_target, y_target, num_steps, encoder=None):
inner_record = []
tensor_xa_target, tensor_y_target = np_to_tensor((xa_target, y_target))
if exp_type == 'MAML':
w_target = tensor_xa_target
if exp_type == 'MR-MAML-W':
w_target = encoder(tensor_xa_target)
if exp_type == 'MR-MAML-A':
_, w_target, _ = encoder(tensor_xa_target)
for step in range(0, np.max(num_steps) + 1):
if step in num_steps:
if exp_type == 'MAML':
loss, y_hat = compute_loss(model, w_target, tensor_y_target)
else:
y_hat = model.call(w_target)
loss = keras_backend.mean(keras.losses.mean_squared_error(tensor_y_target, y_hat))
inner_record.append((step, y_hat, loss))
loss = train_batch(xa_context, y_context, model, optimizer, encoder)
return inner_record
def eval_sinewave_for_test(model, sinusoid_generator, num_steps=(0, 1, 10), encoder=None, learning_rate = lr_inner, ax = None, legend= False):
# data for training
xa_context, y_context = sinusoid_generator.batch(noise_scale = noise_scale)
y_context = y_context + np.random.normal(scale = noise_scale, size = y_context.shape)
# data for validation
xa_target, y_target = sinusoid_generator.equally_spaced_samples(K = 200, width = 5)
y_target = y_target + np.random.normal(scale = noise_scale, size = y_target.shape)
# copy model so we can use the same model multiple times
if exp_type == 'MAML':
copied_model = copy_model(model, x = xa_context)
else:
copied_model = copy_model(model, input_shape=dim_w)
optimizer = keras.optimizers.SGD(learning_rate=learning_rate)
inner_record = test_inner_loop(copied_model, optimizer, xa_context, y_context, xa_target, y_target, num_steps, encoder)
# plot
if ax is not None:
plt.sca(ax)
x_context = xa_context[:,0,None]
x_target = xa_target[:,0,None]
train, = plt.plot(x_context, y_context, '^')
    ground_truth, = plt.plot(x_target, y_target, linewidth=2.0)
plots = [train, ground_truth]
legends = ['Context Points', 'True Function']
for n, y_hat, loss in inner_record:
cur, = plt.plot(x_target, y_hat[:, 0], '--')
plots.append(cur)
legends.append('After {} Steps'.format(n))
if legend:
plt.legend(plots, legends, loc='center left', bbox_to_anchor=(1, 0.5))
plt.ylim(-6, 6)
plt.axvline(x=-sinusoid_generator.width, linestyle='--')
plt.axvline(x=sinusoid_generator.width,linestyle='--')
return inner_record
exp_type = 'MAML'
if exp_type == 'MAML':
model = SineModel()
model.build((None, K_amp+1))
dataset = train_ds
optimizer = keras.optimizers.Adam()
total_loss = 0
n_iter = 15000
losses = []
for i, t in enumerate(random.sample(dataset, n_iter)):
xa_train, y_train = np_to_tensor(t.batch(noise_scale = noise_scale))
with tf.GradientTape(watch_accessed_variables=False) as test_tape:
test_tape.watch(model.trainable_variables)
with tf.GradientTape() as train_tape:
train_loss, _ = compute_loss(model, xa_train, y_train)
model_copy = copy_model(model, xa_train)
gradients_inner = train_tape.gradient(train_loss, model.trainable_variables) # \nabla_{\theta}
k = 0
for j in range(len(model_copy.layers)):
model_copy.layers[j].kernel = tf.subtract(model.layers[j].kernel, # \phi_t = T(\theta, \nabla_{\theta})
tf.multiply(lr_inner, gradients_inner[k]))
model_copy.layers[j].bias = tf.subtract(model.layers[j].bias,
tf.multiply(lr_inner, gradients_inner[k+1]))
k += 2
xa_validation, y_validation = np_to_tensor(t.batch(noise_scale = noise_scale))
test_loss, y_hat = compute_loss(model_copy, xa_validation, y_validation) # test_loss
gradients_outer = test_tape.gradient(test_loss, model.trainable_variables)
optimizer.apply_gradients(zip(gradients_outer, model.trainable_variables))
total_loss += test_loss
loss = total_loss / (i+1.0)
if i % 1000 == 0:
print('Step {}: loss = {}'.format(i, loss))
if exp_type == 'MAML':
tf.compat.v1.logging.set_verbosity(tf.compat.v1.logging.ERROR)
n_context = 5
n_test_task = 100
errs = []
for ii in range(n_test_task):
np.random.seed(ii)
A = np.random.uniform(low = amps[0], high = amps[-1])
test_ds = SinusoidGenerator(K=n_context, seed = ii, amplitude = A, amp_ind= random.randint(0,K_amp-5))
inner_record = eval_sinewave_for_test(model, test_ds, num_steps=(0, 1, 5, 100));
errs.append(inner_record[-1][2].numpy())
print('Model is', exp_type, 'meta-test MSE is', np.mean(errs) )
```
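The `kl_qp_gaussian` helper above implements the closed-form KL between diagonal Gaussians; a NumPy sketch of the same formula is a useful sanity check, since KL of a distribution against itself must be exactly zero:

```python
import numpy as np

def kl_diag_gauss(mu_q, sigma_q, mu_p, sigma_p):
    # KL( N(mu_q, diag(sigma_q^2)) || N(mu_p, diag(sigma_p^2)) ), mean over dims
    s2q, s2p = sigma_q ** 2, sigma_p ** 2
    temp = np.log(s2p) - np.log(s2q) - 1.0 + s2q / s2p + (mu_q - mu_p) ** 2 / s2p
    return 0.5 * temp.mean()

mu = np.zeros(5)
sig = np.ones(5)
print(kl_diag_gauss(mu, sig, mu, sig))  # 0.0
```

Shifting the mean by 1 against a unit normal gives 0.5, matching the standard Gaussian KL expression term by term.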
# Training & Testing for MR-MAML(W)
```
if exp_type == 'MR-MAML-W':
model = SineModel()
dataset = train_ds
optimizer = keras.optimizers.Adam()
Beta = 5e-5
learning_rate = 1e-3
n_iter = 15000
model.build((None, dim_w))
kernel_posterior_fn=tfp_layers_util.default_mean_field_normal_fn(untransformed_scale_initializer=tf.compat.v1.initializers.random_normal(
mean=-50., stddev=0.1))
encoder_w = tf.keras.Sequential([
tfp.layers.DenseReparameterization(100, activation=tf.nn.relu, kernel_posterior_fn=kernel_posterior_fn,input_shape=(1 + K_amp,)),
tfp.layers.DenseReparameterization(dim_w,kernel_posterior_fn=kernel_posterior_fn),
])
total_loss = 0
losses = []
start = time.time()
for i, t in enumerate(random.sample(dataset, n_iter)):
xa_train, y_train = np_to_tensor(t.batch(noise_scale = noise_scale)) #[K, 1]
x_validation = np.random.uniform(-x_width, x_width, n_obs - n_context)
xa_validation, y_validation = np_to_tensor(t.batch(noise_scale = noise_scale, x = x_validation))
all_var = encoder_w.trainable_variables + model.trainable_variables
with tf.GradientTape(watch_accessed_variables=False) as test_tape:
test_tape.watch(all_var)
with tf.GradientTape() as train_tape:
w_train = encoder_w(xa_train)
y_hat_train = model.call(w_train)
train_loss = keras_backend.mean(keras.losses.mean_squared_error(y_train, y_hat_train)) # K*1
gradients_inner = train_tape.gradient(train_loss, model.trainable_variables) # \nabla_{\theta}
model_copy = copy_model(model, x = w_train)
k = 0
for j in range(len(model_copy.layers)):
model_copy.layers[j].kernel = tf.subtract(model.layers[j].kernel, # \phi_t = T(\theta, \nabla_{\theta})
tf.multiply(lr_inner, gradients_inner[k]))
model_copy.layers[j].bias = tf.subtract(model.layers[j].bias,
tf.multiply(lr_inner, gradients_inner[k+1]))
k += 2
w_validation = encoder_w(xa_validation)
y_hat_validation = model_copy.call(w_validation)
mse_loss = keras_backend.mean(keras.losses.mean_squared_error(y_validation, y_hat_validation))
kl_loss = Beta * sum(encoder_w.losses)
validation_loss = mse_loss + kl_loss
gradients_outer = test_tape.gradient(validation_loss,all_var)
keras.optimizers.Adam(learning_rate=learning_rate).apply_gradients(zip(gradients_outer, all_var))
losses.append(validation_loss.numpy())
if i % 1000 == 0 and i > 0:
print('Step {}:'.format(i), 'loss=', np.mean(losses))
losses = []
if exp_type == 'MR-MAML-W':
n_context = 5
n_test_task = 100
errs = []
for ii in range(n_test_task):
np.random.seed(ii)
A = np.random.uniform(low = amps[0], high = amps[-1])
test_ds = SinusoidGenerator(K=n_context, seed = ii, amplitude = A, amp_ind= random.randint(0,K_amp-5))
inner_record = eval_sinewave_for_test(model, test_ds, num_steps=(0, 1, 5, 100), encoder=encoder_w);
errs.append(inner_record[-1][2].numpy())
print('Model is', exp_type, ', meta-test MSE is', np.mean(errs) )
```
#Training & Testing for MR-MAML(A)
```
if exp_type == 'MR-MAML-A':
class Encoder(keras.Model):
def __init__(self, dim_w=5, name='encoder', **kwargs):
# super().__init__(name = name)
super(Encoder, self).__init__(name = name)
self.dense_proj = layers.Dense(80, activation='relu')
self.dense_mu = layers.Dense(dim_w)
self.dense_sigma_w = layers.Dense(dim_w)
def call(self, inputs):
h = self.dense_proj(inputs)
mu_w = self.dense_mu(h)
sigma_w = self.dense_sigma_w(h)
sigma_w = tf.nn.softplus(sigma_w)
ws = mu_w + tf.random.normal(tf.shape(mu_w)) * sigma_w
return ws, mu_w, sigma_w
model = SineModel()
model.build((None, dim_w))
encoder_w = Encoder(dim_w = dim_w)
encoder_w.build((None, K_amp+1))
Beta = 5.0
n_iter = 10000
dataset = train_ds
optimizer = keras.optimizers.Adam()
losses = [];
for i, t in enumerate(random.sample(dataset, n_iter)):
xa_train, y_train = np_to_tensor(t.batch(noise_scale = noise_scale)) #[K, 1]
with tf.GradientTape(watch_accessed_variables=False) as test_tape, tf.GradientTape(watch_accessed_variables=False) as encoder_test_tape:
test_tape.watch(model.trainable_variables)
encoder_test_tape.watch(encoder_w.trainable_variables)
with tf.GradientTape() as train_tape:
w_train, _, _ = encoder_w(xa_train)
y_hat = model.call(w_train)
train_loss = keras_backend.mean(keras.losses.mean_squared_error(y_train, y_hat))
model_copy = copy_model(model, x=w_train)
gradients_inner = train_tape.gradient(train_loss, model.trainable_variables) # \nabla_{\theta}
k = 0
for j in range(len(model_copy.layers)):
model_copy.layers[j].kernel = tf.subtract(model.layers[j].kernel, # \phi_t = T(\theta, \nabla_{\theta})
tf.multiply(lr_inner, gradients_inner[k]))
model_copy.layers[j].bias = tf.subtract(model.layers[j].bias,
tf.multiply(lr_inner, gradients_inner[k+1]))
k += 2
x_validation = np.random.uniform(-x_width, x_width, n_obs - n_context)
xa_validation, y_validation = np_to_tensor(t.batch(noise_scale = noise_scale, x = x_validation))
w_validation, w_mu_validation, w_sigma_validation = encoder_w(xa_validation)
test_mse, _ = compute_loss(model_copy, w_validation, y_validation)
kl_ib = kl_qp_gaussian(w_mu_validation, w_sigma_validation,
tf.zeros(tf.shape(w_mu_validation)), tf.ones(tf.shape(w_sigma_validation)))
test_loss = test_mse + Beta * kl_ib
gradients_outer = test_tape.gradient(test_mse, model.trainable_variables)
optimizer.apply_gradients(zip(gradients_outer, model.trainable_variables))
gradients = encoder_test_tape.gradient(test_loss,encoder_w.trainable_variables)
keras.optimizers.Adam(learning_rate=0.001).apply_gradients(zip(gradients, encoder_w.trainable_variables))
losses.append(test_loss)
if i % 1000 == 0 and i > 0:
print('Step {}:'.format(i), 'loss = ', np.mean(losses))
if exp_type == 'MR-MAML-A':
n_context = 5
n_test_task = 100
errs = []
for ii in range(n_test_task):
np.random.seed(ii)
A = np.random.uniform(low = amps[0], high = amps[-1])
test_ds = SinusoidGenerator(K=n_context, seed = ii, amplitude = A, amp_ind= random.randint(0,K_amp-5))
inner_record = eval_sinewave_for_test(model, test_ds, num_steps=(0, 1, 5, 100), encoder=encoder_w);
errs.append(inner_record[-1][2].numpy())
print('Model is', exp_type, ', meta-test MSE is', np.mean(errs) )
```
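The encoder samples its representation as `ws = mu_w + noise * sigma_w` — the reparameterization trick, which isolates the randomness in the noise so the sample stays differentiable with respect to `mu_w` and `sigma_w`. A seeded NumPy sketch of the same transform (purely illustrative):

```python
import numpy as np

rng = np.random.default_rng(0)
mu_w = np.array([1.0, -2.0])
sigma_w = np.array([0.5, 0.1])

# Draw unit Gaussian noise, then shift and scale it deterministically;
# gradients can flow through mu_w and sigma_w, but not through eps
eps = rng.standard_normal((10000, 2))
ws = mu_w + eps * sigma_w

print(ws.mean(axis=0))  # close to mu_w
print(ws.std(axis=0))   # close to sigma_w
```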
<a href="https://colab.research.google.com/github/gpdsec/Residual-Neural-Network/blob/main/Custom_Resnet_1.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a>
*It's a custom ResNet trained for demonstration purposes, not for accuracy.
The dataset used is the cats_vs_dogs dataset from tensorflow_datasets, with a **custom augmentation** layer for data augmentation.*
---
```
from google.colab import drive
drive.mount('/content/drive')
```
### **1. Importing Libraries**
```
import tensorflow as tf
from tensorflow.keras.layers import Dense, Dropout, Flatten, Conv2D, MaxPooling2D, BatchNormalization, Input, GlobalMaxPooling2D, add, ReLU
from tensorflow.keras import layers
from tensorflow.keras import Sequential
import tensorflow_datasets as tfds
import pandas as pd
import numpy as np
from tensorflow.keras import Model
import matplotlib.pyplot as plt
from sklearn.metrics import confusion_matrix
from PIL import Image
from tqdm.notebook import tqdm
import os
import time
%matplotlib inline
```
### **2. Loading & Processing Data**
##### **Loading Data**
```
(train_ds, val_ds, test_ds), info = tfds.load(
'cats_vs_dogs',
split=['train[:80%]', 'train[80%:90%]', 'train[90%:]'],
with_info=True,
as_supervised=True)
## Image preprocessing function
def preprocess(img, lbl):
image = tf.image.resize_with_pad(img, target_height=224, target_width=224)
image = tf.divide(image, 255)
label = [0,0]
if int(lbl) == 1:
label[1]=1
else:
label[0]=1
return image, tf.cast(label, tf.float32)
train_ds = train_ds.map(preprocess)
test_ds = test_ds.map(preprocess)
val_ds = val_ds.map(preprocess)
info
```
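`preprocess` builds the two-class one-hot label by hand; the same mapping in isolation (plain Python, no TensorFlow — `one_hot_label` is an illustrative name):

```python
def one_hot_label(lbl):
    # cat (0) -> [1, 0], dog (1) -> [0, 1], mirroring preprocess() above
    label = [0, 0]
    label[1 if int(lbl) == 1 else 0] = 1
    return label

print(one_hot_label(0), one_hot_label(1))  # [1, 0] [0, 1]
```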
#### **Data Augmentation layer**
```
###### Important Variables
batch_size = 32
shape = (224, 224, 3)
training_steps = int(18610/batch_size)
validation_steps = int(2326/batch_size)
path = '/content/drive/MyDrive/Colab Notebooks/cats_v_dogs.h5'
####### Data augmentation layer
# RandomFlip and RandomRotation suit my needs for data augmentation
augmentation=Sequential([
layers.experimental.preprocessing.RandomFlip("horizontal_and_vertical"),
layers.experimental.preprocessing.RandomRotation(0.2),
])
####### Data Shuffle and batch Function
def shuffle_batch(train_set, val_set, batch_size):
    train_set = train_set.shuffle(1000).batch(batch_size)
    train_set = train_set.map(lambda x, y: (augmentation(x, training=True), y))
    val_set = val_set.shuffle(1000).batch(batch_size)
    val_set = val_set.map(lambda x, y: (augmentation(x, training=True), y))
    return train_set, val_set
train_set, val_set = shuffle_batch(train_ds, val_ds, batch_size)
```
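`batch(batch_size)` groups consecutive records into fixed-size chunks, with a final short batch when the dataset size isn't a multiple of the batch size. A stdlib sketch of just the batching step (shuffling and augmentation omitted):

```python
def batched(items, batch_size):
    # Yield consecutive chunks of at most batch_size items
    for start in range(0, len(items), batch_size):
        yield items[start:start + batch_size]

batches = list(batched(list(range(10)), 4))
print([len(b) for b in batches])  # [4, 4, 2]
```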
## **3. Creating Model**
##### **Creating Residual block**
```
def residual_block(x, feature_map, filter=(3,3) , _strides=(1,1), _network_shortcut=False):
shortcut = x
x = Conv2D(feature_map, filter, strides=_strides, activation='relu', padding='same')(x)
x = BatchNormalization()(x)
x = Conv2D(feature_map, filter, strides=_strides, activation='relu', padding='same')(x)
x = BatchNormalization()(x)
if _network_shortcut :
shortcut = Conv2D(feature_map, filter, strides=_strides, activation='relu', padding='same')(shortcut)
shortcut = BatchNormalization()(shortcut)
x = add([shortcut, x])
x = ReLU()(x)
return x
# Build the model using the functional API
i = Input(shape)
x = Conv2D(32, (3, 3), activation='relu', padding='same')(i)
x = BatchNormalization()(x)
x = residual_block(x, 32, filter=(3,3) , _strides=(1,1), _network_shortcut=False)
#x = Conv2D(32, (3, 3), activation='relu', padding='same')(x)
#x = BatchNormalization()(x)
x = MaxPooling2D((2, 2))(x)
x = Conv2D(64, (3, 3), activation='relu', padding='same')(x)
x = BatchNormalization()(x)
x = residual_block(x,64, filter=(3,3) , _strides=(1,1), _network_shortcut=False)
x = MaxPooling2D((2, 2))(x)
x = Conv2D(64, (3, 3), activation='relu', padding='same')(x)
x = BatchNormalization()(x)
x = MaxPooling2D((2, 2))(x)
x = Conv2D(128, (3, 3), activation='relu', padding='same')(x)
x = BatchNormalization()(x)
x = Conv2D(128, (3, 3), activation='relu', padding='same')(x)
x = BatchNormalization()(x)
x = MaxPooling2D((2, 2))(x)
x = Flatten()(x)
x = Dropout(0.2)(x)
x = Dense(512, activation='relu')(x)
x = Dropout(0.2)(x)
x = Dense(2, activation='sigmoid')(x)
model = Model(i, x)
model.compile()
model.summary()
```
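`residual_block` computes `ReLU(F(x) + shortcut)`; stripped of the Conv2D/BatchNorm details, the skip connection looks like this in NumPy (toy 1-D vectors, illustrative only):

```python
import numpy as np

def relu(x):
    return np.maximum(x, 0)

def residual(x, f):
    # Identity shortcut: the block learns a residual F(x) added back onto x
    return relu(f(x) + x)

x = np.array([1.0, -2.0, 3.0])
out = residual(x, lambda v: -0.5 * v)   # toy F(x)
print(out)  # relu(0.5 * x) = [0.5, 0.0, 1.5]
```

The `_network_shortcut` branch above replaces the identity with a projection when the shapes of `F(x)` and `x` differ.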
### **4. Optimizer and loss Function**
```
loss_object = tf.keras.losses.BinaryCrossentropy(from_logits=False)
Optimiser = tf.keras.optimizers.Adam()
```
### **5. Metrics for Loss and Accuracy**
```
train_loss = tf.keras.metrics.Mean(name='train_loss')
train_accuracy = tf.keras.metrics.BinaryAccuracy(name='train_accuracy')
test_loss = tf.keras.metrics.Mean(name="test_loss")
test_accuracy = tf.keras.metrics.BinaryAccuracy(name='test_accuracy')
```
### **6. Function for training and Testing**
```
@tf.function
def train_step(images, labels):
with tf.GradientTape() as tape:
prediction = model(images, training=True)
loss = loss_object(labels,prediction)
gradient = tape.gradient(loss, model.trainable_variables)
Optimiser.apply_gradients(zip(gradient, model.trainable_variables))
train_loss(loss)
train_accuracy(labels, prediction)
@tf.function
def test_step(images, labels):
prediction = model(images, training = False)
t_loss = loss_object(labels, prediction)
test_loss(t_loss)
test_accuracy(labels, prediction)
```
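`Optimiser.apply_gradients` applies, at its core, the gradient-descent update `w ← w − lr·∇L(w)` to each trainable variable (Adam adds per-parameter scaling and momentum on top). A pure-Python sketch of that core update on the toy loss L(w) = (w − 3)²:

```python
w = 0.0
lr = 0.1
for _ in range(100):
    grad = 2 * (w - 3)   # dL/dw for L(w) = (w - 3)^2
    w = w - lr * grad    # the basic update behind apply_gradients
print(w)  # converges toward the minimizer w = 3
```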
### **7. Training Model**
```
EPOCHS = 25
Train_LOSS = []
TRain_Accuracy = []
Test_LOSS = []
Test_Accuracy = []
for epoch in range(EPOCHS):
train_loss.reset_states()
train_accuracy.reset_states()
test_loss.reset_states()
test_accuracy.reset_states()
print(f'Epoch : {epoch+1}')
desc = "EPOCHS {:0>4d}".format(epoch+1)
for images, labels in tqdm(train_set, total=training_steps, desc=desc):
train_step(images, labels)
for test_images, test_labels in val_set:
test_step(test_images, test_labels)
print(
f'Loss: {train_loss.result()}, '
f'Accuracy: {train_accuracy.result()*100}, '
f'Test Loss: {test_loss.result()}, '
f'Test Accuracy: {test_accuracy.result()*100}'
)
Train_LOSS.append(train_loss.result())
TRain_Accuracy.append(train_accuracy.result()*100)
Test_LOSS.append(test_loss.result())
Test_Accuracy.append(test_accuracy.result()*100)
### Saving BestModel
if epoch==0:
min_Loss = test_loss.result()
min_Accuracy = test_accuracy.result()*100
elif (min_Loss>test_loss.result()):
if (min_Accuracy <= test_accuracy.result()*100) :
min_Loss = test_loss.result()
min_Accuracy = ( test_accuracy.result()*100)
print(f"Saving Best Model {epoch+1}")
model.save_weights(path) # Saving Model To drive
```
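The checkpointing branch above saves weights whenever a new minimum validation loss (with non-decreasing accuracy) is seen. The loss-tracking part in isolation, as a pure-Python sketch (`best_epochs` is an illustrative name, not from the notebook):

```python
def best_epochs(val_losses):
    # Indices of epochs where a new minimum validation loss is reached
    best, saved = float("inf"), []
    for epoch, loss in enumerate(val_losses):
        if loss < best:
            best = loss
            saved.append(epoch)
    return saved

print(best_epochs([0.9, 0.7, 0.8, 0.5, 0.6]))  # [0, 1, 3]
```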
### **8. Plotting Loss and Accuracy Per Iteration**
```
# Plot loss per iteration
plt.plot(Train_LOSS, label='loss')
plt.plot(Test_LOSS, label='val_loss')
plt.title('Plot loss per iteration')
plt.legend()
# Plot Accuracy per iteration
plt.plot(TRain_Accuracy, label='accuracy')
plt.plot(Test_Accuracy, label='val_accuracy')
plt.title('Plot Accuracy per iteration')
plt.legend()
```
## 9. Evaluating model
##### **Note-**
Testing the accuracy of the model on a completely unseen dataset.
```
model.load_weights(path)
len(test_ds)
test_set = test_ds.shuffle(50).batch(2326)
for images, labels in test_set:
prediction = model.predict(images)
break
## Function For Accuracy
def accuracy(prediction, labels):
    correct = 0
    for i in range(len(prediction)):
        pred = prediction[i]
        label = labels[i]
        if pred[0] > pred[1] and label[0] > label[1]:
            correct += 1
        elif pred[0] < pred[1] and label[0] < label[1]:
            correct += 1
    return (correct / len(prediction)) * 100
print(accuracy(prediction, labels))
```
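The hand-written accuracy function compares the larger of the two class scores; the same computation via argmax in NumPy:

```python
import numpy as np

prediction = np.array([[0.9, 0.1], [0.2, 0.8], [0.6, 0.4]])
labels     = np.array([[1, 0],     [0, 1],     [0, 1]])

# A prediction is correct when its argmax matches the label's argmax
acc = float(np.mean(prediction.argmax(axis=1) == labels.argmax(axis=1))) * 100
print(acc)  # 66.66... (2 of 3 correct)
```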
```
import geopandas as gpd
import pandas as pd
import os
import numpy as np
import matplotlib as mpl
import matplotlib.pyplot as plt
import tarfile
from discretize import TensorMesh
from SimPEG.utils import plot2Ddata, surface2ind_topo
from SimPEG.potential_fields import gravity
from SimPEG import (
maps,
data,
data_misfit,
inverse_problem,
regularization,
optimization,
directives,
inversion,
utils,
)
sagrav = gpd.read_file(r'C:\users\rscott\Downloads\gravity_stations_shp\gravity_stations.shp')
print(sagrav['MGA_ZONE'].unique())
sagrav.head()
#survey_array = sagrav[['LONGITUDE','LATITUDE','AHD_ELEVAT','BA_1984_UM']].to_numpy()
survey_array = sagrav[['MGA_EAST','MGA_NORTH','AHD_ELEVAT','BA_1984_UM']].to_numpy()
dobs = survey_array
survey_array.shape
dobs.shape
#dobs_total_bounds = [sagrav['MGA_EAST'].min(),sagrav['MGA_NORTH'].min(),sagrav['MGA_EAST'].max(),sagrav['MGA_NORTH'].max()]
dobs_total_bounds = sagrav.total_bounds
print(dobs_total_bounds)
sa54 = sagrav.loc[sagrav['MGA_ZONE'] == 54]
dobs_total_bounds
minx, miny, maxx, maxy = dobs_total_bounds
minx = sa54['MGA_EAST'].min()
maxx = sa54['MGA_EAST'].max()
miny = sa54['MGA_NORTH'].min()
maxy = sa54['MGA_NORTH'].max()
minxtest = maxx - 5000
maxxtest = maxx
minytest = maxy - 5000
maxytest = maxy
print(minxtest, maxxtest, minytest, maxytest)
# Define receiver locations and observed data
receiver_locations = dobs[:, 0:3]
dobs = dobs[:, -1]
#sagrav_test = sagrav.loc[(sagrav['MGA_EAST'] >= minxtest) & (sagrav['MGA_EAST'] <= maxxtest) & (sagrav['MGA_NORTH'] >= minytest) & (sagrav['MGA_NORTH'] <= maxytest) ]
from tqdm import tqdm
from time import sleep
#print(minxtest, minytest, maxxtest, maxytest)
print(minx, miny, maxx, maxy)
#maxrangey = (maxy - miny)//0.045
#maxrangex = (maxx - minx)//0.045
maxrangey = (maxy - miny)//5000
maxrangex = (maxx - minx)//5000
print(maxrangex, maxrangey)
#with tqdm(total=maxrangey) as pbar:
for i in range(int(maxrangey)):
print(i)
for j in range(int(maxrangex)):
#xmin, ymin, xmax, ymax = sagrav_test.total_bounds
#xmin = minx + j*0.045
#ymin = miny + i*0.045
#xmax = minx + (j+1)*0.045
#ymax = miny + (i+1)*0.045
xmin = minx + j*5000
ymin = miny + i*5000
xmax = minx + (j+1)*5000
ymax = miny + (i+1)*5000
print(xmin, ymin, xmax, ymax)
#sagrav_test = sagrav.loc[(sagrav['LONGITUDE'] >= xmin) & (sagrav['LATITUDE'] >= ymin) & (sagrav['LONGITUDE'] <= xmax) & (sagrav['LATITUDE'] <= ymax) ]
sagrav_test = sa54.loc[(sa54['MGA_EAST'] >= xmin) & (sa54['MGA_NORTH'] >= ymin) & (sa54['MGA_EAST'] <= xmax) & (sa54['MGA_NORTH'] <= ymax) ]
#sac_sussex = sagrav.cx[xmin:xmax, ymin:ymax]
#print(sagrav_test.shape)
if (sagrav_test.shape[0] > 0):
#print(sagrav_test)
break
if (sagrav_test.shape[0] > 3):
print(sagrav_test)
break
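# A hedged restatement of the tiling above: the (i, j)-th 5 km tile of the
# survey extent as a small helper (tile_bounds is an illustrative name, not
# part of the original notebook)
def tile_bounds(minx, miny, size, i, j):
    # j steps east, i steps north from the grid origin
    xmin = minx + j * size
    ymin = miny + i * size
    return xmin, ymin, xmin + size, ymin + size

# e.g. tile (i=1, j=2) of a grid anchored at (0, 0):
print(tile_bounds(0.0, 0.0, 5000, 1, 2))  # (10000.0, 5000.0, 15000.0, 10000.0)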
print(minx, miny, maxx, maxy, sagrav_test.shape)
print(sagrav_test.total_bounds)
print(xmin, xmax, ymin, ymax)
ncx = 10
ncy = 10
ncz = 5
#dx = 0.0045*2
#dy = 0.0045*2
dx = 500
dy = 500
dz = 200
x0 = xmin
y0 = ymin
z0 = -1000
hx = dx*np.ones(ncx)
hy = dy*np.ones(ncy)
hz = dz*np.ones(ncz)
mesh2 = TensorMesh([hx, hy, hz], x0=[x0, y0, z0])
mesh2
sagrav_test
survey_array_test = sagrav_test[['MGA_EAST','MGA_NORTH','AHD_ELEVAT','BA_1984_UM']].to_numpy()
print(survey_array_test.shape)
dobs_test = survey_array_test
receiver_locations_test = dobs_test[:, 0:3]
dobs_test = dobs_test[:, -1]
# Plot
mpl.rcParams.update({"font.size": 12})
fig = plt.figure(figsize=(7, 5))
ax1 = fig.add_axes([0.1, 0.1, 0.73, 0.85])
plot2Ddata(receiver_locations_test, dobs_test, ax=ax1, contourOpts={"cmap": "bwr"})
ax1.set_title("Gravity Anomaly")
ax1.set_xlabel("x (m)")
ax1.set_ylabel("y (m)")
ax2 = fig.add_axes([0.8, 0.1, 0.03, 0.85])
norm = mpl.colors.Normalize(vmin=-np.max(np.abs(dobs_test)), vmax=np.max(np.abs(dobs_test)))
cbar = mpl.colorbar.ColorbarBase(
ax2, norm=norm, orientation="vertical", cmap=mpl.cm.bwr, format="%.1e"
)
cbar.set_label("$mgal$", rotation=270, labelpad=15, size=12)
plt.show()
dobs_test.shape
sagrav_test
maximum_anomaly = np.max(np.abs(dobs_test))
uncertainties = 0.01 * maximum_anomaly * np.ones(np.shape(dobs_test))
print(i)
# Define the receivers. The data consist of vertical gravity anomaly measurements.
# The set of receivers must be defined as a list.
receiver_list = gravity.receivers.Point(receiver_locations_test, components="gz")
receiver_list = [receiver_list]
# Define the source field
source_field = gravity.sources.SourceField(receiver_list=receiver_list)
# Define the survey
survey = gravity.survey.Survey(source_field)
receiver_list
data_object = data.Data(survey, dobs=dobs_test, standard_deviation=uncertainties)
data_object
mesh2
#source_field
# Define density contrast values for each unit in g/cc. Don't make this 0!
# Otherwise the gradient for the 1st iteration is zero and the inversion will
# not converge.
background_density = 1e-6
# Find the indices of the active cells in the forward model (ones below the surface)
#ind_active = surface2ind_topo(mesh, xyz_topo)
topo_fake = receiver_locations_test + 399
print(receiver_locations_test)
print(topo_fake)
ind_active = surface2ind_topo(mesh2, receiver_locations_test)
#ind_active = surface2ind_topo(mesh2, topo_fake)
#ind_active = surface2ind_topo(mesh2, topo_fake)
# Define mapping from model to active cells
nC = int(ind_active.sum())
model_map = maps.IdentityMap(nP=nC) # model consists of a value for each active cell
# Define and plot starting model
starting_model = background_density * np.ones(nC)
nC
model_map
ind_active
starting_model
simulation = gravity.simulation.Simulation3DIntegral(
survey=survey, mesh=mesh2, rhoMap=model_map, actInd=ind_active
)
# Define the data misfit. Here the data misfit is the L2 norm of the weighted
# residual between the observed data and the data predicted for a given model.
# Within the data misfit, the residual between predicted and observed data are
# normalized by the data's standard deviation.
dmis = data_misfit.L2DataMisfit(data=data_object, simulation=simulation)
# Define the regularization (model objective function).
reg = regularization.Simple(mesh2, indActive=ind_active, mapping=model_map)
# Define how the optimization problem is solved. Here we will use a projected
# Gauss-Newton approach that employs the conjugate gradient solver.
opt = optimization.ProjectedGNCG(
maxIter=10, lower=-1.0, upper=1.0, maxIterLS=20, maxIterCG=10, tolCG=1e-3
)
# Here we define the inverse problem that is to be solved
inv_prob = inverse_problem.BaseInvProblem(dmis, reg, opt)
dmis.nD
# Defining a starting value for the trade-off parameter (beta) between the data
# misfit and the regularization.
starting_beta = directives.BetaEstimate_ByEig(beta0_ratio=1e0)
# Defining the fractional decrease in beta and the number of Gauss-Newton solves
# for each beta value.
beta_schedule = directives.BetaSchedule(coolingFactor=5, coolingRate=1)
# Options for outputting recovered models and predicted data for each beta.
save_iteration = directives.SaveOutputEveryIteration(save_txt=False)
# Updating the preconditioner if it is model dependent.
update_jacobi = directives.UpdatePreconditioner()
# Setting a stopping criterion for the inversion.
target_misfit = directives.TargetMisfit(chifact=1)
# Add sensitivity weights
sensitivity_weights = directives.UpdateSensitivityWeights(everyIter=False)
# The directives are defined as a list.
directives_list = [
sensitivity_weights,
starting_beta,
beta_schedule,
save_iteration,
update_jacobi,
target_misfit,
]
# Here we combine the inverse problem and the set of directives
inv = inversion.BaseInversion(inv_prob, directives_list)
# Run inversion
recovered_model = inv.run(starting_model)
# Plot Recovered Model
fig = plt.figure(figsize=(9, 4))
plotting_map = maps.InjectActiveCells(mesh2, ind_active, np.nan)
ax1 = fig.add_axes([0.1, 0.1, 0.73, 0.8])
#ax1 = fig.add_axes([10.1, 10.1, 73.73, 80.8])
mesh2.plotSlice(
plotting_map * recovered_model,
normal="Y",
ax=ax1,
ind=int(mesh2.nCy / 2),
grid=True,
clim=(np.min(recovered_model), np.max(recovered_model)),
pcolorOpts={"cmap": "viridis"},
)
ax1.set_title("Model slice at y = 0 m")
ax2 = fig.add_axes([0.85, 0.1, 0.05, 0.8])
norm = mpl.colors.Normalize(vmin=np.min(recovered_model), vmax=np.max(recovered_model))
cbar = mpl.colorbar.ColorbarBase(
ax2, norm=norm, orientation="vertical", cmap=mpl.cm.viridis
)
cbar.set_label("$g/cm^3$", rotation=270, labelpad=15, size=12)
plt.show()
dpred = inv_prob.dpred
# Observed data | Predicted data | Normalized data misfit
data_array = np.c_[dobs_test, dpred, (dobs_test - dpred) / uncertainties]
fig = plt.figure(figsize=(17, 4))
plot_title = ["Observed", "Predicted", "Normalized Misfit"]
plot_units = ["mgal", "mgal", ""]
ax1 = 3 * [None]
ax2 = 3 * [None]
norm = 3 * [None]
cbar = 3 * [None]
cplot = 3 * [None]
v_lim = [np.max(np.abs(dobs_test)), np.max(np.abs(dobs_test)), np.max(np.abs(data_array[:, 2]))]
for ii in range(0, 3):
ax1[ii] = fig.add_axes([0.33 * ii + 0.03, 0.11, 0.23, 0.84])
cplot[ii] = plot2Ddata(
receiver_list[0].locations,
data_array[:, ii],
ax=ax1[ii],
ncontour=30,
clim=(-v_lim[ii], v_lim[ii]),
contourOpts={"cmap": "bwr"},
)
ax1[ii].set_title(plot_title[ii])
ax1[ii].set_xlabel("x (m)")
ax1[ii].set_ylabel("y (m)")
ax2[ii] = fig.add_axes([0.33 * ii + 0.25, 0.11, 0.01, 0.85])
norm[ii] = mpl.colors.Normalize(vmin=-v_lim[ii], vmax=v_lim[ii])
cbar[ii] = mpl.colorbar.ColorbarBase(
ax2[ii], norm=norm[ii], orientation="vertical", cmap=mpl.cm.bwr
)
cbar[ii].set_label(plot_units[ii], rotation=270, labelpad=15, size=12)
plt.show()
dpred
data_source = "https://storage.googleapis.com/simpeg/doc-assets/gravity.tar.gz"
# download the data
downloaded_data = utils.download(data_source, overwrite=True)
# unzip the tarfile
tar = tarfile.open(downloaded_data, "r")
tar.extractall()
tar.close()
# path to the directory containing our data
dir_path = downloaded_data.split(".")[0] + os.path.sep
# files to work with
topo_filename = dir_path + "gravity_topo.txt"
data_filename = dir_path + "gravity_data.obs"
model_filename = dir_path + "true_model.txt"
xyz_topo = np.loadtxt(str(topo_filename))
xyz_topo.shape
xyzdobs = np.loadtxt(str(data_filename))
xyzdobs.shape
xyz_topo[1]
xyzdobs[0]
xyzdobs
sagrav_test
dobs_test
survey_array_test[0]
receiver_locations_test[0]
print(survey)
survey.nD
data
data.noise_floor
mesh2
xyzdobs
recovered_model
from SimPEG.utils import plot2Ddata, surface2ind_topo
```
| github_jupyter |
# Overview
In this project, I will build an item-based collaborative filtering system using the [MovieLens Datasets](https://grouplens.org/datasets/movielens/latest/). Specifically, I will train a KNN model to cluster similar movies based on users' ratings and make movie recommendations based on the similarity scores of previously rated movies.
## [Recommender system](https://en.wikipedia.org/wiki/Recommender_system)
A recommendation system is basically an information filtering system that seeks to predict the "rating" or "preference" a user would give to an item. It is widely used by internet businesses such as Amazon, Netflix, and Spotify, and by social media platforms like Facebook and YouTube. By using recommender systems, these companies can provide products, services, and content that are better personalized to a user based on his or her past behavior.
Recommender systems typically produce a list of recommendations through collaborative filtering or through content-based filtering.
This project focuses on collaborative filtering and uses an item-based collaborative filtering system to make movie recommendations.
## [Item-based Collaborative Filtering](https://beckernick.github.io/music_recommender/)
Collaborative filtering based systems use the actions of users to recommend other items. In general, they can be either user based or item based. User-based collaborative filtering uses the patterns of users similar to me to recommend a product ("users like me also looked at these other items"). Item-based collaborative filtering uses the patterns of users who browsed the same item as me to recommend a product ("users who looked at my item also looked at these other items"). The item-based approach is usually preferred over the user-based approach: the user-based approach is harder to scale because of the dynamic nature of users, whereas items usually don't change much, so the item-based approach can often be computed offline.
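A toy example makes the item-based idea concrete. Here is a minimal sketch (all ratings invented for illustration) that compares item rows of a small movie-user matrix with cosine similarity:

```python
import numpy as np

# Toy movie-user rating matrix: rows = movies (items), columns = users.
# All values are invented purely for illustration; 0 means "not rated".
R = np.array([
    [5, 4, 0, 1],   # movie A
    [4, 5, 1, 0],   # movie B
    [0, 1, 5, 4],   # movie C
], dtype=float)

def cosine_sim(a, b):
    """Cosine similarity between two rating vectors."""
    return a @ b / (np.linalg.norm(a) * np.linalg.norm(b))

# Item-based filtering compares ROWS: movies rated by the same users.
sim_ab = cosine_sim(R[0], R[1])  # liked by the same users -> high similarity
sim_ac = cosine_sim(R[0], R[2])  # liked by different users -> low similarity
print(round(sim_ab, 3), round(sim_ac, 3))
```

User-based filtering would instead compare the columns of the same matrix; the math is identical, only the axis changes.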
## Data Sets
I use [MovieLens Datasets](https://grouplens.org/datasets/movielens/latest/).
This dataset (ml-latest.zip) describes 5-star rating and free-text tagging activity from [MovieLens](http://movielens.org), a movie recommendation service. It contains 27,753,444 ratings and 1,108,997 tag applications across 58,098 movies. These data were created by 283,228 users between January 09, 1995 and September 26, 2018. This dataset was generated on September 26, 2018.
Users were selected at random for inclusion. All selected users had rated at least 1 movie. No demographic information is included. Each user is represented by an id, and no other information is provided.
The data are contained in the files `genome-scores.csv`, `genome-tags.csv`, `links.csv`, `movies.csv`, `ratings.csv` and `tags.csv`.
## Project Content
1. Load data
2. Exploratory data analysis
3. Train KNN model for item-based collaborative filtering
4. Use this trained model to make movie recommendations to myself
5. Deep dive into the bottleneck of item-based collaborative filtering.
- cold start problem
- data sparsity problem
- popular bias (how to recommend products from the tail of product distribution)
- scalability bottleneck
6. Further study
```
import os
import time
# data science imports
import math
import numpy as np
import pandas as pd
from scipy.sparse import csr_matrix
from sklearn.neighbors import NearestNeighbors
# utils import
from fuzzywuzzy import fuzz
# visualization imports
import seaborn as sns
import matplotlib.pyplot as plt
plt.style.use('ggplot')
%matplotlib inline
# path config
data_path = os.path.join(os.environ['DATA_PATH'], 'MovieLens')
movies_filename = 'movies.csv'
ratings_filename = 'ratings.csv'
```
## 1. Load Data
```
df_movies = pd.read_csv(
os.path.join(data_path, movies_filename),
usecols=['movieId', 'title'],
dtype={'movieId': 'int32', 'title': 'str'})
df_ratings = pd.read_csv(
os.path.join(data_path, ratings_filename),
usecols=['userId', 'movieId', 'rating'],
dtype={'userId': 'int32', 'movieId': 'int32', 'rating': 'float32'})
df_movies.info()
df_ratings.info()
df_movies.head()
df_ratings.head()
num_users = len(df_ratings.userId.unique())
num_items = len(df_ratings.movieId.unique())
print('There are {} unique users and {} unique movies in this data set'.format(num_users, num_items))
```
## 2. Exploratory data analysis
- Plot the counts of each rating
- Plot rating frequency of each movie
#### 1. Plot the counts of each rating
We first need to get the counts of each rating from the ratings data.
```
# get count
df_ratings_cnt_tmp = pd.DataFrame(df_ratings.groupby('rating').size(), columns=['count'])
df_ratings_cnt_tmp
```
We can see that the table above does not include the count for a rating score of zero, so we need to add it to the rating count dataframe as well.
```
# there are a lot more counts in rating of zero
total_cnt = num_users * num_items
rating_zero_cnt = total_cnt - df_ratings.shape[0]
# append counts of zero rating to df_ratings_cnt
# (DataFrame.append was removed in pandas 2.0, so use pd.concat instead)
df_ratings_cnt = pd.concat(
    [df_ratings_cnt_tmp, pd.DataFrame({'count': rating_zero_cnt}, index=[0.0])],
    verify_integrity=True,
).sort_index()
df_ratings_cnt
```
The count for the zero rating score is too large to compare with the others, so let's take the log transform of the count values and then plot them for comparison.
```
# add log count
df_ratings_cnt['log_count'] = np.log(df_ratings_cnt['count'])
df_ratings_cnt
ax = df_ratings_cnt[['count']].reset_index().rename(columns={'index': 'rating score'}).plot(
x='rating score',
y='count',
kind='bar',
figsize=(12, 8),
title='Count for Each Rating Score (in Log Scale)',
logy=True,
fontsize=12,
)
ax.set_xlabel("movie rating score")
ax.set_ylabel("number of ratings")
```
It's interesting that more people give rating scores of 3 and 4 than any other score.
#### 2. Plot rating frequency of all movies
```
df_ratings.head()
# get rating frequency
df_movies_cnt = pd.DataFrame(df_ratings.groupby('movieId').size(), columns=['count'])
df_movies_cnt.head()
# plot rating frequency of all movies
ax = df_movies_cnt \
.sort_values('count', ascending=False) \
.reset_index(drop=True) \
.plot(
figsize=(12, 8),
title='Rating Frequency of All Movies',
fontsize=12
)
ax.set_xlabel("movie Id")
ax.set_ylabel("number of ratings")
```
The distribution of ratings among movies often satisfies a property in real-world settings,
which is referred to as the long-tail property. According to this property, only a small
fraction of the items are rated frequently. Such items are referred to as popular items. The
vast majority of items are rated rarely. This results in a highly skewed distribution of the
underlying ratings.
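This long-tail behavior can be sketched with synthetic Zipf-law counts (count ∝ 1/rank; purely illustrative numbers, not the MovieLens counts):

```python
import numpy as np

# Synthetic Zipf-law rating counts: the item at rank r gets counts proportional to 1/r.
# Purely illustrative -- not the MovieLens data.
n_items = 1000
counts = 1.0 / np.arange(1, n_items + 1)

top = int(0.1 * n_items)                   # the 10% most popular items
share = counts[:top].sum() / counts.sum()  # their share of all ratings
print(f"top 10% of items receive {share:.1%} of all ratings")
```

Even in this toy setup, the most popular tenth of the catalogue collects well over half of all ratings.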
Let's plot the same distribution but with log scale
```
# plot rating frequency of all movies in log scale
ax = df_movies_cnt \
.sort_values('count', ascending=False) \
.reset_index(drop=True) \
.plot(
figsize=(12, 8),
title='Rating Frequency of All Movies (in Log Scale)',
fontsize=12,
logy=True
)
ax.set_xlabel("movie Id")
ax.set_ylabel("number of ratings (log scale)")
```
We can see that roughly 10,000 out of 53,889 movies are rated more than 100 times. More interestingly, roughly 20,000 out of 53,889 movies are rated fewer than 10 times. Let's look closer by displaying the top quantiles of rating counts.
```
df_movies_cnt['count'].quantile(np.arange(1, 0.6, -0.05))
```
So about 1% of movies have roughly 97,999 or more ratings, 5% have 1,855 or more, and 20% have 100 or more. Since we have so many movies, we'll limit the data to the top 25%. This is an arbitrary popularity threshold, but it gives us about 13,500 different movies, which is still a good amount for modeling. There are two reasons why we want to filter to roughly 13,500 movies in our dataset.
- Memory issue: we don't want to run into the “MemoryError” during model training
- Improve KNN performance: lesser-known movies have ratings from fewer viewers, making the pattern noisier. Dropping less-known movies can improve recommendation quality
```
# filter data
popularity_thres = 50
popular_movies = list(set(df_movies_cnt.query('count >= @popularity_thres').index))
df_ratings_drop_movies = df_ratings[df_ratings.movieId.isin(popular_movies)]
print('shape of original ratings data: ', df_ratings.shape)
print('shape of ratings data after dropping unpopular movies: ', df_ratings_drop_movies.shape)
```
After dropping 75% of the movies, we still have a very large dataset, so next we can filter users to further reduce the size of the data.
```
# get number of ratings given by every user
df_users_cnt = pd.DataFrame(df_ratings_drop_movies.groupby('userId').size(), columns=['count'])
df_users_cnt.head()
# plot rating frequency of all movies
ax = df_users_cnt \
.sort_values('count', ascending=False) \
.reset_index(drop=True) \
.plot(
figsize=(12, 8),
title='Rating Frequency of All Users',
fontsize=12
)
ax.set_xlabel("user Id")
ax.set_ylabel("number of ratings")
df_users_cnt['count'].quantile(np.arange(1, 0.5, -0.05))
```
We can see that the distribution of ratings by users is very similar to the distribution of ratings among movies: both have the long-tail property. Only a very small fraction of users are actively engaged in rating the movies they watched; the vast majority aren't interested in rating movies. So we can limit users to the top 40%, which is about 113,291 users.
```
# filter data
ratings_thres = 50
active_users = list(set(df_users_cnt.query('count >= @ratings_thres').index))
df_ratings_drop_users = df_ratings_drop_movies[df_ratings_drop_movies.userId.isin(active_users)]
print('shape of original ratings data: ', df_ratings.shape)
print('shape of ratings data after dropping both unpopular movies and inactive users: ', df_ratings_drop_users.shape)
```
## 3. Train KNN model for item-based collaborative filtering
- Reshaping the Data
- Fitting the Model
#### 1. Reshaping the Data
For K-Nearest Neighbors, we want the data to be in a (movie, user) array, where each row is a movie and each column is a different user. To reshape the dataframe, we'll pivot it to the wide format with movies as rows and users as columns. Then we'll fill the missing observations with 0s, since we're going to be performing linear algebra operations (calculating distances between vectors). Finally, we transform the values of the dataframe into a scipy sparse matrix for more efficient calculations.
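On a tiny invented ratings table, the pivot-and-sparsify step looks like this (a sketch with made-up ids and ratings):

```python
import pandas as pd
from scipy.sparse import csr_matrix

# Tiny invented ratings table with the same columns as df_ratings.
toy = pd.DataFrame({
    'userId':  [1, 1, 2, 3],
    'movieId': [10, 20, 10, 30],
    'rating':  [5.0, 3.0, 4.0, 2.0],
})

# Wide (movie x user) format; unrated cells become 0.
mat = toy.pivot(index='movieId', columns='userId', values='rating').fillna(0)

# The sparse representation stores only the 4 actual ratings.
sparse = csr_matrix(mat.values)
print(mat.shape, sparse.nnz)
```

The same three lines scale to the full filtered ratings table below; only the inputs differ.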
```
# pivot and create movie-user matrix
movie_user_mat = df_ratings_drop_users.pivot(index='movieId', columns='userId', values='rating').fillna(0)
# create mapper from movie title to index
movie_to_idx = {
movie: i for i, movie in
enumerate(list(df_movies.set_index('movieId').loc[movie_user_mat.index].title))
}
# transform matrix to scipy sparse matrix
movie_user_mat_sparse = csr_matrix(movie_user_mat.values)
```
#### 2. Fitting the Model
Time to implement the model. We'll initialize the NearestNeighbors class as `model_knn` and fit our sparse matrix to the instance. By specifying `metric='cosine'`, the model will measure similarity between movie vectors using cosine similarity.
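As a quick sanity check on toy vectors (not our data), `NearestNeighbors` with `metric='cosine'` returns 1 − cosine similarity as the distance:

```python
import numpy as np
from sklearn.neighbors import NearestNeighbors

# Three toy 2-d item vectors (invented); cosine distance = 1 - cosine similarity.
X = np.array([[1.0, 0.0],
              [1.0, 1.0],
              [0.0, 1.0]])

nn = NearestNeighbors(metric='cosine', algorithm='brute')
nn.fit(X)
dist, idx = nn.kneighbors(X[:1], n_neighbors=2)

# Nearest to [1, 0] is itself (distance 0), then [1, 1],
# whose distance is 1 - cos(45 degrees), about 0.293.
print(idx, dist)
```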
```
%env JOBLIB_TEMP_FOLDER=/tmp
# define model
model_knn = NearestNeighbors(metric='cosine', algorithm='brute', n_neighbors=20, n_jobs=-1)
# fit
model_knn.fit(movie_user_mat_sparse)
```
## 4. Use this trained model to make movie recommendations to myself
And we're finally ready to make some recommendations!
```
def fuzzy_matching(mapper, fav_movie, verbose=True):
"""
return the closest match via fuzzy ratio. If no match found, return None
Parameters
----------
mapper: dict, map movie title name to index of the movie in data
fav_movie: str, name of user input movie
verbose: bool, print log if True
Return
------
index of the closest match
"""
match_tuple = []
# get match
for title, idx in mapper.items():
ratio = fuzz.ratio(title.lower(), fav_movie.lower())
if ratio >= 60:
match_tuple.append((title, idx, ratio))
# sort
match_tuple = sorted(match_tuple, key=lambda x: x[2])[::-1]
if not match_tuple:
print('Oops! No match is found')
return
if verbose:
print('Found possible matches in our database: {0}\n'.format([x[0] for x in match_tuple]))
return match_tuple[0][1]
def make_recommendation(model_knn, data, mapper, fav_movie, n_recommendations):
"""
return top n similar movie recommendations based on user's input movie
Parameters
----------
model_knn: sklearn model, knn model
data: movie-user matrix
mapper: dict, map movie title name to index of the movie in data
fav_movie: str, name of user input movie
n_recommendations: int, top n recommendations
Return
------
list of top n similar movie recommendations
"""
# fit
model_knn.fit(data)
# get input movie index
print('You have input movie:', fav_movie)
idx = fuzzy_matching(mapper, fav_movie, verbose=True)
# inference
print('Recommendation system start to make inference')
print('......\n')
distances, indices = model_knn.kneighbors(data[idx], n_neighbors=n_recommendations+1)
# get list of raw idx of recommendations
raw_recommends = \
sorted(list(zip(indices.squeeze().tolist(), distances.squeeze().tolist())), key=lambda x: x[1])[:0:-1]
# get reverse mapper
reverse_mapper = {v: k for k, v in mapper.items()}
# print recommendations
print('Recommendations for {}:'.format(fav_movie))
for i, (idx, dist) in enumerate(raw_recommends):
print('{0}: {1}, with distance of {2}'.format(i+1, reverse_mapper[idx], dist))
my_favorite = 'Iron Man'
make_recommendation(
model_knn=model_knn,
data=movie_user_mat_sparse,
fav_movie=my_favorite,
mapper=movie_to_idx,
n_recommendations=10)
```
It is very interesting that my **KNN** model recommends movies that were produced in very similar years. However, the cosine distances of all those recommendations are actually quite small. This is probably because there are too many zero values in our movie-user matrix. With too many zero values in the data, sparsity becomes a real issue for the **KNN** model, and the distances in the **KNN** model start to fall apart. So I'd like to dig deeper and look closer inside our data.
#### (extra inspection)
Let's now look at how sparse the movie-user matrix is by calculating the percentage of zero values in the data.
```
# calculate total number of entries in the movie-user matrix
num_entries = movie_user_mat.shape[0] * movie_user_mat.shape[1]
# calculate total number of entries with zero values
num_zeros = (movie_user_mat==0).sum(axis=1).sum()
# calculate ratio of number of zeros to number of entries
ratio_zeros = num_zeros / num_entries
print('About {:.2%} of the ratings in our data are missing'.format(ratio_zeros))
```
This result confirms my hypothesis: the vast majority of entries in our data are zero. This explains why the distances between similar items and between opposite items are both pretty large.
## 5. Deep dive into the bottleneck of item-based collaborative filtering.
- cold start problem
- data sparsity problem
- popular bias (how to recommend products from the tail of product distribution)
- scalability bottleneck
We saw that 98.35% of user-movie interactions are not yet recorded, even after filtering out less-known movies and inactive users. Apparently, we don't have sufficient information for the system to make reliable inferences for users or items. This is called the **cold start** problem in recommender systems.
There are three cases of cold start:
1. New community: refers to the start-up of the recommender when, although a catalogue of items might exist, almost no users are present, and the lack of user interactions makes it very hard to provide reliable recommendations
2. New item: a new item is added to the system; it might have some content information, but no interactions are present
3. New user: a new user registers and has not provided any interactions yet, so it is not possible to provide personalized recommendations
We are not concerned with the last case, because we can use item-based filtering to make recommendations for a new user. In our case, we are more concerned with the first two cases, especially the second.
The item cold-start problem refers to items added to the catalogue that have either no interactions or very few. This is a problem mainly for collaborative filtering algorithms, because they rely on an item's interactions to make recommendations. If no interactions are available, a pure collaborative algorithm cannot recommend the item at all. If only a few interactions are available, a collaborative algorithm can recommend it, but the quality of those recommendations will be poor. This raises another issue, no longer related to new items but rather to unpopular ones: in some cases (e.g. movie recommendations) a handful of items receive an extremely high number of interactions, while most items receive only a fraction of them. This is also referred to as popularity bias. Recall the long-tail skewed distribution in the movie rating frequency plot above.
In addition, scalability is a big issue for the KNN model too. Its time complexity is O(nd + kn), where n is the cardinality of the training set and d is the dimension of each sample. KNN also spends more time making inferences than training, which increases prediction latency.
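To see where the O(nd + kn) cost comes from, here is a minimal brute-force sketch in NumPy (an illustration of the principle, not sklearn's internals):

```python
import numpy as np

def brute_force_knn(X, q, k):
    """Return indices of the k nearest rows of X to query q (cosine distance)."""
    Xn = X / np.linalg.norm(X, axis=1, keepdims=True)
    qn = q / np.linalg.norm(q)
    dists = 1.0 - Xn @ qn          # n*d multiplications: the O(nd) term
    # selecting the k best of n candidates; np.argpartition would give the O(kn) term
    return np.argsort(dists)[:k]

rng = np.random.default_rng(0)
X = rng.random((1000, 50))         # n = 1000 training samples, d = 50 features
q = rng.random(50)
neighbors = brute_force_knn(X, q, 3)
print(neighbors)
```

Every query touches all n rows, which is why inference, not training, dominates the runtime.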
## 6. Further study
Use Spark's ALS to solve the problems above
| github_jupyter |
```
import json
import os
from pathlib import Path
import time
import copy
import numpy as np
import pandas as pd
import torch
from torch import nn, optim
from torch.utils.data import Dataset, DataLoader
from torchvision import models
from fastai.dataset import open_image
import json
from PIL import ImageDraw, ImageFont
import matplotlib.pyplot as plt
from matplotlib import patches, patheffects
import cv2
from tqdm import tqdm
SIZE = 224
IMAGES = 'images'
ANNOTATIONS = 'annotations'
CATEGORIES = 'categories'
ID = 'id'
NAME = 'name'
IMAGE_ID = 'image_id'
BBOX = 'bbox'
CATEGORY_ID = 'category_id'
FILE_NAME = 'file_name'
!pwd
!ls $HOME/data/pascal
!ls ../input/pascal/pascal
PATH = Path('/home/paperspace/data/pascal')
list(PATH.iterdir())
train_data = json.load((PATH/'pascal_train2007.json').open())
val_data = json.load((PATH/'pascal_val2007.json').open())
test_data = json.load((PATH/'pascal_test2007.json').open())
print('train:', train_data.keys())
print('val:', val_data.keys())
print('test:', test_data.keys())
train_data[ANNOTATIONS][:1]
train_data[IMAGES][:2]
len(train_data[CATEGORIES])
next(iter(train_data[CATEGORIES]))
```
## Categories - 1-indexed
```
categories = {c[ID]:c[NAME] for c in train_data[CATEGORIES]}
categories
len(categories)
IMAGE_PATH = Path(PATH/'JPEGImages/')
list(IMAGE_PATH.iterdir())[:2]
train_filenames = {o[ID]:o[FILE_NAME] for o in train_data[IMAGES]}
print('length:', len(train_filenames))
image1_id, image1_fn = next(iter(train_filenames.items()))
image1_id, image1_fn
train_image_ids = [o[ID] for o in train_data[IMAGES]]
print('length:', len(train_image_ids))
train_image_ids[:5]
IMAGE_PATH
image1_path = IMAGE_PATH/image1_fn
image1_path
str(image1_path)
im = open_image(str(IMAGE_PATH/image1_fn))
print(type(im))
im.shape
len(train_data[ANNOTATIONS])
# get the biggest object label per image
train_data[ANNOTATIONS][0]
bbox = train_data[ANNOTATIONS][0][BBOX]
bbox
def fastai_bb(bb):
return np.array([bb[1], bb[0], bb[3]+bb[1]-1, bb[2]+bb[0]-1])
print(bbox)
print(fastai_bb(bbox))
fbb = fastai_bb(bbox)
fbb
def fastai_bb_hw(bb):
h= bb[3]-bb[1]+1
w = bb[2]-bb[0]+1
return [h,w]
fastai_bb_hw(fbb)
def pascal_bb_hw(bb):
return bb[2:]
pascal_bb_hw(bbox)
train_image_w_area = {i:None for i in train_image_ids}
print(image1_id, train_image_w_area[image1_id])
for x in train_data[ANNOTATIONS]:
bbox = x[BBOX]
new_category_id = x[CATEGORY_ID]
image_id = x[IMAGE_ID]
h, w = pascal_bb_hw(bbox)
new_area = h*w
cat_id_area = train_image_w_area[image_id]
if not cat_id_area:
train_image_w_area[image_id] = (new_category_id, new_area)
else:
category_id, area = cat_id_area
if new_area > area:
train_image_w_area[image_id] = (new_category_id, new_area)
train_image_w_area[image1_id]
plt.imshow(im)
def show_img(im, figsize=None, ax=None):
if not ax:
fig,ax = plt.subplots(figsize=figsize)
ax.imshow(im)
ax.get_xaxis().set_visible(False)
ax.get_yaxis().set_visible(False)
return ax
show_img(im)
# show_img(im)
# b = bb_hw(im0_a[0])
# draw_rect(ax, b)
image1_fn
def draw_rect(ax, b):
patch = ax.add_patch(patches.Rectangle(b[:2], *b[-2:], fill=False, edgecolor='white', lw=2))
draw_outline(patch, 4)
image1_id
image1_path
plt.imshow(open_image(str(image1_path)))
train_data[ANNOTATIONS][0]
im = open_image(str(image1_path))
ax = show_img(im)
def draw_outline(o, lw):
o.set_path_effects([patheffects.Stroke(
linewidth=lw, foreground='black'), patheffects.Normal()])
image1_ann = train_data[ANNOTATIONS][0]
b = fastai_bb(image1_ann[BBOX])
b
def draw_text(ax, xy, txt, sz=14):
text = ax.text(*xy, txt,
verticalalignment='top', color='white', fontsize=sz, weight='bold')
draw_outline(text, 1)
ax = show_img(im)
b = image1_ann[BBOX]
print(b)
draw_rect(ax, b)
draw_text(ax, b[:2], categories[image1_ann[CATEGORY_ID]])
# create a Pandas dataframe for: image_id, filename, category
BIGGEST_OBJECT_CSV = '../input/pascal/pascal/tmp/biggest-object.csv'
IMAGE = 'image'
CATEGORY = 'category'
train_df = pd.DataFrame({
IMAGE_ID: image_id,
IMAGE: str(IMAGE_PATH/image_fn),
CATEGORY: train_image_w_area[image_id][0]
} for image_id, image_fn in train_filenames.items())
train_df.head()
# NOTE: won't work in Kaggle Kernal b/c read-only file system
# train_df.to_csv(BIGGEST_OBJECT_CSV, index=False)
train_df.iloc[0]
len(train_df)
class BiggestObjectDataset(Dataset):
def __init__(self, df):
self.df = df
def __len__(self):
return len(self.df)
def __getitem__(self, idx):
im = open_image(self.df.iloc[idx][IMAGE]) # HW
resized_image = cv2.resize(im, (SIZE, SIZE)) # HW
image = np.transpose(resized_image, (2, 0, 1)) # CHW
category = self.df.iloc[idx][CATEGORY]
return image, category
dataset = BiggestObjectDataset(train_df)
inputs, label = dataset[0]
label
inputs.shape
hwc_image = np.transpose(inputs, (1, 2, 0))
plt.imshow(hwc_image)
```
# DataLoader
```
BATCH_SIZE = 64
NUM_WORKERS = 4
dataloader = DataLoader(dataset, batch_size=BATCH_SIZE,
shuffle=True, num_workers=NUM_WORKERS)
batch_inputs, batch_labels = next(iter(dataloader))
batch_inputs.size()
batch_labels
np_batch_inputs = batch_inputs.numpy()
i = np.random.randint(0,20)
print(categories[batch_labels[i].item()])
chw_image = np_batch_inputs[i]
print(chw_image.shape)
hwc_image = np.transpose(chw_image, (1, 2, 0))
plt.imshow(hwc_image)
NUM_CATEGORIES = len(categories)
NUM_CATEGORIES
```
## train the model
```
device = torch.device("cuda:0" if torch.cuda.is_available() else "cpu")
# device = torch.device('cpu')
print('device:', device)
model_ft = models.resnet18(pretrained=True)
# freeze pretrained model
for layer in model_ft.parameters():
layer.requires_grad = False
num_ftrs = model_ft.fc.in_features
print('final layer in/out:', num_ftrs, NUM_CATEGORIES)
model_ft.fc = nn.Linear(num_ftrs, NUM_CATEGORIES)
model_ft = model_ft.to(device)
criterion = nn.CrossEntropyLoss()
# Observe that all parameters are being optimized
optimizer = optim.SGD(model_ft.parameters(), lr=0.01, momentum=0.9)
EPOCHS = 10
epoch_losses = []
epoch_accuracies = []
for epoch in range(EPOCHS):
print('epoch:', epoch)
running_loss = 0.0
running_correct = 0
for inputs, labels in tqdm(dataloader):
inputs = inputs.to(device)
labels = labels.to(device)
# clear gradients
optimizer.zero_grad()
# forward pass
outputs = model_ft(inputs)
_, preds = torch.max(outputs, dim=1)
labels_0_indexed = labels-1
loss = criterion(outputs, labels_0_indexed)
# backwards pass
loss.backward()
optimizer.step()
# step stats
running_loss += loss.item() * inputs.size(0)
running_correct += torch.sum(preds == labels_0_indexed)
# epoch stats
epoch_loss = running_loss / len(dataset)
epoch_acc = running_correct.double().item() / len(dataset)
epoch_losses.append(epoch_loss)
epoch_accuracies.append(epoch_acc)
print('loss:', epoch_loss, 'acc:', epoch_acc)
epoch_losses
epoch_accuracies
plt.plot(epoch_losses)
plt.plot(epoch_accuracies)
```
| github_jupyter |
## Libraries:
```
# importing libraries
import matplotlib.pyplot as plt
import pandas as pd
import numpy as np
from sklearn import datasets, linear_model, preprocessing
import statsmodels.api as sm
from sklearn.metrics import mean_squared_error, r2_score
from sklearn.preprocessing import StandardScaler
from datetime import datetime
from sklearn.linear_model import Lasso, LassoCV
from sklearn.impute import SimpleImputer
from sklearn.tree import DecisionTreeRegressor
from sklearn.metrics import mean_squared_error as MSE
from sklearn.model_selection import train_test_split
```
#### Reading the CSV and removing some redundant columns:
```
Y_train=pd.read_csv('shared/bases_budokai_ufpr/produtividade_soja_modelagem.csv')
df=pd.read_csv('shared/bases_budokai_ufpr/agroclimatology_budokai.csv')
Y_train.head()
Y_train['nivel'].unique()
#Y_codigo=Y_train['codigo_ibge']
Y_train=Y_train.drop(columns=['nivel', 'name'])
Y_train.head()
```
### According to the expected answers we have:
```
codigo_saida=[4102000,4104303,4104428,4104808,4104907,4109401,4113205,4113700,4113734,4114005,4114401,4117701,4117909,4119608,4119905,4127007,4127403,4127502,4127700,4128005
]
dataset=pd.DataFrame(columns=df.keys())
#dataset.loc[dataset['codigo_ibge']==codigo_saida[j]]
for k in codigo_saida:
dataset=pd.concat([df.loc[df['codigo_ibge']==k],dataset])
dataset=dataset.reset_index(drop=True)
dataset.info()
dataset['codigo_ibge']=list(map(int,dataset['codigo_ibge']))
# convert the code back to integer
#df.loc[df['codigo_ibge']==codigo_saida[0]]
```
### Let's start by converting the date field to `datetime` so we can group by week, month, or year more easily.
```
# parse the integer dates (YYYYMMDD) once with pandas, then extract the components
parsed = pd.to_datetime(dataset['data'].astype(str), format="%Y%m%d")
dataset['day'] = parsed.dt.day
dataset['month'] = parsed.dt.month
dataset['year'] = parsed.dt.year
dataset.head()
```
## *An important feature is last year's production value; insert it later.
```
dataset=dataset.drop(dataset.loc[dataset['year']==2003].index).reset_index(drop=True)
```
#### The idea later on is to build a weighting scheme for each month (weighted average)
### As a quick system test we can implement a fast prediction algorithm based on the mean of the yearly values.
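The weighted monthly average mentioned above could later be sketched along these lines. This is only a rough sketch: the `month_weights` values, the `weighted_yearly_mean` helper, and the `value` column in the toy frame are all hypothetical and would need tuning to the crop calendar.

```python
import pandas as pd

# hypothetical weights per month (e.g. emphasizing the summer growing months)
month_weights = {m: 1.0 for m in range(1, 13)}
month_weights.update({11: 2.0, 12: 2.0, 1: 2.0, 2: 2.0})

def weighted_yearly_mean(group):
    # weighted average of a value column, weighting each row by its month
    w = group['month'].map(month_weights)
    return (group['value'] * w).sum() / w.sum()

# tiny example frame with a single climate variable
df_example = pd.DataFrame({'year': [2004, 2004, 2004],
                           'month': [1, 6, 11],
                           'value': [10.0, 4.0, 8.0]})
print(df_example.groupby('year').apply(weighted_yearly_mean))  # 2004 -> 8.0
```

The same `groupby('year').apply(...)` shape would slot in where the plain `.agg('mean')` is used below.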
```
dataset=dataset.drop(columns=['data','latitude','longitude','day','month'])
```
#### For each municipality code we group the yearly data by their mean.
```
codigos=dataset['codigo_ibge'].unique()
datanew=pd.DataFrame(columns=dataset.keys())
for i in codigos:
    aux = dataset.loc[dataset['codigo_ibge']==i].groupby(by='year').agg('mean').reset_index()
    datanew = pd.concat([datanew, aux])
datanew=datanew.reset_index(drop=True)
datanew.head()
X_train=datanew.loc[datanew['year']<2018]
X_test=datanew.loc[datanew['year']>2017]
```
#### Scaling the data:
```
colunas_=list(X_train.keys())
colunas_.remove('year')
colunas_.remove('codigo_ibge')
# removing the columns that will not be scaled
scaler=StandardScaler()
X_train_scaled=scaler.fit_transform(X_train[colunas_])
X_train_scaled=pd.DataFrame(X_train_scaled, columns=colunas_)
X_test_scaled=scaler.transform(X_test[colunas_])
X_test_scaled=pd.DataFrame(X_test_scaled, columns=colunas_)
```
### We need to re-insert the code and year and reset the index.
```
X_train_scaled[['codigo_ibge','year']]=X_train[['codigo_ibge','year']].reset_index(drop=True)
X_test_scaled[['codigo_ibge','year']]=X_test[['codigo_ibge','year']].reset_index(drop=True)
X_train_scaled
# k-fold CV to find the best value of alpha
model = LassoCV(cv=5, random_state=0, max_iter=10000)
df_out=pd.DataFrame(columns=['codigo_ibge','2018','2019','2020'])
for i in codigo_saida:
    X_train_menor = X_train_scaled.loc[X_train_scaled['codigo_ibge']==i].drop(columns=['codigo_ibge','year'])
    Y_train_menor = Y_train[Y_train['codigo_ibge']==i].drop(columns='codigo_ibge').T
    model.fit(X_train_menor, Y_train_menor)
    ##########################
    # Test with a decision tree
    dt = DecisionTreeRegressor(max_depth=4, min_samples_leaf=0.1, random_state=3)
    dt.fit(X_train_menor, Y_train_menor)
    ##############################################
    lasso_t = Lasso(alpha=model.alpha_, max_iter=10000).fit(X_train_menor, Y_train_menor)
    print(f'\n Alpha used: {model.alpha_}')
    print('Non-zero features for {}: {}'.format(i, np.sum(lasso_t.coef_ != 0)))
    print('Features with non-zero values (sorted by absolute magnitude):')
    for e in sorted(list(zip(list(X_train), lasso_t.coef_)), key=lambda e: -abs(e[1])):
        if e[1] != 0:
            print('\t{}, {:.3f}'.format(e[0], e[1]))
    X_test_menor = X_test_scaled.loc[X_test_scaled['codigo_ibge']==i].drop(columns=['codigo_ibge','year'])
    ##################
    Y_prd = dt.predict(X_test_menor)
    predicoes = Y_prd
    # Analyzing the accuracy
    ##################
    #predicoes = lasso_t.predict(X_test_menor)
    data_saida = {'codigo_ibge': i, '2018': predicoes[0], '2019': predicoes[1], '2020': predicoes[2]}
    data_saida = pd.DataFrame([data_saida])
    df_out = pd.concat([df_out, data_saida])
df_out=df_out.reset_index(drop=True)
df_out
df_out.to_csv('submission.csv',index=False)
```
# Introduction to Deep Learning with PyTorch
In this notebook, you'll get introduced to [PyTorch](http://pytorch.org/), a framework for building and training neural networks. PyTorch in a lot of ways behaves like the arrays you love from Numpy. These Numpy arrays, after all, are just tensors. PyTorch takes these tensors and makes it simple to move them to GPUs for the faster processing needed when training neural networks. It also provides a module that automatically calculates gradients (for backpropagation!) and another module specifically for building neural networks. All together, PyTorch ends up being more coherent with Python and the Numpy/Scipy stack compared to TensorFlow and other frameworks.
## Neural Networks
Deep Learning is based on artificial neural networks which have been around in some form since the late 1950s. The networks are built from individual parts approximating neurons, typically called units or simply "neurons." Each unit has some number of weighted inputs. These weighted inputs are summed together (a linear combination) then passed through an activation function to get the unit's output.
<img src="assets/simple_neuron.png" width=400px>
Mathematically this looks like:
$$
\begin{align}
y &= f(w_1 x_1 + w_2 x_2 + b) \\
y &= f\left(\sum_i w_i x_i +b \right)
\end{align}
$$
With vectors this is the dot/inner product of two vectors:
$$
h = \begin{bmatrix}
x_1 \, x_2 \cdots x_n
\end{bmatrix}
\cdot
\begin{bmatrix}
w_1 \\
w_2 \\
\vdots \\
w_n
\end{bmatrix}
$$
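As a quick numeric check of the formula above, here is a tiny NumPy sketch; all values below are hypothetical.

```python
import numpy as np

def sigmoid(z):
    return 1 / (1 + np.exp(-z))

x = np.array([0.5, -1.0, 2.0])  # inputs (hypothetical)
w = np.array([0.1, 0.4, -0.2])  # weights (hypothetical)
b = 0.05                        # bias
y = sigmoid(np.dot(x, w) + b)   # y = f(w1*x1 + w2*x2 + w3*x3 + b)
print(y)  # ≈ 0.332
```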
## Tensors
It turns out neural network computations are just a bunch of linear algebra operations on *tensors*, a generalization of matrices. A vector is a 1-dimensional tensor, a matrix is a 2-dimensional tensor, an array with three indices is a 3-dimensional tensor (RGB color images for example). The fundamental data structure for neural networks are tensors and PyTorch (as well as pretty much every other deep learning framework) is built around tensors.
<img src="assets/tensor_examples.svg" width=600px>
With the basics covered, it's time to explore how we can use PyTorch to build a simple neural network.
```
# First, import PyTorch
import torch
def activation(x):
    """ Sigmoid activation function

    Arguments
    ---------
    x: torch.Tensor
    """
    return 1/(1+torch.exp(-x))
### Generate some data
torch.manual_seed(7) # Set the random seed so things are predictable
# Features are 5 random normal variables
features = torch.randn((1, 5))
# True weights for our data, random normal variables again
weights = torch.randn_like(features)
# and a true bias term
bias = torch.randn((1, 1))
```
Above I generated data we can use to get the output of our simple network. This is all just random for now, going forward we'll start using normal data. Going through each relevant line:
`features = torch.randn((1, 5))` creates a tensor with shape `(1, 5)`, one row and five columns, that contains values randomly distributed according to the normal distribution with a mean of zero and standard deviation of one.
`weights = torch.randn_like(features)` creates another tensor with the same shape as `features`, again containing values from a normal distribution.
Finally, `bias = torch.randn((1, 1))` creates a single value from a normal distribution.
PyTorch tensors can be added, multiplied, subtracted, etc, just like Numpy arrays. In general, you'll use PyTorch tensors pretty much the same way you'd use Numpy arrays. They come with some nice benefits though such as GPU acceleration which we'll get to later. For now, use the generated data to calculate the output of this simple single layer network.
> **Exercise**: Calculate the output of the network with input features `features`, weights `weights`, and bias `bias`. Similar to Numpy, PyTorch has a [`torch.sum()`](https://pytorch.org/docs/stable/torch.html#torch.sum) function, as well as a `.sum()` method on tensors, for taking sums. Use the function `activation` defined above as the activation function.
```
## Calculate the output of this network using the weights and bias tensors
activation(torch.sum(weights * features) + bias)
```
You can do the multiplication and sum in the same operation using a matrix multiplication. In general, you'll want to use matrix multiplications since they are more efficient and accelerated using modern libraries and high-performance computing on GPUs.
Here, we want to do a matrix multiplication of the features and the weights. For this we can use [`torch.mm()`](https://pytorch.org/docs/stable/torch.html#torch.mm) or [`torch.matmul()`](https://pytorch.org/docs/stable/torch.html#torch.matmul) which is somewhat more complicated and supports broadcasting. If we try to do it with `features` and `weights` as they are, we'll get an error
```python
>> torch.mm(features, weights)
---------------------------------------------------------------------------
RuntimeError Traceback (most recent call last)
<ipython-input-13-15d592eb5279> in <module>()
----> 1 torch.mm(features, weights)
RuntimeError: size mismatch, m1: [1 x 5], m2: [1 x 5] at /Users/soumith/minicondabuild3/conda-bld/pytorch_1524590658547/work/aten/src/TH/generic/THTensorMath.c:2033
```
As you're building neural networks in any framework, you'll see this often. Really often. What's happening here is our tensors aren't the correct shapes to perform a matrix multiplication. Remember that for matrix multiplications, the number of columns in the first tensor must equal the number of rows in the second tensor. Both `features` and `weights` have the same shape, `(1, 5)`. This means we need to change the shape of `weights` to get the matrix multiplication to work.
**Note:** To see the shape of a tensor called `tensor`, use `tensor.shape`. If you're building neural networks, you'll be using this method often.
There are a few options here: [`weights.reshape()`](https://pytorch.org/docs/stable/tensors.html#torch.Tensor.reshape), [`weights.resize_()`](https://pytorch.org/docs/stable/tensors.html#torch.Tensor.resize_), and [`weights.view()`](https://pytorch.org/docs/stable/tensors.html#torch.Tensor.view).
* `weights.reshape(a, b)` will return a new tensor with the same data as `weights` with size `(a, b)`. It sometimes shares the underlying data and sometimes returns a clone, as in it copies the data to another part of memory.
* `weights.resize_(a, b)` returns the same tensor with a different shape. However, if the new shape results in fewer elements than the original tensor, some elements will be removed from the tensor (but not from memory). If the new shape results in more elements than the original tensor, new elements will be uninitialized in memory. Here I should note that the underscore at the end of the method denotes that this method is performed **in-place**. Here is a great forum thread to [read more about in-place operations](https://discuss.pytorch.org/t/what-is-in-place-operation/16244) in PyTorch.
* `weights.view(a, b)` will return a new tensor with the same data as `weights` with size `(a, b)`.
I usually use `.view()`, but any of the three methods will work for this. So, now we can reshape `weights` to have five rows and one column with something like `weights.view(5, 1)`.
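For example, a quick check of the shapes involved (a minimal sketch):

```python
import torch

w = torch.randn((1, 5))
print(w.shape)             # torch.Size([1, 5])
print(w.view(5, 1).shape)  # torch.Size([5, 1]) -- a shape torch.mm can use
```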
> **Exercise**: Calculate the output of our little network using matrix multiplication.
```
## Calculate the output of this network using matrix multiplication
activation(torch.matmul(weights, features.t()) + bias)
```
### Stack them up!
That's how you can calculate the output for a single neuron. The real power of this algorithm happens when you start stacking these individual units into layers and stacks of layers, into a network of neurons. The output of one layer of neurons becomes the input for the next layer. With multiple input units and output units, we now need to express the weights as a matrix.
<img src='assets/multilayer_diagram_weights.png' width=450px>
The first layer shown on the bottom here are the inputs, understandably called the **input layer**. The middle layer is called the **hidden layer**, and the final layer (on the right) is the **output layer**. We can express this network mathematically with matrices again and use matrix multiplication to get linear combinations for each unit in one operation. For example, the hidden layer ($h_1$ and $h_2$ here) can be calculated
$$
\vec{h} = [h_1 \, h_2] =
\begin{bmatrix}
x_1 \, x_2 \cdots \, x_n
\end{bmatrix}
\cdot
\begin{bmatrix}
w_{11} & w_{12} \\
w_{21} &w_{22} \\
\vdots &\vdots \\
w_{n1} &w_{n2}
\end{bmatrix}
$$
The output for this small network is found by treating the hidden layer as inputs for the output unit. The network output is expressed simply
$$
y = f_2 \! \left(\, f_1 \! \left(\vec{x} \, \mathbf{W_1}\right) \mathbf{W_2} \right)
$$
```
### Generate some data
torch.manual_seed(7) # Set the random seed so things are predictable
# Features are 3 random normal variables
features = torch.randn((1, 3))
# Define the size of each layer in our network
n_input = features.shape[1] # Number of input units, must match number of input features
n_hidden = 2 # Number of hidden units
n_output = 1 # Number of output units
# Weights for inputs to hidden layer
W1 = torch.randn(n_input, n_hidden)
# Weights for hidden layer to output layer
W2 = torch.randn(n_hidden, n_output)
# and bias terms for hidden and output layers
B1 = torch.randn((1, n_hidden))
B2 = torch.randn((1, n_output))
```
> **Exercise:** Calculate the output for this multi-layer network using the weights `W1` & `W2`, and the biases, `B1` & `B2`.
```
## Your solution here
activation(torch.mm(activation(torch.mm(features, W1) + B1), W2) + B2)
```
If you did this correctly, you should see the output `tensor([[ 0.3171]])`.
The number of hidden units is a parameter of the network, often called a **hyperparameter** to differentiate it from the weights and bias parameters. As you'll see later when we discuss training a neural network, the more hidden units a network has, and the more layers, the better able it is to learn from data and make accurate predictions.
## Numpy to Torch and back
Special bonus section! PyTorch has a great feature for converting between Numpy arrays and Torch tensors. To create a tensor from a Numpy array, use `torch.from_numpy()`. To convert a tensor to a Numpy array, use the `.numpy()` method.
```
import numpy as np
a = np.random.rand(4,3)
a
b = torch.from_numpy(a)
b
b.numpy()
```
The memory is shared between the Numpy array and Torch tensor, so if you change the values in-place of one object, the other will change as well.
```
# Multiply PyTorch Tensor by 2, in place
b.mul_(2)
# Numpy array matches new values from Tensor
a
```
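If this sharing is not what you want, take an explicit copy before modifying; a small sketch:

```python
import numpy as np
import torch

a = np.ones(3)
b = torch.from_numpy(a).clone()  # .clone() copies the data instead of sharing it
b.mul_(2)                        # modifies only the copy
print(a)  # [1. 1. 1.] -- the original array is unchanged
```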
Quick study to investigate oscillations in reported infections in Germany. Here is the plot of the data in question:
```
import coronavirus
import numpy as np
import matplotlib.pyplot as plt
%config InlineBackend.figure_formats = ['svg']
coronavirus.display_binder_link("2020-05-10-notebook-weekly-fluctuations-in-data-from-germany.ipynb")
# get data
cases, deaths, country_label = coronavirus.get_country_data("Germany")
# plot daily changes
fig, ax = plt.subplots(figsize=(8, 4))
coronavirus.plot_daily_change(ax, cases, 'C1')
```
The working assumption is that during the weekend fewer numbers are captured or reported. The analysis below seems to confirm this.
We compute a discrete Fourier transform of the data, and expect a peak at a frequency corresponding to a period of 7 days.
## Data selection
We start with data from 1st March, as the numbers before that were small. It is convenient to take a number of days that is divisible by seven (for alignment of the frequency axis in Fourier space), so we choose 63 days from 1st of March:
```
data = cases['2020-03-01':'2020-05-03']
# compute daily change
diff = data.diff().dropna()
# plot data points (corresponding to bars in figure above:)
fig, ax = plt.subplots()
ax.plot(diff.index, diff, '-C1',
label='daily new cases Germany')
fig.autofmt_xdate() # avoid x-labels overlap
# How many data points (=days) have we got?
diff.size
diff2 = diff.resample("24h").asfreq() # ensure we have one data point every day
diff2.size
```
## Compute the frequency spectrum
```
fig, ax = plt.subplots()
# compute power density spectrum
change_F = abs(np.fft.fft(diff2))**2
# determine appropriate frequencies
n = change_F.size
freq = np.fft.fftfreq(n, d=1)
# We skip the values at indices 0, 1 and 2: these are large because we have a
# finite sequence and have not subtracted the mean from the data set.
# We also only plot the first n/2 frequencies: the second half of the spectrum
# contains the negative frequencies, which carry the same information as the
# positive ones.
ax.plot(freq[3:n//2], change_F[3:n//2], 'o-C3')
ax.set_xlabel('frequency [cycles per day]');
```
A signal with oscillations on a weekly basis would correspond to a frequency of 1/7 as frequency is measured in `per day`. We thus expect the peak above to be at 1/7 $\approx 0.1428$.
We can show this more easily by changing the frequency scale from cycles per day to cycles per week:
```
fig, ax = plt.subplots()
ax.plot(freq[3:n//2] * 7, change_F[3:n//2], 'o-C3')
ax.set_xlabel('frequency [cycles per week]');
```
In other words: there is a strong component of the data with a frequency corresponding to one week.
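The dominant frequency can also be extracted programmatically. Here is a minimal self-contained sketch using a synthetic signal with a known weekly period (63 daily samples, as above):

```python
import numpy as np

days = np.arange(63)                        # 63 daily samples (9 full weeks)
signal = np.sin(2 * np.pi * days / 7)       # synthetic signal with a weekly period
spectrum = np.abs(np.fft.fft(signal)) ** 2  # power density spectrum
freqs = np.fft.fftfreq(signal.size, d=1)    # frequencies in cycles per day

# search the positive frequencies only, skipping the DC component at index 0
pos = slice(1, signal.size // 2)
peak_freq = freqs[pos][np.argmax(spectrum[pos])]
print(peak_freq)  # 1/7 ≈ 0.1428 cycles per day
```

The same `argmax` over `change_F` would locate the weekly peak in the real case-number spectrum.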
This is the end of the notebook.
# Fourier transform basics
A little playground to explore properties of discrete Fourier transforms.
```
time = np.linspace(0, 4, 1000)
signal_frequency = 3 # choose this freely
signal = np.sin(time * 2 * np.pi * signal_frequency)
fourier = np.abs(np.fft.fft(signal))
# compute frequencies in fourier spectrum
n = signal.size
timestep = time[1] - time[0]
freqs = np.fft.fftfreq(n, d=timestep)
fig, ax = plt.subplots()
ax.plot(time, signal, 'oC9', label=f'signal, frequency={signal_frequency}')
ax.set_xlabel('time')
ax.legend()
fig, ax = plt.subplots()
ax.plot(freqs[0:n//2][:20], fourier[0:n//2][0:20], 'o-C8', label="Fourier transform")
ax.legend()
ax.set_xlabel('frequency');
coronavirus.display_binder_link("2020-05-10-notebook-weekly-fluctuations-in-data-from-germany.ipynb")
```
# Predict Happiness Source
- Importing the Packages
```
# importing packages
import pandas as pd
import numpy as np # For mathematical calculations
import seaborn as sns # For data visualization
import matplotlib.pyplot as plt # For plotting graphs
%matplotlib inline
import warnings # To ignore any warnings
warnings.filterwarnings("ignore")
import logging
import seaborn as sns
from sklearn.preprocessing import MinMaxScaler
#from sklearn.cross_validation import train_test_split
from sklearn.model_selection import train_test_split
from sklearn.neighbors import KNeighborsClassifier
from sklearn.metrics import f1_score
from sklearn.feature_extraction.text import TfidfVectorizer, CountVectorizer
from nltk.stem.porter import PorterStemmer
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
from sklearn.model_selection import cross_val_score
import nltk
import re
import codecs
import seaborn as sns
from sklearn.preprocessing import LabelEncoder, OneHotEncoder
#from sklearn.cross_validation import train_test_split
from sklearn.neighbors import KNeighborsClassifier
from sklearn.metrics import f1_score
import pandas as pd
import numpy as np
#import xgboost as xgb
from tqdm import tqdm
from sklearn.svm import SVC
from keras.models import Sequential
from keras.layers.recurrent import LSTM, GRU
from keras.layers.core import Dense, Activation, Dropout
from keras.layers.embeddings import Embedding
from keras.layers.normalization import BatchNormalization
from keras.utils import np_utils
from sklearn import preprocessing, decomposition, model_selection, metrics, pipeline
from sklearn.model_selection import GridSearchCV
from sklearn.feature_extraction.text import TfidfVectorizer, CountVectorizer
from sklearn.decomposition import TruncatedSVD
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.naive_bayes import MultinomialNB
from keras.layers import GlobalMaxPooling1D, Conv1D, MaxPooling1D, Flatten, Bidirectional, SpatialDropout1D
from keras.preprocessing import sequence, text
from keras.callbacks import EarlyStopping
from nltk import word_tokenize
from nltk.corpus import stopwords
stop_words = stopwords.words('english')
```
- Reading the data
```
train=pd.read_csv('hm_train.csv')
test=pd.read_csv('hm_test.csv')
submission=pd.read_csv('sample_submission.csv')
print(train.columns,test.columns,submission.columns)
print(train.head(),test.head(),submission.head())
print(train.shape,test.shape,submission.shape)
```
- Let’s make a copy of train and test data so that even if we have to make any changes in these datasets we would not lose the original datasets.
```
train_copy=pd.read_csv('hm_train.csv').copy()
test_copy=pd.read_csv('hm_test.csv').copy()
submission_copy=pd.read_csv('sample_submission.csv').copy()
```
# Univariate Analysis
```
print(train.dtypes,test.dtypes,submission.dtypes)
train['predicted_category'].value_counts(normalize=True)
# Read as percentage after multiplying by 100
train['predicted_category'].value_counts(normalize=True).plot.bar()
```
- exercise contributes approx. 2%, nature 3.5%, leisure 7.5%, enjoy_the_moment 10%, bonding 10%, achievement 34%, and affection 34.5% of the population sample.
Looking at the datasets, we identified 3 data types:
- Continuous: reflection_period, num_sentence
- Categorical: predicted_category
- Text: cleaned_hm
Let's explore the continuous data types. We use bar graphs for categorical variables, and histograms or scatter plots for continuous variables.
```
# reflection_period
plt.figure(figsize=(6, 6))
sns.countplot(train["reflection_period"])
plt.title('reflection_period')
plt.show()
# num_sentence
plt.figure(figsize=(10, 6))
sns.countplot(train["num_sentence"])
plt.title('num_sentence')
plt.show()
# predicted_category
plt.figure(figsize=(13, 6))
sns.countplot(train["predicted_category"])
plt.title('predicted_category')
plt.show()
# Pairplot for cross visualisation of continuous variables
plt.figure(figsize=(30,30))
sns.pairplot(train, diag_kind='kde');
```
Data Preprocessing
- Checking Missing Values
```
def missing_value(df):
    total = df.isnull().sum().sort_values(ascending=False)
    percent = (df.isnull().sum()/df.isnull().count()*100).sort_values(ascending=False)
    missing_df = pd.concat([total, percent], axis=1, keys=['Total', 'Percent'])
    return missing_df
mis_train = missing_value(train)
mis_train
mis_test = missing_value(test)
mis_test
```
Text Preprocessing
- Let's try to understand the writing style if possible :)
```
grouped_df = train.groupby('predicted_category')
for name, group in grouped_df:
    print("Text : ", name)
    cnt = 0
    for ind, row in group.iterrows():
        print(row["cleaned_hm"])
        cnt += 1
        if cnt == 5:
            break
    print("\n")
```
There are not many special characters, but some are present, like emoticon expressions (i.e. ":)"), "vs", etc.
### Feature Engineering:
Now let us come try to do some feature engineering. This consists of two main parts.
- Meta features - features that are extracted from the text like number of words, number of stop words, number of punctuations etc
- Text based features - features directly based on the text / words like frequency, svd, word2vec etc.
#### Meta Features:
We will start with creating meta featues and see how good are they at predicting the happiness source. The feature list is as follows:
- Number of words in the text
- Number of unique words in the text
- Number of characters in the text
- Number of stopwords
- Number of punctuations
- Number of upper case words
- Number of title case words
- Average length of the words
Feature Engineering function
```
import string
def feature_engineering(df):
    ## Number of words in the text ##
    df["num_words"] = df['cleaned_hm'].apply(lambda x: len(str(x).split()))
    ## Number of unique words in the text ##
    df["num_unique_words"] = df['cleaned_hm'].apply(lambda x: len(set(str(x).split())))
    ## Number of characters in the text ##
    df["num_chars"] = df['cleaned_hm'].apply(lambda x: len(str(x)))
    ## Number of stopwords in the text ##
    from nltk.corpus import stopwords
    eng_stopwords = stopwords.words('english')
    df["num_stopwords"] = df['cleaned_hm'].apply(lambda x: len([w for w in str(x).lower().split() if w in eng_stopwords]))
    ## Number of punctuations in the text ##
    df["num_punctuations"] = df['cleaned_hm'].apply(lambda x: len([c for c in str(x) if c in string.punctuation]))
    ## Number of upper case words in the text ##
    df["num_words_upper"] = df['cleaned_hm'].apply(lambda x: len([w for w in str(x).split() if w.isupper()]))
    ## Number of title case words in the text ##
    df["num_words_title"] = df['cleaned_hm'].apply(lambda x: len([w for w in str(x).split() if w.istitle()]))
    ## Average length of the words in the text ##
    df["mean_word_len"] = df['cleaned_hm'].apply(lambda x: np.mean([len(w) for w in str(x).split()]))
    df = pd.concat([df, df["num_words"], df["num_unique_words"], df["num_chars"], df["num_stopwords"],
                    df["num_punctuations"], df["num_words_upper"], df["num_words_title"], df["mean_word_len"]], axis=1)
    #X = dataset.loc[:,['Transaction-Type','Complaint-reason','Company-response','Consumer-disputes','delayed','converted_text','convertion_language']]
    return df
train = feature_engineering(train)
test = feature_engineering(test)
train.head()
```
Let us now plot some of our new variables to see of they will be helpful in predictions.
CLEANING THE TEXT !!!
```
def preprocess_text(df):
    """Preprocess the text: stemming, stopword removal, common and rare
    word removal, and removal of unwanted characters."""
    # removing non-letter symbols
    df = df.apply(lambda x: "".join(re.sub(r"[^A-Za-z\s]", '', str(x))))
    # lower casing the text
    df = df.apply(lambda x: " ".join(x.lower() for x in x.split()))
    # Removing punctuation: list of tokens that need to be removed
    punc = ['.', ',', '"', "'", '?', '#', '!', ':', 'vs', ':)', ';', '(', ')', '[', ']', '{', '}', "%", '/', '<', '>', 'br', '�', '^', 'XX', 'XXXX', 'xxxx', 'xx']
    df = df.apply(lambda x: " ".join(x for x in x.split() if x not in punc))
    import nltk
    nltk.download('stopwords')
    # removal of stopwords
    from nltk.corpus import stopwords
    stop = stopwords.words('english')
    df = df.apply(lambda x: " ".join(x for x in x.split() if x not in stop))
    # common words removal
    freq_df = pd.Series(' '.join(df).split()).value_counts()[:10]
    freq_df = list(freq_df.index)
    df = df.apply(lambda x: " ".join(x for x in x.split() if x not in freq_df))
    # rare words removal
    freq_df_rare = pd.Series(' '.join(df).split()).value_counts()[-10:]
    freq_df_rare = list(freq_df_rare.index)
    df = df.apply(lambda x: " ".join(x for x in x.split() if x not in freq_df_rare))
    # STEMMING
    st = PorterStemmer()
    df = df.apply(lambda x: " ".join([st.stem(w) for w in x.split()]))
    # WordNet lexical database for lemmatization
    #from nltk.stem import WordNetLemmatizer
    #lem = WordNetLemmatizer()
    #df = df.apply(lambda x: " ".join([lem.lemmatize(w) for w in x.split()]))
    return df
train['cleaned_hm'] = preprocess_text(train['cleaned_hm'])
test['cleaned_hm'] = preprocess_text(test['cleaned_hm'])
# Removing the 'h' or 'm' suffix (note: str.rstrip strips a set of characters from the right, not a literal suffix)
train['reflection_period'] = train['reflection_period'].str.rstrip('h | m')
test['reflection_period'] = test['reflection_period'].str.rstrip('h | m')
train['reflection_period'].fillna(train['reflection_period'].mode()[0], inplace=True)
train['cleaned_hm'].fillna(train['cleaned_hm'].mode()[0], inplace=True)
train['num_sentence'].fillna(train['num_sentence'].mode()[0], inplace=True)
train['predicted_category'].fillna(train['predicted_category'].mode()[0], inplace=True)
test['reflection_period'].fillna(test['reflection_period'].mode()[0], inplace=True)
test['cleaned_hm'].fillna(test['cleaned_hm'].mode()[0], inplace=True)
test['num_sentence'].fillna(test['num_sentence'].mode()[0], inplace=True)
# Now we will convert the target variable with LabelEncoder
y_train = train.loc[:,['predicted_category']]
labelencoder1 = LabelEncoder()
labelencoder1.fit(y_train.values.ravel())  # LabelEncoder expects a 1-D array
y_train = labelencoder1.transform(y_train.values.ravel())
train.columns
x = train.loc[:,["num_sentence","reflection_period",'num_sentence',
'num_words', 'num_unique_words', 'num_chars',
'num_stopwords', 'num_punctuations', 'num_words_upper',
'num_words_title', 'mean_word_len', 'num_words', 'num_unique_words',
'num_chars', 'num_stopwords', 'num_punctuations', 'num_words_upper',
'num_words_title', 'mean_word_len']]
x1 = test.loc[:,["num_sentence","reflection_period",'num_sentence',
'num_words', 'num_unique_words', 'num_chars',
'num_stopwords', 'num_punctuations', 'num_words_upper',
'num_words_title', 'mean_word_len', 'num_words', 'num_unique_words',
'num_chars', 'num_stopwords', 'num_punctuations', 'num_words_upper',
'num_words_title', 'mean_word_len']]
from sklearn.pipeline import Pipeline
from sklearn.feature_extraction.text import TfidfTransformer
import gensim
from gensim.models import Word2Vec
wv = gensim.models.KeyedVectors.load_word2vec_format("GoogleNews-vectors-negative300.bin.gz",limit=50000, binary=True)
wv.init_sims(replace=True)
from itertools import islice
list(islice(wv.vocab, 13030, 13050))
import logging
# note: wv.vocab and wv.syn0norm are gensim 3.x attributes (renamed in gensim 4)
def word_averaging(wv, words):
    all_words, mean = set(), []
    for word in words:
        if isinstance(word, np.ndarray):
            mean.append(word)
        elif word in wv.vocab:
            mean.append(wv.syn0norm[wv.vocab[word].index])
            all_words.add(wv.vocab[word].index)
    if not mean:
        logging.warning("cannot compute similarity with no input %s", words)
        # FIXME: remove these examples in pre-processing
        return np.zeros(wv.vector_size,)
    mean = gensim.matutils.unitvec(np.array(mean).mean(axis=0)).astype(np.float32)
    return mean
def word_averaging_list(wv, text_list):
    return np.vstack([word_averaging(wv, post) for post in text_list])
def w2v_tokenize_text(text):
    tokens = []
    for sent in nltk.sent_tokenize(text, language='english'):
        for word in nltk.word_tokenize(sent, language='english'):
            if len(word) < 2:
                continue
            tokens.append(word)
    return tokens
nltk.download('punkt')
alldata = pd.concat([train, test], axis=0, ignore_index=True)  # stack rows (train on top of test), not columns
# initialise the functions - we'll create separate models for each type.
countvec = CountVectorizer(analyzer='word', ngram_range = (1,2), max_features=500)
tfidfvec = TfidfVectorizer(analyzer='word', ngram_range = (1,2), max_features=500)
# create features
bagofwords = countvec.fit_transform(alldata['cleaned_hm'])
tfidfdata = tfidfvec.fit_transform(alldata['cleaned_hm'])
# create dataframe for features
bow_df = pd.DataFrame(bagofwords.todense())
tfidf_df = pd.DataFrame(tfidfdata.todense())
# set column names
bow_df.columns = ['col'+ str(x) for x in bow_df.columns]
tfidf_df.columns = ['col' + str(x) for x in tfidf_df.columns]
# create separate data frame for bag of words and tf-idf
bow_df_train = bow_df[:len(train)]
bow_df_test = bow_df[len(train):]
tfid_df_train = tfidf_df[:len(train)]
tfid_df_test = tfidf_df[len(train):]
# split the merged data file into train and test respectively
train_feats = alldata[~pd.isnull(alldata.predicted_category)]
test_feats = alldata[pd.isnull(alldata.predicted_category)]
# merge count (bag of word) features into train
x_train = pd.concat([x, bow_df_train], axis = 1)
x_test = pd.concat([x1, bow_df_test], axis=1)
x_test.reset_index(drop=True, inplace=True)
from sklearn.linear_model import LogisticRegression
logreg = LogisticRegression(n_jobs=-1, penalty='l1', C=1.0, random_state=0, solver='liblinear')
# fit on the label-encoded target, not the text column
logreg = logreg.fit(x_train, y_train)
y_pred = logreg.predict(x_test)
ans = labelencoder1.inverse_transform(y_pred)
type(ans)
ans = pd.DataFrame(ans)
id1=test.loc[:,['hmid']]
final_ans = [id1, ans]
final_ans = pd.concat(final_ans, axis=1)
final_ans.columns = ['hmid', 'predicted_category']
final_ans.to_csv('vdemo_HK.csv',index=False)
```
```
from scipy.special import expit
from rbm import RBM
from sampler import VanillaSampler, PartitionedSampler
from trainer import VanillaTrainier
from performance import Result
import numpy as np
import datasets, performance, plotter, mnist, pickle, rbm, os, logging
logger = logging.getLogger()
# Set the logging level to logging.DEBUG to
logger.setLevel(logging.INFO)
%matplotlib inline
models_names = [
"one","two","three","four","five","six","seven", "eight", "nine", "bar","two_three"]
# RBM's keyed by a label of what they were trained on
models = datasets.load_models(models_names)
data_set_size = 40
number_gibbs_alternations = 1000
# the model we will be `corrupting` the others with, in this case we are adding bars to the digit models
corruption_model_name = "bar"
def result_key(data_set_size, num_gibbs_alternations, model_name, corruption_name):
return '{}Size_{}nGibbs_{}Model_{}Corruption'.format(data_set_size, num_gibbs_alternations, model_name, corruption_name)
def results_for_models(models, corruption_model_name, data_set_size, num_gibbs_alternations):
results = {}
for model_name in models:
if model_name != corruption_model_name:  # compare by value, not identity
key = result_key(data_set_size, num_gibbs_alternations, model_name, corruption_model_name)
logging.info("Getting result for {}".format(model_name))
model_a = models[model_name]
model_b = models[corruption_model_name]
model_a_data = model_a.visible[:data_set_size]#visibles that model_a was fit to.
model_b_data = model_b.visible[:data_set_size]#visibles that model_b was fit to.
r = Result(data_set_size, num_gibbs_alternations, model_a, model_b,model_a_data, model_b_data)
r.calculate_result()
results[key] = r
return results
results = results_for_models(models, corruption_model_name, data_set_size, number_gibbs_alternations)
for key in models:
# plotter.plot(results[key].composite)
# plotter.plot(results[key].visibles_for_stored_hidden(9)[0])
# plotter.plot(results[key].vis_van_a)
plotter.plot(models[key].visible[:40])
```
# In the cell below
In the previous cell I calculated the image-wise log-likelihood score of the partitioned and vanilla sampling techniques, so I have a score for each image. I did this for all the MNIST digit models that were 'corrupted' by the bar images: the RBMs trained on the digits 1-9 and an RBM trained on 2's and 3's.
The `wins` for a given model are the images where the partitioned technique scored better than the vanilla sampling technique.
Conversely, `losses` are the images where the vanilla technique scored better.
Intuitively, `ties` are the images where they scored the same, which can really only occur when the correction is zero or ultimately cancels out.
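The tallies can be sketched with hypothetical per-image scores (the lists and values below are made up; the real scores come from the `Result` objects):

```python
# Hypothetical per-image log-likelihood scores (higher is better)
partitioned = [-10.2, -8.1, -9.5, -7.7]
vanilla     = [-11.0, -8.1, -9.0, -8.3]

wins   = sum(p > v for p, v in zip(partitioned, vanilla))   # partitioned scored better
losses = sum(p < v for p, v in zip(partitioned, vanilla))   # vanilla scored better
ties   = sum(p == v for p, v in zip(partitioned, vanilla))  # identical scores
print(wins, losses, ties)  # 2 1 1
```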
```
for key in results:
logging.info("Plotting, win, lose and tie images for the {}".format(key))
results[key].plot_various_images()
```
# Thoughts
So on a dataset of size 50, with 100 Gibbs alternations, we see in all cases that for the digit models 1, 2, 3, ..., 9 the partitioned sampling technique does better than or the same as the vanilla technique more often than not. Let's try some different configurations.
```
results.update(results_for_models(models, corruption_model_name, 400, 500))
```
```
results.update(results_for_models(models, corruption_model_name, 10, 1))
```
```
results
# with open('results_dict', 'wb') as f3le:
# pickle.dump(results,f3le, protocol = None)
with open('results_dict', 'rb') as f4le:
results = pickle.load(f4le)
# for key in results:
# if key.startswith('400'):
# logging.info("Results for hiddens")
# r = results[key].stored_hiddens
# for i in range(len(r)):
# print(results[key].imagewise_score())
```
```
!pip install matplotlib
import os
import argparse
import time
import numpy as np
import torch
import torch.nn as nn
import torch.optim as optim
class Args:
method = 'dopri5' # choices=['dopri5', 'adams']
data_size = 1000
batch_time = 10
batch_size = 20
niters = 2000
test_freq = 20
viz = True
gpu = True
adjoint = False
args=Args()
if args.adjoint:
from torchdiffeq import odeint_adjoint as odeint
else:
from torchdiffeq import odeint
device = torch.device('cuda' if torch.cuda.is_available() and args.gpu else 'cpu')  # args.gpu is a boolean flag, not a device index
true_y0 = torch.tensor([[2., 0.]])
t = torch.linspace(0., 25., args.data_size)
true_A = torch.tensor([[-0.1, 2.0], [-2.0, -0.1]])
class Lambda(nn.Module):
def forward(self, t, y):
return torch.mm(y**3, true_A)
with torch.no_grad():
true_y = odeint(Lambda(), true_y0, t, method='dopri5')
def get_batch():
s = torch.from_numpy(np.random.choice(np.arange(args.data_size - args.batch_time, dtype=np.int64), args.batch_size, replace=False))
batch_y0 = true_y[s] # (M, D)
batch_t = t[:args.batch_time] # (T)
batch_y = torch.stack([true_y[s + i] for i in range(args.batch_time)], dim=0) # (T, M, D)
return batch_y0, batch_t, batch_y
def makedirs(dirname):
if not os.path.exists(dirname):
os.makedirs(dirname)
def visualize(true_y, pred_y, odefunc, itr):
import matplotlib.pyplot as plt
fig = plt.figure(figsize=(12, 4), facecolor='white')
ax_traj = fig.add_subplot(131, frameon=False)
ax_phase = fig.add_subplot(132, frameon=False)
ax_vecfield = fig.add_subplot(133, frameon=False)
makedirs('png')
if args.viz:
ax_traj.cla()
ax_traj.set_title('Trajectories')
ax_traj.set_xlabel('t')
ax_traj.set_ylabel('x,y')
ax_traj.plot(t.numpy(), true_y.numpy()[:, 0, 0], t.numpy(), true_y.numpy()[:, 0, 1], 'g-')
ax_traj.plot(t.numpy(), pred_y.numpy()[:, 0, 0], '--', t.numpy(), pred_y.numpy()[:, 0, 1], 'b--')
ax_traj.set_xlim(t.min(), t.max())
ax_traj.set_ylim(-2, 2)
ax_traj.legend()
ax_phase.cla()
ax_phase.set_title('Phase Portrait')
ax_phase.set_xlabel('x')
ax_phase.set_ylabel('y')
ax_phase.plot(true_y.numpy()[:, 0, 0], true_y.numpy()[:, 0, 1], 'g-')
ax_phase.plot(pred_y.numpy()[:, 0, 0], pred_y.numpy()[:, 0, 1], 'b--')
ax_phase.set_xlim(-2, 2)
ax_phase.set_ylim(-2, 2)
ax_vecfield.cla()
ax_vecfield.set_title('Learned Vector Field')
ax_vecfield.set_xlabel('x')
ax_vecfield.set_ylabel('y')
y, x = np.mgrid[-2:2:21j, -2:2:21j]
dydt = odefunc(0, torch.Tensor(np.stack([x, y], -1).reshape(21 * 21, 2))).cpu().detach().numpy()
mag = np.sqrt(dydt[:, 0]**2 + dydt[:, 1]**2).reshape(-1, 1)
dydt = (dydt / mag)
dydt = dydt.reshape(21, 21, 2)
ax_vecfield.streamplot(x, y, dydt[:, :, 0], dydt[:, :, 1], color="black")
ax_vecfield.set_xlim(-2, 2)
ax_vecfield.set_ylim(-2, 2)
fig.tight_layout()
plt.savefig('png/{:03d}'.format(itr))
plt.draw()
plt.pause(0.001);
class ODEFunc(nn.Module):
def __init__(self):
super(ODEFunc, self).__init__()
self.net = nn.Sequential(
nn.Linear(2, 50),
nn.Tanh(),
nn.Linear(50, 2),
)
for m in self.net.modules():
if isinstance(m, nn.Linear):
nn.init.normal_(m.weight, mean=0, std=0.1)
nn.init.constant_(m.bias, val=0)
def forward(self, t, y):
return self.net(y**3)
class RunningAverageMeter(object):
"""Computes and stores the average and current value"""
def __init__(self, momentum=0.99):
self.momentum = momentum
self.reset()
def reset(self):
self.val = None
self.avg = 0
def update(self, val):
if self.val is None:
self.avg = val
else:
self.avg = self.avg * self.momentum + val * (1 - self.momentum)
self.val = val
%matplotlib inline
ii = 0
func = ODEFunc()
optimizer = optim.RMSprop(func.parameters(), lr=1e-3)
end = time.time()
time_meter = RunningAverageMeter(0.97)
loss_meter = RunningAverageMeter(0.97)
for itr in range(1, args.niters + 1):
optimizer.zero_grad()
batch_y0, batch_t, batch_y = get_batch()
pred_y = odeint(func, batch_y0, batch_t)
loss = torch.mean(torch.abs(pred_y - batch_y))
loss.backward()
optimizer.step()
time_meter.update(time.time() - end)
loss_meter.update(loss.item())
if itr % args.test_freq == 0:
with torch.no_grad():
pred_y = odeint(func, true_y0, t)
loss = torch.mean(torch.abs(pred_y - true_y))
print('Iter {:04d} | Total Loss {:.6f}'.format(itr, loss.item()))
visualize(true_y, pred_y, func, ii)
ii += 1
end = time.time()
```
```
from sklearn.preprocessing import StandardScaler
from sklearn.preprocessing import MinMaxScaler
from sklearn.model_selection import train_test_split
from sklearn.metrics import mean_squared_error, r2_score
from sklearn.metrics import mean_absolute_error
from sklearn.model_selection import GridSearchCV
from sklearn.model_selection import KFold
from sklearn.model_selection import ShuffleSplit
from sklearn.metrics import accuracy_score
from keras.layers import Dense
from keras.models import Sequential
from keras.optimizers import SGD
from matplotlib import pyplot as plt
import matplotlib as mpl
import seaborn as sns
import numpy as np
import pandas as pd
import category_encoders as ce
import os
import pickle
import gc
from tqdm import tqdm
import pickle
from sklearn.svm import SVR
from sklearn.linear_model import LinearRegression
from sklearn import linear_model
from sklearn.neighbors import KNeighborsRegressor
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.tree import DecisionTreeRegressor
from sklearn.ensemble import RandomForestRegressor
from sklearn.ensemble import ExtraTreesRegressor
from sklearn import ensemble
import xgboost as xgb
def encode_text_features(encode_decode, data_frame, encoder_isa=None, encoder_mem_type=None):
# Implement Categorical OneHot encoding for ISA and mem-type
if encode_decode == 'encode':
encoder_isa = ce.one_hot.OneHotEncoder(cols=['isa'])
encoder_mem_type = ce.one_hot.OneHotEncoder(cols=['mem-type'])
encoder_isa.fit(data_frame, verbose=1)
df_new1 = encoder_isa.transform(data_frame)
encoder_mem_type.fit(df_new1, verbose=1)
df_new = encoder_mem_type.transform(df_new1)
encoded_data_frame = df_new
else:
df_new1 = encoder_isa.transform(data_frame)
df_new = encoder_mem_type.transform(df_new1)
encoded_data_frame = df_new
return encoded_data_frame, encoder_isa, encoder_mem_type
def absolute_percentage_error(Y_test, Y_pred):
error = 0
for i in range(len(Y_test)):
if Y_test[i] != 0:
error = error + abs(Y_test[i] - Y_pred[i]) / Y_test[i]
error = error / len(Y_test)
return error
def process_all(dataset_path, dataset_name, path_for_saving_data):
################## Data Preprocessing ######################
df = pd.read_csv(dataset_path)
encoded_data_frame, encoder_isa, encoder_mem_type = encode_text_features('encode', df,
encoder_isa = None, encoder_mem_type=None)
# total_data = encoded_data_frame.drop(columns = ['arch', 'arch1'])
total_data = encoded_data_frame.drop(columns = ['arch', 'sys','sysname','executable','PS'])
total_data = total_data.fillna(0)
X_columns = total_data.drop(columns = 'runtime').columns
X = total_data.drop(columns = ['runtime']).to_numpy()
Y = total_data['runtime'].to_numpy()
# X_columns = total_data.drop(columns = 'PS').columns
# X = total_data.drop(columns = ['runtime','PS']).to_numpy()
# Y = total_data['runtime'].to_numpy()
print('Data X and Y shape', X.shape, Y.shape)
X_train, X_test, Y_train, Y_test = train_test_split(X, Y, test_size=0.2, random_state=42)
print('Train Test Split:', X_train.shape, X_test.shape, Y_train.shape, Y_test.shape)
scaler = StandardScaler()
X_train = scaler.fit_transform(X_train)
X_test = scaler.transform(X_test)  # reuse the scaler fitted on the training data; refitting on the test set leaks information
################## Data Preprocessing ######################
# Put best models here using grid search
# 1. SVR
best_svr =SVR(C=1000, cache_size=200, coef0=0.0, degree=3, epsilon=0.1, gamma=0.1,
kernel='rbf', max_iter=-1, shrinking=True, tol=0.001, verbose=False)
# 2. LR
best_lr = LinearRegression(copy_X=True, fit_intercept=True, n_jobs=None, normalize=True)
# 3. RR
best_rr = linear_model.Ridge(alpha=10, copy_X=True, fit_intercept=True, max_iter=None, normalize=False,
random_state=None, solver='svd', tol=0.001)
# 4. KNN
best_knn = KNeighborsRegressor(algorithm='auto', leaf_size=30, metric='minkowski',
metric_params=None, n_jobs=None, n_neighbors=2, p=1,
weights='distance')
# 5. GPR
best_gpr = GaussianProcessRegressor(alpha=0.01, copy_X_train=True, kernel=None,
n_restarts_optimizer=0, normalize_y=True,
optimizer='fmin_l_bfgs_b', random_state=None)
# 6. Decision Tree
best_dt = DecisionTreeRegressor(criterion='mse', max_depth=7, max_features='auto',
max_leaf_nodes=None, min_impurity_decrease=0.0,
min_impurity_split=None, min_samples_leaf=1,
min_samples_split=2, min_weight_fraction_leaf=0.0,
presort=False, random_state=None, splitter='best')
# 7. Random Forest
best_rf = RandomForestRegressor(bootstrap=True, criterion='friedman_mse', max_depth=7,
max_features='auto', max_leaf_nodes=None,
min_impurity_decrease=0.0, min_impurity_split=None,
min_samples_leaf=1, min_samples_split=2,
min_weight_fraction_leaf=0.0, n_estimators=10,
n_jobs=None, oob_score=False, random_state=None,
verbose=0, warm_start=False)
# 8. Extra Trees Regressor
best_etr = ExtraTreesRegressor(bootstrap=False, criterion='friedman_mse', max_depth=15,
max_features='auto', max_leaf_nodes=None,
min_impurity_decrease=0.0, min_impurity_split=None,
min_samples_leaf=1, min_samples_split=2,
min_weight_fraction_leaf=0.0, n_estimators=200, n_jobs=None,
oob_score=False, random_state=None, verbose=0,
warm_start=True)
# 9. GBR
best_gbr = ensemble.GradientBoostingRegressor(alpha=0.9, criterion='mae', init=None,
learning_rate=0.1, loss='lad', max_depth=None,
max_features=None, max_leaf_nodes=None,
min_impurity_decrease=0.0, min_impurity_split=None,
min_samples_leaf=1, min_samples_split=2,
min_weight_fraction_leaf=0.0, n_estimators=100,
n_iter_no_change=None, presort='auto',
random_state=42, subsample=1.0, tol=0.0001,
validation_fraction=0.1, verbose=0, warm_start=False)
# 10. XGB
best_xgb = xgb.XGBRegressor(alpha=10, base_score=0.5, booster='gbtree', colsample_bylevel=1,
colsample_bynode=1, colsample_bytree=0.3, gamma=0,
importance_type='gain', learning_rate=0.5, max_delta_step=0,
max_depth=10, min_child_weight=1, missing=None, n_estimators=100,
n_jobs=1, nthread=None, objective='reg:linear', random_state=0,
reg_alpha=0, reg_lambda=1, scale_pos_weight=1, seed=None,
silent=None, subsample=1, validate_parameters=False, verbosity=1)
best_models = [best_svr, best_lr, best_rr, best_knn, best_gpr, best_dt, best_rf, best_etr, best_gbr, best_xgb]
best_models_name = ['best_svr', 'best_lr', 'best_rr', 'best_knn', 'best_gpr', 'best_dt', 'best_rf', 'best_etr'
, 'best_gbr', 'best_xgb']
k = 0
df = pd.DataFrame(columns = ['model_name', 'dataset_name', 'r2', 'mse', 'mape', 'mae' ])
for model in best_models:
print('Running model number:', k+1, 'with Model Name: ', best_models_name[k])
r2_scores = []
mse_scores = []
mape_scores = []
mae_scores = []
# cv = KFold(n_splits = 10, random_state = 42, shuffle = True)
cv = ShuffleSplit(n_splits=10, random_state=0, test_size = 0.4)
# print(cv)
fold = 1
for train_index, test_index in cv.split(X):
model_orig = model
# print("Train Index: ", train_index, "\n")
# print("Test Index: ", test_index)
X_train_fold, X_test_fold, Y_train_fold, Y_test_fold = X[train_index], X[test_index], Y[train_index], Y[test_index]
# print(X_train_fold.shape, X_test_fold.shape, Y_train_fold.shape, Y_test_fold.shape)
model_orig.fit(X_train_fold, Y_train_fold)
Y_pred_fold = model_orig.predict(X_test_fold)
# save the folds to disk
data = [X_train_fold, X_test_fold, Y_train_fold, Y_test_fold]
filename = path_for_saving_data + '/folds_data/' + best_models_name[k] +'_'+ str(fold) + '.pickle'
# pickle.dump(data, open(filename, 'wb'))
# save the model to disk
# filename = path_for_saving_data + '/models_data/' + best_models_name[k] + '_' + str(fold) + '.sav'
fold = fold + 1
# pickle.dump(model_orig, open(filename, 'wb'))
# some time later...
'''
# load the model from disk
loaded_model = pickle.load(open(filename, 'rb'))
result = loaded_model.score(X_test, Y_test)
print(result)
'''
# scores.append(best_svr.score(X_test, y_test))
'''
plt.figure()
plt.plot(Y_test_fold, 'b')
plt.plot(Y_pred_fold, 'r')
'''
# print('Accuracy =',accuracy_score(Y_test, Y_pred))
r2_scores.append(r2_score(Y_test_fold, Y_pred_fold))
mse_scores.append(mean_squared_error(Y_test_fold, Y_pred_fold))
mape_scores.append(absolute_percentage_error(Y_test_fold, Y_pred_fold))
mae_scores.append(mean_absolute_error(Y_test_fold, Y_pred_fold))
df = df.append({'model_name': best_models_name[k], 'dataset_name': dataset_name
, 'r2': r2_scores, 'mse': mse_scores, 'mape': mape_scores, 'mae': mae_scores }, ignore_index=True)
k = k + 1
print(df.head())
df.to_csv(r'runtimes_final_npb_ep_60.csv')
dataset_name = 'runtimes_final_npb_ep'
dataset_path = 'C:\\Users\\Rajat\\Desktop\\DESKTOP_15_05_2020\\Evaluating-Machine-Learning-Models-for-Disparate-Computer-Systems-Performance-Prediction\\Dataset_CSV\\PhysicalSystems\\runtimes_final_npb_ep.csv'
path_for_saving_data = 'data\\' + dataset_name
process_all(dataset_path, dataset_name, path_for_saving_data)
df = pd.DataFrame(columns = ['model_name', 'dataset_name', 'r2', 'mse', 'mape', 'mae' ])
df
```
---
_You are currently looking at **version 1.0** of this notebook. To download notebooks and datafiles, as well as get help on Jupyter notebooks in the Coursera platform, visit the [Jupyter Notebook FAQ](https://www.coursera.org/learn/python-text-mining/resources/d9pwm) course resource._
---
# Assignment 2 - Introduction to NLTK
In part 1 of this assignment you will use nltk to explore the Herman Melville novel Moby Dick. Then in part 2 you will create a spelling recommender function that uses nltk to find words similar to the misspelling.
## Part 1 - Analyzing Moby Dick
```
import nltk
import pandas as pd
import numpy as np
nltk.download('punkt')
nltk.download('wordnet')
nltk.download('averaged_perceptron_tagger')
# If you would like to work with the raw text you can use 'moby_raw'
with open('moby.txt', 'r') as f:
moby_raw = f.read()
# If you would like to work with the novel in nltk.Text format you can use 'text1'
moby_tokens = nltk.word_tokenize(moby_raw)
text1 = nltk.Text(moby_tokens)
```
### Example 1
How many tokens (words and punctuation symbols) are in text1?
*This function should return an integer.*
```
def example_one():
return len(nltk.word_tokenize(moby_raw)) # or alternatively len(text1)
example_one()
```
### Example 2
How many unique tokens (unique words and punctuation) does text1 have?
*This function should return an integer.*
```
def example_two():
return len(set(nltk.word_tokenize(moby_raw))) # or alternatively len(set(text1))
example_two()
```
### Example 3
After lemmatizing the verbs, how many unique tokens does text1 have?
*This function should return an integer.*
```
from nltk.stem import WordNetLemmatizer
def example_three():
lemmatizer = WordNetLemmatizer()
lemmatized = [lemmatizer.lemmatize(w,'v') for w in text1]
return len(set(lemmatized))
example_three()
```
### Question 1
What is the lexical diversity of the given text input? (i.e. ratio of unique tokens to the total number of tokens)
*This function should return a float.*
```
def answer_one():
unique = len(set(nltk.word_tokenize(moby_raw))) # or alternatively len(set(text1))
tot = len(nltk.word_tokenize(moby_raw))
return unique/tot
answer_one()
```
### Question 2
What percentage of tokens is 'whale' or 'Whale'?
*This function should return a float.*
```
def answer_two():
tot = nltk.word_tokenize(moby_raw)
count = [w for w in tot if w == "Whale" or w == "whale"]
return 100*len(count)/len(tot)
answer_two()
```
### Question 3
What are the 20 most frequently occurring (unique) tokens in the text? What is their frequency?
*This function should return a list of 20 tuples where each tuple is of the form `(token, frequency)`. The list should be sorted in descending order of frequency.*
```
def answer_three():
tot = nltk.word_tokenize(moby_raw)
dist = nltk.FreqDist(tot)
return dist.most_common(20)
answer_three()
```
### Question 4
What tokens have a length of greater than 5 and frequency of more than 150?
*This function should return an alphabetically sorted list of the tokens that match the above constraints. To sort your list, use `sorted()`*
```
def answer_four():
tot = nltk.word_tokenize(moby_raw)
dist = nltk.FreqDist(tot)
count = [w for w in dist if len(w)>5 and dist[w]>150]
return sorted(count)
answer_four()
```
### Question 5
Find the longest word in text1 and that word's length.
*This function should return a tuple `(longest_word, length)`.*
```
def answer_five():
tot = nltk.word_tokenize(moby_raw)
dist = nltk.FreqDist(tot)
max_length = max([len(w) for w in dist])
word = [w for w in dist if len(w)==max_length]
return (word[0],max_length)
answer_five()
```
### Question 6
What unique words have a frequency of more than 2000? What is their frequency?
Hint: you may want to use `isalpha()` to check if the token is a word and not punctuation.
*This function should return a list of tuples of the form `(frequency, word)` sorted in descending order of frequency.*
```
def answer_six():
tot = nltk.word_tokenize(moby_raw)
dist = nltk.FreqDist(tot)
words = [w for w in dist if dist[w]>2000 and w.isalpha()]
words_count = [dist[w] for w in words]
ans = list(zip(words_count,words))
ans.sort(key=lambda tup: tup[0],reverse=True)
return ans
answer_six()
```
### Question 7
What is the average number of tokens per sentence?
*This function should return a float.*
```
def answer_seven():
tot = nltk.sent_tokenize(moby_raw)
tot1 = nltk.word_tokenize(moby_raw)
return len(tot1)/len(tot)
answer_seven()
```
### Question 8
What are the 5 most frequent parts of speech in this text? What is their frequency?
*This function should return a list of tuples of the form `(part_of_speech, frequency)` sorted in descending order of frequency.*
```
def answer_eight():
tot = nltk.word_tokenize(moby_raw)
dist1 = nltk.pos_tag(tot)
frequencies = nltk.FreqDist([tag for (word, tag) in dist1])
return frequencies.most_common(5)
answer_eight()
```
## Part 2 - Spelling Recommender
For this part of the assignment you will create three different spelling recommenders, that each take a list of misspelled words and recommends a correctly spelled word for every word in the list.
For every misspelled word, the recommender should find the word in `correct_spellings` that has the shortest distance*, and starts with the same letter as the misspelled word, and return that word as a recommendation.
*Each of the three different recommenders will use a different distance measure (outlined below).
Each of the recommenders should provide recommendations for the three default words provided: `['cormulent', 'incendenece', 'validrate']`.
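Before wiring up nltk's `jaccard_distance`, it may help to see the metric computed by hand on character trigrams; 'corpulent' below is just a plausible neighbour chosen for illustration, not the graded answer:

```python
def char_ngrams(word, n=3):
    """Return the set of character n-grams of a word."""
    return {word[i:i + n] for i in range(len(word) - n + 1)}

a = char_ngrams("cormulent")
b = char_ngrams("corpulent")
# Jaccard distance = 1 - |intersection| / |union|
distance = 1 - len(a & b) / len(a | b)
print(distance)  # 0.6
```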
```
import pandas
from nltk.corpus import words
nltk.download('words')
from nltk.metrics.distance import (
edit_distance,
jaccard_distance,
)
from nltk.util import ngrams
correct_spellings = words.words()
spellings_series = pandas.Series(correct_spellings)
#spellings_series
```
### Question 9
For this recommender, your function should provide recommendations for the three default words provided above using the following distance metric:
**[Jaccard distance](https://en.wikipedia.org/wiki/Jaccard_index) on the trigrams of the two words.**
*This function should return a list of length three:
`['cormulent_reccomendation', 'incendenece_reccomendation', 'validrate_reccomendation']`.*
```
def Jaccard(words, n_grams):
outcomes = []
for word in words:
spellings = spellings_series[spellings_series.str.startswith(word[0])]
distances = ((jaccard_distance(set(ngrams(word, n_grams)), set(ngrams(k, n_grams))), k) for k in spellings)
closest = min(distances)
outcomes.append(closest[1])
return outcomes
def answer_nine(entries=['cormulent', 'incendenece', 'validrate']):
return Jaccard(entries,3)
answer_nine()
```
### Question 10
For this recommender, your function should provide recommendations for the three default words provided above using the following distance metric:
**[Jaccard distance](https://en.wikipedia.org/wiki/Jaccard_index) on the 4-grams of the two words.**
*This function should return a list of length three:
`['cormulent_reccomendation', 'incendenece_reccomendation', 'validrate_reccomendation']`.*
```
def answer_ten(entries=['cormulent', 'incendenece', 'validrate']):
return Jaccard(entries,4)
answer_ten()
```
### Question 11
For this recommender, your function should provide recommendations for the three default words provided above using the following distance metric:
**[Edit distance on the two words with transpositions.](https://en.wikipedia.org/wiki/Damerau%E2%80%93Levenshtein_distance)**
*This function should return a list of length three:
`['cormulent_reccomendation', 'incendenece_reccomendation', 'validrate_reccomendation']`.*
```
def Edit(words):
outcomes = []
for word in words:
spellings = spellings_series[spellings_series.str.startswith(word[0])]
distances = ((edit_distance(word,k),k) for k in spellings)
closest = min(distances)
outcomes.append(closest[1])
return outcomes
def answer_eleven(entries=['cormulent', 'incendenece', 'validrate']):
return Edit(entries)
answer_eleven()
```
# Assignment 4: Word Sense Disambiguation: from start to finish
## Due: Tuesday 6 December 2016 15:00 p.m.
Please name your Jupyter notebook using the following naming convention: ASSIGNMENT_4_FIRSTNAME_LASTNAME.ipynb
Please send your assignment to `m.c.postma@vu.nl`.
A well-known NLP task is [Word Sense Disambiguation (WSD)](https://en.wikipedia.org/wiki/Word-sense_disambiguation). The goal is to identify the sense of a word in a sentence. Here is an example of the output of one of the best systems, called [Babelfy](http://babelfy.org/index). 
Since 1998, there have been WSD competitions: [Senseval and SemEval](https://en.wikipedia.org/wiki/SemEval). The idea is very simple: a few people annotate words in a sentence with the correct meaning, and systems try to do the same. Because we have the manual annotations, we can score how well each system performs. In this exercise, we are going to compete in [SemEval-2013 task 12: Multilingual Word Sense Disambiguation](https://www.cs.york.ac.uk/semeval-2013/task12.html).
The main steps in this exercise are:
* Introduction of the data and goals
* Performing WSD
* Loading manual annotations (which we will call **gold data**)
* System output
* Write an XML file containing both the gold data and our system output
* Read the XML file and evaluate our performance
* please only use **xpath** if you are comfortable with using it. It is not needed to complete the assignment.
## Introduction of the data and goals
We will use the following data (originating from [SemEval-2013 task 12 test data](https://www.cs.york.ac.uk/semeval-2013/task12/data/uploads/datasets/semeval-2013-task12-test-data.zip)):
* **system input**: data/multilingual-all-words.en.xml
* **gold data**: data/sem2013-aw.key
Given a word in a sentence, the goal of our system is to determine the correct meaning of that word. For example, look at the **system input** file (data/multilingual-all-words.en.xml) at lines 1724-1740.
Please note that the *sentence* element has both *wf* and *instance* children; the *instance* elements are the ones for which we have to provide a meaning.
```xml
<sentence id="d003.s005">
<wf lemma="frankly" pos="RB">Frankly</wf>
<wf lemma="," pos=",">,</wf>
<wf lemma="the" pos="DT">the</wf>
<instance id="d003.s005.t001" lemma="market" pos="NN">market</instance>
<wf lemma="be" pos="VBZ">is</wf>
<wf lemma="very" pos="RB">very</wf>
<wf lemma="calm" pos="JJ">calm</wf>
<wf lemma="," pos=",">,</wf>
<wf lemma="observe" pos="VVZ">observes</wf>
<wf lemma="Mace" pos="NP">Mace</wf>
<wf lemma="Blicksilver" pos="NP">Blicksilver</wf>
<wf lemma="of" pos="IN">of</wf>
<wf lemma="Marblehead" pos="NP">Marblehead</wf>
<instance id="d003.s005.t002" lemma="asset_management" pos="NE">Asset_Management</instance>
<wf lemma="." pos="SENT">.</wf>
</sentence>
```
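As a quick illustration of how such a sentence can be read programmatically, here is a sketch using the standard-library `xml.etree.ElementTree` (the `lxml.etree` calls used later in this notebook have the same interface for these operations); the snippet is a shortened, hypothetical sentence:

```python
import xml.etree.ElementTree as etree

# A shortened sentence in the same format as the example above
snippet = """<sentence id="d003.s005">
<wf lemma="frankly" pos="RB">Frankly</wf>
<instance id="d003.s005.t001" lemma="market" pos="NN">market</instance>
<wf lemma="be" pos="VBZ">is</wf>
</sentence>"""

sentence = etree.fromstring(snippet)
# Iterating over the element yields both wf and instance children, in document order
words = [child.text for child in sentence]
# Only instance elements need a sense; collect their identifiers
instances = [el.get("id") for el in sentence.findall("instance")]
print(words)      # ['Frankly', 'market', 'is']
print(instances)  # ['d003.s005.t001']
```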
As a way to determine the possible meanings of a word, we will use [WordNet](https://wordnet.princeton.edu/). For example, for the lemma **market**, Wordnet lists the following meanings:
```
from nltk.corpus import wordnet as wn
for synset in wn.synsets('market', pos='n'):
print(synset, synset.definition())
```
In order to know which meaning the manual annotators chose, we go to the **gold data** (data/sem2013-aw.key). For the identifier *d003.s005.t001*, we find:
d003 d003.s005.t001 market%1:14:01::
In order to know to which synset *market%1:14:01::* belongs, we can do the following:
```
lemma = wn.lemma_from_key('market%1:14:01::')
synset = lemma.synset()
print(synset, synset.definition())
```
Hence, the manual annotators chose **market.n.04**.
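With gold keys like this, evaluation amounts to checking whether the system's key is among the annotators' keys; a minimal sketch with made-up identifiers and sensekeys (accuracy here stands in for the official scorer):

```python
# Made-up gold annotations (identifier -> set of acceptable sensekeys) and system answers
gold = {
    "d001.s001.t001": {"bank%1:14:00::"},
    "d001.s001.t002": {"market%1:14:01::", "market%1:04:00::"},
}
system = {
    "d001.s001.t001": "bank%1:14:00::",
    "d001.s001.t002": "market%1:06:00::",
}
correct = sum(system.get(i) in keys for i, keys in gold.items())
accuracy = correct / len(gold)
print(accuracy)  # 0.5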
## Performing WSD
As a first step, we will perform WSD. For this, we will use the [**lesk** WSD algorithm](http://www.d.umn.edu/~tpederse/Pubs/banerjee.pdf) as implemented in the [NLTK](http://www.nltk.org/howto/wsd.html). One of the applications of the Lesk algorithm is to determine which senses of words are related. Imagine that **pine** has two senses, and **cone** has three senses (example from [paper](http://www.d.umn.edu/~tpederse/Pubs/banerjee.pdf)):
**Pine**
* Sense 1: kind of *evergreen tree* with needle-shaped leaves
* Sense 2: waste away through sorrow or illness
**Cone**
* Sense 1: solid body which narrows to a point
* Sense 2: something of this shape whether solid or hollow
* Sense 3: fruit of certain *evergreen tree*
As you can see, **sense 1 of pine** and **sense 3 of cone** share words in their definitions, which indicates that these senses are related. This idea can be used to perform WSD: the words surrounding a target word in a sentence are compared against the definition of each of its senses, and the sense whose definition shares the most words with the sentence is chosen as the correct sense according to the algorithm.
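The overlap-counting step at the heart of the algorithm can be sketched as follows (a toy sense inventory with hypothetical keys and glosses; the nltk implementation also tokenizes differently and handles ties):

```python
def simplified_lesk(context_words, sense_definitions):
    """Pick the sense whose definition shares the most words with the context."""
    context = set(context_words)
    best_sense, best_overlap = None, -1
    for sense, definition in sense_definitions.items():
        overlap = len(context & set(definition.lower().split()))
        if overlap > best_overlap:
            best_sense, best_overlap = sense, overlap
    return best_sense

# Toy inventory: two senses with short glosses
senses = {
    "pine.n.01": "kind of evergreen tree with needle-shaped leaves",
    "pine.v.01": "waste away through sorrow or illness",
}
context = ["the", "cone", "fell", "from", "the", "evergreen", "tree"]
print(simplified_lesk(context, senses))  # pine.n.01
```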
```
from nltk.wsd import lesk
```
Given is a function that allows you to perform WSD on a sentence. The output is a **WordNet sensekey**, hence an identifier of a sense.
#### the function is given, but it is important that you understand how to call it.
```
def perform_wsd(sent, lemma, pos):
'''
perform WSD using the lesk algorithm as implemented in the nltk
:param list sent: list of words
:param str lemma: a lemma
:param str pos: a pos (n | v | a | r)
:rtype: str
:return: wordnet sensekey or not_found
'''
sensekey = 'not_found'
wsd_result = lesk(sent, lemma, pos)
if wsd_result is not None:
for lemma_obj in wsd_result.lemmas():
if lemma_obj.name() == lemma:
sensekey = lemma_obj.key()
return sensekey
sent = ['I', 'went', 'to', 'the', 'bank', 'to', 'deposit', 'money', '.']
assert perform_wsd(sent, 'bank', 'n') == 'bank%1:06:01::', 'key is %s' % perform_wsd(sent, 'bank', 'n')
assert perform_wsd(sent, 'dfsdf', 'n') == 'not_found', 'key is %s' % perform_wsd(sent, 'money', 'n')
print(perform_wsd(sent, 'bank', 'n'))
```
## Loading manual annotations
Your job now is to load the manual annotations from 'data/sem2013-aw.key'.
* Tip, you can use [**repr**](https://docs.python.org/3/library/functions.html#repr) to check which delimiter (space, tab, etc) was used.
* sometimes there is more than one sensekey given for an identifier (see line 25 for example)
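For example, `repr` makes the delimiter and trailing newline of a key-file line explicit (the line content here is the gold-data example quoted earlier in this notebook):

```python
line = "d003 d003.s005.t001 market%1:14:01::\n"
print(repr(line))  # quotes, spaces and the trailing '\n' are all visible

# Splitting on whitespace and dropping the document id leaves the identifier and key(s)
identifier, *keys = line.strip().split()[1:]
print(identifier, keys)  # d003.s005.t001 ['market%1:14:01::']
```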
You can use the **set** function to convert a list to a set
```
a_list = [1, 1, 2, 1, 3]
a_set = set(a_list)
print(a_set)
def load_gold_data(path_to_gold_key):
'''
given the path to gold data of semeval2013 task 12,
this function creates a dictionary mapping the identifier to the
gold answers
HINT: sometimes, there is more than one sensekey for identifier
:param str path_to_gold_key: path to where gold data file is stored
:rtype: dict
:return: identifier (str) -> goldkeys (set)
'''
gold = {}
with open(path_to_gold_key) as infile:
for line in infile:
# find the identifier and the goldkeys
# add them to the dictionary
# gold[identifier] = goldkeys
return gold
```
Please check if your function works correctly by running the cell below.
```
gold = load_gold_data('data/sem2013-aw.key')
assert len(gold) == 1644, 'number of gold items is %s' % len(gold)
```
## Combining system input + system output + gold data
We are going to create a dictionary that looks like this:
```python
{10: {'sent_id' : 1
'text': 'banks',
'lemma' : 'bank',
'pos' : 'n',
'instance_id' : 'd003.s005.t001',
'gold_keys' : {'bank%1:14:00::'},
'system_key' : 'bank%1:14:00::'}
}
```
This dictionary maps a number (int) to a dictionary. Combining all relevant information in one dictionary will help us to create the NAF XML file. In order to do this, we will write several functions. To work with XML, we will first import the lxml module.
```
from lxml import etree
def load_sentences(semeval_2013_input):
'''
given the path to the semeval input xml,
this function creates a dictionary mapping sentence identfier
to the sentence (list of words)
HINT: you need the text of both:
text/sentence/instance and text/sentence/wf elements
:param str semeval_2013_input: path to semeval 2013 input xml
:rtype: dict
:return: mapping sentence identifier -> list of words
'''
sentences = dict()
doc = etree.parse(semeval_2013_input)
# insert code here
return sentences
```
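A sketch of how the body could look, assuming each sentence element carries an `id` attribute and its `wf`/`instance` children hold the token text (check these names against the actual file). The stdlib fallback import is only there so the snippet runs without lxml installed:

```python
try:
    from lxml import etree
except ImportError:  # stdlib fallback behaves the same for this sketch
    import xml.etree.ElementTree as etree

def load_sentences_sketch(semeval_2013_input):
    """Map sentence id -> list of token strings from wf and instance children."""
    sentences = dict()
    doc = etree.parse(semeval_2013_input)
    for sent_el in doc.findall('text/sentence'):
        sent_id = sent_el.get('id')  # assumed attribute name
        sentences[sent_id] = [child.text for child in sent_el
                              if child.tag in ('wf', 'instance')]
    return sentences
```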
Please check that your function works by running the cell below.
```
sentences = load_sentences('data/multilingual-all-words.en.xml')
assert len(sentences) == 306, 'number of sentences is different from needed 306: namely %s' % len(sentences)
def load_input_data(semeval_2013_input):
'''
given the path to input xml file, we will create a dictionary that looks like this:
:rtype: dict
:return: {10: {
'sent_id' : 1,
'text': 'banks',
'lemma' : 'bank',
'pos' : 'n',
'instance_id' : 'd003.s005.t001',
'gold_keys' : {},
'system_key' : ''}
}
'''
data = dict()
doc = etree.parse(semeval_2013_input)
identifier = 1
for sent_el in doc.findall('text/sentence'):
# insert code here
for child_el in sent_el.getchildren():
# insert code here
info = {
'sent_id' : None,       # to fill
'text': None,           # to fill
'lemma' : None,         # to fill
'pos' : None,           # to fill
'instance_id' : None,   # to fill: the instance id if this is an instance element, else an empty string
'gold_keys' : set(),    # this is ok for now
'system_key' : ''       # this is ok for now
}
data[identifier] = info
identifier += 1
return data
data = load_input_data('data/multilingual-all-words.en.xml')
assert len(data) == 8142, 'number of tokens is not the needed 8142: namely %s' % len(data)
def add_gold_and_wsd_output(data, gold, sentences):
'''
the goal of this function is to fill the keys 'system_key'
and 'gold_keys' for the entries in which the 'instance_id' is not an empty string.
:param dict data: see output function 'load_input_data'
:param dict gold: see output function 'load_gold_data'
:param dict sentences: see output function 'load_sentences'
NOTE: not all instance_ids have a gold answer!
:rtype: dict
:return: {10: {'sent_id' : 1,
'text': 'banks',
'lemma' : 'bank',
'pos' : 'n',
'instance_id' : 'd003.s005.t001',
'gold_keys' : {'bank%1:14:00::'},
'system_key' : 'bank%1:14:00::'}
}
'''
for identifier, info in data.items():
instance_id = info['instance_id']  # get the instance id
if instance_id:
# perform wsd and get sensekey that lesk proposes
# add system key to our dictionary
# info['system_key'] = sensekey
if instance_id in gold:
info['gold_keys'] = gold[instance_id]
```
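The inner loop of `load_input_data` boils down to reading a few attributes from each child element. A hedged sketch of a helper that builds one `info` dict, assuming `lemma`, `pos`, and `id` are stored as attributes in the input XML (verify against the actual file):

```python
def token_info(sent_id, child_el):
    """Build the info dict for one token element (wf or instance)."""
    is_instance = child_el.tag == 'instance'
    return {
        'sent_id': sent_id,
        'text': child_el.text,
        'lemma': child_el.get('lemma'),
        'pos': child_el.get('pos'),
        'instance_id': child_el.get('id') if is_instance else '',
        'gold_keys': set(),
        'system_key': '',
    }
```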
Call the function to combine all information.
```
add_gold_and_wsd_output(data, gold, sentences)
```
## Create NAF with system run and gold information
We are going to create one [NAF XML](http://www.newsreader-project.eu/files/2013/01/techreport.pdf) file containing both the gold information and our system run. We will guide you through the process step by step.
### [CODE IS GIVEN] Step a: create an xml object
**NAF** will be our root element.
```
new_root = etree.Element('NAF')
new_tree = etree.ElementTree(new_root)
new_root = new_tree.getroot()
```
We can inspect what we have created by using the **etree.dump** function. As you can see, the root node **NAF** is currently the only element in our document.
```
etree.dump(new_root)
```
### [CODE IS GIVEN] Step b: add children
We will now add the elements in which we will place the **wf** and **term** elements.
```
text_el = etree.Element('text')
terms_el = etree.Element('terms')
new_root.append(text_el)
new_root.append(terms_el)
etree.dump(new_root)
```
### Step c: functions to create wf and term elements
For this step, the code is not given. Please complete the functions.
#### TIP: check the subsection *Creating your own XML* from Topic 5
```
def create_wf_element(identifier, sent_id, text):
'''
create NAF wf element, such as:
<wf id="11" sent_id="d001.s002">conference</wf>
:param int identifier: our own identifier (convert this to string)
:param str sent_id: the sentence id of the competition
:param str text: the text
'''
# complete from here
wf_el = etree.Element('wf')  # complete: add the id and sent_id attributes and the token text
return wf_el
```
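A possible completion, assuming the identifier and sentence id go into attributes and the token string into the element text (the stdlib fallback import is only for running the snippet without lxml):

```python
try:
    from lxml import etree
except ImportError:  # stdlib fallback behaves the same for this sketch
    import xml.etree.ElementTree as etree

def create_wf_element_sketch(identifier, sent_id, text):
    """Build e.g. <wf id="11" sent_id="d001.s002">conference</wf>."""
    wf_el = etree.Element('wf', attrib={'id': str(identifier), 'sent_id': sent_id})
    wf_el.text = text
    return wf_el
```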
#### TIP: **externalRef** elements are children of **term** elements
```
def create_term_element(identifier, instance_id, system_key, gold_keys):
'''
create NAF xml element, such as:
<term id="3885">
<externalRef instance_id="d007.s013.t004" provenance="lesk" wordnetkey="player%1:18:04::"/>
<externalRef instance_id="d007.s013.t004" provenance="gold" wordnetkey="player%1:18:01::"/>
</term>
:param int identifier: our own identifier (convert this to string)
:param str instance_id: the instance id from the competition data
:param str system_key: system output
:param set gold_keys: goldkeys
'''
# complete code here
term_el = etree.Element('term')  # complete: add the id attribute and the externalRef children
return term_el
```
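A possible completion for the term element, again assuming the attribute names shown in the docstring example; one `externalRef` child is added for the system key and one per gold key:

```python
try:
    from lxml import etree
except ImportError:  # stdlib fallback behaves the same for this sketch
    import xml.etree.ElementTree as etree

def create_term_element_sketch(identifier, instance_id, system_key, gold_keys):
    """Build a <term> element with externalRef children for lesk and gold keys."""
    term_el = etree.Element('term', attrib={'id': str(identifier)})
    if system_key:  # one externalRef for the system (lesk) answer
        etree.SubElement(term_el, 'externalRef',
                         attrib={'instance_id': instance_id,
                                 'provenance': 'lesk',
                                 'wordnetkey': system_key})
    for gold_key in sorted(gold_keys):  # and one per gold key
        etree.SubElement(term_el, 'externalRef',
                         attrib={'instance_id': instance_id,
                                 'provenance': 'gold',
                                 'wordnetkey': gold_key})
    return term_el
```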
### [CODE IS GIVEN] Step d: add wf and term elements
```
counter = 0
for identifier, info in data.items():
wf_el = create_wf_element(identifier, info['sent_id'], info['text'])
text_el.append(wf_el)
term_el = create_term_element(identifier,
info['instance_id'],
info['system_key'],
info['gold_keys'])
terms_el.append(term_el)
```
### [CODE IS GIVEN] Step e: write to file
```
with open('semeval2013_run1.naf', 'wb') as outfile:
new_tree.write(outfile,
pretty_print=True,
xml_declaration=True,
encoding='utf-8')
```
## Score our system run
Read the NAF file and extract relevant statistics, such as:
* overall performance (how many are correct?)
* [optional]: anything that you find interesting
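A sketch of the scoring step, assuming the NAF layout produced above: a term counts as correct when its `lesk` key appears among its `gold` keys, and only terms that have a gold answer are scored:

```python
def score_naf_terms(root):
    """Count correct terms and scored terms in a parsed NAF root element."""
    correct = scored = 0
    for term_el in root.findall('terms/term'):
        system = [ref.get('wordnetkey') for ref in term_el
                  if ref.get('provenance') == 'lesk']
        gold = {ref.get('wordnetkey') for ref in term_el
                if ref.get('provenance') == 'gold'}
        if gold:  # only instances with a gold answer are scored
            scored += 1
            if system and system[0] in gold:
                correct += 1
    return correct, scored
```

Overall accuracy is then `correct / scored` over the terms that have gold answers.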
<a href="https://colab.research.google.com/github/sarahalyahya/SoftwareArt-Text/blob/main/LousyFairytaleGenerator_Assemblage_Project1_.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a>
#Lousy Fairytale Plot Generator | Sarah Al-Yahya
*scroll to the end and click run to check the presentation first!*
##Idea
For this project, I spent a lot of time browsing through Project Gutenberg. While I was doing that, I got an urge to look up what kind of children's books are on the website, inspired by a phone call with my niece. My niece is a very curious child, and I can only imagine that by the time she starts getting interested in stories and fairytales, her curiosity will mean that her parents will run out of made-up stories to tell her. So, I wanted to create a tool that helps them, well, kind of... I wanted to capture that confused tone parents sometimes have when they make up a story as they go.
##Research
What makes a fairytale? A google search led to Missouri Southern State University's [website](https://libguides.mssu.edu/c.php?g=185298&p=1223898#:~:text=The%20basic%20structure%20of%20a,a%20solution%20can%20be%20found.). To summarize, the page explains that a fairy tale's main elements are:
* Characters
* The Moral Lesson
* Obstacles
* Magic
* A Happily Ever After
So, these elements formed the structure of my output.
##Elements
*(More specific comments are in the code)*
####Characters
For the characters, I scraped a few pages from a website called World of Tales. I chose it over Project Gutenberg as I found the HTML easier to work with. The program picks a random fairytale from the ones I have in an array, parses the HTML, finds all paragraphs and cleans them from new line and trailing characters before appending them to a list. Afterwards, to extract the main character's name I used NLTK and a function that counts proper nouns to find the most repeated proper noun which becomes my main character. As you can imagine, this isn't a perfect method since we could get "I" or "King's". I tried to do my best to limit such results, but it isn't perfect.
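The "most repeated proper noun" step can also be sketched with `collections.Counter`, which avoids manual highest-count bookkeeping (the tagged input below is made up for illustration):

```python
from collections import Counter

def most_common_proper_noun(tagged_words):
    """tagged_words: (word, pos) pairs as returned by nltk.pos_tag."""
    propernouns = [word for word, pos in tagged_words if pos == 'NNP']
    if not propernouns:
        return None
    # most_common(1) returns [(word, count)]; keep just the word
    return Counter(propernouns).most_common(1)[0][0]

tagged = [('Cinderella', 'NNP'), ('went', 'VBD'), ('Cinderella', 'NNP'), ('Prince', 'NNP')]
print(most_common_proper_noun(tagged))  # Cinderella
```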
###The Moral Lesson
Here, I scraped a random blog post that lists 101 valuable life lessons. I thought this would add a humorous tone to the work, as some of the lessons are completely unrelated to the fairytale world, e.g. "Don’t go into bad debt (debt taken on for consumption)". Then, I parsed and cleaned the text, and sliced the results to remove any paragraph elements that weren't part of the list. For this part I also used Regular Expressions to remove list numbers ("1."). Unfortunately, there are a few hardcoded replacements and removals in this part. I found it difficult to figure out the pronouns, such as turning all you's into a neutral they, resulting in "theyself"...etc. I left some of the failed RegEx attempts commented out in case anyone has any suggestions! This part picks a random lesson from the list every time.
###Obstacle
My obstacle was basically a villain the main character needs to fight, which I got from a webpage that lists 10 fairytale villains. The process was similar: I parsed the page, cleaned the strings using replace() and RegEx, then randomly chose one of the villains listed.
###Magic
Using an approach almost identical to the one for villains, I scraped a webpage that lists a bunch of superpowers and magical abilities, and made a list out of them to pick a random one when the program runs. I relied on random web pages to complement that "confused", "humorous" tone of scrambling to make up a random story on the spot!
###The Happily Ever After
I made sure to end the text generation with a reference to the happily ever after and the fact that the hero beats the villain using the specific superpower.
##Presentation
I made use of the sleep() function to create a rhythm for the conversation. I embedded my generated text into human-sounding text to really make it feel like you're talking to someone. Obviously, some outputs make it very obvious that this is a bot, such as when the main character is called "I", but I think the general feel is quite natural.
##Impact
I would say the impact is more entertaining and maybe humorous than functional, although it is quite functional sometimes and suggests ideas that could make great fairytales!
##Final Reflection
While working I realized that RegEx really needs a lot of practice, and that the more I do it the more it'll make sense...so I'm excited to do that!
This was a really fun assignment to work on! It was very rewarding to gradually go through my ideas and realize that there's something in our "toolbox" that can help me achieve what I have in mind.
```
import requests
from bs4 import BeautifulSoup
import random
import nltk
import re
from time import sleep
nltk.download('averaged_perceptron_tagger')
from nltk.tag import pos_tag
#note: every entry must end with a comma; without one, Python silently joins
#adjacent string literals into a single broken URL
fairytaleLinks = ["Charles_Perrault/Little_Thumb.html#gsc.tab=0","Brothers_Grimm/Margaret_Hunt/The_Story_of_the_Youth_who_Went_Forth_to_Learn_What_Fear_Was.html#gsc.tab=0",
"Charles_Perrault/THE_FAIRY.html#gsc.tab=0","Hans_Christian_Andersen/Andersen_fairy_tale_17.html#gsc.tab=0","Brothers_Grimm/Grimm_fairy_stories/Cinderella.html#gsc.tab=0",
"Brothers_Grimm/Margaret_Hunt/Little_Snow-white.html#gsc.tab=0",
"Hans_Christian_Andersen/Andersen_fairy_tale_17.html#gsc.tab=0","Charles_Perrault/THE_MASTER_CAT,_OR_PUSS_IN_BOOTS.html#gsc.tab=0",
"Brothers_Grimm/Grimm_household_tales/The_Sleeping_Beauty.html#gsc.tab=0", "Hans_Christian_Andersen/Andersen_fairy_tale_31.html#gsc.tab=0",
"Hans_Christian_Andersen/Andersen_fairy_tale_47.html#gsc.tab=0","Brothers_Grimm/Grimm_fairy_stories/Snow-White_And_Rose-Red.html#gsc.tab=0",
"Brothers_Grimm/Margaret_Hunt/Hansel_and_Grethel.html#gsc.tab=0","Brothers_Grimm/RUMPELSTILTSKIN.html#gsc.tab=0",
"Brothers_Grimm/THE%20ELVES%20AND%20THE%20SHOEMAKER.html#gsc.tab=0","Brothers_Grimm/THE%20JUNIPER-TREE.html#gsc.tab=0","Brothers_Grimm/THE%20GOLDEN%20GOOSE.html#gsc.tab=0",
"Brothers_Grimm/Margaret_Hunt/The_Frog-King,_or_Iron_Henry.html#gsc.tab=0","Brothers_Grimm/Grimm_fairy_stories/Snow-White_And_Rose-Red.html#gsc.tab=0"]
fairytaleIndex = random.randint(0,(len(fairytaleLinks))-1)
fairytaleTargetUrl = "https://www.worldoftales.com/fairy_tales/" + fairytaleLinks[fairytaleIndex]
reqFairytale = requests.get(fairytaleTargetUrl)
moralLessonTargetUrl = "https://daringtolivefully.com/life-lessons"
reqMoralLesson = requests.get(moralLessonTargetUrl)
superpowerTargetUrl = "http://www.superheronation.com/2007/12/30/list-of-superpowers/"
reqSuperpower = requests.get(superpowerTargetUrl)
villainTargetUrl = "https://beat102103.com/life/top-10-fairytale-villains-ranked/"
reqVillain = requests.get(villainTargetUrl)
#FOR CHARACTER
soupFairytale = BeautifulSoup(reqFairytale.content, 'html.parser')
story = soupFairytale.find_all("p")
paragraphs = []
for i in story:
content = str(i.text)
content = content.replace("\r"," ")
content = content.replace("\n","")
paragraphs.append(content)
#print(paragraphs)
story = " "
story = story.join(paragraphs)
#print(story)
#Code from StackOverflow: https://stackoverflow.com/questions/17669952/finding-proper-nouns-using-nltk-wordnet
tagged_sent = pos_tag(story.split())
propernouns = [word for word,pos in tagged_sent if pos == 'NNP']
#print(propernouns)
highestCount = 0
#A loop that looks for the most repeated proper noun
for i in propernouns:
currentCount = propernouns.count(i)
if currentCount > highestCount:
highestCount = currentCount
countDictionary = {i:propernouns.count(i) for i in propernouns}
#Had an issue with this at first as there's no direct function to get the key using the value, so I researched solutions and used this page:
# https://www.geeksforgeeks.org/python-get-key-from-value-in-dictionary/
def get_key(val):
for key, value in countDictionary.items():
if val == value:
return key
characterName = get_key(highestCount)
#to eliminate instances like "king's"
characterName = characterName.replace("'", "")
characterName = characterName.replace('"', "")
#print(characterName)
#FOR MORAL LESSON
soupLesson = BeautifulSoup(reqMoralLesson.content, 'html.parser')
lessonsHTML = soupLesson.find_all("p")
lessons = []
for i in lessonsHTML:
content = str(i.text)
content = content.replace("\r"," ")
content = content.replace("\n","")
lessons.append(content)
#used .index() to figure out the slicing here:
del lessons[0]
del lessons[101:len(lessons)+1]
# to make things more convenient :)
toRemove = ['2. Don’t postpone joy.',
'8. Pay yourself first: save 10% of what you earn.',
'10. Don’t go into bad debt (debt taken on for consumption).',
'15. Remember Henry Ford’s admonishment: “Whether you think you can or whether you think you can’t, you’re right.”',
'19. Don’t smoke. Don’t abuse alcohol. Don’t do drugs.',
'36. Don’t take action when you’re angry. Take a moment to calm down and ask yourself if what you’re thinking of doing is really in your best interest.',
'38. Worry is a waste of time; it’s a misuse of your imagination.',
'44. Don’t gossip. Remember the following quote by Eleanor Roosevelt: “Great minds discuss ideas, average minds discuss events, small minds discuss people.”',
'52. Don’t procrastinate; procrastination is the thief of time.',
'61. Don’t take yourself too seriously.',
'62. During difficult times remember this: “And this too shall pass.”',
'63. When things go wrong remember that few things are as bad as they first seem.',
'64. Keep in mind that mistakes are stepping stones to success. Success and failure are a team; success is the hero and failure is the sidekick. Don’t be afraid to fail.',
'70. If you don’t know the answer, say so; then go and find the answer.',
'77. Don’t renege on your promises, whether to others or to yourself.',
'80. Don’t worry about what other people think.',
'89. Every time you fall simply get back up again.',
'95. Don’t argue for your limitations.',
'97. Listen to Eleanor Roosevelt’s advice: “No one can make you feel inferior without your consent.”',
'99. Remember the motto: “You catch more flies with honey.”']
for i in toRemove:
lessons.remove(i)
#print(lessons)
x = 0
strippedLessons = []
for i in lessons:
lessons[x] = lessons[x].strip()
#remove any digits + the period after the digits
lessons[x] = re.sub(r"\d+\.", "", lessons[x])  # use lessons[x], not i, so the strip above is not discarded
#lessons[x] = re.sub("n'^Dot", "not to", lessons[x]) // attempt to turn all don't(s) at the beginning of the sentence to "not to"
#remove periods ONLY at the end
lessons[x] = re.sub("\.$", " ", lessons[x])
lessons[x] = lessons[x].replace("theirs","others'").replace("your","their").replace("you","they").replace("theyself","themselves").lower()
strippedLessons.append(lessons[x])
x+=1
#specifics = {"you've":"they've","theirself":"themself","theirs":"others'"} // #an attempt to fix any awkward results due to the replace function above
#y = 0
#for word in strippedLessons[y]:
# if word.lower() in specifics:
# srippedLessons = text.replace(word, specifics[word.lower()])
# y+=1
randomLessonIndex = random.randint(0, len(strippedLessons)-1)
chosenMoralLesson = strippedLessons[randomLessonIndex]
#print(chosenMoralLesson)
#FOR SUPERPOWER
soupSuperpower = BeautifulSoup(reqSuperpower.content, 'html.parser')
superpowers = soupSuperpower.find_all("li")
allSuperpowers = []
for i in superpowers:
content = (str(i.text)).strip()
content = content.replace("\r"," ")
content = content.replace("\n"," ")
content = content.replace("\t","")
allSuperpowers.append(content)
#allSuperpowers.index('Superstrength')
#removing all non-Superpower elements
del allSuperpowers[67:len(allSuperpowers)+1]
del allSuperpowers[0:5]
toRemove2 = ['Skills and/or knowledge Popular categories: science, mechanical, computer/electronics, weapons-handling/military, driving, occult/magical.',
'Popular categories: science, mechanical, computer/electronics, weapons-handling/military, driving, occult/magical.',
'Resourcefulness (“I’m never more than a carton of baking soda away from a doomsday device”)']
for i in toRemove2:
allSuperpowers.remove(i)
randomSuperpowerIndex = random.randint(0, len(allSuperpowers)-1)
chosenSuperpower = allSuperpowers[randomSuperpowerIndex].lower()
#print(chosenSuperpower)
#FOR VILLAIN:
soupVillain = BeautifulSoup(reqVillain.content, 'html.parser')
villainsHTML = soupVillain.find_all("strong")
villains = []
for i in villainsHTML:
content = str(i.text)
content = content.replace("\r"," ")
content = content.replace("\n","")
content = content.replace("\xa0"," ")
villains.append(content)
x = 0
for v in villains:
villains[x] = re.sub(r"\d+\.", "", v)
villains[x] = villains[x].strip()  # assign back; str.strip() returns a new string
x+=1
randomVillainIndex = random.randint(0, len(villains)-1)
chosenVillain = villains[randomVillainIndex].lower()
#print(chosenVillain)
print(u"\U0001F5E8"+ " " +"Oh? You're out of bedtime stories to tell?")
sleep(1.5)
print(u"\U0001F5E8"+ " " +"hmmm...")
sleep(2)
print(u"\U0001F5E8"+ " " +"how about you tell a story about")
sleep(1)
print(u"\U0001F5E8"+ " " +".....")
sleep(3)
print(u"\U0001F5E8"+ " " +characterName+"?")
sleep(2)
print(u"\U0001F5E8"+ " " +"yeah...yeah, tell a story about " + characterName +" " + "and how they learnt to...")
sleep(1.5)
print(u"\U0001F5E8"+ " " +"I don't know...like..")
sleep(3)
print(u"\U0001F5E8"+ " " +"how they learnt to..." + chosenMoralLesson)
sleep(1.5)
print(u"\U0001F5E8"+ " " +"Yes! that sounds good I guess.")
sleep(2)
print(u"\U0001F5E8"+ " " +"and of course it isn't that easy...it isn't all rainbows and sunshine you know?")
sleep(3)
print(u"\U0001F5E8"+ " " +"I don't know... maybe talk about their struggles with...")
sleep(4)
print(u"\U0001F5E8"+ " " +"with " + chosenVillain + "...yikes")
sleep(4)
print(u"\U0001F5E8"+ " " +"but we need a happily ever after, so maybe say that " + characterName + " was able to defeat " + chosenVillain + " somehow...")
sleep(5)
print(u"\U0001F5E8"+ " " +"like by " + chosenSuperpower + " or something...does that make sense?")
sleep(2)
print(u"\U0001F5E8"+ " " +"I mean even if it doesn't...that's all I can give you tonight")
sleep(3)
print(u"\U0001F5E8"+ " " +"you should practice being imaginative or something...")
sleep(2)
print(u"\U0001F5E8"+ " " +"anyways, it's way past bedtime. Go tell your story!")
print("\n\n\nERROR:chat disconnected")
```
# Imports
```
from __future__ import division
from __future__ import print_function
from __future__ import absolute_import
import cvxpy as cp
import time
import collections
from typing import Dict
from typing import List
import pandas as pd
import numpy as np
import datetime
import matplotlib.pyplot as plt
import seaborn as sns
import networkx as nx
import imp
import os
import pickle as pk
import scipy as sp
from statsmodels.tsa.stattools import grangercausalitytests
%matplotlib inline
import sys
sys.path.insert(0, '../../../src/')
import network_utils
import utils
```
# Loading the preprocessed data
```
loaded_d = utils.load_it('/home/omid/Downloads/DT/cvx_data.pk')
obs = loaded_d['obs']
T = loaded_d['T']
periods = [['1995-01-01', '1995-03-26'],
['1995-03-26', '1995-06-18'],
['1995-06-18', '1995-09-10'],
['1995-09-10', '1995-12-03'],
['1995-12-03', '1996-02-25'],
['1996-02-25', '1996-05-19'],
['1996-05-19', '1996-08-11'],
['1996-08-11', '1996-11-03'],
['1996-11-03', '1997-01-26'],
['1997-01-26', '1997-04-20'],
['1997-04-20', '1997-07-13'],
['1997-07-13', '1997-10-05'],
['1997-10-05', '1997-12-28'],
['1997-12-28', '1998-03-22'],
['1998-03-22', '1998-06-14'],
['1998-06-14', '1998-09-06'],
['1998-09-06', '1998-11-29'],
['1998-11-29', '1999-02-21'],
['1999-02-21', '1999-05-16'],
['1999-05-16', '1999-08-08'],
['1999-08-08', '1999-10-31'],
['1999-10-31', '2000-01-23'],
['2000-01-23', '2000-04-16'],
['2000-04-16', '2000-07-09'],
['2000-07-09', '2000-10-01'],
['2000-10-01', '2000-12-24'],
['2000-12-24', '2001-03-18'],
['2001-03-18', '2001-06-10'],
['2001-06-10', '2001-09-02'],
['2001-09-02', '2001-11-25'],
['2001-11-25', '2002-02-17'],
['2002-02-17', '2002-05-12'],
['2002-05-12', '2002-08-04'],
['2002-08-04', '2002-10-27'],
['2002-10-27', '2003-01-19'],
['2003-01-19', '2003-04-13'],
['2003-04-13', '2003-07-06'],
['2003-07-06', '2003-09-28'],
['2003-09-28', '2003-12-21'],
['2003-12-21', '2004-03-14'],
['2004-03-14', '2004-06-06'],
['2004-06-06', '2004-08-29'],
['2004-08-29', '2004-11-21'],
['2004-11-21', '2005-02-13'],
['2005-02-13', '2005-05-08'],
['2005-05-08', '2005-07-31'],
['2005-07-31', '2005-10-23'],
['2005-10-23', '2006-01-15'],
['2006-01-15', '2006-04-09'],
['2006-04-09', '2006-07-02'],
['2006-07-02', '2006-09-24'],
['2006-09-24', '2006-12-17'],
['2006-12-17', '2007-03-11'],
['2007-03-11', '2007-06-03'],
['2007-06-03', '2007-08-26'],
['2007-08-26', '2007-11-18'],
['2007-11-18', '2008-02-10'],
['2008-02-10', '2008-05-04'],
['2008-05-04', '2008-07-27'],
['2008-07-27', '2008-10-19'],
['2008-10-19', '2009-01-11'],
['2009-01-11', '2009-04-05'],
['2009-04-05', '2009-06-28'],
['2009-06-28', '2009-09-20'],
['2009-09-20', '2009-12-13'],
['2009-12-13', '2010-03-07'],
['2010-03-07', '2010-05-30'],
['2010-05-30', '2010-08-22'],
['2010-08-22', '2010-11-14'],
['2010-11-14', '2011-02-06'],
['2011-02-06', '2011-05-01'],
['2011-05-01', '2011-07-24'],
['2011-07-24', '2011-10-16'],
['2011-10-16', '2012-01-08'],
['2012-01-08', '2012-04-01'],
['2012-04-01', '2012-06-24'],
['2012-06-24', '2012-09-16'],
['2012-09-16', '2012-12-09'],
['2012-12-09', '2013-03-03'],
['2013-03-03', '2013-05-26'],
['2013-05-26', '2013-08-18'],
['2013-08-18', '2013-11-10'],
['2013-11-10', '2014-02-02'],
['2014-02-02', '2014-04-27'],
['2014-04-27', '2014-07-20'],
['2014-07-20', '2014-10-12'],
['2014-10-12', '2015-01-04'],
['2015-01-04', '2015-03-29'],
['2015-03-29', '2015-06-21'],
['2015-06-21', '2015-09-13'],
['2015-09-13', '2015-12-06'],
['2015-12-06', '2016-02-28'],
['2016-02-28', '2016-05-22'],
['2016-05-22', '2016-08-14'],
['2016-08-14', '2016-11-06'],
['2016-11-06', '2017-01-29'],
['2017-01-29', '2017-04-23'],
['2017-04-23', '2017-07-16'],
['2017-07-16', '2017-10-08'],
['2017-10-08', '2017-12-31'],
['2017-12-31', '2018-03-25'],
['2018-03-25', '2018-06-17'],
['2018-06-17', '2018-09-09']]
sns.set(rc={'figure.figsize': (30, 8)})
acc_from_prev_l2norm_dists = []
n = len(T)
for i in range(1, n):
current = T[i]
prev = T[i-1]
acc_from_prev_l2norm_dists.append(np.linalg.norm(prev - current))
plt.plot(acc_from_prev_l2norm_dists)
plt.ylabel('Frobenius-norm Difference of Consecutive Matrices.')
# setting xticks
ax = plt.axes()
number_of_periods = len(periods)
ax.set_xticks(list(range(number_of_periods)))
labels = ['[{}, {}] to [{}, {}]'.format(periods[i][0][:7], periods[i][1][:7], periods[i+1][0][:7], periods[i+1][1][:7]) for i in range(number_of_periods-1)]
ax.set_xticklabels(labels, rotation=45);
for tick in ax.xaxis.get_majorticklabels():
tick.set_horizontalalignment("right")
```
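The distance plotted here is the Frobenius norm of the difference between two consecutive matrices, i.e. the square root of the sum of squared entry-wise differences; a quick sanity check with toy matrices:

```python
import numpy as np

a = np.array([[1.0, 2.0], [3.0, 4.0]])
b = np.array([[1.0, 0.0], [0.0, 4.0]])

# np.linalg.norm on a matrix defaults to the Frobenius norm
manual = np.sqrt(((a - b) ** 2).sum())
assert np.isclose(np.linalg.norm(a - b), manual)
print(manual)  # sqrt(0 + 4 + 9 + 0) = sqrt(13) ≈ 3.6056
```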
# Death data analysis
```
all_death_data = pd.read_csv(
'/home/omid/Datasets/deaths/battle-related-deaths-in-state-based-conflicts-since-1946-by-world-region.csv')
all_death_data.drop(columns=['Code'], inplace=True)
all_death_data.head()
all_death_data.Entity.unique()
# death_data = all_death_data[all_death_data['Entity'] == 'Asia and Oceania']
death_data = all_death_data
ad = death_data.groupby('Year').sum()
annual_deaths = np.array(ad['Battle-related deaths'])
years = death_data.Year.unique()
indices = np.where(years >= 1995)[0]
years = years[indices]
annual_deaths = annual_deaths[indices]
years
sns.set(rc={'figure.figsize': (8, 6)})
plt.plot(annual_deaths);
len(years)
frob_norms = []
for i in range(len(years)):
index = i * 4
frob_norms.append(np.linalg.norm(T[index+1] - T[index]))
sns.set(rc={'figure.figsize': (8, 6)})
plt.plot(frob_norms);
# It tests whether the time series in the second column Granger causes the time series in the first column.
grangercausalitytests(
np.column_stack((frob_norms, annual_deaths)),
maxlag=4)
sp.stats.pearsonr(frob_norms, annual_deaths)
```
# Relationship with trade (similar to Jackson's pnas paper)
```
# 1995 to 2017.
trade_in_percent_of_gdp = np.array(
[43.403, 43.661, 45.613, 46.034, 46.552,
51.156, 50.012, 49.66, 50.797, 54.085,
56.169, 58.412, 58.975, 60.826, 52.31,
56.82, 60.427, 60.474, 60.021, 59.703, 57.798, 56.096, 57.85])
frob_norms = []
for i in range(len(trade_in_percent_of_gdp)):
index = i * 4
frob_norms.append(np.linalg.norm(T[index+1] - T[index]))
# It tests whether the time series in the second column Granger causes the time series in the first column.
grangercausalitytests(
np.column_stack((frob_norms, trade_in_percent_of_gdp)),
maxlag=4)
# It tests whether the time series in the second column Granger causes the time series in the first column.
grangercausalitytests(
np.column_stack((trade_in_percent_of_gdp, frob_norms)),
maxlag=4)
sp.stats.pearsonr(frob_norms, trade_in_percent_of_gdp)
sp.stats.pearsonr(frob_norms, 1/trade_in_percent_of_gdp)
sp.stats.spearmanr(frob_norms, trade_in_percent_of_gdp)
sns.set(rc={'figure.figsize': (8, 6)})
fig, ax1 = plt.subplots()
color = 'tab:blue'
# ax1.set_xlabel('time (s)')
ax1.set_ylabel('Frobenius-norm Difference of Consecutive Matrices', color=color)
ax1.plot(frob_norms, '-p', color=color)
ax1.tick_params(axis='y', labelcolor=color)
# ax1.legend(['Distance of matrices'])
ax2 = ax1.twinx() # instantiate a second axes that shares the same x-axis
color = 'tab:red'
ax2.set_ylabel('Global Trade (% of GDP)', color=color) # we already handled the x-label with ax1
ax2.plot(trade_in_percent_of_gdp, '-x', color=color, linestyle='--')
ax2.tick_params(axis='y', labelcolor=color)
# ax2.legend(['Trades'], loc='center')
# setting xticks
labels = [year for year in range(1995, 2018)]
ax1.set_xticks(list(range(len(labels))))
ax1.set_xticklabels(labels, rotation=45);
for tick in ax1.xaxis.get_majorticklabels():
tick.set_horizontalalignment("right")
fig.tight_layout() # otherwise the right y-label is slightly clipped
plt.savefig('frobenius_vs_trade.pdf', bbox_inches='tight')
sns.set(rc={'figure.figsize': (8, 6)})
fig, ax1 = plt.subplots()
color = 'tab:blue'
# ax1.set_xlabel('time (s)')
ax1.set_ylabel('Frobenius-norm Difference of Consecutive Matrices', color=color)
ax1.plot(frob_norms, '-p', color=color)
ax1.tick_params(axis='y', labelcolor=color)
ax2 = ax1.twinx() # instantiate a second axes that shares the same x-axis
color = 'tab:red'
ax2.set_ylabel('Inverse Global Trade (% of GDP)', color=color) # we already handled the x-label with ax1
ax2.plot(1/trade_in_percent_of_gdp, '-x', color=color, linestyle='--')
ax2.tick_params(axis='y', labelcolor=color)
# setting xticks
labels = [year for year in range(1995, 2018)]
ax1.set_xticks(list(range(len(labels))))
ax1.set_xticklabels(labels, rotation=45);
for tick in ax1.xaxis.get_majorticklabels():
tick.set_horizontalalignment("right")
fig.tight_layout() # otherwise the right y-label is slightly clipped
plt.savefig('frobenius_vs_inversetrade.pdf', bbox_inches='tight')
```
```
import seaborn as sb
import numpy as np
import matplotlib.pyplot as plt
from sklearn.preprocessing import MinMaxScaler
import pandas as pd
from sklearn.neighbors import NearestNeighbors
link='/Users/afatade/Downloads/anime_cleaned.csv'
data=pd.read_csv(link)
data.head()
len(data)
#We have a lot of data. Let's see what we can do about our type and genre columns.
#Now we want to make use of our genres and type columns. These are very important when considering the type of anime one
#wishes to make use of. We can call the pd.get_dummies function to generate a one-hot encoded dataset.
#For the genres column, we will call str.get_dummies() and set our separator to ','. This way we can
#do the same for the genres column.
data.isnull().sum()
#We see we're missing some entries for certain variables, but that's okay. We won't be using these variables.
#Let's isolate type, source, episodes and genre. These are really important features for recommending
#similar anime. Let's keep it simple.
#We need to generate one-hot encoded columns for type and source. Let's go
type_encoded=pd.get_dummies(data['type'])
type_encoded.head()
source_encoded=pd.get_dummies(data['source'])
source_encoded.head()
#The type column has a lot of factors that the typical anime fanatic doesn't look into.
#Typically we only look at manga for the most part. However, we will consider using these features later.
#There are a lot of values per row in the genres column. It would be logical to make use of a
#one-hot encoding mechanism that makes use of a separator.
genre_encoded=data['genre'].str.get_dummies(sep=',')
genre_encoded.head().values
#We see our values have been encoded successfully.
#Let's create a new data frame with our episodes column and our encoded values
features=pd.concat([genre_encoded, type_encoded,data['episodes']],axis=1)
features.head()
#Now we know our data has features with differing magnitudes. It would be logical to scale this data.
features_scaled=MinMaxScaler().fit_transform(features)
features_scaled[0]
#This is just one feature element that we are going to chuck into our KNN algorithm
collaborative_filter=NearestNeighbors().fit(features_scaled)
collaborative_filter.kneighbors([features_scaled[0]])
data['title'].iloc[[0, 4893, 2668, 3943, 4664],].values
#Okay. We see that this filtering mechanism works very well!
#Lets see if we can generate a function such that when a user enters a name, a new anime is recommended
#Lets create some functions so this data looks neater overall.
def preprocess_data():
link='/Users/afatade/Downloads/anime_cleaned.csv'
data=pd.read_csv(link)
type_encoded=pd.get_dummies(data['type'])
source_encoded=pd.get_dummies(data['source'])
genre_encoded=data['genre'].str.get_dummies(sep=',')
features=pd.concat([genre_encoded, type_encoded,data['episodes']],axis=1)
features_scaled=MinMaxScaler().fit_transform(features)
return features_scaled
#This function will return the name of the anime in the dataset even if it is entered partially
def get_partial_names(title):
names=list(data.title.values)
for name in names:
if title in name:
return [name, names.index(name)]
#This function will return features for recommendation
def get_features(title):
values=get_partial_names(title)
return values[1]
def get_vector(title):
index=get_features(title)
data=preprocess_data()
return data[index]
def collaborative_filter():
data=preprocess_data()
filtering=NearestNeighbors().fit(data)
return filtering
def get_recommendations(title):
vectorized_input=get_vector(title)
filter_model=collaborative_filter()
indices=filter_model.kneighbors([vectorized_input])[1]
recommendations=data['title'].iloc[indices[0],].values
return recommendations
get_recommendations("One Piece")
#Lovely! We'll work on the user interface later.
```
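One caveat with `get_partial_names` above: it returns `None` when no title matches, and the matching is case-sensitive, so `get_recommendations` can raise on a near-miss query. A hedged sketch of a more forgiving variant (the sample titles are made up):

```python
def get_partial_names_safe(title, names):
    """Case-insensitive partial title match; returns (name, index) or None."""
    needle = title.lower()
    for index, name in enumerate(names):
        if needle in name.lower():
            return name, index
    return None  # explicit: no match found

# made-up example titles
titles = ['One Piece', 'Fullmetal Alchemist', 'Naruto']
print(get_partial_names_safe('one piece', titles))  # ('One Piece', 0)
print(get_partial_names_safe('Bleach', titles))     # None
```

A caller can then check for `None` and show a friendly "title not found" message instead of crashing.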
# <font color='blue'>Data Science Academy</font>
# <font color='blue'>Big Data Real-Time Analytics com Python e Spark</font>
# <font color='blue'>Chapter 6</font>
# Machine Learning in Python - Part 2 - Regression
```
from IPython.display import Image
Image(url = 'images/processo.png')
import sklearn as sl
import warnings
warnings.filterwarnings("ignore")
sl.__version__
```
## Business Problem Definition
We will build a predictive model capable of predicting house prices based on a series of variables (features) describing houses in a neighborhood of Boston, a city in the USA.
Dataset: https://www.cs.toronto.edu/~delve/data/boston/bostonDetail.html
## Evaluating Performance
https://scikit-learn.org/stable/modules/model_evaluation.html
The metrics you choose to evaluate a model's performance influence how that performance is measured and compared against models built with other algorithms.
### Metrics for Regression Algorithms
Metrics for evaluating regression models:
- Mean Squared Error (MSE)
- Root Mean Squared Error (RMSE)
- Mean Absolute Error (MAE)
- R Squared (R²)
- Adjusted R Squared (adjusted R²)
- Mean Square Percentage Error (MSPE)
- Mean Absolute Percentage Error (MAPE)
- Root Mean Squared Logarithmic Error (RMSLE)
```
from IPython.display import Image
Image(url = 'images/mse.png')
from IPython.display import Image
Image(url = 'images/rmse.png')
from IPython.display import Image
Image(url = 'images/mae.png')
from IPython.display import Image
Image(url = 'images/r2.png')
```
Since we are now studying regression metrics, we will use another dataset: Boston Houses.
#### MSE
MSE is perhaps the simplest and most common metric for regression evaluation, but also arguably the least useful. It measures the mean squared error of the predictions: for each point, it computes the squared difference between the prediction and the actual value of the target variable, then averages those values.
The higher this value, the worse the model. It is never negative, since the individual prediction errors are squared, and it is zero for a perfect model.
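Before the scikit-learn version, the formula itself can be sketched in a few lines of NumPy (the values here are made up purely for illustration):

```python
import numpy as np

# Toy true values and predictions, purely illustrative
y_true = np.array([3.0, 5.0, 2.5, 7.0])
y_pred = np.array([2.5, 5.0, 3.0, 8.0])

# MSE: the average of the squared differences
mse = np.mean((y_true - y_pred) ** 2)
print(mse)  # 0.375
```

`sklearn.metrics.mean_squared_error` computes exactly this quantity, as the example below shows.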
```
# MSE - Mean Squared Error
# Like the MAE, it measures the magnitude of the model's error.
# The higher the value, the worse the model!
# Taking the square root of the MSE converts the units back to the original scale,
# which can be useful for description and presentation. That is the RMSE (Root Mean Squared Error).
# Module imports
from pandas import read_csv
from sklearn.model_selection import train_test_split
from sklearn.metrics import mean_squared_error
from sklearn.linear_model import LinearRegression
# Loading the data
arquivo = 'data/boston-houses.csv'
colunas = ['CRIM', 'ZN', 'INDUS', 'CHAS', 'NOX', 'RM', 'AGE', 'DIS', 'RAD', 'TAX', 'PTRATIO','B', 'LSTAT', 'MEDV']
dados = read_csv(arquivo, delim_whitespace = True, names = colunas)
array = dados.values
# Splitting the array into input and output components
X = array[:,0:13]
Y = array[:,13]
# Splitting the data into train and test sets
X_train, X_test, Y_train, Y_test = train_test_split(X, Y, test_size = 0.33, random_state = 5)
# Creating the model
modelo = LinearRegression()
# Training the model
modelo.fit(X_train, Y_train)
# Making predictions
Y_pred = modelo.predict(X_test)
# Result
mse = mean_squared_error(Y_test, Y_pred)
print("Model MSE:", mse)
```
#### MAE
```
# MAE
# Mean Absolute Error
# It is the mean of the absolute differences between predictions and actual values.
# It gives an idea of how wrong our predictions are.
# A value of 0 means no error, i.e. perfect predictions.
# Module imports
from pandas import read_csv
from sklearn.model_selection import train_test_split
from sklearn.metrics import mean_absolute_error
from sklearn.linear_model import LinearRegression
# Loading the data
arquivo = 'data/boston-houses.csv'
colunas = ['CRIM', 'ZN', 'INDUS', 'CHAS', 'NOX', 'RM', 'AGE', 'DIS', 'RAD', 'TAX', 'PTRATIO','B', 'LSTAT', 'MEDV']
dados = read_csv(arquivo, delim_whitespace = True, names = colunas)
array = dados.values
# Splitting the array into input and output components
X = array[:,0:13]
Y = array[:,13]
# Splitting the data into train and test sets
X_train, X_test, Y_train, Y_test = train_test_split(X, Y, test_size = 0.33, random_state = 5)
# Creating the model
modelo = LinearRegression()
# Training the model
modelo.fit(X_train, Y_train)
# Making predictions
Y_pred = modelo.predict(X_test)
# Result
mae = mean_absolute_error(Y_test, Y_pred)
print("Model MAE:", mae)
```
### R^2
```
# R^2
# This metric indicates how well the predictions fit the observed values.
# Also called the coefficient of determination.
# Values range from 0 to 1 (it can be negative for very poor models), with 1 being the ideal value.
# Module imports
from pandas import read_csv
from sklearn.model_selection import train_test_split
from sklearn.metrics import r2_score
from sklearn.linear_model import LinearRegression
# Loading the data
arquivo = 'data/boston-houses.csv'
colunas = ['CRIM', 'ZN', 'INDUS', 'CHAS', 'NOX', 'RM', 'AGE', 'DIS', 'RAD', 'TAX', 'PTRATIO','B', 'LSTAT', 'MEDV']
dados = read_csv(arquivo, delim_whitespace = True, names = colunas)
array = dados.values
# Splitting the array into input and output components
X = array[:,0:13]
Y = array[:,13]
# Splitting the data into train and test sets
X_train, X_test, Y_train, Y_test = train_test_split(X, Y, test_size = 0.33, random_state = 5)
# Creating the model
modelo = LinearRegression()
# Training the model
modelo.fit(X_train, Y_train)
# Making predictions
Y_pred = modelo.predict(X_test)
# Result
r2 = r2_score(Y_test, Y_pred)
print("Model R2:", r2)
```
# Regression Algorithms
## Linear Regression
Linear regression assumes that the data follow a normal distribution, that the variables are relevant to building the model, and that they are not collinear, i.e. highly correlated with each other (it is up to you, the data scientist, to feed the algorithm only genuinely relevant variables).
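A quick way to screen for collinearity before fitting is a correlation matrix. The sketch below uses synthetic data (the three columns are invented purely for illustration):

```python
import numpy as np

# Synthetic example: x2 is nearly a scaled copy of x1 (collinear), x3 is unrelated
rng = np.random.RandomState(0)
x1 = rng.rand(100)
x2 = 2.0 * x1 + 0.01 * rng.rand(100)  # almost perfectly correlated with x1
x3 = rng.rand(100)                    # independent noise

corr = np.corrcoef(np.vstack([x1, x2, x3]))
print(corr.round(2))
# The (x1, x2) entry is close to 1.0 -- a signal to drop one of the pair
```

In practice you would run this check on the actual feature columns before training.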
```
# Module imports
from pandas import read_csv
from sklearn.model_selection import train_test_split
from sklearn.metrics import mean_squared_error
from sklearn.linear_model import LinearRegression
# Loading the data
arquivo = 'data/boston-houses.csv'
colunas = ['CRIM', 'ZN', 'INDUS', 'CHAS', 'NOX', 'RM', 'AGE', 'DIS', 'RAD', 'TAX', 'PTRATIO','B', 'LSTAT', 'MEDV']
dados = read_csv(arquivo, delim_whitespace = True, names = colunas)
array = dados.values
# Splitting the array into input and output components
X = array[:,0:13]
Y = array[:,13]
# Splitting the data into train and test sets
X_train, X_test, Y_train, Y_test = train_test_split(X, Y, test_size = 0.33, random_state = 5)
# Creating the model
modelo = LinearRegression()
# Training the model
modelo.fit(X_train, Y_train)
# Making predictions
Y_pred = modelo.predict(X_test)
# Result
mse = mean_squared_error(Y_test, Y_pred)
print("Model MSE:", mse)
```
## Ridge Regression
An extension of linear regression in which the loss function is modified to minimize model complexity.
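Concretely, Ridge adds an L2 penalty to the least-squares loss, giving the closed-form solution w = (XᵀX + αI)⁻¹Xᵀy. The sketch below (synthetic data, intercept ignored, for illustration only) shows the coefficients shrinking as alpha grows:

```python
import numpy as np

# Closed-form ridge solution: w = (X'X + alpha*I)^(-1) X'y
def ridge_fit(X, y, alpha):
    n_features = X.shape[1]
    return np.linalg.solve(X.T @ X + alpha * np.eye(n_features), X.T @ y)

rng = np.random.RandomState(42)
X = rng.rand(50, 3)
y = X @ np.array([2.0, -1.0, 0.5]) + 0.1 * rng.rand(50)

w_small = ridge_fit(X, y, alpha=0.01)
w_large = ridge_fit(X, y, alpha=100.0)
# A larger alpha shrinks the coefficient vector toward zero
print(np.abs(w_small).sum(), np.abs(w_large).sum())
```

Scikit-learn's `Ridge` solves the same penalized problem (it additionally handles the intercept), as in the cell below.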
```
# Module imports
from pandas import read_csv
from sklearn.model_selection import train_test_split
from sklearn.metrics import mean_squared_error
from sklearn.linear_model import Ridge
# Loading the data
arquivo = 'data/boston-houses.csv'
colunas = ['CRIM', 'ZN', 'INDUS', 'CHAS', 'NOX', 'RM', 'AGE', 'DIS', 'RAD', 'TAX', 'PTRATIO','B', 'LSTAT', 'MEDV']
dados = read_csv(arquivo, delim_whitespace = True, names = colunas)
array = dados.values
# Splitting the array into input and output components
X = array[:,0:13]
Y = array[:,13]
# Splitting the data into train and test sets
X_train, X_test, Y_train, Y_test = train_test_split(X, Y, test_size = 0.33, random_state = 5)
# Creating the model
modelo = Ridge()
# Training the model
modelo.fit(X_train, Y_train)
# Making predictions
Y_pred = modelo.predict(X_test)
# Result
mse = mean_squared_error(Y_test, Y_pred)
print("Model MSE:", mse)
```
## Lasso Regression
Lasso (Least Absolute Shrinkage and Selection Operator) regression is a modification of linear regression in which, as in Ridge regression, the loss function is modified to minimize model complexity.
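The practical effect of the Lasso's L1 penalty is soft-thresholding: coefficients whose magnitude falls below the threshold are set exactly to zero, which is why Lasso doubles as a feature selector. A minimal illustration with made-up numbers:

```python
import numpy as np

def soft_threshold(w, t):
    # Shrink each coefficient toward zero by t; small ones become exactly 0
    return np.sign(w) * np.maximum(np.abs(w) - t, 0.0)

w = np.array([2.5, -0.3, 0.1, -1.2])
print(soft_threshold(w, 0.5))  # the two small coefficients are zeroed out
```

This is the elementwise operation that coordinate-descent Lasso solvers apply at each step.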
```
# Module imports
from pandas import read_csv
from sklearn.model_selection import train_test_split
from sklearn.metrics import mean_squared_error
from sklearn.linear_model import Lasso
# Loading the data
arquivo = 'data/boston-houses.csv'
colunas = ['CRIM', 'ZN', 'INDUS', 'CHAS', 'NOX', 'RM', 'AGE', 'DIS', 'RAD', 'TAX', 'PTRATIO','B', 'LSTAT', 'MEDV']
dados = read_csv(arquivo, delim_whitespace = True, names = colunas)
array = dados.values
# Splitting the array into input and output components
X = array[:,0:13]
Y = array[:,13]
# Splitting the data into train and test sets
X_train, X_test, Y_train, Y_test = train_test_split(X, Y, test_size = 0.33, random_state = 5)
# Creating the model
modelo = Lasso()
# Training the model
modelo.fit(X_train, Y_train)
# Making predictions
Y_pred = modelo.predict(X_test)
# Result
mse = mean_squared_error(Y_test, Y_pred)
print("Model MSE:", mse)
```
## ElasticNet Regression
ElasticNet is a regularized form of regression that combines the properties of Ridge and Lasso regression. It minimizes model complexity by penalizing the coefficients with a combination of the L1 penalty (the sum of their absolute values, as in Lasso) and the L2 penalty (the sum of their squares, as in Ridge).
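In scikit-learn's parameterization the penalty term is alpha * (l1_ratio * ‖w‖₁ + 0.5 * (1 - l1_ratio) * ‖w‖₂²), so l1_ratio = 1 recovers Lasso and l1_ratio = 0 recovers Ridge. A small sketch of the penalty itself:

```python
import numpy as np

def elastic_net_penalty(w, alpha=1.0, l1_ratio=0.5):
    # Weighted mix of the L1 (Lasso) and L2 (Ridge) penalties
    l1 = np.abs(w).sum()
    l2_sq = (w ** 2).sum()
    return alpha * (l1_ratio * l1 + 0.5 * (1.0 - l1_ratio) * l2_sq)

w = np.array([1.0, -2.0])
print(elastic_net_penalty(w))  # 1.0 * (0.5 * 3 + 0.25 * 5) = 2.75
```

The `l1_ratio` knob is what you would tune (alongside `alpha`) when using `sklearn.linear_model.ElasticNet` below.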
```
# Module imports
from pandas import read_csv
from sklearn.model_selection import train_test_split
from sklearn.metrics import mean_squared_error
from sklearn.linear_model import ElasticNet
# Loading the data
arquivo = 'data/boston-houses.csv'
colunas = ['CRIM', 'ZN', 'INDUS', 'CHAS', 'NOX', 'RM', 'AGE', 'DIS', 'RAD', 'TAX', 'PTRATIO','B', 'LSTAT', 'MEDV']
dados = read_csv(arquivo, delim_whitespace = True, names = colunas)
array = dados.values
# Splitting the array into input and output components
X = array[:,0:13]
Y = array[:,13]
# Splitting the data into train and test sets
X_train, X_test, Y_train, Y_test = train_test_split(X, Y, test_size = 0.33, random_state = 5)
# Creating the model
modelo = ElasticNet()
# Training the model
modelo.fit(X_train, Y_train)
# Making predictions
Y_pred = modelo.predict(X_test)
# Result
mse = mean_squared_error(Y_test, Y_pred)
print("Model MSE:", mse)
```
## KNN
```
# Module imports
from pandas import read_csv
from sklearn.model_selection import train_test_split
from sklearn.metrics import mean_squared_error
from sklearn.neighbors import KNeighborsRegressor
# Loading the data
arquivo = 'data/boston-houses.csv'
colunas = ['CRIM', 'ZN', 'INDUS', 'CHAS', 'NOX', 'RM', 'AGE', 'DIS', 'RAD', 'TAX', 'PTRATIO','B', 'LSTAT', 'MEDV']
dados = read_csv(arquivo, delim_whitespace = True, names = colunas)
array = dados.values
# Splitting the array into input and output components
X = array[:,0:13]
Y = array[:,13]
# Splitting the data into train and test sets
X_train, X_test, Y_train, Y_test = train_test_split(X, Y, test_size = 0.33, random_state = 5)
# Creating the model
modelo = KNeighborsRegressor()
# Training the model
modelo.fit(X_train, Y_train)
# Making predictions
Y_pred = modelo.predict(X_test)
# Result
mse = mean_squared_error(Y_test, Y_pred)
print("Model MSE:", mse)
```
## CART
```
# Module imports
from pandas import read_csv
from sklearn.model_selection import train_test_split
from sklearn.metrics import mean_squared_error
from sklearn.tree import DecisionTreeRegressor
# Loading the data
arquivo = 'data/boston-houses.csv'
colunas = ['CRIM', 'ZN', 'INDUS', 'CHAS', 'NOX', 'RM', 'AGE', 'DIS', 'RAD', 'TAX', 'PTRATIO','B', 'LSTAT', 'MEDV']
dados = read_csv(arquivo, delim_whitespace = True, names = colunas)
array = dados.values
# Splitting the array into input and output components
X = array[:,0:13]
Y = array[:,13]
# Splitting the data into train and test sets
X_train, X_test, Y_train, Y_test = train_test_split(X, Y, test_size = 0.33, random_state = 5)
# Creating the model
modelo = DecisionTreeRegressor()
# Training the model
modelo.fit(X_train, Y_train)
# Making predictions
Y_pred = modelo.predict(X_test)
# Result
mse = mean_squared_error(Y_test, Y_pred)
print("Model MSE:", mse)
```
## SVM
```
# Module imports
from pandas import read_csv
from sklearn.model_selection import train_test_split
from sklearn.metrics import mean_squared_error
from sklearn.svm import SVR
# Loading the data
arquivo = 'data/boston-houses.csv'
colunas = ['CRIM', 'ZN', 'INDUS', 'CHAS', 'NOX', 'RM', 'AGE', 'DIS', 'RAD', 'TAX', 'PTRATIO','B', 'LSTAT', 'MEDV']
dados = read_csv(arquivo, delim_whitespace = True, names = colunas)
array = dados.values
# Splitting the array into input and output components
X = array[:,0:13]
Y = array[:,13]
# Splitting the data into train and test sets
X_train, X_test, Y_train, Y_test = train_test_split(X, Y, test_size = 0.33, random_state = 5)
# Creating the model
modelo = SVR()
# Training the model
modelo.fit(X_train, Y_train)
# Making predictions
Y_pred = modelo.predict(X_test)
# Result
mse = mean_squared_error(Y_test, Y_pred)
print("Model MSE:", mse)
```
## Model Optimization - Parameter Tuning
All machine learning algorithms are parameterized, which means you can tune the performance of your predictive model by adjusting (fine-tuning) their parameters. Your job is to find the best parameter combination for each machine learning algorithm. This process is also called hyperparameter optimization. Scikit-learn offers two methods for automatic parameter optimization: grid search parameter tuning and random search parameter tuning.
### Grid Search Parameter Tuning
This method methodically evaluates every combination of the specified algorithm parameters, forming a grid. Let's try this method with the Ridge regression algorithm: the example below searches over several values of the alpha parameter and reports the one that achieves the best performance.
```
# Module imports
from pandas import read_csv
import numpy as np
from sklearn.model_selection import GridSearchCV
from sklearn.linear_model import Ridge
# Loading the data
arquivo = 'data/boston-houses.csv'
colunas = ['CRIM', 'ZN', 'INDUS', 'CHAS', 'NOX', 'RM', 'AGE', 'DIS', 'RAD', 'TAX', 'PTRATIO','B', 'LSTAT', 'MEDV']
dados = read_csv(arquivo, delim_whitespace = True, names = colunas)
array = dados.values
# Splitting the array into input and output components
X = array[:,0:13]
Y = array[:,13]
# Defining the values to be tested
valores_alphas = np.array([1,0.1,0.01,0.001,0.0001,0])
valores_grid = dict(alpha = valores_alphas)
# Creating the model
modelo = Ridge()
# Creating the grid
grid = GridSearchCV(estimator = modelo, param_grid = valores_grid)
grid.fit(X, Y)
# Printing the result
print("Best Model Parameters:\n", grid.best_estimator_)
```
### Random Search Parameter Tuning
This method samples the algorithm's parameters from a uniform random distribution for a fixed number of iterations. A model is built and evaluated for each combination of parameters, and the best-performing alpha value found is reported.
```
# Module imports
from pandas import read_csv
import numpy as np
from scipy.stats import uniform
from sklearn.linear_model import Ridge
from sklearn.model_selection import RandomizedSearchCV
# Loading the data
arquivo = 'data/boston-houses.csv'
colunas = ['CRIM', 'ZN', 'INDUS', 'CHAS', 'NOX', 'RM', 'AGE', 'DIS', 'RAD', 'TAX', 'PTRATIO','B', 'LSTAT', 'MEDV']
dados = read_csv(arquivo, delim_whitespace = True, names = colunas)
array = dados.values
# Splitting the array into input and output components
X = array[:,0:13]
Y = array[:,13]
# Defining the values to be tested
valores_grid = {'alpha': uniform()}
seed = 7
# Creating the model
modelo = Ridge()
iterations = 100
rsearch = RandomizedSearchCV(estimator = modelo,
param_distributions = valores_grid,
n_iter = iterations,
random_state = seed)
rsearch.fit(X, Y)
# Printing the result
print("Best Model Parameters:\n", rsearch.best_estimator_)
```
# Saving the Result of Your Work
```
# Module imports
from pandas import read_csv
from sklearn.model_selection import train_test_split
from sklearn.metrics import mean_squared_error
from sklearn.linear_model import Ridge
import pickle
# Loading the data
arquivo = 'data/boston-houses.csv'
colunas = ['CRIM', 'ZN', 'INDUS', 'CHAS', 'NOX', 'RM', 'AGE', 'DIS', 'RAD', 'TAX', 'PTRATIO','B', 'LSTAT', 'MEDV']
dados = read_csv(arquivo, delim_whitespace = True, names = colunas)
array = dados.values
# Splitting the array into input and output components
X = array[:,0:13]
Y = array[:,13]
# Defining the test set size
teste_size = 0.35
seed = 7
# Creating the train and test datasets
X_treino, X_teste, Y_treino, Y_teste = train_test_split(X, Y, test_size = teste_size, random_state = seed)
# Creating the model
modelo = Ridge()
# Training the model
modelo.fit(X_treino, Y_treino)
# Saving the model to disk
arquivo = 'modelos/modelo_regressor_final.sav'
pickle.dump(modelo, open(arquivo, 'wb'))
print("Model saved!")
# Loading the model back from disk
modelo_regressor_final = pickle.load(open(arquivo, 'rb'))
print("Model loaded!")
# Making predictions
Y_pred = modelo_regressor_final.predict(X_teste)
# Result
mse = mean_squared_error(Y_teste, Y_pred)
print("Model MSE:", mse)
```
# The End
### Thank you - Data Science Academy - <a href="http://facebook.com/dsacademybr">facebook.com/dsacademybr</a>
## Plotting Results
```
experiment_name = ['l1000_AE','l1000_cond_VAE','l1000_VAE','l1000_env_prior_VAE']
import numpy as np
from scipy.spatial.distance import cosine
from scipy.linalg import svd, inv
import pandas as pd
import matplotlib.pyplot as plt
import dill as pickle
import os
import pdb
import torch
import ai.causalcell
from ai.causalcell.training import set_seed
from ai.causalcell.utils import configuration
os.chdir(os.path.join(os.path.dirname(ai.__file__), ".."))
print("Working in", os.getcwd())
def load_all_losses(res, name='recon_loss'):
all_train_loss = []
for epoch in range(len(res['losses']['train'])):
train_loss = np.mean([res['losses']['train'][epoch][name]
])
all_train_loss.append(train_loss)
all_valid_loss = []
for epoch in range(len(res['losses']['valid'])):
valid_loss = np.mean([res['losses']['valid'][epoch][name]
])
all_valid_loss.append(valid_loss)
return all_train_loss, all_valid_loss
def epoch_length(i):
return results[i]['n_samples_in_split']['train']
def get_tube(x_coord, valid_loss1, valid_loss2, valid_loss3):
min_length = min(len(valid_loss1), len(valid_loss2), len(valid_loss3))
concat_lists = np.array([valid_loss1[:min_length], valid_loss2[:min_length], valid_loss3[:min_length]])
st_dev_list = np.std(concat_lists, 0)
mean_list = np.mean(concat_lists, 0)
return x_coord[:min_length], mean_list, st_dev_list
result_dir = os.path.join(os.getcwd(), "results", experiment_name[1])
results = []
for exp_id in range(1,4):
with open(os.path.join(result_dir,'results_'
+ str(exp_id) + '.pkl'), 'rb') as f:
results.append(pickle.load(f))
```
### Reconstruction Loss
```
all_train_loss, all_valid_loss = load_all_losses(results[1])
plt.plot(all_train_loss, label="train")
plt.plot(all_valid_loss, label="valid")
plt.title("reconstruction loss")
plt.legend()
plt.show()
```
### Reconstruction Loss log scale
```
plt.yscale("log")
plt.plot(all_train_loss, label="train")
plt.plot(all_valid_loss, label="valid")
plt.title("reconstruction loss log scale")
plt.legend()
plt.show()
```
### Reconstruction Loss with std deviation
```
plt.figure(figsize=(6,4), dpi=200)
for exp in experiment_name:
results = []
all_exp_losses = []
result_dir = os.path.join(os.getcwd(), "results", exp)
for exp_id in range(1,4):
with open(os.path.join(result_dir,'results_'
+ str(exp_id) + '.pkl'), 'rb') as f:
results.append(pickle.load(f))
for exp_id in range(3):
all_exp_losses.append(load_all_losses(results[exp_id]))
exp_id =0
valid_loss1 = all_exp_losses[exp_id][1]
valid_loss2 = all_exp_losses[exp_id+1][1]
valid_loss3 = all_exp_losses[exp_id+2][1]
x_coord = [epoch_length(exp_id)*i for i in range(len(valid_loss1))]
x_coord_tube, mean_list, st_dev_list = get_tube(x_coord, valid_loss1, valid_loss2, valid_loss3)
plt.fill_between(x_coord_tube, mean_list - st_dev_list, mean_list + st_dev_list, alpha=.2)
label = list(results[exp_id]['config']['model'].keys())[0] \
+ " with " + str(results[exp_id]['n_envs_in_split']['train']) + " envs"
plt.plot(x_coord_tube, mean_list, label=label)
plt.title("reconstruction losses")
#plt.yscale("log")
#plt.xlim((0,3000000))
plt.legend()
plt.show()
```
```
import torch
import torch.nn as nn
import torch.nn.functional as F
import torch.autograd.variable as Variable
import torch.utils.data as data
import torchvision
from torchvision import transforms
import os
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
from scipy import sparse
import lightfm
%matplotlib inline
filepath = 'D:/Data_Science/Recommender systems/the-movies-dataset/'
filename = 'movies.csv'
data_movie_names = pd.read_csv(filepath + filename)
data_movie_names = data_movie_names[['movieId','title']]
data_movie_names.head()
movie_names_dict = data_movie_names.set_index('movieId').to_dict()['title']
movie_names_dict
filepath = 'D:/Data_Science/Recommender systems/the-movies-dataset/'
filename = 'ratings_small.csv'
data = pd.read_csv(filepath + filename)
data.head()
data.shape
#make interaction dictionary
interaction_dict = {}
cid_to_idx = {}
idx_to_cid = {}
uid_to_idx ={}
idx_to_uid = {}
cidx = 0
uidx = 0
input_file = filepath + filename
with open(input_file) as fp:
next(fp)
for line in fp:
row = line.split(',')
uid = int(row[0])
cid = int(row[1])
rating = float(row[2])
if uid_to_idx.get(uid) is None :
uid_to_idx[uid] = uidx
idx_to_uid[uidx] = uid
interaction_dict[uid] = {}
uidx+=1
if cid_to_idx.get(cid) is None :
cid_to_idx[cid] = cidx
idx_to_cid[cidx] = cid
cidx+=1
interaction_dict[uid][cid] = rating
fp.close()
print("unique users : {}".format(data.userId.nunique()))
print("unique movies : {}".format(data.movieId.nunique()))
#interaction_dict
row = []
column = []
data_1 = []
for uid in interaction_dict.keys():
for cid in interaction_dict[uid].keys():
row.append(cid_to_idx[cid])
column.append(uid_to_idx[uid])
data_1.append(interaction_dict[uid][cid])
item_user_data = sparse.csr_matrix((data_1,(column,row)))
item_user_data
item_user_data.shape
torch.tensor(item_user_data[0].todense())[0]
input_dim = len(cid_to_idx)
h_layer_2 = int(round(len(cid_to_idx) / 4))
h_layer_3 = int(round(h_layer_2 / 4))
h_layer_3
class AutoEncoder(nn.Module):
def __init__(self): #Class constructor
super(AutoEncoder,self).__init__() #Call the parent constructor
self.fc1 = nn.Linear(in_features = input_dim , out_features = h_layer_2) #out_features = size of the output tensor (a rank-1 tensor)
self.fc2 = nn.Linear(in_features = h_layer_2 , out_features = h_layer_3)
self.fc3 = nn.Linear(in_features = h_layer_3 , out_features = h_layer_2)
self.out = nn.Linear(in_features = h_layer_2 , out_features = input_dim)
def forward(self,t):
#implement forward pass
#1. Input layer
t = self.fc1(t)
t = F.relu(t)
#2. Hidden Linear Layer
t = self.fc2(t)
t = F.relu(t)
#3. Hidden Linear Layer
t = self.fc3(t)
t = F.relu(t)
#3. Output layer
t = self.out(t)
t = F.relu(t)
return t
self_ae = AutoEncoder() #Runs the class contructor
self_ae.double().cuda()
#torchvision.datasets.DatasetFolder('')
#train_data_loader = data.DataLoader(item_user_data, 256)
#next(iter(train_data_loader))
#item_user_data[batch]
learning_rate = 0.001
optimizer = torch.optim.Adam(self_ae.parameters(), lr=learning_rate)
criterion = F.mse_loss
epochs = 10
for epoch in range(1,epochs):
for batch in range(0,item_user_data.shape[0]):
if batch % 100 == 0:
print('processing epoch :{} , batch : {}'.format(epoch , batch+1))
inputs = torch.tensor(np.array(item_user_data[batch].todense())[0])
inputs = inputs.cuda()
target = inputs
# zero the parameter gradients
optimizer.zero_grad()
y_pred = self_ae(inputs.double())
loss = criterion(y_pred, target)
loss.backward()
optimizer.step()
print("epoch : {}\t batch : {}\t loss : {}".format(epoch,batch+1,loss.item()))
torch.save(self_ae.state_dict(), ('model'+str(epoch)))
torch.save(self_ae.state_dict(), 'model.final')
self_ae.eval().cpu()
idx = uid_to_idx[24]
inputs = np.array(item_user_data[idx].todense())[0]
watched_movie_idx = np.argsort(inputs)[-10:][::-1]
inputs = torch.tensor(inputs)
print('WATCHED MOVIES :')
for i in watched_movie_idx:
movie_id = idx_to_cid[i]
try :
name = movie_names_dict[movie_id]
except :
name = 'unknown'
print('index : {}\t id : {}\t name : {}'.format(i,movie_id,name))
y_pred = self_ae(inputs)
y_pred = y_pred.detach().numpy()
pred_idx = np.argsort(y_pred)[-10:][::-1]
print('PREDICTED MOVIES')
for i in pred_idx: #reverse list
movid_id = idx_to_cid[i]
try :
name = movie_names_dict[movid_id]
except :
name = 'unknown'
print('index : {}\t id : {}\t name : {}'.format(i,movid_id,name))
```
```
#
# This small example shows you how to access JS-based requests via Selenium
# Like this, one can access raw data for scraping,
# for example on many JS-intensive/React-based websites
#
import time
from selenium import webdriver
from selenium.webdriver import DesiredCapabilities
from selenium.webdriver.support.ui import WebDriverWait
from selenium.webdriver.support import expected_conditions as EC
from selenium.webdriver.common.by import By
from selenium.webdriver.firefox.options import Options
from selenium.common.exceptions import ElementClickInterceptedException
from selenium.common.exceptions import ElementNotInteractableException
from selenium.common.exceptions import WebDriverException
import json
from datetime import datetime
import pandas as pd
def process_browser_log_entry(entry):
response = json.loads(entry['message'])['message']
return response
def log_filter(log_):
return (
# is an actual response
log_["method"] == "Network.responseReceived"
# and json
and "json" in log_["params"]["response"]["mimeType"]
)
def init_page():
#fetch a site that does xhr requests
driver.get("https://www.youtube.com/watch?v=DWcJFNfaw9c")
main_content_wait = WebDriverWait(driver, 20).until(
EC.presence_of_element_located((By.XPATH, '//iframe[@id="chatframe"]'))
)
time.sleep(3)
video_box = driver.find_element_by_xpath('//div[@id="movie_player"]')
video_box.click()
frame = driver.find_elements_by_xpath('//iframe[@id="chatframe"]')
# switch the webdriver object to the iframe.
driver.switch_to.frame(frame[0])
try:
#enable 'all' livechat
try:
driver.find_element_by_xpath('//div[@id="label-text"][@class="style-scope yt-dropdown-menu"]').click()
except ElementNotInteractableException:
init_page()
time.sleep(2.1)
driver.find_element_by_xpath('//a[@class="yt-simple-endpoint style-scope yt-dropdown-menu"][@tabindex="-1"]').click()
except ElementClickInterceptedException:
print('let\'s try again...')
init_page()
# make chrome log requests
capabilities = DesiredCapabilities.CHROME
capabilities["goog:loggingPrefs"] = {"performance": "ALL"} # newer: goog:loggingPrefs
driver = webdriver.Chrome(
desired_capabilities=capabilities
)
init_page()
iter_num = 0
while True:
iter_num += 1
if iter_num >= 100:
iter_num = 0
init_page()
# extract requests from logs
logs_raw = driver.get_log("performance")
logs = [json.loads(lr["message"])["message"] for lr in logs_raw]
json_list = []
for log in filter(log_filter, logs):
request_id = log["params"]["requestId"]
resp_url = log["params"]["response"]["url"]
#print(f"Caught {resp_url}")
try:
if 'https://www.youtube.com/youtubei/v1/live_chat/get_live_chat?key=' in resp_url:
body = driver.execute_cdp_cmd("Network.getResponseBody", {"requestId": request_id})
json_list.append(body)
except WebDriverException:
print('web driver exception!!!')
continue
'''
with open('look.txt', 'a', encoding='utf-8') as text_file:
body = driver.execute_cdp_cmd("Network.getResponseBody", {"requestId": request_id})
text_file.write(str(body))
json_list.append(body)
'''
#print(len(json_list))
message_list = []
self_message_list = []
for i in range(len(json_list)):
json_data = json.loads(json_list[i]['body'].replace('\n','').strip())
try:
actions = (json_data['continuationContents']['liveChatContinuation']['actions'])
except:
continue
for j in range(len(actions)):
try:
item = actions[j]['addChatItemAction']['item']['liveChatTextMessageRenderer']
author_channel_id = item['authorExternalChannelId']
author_name = item['authorName']['simpleText']
text = item['message']['runs'][0]['text']
post_time = item['timestampUsec']
post_time = post_time[0:10]
post_time = int(post_time)
author_photo = item['authorPhoto']['thumbnails'][0]['url']
post_time = datetime.utcfromtimestamp(post_time)
post_item = {
"Author" : author_name,
"Message" : text,
"Date" : post_time,
"Channel ID" : author_channel_id,
"Channel" : f'https://youtube.com/channel/{author_channel_id}'
}
message_list.append(post_item)
if 'biss' in text.lower():
self_message_list.append(post_item)
#print(post_item)
except Exception as e:
print(str(e))
continue
#message_list = list(set(message_list))
df = pd.DataFrame(message_list)
df = df.drop_duplicates()
#print(df)
df.to_csv('./data/youtube_lofi/test_run.csv', index=False, mode='a')
reply_df = pd.DataFrame(self_message_list)
reply_df = reply_df.drop_duplicates()
if len(self_message_list) > 0 :
reply_df.to_csv('./data/youtube_lofi/reply_runs_cumulative.csv', index=False, mode='a')
reply_df.to_csv('./data/youtube_lofi/reply_runs.csv', index=False, mode='a')
if len(message_list) < 1:
print('The world is ending!')
time.sleep(30)
```
Evaluating the performance of FFT2 and IFFT2 and checking for accuracy. <br><br>
Note that the FFTs from fft_utils perform the transformation in place to save memory.<br><br>
As a rule of thumb, it is good to increase the number of threads as the size of the transform increases, until one hits a limit. <br><br>
pyFFTW uses less memory and is slightly slower (compiling FFTW with icc might fix this; I haven't tried it).
```
import numpy as np
import matplotlib.pyplot as plt
#from multislice import fft_utils
import pyfftw,os
import scipy.fftpack as sfft
%load_ext memory_profiler
%run obj_fft
```
Loading libraries and the profiler to be used
```
N = 15000 #size of transform
t = 12 #number of threads.
```
Creating a test signal to perform on which we will perform 2D FFT
```
a= np.random.random((N,N))+1j*np.random.random((N,N))
print('time for numpy forward')
%timeit np.fft.fft2(a)
del(a)
a = np.random.random((N,N))+1j*np.random.random((N,N))
print('time for scipy forward')
%timeit sfft.fft2(a,overwrite_x='True')
del(a)
a = np.random.random((N,N))+1j*np.random.random((N,N))
fft_obj = FFT_2d_Obj(np.shape(a),direction='FORWARD',flag='PATIENT',threads=t)
print('time for pyFFTW forward')
%timeit fft_obj.run_fft2(a)
del(a)
a = np.random.random((N,N))+1j*np.random.random((N,N))
print('Memory for numpy forward')
%memit np.fft.fft2(a)
del(a)
a = np.random.random((N,N))+1j*np.random.random((N,N))
print('Memory for scipy forward')
%memit sfft.fft2(a,overwrite_x='True')
del(a)
a = np.random.random((N,N))+1j*np.random.random((N,N))
print('Memory for pyFFTW forward')
%memit fft_obj.run_fft2(a)
del(a)
```
The results depend on how the libraries are compiled: MKL-linked SciPy is fast, but FFTW uses less memory. Also note that the FFTW used in this test was not installed using icc.
Creating a test signal to perform on which we will perform 2D IFFT.
```
a= np.random.random((N,N))+1j*np.random.random((N,N))
print('time for numpy backward')
%timeit np.fft.ifft2(a)
del(a)
a = np.random.random((N,N))+1j*np.random.random((N,N))
print('time for scipy backward')
%timeit sfft.ifft2(a,overwrite_x='True')
del(a)
a = np.random.random((N,N))+1j*np.random.random((N,N))
del fft_obj
fft_obj = FFT_2d_Obj(np.shape(a),direction='BACKWARD',flag='PATIENT',threads=t)
print('time for pyFFTW backward')
%timeit fft_obj.run_ifft2(a)
del(a)
a = np.random.random((N,N))+1j*np.random.random((N,N))
print('Memory for numpy forward')
%memit np.fft.ifft2(a)
del(a)
a = np.random.random((N,N))+1j*np.random.random((N,N))
print('Memory for scipy forward')
%memit sfft.ifft2(a,overwrite_x='True')
del(a)
a = np.random.random((N,N))+1j*np.random.random((N,N))
print('Memory for pyFFTW backward')
%memit fft_obj.run_ifft2(a)
del(a)
```
The results depend on how the libraries are compiled: MKL-linked SciPy is fast, but FFTW uses less memory. Also note that the FFTW build used in this test was not compiled with icc.
Testing for accuracy of 2D FFT:
```
N = 5000
a = np.random.random((N,N)) + 1j*np.random.random((N,N))
fft_obj = FFT_2d_Obj(np.shape(a),threads=t)
A1 = np.fft.fft2(a)
fft_obj.run_fft2(a)
np.allclose(A1,a)
```
Testing for accuracy of 2D IFFT:
```
N = 5000
a = np.random.random((N,N)) + 1j*np.random.random((N,N))
A1 = np.fft.ifft2(a)
fft_obj.run_ifft2(a)
np.allclose(A1,a)
```
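The accuracy criterion above can also be checked independently of any particular library: a forward transform followed by its inverse should reproduce the input to floating-point precision. A NumPy-only sketch:

```python
import numpy as np

N = 128
a = np.random.random((N, N)) + 1j * np.random.random((N, N))

# ifft2(fft2(a)) should equal a up to rounding error
roundtrip = np.fft.ifft2(np.fft.fft2(a))
print(np.allclose(roundtrip, a))  # True
```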
```
%load_ext autoreload
%autoreload 2
import warnings
warnings.filterwarnings('ignore')
import math
from time import time
import pickle
import pandas as pd
import numpy as np
from sklearn.neural_network import MLPClassifier
from sklearn.ensemble import BaggingClassifier
from sklearn.metrics import accuracy_score, f1_score
import sys
sys.path.append('../src')
from preprocessing import *
from utils import *
from plotting import *
```
# Splitting the dataset
```
features = ['R1', 'R2', 'R3', 'R4', 'R5', 'R6', 'R7', 'R8', 'Temp.', 'Humidity',
'R1_mean', 'R2_mean', 'R3_mean', 'R4_mean', 'R5_mean', 'R6_mean', 'R7_mean',
'R8_mean', 'Temp._mean', 'Humidity_mean', 'R1_std', 'R2_std', 'R3_std', 'R4_std',
'R5_std', 'R6_std', 'R7_std', 'R8_std', 'Temp._std', 'Humidity_std']
df_db = group_datafiles_byID('../datasets/preprocessed/HT_Sensor_prep_metadata.dat', '../datasets/preprocessed/HT_Sensor_prep_dataset.dat')
df_db = reclassify_series_samples(df_db)
df_db.head()
df_train, df_test = split_series_byID(0.75, df_db)
df_train, df_test = norm_train_test(df_train, df_test)
features = ['R1', 'R2', 'R3', 'R4', 'R5', 'R6', 'R7', 'R8', 'Temp.', 'Humidity']
xtrain, ytrain = df_train[features].values, df_train['class'].values
xtest, ytest = df_test[features].values, df_test['class'].values
```
# Basic Neural Network
```
def printResults(n_hid_layers,n_neur,accuracy,elapsed):
print('========================================')
print('Number of hidden layers:', n_hid_layers)
print('Number of neurons per layer:', n_neur)
print('Accuracy:', accuracy)
print('Time (minutes):', (elapsed)/60)
def printScores(xtest,ytest,clf):
xback, yback = xtest[ytest=='background'], ytest[ytest=='background']
print('Background score:', clf.score(xback,yback))
xrest, yrest = xtest[ytest!='background'], ytest[ytest!='background']
print('Score on the rest:', clf.score(xrest,yrest))
num_back = len(yback)
num_wine = len(yrest[yrest=='wine'])
num_banana = len(yrest[yrest=='banana'])
func = lambda x: 1/num_back if x=='background' else (1/num_wine if x=='wine' else 1/num_banana)
weights = np.array([func(x) for x in ytest])
# Weighted score where the three classes count equally
print('Weighted score:', clf.score(xtest,ytest,weights))
print('========================================')
# NN with 2 hidden layers and 15 neurons per layer
xtrain, ytrain, xtest, ytest = split_train_test(df_db,0.75)
start = time()
clf = MLPClassifier(hidden_layer_sizes=(15,15))
clf.fit(xtrain,ytrain)
score = clf.score(xtest,ytest)
final = time()
printResults(2,15,score,final-start)
# Adding early stopping and more iterations
xtrain, ytrain, xtest, ytest = split_train_test(df_db,0.75)
start = time()
clf = MLPClassifier(hidden_layer_sizes=(15,15),early_stopping=True,max_iter=2000)
clf.fit(xtrain,ytrain)
score = clf.score(xtest,ytest)
final = time()
printResults(2,15,score,final-start)
# Score analysis
print('Background proportion:',len(ytest[ytest=='background'])/len(ytest))
printScores(xtest,ytest,clf)
```
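The weighting used in `printScores` makes each class contribute equally to the score regardless of its size: with per-sample weights of `1/class_count`, the weighted accuracy equals the unweighted mean of the per-class accuracies. A self-contained check of that equivalence (synthetic labels, no classifier involved):

```python
import numpy as np

ytest = np.array(['background'] * 8 + ['wine'] * 2 + ['banana'] * 2)
ypred = np.array(['background'] * 8 + ['wine', 'background'] + ['banana'] * 2)

# per-sample weight = 1 / (size of that sample's class)
counts = {c: np.sum(ytest == c) for c in np.unique(ytest)}
weights = np.array([1.0 / counts[c] for c in ytest])

weighted_acc = np.sum(weights * (ytest == ypred)) / np.sum(weights)
per_class_acc = [np.mean(ypred[ytest == c] == c) for c in np.unique(ytest)]
print(weighted_acc, np.mean(per_class_acc))  # the two values match
```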
There is too much bias toward the background class; it must be reduced even if the overall score drops.
# Removing excess of background
```
# prop: number of non-background examples per background example
def remove_bg(df,prop=2):
new_df = df[df['class']!='background'].copy()
useful_samples = new_df.shape[0]
new_df = new_df.append(df[df['class']=='background'].sample(n=int(useful_samples/prop)).copy())
return new_df
# To avoid the bias we drop background-labelled samples, but only from the train set
df_train, df_test = split_series_byID(0.75, df_db)
df_train, df_test = norm_train_test(df_train, df_test)
df_train = remove_bg(df_train)
features = ['R1', 'R2', 'R3', 'R4', 'R5', 'R6', 'R7', 'R8', 'Temp.', 'Humidity']
xtrain, ytrain = df_train[features].values, df_train['class'].values
xtest, ytest = df_test[features].values, df_test['class'].values
start = time()
clf = MLPClassifier(hidden_layer_sizes=(15,15),early_stopping=True,max_iter=2000)
clf.fit(xtrain,ytrain)
score = clf.score(xtest,ytest)
final = time()
printResults(2,15,score,final-start)
# Score analysis
printScores(xtest,ytest,clf)
```
Even when the training set contains as much background as banana or wine, a bias toward the background class remains.
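The undersampling idea behind `remove_bg` — keep every minority-class sample and draw only a subset of the majority class — can be seen on a toy frame. A minimal sketch (illustrative data; `pd.concat` is used here instead of the deprecated `DataFrame.append`):

```python
import pandas as pd

df = pd.DataFrame({'class': ['background'] * 100 + ['wine'] * 10 + ['banana'] * 10})

non_bg = df[df['class'] != 'background']
# sample as many background rows as there are non-background rows (prop = 1)
bg = df[df['class'] == 'background'].sample(n=len(non_bg), random_state=0)
balanced = pd.concat([non_bg, bg])
print(balanced['class'].value_counts())
```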
# Hyperparameter analysis
```
xtrain, ytrain, xtest, ytest = split_train_test(df_db,0.75)
start_total = time()
for n_hid_layers in range(2,5):
for n_neur in [10,20,40]:
tup = []
for i in range(n_hid_layers):
tup.append(n_neur)
tup = tuple(tup)
start = time()
clf_nn = MLPClassifier(
hidden_layer_sizes = tup,
max_iter=2000,
early_stopping=True
)
clf_nn.fit(xtrain, ytrain)
ypred = clf_nn.predict(xtest)
final = time()
metric_report(ytest, ypred)
print('\n====> Elapsed time (minutes):', (final-start)/(60))
print('Number of hidden layers:', n_hid_layers)
print('Number of neurons per layer:', n_neur)
end_total = time()
print('\n====> Total elapsed time (hours):', (end_total-start_total)/(60*60))
```
# Two Neural Networks
## 1. Classify background
```
def printScoresBack(xtest,ytest,clf):
xback, yback = xtest[ytest=='background'], ytest[ytest=='background']
print('Background score:', clf.score(xback,yback))
xrest, yrest = xtest[ytest!='background'], ytest[ytest!='background']
print('Score on the rest:', clf.score(xrest,yrest))
num_back = len(yback)
num_rest = len(ytest)-num_back
func = lambda x: 1/num_back if x=='background' else 1/num_rest
weights = np.array([func(x) for x in ytest])
# Weighted score where both classes count equally
print('Weighted score:', clf.score(xtest,ytest,weights))
print('========================================')
df_db = group_datafiles_byID('../datasets/raw/HT_Sensor_metadata.dat', '../datasets/raw/HT_Sensor_dataset.dat')
df_db = reclassify_series_samples(df_db)
df_db.loc[df_db['class']!='background','class'] = 'not-background'
df_db[df_db['class']!='background'].head()
# First, try without removing the excess background
df_train, df_test = split_series_byID(0.75, df_db)
df_train, df_test = norm_train_test(df_train, df_test)
features = ['R1', 'R2', 'R3', 'R4', 'R5', 'R6', 'R7', 'R8', 'Temp.', 'Humidity']
xtrain, ytrain = df_train[features].values, df_train['class'].values
xtest, ytest = df_test[features].values, df_test['class'].values
start_total = time()
for n_hid_layers in range(2,5):
for n_neur in [10,20,40]:
tup = []
for i in range(n_hid_layers):
tup.append(n_neur)
tup = tuple(tup)
start = time()
clf_nn = MLPClassifier(
hidden_layer_sizes = tup,
max_iter=2000,
early_stopping=True
)
clf_nn.fit(xtrain, ytrain)
ypred = clf_nn.predict(xtest)
final = time()
metric_report(ytest, ypred)
print('\n====> Elapsed time (minutes):', (final-start)/(60))
end_total = time()
print('\n====> Total elapsed time (hours):', (end_total-start_total)/(60*60))
# More than half of the non-background samples are misclassified.
# Let's check whether removing background fixes it.
# Now the same thing, removing the excess background
df_train, df_test = split_series_byID(0.75, df_db)
df_train = remove_bg(df_train,prop=1)
df_train, df_test = norm_train_test(df_train, df_test)
features = ['R1', 'R2', 'R3', 'R4', 'R5', 'R6', 'R7', 'R8', 'Temp.', 'Humidity']
xtrain, ytrain = df_train[features].values, df_train['class'].values
xtest, ytest = df_test[features].values, df_test['class'].values
start_total = time()
for n_hid_layers in range(2,5):
for n_neur in [10,20,40]:
tup = []
for i in range(n_hid_layers):
tup.append(n_neur)
tup = tuple(tup)
start = time()
clf_nn = MLPClassifier(
hidden_layer_sizes = tup,
max_iter=2000,
early_stopping=True,
shuffle=True
)
clf_nn.fit(xtrain, ytrain)
score = clf_nn.score(xtest, ytest)
final = time()
printResults(n_hid_layers,n_neur,score,final-start)
printScoresBack(xtest,ytest,clf_nn)
end_total = time()
print('\n====> Total elapsed time (hours):', (end_total-start_total)/(60*60))
```
## 2. Classify wine and bananas
```
df_db = group_datafiles_byID('../datasets/raw/HT_Sensor_metadata.dat', '../datasets/raw/HT_Sensor_dataset.dat')
df_db = reclassify_series_samples(df_db)
df_db = df_db[df_db['class']!='background']
df_db.head()
xtrain, ytrain, xtest, ytest = split_train_test(df_db,0.75)
start_total = time()
for n_hid_layers in range(1,5):
for n_neur in [5,10,15,20,40]:
tup = []
for i in range(n_hid_layers):
tup.append(n_neur)
tup = tuple(tup)
start = time()
clf_nn = MLPClassifier(
hidden_layer_sizes = tup,
max_iter=2000,
early_stopping=True,
shuffle=True
)
clf_nn.fit(xtrain, ytrain)
score = clf_nn.score(xtest, ytest)
final = time()
printResults(n_hid_layers,n_neur,score,final-start)
end_total = time()
print('\n====> Total elapsed time (hours):', (end_total-start_total)/(60*60))
```
# 3. Merge the 2 NN
```
class doubleNN:
def __init__(self, n_hid_layers, n_neur):
self.hid_layers = n_hid_layers
self.neur = n_neur
tup = []
for i in range(n_hid_layers):
tup.append(n_neur)
tup = tuple(tup)
self.backNN = MLPClassifier(
hidden_layer_sizes = tup,
max_iter=2000,
early_stopping=True,
shuffle=True
)
self.wineNN = MLPClassifier(
hidden_layer_sizes = tup,
max_iter=2000,
early_stopping=True,
shuffle=True
)
def fit_bg(self, xtrain, ytrain):
ytrain_copy = np.array([x if x=='background' else 'not-background' for x in ytrain])
self.backNN.fit(xtrain, ytrain_copy)
def fit_wine(self,xtrain,ytrain):
self.wineNN.fit(xtrain, ytrain)
def predict(self,xtest):
ypred = self.backNN.predict(xtest)
ypred[ypred=='not-background'] = self.wineNN.predict(xtest[ypred=='not-background'])
return ypred
def score(self,xtest,ytest):
ypred = self.predict(xtest)
score = np.sum(np.equal(ypred,ytest))/len(ytest)
return score
# With all the background
xtrain, ytrain, xtest, ytest = split_train_test(df_db,0.75)
start_total = time()
for n_hid_layers in range(2,4):
for n_neur in [10,20]:
tup = []
for i in range(n_hid_layers):
tup.append(n_neur)
tup = tuple(tup)
start = time()
clf_nn = doubleNN(n_hid_layers,n_neur)
clf_nn.fit_bg(xtrain, ytrain)
xtrain_notbg = xtrain[ytrain != 'background']
ytrain_notbg = ytrain[ytrain != 'background']
clf_nn.fit_wine(xtrain_notbg, ytrain_notbg)
ypred = clf_nn.predict(xtest)
final = time()
metric_report(ytest, ypred)
print('\n====> Elapsed time (minutes):', (final-start)/(60))
print('Number of hidden layers:', n_hid_layers)
print('Number of neurons per layer:', n_neur)
end_total = time()
print('\n====> Total elapsed time (hours):', (end_total-start_total)/(60*60))
# Removing background
df_train, df_test = split_series_byID(0.75, df_db)
df_train = remove_bg(df_train,prop=1)
features = ['R1', 'R2', 'R3', 'R4', 'R5', 'R6', 'R7', 'R8', 'Temp.', 'Humidity']
xtrain, ytrain = df_train[features].values, df_train['class'].values
xtest, ytest = df_test[features].values, df_test['class'].values
start_total = time()
for n_hid_layers in range(2,4):
for n_neur in [10,20]:
tup = []
for i in range(n_hid_layers):
tup.append(n_neur)
tup = tuple(tup)
start = time()
clf_nn = doubleNN(n_hid_layers,n_neur)
clf_nn.fit_bg(xtrain, ytrain)
xtrain_notbg = xtrain[ytrain != 'background']
ytrain_notbg = ytrain[ytrain != 'background']
clf_nn.fit_wine(xtrain_notbg, ytrain_notbg)
ypred = clf_nn.predict(xtest)
final = time()
metric_report(ytest, ypred)
print('\n====> Elapsed time (minutes):', (final-start)/(60))
print('Number of hidden layers:', n_hid_layers)
print('Number of neurons per layer:', n_neur)
end_total = time()
print('\n====> Total elapsed time (hours):', (end_total-start_total)/(60*60))
```
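The key step in `doubleNN.predict` is the boolean-mask handoff: the first model labels each sample background vs. not-background, and the second model relabels only the not-background positions. The mechanics can be checked without any trained networks (stub predictions stand in for the two `MLPClassifier`s):

```python
import numpy as np

# stage 1: background-detector output
ypred = np.array(['background', 'not-background', 'background', 'not-background'])
# stage 2: wine/banana classifier output, applied only where stage 1 said not-background
stage2 = np.array(['wine', 'banana'])

mask = ypred == 'not-background'
ypred[mask] = stage2  # same masked assignment as in doubleNN.predict
print(ypred)  # ['background' 'wine' 'background' 'banana']
```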
# Creating Windows
```
# with open('../datasets/preprocessed/window120_dataset.pkl', 'wb') as f:
# pickle.dump(win_df, f)
win_df = pd.read_pickle('../datasets/preprocessed/window120_dataset.pkl')
xtrain, ytrain, xtest, ytest = split_train_test(win_df,0.75)
start = time()
clf_nn = MLPClassifier(
hidden_layer_sizes = (32,16),
max_iter=2000,
early_stopping=True,
shuffle=True,
alpha=0.01,
learning_rate_init=0.01
)
clf_nn.fit(xtrain, ytrain)
ypred = clf_nn.predict(xtest)
final = time()
metric_report(ytest, ypred)
print('\n====> Elapsed time (minutes):', (final-start)/(60))
features = ['R1', 'R2', 'R3', 'R4', 'R5', 'R6', 'R7', 'R8', 'Temp.', 'Humidity',
'R1_mean', 'R2_mean', 'R3_mean', 'R4_mean', 'R5_mean', 'R6_mean', 'R7_mean',
'R8_mean', 'Temp._mean', 'Humidity_mean', 'R1_std', 'R2_std', 'R3_std', 'R4_std',
'R5_std', 'R6_std', 'R7_std', 'R8_std', 'Temp._std', 'Humidity_std']
# Vary several hyperparameters on the windowed data and print the most relevant results
def hyper_sim(win_df,num_val,n_hid_layers,n_neur,alpha):
errs_acc = []
errs_f1 = []
rec_ban = []
loss = []
for i in range(num_val):
df_train, df_test = split_series_byID(0.75, win_df)
df_train, df_test = norm_train_test(df_train,df_test,features_to_norm=features)
xtrain, ytrain = df_train[features].values, df_train['class'].values
xtest, ytest = df_test[features].values, df_test['class'].values
tup = []
for i in range(n_hid_layers):
tup.append(n_neur)
tup = tuple(tup)
clf_nn = MLPClassifier(
hidden_layer_sizes=tup,
max_iter=2000,
early_stopping=True,
shuffle=True,
alpha=alpha,
learning_rate='adaptive'
)
clf_nn.fit(xtrain, ytrain)
ypred = clf_nn.predict(xtest)
errs_acc.append(accuracy_score(ytest,ypred))
errs_f1.append(f1_score(ytest,ypred,average='weighted'))
rec_ban.append(np.sum(np.logical_and(ytest=='banana',ypred=='banana'))/np.sum(ytest=='banana'))
loss.append(clf_nn.loss_)
errs_acc = np.array(errs_acc)
errs_f1 = np.array(errs_f1)
rec_ban = np.array(rec_ban)
loss = np.array(loss)
print('Train loss:',np.mean(loss),'+-',np.std(loss))
print('Accuracy:',np.mean(errs_acc),'+-',np.std(errs_acc))
print('F1-score:',np.mean(errs_f1),'+-',np.std(errs_f1))
print('Recall bananas:',np.mean(rec_ban),'+-',np.std(rec_ban))
for alpha in [0.1,0.01,0.001]:
print('<<<<<<<<<<<<<<<<<<<<<<<<<<<<<>>>>>>>>>>>>>>>>>')
print('Alpha:',alpha)
for n_hid_layers in range(1,4):
print('##############################################')
print('\t Hidden layers:',n_hid_layers)
for n_neur in [4,8,16]:
print('==============================================')
print('\t \t Neurons per layer:',n_neur)
hyper_sim(win_df,3,n_hid_layers,n_neur,alpha)
print('==============================================')
# Chosen hyperparameters:
# alpha: 0.01
# hidden_layers: 3
# n_neurons: 4
features = ['R1', 'R2', 'R3', 'R4', 'R5', 'R6', 'R7', 'R8', 'Temp.', 'Humidity',
'R1_mean', 'R2_mean', 'R3_mean', 'R4_mean', 'R5_mean', 'R6_mean', 'R7_mean',
'R8_mean', 'Temp._mean', 'Humidity_mean', 'R1_std', 'R2_std', 'R3_std', 'R4_std',
'R5_std', 'R6_std', 'R7_std', 'R8_std', 'Temp._std', 'Humidity_std']
errs_acc = []
errs_f1 = []
rec_ban = []
for i in range(5):
df_train, df_test = split_series_byID(0.75, win_df)
df_train, df_test = norm_train_test(df_train,df_test,features_to_norm=features)
xtrain, ytrain = df_train[features].values, df_train['class'].values
xtest, ytest = df_test[features].values, df_test['class'].values
clf_nn = MLPClassifier(
hidden_layer_sizes=(4,4,4),
max_iter=2000,
early_stopping=True,
shuffle=True,
alpha=0.01,
learning_rate='adaptive'
)
bag = BaggingClassifier(base_estimator=clf_nn,n_estimators=100,n_jobs=3)
bag.fit(xtrain, ytrain)
ypred = bag.predict(xtest)
metric_report(ytest, ypred)
errs_acc.append(accuracy_score(ytest,ypred))
errs_f1.append(f1_score(ytest,ypred,average='weighted'))
rec_ban.append(np.sum(np.logical_and(ytest=='banana',ypred=='banana'))/np.sum(ytest=='banana'))
errs_acc = np.array(errs_acc)
errs_f1 = np.array(errs_f1)
rec_ban = np.array(rec_ban)
print('Accuracy:',np.mean(errs_acc),'+-',np.std(errs_acc))
print('F1-score:',np.mean(errs_f1),'+-',np.std(errs_f1))
print('Recall bananas:',np.mean(rec_ban),'+-',np.std(rec_ban))
with open('../datasets/preprocessed/nn_optimal.pkl', 'wb') as f:
pickle.dump(bag, f)
```
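The pickled ensemble can later be reloaded with `pickle.load` and used exactly like the in-memory object. A toy round-trip with a small bagging model (synthetic data and a temporary file — both illustrative, not the notebook's actual dataset or path):

```python
import os
import pickle
import tempfile

import numpy as np
from sklearn.ensemble import BaggingClassifier

rng = np.random.RandomState(0)
X = rng.rand(60, 4)
y = (X[:, 0] > 0.5).astype(int)

bag = BaggingClassifier(n_estimators=5, random_state=0).fit(X, y)

# write the trained model out, read it back, and compare predictions
with tempfile.NamedTemporaryFile(suffix='.pkl', delete=False) as f:
    pickle.dump(bag, f)
    path = f.name
with open(path, 'rb') as f:
    restored = pickle.load(f)
os.remove(path)

print(np.array_equal(bag.predict(X), restored.predict(X)))  # True
```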
```
import numpy as np
import tensorflow as tf
from sklearn.utils import shuffle
import re
import time
import collections
import os
def build_dataset(words, n_words, atleast=1):
count = [['PAD', 0], ['GO', 1], ['EOS', 2], ['UNK', 3]]
counter = collections.Counter(words).most_common(n_words)
counter = [i for i in counter if i[1] >= atleast]
count.extend(counter)
dictionary = dict()
for word, _ in count:
dictionary[word] = len(dictionary)
data = list()
unk_count = 0
for word in words:
index = dictionary.get(word, 0)
if index == 0:
unk_count += 1
data.append(index)
count[0][1] = unk_count
reversed_dictionary = dict(zip(dictionary.values(), dictionary.keys()))
return data, count, dictionary, reversed_dictionary
lines = open('movie_lines.txt', encoding='utf-8', errors='ignore').read().split('\n')
conv_lines = open('movie_conversations.txt', encoding='utf-8', errors='ignore').read().split('\n')
id2line = {}
for line in lines:
_line = line.split(' +++$+++ ')
if len(_line) == 5:
id2line[_line[0]] = _line[4]
convs = [ ]
for line in conv_lines[:-1]:
_line = line.split(' +++$+++ ')[-1][1:-1].replace("'","").replace(" ","")
convs.append(_line.split(','))
questions = []
answers = []
for conv in convs:
for i in range(len(conv)-1):
questions.append(id2line[conv[i]])
answers.append(id2line[conv[i+1]])
def clean_text(text):
text = text.lower()
text = re.sub(r"i'm", "i am", text)
text = re.sub(r"he's", "he is", text)
text = re.sub(r"she's", "she is", text)
text = re.sub(r"it's", "it is", text)
text = re.sub(r"that's", "that is", text)
text = re.sub(r"what's", "what is", text)
text = re.sub(r"where's", "where is", text)
text = re.sub(r"how's", "how is", text)
text = re.sub(r"\'ll", " will", text)
text = re.sub(r"\'ve", " have", text)
text = re.sub(r"\'re", " are", text)
text = re.sub(r"\'d", " would", text)
text = re.sub(r"won't", "will not", text)
text = re.sub(r"can't", "cannot", text)
text = re.sub(r"n't", " not", text)
text = re.sub(r"n'", "ng", text)
text = re.sub(r"'bout", "about", text)
text = re.sub(r"'til", "until", text)
text = re.sub(r"[-()\"#/@;:<>{}`+=~|.!?,]", "", text)
return ' '.join([i.strip() for i in filter(None, text.split())])
clean_questions = []
for question in questions:
clean_questions.append(clean_text(question))
clean_answers = []
for answer in answers:
clean_answers.append(clean_text(answer))
min_line_length = 2
max_line_length = 5
short_questions_temp = []
short_answers_temp = []
i = 0
for question in clean_questions:
if len(question.split()) >= min_line_length and len(question.split()) <= max_line_length:
short_questions_temp.append(question)
short_answers_temp.append(clean_answers[i])
i += 1
short_questions = []
short_answers = []
i = 0
for answer in short_answers_temp:
if len(answer.split()) >= min_line_length and len(answer.split()) <= max_line_length:
short_answers.append(answer)
short_questions.append(short_questions_temp[i])
i += 1
question_test = short_questions[500:550]
answer_test = short_answers[500:550]
short_questions = short_questions[:500]
short_answers = short_answers[:500]
concat_from = ' '.join(short_questions+question_test).split()
vocabulary_size_from = len(list(set(concat_from)))
data_from, count_from, dictionary_from, rev_dictionary_from = build_dataset(concat_from, vocabulary_size_from)
print('vocab from size: %d'%(vocabulary_size_from))
print('Most common words', count_from[4:10])
print('Sample data', data_from[:10], [rev_dictionary_from[i] for i in data_from[:10]])
print('filtered vocab size:',len(dictionary_from))
print("% of vocab used: {}%".format(round(len(dictionary_from)/vocabulary_size_from,4)*100))
concat_to = ' '.join(short_answers+answer_test).split()
vocabulary_size_to = len(list(set(concat_to)))
data_to, count_to, dictionary_to, rev_dictionary_to = build_dataset(concat_to, vocabulary_size_to)
print('vocab from size: %d'%(vocabulary_size_to))
print('Most common words', count_to[4:10])
print('Sample data', data_to[:10], [rev_dictionary_to[i] for i in data_to[:10]])
print('filtered vocab size:',len(dictionary_to))
print("% of vocab used: {}%".format(round(len(dictionary_to)/vocabulary_size_to,4)*100))
GO = dictionary_from['GO']
PAD = dictionary_from['PAD']
EOS = dictionary_from['EOS']
UNK = dictionary_from['UNK']
for i in range(len(short_answers)):
short_answers[i] += ' EOS'
class Chatbot:
def __init__(self, size_layer, num_layers, embedded_size,
from_dict_size, to_dict_size, learning_rate,
batch_size, dropout = 0.5, beam_width = 15):
def lstm_cell(size, reuse=False):
return tf.nn.rnn_cell.GRUCell(size, reuse=reuse)  # note: a GRU cell, despite the helper's name
self.X = tf.placeholder(tf.int32, [None, None])
self.Y = tf.placeholder(tf.int32, [None, None])
self.X_seq_len = tf.count_nonzero(self.X, 1, dtype=tf.int32)
self.Y_seq_len = tf.count_nonzero(self.Y, 1, dtype=tf.int32)
batch_size = tf.shape(self.X)[0]
# encoder
encoder_embeddings = tf.Variable(tf.random_uniform([from_dict_size, embedded_size], -1, 1))
encoder_embedded = tf.nn.embedding_lookup(encoder_embeddings, self.X)
for n in range(num_layers):
(out_fw, out_bw), (state_fw, state_bw) = tf.nn.bidirectional_dynamic_rnn(
cell_fw = lstm_cell(size_layer // 2),
cell_bw = lstm_cell(size_layer // 2),
inputs = encoder_embedded,
sequence_length = self.X_seq_len,
dtype = tf.float32,
scope = 'bidirectional_rnn_%d'%(n))
encoder_embedded = tf.concat((out_fw, out_bw), 2)
bi_state = tf.concat((state_fw, state_bw), -1)
self.encoder_state = tuple([bi_state] * num_layers)
self.encoder_state = tuple(self.encoder_state[-1] for _ in range(num_layers))
main = tf.strided_slice(self.Y, [0, 0], [batch_size, -1], [1, 1])
decoder_input = tf.concat([tf.fill([batch_size, 1], GO), main], 1)
# decoder
decoder_embeddings = tf.Variable(tf.random_uniform([to_dict_size, embedded_size], -1, 1))
decoder_cells = tf.nn.rnn_cell.MultiRNNCell([lstm_cell(size_layer) for _ in range(num_layers)])
dense_layer = tf.layers.Dense(to_dict_size)
training_helper = tf.contrib.seq2seq.TrainingHelper(
inputs = tf.nn.embedding_lookup(decoder_embeddings, decoder_input),
sequence_length = self.Y_seq_len,
time_major = False)
training_decoder = tf.contrib.seq2seq.BasicDecoder(
cell = decoder_cells,
helper = training_helper,
initial_state = self.encoder_state,
output_layer = dense_layer)
training_decoder_output, _, _ = tf.contrib.seq2seq.dynamic_decode(
decoder = training_decoder,
impute_finished = True,
maximum_iterations = tf.reduce_max(self.Y_seq_len))
predicting_helper = tf.contrib.seq2seq.GreedyEmbeddingHelper(
embedding = decoder_embeddings,
start_tokens = tf.tile(tf.constant([GO], dtype=tf.int32), [batch_size]),
end_token = EOS)
predicting_decoder = tf.contrib.seq2seq.BasicDecoder(
cell = decoder_cells,
helper = predicting_helper,
initial_state = self.encoder_state,
output_layer = dense_layer)
predicting_decoder_output, _, _ = tf.contrib.seq2seq.dynamic_decode(
decoder = predicting_decoder,
impute_finished = True,
maximum_iterations = 2 * tf.reduce_max(self.X_seq_len))
self.training_logits = training_decoder_output.rnn_output
self.predicting_ids = predicting_decoder_output.sample_id
masks = tf.sequence_mask(self.Y_seq_len, tf.reduce_max(self.Y_seq_len), dtype=tf.float32)
self.cost = tf.contrib.seq2seq.sequence_loss(logits = self.training_logits,
targets = self.Y,
weights = masks)
self.optimizer = tf.train.AdamOptimizer(learning_rate).minimize(self.cost)
y_t = tf.argmax(self.training_logits,axis=2)
y_t = tf.cast(y_t, tf.int32)
self.prediction = tf.boolean_mask(y_t, masks)
mask_label = tf.boolean_mask(self.Y, masks)
correct_pred = tf.equal(self.prediction, mask_label)
correct_index = tf.cast(correct_pred, tf.float32)
self.accuracy = tf.reduce_mean(tf.cast(correct_pred, tf.float32))
size_layer = 256
num_layers = 2
embedded_size = 128
learning_rate = 0.001
batch_size = 16
epoch = 20
tf.reset_default_graph()
sess = tf.InteractiveSession()
model = Chatbot(size_layer, num_layers, embedded_size, len(dictionary_from),
len(dictionary_to), learning_rate,batch_size)
sess.run(tf.global_variables_initializer())
def str_idx(corpus, dic):
X = []
for i in corpus:
ints = []
for k in i.split():
ints.append(dic.get(k,UNK))
X.append(ints)
return X
X = str_idx(short_questions, dictionary_from)
Y = str_idx(short_answers, dictionary_to)
X_test = str_idx(question_test, dictionary_from)
Y_test = str_idx(answer_test, dictionary_to)
def pad_sentence_batch(sentence_batch, pad_int):
padded_seqs = []
seq_lens = []
max_sentence_len = max([len(sentence) for sentence in sentence_batch])
for sentence in sentence_batch:
padded_seqs.append(sentence + [pad_int] * (max_sentence_len - len(sentence)))
seq_lens.append(len(sentence))
return padded_seqs, seq_lens
for i in range(epoch):
total_loss, total_accuracy = 0, 0
for k in range(0, len(short_questions), batch_size):
index = min(k+batch_size, len(short_questions))
batch_x, seq_x = pad_sentence_batch(X[k: index], PAD)
batch_y, seq_y = pad_sentence_batch(Y[k: index ], PAD)
predicted, accuracy,loss, _ = sess.run([model.predicting_ids,
model.accuracy, model.cost, model.optimizer],
feed_dict={model.X:batch_x,
model.Y:batch_y})
total_loss += loss
total_accuracy += accuracy
total_loss /= (len(short_questions) / batch_size)
total_accuracy /= (len(short_questions) / batch_size)
print('epoch: %d, avg loss: %f, avg accuracy: %f'%(i+1, total_loss, total_accuracy))
for i in range(len(batch_x)):
print('row %d'%(i+1))
print('QUESTION:',' '.join([rev_dictionary_from[n] for n in batch_x[i] if n not in [0,1,2,3]]))
print('REAL ANSWER:',' '.join([rev_dictionary_to[n] for n in batch_y[i] if n not in[0,1,2,3]]))
print('PREDICTED ANSWER:',' '.join([rev_dictionary_to[n] for n in predicted[i] if n not in[0,1,2,3]]),'\n')
batch_x, seq_x = pad_sentence_batch(X_test[:batch_size], PAD)
batch_y, seq_y = pad_sentence_batch(Y_test[:batch_size], PAD)
predicted = sess.run(model.predicting_ids, feed_dict={model.X:batch_x,model.X_seq_len:seq_x})
for i in range(len(batch_x)):
print('row %d'%(i+1))
print('QUESTION:',' '.join([rev_dictionary_from[n] for n in batch_x[i] if n not in [0,1,2,3]]))
print('REAL ANSWER:',' '.join([rev_dictionary_to[n] for n in batch_y[i] if n not in[0,1,2,3]]))
print('PREDICTED ANSWER:',' '.join([rev_dictionary_to[n] for n in predicted[i] if n not in[0,1,2,3]]),'\n')
```
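`build_dataset` above maps each word to an integer id, reserving ids 0–3 for the PAD/GO/EOS/UNK control tokens; words missing from the dictionary fall back to index 0 (a quirk of the original, which reuses the PAD slot rather than UNK). A condensed, self-contained variant shows the idea:

```python
import collections

def mini_build_dataset(words, n_words):
    # reserve control tokens first, then the most common words
    count = [['PAD', 0], ['GO', 1], ['EOS', 2], ['UNK', 3]]
    count.extend(collections.Counter(words).most_common(n_words))
    dictionary = {word: idx for idx, (word, _) in enumerate(count)}
    # unknown words map to index 0, mirroring the original's fallback
    data = [dictionary.get(word, 0) for word in words]
    reversed_dictionary = {idx: word for word, idx in dictionary.items()}
    return data, dictionary, reversed_dictionary

data, d, rd = mini_build_dataset('the cat sat on the mat'.split(), n_words=3)
print(d['the'], rd[d['the']])  # 4 the
```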
<a href="https://colab.research.google.com/github/jeffheaton/t81_558_deep_learning/blob/master/t81_558_class_12_04_atari.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a>
# T81-558: Applications of Deep Neural Networks
**Module 12: Reinforcement Learning**
* Instructor: [Jeff Heaton](https://sites.wustl.edu/jeffheaton/), McKelvey School of Engineering, [Washington University in St. Louis](https://engineering.wustl.edu/Programs/Pages/default.aspx)
* For more information visit the [class website](https://sites.wustl.edu/jeffheaton/t81-558/).
# Module 12 Video Material
* Part 12.1: Introduction to the OpenAI Gym [[Video]](https://www.youtube.com/watch?v=_KbUxgyisjM&list=PLjy4p-07OYzulelvJ5KVaT2pDlxivl_BN) [[Notebook]](t81_558_class_12_01_ai_gym.ipynb)
* Part 12.2: Introduction to Q-Learning [[Video]](https://www.youtube.com/watch?v=A3sYFcJY3lA&list=PLjy4p-07OYzulelvJ5KVaT2pDlxivl_BN) [[Notebook]](t81_558_class_12_02_qlearningreinforcement.ipynb)
* Part 12.3: Keras Q-Learning in the OpenAI Gym [[Video]](https://www.youtube.com/watch?v=qy1SJmsRhvM&list=PLjy4p-07OYzulelvJ5KVaT2pDlxivl_BN) [[Notebook]](t81_558_class_12_03_keras_reinforce.ipynb)
* **Part 12.4: Atari Games with Keras Neural Networks** [[Video]](https://www.youtube.com/watch?v=co0SwPWoZh0&list=PLjy4p-07OYzulelvJ5KVaT2pDlxivl_BN) [[Notebook]](t81_558_class_12_04_atari.ipynb)
* Part 12.5: Application of Reinforcement Learning [[Video]](https://www.youtube.com/watch?v=1jQPP3RfwMI&list=PLjy4p-07OYzulelvJ5KVaT2pDlxivl_BN) [[Notebook]](t81_558_class_12_05_apply_rl.ipynb)
# Google CoLab Instructions
The following code ensures that Google CoLab is running the correct version of TensorFlow, and has the necessary Python libraries installed.
```
try:
from google.colab import drive
%tensorflow_version 2.x
COLAB = True
print("Note: using Google CoLab")
except:
print("Note: not using Google CoLab")
COLAB = False
if COLAB:
!sudo apt-get install -y xvfb ffmpeg
!pip install -q 'gym==0.10.11'
!pip install -q 'imageio==2.4.0'
!pip install -q PILLOW
!pip install -q 'pyglet==1.3.2'
!pip install -q pyvirtualdisplay
!pip install -q --upgrade tensorflow-probability
!pip install -q tf-agents
```
# Part 12.4: Atari Games with Keras Neural Networks
The Atari 2600 is a home video game console from Atari, Inc., released on September 11, 1977. It is credited with popularizing microprocessor-based hardware and games stored on ROM cartridges, instead of dedicated hardware with games physically built into the unit. The 2600 was bundled with two joystick controllers, a conjoined pair of paddle controllers, and a game cartridge: initially [Combat](https://en.wikipedia.org/wiki/Combat_(Atari_2600)), and later [Pac-Man](https://en.wikipedia.org/wiki/Pac-Man_(Atari_2600)).
Atari emulators are popular and allow many of the old Atari video games to be played on modern computers. Some even run in JavaScript.
* [Virtual Atari](http://www.virtualatari.org/listP.html)
Atari games have become popular benchmarks for AI systems, particularly reinforcement learning. OpenAI Gym internally uses the [Stella Atari Emulator](https://stella-emu.github.io/). The Atari 2600 is shown in Figure 12.ATARI.
**Figure 12.ATARI: The Atari 2600**

### Actual Atari 2600 Specs
* CPU: 1.19 MHz MOS Technology 6507
* Audio + Video processor: Television Interface Adapter (TIA)
* Playfield resolution: 40 x 192 pixels (NTSC). Uses a 20-pixel register that is mirrored or copied, left side to right side, to achieve the width of 40 pixels.
* Player sprites: 8 x 192 pixels (NTSC). Player, ball, and missile sprites use pixels that are 1/4 the width of playfield pixels (unless stretched).
* Ball and missile sprites: 1 x 192 pixels (NTSC).
* Maximum resolution: 160 x 192 pixels (NTSC). Max resolution is only somewhat achievable with programming tricks that combine sprite pixels with playfield pixels.
* 128 colors (NTSC). 128 possible on screen. Max of 4 per line: background, playfield, player0 sprite, and player1 sprite. Palette switching between lines is common. Palette switching mid line is possible but not common due to resource limitations.
* 2 channels of 1-bit monaural sound with 4-bit volume control.
### OpenAI Lab Atari Pong
OpenAI Gym can be used with Windows; however, it requires a special [installation procedure](https://towardsdatascience.com/how-to-install-openai-gym-in-a-windows-environment-338969e24d30).
This chapter demonstrates playing [Atari Pong](https://github.com/wau/keras-rl2/blob/master/examples/dqn_atari.py). Pong is a two-dimensional sports game that simulates table tennis. The player controls an in-game paddle by moving it vertically along the left or right side of the screen and competes against another player controlling a second paddle on the opposing side. Players use the paddles to hit a ball back and forth. The goal is to reach eleven points before the opponent; a player earns a point when the opponent fails to return the ball. In the Atari 2600 version of Pong, a computer player (controlled by the 2600) is the opposing player.
This section shows how to adapt TF-Agents to an Atari game. Some changes are necessary when compared to the pole-cart game presented earlier in this chapter. You can quickly adapt this example to any Atari game by simply changing the environment name. However, I tuned the code presented here for Pong, and it may not perform as well for other games. Some tuning will likely be necessary to produce a good agent for other games.
We begin by importing the needed Python packages.
```
import base64
import imageio
import IPython
import matplotlib
import matplotlib.pyplot as plt
import numpy as np
import PIL.Image
import pyvirtualdisplay
import tensorflow as tf
from tf_agents.agents.dqn import dqn_agent
from tf_agents.drivers import dynamic_step_driver
from tf_agents.environments import suite_gym, suite_atari
from tf_agents.environments import tf_py_environment, batched_py_environment
from tf_agents.eval import metric_utils
from tf_agents.metrics import tf_metrics
from tf_agents.networks import q_network
from tf_agents.policies import random_tf_policy
from tf_agents.replay_buffers import tf_uniform_replay_buffer
from tf_agents.trajectories import trajectory
from tf_agents.utils import common
from tf_agents.specs import tensor_spec
from tf_agents.trajectories import time_step as ts
# Set up a virtual display for rendering OpenAI gym environments.
display = pyvirtualdisplay.Display(visible=0, size=(1400, 900)).start()
```
## Hyperparameters
The hyperparameter names are the same as the previous DQN example; however, I tuned the numeric values for the more complex Atari game.
```
num_iterations = 250000
initial_collect_steps = 200
collect_steps_per_iteration = 10
replay_buffer_max_length = 100000
batch_size = 32
learning_rate = 2.5e-3
log_interval = 5000
num_eval_episodes = 5
eval_interval = 25000
```
The algorithm needs more iterations for an Atari game. I also found that increasing the number of collection steps helped the algorithm to train.
## Atari Environments
You must handle Atari environments differently than games like cart-pole. Atari games typically use their 2D displays as the environment state. OpenAI Gym represents Atari games as either a 3D (height by width by color) state space based on their screens, or a vector representing the state of the game's computer RAM. To preprocess Atari games for greater computational efficiency, we generally skip several frames, decrease the resolution, and discard color information. The following code shows how we can set up an Atari environment.
```
#env_name = 'Breakout-v4'
env_name = 'Pong-v0'
#env_name = 'BreakoutDeterministic-v4'
#env = suite_gym.load(env_name)
# AtariPreprocessing runs 4 frames at a time, max-pooling over the last 2
# frames. We need to account for this when computing things like update
# intervals.
ATARI_FRAME_SKIP = 4
max_episode_frames=108000 # ALE frames
env = suite_atari.load(
env_name,
max_episode_steps=max_episode_frames / ATARI_FRAME_SKIP,
gym_env_wrappers=suite_atari.DEFAULT_ATARI_GYM_WRAPPERS_WITH_STACKING)
#env = batched_py_environment.BatchedPyEnvironment([env])
```
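The wrappers above perform the frame preprocessing internally; purely as an illustration, the grayscale conversion and downsampling steps can be sketched in NumPy. The 84x84 target size is an assumption here (the conventional choice in the DQN literature), not something taken from this code.

```python
import numpy as np

def preprocess_frame(frame, out_h=84, out_w=84):
    """Convert a raw RGB Atari frame to a small grayscale image.

    frame: uint8 array of shape (210, 160, 3), the raw Atari screen.
    Returns a uint8 array of shape (out_h, out_w).
    """
    # Luminance-weighted grayscale conversion.
    gray = (0.299 * frame[..., 0] + 0.587 * frame[..., 1]
            + 0.114 * frame[..., 2])
    # Crude nearest-neighbor downsampling via index selection.
    rows = np.linspace(0, gray.shape[0] - 1, out_h).astype(int)
    cols = np.linspace(0, gray.shape[1] - 1, out_w).astype(int)
    return gray[np.ix_(rows, cols)].astype(np.uint8)

frame = np.random.randint(0, 256, (210, 160, 3), dtype=np.uint8)
small = preprocess_frame(frame)
print(small.shape)  # (84, 84)
```

Production wrappers typically use proper area interpolation rather than this nearest-neighbor shortcut, but the shape and dtype transformation is the same.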
We can now reset the environment and display one step. The following image shows how the Pong game environment appears to a user.
```
env.reset()
PIL.Image.fromarray(env.render())
```
We are now ready to load and wrap the two environments for TF-Agents. The algorithm uses one environment for training and the other for evaluation.
```
train_py_env = suite_atari.load(
env_name,
max_episode_steps=max_episode_frames / ATARI_FRAME_SKIP,
gym_env_wrappers=suite_atari.DEFAULT_ATARI_GYM_WRAPPERS_WITH_STACKING)
eval_py_env = suite_atari.load(
env_name,
max_episode_steps=max_episode_frames / ATARI_FRAME_SKIP,
gym_env_wrappers=suite_atari.DEFAULT_ATARI_GYM_WRAPPERS_WITH_STACKING)
train_env = tf_py_environment.TFPyEnvironment(train_py_env)
eval_env = tf_py_environment.TFPyEnvironment(eval_py_env)
```
## Agent
I used the following class, from TF-Agents examples, to wrap the regular Q-network class. The AtariQNetwork class ensures that the pixel values from the Atari screen are divided by 255. This division assists the neural network by normalizing the pixel values to between 0 and 1.
```
class AtariQNetwork(q_network.QNetwork):
"""QNetwork subclass that divides observations by 255."""
def call(self,
observation,
step_type=None,
network_state=(),
training=False):
state = tf.cast(observation, tf.float32)
# We divide the grayscale pixel values by 255 here rather than storing
# normalized values because uint8s are 4x cheaper to store than float32s.
state = state / 255
return super(AtariQNetwork, self).call(
state, step_type=step_type, network_state=network_state,
training=training)
```
Next, we introduce two hyperparameters that are specific to the neural network we are about to define.
```
fc_layer_params = (512,)
conv_layer_params=((32, (8, 8), 4), (64, (4, 4), 2), (64, (3, 3), 1))
q_net = AtariQNetwork(
train_env.observation_spec(),
train_env.action_spec(),
conv_layer_params=conv_layer_params,
fc_layer_params=fc_layer_params)
```
Convolutional neural networks are usually made up of several alternating pairs of convolution and max-pooling layers, ultimately culminating in one or more dense layers. These layer types are the same as previously seen in this course. The QNetwork accepts two parameters that define the convolutional neural network structure.
The simpler of the two parameters is **fc_layer_params**, a tuple that specifies the number of units in each dense layer.
The second parameter, named **conv_layer_params**, is a list of convolution layer parameters, where each item is a length-three tuple indicating (filters, kernel_size, stride). This implementation of QNetwork supports only convolution layers. If you desire a more complex convolutional neural network, you must define your own variant of the QNetwork.
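To make the tuple convention concrete, the following sketch traces the spatial output size of each convolution layer defined above, assuming 'valid' padding and an 84x84 preprocessed input (both are assumptions here, since QNetwork handles this internally):

```python
def conv_out(size, kernel, stride):
    """Spatial output size of a 'valid'-padding convolution."""
    return (size - kernel) // stride + 1

conv_layer_params = ((32, (8, 8), 4), (64, (4, 4), 2), (64, (3, 3), 1))

size = 84  # assumed input resolution after Atari preprocessing
channels = None
for filters, (k, _), stride in conv_layer_params:
    size = conv_out(size, k, stride)
    channels = filters
    print(f"{filters} filters -> {size}x{size}x{channels}")

flat = size * size * channels
print(flat)  # 7 * 7 * 64 = 3136 features feeding the 512-unit dense layer
```

Under these assumptions the screen shrinks from 84x84 to 20x20, 9x9, and finally 7x7 feature maps before flattening.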
The QNetwork defined here is not the agent itself; instead, the DQN agent uses the QNetwork to implement the actual neural network. This separation allows flexibility, as you can substitute your own network class if needed.
Next, we define the optimizer. For this example, I used RMSPropOptimizer. However, AdamOptimizer is another popular choice. We also create the DQN and reference the Q-network we just created.
```
optimizer = tf.compat.v1.train.RMSPropOptimizer(
learning_rate=learning_rate,
decay=0.95,
momentum=0.0,
epsilon=0.00001,
centered=True)
train_step_counter = tf.Variable(0)
observation_spec = tensor_spec.from_spec(train_env.observation_spec())
time_step_spec = ts.time_step_spec(observation_spec)
action_spec = tensor_spec.from_spec(train_env.action_spec())
target_update_period=32000 # ALE frames
update_period=16 # ALE frames
_update_period = update_period / ATARI_FRAME_SKIP
_global_step = tf.compat.v1.train.get_or_create_global_step()
agent = dqn_agent.DqnAgent(
time_step_spec,
action_spec,
q_network=q_net,
optimizer=optimizer,
epsilon_greedy=0.01,
n_step_update=1.0,
target_update_tau=1.0,
target_update_period=(
target_update_period / ATARI_FRAME_SKIP / _update_period),
td_errors_loss_fn=common.element_wise_huber_loss,
gamma=0.99,
reward_scale_factor=1.0,
gradient_clipping=None,
debug_summaries=False,
summarize_grads_and_vars=False,
train_step_counter=_global_step)
agent.initialize()
```
## Metrics and Evaluation
There are many different ways to measure the effectiveness of a model trained with reinforcement learning. The loss function of the internal Q-network is not a good measure of the entire DQN algorithm's overall fitness. The network loss function measures how closely the Q-network fits the collected data; it does not indicate how effective the DQN is at maximizing rewards. The method used for this example tracks the average reward received over several episodes.
```
def compute_avg_return(environment, policy, num_episodes=10):
total_return = 0.0
for _ in range(num_episodes):
time_step = environment.reset()
episode_return = 0.0
while not time_step.is_last():
action_step = policy.action(time_step)
time_step = environment.step(action_step.action)
episode_return += time_step.reward
total_return += episode_return
avg_return = total_return / num_episodes
return avg_return.numpy()[0]
# See also the metrics module for standard implementations of different metrics.
# https://github.com/tensorflow/agents/tree/master/tf_agents/metrics
```
## Replay Buffer
DQN works by training a neural network to predict the Q-values for every possible environment state. A neural network needs training data, so the algorithm accumulates this training data as it runs episodes. The replay buffer is where this data is stored. Only the most recent episodes are stored; older episode data rolls off the queue as new data accumulates.
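This roll-off behavior is that of a fixed-capacity queue; a minimal illustration using Python's `collections.deque` (the actual buffer in this example is TF-Agents' `TFUniformReplayBuffer`):

```python
from collections import deque

# A replay buffer with capacity 3: adding a 4th item evicts the oldest.
buffer = deque(maxlen=3)
for step in ["s1", "s2", "s3", "s4"]:
    buffer.append(step)

print(list(buffer))  # ['s2', 's3', 's4'] -- 's1' has rolled off
```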
```
replay_buffer = tf_uniform_replay_buffer.TFUniformReplayBuffer(
data_spec=agent.collect_data_spec,
batch_size=train_env.batch_size,
max_length=replay_buffer_max_length)
# Dataset generates trajectories with shape [Bx2x...]
dataset = replay_buffer.as_dataset(
num_parallel_calls=3,
sample_batch_size=batch_size,
num_steps=2).prefetch(3)
```
## Random Collection
The algorithm must prime the pump. Training cannot begin on an empty replay buffer. The following code performs a predefined number of steps to generate initial training data.
```
random_policy = random_tf_policy.RandomTFPolicy(train_env.time_step_spec(),
train_env.action_spec())
def collect_step(environment, policy, buffer):
time_step = environment.current_time_step()
action_step = policy.action(time_step)
next_time_step = environment.step(action_step.action)
traj = trajectory.from_transition(time_step, action_step, next_time_step)
# Add trajectory to the replay buffer
buffer.add_batch(traj)
def collect_data(env, policy, buffer, steps):
for _ in range(steps):
collect_step(env, policy, buffer)
collect_data(train_env, random_policy, replay_buffer, steps=initial_collect_steps)
```
## Training the agent
We are now ready to train the DQN. This process can take many hours, depending on how many episodes you wish to run through. As training occurs, this code will report both the loss and the average return. As training becomes more successful, the average return should increase. The reported losses reflect the average loss for individual training batches.
```
iterator = iter(dataset)
# (Optional) Optimize by wrapping some of the code in a graph using TF function.
agent.train = common.function(agent.train)
# Reset the train step
agent.train_step_counter.assign(0)
# Evaluate the agent's policy once before training.
avg_return = compute_avg_return(eval_env, agent.policy, num_eval_episodes)
returns = [avg_return]
for _ in range(num_iterations):
# Collect a few steps using collect_policy and save to the replay buffer.
for _ in range(collect_steps_per_iteration):
collect_step(train_env, agent.collect_policy, replay_buffer)
# Sample a batch of data from the buffer and update the agent's network.
experience, unused_info = next(iterator)
train_loss = agent.train(experience).loss
step = agent.train_step_counter.numpy()
if step % log_interval == 0:
print('step = {0}: loss = {1}'.format(step, train_loss))
if step % eval_interval == 0:
avg_return = compute_avg_return(eval_env, agent.policy, num_eval_episodes)
print('step = {0}: Average Return = {1}'.format(step, avg_return))
returns.append(avg_return)
```
## Visualization
The notebook can plot the average return over training iterations. The average return should increase as the program performs more training iterations.
```
iterations = range(0, num_iterations + 1, eval_interval)
plt.plot(iterations, returns)
plt.ylabel('Average Return')
plt.xlabel('Iterations')
plt.ylim(top=10)
```
### Videos
We now have a trained model and have observed its training progress on a graph. Perhaps the most compelling way to view an Atari game's results is a video that allows us to see the agent play the game. The following functions are defined so that we can watch the agent play the game in the notebook.
```
def embed_mp4(filename):
"""Embeds an mp4 file in the notebook."""
video = open(filename,'rb').read()
b64 = base64.b64encode(video)
tag = '''
<video width="640" height="480" controls>
<source src="data:video/mp4;base64,{0}" type="video/mp4">
Your browser does not support the video tag.
</video>'''.format(b64.decode())
return IPython.display.HTML(tag)
def create_policy_eval_video(policy, filename, num_episodes=5, fps=30):
filename = filename + ".mp4"
with imageio.get_writer(filename, fps=fps) as video:
for _ in range(num_episodes):
time_step = eval_env.reset()
video.append_data(eval_py_env.render())
while not time_step.is_last():
action_step = policy.action(time_step)
time_step = eval_env.step(action_step.action)
video.append_data(eval_py_env.render())
return embed_mp4(filename)
```
First, we will observe the trained agent play the game.
```
create_policy_eval_video(agent.policy, "trained-agent")
```
For comparison, we observe a random agent play. While the trained agent is far from perfect, it does outperform the random agent by a considerable amount.
```
create_policy_eval_video(random_policy, "random-agent")
```
# Disclaimer
Released under the CC BY 4.0 License (https://creativecommons.org/licenses/by/4.0/)
# Purpose of this notebook
The purpose of this document is to show how I approached the presented problem and to record my learning experience in how to use Tensorflow 2 and CatBoost to perform a classification task on text data.
If, while reading this document, you think _"Why didn't you do `<this>` instead of `<that>`?"_, the answer could be simply because I don't know about `<this>`. Comments, questions and constructive criticism are of course welcome.
# Intro
This simple classification task has been developed to get familiarized with Tensorflow 2 and CatBoost handling of text data. In summary, the task is to predict the author of a short text.
To get a number of train/test examples, it is enough to create a Twitter app and, using the Python client library for Twitter, read the user timelines of multiple accounts. This process is not covered here. If you are interested in this topic, feel free to contact me.
## Features
It is assumed the collected raw data consists of:
1. The author handle (the label that will be predicted)
2. The timestamp of the post
3. The raw text of the post
### Preparing the dataset
When preparing the dataset, the content of the post is preprocessed using these rules:
1. Newlines are replaced with a space
2. Links are replaced with a placeholder (e.g. `<link>`)
3. For each possible unicode char category, the number of chars in that category is added as a feature
4. The number of words for each tweet is added as a feature
5. Retweets (even retweets with comment) are discarded. Only responses and original tweets are taken into account
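Rules 1-4 can be sketched as follows; the exact link regular expression and the `<link>` placeholder spelling are assumptions, since the collection code is not shown here:

```python
import re
import unicodedata
from collections import Counter

def preprocess_tweet(text):
    """Apply rules 1-4: newline removal, link placeholder,
    unicode category counts, and word count."""
    text = text.replace("\n", " ")                   # rule 1
    text = re.sub(r"https?://\S+", "<link>", text)   # rule 2
    # Rule 3: count characters per unicode category (Lu, Ll, Po, So, ...).
    category_counts = Counter(unicodedata.category(ch) for ch in text)
    words = len(text.split())                        # rule 4
    return text, category_counts, words

text, cats, words = preprocess_tweet("Hello\nworld! https://example.com \U0001F600")
print(text)   # 'Hello world! <link> 😀'
print(words)  # 4
```

The emoji falls into the `So` (symbol, other) category, which is exactly the kind of signal the category counts are meant to capture.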
The dataset has been randomly split into three different files for train (70%), validation (10%) and test (20%). For each label, it has been verified that the same percentages hold in all three files.
Before fitting the data and before evaluation on the test dataset, the timestamp values are normalized, using the mean and standard deviation computed on the train dataset.
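This normalization can be sketched with hypothetical timestamp values; reusing the train mean and standard deviation for the other splits avoids leaking validation/test information into preprocessing:

```python
import numpy as np

train_ts = np.array([1.58e9, 1.59e9, 1.60e9])  # hypothetical unix timestamps
test_ts = np.array([1.605e9, 1.61e9])

# Statistics come from the train split only.
mean, std = train_ts.mean(), train_ts.std(ddof=1)  # sample std, as pandas uses

train_norm = (train_ts - mean) / std
test_norm = (test_ts - mean) / std  # reuse the *train* statistics

print(train_norm)
print(test_norm)
```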
# TensorFlow 2 model
The model has four different input features:
1. The normalized timestamp.
2. The input text, represented as the whole sentence. This will be transformed in a 128-dimensional vector by an embedding layer.
3. The input text, this time represented as a sequence of words, expressed as indexes of tokens. This representation will be used by a LSTM layer to try to extract some meaning from the actual sequence of the used words.
4. The unicode character category usage. This should help identify handles that use emojis, a lot of punctuation, or unusual characters.
The resulting layers are concatenated; then, after a sequence of dense layers (with dropout applied), the final layer computes the logits for the different classes. The loss function used is *sparse categorical crossentropy*, since the labels are represented as indexes into a list of Twitter handles.
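To make the loss choice concrete: sparse categorical crossentropy consumes integer class indexes directly, with no one-hot encoding needed. A minimal NumPy version operating on raw logits, shown only for illustration:

```python
import numpy as np

def sparse_categorical_crossentropy(logits, labels):
    """Mean cross-entropy for integer labels against raw logits."""
    # Numerically stable log-softmax: subtract the row-wise max first.
    z = logits - logits.max(axis=1, keepdims=True)
    log_probs = z - np.log(np.exp(z).sum(axis=1, keepdims=True))
    # Pick the log-probability of each example's true class index.
    return -log_probs[np.arange(len(labels)), labels].mean()

logits = np.array([[2.0, 0.5, 0.1],   # predicted scores for 3 handles
                   [0.2, 0.1, 3.0]])
labels = np.array([0, 2])             # handle indexes, not one-hot vectors
print(round(sparse_categorical_crossentropy(logits, labels), 4))
```

This mirrors `tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True)`, which the model definition below uses.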
## Imports for the TensorFlow 2 model
```
import functools
import os
from tensorflow.keras import Input, layers
import tensorflow as tf
import tensorflow_hub as hub
from tensorflow.keras import regularizers
import pandas as pd
import numpy as np
import copy
import calendar
import datetime
import re
from tensorflow.keras.preprocessing.text import Tokenizer
import unicodedata
#masking layers and GPU don't mix
os.environ["CUDA_VISIBLE_DEVICES"] = "-1"
```
## Definitions for the TensorFlow 2 model
```
#Download size: ~446MB
hub_layer = hub.KerasLayer(
"https://tfhub.dev/google/tf2-preview/nnlm-en-dim128/1",
output_shape=[512],
input_shape=[],
dtype=tf.string,
trainable=False
)
embed = hub.load("https://tfhub.dev/google/tf2-preview/nnlm-en-dim128/1")
unicode_data_categories = [
"Cc",
"Cf",
"Cn",
"Co",
"Cs",
"LC",
"Ll",
"Lm",
"Lo",
"Lt",
"Lu",
"Mc",
"Me",
"Mn",
"Nd",
"Nl",
"No",
"Pc",
"Pd",
"Pe",
"Pf",
"Pi",
"Po",
"Ps",
"Sc",
"Sk",
"Sm",
"So",
"Zl",
"Zp",
"Zs"
]
column_names = [
"handle",
"timestamp",
"text"
]
column_names.extend(unicode_data_categories)
train_file = os.path.realpath("input.csv")
n_tokens = 100000
tokenizer = Tokenizer(n_tokens, oov_token='<OOV>')
#List of handles (labels)
#Fill with the handles you want to consider in your dataset
handles = [
]
end_token = "XEND"
train_file = os.path.realpath("data/train.csv")
val_file = os.path.realpath("data/val.csv")
test_file = os.path.realpath("data/test.csv")
```
## Preprocessing and computing dataset features
```
def get_pandas_dataset(input_file, fit_tokenizer=False, timestamp_mean=None, timestamp_std=None, pad_sequence=None):
pd_dat = pd.read_csv(input_file, names=column_names)
pd_dat = pd_dat[pd_dat.handle.isin(handles)]
if(timestamp_mean is None):
timestamp_mean = pd_dat.timestamp.mean()
if(timestamp_std is None):
timestamp_std = pd_dat.timestamp.std()
pd_dat.timestamp = (pd_dat.timestamp - timestamp_mean) / timestamp_std
pd_dat["handle_index"] = pd_dat['handle'].map(lambda x: handles.index(x))
if(fit_tokenizer):
tokenizer.fit_on_texts(pd_dat["text"])
pad_sequence = tokenizer.texts_to_sequences([[end_token]])[0][0]
pd_dat["sequence"] = tokenizer.texts_to_sequences(pd_dat["text"])
max_seq_length = 30
pd_dat = pd_dat.reset_index(drop=True)
#max length
pd_dat["sequence"] = pd.Series(el[0:max_seq_length] for el in pd_dat["sequence"])
#padding
pd_dat["sequence"] = pd.Series([el + ([pad_sequence] * (max_seq_length - len(el))) for el in pd_dat["sequence"]])
pd_dat["words_in_tweet"] = pd_dat["text"].str.strip().str.split(" ").str.len() + 1
return pd_dat, timestamp_mean, timestamp_std, pad_sequence
train_dataset, timestamp_mean, timestamp_std, pad_sequence = get_pandas_dataset(train_file, fit_tokenizer=True)
test_dataset, _, _, _= get_pandas_dataset(test_file, timestamp_mean=timestamp_mean, timestamp_std=timestamp_std, pad_sequence=pad_sequence)
val_dataset, _, _, _ = get_pandas_dataset(val_file, timestamp_mean=timestamp_mean, timestamp_std=timestamp_std, pad_sequence=pad_sequence)
#selecting as features only the unicode categories that are used in the train dataset
non_null_unicode_categories = []
for unicode_data_category in unicode_data_categories:
category_name = unicode_data_category
category_sum = train_dataset[category_name].sum()
if(category_sum > 0):
non_null_unicode_categories.append(category_name)
print("Bucketized unicode categories used as features: " + repr(non_null_unicode_categories))
```
## Defining input/output features from the datasets
```
def split_inputs_and_outputs(pd_dat):
labels = pd_dat['handle_index'].values
icolumns = pd_dat.columns
timestamps = pd_dat.loc[:, "timestamp"].astype(np.float32)
text = pd_dat.loc[:, "text"]
sequence = np.asarray([np.array(el) for el in pd_dat.loc[:, "sequence"]])
#unicode_char_ratios = pd_dat[unicode_data_categories].astype(np.float32)
unicode_char_categories = {
category_name: pd_dat[category_name] for category_name in non_null_unicode_categories
}
words_in_tweet = pd_dat['words_in_tweet']
return timestamps, text, sequence, unicode_char_categories, words_in_tweet, labels
timestamps_train, text_train, sequence_train, unicode_char_categories_train, words_in_tweet_train, labels_train = split_inputs_and_outputs(train_dataset)
timestamps_val, text_val, sequence_val, unicode_char_categories_val, words_in_tweet_val, labels_val = split_inputs_and_outputs(val_dataset)
timestamps_test, text_test, sequence_test, unicode_char_categories_test, words_in_tweet_test, labels_test = split_inputs_and_outputs(test_dataset)
```
## Input tensors
```
input_timestamp = Input(shape=(1, ), name='input_timestamp', dtype=tf.float32)
input_text = Input(shape=(1, ), name='input_text', dtype=tf.string)
input_sequence = Input(shape=(None, 1 ), name="input_sequence", dtype=tf.float32)
input_unicode_char_categories = [
Input(shape=(1, ), name="input_"+category_name, dtype=tf.float32) for category_name in non_null_unicode_categories
]
input_words_in_tweet = Input(shape=(1, ), name="input_words_in_tweet", dtype=tf.float32)
inputs_train = {
'input_timestamp': timestamps_train,
"input_text": text_train,
"input_sequence": sequence_train,
'input_words_in_tweet': words_in_tweet_train,
}
inputs_train.update({
'input_' + category_name: unicode_char_categories_train[category_name] for category_name in non_null_unicode_categories
})
outputs_train = labels_train
inputs_val = {
'input_timestamp': timestamps_val,
"input_text": text_val,
"input_sequence": sequence_val,
'input_words_in_tweet': words_in_tweet_val
}
inputs_val.update({
'input_' + category_name: unicode_char_categories_val[category_name] for category_name in non_null_unicode_categories
})
outputs_val = labels_val
inputs_test = {
'input_timestamp': timestamps_test,
"input_text": text_test,
"input_sequence": sequence_test,
'input_words_in_tweet': words_in_tweet_test
}
inputs_test.update({
'input_' + category_name: unicode_char_categories_test[category_name] for category_name in non_null_unicode_categories
})
outputs_test = labels_test
```
## TensorFlow 2 model definition
```
def get_model():
reg = None
activation = 'relu'
reshaped_text = layers.Reshape(target_shape=())(input_text)
embedded = hub_layer(reshaped_text)
x = layers.Dense(256, activation=activation)(embedded)
masking = layers.Masking(mask_value=pad_sequence)(input_sequence)
lstm_layer = layers.Bidirectional(layers.LSTM(32))(masking)
flattened_lstm_layer = layers.Flatten()(lstm_layer)
x = layers.concatenate([
input_timestamp,
flattened_lstm_layer,
*input_unicode_char_categories,
input_words_in_tweet,
x
])
x = layers.Dense(n_tokens // 30, activation=activation, kernel_regularizer=reg)(x)
x = layers.Dropout(0.1)(x)
x = layers.Dense(n_tokens // 50, activation=activation, kernel_regularizer=reg)(x)
x = layers.Dropout(0.1)(x)
x = layers.Dense(256, activation=activation, kernel_regularizer=reg)(x)
y = layers.Dense(len(handles), activation='linear')(x)
model = tf.keras.Model(
inputs=[
input_timestamp,
input_text,
input_sequence,
*input_unicode_char_categories,
input_words_in_tweet
],
outputs=[y]
)
cce = tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True)
model.compile(
optimizer='adam',
loss=cce,
metrics=['sparse_categorical_accuracy']
)
return model
model = get_model()
tf.keras.utils.plot_model(model, to_file='twitstar.png', show_shapes=True)
```
## TensorFlow 2 model fitting
```
history = model.fit(
inputs_train,
outputs_train,
epochs=15,
batch_size=64,
verbose=True,
validation_data=(inputs_val, outputs_val),
callbacks=[
tf.keras.callbacks.ModelCheckpoint(
os.path.realpath("weights.h5"),
monitor="val_sparse_categorical_accuracy",
save_best_only=True,
verbose=2
),
tf.keras.callbacks.EarlyStopping(
patience=3,
monitor="val_sparse_categorical_accuracy"
),
]
)
```
## TensorFlow 2 model plots for train loss and accuracy
```
import matplotlib.pyplot as plt
%matplotlib inline
plt.plot(history.history['loss'])
plt.plot(history.history['val_loss'])
plt.title('Loss vs. epochs')
plt.ylabel('Loss')
plt.xlabel('Epoch')
plt.legend(['Training', 'Validation'], loc='upper right')
plt.show()
plt.plot(history.history['sparse_categorical_accuracy'])
plt.plot(history.history['val_sparse_categorical_accuracy'])
plt.title('Accuracy vs. epochs')
plt.ylabel('Accuracy')
plt.xlabel('Epoch')
plt.legend(['Training', 'Validation'], loc='upper right')
plt.show()
```
## TensorFlow 2 model evaluation
```
#loading the "best" weights
model.load_weights(os.path.realpath("weights.h5"))
model.evaluate(inputs_test, outputs_test)
```
### TensorFlow 2 model confusion matrix
Using predictions on the test set, a confusion matrix is produced
```
def tf2_confusion_matrix(inputs, outputs):
predictions = model.predict(inputs)
wrong_labelled_counter = np.zeros((len(handles), len(handles)))
wrong_labelled_sequences = np.empty((len(handles), len(handles)), dtype=object)
for i in range(len(handles)):
for j in range(len(handles)):
wrong_labelled_sequences[i][j] = []
tot_wrong = 0
for i in range(len(predictions)):
predicted = int(predictions[i].argmax())
true_value = int(outputs[i])
wrong_labelled_counter[true_value][predicted] += 1
wrong_labelled_sequences[true_value][predicted].append(inputs.get('input_text')[i])
ok = (int(true_value) == int(predicted))
if(not ok):
tot_wrong += 1
return wrong_labelled_counter, wrong_labelled_sequences, predictions
def print_confusion_matrix(wrong_labelled_counter):
the_str = "\t"
for handle in handles:
the_str += handle + "\t"
print(the_str)
ctr = 0
for row in wrong_labelled_counter:
the_str = handles[ctr] + '\t'
ctr+=1
for i in range(len(row)):
the_str += str(int(row[i]))
if(i != len(row) -1):
the_str += "\t"
print(the_str)
wrong_labelled_counter, wrong_labelled_sequences, predictions = tf2_confusion_matrix(inputs_test, outputs_test)
print_confusion_matrix(wrong_labelled_counter)
```
# CatBoost model
This CatBoost model instance was developed reusing the ideas presented in these tutorials from the official repository: [classification](https://github.com/catboost/tutorials/blob/master/classification/classification_tutorial.ipynb) and [text features](https://github.com/catboost/tutorials/blob/master/text_features/text_features_in_catboost.ipynb)
## Imports for the CatBoost model
```
import functools
import os
import pandas as pd
import numpy as np
import copy
import calendar
import datetime
import re
import unicodedata
from catboost import Pool, CatBoostClassifier
```
## Definitions for the CatBoost model
```
unicode_data_categories = [
"Cc",
"Cf",
"Cn",
"Co",
"Cs",
"LC",
"Ll",
"Lm",
"Lo",
"Lt",
"Lu",
"Mc",
"Me",
"Mn",
"Nd",
"Nl",
"No",
"Pc",
"Pd",
"Pe",
"Pf",
"Pi",
"Po",
"Ps",
"Sc",
"Sk",
"Sm",
"So",
"Zl",
"Zp",
"Zs"
]
column_names = [
"handle",
"timestamp",
"text"
]
column_names.extend(unicode_data_categories)
#List of handles (labels)
#Fill with the handles you want to consider in your dataset
handles = [
]
train_file = os.path.realpath("./data/train.csv")
val_file = os.path.realpath("./data/val.csv")
test_file = os.path.realpath("./data/test.csv")
```
## Preprocessing and computing dataset features
```
def get_pandas_dataset(input_file, timestamp_mean=None, timestamp_std=None):
pd_dat = pd.read_csv(input_file, names=column_names)
pd_dat = pd_dat[pd_dat.handle.isin(handles)]
if(timestamp_mean is None):
timestamp_mean = pd_dat.timestamp.mean()
if(timestamp_std is None):
timestamp_std = pd_dat.timestamp.std()
pd_dat.timestamp = (pd_dat.timestamp - timestamp_mean) / timestamp_std
pd_dat["handle_index"] = pd_dat['handle'].map(lambda x: handles.index(x))
pd_dat = pd_dat.reset_index(drop=True)
return pd_dat, timestamp_mean, timestamp_std
train_dataset, timestamp_mean, timestamp_std = get_pandas_dataset(train_file)
test_dataset, _, _ = get_pandas_dataset(test_file, timestamp_mean=timestamp_mean, timestamp_std=timestamp_std)
val_dataset, _, _ = get_pandas_dataset(val_file, timestamp_mean=timestamp_mean, timestamp_std=timestamp_std)
def split_inputs_and_outputs(pd_dat):
labels = pd_dat['handle_index'].values
del(pd_dat['handle'])
del(pd_dat['handle_index'])
return pd_dat, labels
X_train, labels_train = split_inputs_and_outputs(train_dataset)
X_val, labels_val = split_inputs_and_outputs(val_dataset)
X_test, labels_test = split_inputs_and_outputs(test_dataset)
```
## CatBoost model definition
```
def get_model(catboost_params={}):
cat_features = []
text_features = ['text']
catboost_default_params = {
'iterations': 1000,
'learning_rate': 0.03,
'eval_metric': 'Accuracy',
'task_type': 'GPU',
'early_stopping_rounds': 20
}
catboost_default_params.update(catboost_params)
model = CatBoostClassifier(**catboost_default_params)
return model, cat_features, text_features
model, cat_features, text_features = get_model()
```
## CatBoost model fitting
```
def fit_model(X_train, X_val, y_train, y_val, model, cat_features, text_features, verbose=100):
learn_pool = Pool(
X_train,
y_train,
cat_features=cat_features,
text_features=text_features,
feature_names=list(X_train)
)
val_pool = Pool(
X_val,
y_val,
cat_features=cat_features,
text_features=text_features,
feature_names=list(X_val)
)
model.fit(learn_pool, eval_set=val_pool, verbose=verbose)
return model
model = fit_model(X_train, X_val, labels_train, labels_val, model, cat_features, text_features)
```
## CatBoost model evaluation
As with the TF2 model, a confusion matrix is produced from predictions on the test set
```
def predict(X, model, cat_features, text_features):
pool = Pool(
data=X,
cat_features=cat_features,
text_features=text_features,
feature_names=list(X)
)
probs = model.predict_proba(pool)
return probs
def check_predictions_on(inputs, outputs, model, cat_features, text_features, handles):
predictions = predict(inputs, model, cat_features, text_features)
labelled_counter = np.zeros((len(handles), len(handles)))
labelled_sequences = np.empty((len(handles), len(handles)), dtype=object)
for i in range(len(handles)):
for j in range(len(handles)):
labelled_sequences[i][j] = []
tot_wrong = 0
for i in range(len(predictions)):
predicted = int(predictions[i].argmax())
true_value = int(outputs[i])
labelled_counter[true_value][predicted] += 1
labelled_sequences[true_value][predicted].append(inputs.get('text').values[i])
ok = (int(true_value) == int(predicted))
if(not ok):
tot_wrong += 1
return labelled_counter, labelled_sequences, predictions
def confusion_matrix(labelled_counter, handles):
the_str = "\t"
for handle in handles:
the_str += handle + "\t"
the_str += "\n"
ctr = 0
for row in labelled_counter:
the_str += handles[ctr] + '\t'
ctr+=1
for i in range(len(row)):
the_str += str(int(row[i]))
if(i != len(row) -1):
the_str += "\t"
the_str += "\n"
return the_str
labelled_counter, labelled_sequences, predictions = check_predictions_on(
X_test,
labels_test,
model,
cat_features,
text_features,
handles
)
confusion_matrix_string = confusion_matrix(labelled_counter, handles)
print(confusion_matrix_string)
```
# Evaluation
To perform some experiments and evaluate the two models, 18 Twitter users were selected and, for each user, a number of tweets and responses to other users' tweets were collected; 39786 tweets were collected in total. The difference in class representation could have been eliminated, for example by limiting the number of tweets for each label to that of the least represented class. This difference, however, was deliberately kept, in order to test whether it represents an issue for the accuracy of the two trained models.
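The balancing strategy mentioned above (capping every class at the size of the least represented one) could be sketched with pandas; the small `df` below is hypothetical data:

```python
import pandas as pd

df = pd.DataFrame({
    "handle": ["C_1"] * 5 + ["C_9"] * 2 + ["C_15"] * 3,
    "text": [f"tweet {i}" for i in range(10)],
})

# Cap every class at the size of the least represented one.
min_count = df["handle"].value_counts().min()
balanced = df.groupby("handle").sample(n=min_count, random_state=0)

print(balanced["handle"].value_counts())  # every handle now has min_count rows
```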
The division of the tweets corresponding to each Twitter handle for each file (train, test, validation) is reported in the following table. To avoid policy issues (better safe than sorry), the actual user handle is masked using C_x placeholders and a brief description of the Twitter user is presented instead.
|Description|Handle|Train|Test|Validation|Sum|
|-------|-------|-------|-------|-------|-------|
|UK-based labour politician|C_1|1604|492|229|2325|
|US-based democratic politician|C_2|1414|432|195|2041|
|US-based democratic politician|C_3|1672|498|273|2443|
|US-based actor|C_4|1798|501|247|2546|
|UK-based actress|C_5|847|243|110|1200|
|US-based democratic politician|C_6|2152|605|304|3061|
|US-based singer|C_7|2101|622|302|3025|
|US-based singer|C_8|1742|498|240|2480|
|Civil rights activist|C_9|314|76|58|448|
|US-based republican politician|C_10|620|159|78|857|
|US-based TV host|C_11|2022|550|259|2831|
|Parody account of C_15 |C_12|2081|624|320|3025|
|US-based democratic politician|C_13|1985|557|303|2845|
|US-based actor/director|C_14|1272|357|183|1812|
|US-based republican politician|C_15|1121|298|134|1553|
|US-based writer|C_16|1966|502|302|2770|
|US-based writer|C_17|1095|305|153|1553|
|US-based entrepreneur|C_18|2084|581|306|2971|
|Sum||27890|7900|3996|39786|
## TensorFlow 2 model
The following charts show loss and accuracy vs epochs for train and validation for a typical run of the TF2 model:


If the images do not show correctly, they can be found at these links: [loss](https://github.com/icappello/ml-predict-text-author/blob/master/img/tf2_train_val_loss.png) [accuracy](https://github.com/icappello/ml-predict-text-author/blob/master/img/tf2_train_val_accuracy.png)
After a few epochs, the model starts overfitting on the train data, and the accuracy for the validation set quickly reaches a plateau.
The obtained accuracy on the test set is 0.672.
## CatBoost model
The fit procedure stopped after 303 iterations. The obtained accuracy on the test set is 0.808.
## Confusion matrices
The confusion matrices for the two models are reported [here](https://docs.google.com/spreadsheets/d/17JGDXYRajnC4THrBnZrbcqQbgzgjo0Jb7KAvPYenr-w/edit?usp=sharing), since large tables are not displayed correctly in the embedded github viewer for jupyter notebooks. Rows represent the actual classes, while columns represent the predicted ones.
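Since rows represent the actual classes and columns the predicted ones, per-class recall can be read off a confusion matrix as the diagonal divided by the row sums; a small sketch with a toy 3-class matrix (not the project's actual numbers):

```python
import numpy as np

def per_class_recall(cm):
    """Per-class recall from a confusion matrix whose rows are the
    actual classes and whose columns are the predicted ones."""
    cm = np.asarray(cm, dtype=float)
    return np.diag(cm) / cm.sum(axis=1)

# Toy matrix: 10 samples per actual class.
cm = [[8, 1, 1],
      [2, 6, 2],
      [0, 0, 10]]
recall = per_class_recall(cm)  # -> [0.8, 0.6, 1.0]
```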
## Summary
The CatBoost model obtained better accuracy overall, as well as better accuracy on all but one label, even though no particular optimization was done on its definition. The TF2 model may need more data, as well as some changes to its definition, to perform better (comments and pointers on this are welcome). Several variants of the TF2 model were tried: a deeper model with more dense layers, a higher dropout rate, more/fewer units per layer, using only a subset of the features, regularization methods (L1, L2, batch normalization), and different activation functions (sigmoid, tanh), but none performed significantly better than the one presented.
Looking at the results summarized in the confusion matrices, tweets from C_9 clearly represented a problem, either because of its under-representation relative to the other classes or because of the actual content of the tweets (some were not written in English). Tweets from handles C_5 and C_14 were also hard for both models to classify correctly, even though those labels were not under-represented with respect to the others.
# Creating a Sentiment Analysis Web App
## Using PyTorch and SageMaker
_Deep Learning Nanodegree Program | Deployment_
---
Now that we have a basic understanding of how SageMaker works we will try to use it to construct a complete project from end to end. Our goal will be to have a simple web page which a user can use to enter a movie review. The web page will then send the review off to our deployed model which will predict the sentiment of the entered review.
## Instructions
Some template code has already been provided for you, and you will need to implement additional functionality to successfully complete this notebook. You will not need to modify the included code beyond what is requested. Sections that begin with '**TODO**' in the header indicate that you need to complete or implement some portion within them. Instructions will be provided for each section and the specifics of the implementation are marked in the code block with a `# TODO: ...` comment. Please be sure to read the instructions carefully!
In addition to implementing code, there will be questions for you to answer which relate to the task and your implementation. Each section where you will answer a question is preceded by a '**Question:**' header. Carefully read each question and provide your answer below the '**Answer:**' header by editing the Markdown cell.
> **Note**: Code and Markdown cells can be executed using the **Shift+Enter** keyboard shortcut. In addition, a cell can be edited by clicking it (double-click for Markdown cells) or by pressing **Enter** while it is highlighted.
## General Outline
Recall the general outline for SageMaker projects using a notebook instance.
1. Download or otherwise retrieve the data.
2. Process / Prepare the data.
3. Upload the processed data to S3.
4. Train a chosen model.
5. Test the trained model (typically using a batch transform job).
6. Deploy the trained model.
7. Use the deployed model.
For this project, you will be following the steps in the general outline with some modifications.
First, you will not be testing the model in its own step. You will still be testing the model, however, you will do it by deploying your model and then using the deployed model by sending the test data to it. One of the reasons for doing this is so that you can make sure that your deployed model is working correctly before moving forward.
In addition, you will deploy and use your trained model a second time. In the second iteration you will customize the way that your trained model is deployed by including some of your own code. In addition, your newly deployed model will be used in the sentiment analysis web app.
## Step 1: Downloading the data
As in the XGBoost in SageMaker notebook, we will be using the [IMDb dataset](http://ai.stanford.edu/~amaas/data/sentiment/).
> Maas, Andrew L., et al. [Learning Word Vectors for Sentiment Analysis](http://ai.stanford.edu/~amaas/data/sentiment/). In _Proceedings of the 49th Annual Meeting of the Association for Computational Linguistics: Human Language Technologies_. Association for Computational Linguistics, 2011.
```
%mkdir ../data
!wget -O ../data/aclImdb_v1.tar.gz http://ai.stanford.edu/~amaas/data/sentiment/aclImdb_v1.tar.gz
!tar -zxf ../data/aclImdb_v1.tar.gz -C ../data
```
## Step 2: Preparing and Processing the data
Also, as in the XGBoost notebook, we will be doing some initial data processing. The first few steps are the same as in the XGBoost example. To begin with, we will read in each of the reviews and combine them into a single input structure. Then, we will split the dataset into a training set and a testing set.
```
import os
import glob
def read_imdb_data(data_dir='../data/aclImdb'):
    data = {}
    labels = {}
    for data_type in ['train', 'test']:
        data[data_type] = {}
        labels[data_type] = {}
        for sentiment in ['pos', 'neg']:
            data[data_type][sentiment] = []
            labels[data_type][sentiment] = []
            path = os.path.join(data_dir, data_type, sentiment, '*.txt')
            files = glob.glob(path)
            for f in files:
                with open(f) as review:
                    data[data_type][sentiment].append(review.read())
                    # Here we represent a positive review by '1' and a negative review by '0'
                    labels[data_type][sentiment].append(1 if sentiment == 'pos' else 0)
            assert len(data[data_type][sentiment]) == len(labels[data_type][sentiment]), \
                    "{}/{} data size does not match labels size".format(data_type, sentiment)
    return data, labels
data, labels = read_imdb_data()
print("IMDB reviews: train = {} pos / {} neg, test = {} pos / {} neg".format(
            len(data['train']['pos']), len(data['train']['neg']),
            len(data['test']['pos']), len(data['test']['neg'])))
```
Now that we've read the raw training and testing data from the downloaded dataset, we will combine the positive and negative reviews and shuffle the resulting records.
```
from sklearn.utils import shuffle
def prepare_imdb_data(data, labels):
    """Prepare training and test sets from IMDb movie reviews."""
    # Combine positive and negative reviews and labels
    data_train = data['train']['pos'] + data['train']['neg']
    data_test = data['test']['pos'] + data['test']['neg']
    labels_train = labels['train']['pos'] + labels['train']['neg']
    labels_test = labels['test']['pos'] + labels['test']['neg']
    # Shuffle reviews and corresponding labels within training and test sets
    data_train, labels_train = shuffle(data_train, labels_train)
    data_test, labels_test = shuffle(data_test, labels_test)
    # Return the unified training data, test data, training labels, test labels
    return data_train, data_test, labels_train, labels_test
train_X, test_X, train_y, test_y = prepare_imdb_data(data, labels)
print("IMDb reviews (combined): train = {}, test = {}".format(len(train_X), len(test_X)))
```
Now that we have our training and testing sets unified and prepared, we should do a quick check and see an example of the data our model will be trained on. This is generally a good idea as it allows you to see how each of the further processing steps affects the reviews and it also ensures that the data has been loaded correctly.
```
print(train_X[100])
print(train_y[100])
```
The first step in processing the reviews is to make sure that any html tags that appear are removed. In addition we wish to tokenize our input, so that words such as *entertained* and *entertaining* are considered the same with regard to sentiment analysis.
```
import nltk
from nltk.corpus import stopwords
from nltk.stem.porter import *
import re
from bs4 import BeautifulSoup
def review_to_words(review):
    nltk.download("stopwords", quiet=True)
    stemmer = PorterStemmer()
    text = BeautifulSoup(review, "html.parser").get_text() # Remove HTML tags
    text = re.sub(r"[^a-zA-Z0-9]", " ", text.lower()) # Convert to lower case
    words = text.split() # Split string into words
    words = [w for w in words if w not in stopwords.words("english")] # Remove stopwords
    words = [stemmer.stem(w) for w in words] # Stem each word with the PorterStemmer created above
    return words
```
The `review_to_words` method defined above uses `BeautifulSoup` to remove any html tags that appear and uses the `nltk` package to tokenize the reviews. As a check to ensure we know how everything is working, try applying `review_to_words` to one of the reviews in the training set.
```
# TODO: Apply review_to_words to a review (train_X[100] or any other review)
review_to_words(train_X[100])
```
**Question:** Above we mentioned that `review_to_words` method removes html formatting and allows us to tokenize the words found in a review, for example, converting *entertained* and *entertaining* into *entertain* so that they are treated as though they are the same word. What else, if anything, does this method do to the input?
**Answer:** This method also converts all words to lowercase and removes stop words. Stop words are common words such as "the", "a", "an", or "in".
The method below applies the `review_to_words` method to each of the reviews in the training and testing datasets. In addition it caches the results. This is because performing this processing step can take a long time. This way if you are unable to complete the notebook in the current session, you can come back without needing to process the data a second time.
```
import pickle
cache_dir = os.path.join("../cache", "sentiment_analysis") # where to store cache files
os.makedirs(cache_dir, exist_ok=True) # ensure cache directory exists
def preprocess_data(data_train, data_test, labels_train, labels_test,
                    cache_dir=cache_dir, cache_file="preprocessed_data.pkl"):
    """Convert each review to words; read from cache if available."""
    # If cache_file is not None, try to read from it first
    cache_data = None
    if cache_file is not None:
        try:
            with open(os.path.join(cache_dir, cache_file), "rb") as f:
                cache_data = pickle.load(f)
            print("Read preprocessed data from cache file:", cache_file)
        except:
            pass  # unable to read from cache, but that's okay
    # If cache is missing, then do the heavy lifting
    if cache_data is None:
        # Preprocess training and test data to obtain words for each review
        #words_train = list(map(review_to_words, data_train))
        #words_test = list(map(review_to_words, data_test))
        words_train = [review_to_words(review) for review in data_train]
        words_test = [review_to_words(review) for review in data_test]
        # Write to cache file for future runs
        if cache_file is not None:
            cache_data = dict(words_train=words_train, words_test=words_test,
                              labels_train=labels_train, labels_test=labels_test)
            with open(os.path.join(cache_dir, cache_file), "wb") as f:
                pickle.dump(cache_data, f)
            print("Wrote preprocessed data to cache file:", cache_file)
    else:
        # Unpack data loaded from cache file
        words_train, words_test, labels_train, labels_test = (cache_data['words_train'],
                cache_data['words_test'], cache_data['labels_train'], cache_data['labels_test'])
    return words_train, words_test, labels_train, labels_test
# Preprocess data
train_X, test_X, train_y, test_y = preprocess_data(train_X, test_X, train_y, test_y)
```
## Transform the data
In the XGBoost notebook we transformed the data from its word representation to a bag-of-words feature representation. For the model we are going to construct in this notebook we will construct a feature representation which is very similar. To start, we will represent each word as an integer. Of course, some of the words that appear in the reviews occur very infrequently and so likely don't contain much information for the purposes of sentiment analysis. The way we will deal with this problem is that we will fix the size of our working vocabulary and we will only include the words that appear most frequently. We will then combine all of the infrequent words into a single category and, in our case, we will label it as `1`.
Since we will be using a recurrent neural network, it will be convenient if the length of each review is the same. To do this, we will fix a size for our reviews and then pad short reviews with the category 'no word' (which we will label `0`) and truncate long reviews.
### (TODO) Create a word dictionary
To begin with, we need to construct a way to map words that appear in the reviews to integers. Here we fix the size of our vocabulary (including the 'no word' and 'infrequent' categories) to be `5000` but you may wish to change this to see how it affects the model.
> **TODO:** Complete the implementation for the `build_dict()` method below. Note that even though the vocab_size is set to `5000`, we only want to construct a mapping for the most frequently appearing `4998` words. This is because we want to reserve the special labels `0` for 'no word' and `1` for 'infrequent word'.
```
import numpy as np
def build_dict(data, vocab_size=5000):
    """Construct and return a dictionary mapping each of the most frequently appearing words to a unique integer."""
    # TODO: Determine how often each word appears in `data`. Note that `data` is a list of sentences and that a
    # sentence is a list of words.
    word_count = {} # A dict storing the words that appear in the reviews along with how often they occur
    for sentence in data:
        for word in sentence:
            word_count[word] = word_count.get(word, 0) + 1
    # TODO: Sort the words found in `data` so that sorted_words[0] is the most frequently appearing word and
    # sorted_words[-1] is the least frequently appearing word.
    sorted_words = [k for k, _ in sorted(word_count.items(), key=lambda item: item[1], reverse=True)]
    word_dict = {} # This is what we are building, a dictionary that translates words into integers
    for idx, word in enumerate(sorted_words[:vocab_size - 2]): # The -2 is so that we save room for the 'no word'
        word_dict[word] = idx + 2                              # and 'infrequent' labels
    return word_dict
word_dict = build_dict(train_X)
```
**Question:** What are the five most frequently appearing (tokenized) words in the training set? Does it makes sense that these words appear frequently in the training set?
**Answer:** The most frequently appearing words in the training set are {"hostil", "assort", "handicap", "monti", "sparkl"}. Taking into account that the training set contains positive and negative reviews, we could associate "hostil" with negative reviews and "sparkl" with positive ones, but honestly I do not have a clear idea whether they make sense in this context. My intuition tells me this is because the words are tokenized, but maybe it is just my English level :/
```
# TODO: Use this space to determine the five most frequently appearing words in the training set.
top_words_training = [k for k, _ in sorted(word_dict.items(), key=lambda item: item[1], reverse=True)]
top_words_training[:5]
```
### Save `word_dict`
Later on when we construct an endpoint which processes a submitted review we will need to make use of the `word_dict` which we have created. As such, we will save it to a file now for future use.
```
data_dir = '../data/pytorch' # The folder we will use for storing data
if not os.path.exists(data_dir): # Make sure that the folder exists
    os.makedirs(data_dir)

with open(os.path.join(data_dir, 'word_dict.pkl'), "wb") as f:
    pickle.dump(word_dict, f)
```
### Transform the reviews
Now that we have our word dictionary which allows us to transform the words appearing in the reviews into integers, it is time to make use of it and convert our reviews to their integer sequence representation, making sure to pad or truncate to a fixed length, which in our case is `500`.
```
def convert_and_pad(word_dict, sentence, pad=500):
    NOWORD = 0 # We will use 0 to represent the 'no word' category
    INFREQ = 1 # and we use 1 to represent the infrequent words, i.e., words not appearing in word_dict
    working_sentence = [NOWORD] * pad
    for word_index, word in enumerate(sentence[:pad]):
        if word in word_dict:
            working_sentence[word_index] = word_dict[word]
        else:
            working_sentence[word_index] = INFREQ
    return working_sentence, min(len(sentence), pad)

def convert_and_pad_data(word_dict, data, pad=500):
    result = []
    lengths = []
    for sentence in data:
        converted, leng = convert_and_pad(word_dict, sentence, pad)
        result.append(converted)
        lengths.append(leng)
    return np.array(result), np.array(lengths)
train_X, train_X_len = convert_and_pad_data(word_dict, train_X)
test_X, test_X_len = convert_and_pad_data(word_dict, test_X)
```
As a quick check to make sure that things are working as intended, check to see what one of the reviews in the training set looks like after having been processed. Does this look reasonable? What is the length of a review in the training set?
```
# Use this cell to examine one of the processed reviews to make sure everything is working as intended.
print('One of the reviews in the training data after encoding is:\n {}'.format(train_X[1]))
print('This review has a length of:\n {}'.format(train_X_len[1]))
```
**Question:** In the cells above we use the `preprocess_data` and `convert_and_pad_data` methods to process both the training and testing set. Why or why not might this be a problem?
**Answer:** In my opinion, the most important point is to remember that the dictionary was constructed only on the training data, so there is no data leakage. Also, these functions take the dictionary and a dataset as arguments, so they are completely independent of whether they process the training or the testing data. In conclusion, I do not see a problem with processing the data using this approach.
## Step 3: Upload the data to S3
As in the XGBoost notebook, we will need to upload the training dataset to S3 in order for our training code to access it. For now we will save it locally and we will upload to S3 later on.
### Save the processed training dataset locally
It is important to note the format of the data that we are saving as we will need to know it when we write the training code. In our case, each row of the dataset has the form `label`, `length`, `review[500]` where `review[500]` is a sequence of `500` integers representing the words in the review.
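As a sketch of how the training code can split rows in this layout back apart — label first, then length, then the padded review — using a toy row with a much shorter review than the real files' 500 integers (the variable names here are illustrative):

```python
import numpy as np
import pandas as pd

# Toy row in the same layout: label, length, then the padded review.
# The real train.csv carries 500 integers per review.
row = pd.DataFrame([[1, 3, 52, 7, 199, 0, 0]])

train_y = row[[0]].values.squeeze()     # sentiment label
train_X = row.drop([0], axis=1).values  # length + padded review
lengths, reviews = train_X[:, 0], train_X[:, 1:]
```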
```
import pandas as pd
pd.concat([pd.DataFrame(train_y), pd.DataFrame(train_X_len), pd.DataFrame(train_X)], axis=1) \
        .to_csv(os.path.join(data_dir, 'train.csv'), header=False, index=False)
```
### Uploading the training data
Next, we need to upload the training data to the SageMaker default S3 bucket so that we can provide access to it while training our model.
```
import sagemaker
sagemaker_session = sagemaker.Session()
bucket = sagemaker_session.default_bucket()
prefix = 'sagemaker/sentiment_rnn'
role = sagemaker.get_execution_role()
input_data = sagemaker_session.upload_data(path=data_dir, bucket=bucket, key_prefix=prefix)
```
**NOTE:** The cell above uploads the entire contents of our data directory. This includes the `word_dict.pkl` file. This is fortunate as we will need this later on when we create an endpoint that accepts an arbitrary review. For now, we will just take note of the fact that it resides in the data directory (and so also in the S3 training bucket) and that we will need to make sure it gets saved in the model directory.
## Step 4: Build and Train the PyTorch Model
In the XGBoost notebook we discussed what a model is in the SageMaker framework. In particular, a model comprises three objects
- Model Artifacts,
- Training Code, and
- Inference Code,
each of which interact with one another. In the XGBoost example we used training and inference code that was provided by Amazon. Here we will still be using containers provided by Amazon with the added benefit of being able to include our own custom code.
We will start by implementing our own neural network in PyTorch along with a training script. For the purposes of this project we have provided the necessary model object in the `model.py` file, inside of the `train` folder. You can see the provided implementation by running the cell below.
```
!pygmentize train/model.py
```
The important takeaway from the implementation provided is that there are three parameters that we may wish to tweak to improve the performance of our model. These are the embedding dimension, the hidden dimension and the size of the vocabulary. We will likely want to make these parameters configurable in the training script so that if we wish to modify them we do not need to modify the script itself. We will see how to do this later on. To start we will write some of the training code in the notebook so that we can more easily diagnose any issues that arise.
First we will load a small portion of the training data set to use as a sample. It would be very time consuming to try and train the model completely in the notebook as we do not have access to a gpu and the compute instance that we are using is not particularly powerful. However, we can work on a small bit of the data to get a feel for how our training script is behaving.
```
import torch
import torch.utils.data
# Read in only the first 250 rows
train_sample = pd.read_csv(os.path.join(data_dir, 'train.csv'), header=None, names=None, nrows=250)
# Turn the input pandas dataframe into tensors
train_sample_y = torch.from_numpy(train_sample[[0]].values).float().squeeze()
train_sample_X = torch.from_numpy(train_sample.drop([0], axis=1).values).long()
# Build the dataset
train_sample_ds = torch.utils.data.TensorDataset(train_sample_X, train_sample_y)
# Build the dataloader
train_sample_dl = torch.utils.data.DataLoader(train_sample_ds, batch_size=50)
```
### (TODO) Writing the training method
Next we need to write the training code itself. This should be very similar to training methods that you have written before to train PyTorch models. We will leave any difficult aspects such as model saving / loading and parameter loading until a little later.
```
def train(model, train_loader, epochs, optimizer, loss_fn, device):
    for epoch in range(1, epochs + 1):
        model.train()
        total_loss = 0
        for batch in train_loader:
            # Extract the samples
            batch_X, batch_y = batch
            # Move the samples to the available device
            batch_X = batch_X.to(device)
            batch_y = batch_y.to(device)
            # TODO: Complete this train method to train the model provided.
            # Zero the gradients
            model.zero_grad()
            # Forward propagation
            prediction = model(batch_X)
            # Calculate the loss and perform backpropagation
            loss = loss_fn(prediction, batch_y)
            loss.backward()
            # Optimizer step
            optimizer.step()
            total_loss += loss.data.item()
        print("Epoch: {}, BCELoss: {}".format(epoch, total_loss / len(train_loader)))
```
Supposing we have the training method above, we will test that it is working by writing a bit of code in the notebook that executes our training method on the small sample training set that we loaded earlier. The reason for doing this in the notebook is so that we have an opportunity to fix any errors that arise early when they are easier to diagnose.
```
import torch.optim as optim
from train.model import LSTMClassifier
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
model = LSTMClassifier(32, 100, 5000).to(device)
optimizer = optim.Adam(model.parameters())
loss_fn = torch.nn.BCELoss()
train(model, train_sample_dl, 5, optimizer, loss_fn, device)
```
In order to construct a PyTorch model using SageMaker we must provide SageMaker with a training script. We may optionally include a directory which will be copied to the container and from which our training code will be run. When the training container is executed it will check the uploaded directory (if there is one) for a `requirements.txt` file and install any required Python libraries, after which the training script will be run.
### (TODO) Training the model
When a PyTorch model is constructed in SageMaker, an entry point must be specified. This is the Python file which will be executed when the model is trained. Inside of the `train` directory is a file called `train.py` which has been provided and which contains most of the necessary code to train our model. The only thing that is missing is the implementation of the `train()` method which you wrote earlier in this notebook.
**TODO**: Copy the `train()` method written above and paste it into the `train/train.py` file where required.
The way that SageMaker passes hyperparameters to the training script is by way of arguments. These arguments can then be parsed and used in the training script. To see how this is done take a look at the provided `train/train.py` file.
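A minimal sketch of that mechanism: each hyperparameter arrives as a command-line flag that `argparse` turns into an attribute (the argument names below mirror the hyperparameters used in this notebook, but the actual `train/train.py` defines its own, larger set of arguments plus the SageMaker environment defaults):

```python
import argparse

def parse_hyperparameters(argv=None):
    """Sketch of SageMaker-style hyperparameter parsing via CLI flags."""
    parser = argparse.ArgumentParser()
    parser.add_argument('--epochs', type=int, default=10)
    parser.add_argument('--hidden_dim', type=int, default=100)
    parser.add_argument('--embedding_dim', type=int, default=32)
    return parser.parse_args(argv)

# SageMaker would invoke the script roughly as:
#   python train.py --epochs 10 --hidden_dim 200
args = parse_hyperparameters(['--epochs', '10', '--hidden_dim', '200'])
```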
```
from sagemaker.pytorch import PyTorch
estimator = PyTorch(entry_point="train.py",
                    source_dir="train",
                    role=role,
                    framework_version='0.4.0',
                    train_instance_count=1,
                    train_instance_type='ml.m4.xlarge', # 'ml.p2.xlarge' for GPU training
                    hyperparameters={
                        'epochs': 10,
                        'hidden_dim': 200,
                    })
estimator.fit({'training': input_data})
```
## Step 5: Testing the model
As mentioned at the top of this notebook, we will be testing this model by first deploying it and then sending the testing data to the deployed endpoint. We will do this so that we can make sure that the deployed model is working correctly.
## Step 6: Deploy the model for testing
Now that we have trained our model, we would like to test it to see how it performs. Currently our model takes input of the form `review_length, review[500]` where `review[500]` is a sequence of `500` integers which describe the words present in the review, encoded using `word_dict`. Fortunately for us, SageMaker provides built-in inference code for models with simple inputs such as this.
There is one thing that we need to provide, however, and that is a function which loads the saved model. This function must be called `model_fn()` and takes as its only parameter a path to the directory where the model artifacts are stored. This function must also be present in the python file which we specified as the entry point. In our case the model loading function has been provided and so no changes need to be made.
**NOTE**: When the built-in inference code is run it must import the `model_fn()` method from the `train.py` file. This is why the training code is wrapped in a main guard ( ie, `if __name__ == '__main__':` )
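The contract itself can be sketched framework-free: SageMaker hands `model_fn` the path to the directory holding the model artifacts and expects a ready-to-use model object back. Plain `pickle` and a hypothetical `model.pkl` filename stand in here for the `torch.load`-based state-dict loading that the provided `train.py` actually performs:

```python
import os
import pickle
import tempfile

def model_fn(model_dir):
    """Sketch of the SageMaker model-loading contract: given the artifact
    directory, return the loaded model. (The real implementation rebuilds
    the LSTMClassifier and loads its saved state_dict with torch.load.)"""
    with open(os.path.join(model_dir, "model.pkl"), "rb") as f:
        return pickle.load(f)

# Toy round trip: save a stand-in "model", then load it the way the
# inference container would.
model_dir = tempfile.mkdtemp()
with open(os.path.join(model_dir, "model.pkl"), "wb") as f:
    pickle.dump({"hidden_dim": 200}, f)
model = model_fn(model_dir)
```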
Since we don't need to change anything in the code that was uploaded during training, we can simply deploy the current model as-is.
**NOTE:** When deploying a model you are asking SageMaker to launch a compute instance that will wait for data to be sent to it. As a result, this compute instance will continue to run until *you* shut it down. This is important to know since the cost of a deployed endpoint depends on how long it has been running.
In other words **If you are no longer using a deployed endpoint, shut it down!**
**TODO:** Deploy the trained model.
```
# TODO: Deploy the trained model
estimator_predictor = estimator.deploy(initial_instance_count=1, instance_type='ml.m4.xlarge')
```
## Step 7 - Use the model for testing
Once deployed, we can read in the test data and send it off to our deployed model to get some results. Once we collect all of the results we can determine how accurate our model is.
```
test_X = pd.concat([pd.DataFrame(test_X_len), pd.DataFrame(test_X)], axis=1)
# We split the data into chunks and send each chunk separately, accumulating the results.
def predict(data, rows=512):
    split_array = np.array_split(data, int(data.shape[0] / float(rows) + 1))
    predictions = np.array([])
    for array in split_array:
        predictions = np.append(predictions, estimator_predictor.predict(array))
    return predictions
predictions = predict(test_X.values)
predictions = [round(num) for num in predictions]
from sklearn.metrics import accuracy_score
accuracy_score(test_y, predictions)
```
**Question:** How does this model compare to the XGBoost model you created earlier? Why might these two models perform differently on this dataset? Which do *you* think is better for sentiment analysis?
**Answer:** With the XGBoost model I obtained an accuracy of $\approx 0.82$, while with the RNN the accuracy is around 0.85, which suggests that the RNN performs better than XGBoost. In my opinion this difference can be explained by two main factors:
1. The model itself: On one hand, XGBoost is a boosted-tree approach that uses information about the residuals to assign weights to wrongly classified instances. These approaches are good at reducing variance (like any other ensemble method) but have no memory. On the other hand, an RNN has a memory component that makes its predictions more suitable in these kinds of scenarios. Specifically, the LSTM cell included in the architecture of the current RNN is widely known for its strong performance on these kinds of NLP problems.
2. The data preparation: For the XGBoost model we used a bag-of-words approach to count the number of times that the words in our dictionary occur in a given review, while for the RNN we assigned a unique tag (integer) to each word in the dictionary and built the input vectors from these tags. The difference is that the first alternative only considers frequency information, while the second merges frequency and positional information into a single input vector. This subtle difference gives the RNN, in my opinion, room to "understand" the relationships between the variables.
In my opinion, the better option for sentiment analysis is the RNN. With this model we have plenty of options to test different architectures, besides the "memory" factor, which is of vital importance in these problems.
### (TODO) More testing
We now have a trained model which has been deployed and which we can send processed reviews to and which returns the predicted sentiment. However, ultimately we would like to be able to send our model an unprocessed review. That is, we would like to send the review itself as a string. For example, suppose we wish to send the following review to our model.
```
test_review = 'The simplest pleasures in life are the best, and this film is one of them. Combining a rather basic storyline of love and adventure this movie transcends the usual weekend fair with wit and unmitigated charm.'
```
The question we now need to answer is, how do we send this review to our model?
Recall in the first section of this notebook we did a bunch of data processing to the IMDb dataset. In particular, we did two specific things to the provided reviews.
- Removed any html tags and stemmed the input
- Encoded the review as a sequence of integers using `word_dict`
In order to process the review we will need to repeat these two steps.
**TODO**: Using the `review_to_words` and `convert_and_pad` methods from section one, convert `test_review` into a numpy array `test_data` suitable to send to our model. Remember that our model expects input of the form `review_length, review[500]`.
```
# TODO: Convert test_review into a form usable by the model and save the results in test_data
sentence_X, sentence_X_len = convert_and_pad(word_dict, review_to_words(test_review))
test_data = np.array([sentence_X_len] + sentence_X,ndmin=2)
```
Now that we have processed the review, we can send the resulting array to our model to predict the sentiment of the review.
```
estimator_predictor.predict(test_data)
```
Since the return value of our model is close to `1`, we can be fairly confident that the review we submitted is positive.
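For reference, a tiny helper (a sketch, not part of the provided code) that turns the endpoint's numeric score into a label under this convention:

```python
# Hypothetical helper: maps the endpoint's score in [0, 1] to a label,
# assuming (as above) that values near 1 mean a positive review.
def to_label(score, threshold=0.5):
    return 'POSITIVE' if float(score) > threshold else 'NEGATIVE'
```

The threshold of `0.5` is an assumption; a stricter cutoff could be used if we wanted to be more conservative about calling a review positive.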
### Delete the endpoint
Of course, just like in the XGBoost notebook, once we've deployed an endpoint it continues to run until we tell it to shut down. Since we are done using our endpoint for now, we can delete it.
```
estimator_predictor.delete_endpoint()
```
## Step 6 (again) - Deploy the model for the web app
Now that we know that our model is working, it's time to create some custom inference code so that we can send the model a review which has not been processed and have it determine the sentiment of the review.
As we saw above, by default the estimator which we created, when deployed, will use the entry script and directory which we provided when creating the model. However, since we now wish to accept a string as input and our model expects a processed review, we need to write some custom inference code.
We will store the code that we write in the `serve` directory. Provided in this directory is the `model.py` file that we used to construct our model, a `utils.py` file which contains the `review_to_words` and `convert_and_pad` pre-processing functions which we used during the initial data processing, and `predict.py`, the file which will contain our custom inference code. Note also that `requirements.txt` is present which will tell SageMaker what Python libraries are required by our custom inference code.
When deploying a PyTorch model in SageMaker, you are expected to provide four functions which the SageMaker inference container will use.
- `model_fn`: This function is the same function that we used in the training script and it tells SageMaker how to load our model.
- `input_fn`: This function receives the raw serialized input that has been sent to the model's endpoint and its job is to de-serialize and make the input available for the inference code.
- `output_fn`: This function takes the output of the inference code and its job is to serialize this output and return it to the caller of the model's endpoint.
- `predict_fn`: The heart of the inference script, this is where the actual prediction is done and is the function which you will need to complete.
For the simple website that we are constructing during this project, the `input_fn` and `output_fn` methods are relatively straightforward. We only require being able to accept a string as input and we expect to return a single value as output. You might imagine though that in a more complex application the input or output may be image data or some other binary data which would require some effort to serialize.
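As a rough illustration only (the actual provided implementation in `serve/predict.py` may differ), plain-text versions of these two functions could look something like:

```python
# Sketch of text/plain (de)serialization in the style the SageMaker PyTorch
# container expects; not the provided code.
def input_fn(serialized_input_data, content_type):
    # De-serialize the raw bytes that were POSTed to the endpoint.
    if content_type != 'text/plain':
        raise ValueError('Unsupported content type: ' + content_type)
    return serialized_input_data.decode('utf-8')

def output_fn(prediction_output, accept):
    # Return the single sentiment value to the caller as a plain string.
    return str(prediction_output)
```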
### (TODO) Writing inference code
Before writing our custom inference code, we will begin by taking a look at the code which has been provided.
```
!pygmentize serve/predict.py
```
As mentioned earlier, the `model_fn` method is the same as the one provided in the training code and the `input_fn` and `output_fn` methods are very simple and your task will be to complete the `predict_fn` method. Make sure that you save the completed file as `predict.py` in the `serve` directory.
**TODO**: Complete the `predict_fn()` method in the `serve/predict.py` file.
### Deploying the model
Now that the custom inference code has been written, we will create and deploy our model. To begin with, we need to construct a new PyTorchModel object which points to the model artifacts created during training and also points to the inference code that we wish to use. Then we can call the deploy method to launch the deployment container.
**NOTE**: The default behaviour for a deployed PyTorch model is to assume that any input passed to the predictor is a `numpy` array. In our case we want to send a string, so we need to construct a simple wrapper around the `RealTimePredictor` class to accommodate simple strings. In a more complicated situation you may want to provide a serialization object, for example if you wanted to send image data.
```
from sagemaker.predictor import RealTimePredictor
from sagemaker.pytorch import PyTorchModel
class StringPredictor(RealTimePredictor):
def __init__(self, endpoint_name, sagemaker_session):
super(StringPredictor, self).__init__(endpoint_name, sagemaker_session, content_type='text/plain')
model = PyTorchModel(model_data=estimator.model_data,
role = role,
framework_version='0.4.0',
entry_point='predict.py',
source_dir='serve',
predictor_cls=StringPredictor)
predictor = model.deploy(initial_instance_count=1, instance_type='ml.m4.xlarge')
```
### Testing the model
Now that we have deployed our model with the custom inference code, we should test to see if everything is working. Here we test our model by loading the first `250` positive and negative reviews, sending them to the endpoint, and collecting the results. The reason for only sending some of the data is that the amount of time it takes for our model to process the input and then perform inference is quite long, so testing the entire data set would be prohibitive.
```
import glob
def test_reviews(data_dir='../data/aclImdb', stop=250):
results = []
ground = []
# We make sure to test both positive and negative reviews
for sentiment in ['pos', 'neg']:
path = os.path.join(data_dir, 'test', sentiment, '*.txt')
files = glob.glob(path)
files_read = 0
print('Starting ', sentiment, ' files')
# Iterate through the files and send them to the predictor
for f in files:
with open(f) as review:
# First, we store the ground truth (was the review positive or negative)
if sentiment == 'pos':
ground.append(1)
else:
ground.append(0)
# Read in the review and convert to 'utf-8' for transmission via HTTP
review_input = review.read().encode('utf-8')
# Send the review to the predictor and store the results
results.append(int(predictor.predict(review_input)))
# Sending reviews to our endpoint one at a time takes a while so we
# only send a small number of reviews
files_read += 1
if files_read == stop:
break
return ground, results
ground, results = test_reviews()
from sklearn.metrics import accuracy_score
accuracy_score(ground, results)
```
As an additional test, we can try sending the `test_review` that we looked at earlier.
```
predictor.predict(test_review)
```
Now that we know our endpoint is working as expected, we can set up the web page that will interact with it. If you don't have time to finish the project now, make sure to skip down to the end of this notebook and shut down your endpoint. You can deploy it again when you come back.
## Step 7 (again): Use the model for the web app
> **TODO:** This entire section and the next contain tasks for you to complete, mostly using the AWS console.
So far we have been accessing our model endpoint by constructing a predictor object which uses the endpoint and then just using the predictor object to perform inference. What if we wanted to create a web app which accessed our model? The way things are currently set up makes that impossible, since in order to access a SageMaker endpoint the app would first have to authenticate with AWS using an IAM role which includes access to SageMaker endpoints. However, there is an easier way! We just need to use some additional AWS services.
<img src="Web App Diagram.svg">
The diagram above gives an overview of how the various services will work together. On the far right is the model which we trained above and which is deployed using SageMaker. On the far left is our web app that collects a user's movie review, sends it off and expects a positive or negative sentiment in return.
In the middle is where some of the magic happens. We will construct a Lambda function, which you can think of as a straightforward Python function that can be executed whenever a specified event occurs. We will give this function permission to send and receive data from a SageMaker endpoint.
Lastly, the method we will use to execute the Lambda function is a new endpoint that we will create using API Gateway. This endpoint will be a url that listens for data to be sent to it. Once it gets some data it will pass that data on to the Lambda function and then return whatever the Lambda function returns. Essentially it will act as an interface that lets our web app communicate with the Lambda function.
### Setting up a Lambda function
The first thing we are going to do is set up a Lambda function. This Lambda function will be executed whenever our public API has data sent to it. When it is executed it will receive the data, perform any sort of processing that is required, send the data (the review) to the SageMaker endpoint we've created and then return the result.
#### Part A: Create an IAM Role for the Lambda function
Since we want the Lambda function to call a SageMaker endpoint, we need to make sure that it has permission to do so. To do this, we will construct a role that we can later give the Lambda function.
Using the AWS Console, navigate to the **IAM** page and click on **Roles**. Then, click on **Create role**. Make sure that the **AWS service** is the type of trusted entity selected and choose **Lambda** as the service that will use this role, then click **Next: Permissions**.
In the search box type `sagemaker` and select the check box next to the **AmazonSageMakerFullAccess** policy. Then, click on **Next: Review**.
Lastly, give this role a name. Make sure you use a name that you will remember later on, for example `LambdaSageMakerRole`. Then, click on **Create role**.
#### Part B: Create a Lambda function
Now it is time to actually create the Lambda function.
Using the AWS Console, navigate to the AWS Lambda page and click on **Create a function**. When you get to the next page, make sure that **Author from scratch** is selected. Now, name your Lambda function, using a name that you will remember later on, for example `sentiment_analysis_func`. Make sure that the **Python 3.6** runtime is selected and then choose the role that you created in the previous part. Then, click on **Create Function**.
On the next page you will see some information about the Lambda function you've just created. If you scroll down you should see an editor in which you can write the code that will be executed when your Lambda function is triggered. In our example, we will use the code below.
```python
# We need to use the low-level library to interact with SageMaker since the SageMaker API
# is not available natively through Lambda.
import boto3
def lambda_handler(event, context):
# The SageMaker runtime is what allows us to invoke the endpoint that we've created.
runtime = boto3.Session().client('sagemaker-runtime')
# Now we use the SageMaker runtime to invoke our endpoint, sending the review we were given
response = runtime.invoke_endpoint(EndpointName = '**ENDPOINT NAME HERE**', # The name of the endpoint we created
ContentType = 'text/plain', # The data format that is expected
Body = event['body']) # The actual review
# The response is an HTTP response whose body contains the result of our inference
result = response['Body'].read().decode('utf-8')
return {
'statusCode' : 200,
'headers' : { 'Content-Type' : 'text/plain', 'Access-Control-Allow-Origin' : '*' },
'body' : result
}
```
Once you have copy and pasted the code above into the Lambda code editor, replace the `**ENDPOINT NAME HERE**` portion with the name of the endpoint that we deployed earlier. You can determine the name of the endpoint using the code cell below.
```
predictor.endpoint
```
Once you have added the endpoint name to the Lambda function, click on **Save**. Your Lambda function is now up and running. Next we need to create a way for our web app to execute the Lambda function.
### Setting up API Gateway
Now that our Lambda function is set up, it is time to create a new API using API Gateway that will trigger the Lambda function we have just created.
Using the AWS Console, navigate to **Amazon API Gateway** and then click on **Get started**.
On the next page, make sure that **New API** is selected and give the new api a name, for example, `sentiment_analysis_api`. Then, click on **Create API**.
Now we have created an API, however it doesn't currently do anything. What we want it to do is to trigger the Lambda function that we created earlier.
Select the **Actions** dropdown menu and click **Create Method**. A new blank method will be created, select its dropdown menu and select **POST**, then click on the check mark beside it.
For the integration point, make sure that **Lambda Function** is selected and click on the **Use Lambda Proxy integration**. This option makes sure that the data that is sent to the API is then sent directly to the Lambda function with no processing. It also means that the return value must be a proper response object as it will also not be processed by API Gateway.
Type the name of the Lambda function you created earlier into the **Lambda Function** text entry box and then click on **Save**. Click on **OK** in the pop-up box that then appears, giving permission to API Gateway to invoke the Lambda function you created.
The last step in creating the API Gateway is to select the **Actions** dropdown and click on **Deploy API**. You will need to create a new Deployment stage and name it anything you like, for example `prod`.
You have now successfully set up a public API to access your SageMaker model. Make sure to copy or write down the URL provided to invoke your newly created public API as this will be needed in the next step. This URL can be found at the top of the page, highlighted in blue next to the text **Invoke URL**.
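Once deployed, any HTTP client can exercise the API. As a sketch using only the standard library (the invoke URL below is a hypothetical placeholder; substitute the one you copied down):

```python
from urllib import request

# Hypothetical invoke URL -- replace with the one from your API Gateway console.
API_URL = 'https://abc123.execute-api.us-east-1.amazonaws.com/prod'

def build_review_request(review, url=API_URL):
    # With Lambda proxy integration, API Gateway forwards the raw body untouched,
    # so the review travels as plain text rather than JSON.
    return request.Request(url, data=review.encode('utf-8'),
                           headers={'Content-Type': 'text/plain'}, method='POST')

def submit_review(review, url=API_URL):
    # Send the review and return the plain-text sentiment the Lambda hands back.
    with request.urlopen(build_review_request(review, url)) as resp:
        return resp.read().decode('utf-8')
```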
## Step 4: Deploying our web app
Now that we have a publicly available API, we can start using it in a web app. For our purposes, we have provided a simple static html file which can make use of the public api you created earlier.
In the `website` folder there should be a file called `index.html`. Download the file to your computer and open that file up in a text editor of your choice. There should be a line which contains **\*\*REPLACE WITH PUBLIC API URL\*\***. Replace this string with the url that you wrote down in the last step and then save the file.
Now, if you open `index.html` on your local computer, your browser will behave as a local web server and you can use the provided site to interact with your SageMaker model.
If you'd like to go further, you can host this html file anywhere you'd like, for example using github or hosting a static site on Amazon's S3. Once you have done this you can share the link with anyone you'd like and have them play with it too!
> **Important Note** In order for the web app to communicate with the SageMaker endpoint, the endpoint has to actually be deployed and running. This means that you are paying for it. Make sure that the endpoint is running when you want to use the web app but that you shut it down when you don't need it, otherwise you will end up with a surprisingly large AWS bill.
**TODO:** Make sure that you include the edited `index.html` file in your project submission.
Now that your web app is working, try playing around with it and see how well it works.
**Question**: Give an example of a review that you entered into your web app. What was the predicted sentiment of your example review?
**Answer:** I tried multiple examples of reviews; below I include one positive and one negative.
* This movie was very bad, I hate it! --> Your review was NEGATIVE!
* This movie is adorable! I want to see it again. --> Your review was POSITIVE!
### Delete the endpoint
Remember to always shut down your endpoint if you are no longer using it. You are charged for the length of time that the endpoint is running so if you forget and leave it on you could end up with an unexpectedly large bill.
```
predictor.delete_endpoint()
```
<a href="https://colab.research.google.com/github/120Davies/DS-Unit-4-Sprint-3-Deep-Learning/blob/master/Ro_Davies_LS_DS_431_RNN_and_LSTM_Assignment.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a>
<img align="left" src="https://lever-client-logos.s3.amazonaws.com/864372b1-534c-480e-acd5-9711f850815c-1524247202159.png" width=200>
<br></br>
<br></br>
## *Data Science Unit 4 Sprint 3 Assignment 1*
# Recurrent Neural Networks and Long Short Term Memory (LSTM)

It is said that [infinite monkeys typing for an infinite amount of time](https://en.wikipedia.org/wiki/Infinite_monkey_theorem) will eventually type, among other things, the complete works of William Shakespeare. Let's see if we can get there a bit faster, with the power of Recurrent Neural Networks and LSTM.
This text file contains the complete works of Shakespeare: https://www.gutenberg.org/files/100/100-0.txt
Use it as training data for an RNN - you can keep it simple and train character level, and that is suggested as an initial approach.
Then, use that trained RNN to generate Shakespearean-ish text. Your goal - a function that can take, as an argument, the size of text (e.g. number of characters or lines) to generate, and returns generated text of that size.
Note - Shakespeare wrote an awful lot. It's OK, especially initially, to sample/use smaller data and parameters, so you can have a tighter feedback loop when you're trying to get things running. Then, once you've got a proof of concept - start pushing it more!
```
# TODO - Words, words, mere words, no matter from the heart.
import numpy as np
with open('./shakespear/100-0.txt', 'r') as f:
text = f.read()
text = " ".join(text[:250000])
chars = list(set(text))
num_chars = len(chars)
txt_size = len(text)
print('Number of unique characters:', num_chars)
print('Total characters in text:', txt_size)
char_to_int = dict((c,i) for i,c in enumerate(chars))
int_to_char = dict((i,c) for i,c in enumerate(chars))
print(char_to_int)
print('-'*50)
print(int_to_char)
integer_encoded = [char_to_int[i] for i in text]
print(len(integer_encoded))
# hyperparameters
iteration = 10
sequence_length = 40
import math
batch_size = math.ceil(txt_size / sequence_length)  # number of sequence windows per pass
hidden_size = 500 # size of hidden layer of neurons.
learning_rate = 1e-1
# model parameters
W_xh = np.random.randn(hidden_size, num_chars)*0.01 # weight input -> hidden.
W_hh = np.random.randn(hidden_size, hidden_size)*0.01 # weight hidden -> hidden
W_hy = np.random.randn(num_chars, hidden_size)*0.01 # weight hidden -> output
b_h = np.zeros((hidden_size, 1)) # hidden bias
b_y = np.zeros((num_chars, 1)) # output bias
h_prev = np.zeros((hidden_size,1)) # h_(t-1)
def forwardprop(inputs, targets, h_prev):
# Since the RNN receives the sequence, the weights are not updated during one sequence.
xs, hs, ys, ps = {}, {}, {}, {} # dictionary
hs[-1] = np.copy(h_prev) # Copy previous hidden state vector to -1 key value.
loss = 0 # loss initialization
for t in range(len(inputs)): # t is a "time step" and is used as a key(dic).
xs[t] = np.zeros((num_chars,1))
xs[t][inputs[t]] = 1
hs[t] = np.tanh(np.dot(W_xh, xs[t]) + np.dot(W_hh, hs[t-1]) + b_h) # hidden state.
ys[t] = np.dot(W_hy, hs[t]) + b_y # unnormalized log probabilities for next chars
ps[t] = np.exp(ys[t]) / np.sum(np.exp(ys[t])) # probabilities for next chars.
# Softmax. -> The sum of probabilities is 1 even without the exp() function, but all of the elements are positive through the exp() function.
loss += -np.log(ps[t][targets[t],0]) # softmax (cross-entropy loss). Efficient and simple code
# y_class = np.zeros((num_chars, 1))
# y_class[targets[t]] =1
# loss += np.sum(y_class*(-np.log(ps[t]))) # softmax (cross-entropy loss)
return loss, ps, hs, xs
def backprop(ps, inputs, hs, xs, targets):
dWxh, dWhh, dWhy = np.zeros_like(W_xh), np.zeros_like(W_hh), np.zeros_like(W_hy) # make all zero matrices.
dbh, dby = np.zeros_like(b_h), np.zeros_like(b_y)
dhnext = np.zeros_like(hs[0]) # (hidden_size,1)
# reversed
for t in reversed(range(len(inputs))):
dy = np.copy(ps[t]) # shape (num_chars,1). "dy" means "dloss/dy"
dy[targets[t]] -= 1 # backprop into y. After taking the soft max in the input vector, subtract 1 from the value of the element corresponding to the correct label.
dWhy += np.dot(dy, hs[t].T)
dby += dy
dh = np.dot(W_hy.T, dy) + dhnext # backprop into h.
dhraw = (1 - hs[t] * hs[t]) * dh # backprop through tanh nonlinearity #tanh'(x) = 1-tanh^2(x)
dbh += dhraw
dWxh += np.dot(dhraw, xs[t].T)
dWhh += np.dot(dhraw, hs[t-1].T)
dhnext = np.dot(W_hh.T, dhraw)
for dparam in [dWxh, dWhh, dWhy, dbh, dby]:
np.clip(dparam, -5, 5, out=dparam) # clip to mitigate exploding gradients.
return dWxh, dWhh, dWhy, dbh, dby
%%time
data_pointer = 0
# memory variables for Adagrad
mWxh, mWhh, mWhy = np.zeros_like(W_xh), np.zeros_like(W_hh), np.zeros_like(W_hy)
mbh, mby = np.zeros_like(b_h), np.zeros_like(b_y)
for i in range(iteration):
h_prev = np.zeros((hidden_size,1)) # reset RNN memory
data_pointer = 0 # go from start of data
for b in range(batch_size):
inputs = [char_to_int[ch] for ch in text[data_pointer:data_pointer+sequence_length]]
targets = [char_to_int[ch] for ch in text[data_pointer+1:data_pointer+sequence_length+1]] # t+1
if (data_pointer+sequence_length+1 >= len(text) and b == batch_size-1): # processing of the last part of the input data.
# targets.append(char_to_int[txt_data[0]]) # When the data doesn't fit, add the first char to the back.
targets.append(char_to_int[" "]) # When the data doesn't fit, add space(" ") to the back.
# forward
loss, ps, hs, xs = forwardprop(inputs, targets, h_prev)
# print(loss)
# backward
dWxh, dWhh, dWhy, dbh, dby = backprop(ps, inputs, hs, xs, targets)
# perform parameter update with Adagrad
for param, dparam, mem in zip([W_xh, W_hh, W_hy, b_h, b_y],
[dWxh, dWhh, dWhy, dbh, dby],
[mWxh, mWhh, mWhy, mbh, mby]):
mem += dparam * dparam # elementwise
param += -learning_rate * dparam / np.sqrt(mem + 1e-8) # adagrad update
data_pointer += sequence_length # move data pointer
if i % 2 == 0:
print ('iter %d, loss: %f' % (i, loss)) # print progress
def predict(test_char, length):
x = np.zeros((num_chars, 1))
x[char_to_int[test_char]] = 1
ixes = []
h = np.zeros((hidden_size,1))
for t in range(length):
h = np.tanh(np.dot(W_xh, x) + np.dot(W_hh, h) + b_h)
y = np.dot(W_hy, h) + b_y
p = np.exp(y) / np.sum(np.exp(y))
ix = np.random.choice(range(num_chars), p=p.ravel()) # ravel -> rank0
# "ix" is a list of indexes selected according to the soft max probability.
x = np.zeros((num_chars, 1)) # init
x[ix] = 1
ixes.append(ix) # list
txt = test_char + ''.join(int_to_char[i] for i in ixes)
print ('----\n %s \n----' % (txt, ))
predict('A', 1000)
```
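The `predict` function above prints its sample, while the assignment asks for a function that *returns* generated text of a requested size. A sketch of that wrapper, written to take the trained weights explicitly (names mirror the code above):

```python
import numpy as np

def generate_text(seed_char, length, char_to_int, int_to_char,
                  W_xh, W_hh, W_hy, b_h, b_y):
    """Sample `length` characters after seed_char and return them as one string."""
    hidden_size, num_chars = W_xh.shape
    x = np.zeros((num_chars, 1))
    x[char_to_int[seed_char]] = 1
    h = np.zeros((hidden_size, 1))
    out = [seed_char]
    for _ in range(length):
        h = np.tanh(np.dot(W_xh, x) + np.dot(W_hh, h) + b_h)
        y = np.dot(W_hy, h) + b_y
        p = np.exp(y) / np.sum(np.exp(y))      # softmax over next-char scores
        ix = np.random.choice(num_chars, p=p.ravel())
        x = np.zeros((num_chars, 1))
        x[ix] = 1                              # feed the sampled char back in
        out.append(int_to_char[ix])
    return ''.join(out)
```

After training, `generate_text('A', 1000, char_to_int, int_to_char, W_xh, W_hh, W_hy, b_h, b_y)` reproduces `predict`'s sampling loop but hands the text back instead of printing it.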
# Resources and Stretch Goals
## Stretch goals:
- Refine the training and generation of text to be able to ask for different genres/styles of Shakespearean text (e.g. plays versus sonnets)
- Train a classification model that takes text and returns which work of Shakespeare it is most likely to be from
- Make it more performant! Many possible routes here - lean on Keras, optimize the code, and/or use more resources (AWS, etc.)
- Revisit the news example from class, and improve it - use categories or tags to refine the model/generation, or train a news classifier
- Run on bigger, better data
## Resources:
- [The Unreasonable Effectiveness of Recurrent Neural Networks](https://karpathy.github.io/2015/05/21/rnn-effectiveness/) - a seminal writeup demonstrating a simple but effective character-level NLP RNN
- [Simple NumPy implementation of RNN](https://github.com/JY-Yoon/RNN-Implementation-using-NumPy/blob/master/RNN%20Implementation%20using%20NumPy.ipynb) - Python 3 version of the code from "Unreasonable Effectiveness"
- [TensorFlow RNN Tutorial](https://github.com/tensorflow/models/tree/master/tutorials/rnn) - code for training a RNN on the Penn Tree Bank language dataset
- [4 part tutorial on RNN](http://www.wildml.com/2015/09/recurrent-neural-networks-tutorial-part-1-introduction-to-rnns/) - relates RNN to the vanishing gradient problem, and provides example implementation
- [RNN training tips and tricks](https://github.com/karpathy/char-rnn#tips-and-tricks) - some rules of thumb for parameterizing and training your RNN
## Data Description and Analysis
```
import numpy as np
import pandas as pd
pd.set_option('max_columns', 150)
import gc
import os
# matplotlib and seaborn for plotting
import matplotlib
matplotlib.rcParams['figure.dpi'] = 120 #resolution
matplotlib.rcParams['figure.figsize'] = (8,6) #figure size
import matplotlib.pyplot as plt
import seaborn as sns
sns.set_style('darkgrid')
color = sns.color_palette()
root = 'C:/Data/instacart-market-basket-analysis/'
```
The dataset contains a relational set of files describing customers' orders over time. For each user, between 4 and 100 orders are provided, with the sequence of products purchased in each order. The day of the week and hour of day of each order, as well as a relative measure of time between orders, are also provided.
**Files in the Dataset:**
```
os.listdir(root)
aisles = pd.read_csv(root + 'aisles.csv')
departments = pd.read_csv(root + 'departments.csv')
orders = pd.read_csv(root + 'orders.csv')
order_products_prior = pd.read_csv(root + 'order_products__prior.csv')
order_products_train = pd.read_csv(root + 'order_products__train.csv')
products = pd.read_csv(root + 'products.csv')
```
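Since the files are relational, a single order's basket is reconstructed by joining them on their id columns. A toy sketch with stand-in frames (the real files use the same keys):

```python
import pandas as pd

# Stand-in frames that mimic the real CSVs' key columns.
products_demo = pd.DataFrame({'product_id': [1, 2], 'product_name': ['Milk', 'Bread'],
                              'aisle_id': [10, 20], 'department_id': [5, 6]})
aisles_demo = pd.DataFrame({'aisle_id': [10, 20], 'aisle': ['milk', 'bread']})
departments_demo = pd.DataFrame({'department_id': [5, 6],
                                 'department': ['dairy eggs', 'bakery']})
order_products_demo = pd.DataFrame({'order_id': [100, 100], 'product_id': [1, 2],
                                    'add_to_cart_order': [1, 2], 'reordered': [1, 0]})

# order_products -> products -> aisles -> departments, all joined on their ids.
basket = (order_products_demo
          .merge(products_demo, on='product_id')
          .merge(aisles_demo, on='aisle_id')
          .merge(departments_demo, on='department_id'))
```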
### aisles:
This file contains the different aisles; there are a total of 134 unique aisles.
```
aisles.head()
aisles.tail()
len(aisles.aisle.unique())
aisles.aisle.unique()
```
### departments:
This file contains the different departments; there are a total of 21 unique departments.
```
departments.head()
departments.tail()
len(departments.department.unique())
departments.department.unique()
```
### orders:
This file contains all the orders made by different users. From the analysis below, we can conclude the following:
- There are a total of 3421083 orders made by a total of 206209 users.
- There are three sets of orders: Prior, Train and Test. The distributions of orders in the Train and Test sets are similar, whereas the distribution of orders in the Prior set is different.
- The total orders per customer ranges from 4 to 100.
- Based on the plot of 'Orders VS Day of Week' we can map 0 and 1 to Saturday and Sunday respectively, on the assumption that most people buy groceries on weekends.
- The majority of orders are made during the daytime.
- Customers tend to order once a week, which is supported by the peaks at 7, 14, 21 and 30 in the 'Orders VS Days since prior order' graph.
- Based on the heatmap between 'Day of Week' and 'Hour of Day,' we can say that Saturday afternoons and Sunday mornings are prime times for orders.
```
orders.head(12)
orders.tail()
orders.info()
len(orders.order_id.unique())
len(orders.user_id.unique())
orders.eval_set.value_counts()
orders.order_number.describe().apply(lambda x: format(x, '.2f'))
order_number = orders.groupby('user_id')['order_number'].max()
order_number = order_number.value_counts()
fig, ax = plt.subplots(figsize=(15,8))
ax = sns.barplot(x = order_number.index, y = order_number.values, color = color[3])
ax.set_xlabel('Orders per customer')
ax.set_ylabel('Count')
ax.xaxis.set_tick_params(rotation=90, labelsize=10)
ax.set_title('Frequency of Total Orders by Customers')
fig.savefig('Frequency of Total Orders by Customers.png')
fig, ax = plt.subplots(figsize = (8,4))
ax = sns.kdeplot(orders.order_number[orders.eval_set == 'prior'], label = "Prior set", lw = 1)
ax = sns.kdeplot(orders.order_number[orders.eval_set == 'train'], label = "Train set", lw = 1)
ax = sns.kdeplot(orders.order_number[orders.eval_set == 'test'], label = "Test set", lw = 1)
ax.set_xlabel('Order Number')
ax.set_ylabel('Count')
ax.tick_params(axis = 'both', labelsize = 10)
ax.set_title('Distribution of Orders in Various Sets')
fig.savefig('Distribution of Orders in Various Sets.png')
plt.show()
fig, ax = plt.subplots(figsize = (5,3))
ax = sns.countplot(orders.order_dow)
ax.set_xlabel('Day of Week', size = 10)
ax.set_ylabel('Orders', size = 10)
ax.tick_params(axis = 'both', labelsize = 8)
ax.set_title('Total Orders per Day of Week')
fig.savefig('Total Orders per Day of Week.png')
plt.show()
temp_df = orders.groupby('order_dow')['user_id'].nunique()
fig, ax = plt.subplots(figsize = (5,3))
ax = sns.barplot(x = temp_df.index, y = temp_df.values)
ax.set_xlabel('Day of Week', size = 10)
ax.set_ylabel('Total Unique Users', size = 10)
ax.tick_params(axis = 'both', labelsize = 8)
ax.set_title('Total Unique Users per Day of Week')
fig.savefig('Total Unique Users per Day of Week.png')
plt.show()
fig, ax = plt.subplots(figsize = (10,5))
ax = sns.countplot(orders.order_hour_of_day, color = color[2])
ax.set_xlabel('Hour of Day', size = 10 )
ax.set_ylabel('Orders', size = 10)
ax.tick_params(axis = 'both', labelsize = 8)
ax.set_title('Total Orders per Hour of Day')
fig.savefig('Total Orders per Hour of Day.png')
plt.show()
fig, ax = plt.subplots(figsize = (10,5))
ax = sns.countplot(orders.days_since_prior_order, color = color[2])
ax.set_xlabel('Days since prior order', size = 10)
ax.set_ylabel('Orders', size = 10)
ax.tick_params(axis = 'both', labelsize = 8)
ax.set_title('Orders VS Days since prior order')
fig.savefig('Orders VS Days since prior order.png')
plt.show()
temp_df = orders.groupby(["order_dow", "order_hour_of_day"])["order_number"].aggregate("count").reset_index()
temp_df = temp_df.pivot('order_dow', 'order_hour_of_day', 'order_number')
temp_df.head()
fig, ax = plt.subplots(figsize=(7,3))
ax = sns.heatmap(temp_df, cmap="YlGnBu", linewidths=.5)
ax.set_title("Frequency of Day of week Vs Hour of day", size = 12)
ax.set_xlabel("Hour of Day", size = 10)
ax.set_ylabel("Day of Week", size = 10)
ax.tick_params(axis = 'both', labelsize = 8)
cbar = ax.collections[0].colorbar
cbar.ax.tick_params(labelsize=10)
fig = ax.get_figure()
fig.savefig("Frequency of Day of week Vs Hour of day.png")
plt.show()
```
### order_products_prior:
This file gives information about which products were ordered and the order in which they were added to the cart. It also tells us whether the product was reordered or not.
- This file contains information on a total of 3214874 orders, through which a total of 49677 products were ordered.
- From the 'Count VS Items in cart' plot, we can say that most people buy 1-15 items in an order, with a maximum of 145 items in a single order.
- The percentage of reordered items in this set is 58.97%.
```
order_products_prior.head(10)
order_products_prior.tail()
len(order_products_prior.order_id.unique())
len(order_products_prior.product_id.unique())
add_to_cart_order_prior = order_products_prior.groupby('order_id')['add_to_cart_order'].count()
add_to_cart_order_prior = add_to_cart_order_prior.value_counts()
add_to_cart_order_prior.head()
add_to_cart_order_prior.tail()
add_to_cart_order_prior.index.max()
fig, ax = plt.subplots(figsize = (15,8))
ax = sns.barplot(x = add_to_cart_order_prior.index, y = add_to_cart_order_prior.values, color = color[3])
ax.set_xlabel('Items in cart')
ax.set_ylabel('Count')
ax.xaxis.set_tick_params(rotation=90, labelsize = 9)
ax.set_title('Frequency of Items in Cart in Prior set', size = 15)
fig.savefig('Frequency of Items in Cart in Prior set.png')
fig, ax = plt.subplots(figsize=(3,3))
ax = sns.barplot(x = order_products_prior.reordered.value_counts().index,
y = order_products_prior.reordered.value_counts().values, color = color[3])
ax.set_xlabel('Reorder', size = 10)
ax.set_ylabel('Count', size = 10)
ax.tick_params(axis = 'both', labelsize = 8)
ax.ticklabel_format(style='plain', axis='y')
ax.set_title('Reorder Frequency in Prior Set')
fig.savefig('Reorder Frequency in Prior Set')
plt.show()
print('Percentage of reorder in prior set:',
format(order_products_prior[order_products_prior.reordered == 1].shape[0]*100/order_products_prior.shape[0], '.2f'))
```
### order_products_train:
This file records which products were in each train order, the sequence in which they were added to the cart, and whether each product was a reorder.
- The file covers a total of 131209 orders, through which 39123 distinct products were ordered.
- From the 'Count VS Items in cart' plot, most orders contain 1-15 items, with a maximum of 145 items in a single order.
- The percentage of reordered items in this set is 59.86%.
```
order_products_train.head(10)
order_products_train.tail()
len(order_products_train.order_id.unique())
len(order_products_train.product_id.unique())
add_to_cart_order_train = order_products_train.groupby('order_id')['add_to_cart_order'].count()  # fixed: group the train set, not the prior set
add_to_cart_order_train = add_to_cart_order_train.value_counts()
add_to_cart_order_train.head()
add_to_cart_order_train.tail()
add_to_cart_order_train.index.max()
fig, ax = plt.subplots(figsize = (15,8))
ax = sns.barplot(x = add_to_cart_order_train.index, y = add_to_cart_order_train.values, color = color[2])
ax.set_xlabel('Items in cart')
ax.set_ylabel('Count')
ax.xaxis.set_tick_params(rotation=90, labelsize = 8)
ax.set_title('Frequency of Items in Cart in Train set', size = 15)
fig.savefig('Frequency of Items in Cart in Train set.png')
fig, ax = plt.subplots(figsize=(3,3))
ax = sns.barplot(x = order_products_train.reordered.value_counts().index,
y = order_products_train.reordered.value_counts().values, color = color[2])
ax.set_xlabel('Reorder', size = 10)
ax.set_ylabel('Count', size = 10)
ax.tick_params(axis = 'both', labelsize = 8)
ax.set_title('Reorder Frequency in Train Set')
fig.savefig('Reorder Frequency in Train Set.png')
plt.show()
print('Percentage of reorder in train set:',
format(order_products_train[order_products_train.reordered == 1].shape[0]*100/order_products_train.shape[0], '.2f'))
```
### products:
This file lists all 49688 products together with the aisle and department each belongs to. The number of products varies across aisles and departments.
```
products.head(10)
products.tail()
len(products.product_name.unique())
len(products.aisle_id.unique())
len(products.department_id.unique())
temp_df = products.groupby('aisle_id')['product_id'].count()
fig, ax = plt.subplots(figsize = (15,6))
ax = sns.barplot(x = temp_df.index, y = temp_df.values, color = color[3])
ax.set_xlabel('Aisle Id')
ax.set_ylabel('Total products in aisle')
ax.xaxis.set_tick_params(rotation=90, labelsize = 7)
ax.set_title('Total Products in Aisle VS Aisle ID', size = 12)
fig.savefig('Total Products in Aisle VS Aisle ID.png')
temp_df = products.groupby('department_id')['product_id'].count()
fig, ax = plt.subplots(figsize = (8,5))
ax = sns.barplot(x = temp_df.index, y = temp_df.values, color = color[2])
ax.set_xlabel('Department Id')
ax.set_ylabel('Total products in department')
ax.xaxis.set_tick_params(rotation=90, labelsize = 9)
ax.set_title('Total Products in Department VS Department ID', size = 10)
fig.savefig('Total Products in Department VS Department ID.png')
temp_df = products.groupby('department_id')['aisle_id'].nunique()
fig, ax = plt.subplots(figsize = (8,5))
ax = sns.barplot(x = temp_df.index, y = temp_df.values)
ax.set_xlabel('Department Id')
ax.set_ylabel('Total Aisles in department')
ax.xaxis.set_tick_params(rotation=90, labelsize = 9)
ax.set_title('Total Aisles in Department VS Department ID', size = 10)
fig.savefig('Total Aisles in Department VS Department ID.png')
```
---
## Data Prep
### Dataset Cleaning
```
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
import seaborn as sns
%matplotlib inline
from time import time
from src.features import build_features as bf
from sklearn.preprocessing import MinMaxScaler
from sklearn.model_selection import GridSearchCV
from sklearn.metrics import make_scorer
from sklearn.ensemble import AdaBoostRegressor
from sklearn.ensemble import RandomForestRegressor
from sklearn.metrics import fbeta_score, accuracy_score
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import StandardScaler
from sklearn.preprocessing import OneHotEncoder
from sklearn.experimental import enable_iterative_imputer
from sklearn.impute import IterativeImputer, MissingIndicator
from sklearn.pipeline import FeatureUnion, make_pipeline
sns.set()
```
---
## Data Preprocessing
```
features_raw = pd.read_csv('../data/interim/features_raw.csv', index_col='Id')
test_raw = pd.read_csv('../data/raw/test.csv', index_col='Id')
target = pd.read_csv('../data/interim/target_raw.csv', index_col='Id', squeeze=True)
test = test_raw.copy()
features_raw.head()
features_raw.shape
df_zscore = pd.read_csv('../data/interim/df_zscore.csv', index_col='Id')
outlier_idx = df_zscore[(df_zscore >= 5).any(axis=1)].index
outlier_idx
```
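The `df_zscore.csv` file is produced upstream and not shown here; a hedged sketch of how such a per-column z-score table might be built (the column name and the threshold of 2 are illustrative only — the notebook itself flags rows at z >= 5):

```
import numpy as np
import pandas as pd

# Hypothetical: z-score each numeric column, then flag rows whose
# absolute z-score exceeds a threshold.
def zscore_table(df):
    num = df.select_dtypes(include=np.number)
    return (num - num.mean()) / num.std()

demo = pd.DataFrame({'LotArea': [8000, 8500, 9000, 9500, 10000,
                                 10500, 11000, 11500, 12000, 500000]})
z = zscore_table(demo)
outlier_rows = z[(z.abs() >= 2).any(axis=1)].index  # only the extreme row
```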
### Handle Outliers
```
df = features_raw.drop(index=outlier_idx)
target = target.drop(index=outlier_idx)
index = df.index
# Uncomment this line to save the DataFrames
# target.to_csv('../data/interim/target_no_outliers.csv')
df.shape
```
### Assess Missing Data
#### Assess Missing Data in Each Column
```
df.isna().sum().sort_values(ascending=False).head(10)
```
#### Visualize Missing Data by Column and Row
```
nan_count = df.isna().sum()
nan_count = nan_count[nan_count > 0]
nan_cols = df[nan_count.index].columns
(nan_count / df.shape[0]).sort_values(ascending=False)
# Investigate patterns in the amount of missing data in each column.
# plt.rcParams.update({'figure.dpi':100})
plt.figure(figsize=(9, 8))
ax = sns.histplot(nan_count, kde=False)
ax.set_title('Histogram of Missing Data by Column')
ax.set(xlabel='Total Missing or Unknown', ylabel='Total Occurrences')
plt.show()
# Uncomment this line to save the figure.
# plt.savefig('../reports/figures/MissingDatabyCol_Histogram.svg')
nan_rows = df.isna().sum(axis=1)
plt.figure(figsize=(9, 8))
ax = sns.histplot(nan_rows, kde=False)
ax.set_title('Histogram of Missing Data by Row')
ax.set(xlabel='Total Missing or Unknown', ylabel='Total Occurrences')
plt.show()
# Uncomment this line to save the figure.
# plt.savefig('../reports/figures/MissingDatabyRow_Histogram.svg')
```
#### Assessment Summary
There is a fair amount of missing data in this dataset. Four features in particular (PoolQC, MiscFeature, Alley, Fence) contain >50% missing or unknown values. For PoolQC, we may be able to infer whether or not the home has a pool by performing some feature engineering on the PoolArea feature. In addition, several features such as 'GarageArea' indicate the total square footage of a garage (if any), but our dataset does not explicitly indicate whether or not a particular home has a garage. We'll create features for these. Let's investigate these in turn.
```
def fill_categorical(val):
return 1 if val != 'NA' else 0
def fill_numerical(val):
return 1 if val > 0 else 0
na_cols = ['PoolArea', 'Fence', 'Alley', 'BsmtQual', 'BsmtCond', 'BsmtExposure',
'BsmtFinType1', 'BsmtFinType2', 'FireplaceQu']
df[na_cols] = df[na_cols].fillna('NA')
# Most homes don't have pools. Let's use the value of 'PoolArea' to assign 'NA' to 'PoolQC' if the area == 0
df.loc[df['PoolArea'] == 0, 'PoolQC'] = 'NA' # If no 'PoolArea', (0), then there is no pool
# Similarly, let's apply the same logic to other features where necessary:
df['HasFence'] = df['Fence'].apply(lambda x: fill_categorical(x))
df['HasAlley'] = df['Alley'].apply(lambda x: fill_categorical(x))
df['HasFireplace'] = df['FireplaceQu'].apply(lambda x: fill_categorical(x))
df['HasPool'] = df['PoolArea'].apply(lambda x: fill_numerical(x))
df['HasGarage'] = df['GarageArea'].apply(lambda x: fill_numerical(x))
df['HasBasement'] = df['TotalBsmtSF'].apply(lambda x: fill_numerical(x))
# -------------------------------------------------
# Apply above feature engineering steps on test set
# -------------------------------------------------
test[na_cols] = test[na_cols].fillna('NA')
test.loc[test['PoolArea'] == 0, 'PoolQC'] = 'NA' # If no 'PoolArea', (0), then there is no pool
test['HasFence'] = test['Fence'].apply(lambda x: fill_categorical(x))
test['HasAlley'] = test['Alley'].apply(lambda x: fill_categorical(x))
test['HasFireplace'] = test['FireplaceQu'].apply(lambda x: fill_categorical(x))
test['HasPool'] = test['PoolArea'].apply(lambda x: fill_numerical(x))
test['HasGarage'] = test['GarageArea'].apply(lambda x: fill_numerical(x))
test['HasBasement'] = test['TotalBsmtSF'].apply(lambda x: fill_numerical(x))
categorical_cols = df.select_dtypes(include=object).columns
numerical_cols = df.select_dtypes(include=np.number).columns
# Perform One-Hot Encoding on our Categorical Data
features_enc = df.copy()
features_onehot_enc = pd.get_dummies(data=features_enc, columns=categorical_cols, dummy_na=True)
# Uncomment this line to export DataFrame
# features_onehot_enc.to_csv('../data/interim/features_onehot_enc.csv')
# Print the number of features after one-hot encoding
encoded = list(features_onehot_enc.columns)
print(f'{len(encoded)} total features after one-hot encoding.')
# Uncomment the following line to see the encoded feature names
# print(encoded)
# -------------------------------------------------
# Apply above feature engineering steps on test set
# -------------------------------------------------
test_enc = test.copy()
test_onehot_enc = pd.get_dummies(data=test_enc, columns=categorical_cols, dummy_na=True)
# Uncomment this line to export DataFrame
# test_onehot_enc.to_csv('../data/interim/test_onehot_enc.csv')
imp = IterativeImputer(missing_values=np.nan, random_state=5, max_iter=20)
imputed_arr = imp.fit_transform(features_onehot_enc)
features_imputed = pd.DataFrame(imputed_arr, columns=features_onehot_enc.columns)
features_imputed.index = features_onehot_enc.index
# Uncomment this line to export DataFrame
# features_imputed.to_csv('../data/interim/features_imputed.csv')
# -------------------------------------------------
# Apply above imputation steps on test set
# -------------------------------------------------
imputed_arr_test = imp.fit_transform(test_onehot_enc)
test_imputed = pd.DataFrame(imputed_arr_test, columns=test_onehot_enc.columns)
test_imputed.index = test_onehot_enc.index
def align_dataframes(train_set, test_set):
if train_set.shape[1] > test_set.shape[1]:
cols = train_set.columns.difference(test_set.columns)
df = pd.DataFrame(0, index=train_set.index, columns=cols)
test_set[df.columns] = df
elif train_set.shape[1] < test_set.shape[1]:
cols = test_set.columns.difference(train_set.columns)
df = pd.DataFrame(0, index=test_set.index, columns=cols)
train_set[df.columns] = df
align_dataframes(features_imputed, test_imputed)
test_imputed = test_imputed.fillna(value=0)
# Uncomment this line to export DataFrame
# test_imputed.to_csv('../data/interim/test_imputed.csv')
```
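`align_dataframes` above pads whichever frame lacks dummy columns so train and test expose the same column set; the core idea in isolation (toy frames with hypothetical columns):

```
import pandas as pd

train = pd.DataFrame({'a': [1, 2], 'b': [3, 4]})
test = pd.DataFrame({'a': [5, 6]})

# Add any train-only columns to the test frame, filled with 0,
# so both frames end up with an identical column set.
for col in train.columns.difference(test.columns):
    test[col] = 0
```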
### Feature Transformation
#### Transforming Skewed Continuous Features
A dataset may sometimes contain at least one feature whose values tend to lie near a single number, but will also have a non-trivial number of vastly larger or smaller values than that single number. Algorithms can be sensitive to such distributions of values and can underperform if the range is not properly normalized. We'll need to check the following continuous data features for 'skew'.
- LotFrontage
- LotArea
- MasVnrArea
- BsmtFinSF1
- BsmtFinSF2
- TotalBsmtSF
- 1stFlrSF
- 2ndFlrSF
- LowQualFinSF
- GrLivArea
- GarageArea
- WoodDeckSF
- OpenPorchSF
- EnclosedPorch
- 3SsnPorch
- ScreenPorch
- PoolArea
- MiscVal
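The transformation applied to these columns is log(x + 1), which maps zero to zero and compresses the long right tail; a tiny illustration with hypothetical porch areas:

```
import numpy as np

x = np.array([0.0, 0.0, 0.0, 10.0, 2000.0])  # mostly zero, one extreme value
y = np.log(x + 1)  # zeros stay zero; the tail is pulled in sharply
```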
```
continuous_cols = ['LotFrontage', 'LotArea', 'MasVnrArea', 'BsmtFinSF1', 'BsmtFinSF2', 'TotalBsmtSF', '1stFlrSF',
'2ndFlrSF', 'LowQualFinSF', 'GrLivArea', 'GarageArea', 'WoodDeckSF', 'OpenPorchSF',
'EnclosedPorch', '3SsnPorch', 'ScreenPorch', 'PoolArea', 'MiscVal']
skewed = ['ScreenPorch', 'PoolArea', 'LotFrontage', '3SsnPorch', 'LowQualFinSF']
fig = plt.figure(figsize = (16,10));
# Skewed feature plotting
for i, feature in enumerate(skewed):
ax = fig.add_subplot(2, 3, i+1)
sns.histplot(features_imputed[feature], bins=20, color='#00A0A0')
ax.set_title("'%s' Feature Distribution"%(feature), fontsize = 14)
ax.set_xlabel("Value")
ax.set_ylabel("Number of Records")
ax.set_ylim((0, 1600))
ax.set_yticks([0, 400, 800, 1200, 1600])
ax.set_yticklabels([0, 400, 800, 1200, ">1600"])
# Plot aesthetics
fig.suptitle("Skewed Distributions of Continuous Data Features", \
fontsize = 16, y = 1.03)
fig.tight_layout()
# Uncomment this line to save the figure.
# plt.savefig('../reports/figures/Skewed_Distributions.svg')
features_log_xformed = pd.DataFrame(data = features_imputed)
# Since the logarithm of 0 is undefined, translate values a small amount to apply the logarithm successfully
features_log_xformed[continuous_cols] = features_imputed[continuous_cols].apply(lambda x: np.log(x + 1))
fig = plt.figure(figsize = (15,10));
# Skewed feature plotting
for i, feature in enumerate(skewed):
ax = fig.add_subplot(2, 3, i+1)
sns.histplot(features_log_xformed[feature], bins=20, color='#00A0A0')
ax.set_title("'%s' Feature Distribution"%(feature), fontsize = 14)
ax.set_xlabel("Value")
ax.set_ylabel("Number of Records")
ax.set_ylim((0, 1600))
ax.set_yticks([0, 400, 800, 1200, 1600])
ax.set_yticklabels([0, 400, 800, 1200, ">1600"])
# Plot aesthetics
fig.suptitle("Log-Transformed Distributions of Continuous Data Features", \
fontsize = 16, y = 1.03)
fig.tight_layout()
# Uncomment this line to save the figure.
# plt.savefig('../reports/figures/Log_Xformed_Distributions.svg')
# -------------------------------------------------
# Apply above log transformation steps on test set
# -------------------------------------------------
test_log_xformed = pd.DataFrame(data = test_imputed)
# Since the logarithm of 0 is undefined, translate values a small amount to apply the logarithm successfully
test_log_xformed[continuous_cols] = test_imputed[continuous_cols].apply(lambda x: np.log(x + 1))
```
We also need to perform a log-transformation on our target variable 'SalePrice' to remove skew.
```
target_log_xformed = target.transform(np.log)
# Uncomment this line to export DataFrame
# target_log_xformed.to_csv('../data/interim/target_log_xformed.csv')
```
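Because the model will be trained on the log of 'SalePrice', any predictions must be mapped back with the inverse transform before reporting; a quick sketch with hypothetical prices:

```
import numpy as np

sale_price = np.array([100000.0, 250000.0, 400000.0])
log_target = np.log(sale_price)   # forward transform, as above
recovered = np.exp(log_target)    # inverse transform for predictions
```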
#### Feature Scaling
##### Normalizing Numerical Features
In addition to performing transformations on features that are highly skewed, it is often good practice to perform some type of scaling on numerical features. Applying a scaling to the data does not change the shape of each feature's distribution. Normalization does, however, ensure that each feature is treated equally when applying supervised learners.
```
# Initialize scaler, then apply it to the features
scaler = StandardScaler()
numerical = features_log_xformed.select_dtypes(include=np.number).columns
features_scaled = pd.DataFrame(data = features_log_xformed)
features_scaled[numerical] = scaler.fit_transform(features_log_xformed[numerical])
features_final = features_scaled.copy()
# Uncomment this line to export DataFrame
# features_scaled.to_csv('../data/interim/features_scaled.csv')
# Uncomment this line to export DataFrame
# features_final.to_csv('../data/processed/features_final.csv')
# Show an example of a record with scaling applied
features_final.head()
# -------------------------------------------------
# Apply above feature scaling steps on test set
# -------------------------------------------------
# Reuse the scaler fitted on the training data; fitting it again on the
# test set would leak test-set statistics into the scaling.
test_scaled = pd.DataFrame(data = test_log_xformed)
test_scaled[numerical] = scaler.transform(test_log_xformed[numerical])
test_final = test_scaled.copy()
# Uncomment this line to export DataFrame
# test_scaled.to_csv('../data/interim/test_scaled.csv')
# Uncomment this line to export DataFrame
# test_final.to_csv('../data/processed/test_final.csv')
```
```
import scipy.io, os
import numpy as np
import matplotlib.pyplot as plt
from netCDF4 import Dataset
from fastjmd95 import rho
from matplotlib.colors import ListedColormap
import seaborn as sns; sns.set()
import seawater as sw
from mpl_toolkits.axes_grid1.inset_locator import inset_axes
import matplotlib as mpl
colours=sns.color_palette('colorblind', 10)
my_cmap = ListedColormap(colours)
color_list=colours
```
## Code to plot the meridional overturning and density structure from the North Atlantic from Sonnewald and Lguensat (2021).
Data used are from the ECCOv4 State Estimate available: https://ecco-group.org/products-ECCO-V4r4.html
Note: Data are generated for the North Atlantic, also including the Southern Ocean and Arctic basin. Data for the Pacific and Indian Oceans are also generated, and the code below can be adjusted to plot these as well.
```
gridInfo=np.load('latLonDepthLevelECCOv4.npz')
zLev=gridInfo['depthLevel'][:]
depthPlot=zLev.cumsum()
lat=gridInfo['lat'][:]
lon=gridInfo['lon'][:]
zMat=np.repeat(zLev,720*360).reshape((50,360,720))
dvx=np.rot90(0.5*111000*np.cos(lat*(np.pi/180)),1)
masks=np.load('regimeMasks.npz')
maskMD=masks['maskMD']
maskSSV=masks['maskSSV']
maskNSV=masks['maskNSV']
maskTR=masks['maskTR']
maskSO=masks['maskSO']
maskNL=masks['maskNL']
def getData(NR):
arr = os.listdir('/home/maike/Documents/ECCO_BV/NVELSTAR/.')
f =Dataset('/home/maike/Documents/ECCO_BV/NVELSTAR/'+arr[NR])
nvelS =f.variables['NVELSTAR'][:]
arr = os.listdir('/home/maike/Documents/ECCO_BV/NVELMASS/.')
f =Dataset('/home/maike/Documents/ECCO_BV/NVELMASS/'+arr[NR])
nvelM =f.variables['NVELMASS'][:]
return(nvelS+nvelM)
```
## Creating the basin masks
```
nvel= getData(1) #To get the shape
globalMask=np.ones(nvel[0].shape)
maskArea=np.zeros(nvel[0].shape)*np.nan
maskArea[:,65:360,0:222]=1
maskArea[:,65:360,500:720]=1
maskArea[:,310:360,:]=np.nan
maskArea[:,210:350,160:250]=np.nan
maskArea[:,0:140,500:650]=np.nan
maskArea[:,0:165,500:620]=np.nan
maskArea[:,0:255,500:560]=np.nan
maskArea[:,0:210,500:570]=np.nan
maskArea[:,0:185,500:590]=np.nan
pacificMask=maskArea
maskArea=np.zeros(nvel[0].shape)*np.nan
maskArea[:,:,221:400]=1
maskArea[:,200:360,160:400]=1
maskArea[:,0:65,:]=1
maskArea[:,310:360,:]=1
maskArea[:,199:215,160:180]=np.nan
maskArea[:,199:210,160:190]=np.nan
atlanticMask=maskArea
maskArea=np.ones(nvel[0].shape)
indA=np.where(atlanticMask==1)
indP=np.where(pacificMask==1)
maskArea[indA]=np.nan
maskArea[indP]=np.nan
maskArea[:,100:250,100:250]=np.nan
indianMask=maskArea
plt.figure()
plt.imshow(np.flipud(globalMask[0]*nvel[0,0]))
plt.figure()
plt.imshow(np.flipud(atlanticMask[0]*nvel[0,0]))
plt.figure()
plt.imshow(np.flipud(pacificMask[0]*nvel[0,0]))
plt.figure()
plt.imshow(np.flipud(indianMask[0]*nvel[0,0]))
```
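Note that the regime and basin masks use 1/NaN rather than 1/0, so multiplying a field by a mask blanks out excluded cells and `np.nansum` then ignores them; a toy check:

```
import numpy as np

field = np.array([[1.0, 2.0], [3.0, 4.0]])
mask = np.array([[1.0, np.nan], [1.0, np.nan]])  # keep the first column only
total = np.nansum(field * mask)  # NaN cells drop out of the sum
```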
## Calculating the streamfunction
The overall meridional overturning ($\Psi_{z\theta}$) from Fig. 3 in Sonnewald and Lguensat (2021) is defined as:
$$\Psi_{z\theta}(\theta,z)=- \int^z_{-H} \int_{\phi_2}^{\phi_1} v(\phi,\theta,z')d\phi dz',$$
where $z$ is the relative level depth and $v$ is the meridional (north-south) component of velocity. For the regimes, the relevant velocity fields were then used. A positive $\Psi_{z\theta}$ signifies a clockwise circulation, while a negative $\Psi_{z\theta}$ signifies an anticlockwise circulation.
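On the discrete model grid, `psiZ` below evaluates this as a zonal transport sum per level followed by a bottom-up accumulation. The same accumulation on a hypothetical 3-level, 2-point toy grid:

```
import numpy as np

v = np.array([[1.0, 1.0],     # level 0 (surface), m/s
              [0.5, 0.5],     # level 1
              [-1.5, -1.5]])  # level 2 (bottom)
dx = np.array([1.0, 1.0])          # zonal cell widths, m
dz = np.array([10.0, 10.0, 10.0])  # level thicknesses, m

ntrans = (v * dx).sum(axis=1)        # zonal transport integral per level
psi = np.zeros_like(ntrans)
for k in range(len(dz) - 2, -1, -1):  # accumulate upward from the bottom
    psi[k] = psi[k + 1] + ntrans[k + 1] * dz[k + 1]
```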
```
def psiZ(NVEL_IN, mask):
'''Function to calculate overturning in depth space as described in Sonnewald and Lguensat (2021).'''
ntrans=np.zeros(NVEL_IN[:,:,0].shape);
gmoc=np.zeros(NVEL_IN[:,:,0].shape);
NVEL=NVEL_IN*mask
# zonal transport integral
for zz in np.arange(0,50):
ntrans[zz,:]=np.nansum(NVEL[zz,:,:]*dvx,axis=1);
for zz in np.flipud(np.arange(0,49)):
gmoc[zz,:]=gmoc[zz+1,:]+ntrans[zz+1,:]*zLev[zz+1];
gmoc=gmoc/1e6;
return(gmoc)
def psiMaskedCalc(mask):
'''Calculating the overturning in depth space for the different regimes, as plotted in Fig. 3 in Sonnewald and Lguensat (2021).'''
yrs, months=20,12
PSI_all = np.zeros((yrs*months, 50, 360))*np.nan
PSI_NL = np.zeros((yrs*months, 50, 360))*np.nan
PSI_SO = np.zeros((yrs*months, 50, 360))*np.nan
PSI_SSV = np.zeros((yrs*months, 50, 360))*np.nan
PSI_NSV = np.zeros((yrs*months, 50, 360))*np.nan
PSI_MD = np.zeros((yrs*months, 50, 360))*np.nan
PSI_TR = np.zeros((yrs*months, 50, 360))*np.nan
ITTER=0
for NR in np.arange(0,yrs):
nvel= getData(NR)
# print('Got data')
for MM in np.arange(0,months):
PSI_all[ITTER]=psiZ(nvel[MM], np.ones(maskSO.shape)*mask)
PSI_NL[ITTER]=psiZ(nvel[MM], maskNL*mask)
PSI_SO[ITTER]=psiZ(nvel[MM], maskSO*mask)
PSI_SSV[ITTER]=psiZ(nvel[MM], maskSSV*mask)
PSI_NSV[ITTER]=psiZ(nvel[MM], maskNSV*mask)
PSI_MD[ITTER]=psiZ(nvel[MM], maskMD*mask)
PSI_TR[ITTER]=psiZ(nvel[MM], maskTR*mask)
ITTER+=1
return PSI_all, PSI_NL, PSI_SO, PSI_SSV, PSI_NSV, PSI_MD, PSI_TR
PSI_all_A, PSI_NL_A, PSI_SO_A, PSI_SSV_A, PSI_NSV_A, PSI_MD_A, PSI_TR_A = psiMaskedCalc(atlanticMask)
PSI_all_P, PSI_NL_P, PSI_SO_P, PSI_SSV_P, PSI_NSV_P, PSI_MD_P, PSI_TR_P = psiMaskedCalc(pacificMask)
PSI_all_I, PSI_NL_I, PSI_SO_I, PSI_SSV_I, PSI_NSV_I, PSI_MD_I, PSI_TR_I = psiMaskedCalc(indianMask)
PSI_all_G, PSI_NL_G, PSI_SO_G, PSI_SSV_G, PSI_NSV_G, PSI_MD_G, PSI_TR_G = psiMaskedCalc(globalMask)
#Save the data
np.savez('PSI_global', PSI_all_G=PSI_all_G, PSI_NL_G=PSI_NL_G, PSI_SO_G=PSI_SO_G, PSI_SSV_G=PSI_SSV_G, PSI_NSV_G=PSI_NSV_G, PSI_MD_G=PSI_MD_G, PSI_TR_G=PSI_TR_G)
np.savez('PSI_atlantic', PSI_all_A=PSI_all_A, PSI_NL_A=PSI_NL_A, PSI_SO_A=PSI_SO_A, PSI_SSV_A=PSI_SSV_A, PSI_NSV_A=PSI_NSV_A, PSI_MD_A=PSI_MD_A, PSI_TR_A=PSI_TR_A)
np.savez('PSI_pacific', PSI_all_P=PSI_all_P, PSI_NL_P=PSI_NL_P, PSI_SO_P=PSI_SO_P, PSI_SSV_P=PSI_SSV_P, PSI_NSV_P=PSI_NSV_P, PSI_MD_P=PSI_MD_P, PSI_TR_P=PSI_TR_P)
np.savez('PSI_indian', PSI_all_I=PSI_all_I, PSI_NL_I=PSI_NL_I, PSI_SO_I=PSI_SO_I, PSI_SSV_I=PSI_SSV_I, PSI_NSV_I=PSI_NSV_I, PSI_MD_I=PSI_MD_I, PSI_TR_I=PSI_TR_I)
```
## Calculate the density in $\sigma_2$
```
def getDataTS(NR):
'''Retrieve the T and S data. Data from the ECCOv4 state estimate.'''
arr = os.listdir('/home/maike/Documents/ECCO_BV/THETA/.')
f =Dataset('/home/maike/Documents/ECCO_BV/THETA/'+arr[NR])
T =f.variables['THETA'][:]
arr = os.listdir('/home/maike/Documents/ECCO_BV/SALT/.')
f =Dataset('/home/maike/Documents/ECCO_BV/SALT/'+arr[NR])
S =f.variables['SALT'][:]
return(T, S)
dens=np.zeros((50,360,720))
ITTER=0  # months accumulated so far (incremented once per month below)
yrs=20
months=12
for NR in np.arange(0,yrs):
T,S = getDataTS(NR)
print('Got data', NR)
#Tin=sw.eos80.temp(S, T, -np.cumsum(zMat, axis=0), pr=np.cumsum(zMat, axis=0))
for MM in np.arange(0,months):
dens = dens+rho(S[MM], T[MM], 2000) - 1000
ITTER+=1
dens=dens/ITTER
#Save the density data.
np.save('density20yr', np.array(dens))
```
# Finally, we plot the data.
The plot is a composite of different subplots.
```
levs=[32,33,34, 34.5, 35, 35.5,36,36.5,37,37.25,37.5,37.75,38]
cols=plt.cm.viridis([300,250, 200,150, 125, 100, 50,30, 10,15,10,9,1])
Land=np.ones(np.nansum(PSI_all_A, axis=0).shape)*np.nan
Land[np.nansum(PSI_all_A, axis=0)==0.0]=0
land3D=np.ones(dens.shape)
land3D[dens==0]=np.nan
def zPlotSurf(ax, data,zMin, zMax,label,mm,latMin,latMax,RGB,Ticks,saveName='test'):
land=np.ones(np.nanmean(data, axis=0).shape)*np.nan
land[np.nansum(data, axis=0)==0.0]=0
n=50
levels = np.linspace(-20, 20, n+1)
ax.contourf(lat[0,latMin:latMax],-depthPlot[zMin:zMax],-np.nanmean(data, axis=0)[zMin:zMax,latMin:latMax], levels=np.linspace(-20, 20, n+1),cmap=plt.cm.seismic, extend='both')
n2=30
densityPlot=np.nanmean((dens*land3D*mm), axis=2)
assert(len(levs)==len(cols))
CS=ax.contour(lat[0,latMin:latMax],-depthPlot[zMin:zMax],densityPlot[zMin:zMax,latMin:latMax],
levels=levs,
linewidths=3,colors=cols, extend='both')
ax.tick_params(axis='y', labelsize=20)
if Ticks == 0:
ax.set_xticklabels( () )
elif Ticks == 1:
ax.set_xticklabels( () )
ax.set_yticklabels( () )
ax.contourf(lat[0,latMin:latMax],-depthPlot[zMin:zMax],land[zMin:zMax,latMin:latMax], 1,cmap=plt.cm.Set2)
ax.contourf(lat[0,latMin:latMax],-depthPlot[zMin:zMax],Land[zMin:zMax,latMin:latMax], 50,cmap=plt.cm.bone)
yL=ax.get_ylim()
xL=ax.get_xlim()
plt.text(xL[0]+0.02*np.ptp(xL), yL[0]+0.4*np.ptp(yL), label, fontsize=20, size=30,
weight='bold', bbox={'facecolor':'white', 'alpha':0.7}, va='bottom')
def zPlotDepth(ax, data,zMin, zMax,label,mm,latMin,latMax,RGB,Ticks,saveName='test'):
land=np.ones(np.nanmean(data, axis=0).shape)*np.nan
land[np.nansum(data, axis=0)==0.0]=0
n=50
levels = np.linspace(-20, 20, n+1)
ax.contourf(lat[0,latMin:latMax],-depthPlot[zMin:zMax],-np.nanmean(data, axis=0)[zMin:zMax,latMin:latMax], levels=np.linspace(-20, 20, n+1),cmap=plt.cm.seismic, extend='both')
n2=30
densityPlot=np.nanmean((dens*land3D*mm), axis=2)
ax.contour(lat[0,latMin:latMax],-depthPlot[zMin:zMax],densityPlot[zMin:zMax,latMin:latMax], colors=cols,
levels=levs,
linewidths=3, extend='both')
if Ticks == 0:
ax.tick_params(axis='y', labelsize=20)
#ax.set_xticklabels( () )
elif Ticks== 1:
#ax.set_xticklabels( () )
ax.set_yticklabels( () )
plt.tick_params(axis='both', labelsize=20)
#plt.clim(cmin, cmax)
ax.contourf(lat[0,latMin:latMax],-depthPlot[zMin:zMax],land[zMin:zMax,latMin:latMax], 1,cmap=plt.cm.Set2)
ax.contourf(lat[0,latMin:latMax],-depthPlot[zMin:zMax],Land[zMin:zMax,latMin:latMax], 50,cmap=plt.cm.bone)
yL=ax.get_ylim()
xL=ax.get_xlim()
plt.text(xL[0]+0.03*np.ptp(xL), yL[0]+0.03*np.ptp(yL), label, fontsize=20, size=30,
weight='bold', bbox={'facecolor':RGB, 'alpha':1}, va='bottom')
# Set general figure options
# figure layout
xs = 15.5 # figure width in inches
nx = 2 # number of axes in x dimension
ny = 3 # number of sub-figures in y dimension (each sub-figure has two axes)
nya = 2 # number of axes per sub-figure
idy = [2.0, 1.0] # size of the figures in the y dimension
xm = [0.07, 0.07,0.9, 0.07] # x margins of the figure (left to right)
ym = [1.5] + ny*[0.07, 0.1] + [0.3] # y margins of the figure (bottom to top)
# pre-calculate some things
xcm = np.cumsum(xm) # cumulative margins
ycm = np.cumsum(ym) # cumulative margins
idx = (xs - np.sum(xm))/nx
idy_off = [0] + idy
ys = np.sum(idy)*ny + np.sum(ym) # size of figure in y dimension
# make the figure!
fig = plt.figure(figsize=(xs, ys))
# loop through sub-figures
ix,iy=0,0
#print(loc[0], loc[1], loc[0] + loc[2], loc[1] + loc[3])
# create the axis
#ax = plt.axes(loc)
for iys in range(nya):
# (bottom left corner x, bottom left corner y, width, height)
loc = ((xcm[ix] + (ix*idx))/xs,
(ycm[nya*iy + iys] + np.sum(idy)*iy+ idy_off[iys])/ys,
idx/xs,
idy[iys]/ys)
#print(loc[0], loc[1], loc[0] + loc[2], loc[1] + loc[3])
# create the axis
ax = plt.axes(loc)
# split between your two figure types
if iys == 0:
zPlotDepth(ax, PSI_TR_A,1,50,'TR', maskTR,200, 310, color_list[1],'')
# if not the bottom figure remove x ticks
if iy > 0:
ax.set_xticks([])
else:
xticks = ax.get_xticks()
ax.set_xticklabels(['{:0.0f}$^\circ$N'.format(xtick) for xtick in xticks])
elif iys == 1:
zPlotSurf(ax, PSI_TR_A,0,10,'', maskTR,200, 310, color_list[1],'')
# remove x ticks
ax.set_xticks([])
ix,iy=0,1
#print(loc[0], loc[1], loc[0] + loc[2], loc[1] + loc[3])
# create the axis
#ax = plt.axes(loc)
for iys in range(nya):
# (bottom left corner x, bottom left corner y, width, height)
loc = ((xcm[ix] + (ix*idx))/xs,
(ycm[nya*iy + iys] + np.sum(idy)*iy+ idy_off[iys])/ys,
idx/xs,
idy[iys]/ys)
#print(loc[0], loc[1], loc[0] + loc[2], loc[1] + loc[3])
# create the axis
ax = plt.axes(loc)
# split between your two figure types
if iys == 0:
zPlotDepth(ax, PSI_NL_A,1,50,'NL', maskNL,200, 310, color_list[-1],'')
# if not the bottom figure remove x ticks
if iy > 0:
ax.set_xticks([])
elif iys == 1:
zPlotSurf(ax, PSI_NL_A,0,10,'', maskNL,200, 310, color_list[4],'')
# remove x ticks
ax.set_xticks([])
############### n-SV
ix,iy=0,2
loc = ((xcm[ix] + (ix*idx))/xs,
(ycm[nya*iy + iys] + np.sum(idy)*iy+ idy_off[iys])/ys,
idx/xs,
idy[iys]/ys)
#print(loc[0], loc[1], loc[0] + loc[2], loc[1] + loc[3])
# create the axis
#ax = plt.axes(loc)
for iys in range(nya):
# (bottom left corner x, bottom left corner y, width, height)
loc = ((xcm[ix] + (ix*idx))/xs,
(ycm[nya*iy + iys] + np.sum(idy)*iy+ idy_off[iys])/ys,
idx/xs,
idy[iys]/ys)
#print(loc[0], loc[1], loc[0] + loc[2], loc[1] + loc[3])
# create the axis
ax = plt.axes(loc)
# split between your two figure types
if iys == 0:
zPlotDepth(ax, PSI_NSV_A,1,50,'N-SV', maskNSV,200, 310, color_list[4],'')
# if not the bottom figure remove x ticks
if iy > 0:
ax.set_xticks([])
elif iys == 1:
zPlotSurf(ax, PSI_NSV_A,0,10,'', maskNSV,200, 310, color_list[-1],'')
# remove x ticks
ax.set_xticks([])
#
#_______________________________________________________________________
# S-SV
ix,iy=1,2
loc = ((xcm[ix] + (ix*idx))/xs,
(ycm[nya*iy + iys] + np.sum(idy)*iy+ idy_off[iys])/ys,
idx/xs,
idy[iys]/ys)
#print(loc[0], loc[1], loc[0] + loc[2], loc[1] + loc[3])
# create the axis
# ax = plt.axes(loc)
for iys in range(nya):
# (bottom left corner x, bottom left corner y, width, height)
loc = ((xcm[ix] + (ix*idx))/xs,
(ycm[nya*iy + iys] + np.sum(idy)*iy+ idy_off[iys])/ys,
idx/xs,
idy[iys]/ys)
#print(loc[0], loc[1], loc[0] + loc[2], loc[1] + loc[3])
# create the axis
ax = plt.axes(loc)
# split between your two figure types
if iys == 0:
zPlotDepth(ax, PSI_SSV_A,1,50,'S-SV', maskSSV,200, 310, color_list[2],1,'')
# if not the bottom figure remove x ticks
if iy > 0:
ax.set_xticks([])
elif iys == 1:
zPlotSurf(ax, PSI_SSV_A,0,10,'', maskSSV,200, 310, color_list[-3],1,'')
# remove x ticks
ax.set_xticks([])
#%%%%%%%%%%%%%%%%%%%%%%%%% SO
ix,iy=1,1
loc = ((xcm[ix] + (ix*idx))/xs,
(ycm[nya*iy + iys] + np.sum(idy)*iy+ idy_off[iys])/ys,
idx/xs,
idy[iys]/ys)
#print(loc[0], loc[1], loc[0] + loc[2], loc[1] + loc[3])
# create the axis
#ax = plt.axes(loc)
for iys in range(nya):
# (bottom left corner x, bottom left corner y, width, height)
loc = ((xcm[ix] + (ix*idx))/xs,
(ycm[nya*iy + iys] + np.sum(idy)*iy+ idy_off[iys])/ys,
idx/xs,
idy[iys]/ys)
#print(loc[0], loc[1], loc[0] + loc[2], loc[1] + loc[3])
# create the axis
ax = plt.axes(loc)
# split between your two figure types
if iys == 0:
zPlotDepth(ax, PSI_SO_A,1,50,'SO', maskSO,200, 310, color_list[-3],1,'')
# if not the bottom figure remove x ticks
if iy > 0:
ax.set_xticks([])
elif iys == 1:
zPlotSurf(ax, PSI_SO_A,0,10,'', maskSO,200, 310, color_list[-3],1,'')
# remove x ticks
ax.set_xticks([])
#%%%%%%%MD
ix,iy=1,0
loc = ((xcm[ix] + (ix*idx))/xs,
(ycm[nya*iy + iys] + np.sum(idy)*iy+ idy_off[iys])/ys,
idx/xs,
idy[iys]/ys)
#print(loc[0], loc[1], loc[0] + loc[2], loc[1] + loc[3])
# create the axis
#ax = plt.axes(loc)
for iys in range(nya):
# (bottom left corner x, bottom left corner y, width, height)
loc = ((xcm[ix] + (ix*idx))/xs,
(ycm[nya*iy + iys] + np.sum(idy)*iy+ idy_off[iys])/ys,
idx/xs,
idy[iys]/ys)
#print(loc[0], loc[1], loc[0] + loc[2], loc[1] + loc[3])
# create the axis
ax = plt.axes(loc)
# split between your two figure types
if iys == 0:
zPlotDepth(ax, PSI_MD_A,1,50,'MD', maskMD,200, 310, color_list[0],1,'')
# if not the bottom figure remove x ticks
if iy > 0:
ax.set_xticks([])
else:
xticks = ax.get_xticks()
ax.set_xticklabels(['{:0.0f}$^\circ$N'.format(xtick) for xtick in xticks])
elif iys == 1:
zPlotSurf(ax, PSI_MD_A,0,10,'', maskMD,200, 310, color_list[-3],1,'')
# remove x ticks
ax.set_xticks([])
cmap = plt.get_cmap('viridis')
cmap = mpl.colors.ListedColormap(cols)
ncol = len(levs)
axes = plt.axes([(xcm[0])/(xs), (ym[0]-0.6)/ys, (2*idx + xm[1])/(xs*2), (0.2)/ys])
cb = fig.colorbar(plt.cm.ScalarMappable(norm=mpl.colors.Normalize(-0.5, ncol - 0.5), cmap=cmap),
cax=axes, orientation='horizontal')
cb.ax.set_xticks(np.arange(ncol))
cb.ax.set_xticklabels(['{:0.2f}'.format(lev) for lev in levs])
cb.ax.tick_params(labelsize=20)
cb.set_label(label=r'Density, $\sigma_2$',weight='bold', fontsize=20)
cmap = plt.get_cmap('seismic')
ncol = len(cols)
axes = plt.axes([(xcm[2]+2*idx)/(xs*2), (ym[0]-0.6)/ys, (2*idx+xm[3])/(xs*2), (0.2)/ys])
cb = fig.colorbar(plt.cm.ScalarMappable(norm=mpl.colors.Normalize(-20,20), cmap=cmap),
cax=axes, label='title', orientation='horizontal', extend='both',format='%.0f',
boundaries=np.linspace(-20, 20, 41))
cb.ax.tick_params(labelsize=20)
cb.set_label(label=r'Sv ($10^{6}m^{3}s^{-1}$)',weight='bold', fontsize=20)
# save as a png
#fig.savefig('psiRho_NAtl_sigma2.png', dpi=200, bbox_inches='tight')
```
```
import pandas as pd
import numpy as np
from sklearn.decomposition import PCA,TruncatedSVD,NMF
from sklearn.preprocessing import Normalizer
import argparse
import time
import pickle as pkl
def year_binner(year,val=10):
return year - year%val
def dim_reduction(df,rows):
df_svd = TruncatedSVD(n_components=args.dims, n_iter=10, random_state=args.seed)  # use the --dims argument instead of a hard-coded 300
df_reduced = df_svd.fit_transform(df)  # fit once, instead of fitting separately for the print
print(f'Explained variance ratio {df_svd.explained_variance_ratio_.sum():2.3f}')
#df_list=df_svd.explained_variance_ratio_
df_reduced = Normalizer(copy=False).fit_transform(df_reduced)
df_reduced=pd.DataFrame(df_reduced,index=rows)
#df_reduced.reset_index(inplace=True)
if args.temporal!=0:
df_reduced.index = pd.MultiIndex.from_tuples(df_reduced.index, names=['common', 'time'])
return df_reduced
parser = argparse.ArgumentParser(description='Gather data necessary for performing Regression')
parser.add_argument('--inputdir',type=str,
help='Provide directory that has the files with the fivegram counts')
parser.add_argument('--outputdir',type=str,
help='Provide directory in that the output files should be stored')
parser.add_argument('--temporal', type=int, default=0,
help='Value to bin the temporal information: 0 (remove temporal information), 1 (no binning), 10 (binning to decades), 20 (binning each 20 years) or 50 (binning each 50 years)')
parser.add_argument('--contextual', action='store_true',
help='Is the model contextual')
parser.add_argument('--cutoff', type=int, default=50,
help='Cut-off frequency for each compound per time period : none (0), 20, 50 and 100')
parser.add_argument('--seed', type=int, default=1991,
help='random seed')
parser.add_argument('--storedf', action='store_true',
help='Should the embeddings be saved')
parser.add_argument('--dims', type=int, default=300,
help='Desired number of reduced dimensions')
parser.add_argument('--input_format',type=str,default='csv',choices=['csv','pkl'],
help='In what format are the input files : csv or pkl')
parser.add_argument('--save_format', type=str,default='pkl',choices=['pkl','csv'],
help='In what format should the reduced datasets be saved : csv or pkl')
args = parser.parse_args('--inputdir ../Compounding/coha_compounds/ --outputdir ../Compounding/coha_compounds/ --cutoff 10 --storedf --input_format csv --save_format csv'.split())
print(f'Cutoff: {args.cutoff}')
print(f'Time span: {args.temporal}')
print(f'Dimensionality: {args.dims}')
print("Creating dense embeddings")
if args.contextual:
print("CompoundCentric Model")
print("Loading the constituent and compound vector datasets")
if args.input_format=="csv":
compounds=pd.read_csv(args.inputdir+"/compounds.csv",sep="\t")
elif args.input_format=="pkl":
compounds=pd.read_pickle(args.inputdir+"/compounds.pkl")
compounds.reset_index(inplace=True)
compounds.year=compounds.year.astype("int32")
compounds=compounds.query('1800 <= year <= 2010').copy()
compounds['common']=compounds['modifier']+" "+compounds['head']
#head_list_reduced=compounds['head'].unique().tolist()
#modifier_list_reduced=compounds['modifier'].unique().tolist()
if args.temporal==0:
print('No temporal information is stored')
compounds=compounds.groupby(['common','context'])['count'].sum().to_frame()
compounds.reset_index(inplace=True)
compounds=compounds.loc[compounds.groupby(['common'])['count'].transform('sum').gt(args.cutoff)]
compounds=compounds.groupby(['common','context'])['count'].sum()
else:
compounds['time']=year_binner(compounds['year'].values,args.temporal)
compounds=compounds.groupby(['common','context','time'])['count'].sum().to_frame()
compounds.reset_index(inplace=True)
compounds=compounds.loc[compounds.groupby(['common','time'])['count'].transform('sum').gt(args.cutoff)]
compounds=compounds.groupby(['common','time','context'])['count'].sum()
if args.input_format=="csv":
modifiers=pd.read_csv(args.inputdir+"/modifiers.csv",sep="\t")
elif args.input_format=="pkl":
modifiers=pd.read_pickle(args.inputdir+"/modifiers.pkl")
modifiers.reset_index(inplace=True)
modifiers.year=modifiers.year.astype("int32")
modifiers=modifiers.query('1800 <= year <= 2010').copy()
modifiers.columns=['common','context','year','count']
modifiers['common']=modifiers['common'].str.replace(r'_noun$', r'_m', regex=True)
if args.temporal==0:
print('No temporal information is stored')
modifiers=modifiers.groupby(['common','context'])['count'].sum().to_frame()
modifiers.reset_index(inplace=True)
modifiers=modifiers.loc[modifiers.groupby(['common'])['count'].transform('sum').gt(args.cutoff)]
modifiers=modifiers.groupby(['common','context'])['count'].sum()
else:
modifiers['time']=year_binner(modifiers['year'].values,args.temporal)
modifiers=modifiers.groupby(['common','context','time'])['count'].sum().to_frame()
modifiers=modifiers.loc[modifiers.groupby(['common','time'])['count'].transform('sum').gt(args.cutoff)]
modifiers=modifiers.groupby(['common','time','context'])['count'].sum()
if args.input_format=="csv":
heads=pd.read_csv(args.inputdir+"/heads.csv",sep="\t")
elif args.input_format=="pkl":
heads=pd.read_pickle(args.inputdir+"/heads.pkl")
heads.reset_index(inplace=True)
heads.year=heads.year.astype("int32")
heads=heads.query('1800 <= year <= 2010').copy()
heads.columns=['common','context','year','count']
heads['common']=heads['common'].str.replace(r'_noun$', r'_h', regex=True)
if args.temporal==0:
print('No temporal information is stored')
heads=heads.groupby(['common','context'])['count'].sum().to_frame()
heads.reset_index(inplace=True)
heads=heads.loc[heads.groupby(['common'])['count'].transform('sum').gt(args.cutoff)]
heads=heads.groupby(['common','context'])['count'].sum()
else:
heads['time']=year_binner(heads['year'].values,args.temporal)
heads=heads.groupby(['common','context','time'])['count'].sum().to_frame()
heads=heads.loc[heads.groupby(['common','time'])['count'].transform('sum').gt(args.cutoff)]
heads=heads.groupby(['common','time','context'])['count'].sum()
print('Concatenating all the datasets together')
df=pd.concat([heads,modifiers,compounds], sort=True)
else:
print("CompoundAgnostic Model")
wordlist = pkl.load( open( "data/coha_wordlist.pkl", "rb" ) )
if args.input_format=="csv":
compounds=pd.read_csv(args.inputdir+"/phrases.csv",sep="\t")
elif args.input_format=="pkl":
compounds=pd.read_pickle(args.inputdir+"/phrases.pkl")
compounds.reset_index(inplace=True)
compounds.year=compounds.year.astype("int32")
compounds=compounds.query('1800 <= year <= 2010').copy()
compounds['common']=compounds['modifier']+" "+compounds['head']
if args.temporal==0:
print('No temporal information is stored')
compounds=compounds.groupby(['common','context'])['count'].sum().to_frame()
compounds.reset_index(inplace=True)
compounds=compounds.loc[compounds.groupby(['common'])['count'].transform('sum').gt(args.cutoff)]
compounds=compounds.groupby(['common','context'])['count'].sum()
else:
compounds['time']=year_binner(compounds['year'].values,args.temporal)
#compounds = dd.from_pandas(compounds, npartitions=100)
compounds=compounds.groupby(['common','context','time'])['count'].sum().to_frame()
compounds=compounds.loc[compounds.groupby(['common','time'])['count'].transform('sum').gt(args.cutoff)]
compounds=compounds.groupby(['common','time','context'])['count'].sum()
if args.input_format=="csv":
constituents=pd.read_csv(args.outputdir+"/words.csv",sep="\t")
elif args.input_format=="pkl":
constituents=pd.read_pickle(args.outputdir+"/words.pkl")
constituents.reset_index(inplace=True)
constituents.year=constituents.year.astype("int32")
constituents=constituents.query('1800 <= year <= 2010').copy()
constituents.columns=['common','context','year','count']
constituents.query('common in @wordlist',inplace=True)
if args.temporal==0:
print('No temporal information is stored')
constituents=constituents.groupby(['common','context'])['count'].sum().to_frame()
constituents.reset_index(inplace=True)
constituents=constituents.loc[constituents.groupby(['common'])['count'].transform('sum').gt(args.cutoff)]
constituents=constituents.groupby(['common','context'])['count'].sum()
else:
constituents['time']=year_binner(constituents['year'].values,args.temporal)
constituents=constituents.groupby(['common','context','time'])['count'].sum().to_frame()
constituents.reset_index(inplace=True)
constituents=constituents.loc[constituents.groupby(['common','time'])['count'].transform('sum').gt(args.cutoff)]
constituents=constituents.groupby(['common','time','context'])['count'].sum()
print('Concatenating all the datasets together')
df=pd.concat([constituents,compounds], sort=True)
dtype = pd.SparseDtype(np.float64, fill_value=0)  # np.float was removed in NumPy 1.24
df=df.astype(dtype)
if args.temporal!=0:
df, rows, _ = df.sparse.to_coo(row_levels=['common','time'],column_levels=['context'],sort_labels=False)
else:
df, rows, _ = df.sparse.to_coo(row_levels=['common'],column_levels=['context'],sort_labels=False)
print('Running SVD')
df_reduced=dim_reduction(df,rows)
print('Splitting back into individual datasets and saving them')
if args.temporal!=0:
df_reduced.index.names = ['common','time']
else:
df_reduced.index.names = ['common']
compounds_reduced=df_reduced.loc[df_reduced.index.get_level_values(0).str.contains(r'\w \w')]
compounds_reduced.reset_index(inplace=True)
#print(compounds_reduced.head())
#compounds_reduced['modifier'],compounds_reduced['head']=compounds_reduced['common'].str.split(' ', 1).str
compounds_reduced[['modifier','head']]=compounds_reduced['common'].str.split(' ', n=1,expand=True).copy()
compounds_reduced
```
Load libs and utilities.
```
!pip install -U -q PyDrive
from pydrive.auth import GoogleAuth
from pydrive.drive import GoogleDrive
from google.colab import auth
from oauth2client.client import GoogleCredentials
from google.colab import drive
drive.mount('/content/drive')
%cd "drive/MyDrive/Projects/Fourier"
!pip install import-ipynb
import import_ipynb
import os
import tensorflow as tf
print("Tensorflow version: " + tf.__version__)
from tensorflow import keras
from tensorflow.keras import layers
from tensorflow.keras.utils import plot_model
from pathlib import Path
from utils import *
class UnitGaussianNormalizer:
def __init__(self, x, eps=0.00001):
super(UnitGaussianNormalizer, self).__init__()
self.mean = tf.math.reduce_mean(x, 0)
self.std = tf.math.reduce_std(x, 0)
self.eps = eps
def encode(self, x):
x = (x - self.mean) / (self.std + self.eps)
return x
def decode(self, x):
std = self.std + self.eps
mean = self.mean
x = (x * std) + mean
return x
PROJECT_PATH = Path(os.path.abspath('')).parent.parent.resolve().__str__()
TRAIN_PATH = PROJECT_PATH + '/Datasets/Fourier/piececonst_r241_N1024_smooth1.mat'
TEST_PATH = PROJECT_PATH + '/Datasets/Fourier/piececonst_r241_N1024_smooth2.mat'
N_TRAIN = 1000
W = 49 #width
FTS = 32 #features
R = 5 #refinement
MODES = 12
# ...
try:
if DATA_IS_LOADED:
print("Not reloading data!")
except:
reader = MatReader()
if reader.is_not_loaded():
reader.load_file(TRAIN_PATH)
DATA_IS_LOADED = True
# ...
x_train = reader.read_field('coeff')[:N_TRAIN,::R,::R]
y_train = reader.read_field('sol')[:N_TRAIN,::R,::R]
S_ = x_train.shape[1]
grids = []
grids.append(np.linspace(0, 1, S_))
grids.append(np.linspace(0, 1, S_))
grid = np.vstack([xx.ravel() for xx in np.meshgrid(*grids)]).T
grid = grid.reshape(1,S_,S_,2)
print(x_train.shape)
x_train = tf.convert_to_tensor(x_train, dtype=tf.float32)
y_train = tf.convert_to_tensor(y_train, dtype=tf.float32)
grid = tf.convert_to_tensor(grid, dtype=tf.float32)
x_train = tf.expand_dims(x_train, axis=3)
grid = tf.repeat(grid, repeats = N_TRAIN, axis = 0)
x_train = tf.concat([x_train, grid], axis=3)
y_train = tf.expand_dims(y_train, axis=3)
x_normalizer = UnitGaussianNormalizer(x_train)
x_train = x_normalizer.encode(x_train)
y_normalizer = UnitGaussianNormalizer(y_train)
y_train = y_normalizer.encode(y_train)
print("x_train dims: " + str(x_train.shape))
print("y_train dims: " + str(y_train.shape))
class FourierLayer(layers.Layer):
def __init__(self):
super(FourierLayer, self).__init__()
self.weight_fft1 = tf.Variable(tf.random.uniform([FTS, FTS, MODES, MODES], minval=0, maxval=1),name="Wfft1", trainable=True)
self.weight_fft2 = tf.Variable(tf.random.uniform([FTS, FTS, MODES, MODES], minval=0, maxval=1),name="Wfft2", trainable=True)
def call(self, input, training=True):
weight_fft_complex = tf.complex(self.weight_fft1, self.weight_fft2)
x = input
x = keras.layers.Lambda(lambda v: tf.signal.rfft2d(v, tf.constant([49, 49])))(x)
x = x[:,:,:MODES, :MODES]
x = keras.layers.Lambda(lambda v: tf.einsum('ioxy,bixy->boxy', weight_fft_complex, v))(x)
x = keras.layers.Lambda(lambda v: tf.signal.irfft2d(v, tf.constant([49, 49])))(x)
return x
class FourierUnit(layers.Layer):
def __init__(self):
super(FourierUnit, self).__init__()
self.W = tf.keras.layers.Conv1D(W, 1)
self.fourier = FourierLayer()
self.add = tf.keras.layers.Add()
self.bn = tf.keras.layers.BatchNormalization()
def call(self, input, training=True):
x = input
x1 = self.fourier(x)
x2 = self.W(x)
x = self.add([x1, x2])
x = self.bn(x)
return x
class MyModel(keras.Model):
def __init__(self):
super(MyModel, self).__init__()
self.fc0 = tf.keras.layers.Dense(FTS)
self.perm_pre = tf.keras.layers.Permute((3, 1, 2))
self.fourier_unit_1 = FourierUnit()
self.relu_1 = tf.keras.layers.ReLU()
self.fourier_unit_2 = FourierUnit()
self.relu = tf.keras.layers.ReLU()
self.perm_post = tf.keras.layers.Permute((2, 3, 1))
self.fc1 = tf.keras.layers.Dense(128)
self.relu2 = tf.keras.layers.ReLU()
self.fc2 = tf.keras.layers.Dense(1)
def call(self, input):
x = self.fc0(input)
x = self.perm_pre(x)
x = self.fourier_unit_1(x)
x = self.relu_1(x)
x = self.fourier_unit_2(x)
x = self.perm_post(x)
x = self.fc1(x)
x = self.relu2(x)
x = self.fc2(x)
return x
def model(self):
x = keras.Input(shape=(W, W, 3))
return keras.Model(inputs=[x], outputs=self.call(x))
model = MyModel()
mse = tf.keras.losses.MeanSquaredError()
model.compile(
loss=mse,
    optimizer=keras.optimizers.Adam(learning_rate=3e-4),
metrics=[tf.keras.metrics.RootMeanSquaredError()],
)
model.fit(x_train, y_train, batch_size=64, epochs=2, verbose=2)
model.model().summary()
```
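The `FourierLayer` above implements a spectral convolution: transform to frequency space, keep only the lowest `MODES` modes, mix channels with a learned complex weight, and transform back. A minimal NumPy sketch of that pipeline (shapes and names here are illustrative, not taken from the notebook):

```python
import numpy as np

def spectral_conv2d(x, w, modes, size):
    """Spectral convolution: FFT -> truncate modes -> channel mix -> inverse FFT."""
    xf = np.fft.rfft2(x, s=(size, size))      # (batch, ch, size, size//2+1), complex
    xf = xf[:, :, :modes, :modes]             # keep only the low-frequency modes
    yf = np.einsum('ioxy,bixy->boxy', w, xf)  # mix input->output channels per mode
    return np.fft.irfft2(yf, s=(size, size))  # zero-pads the dropped modes, back to space

x = np.random.rand(2, 4, 16, 16)                                   # (batch, channels, H, W)
w = np.random.rand(4, 4, 6, 6) + 1j * np.random.rand(4, 4, 6, 6)   # complex mode weights
y = spectral_conv2d(x, w, modes=6, size=16)
print(y.shape)  # (2, 4, 16, 16)
```

Truncating the modes is what makes the layer resolution-independent: the learned weights only act on the retained low frequencies, regardless of the input grid size.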
# Neural Networks for Regression with TensorFlow
> Notebook demonstrates Neural Networks for Regression Problems with TensorFlow
- toc: true
- badges: true
- comments: true
- categories: [DeepLearning, NeuralNetworks, TensorFlow, Python, LinearRegression]
- image: images/nntensorflow.png
## Neural Network Regression Model with TensorFlow
This notebook is a continuation of the blog post [TensorFlow Fundamentals](https://sandeshkatakam.github.io/My-Machine_learning-Blog/tensorflow/machinelearning/2022/02/09/TensorFlow-Fundamentals.html). **The notebook is an account of my work through the TensorFlow tutorial by Daniel Bourke on YouTube.**
**The Notebook will cover the following concepts:**
* Architecture of a neural network regression model.
* Input shapes and output shapes of a regression model (features and labels).
* Creating custom data to view and fit.
* Steps in modelling
* Creating a model, compiling a model, fitting a model, evaluating a model.
* Different evaluation methods.
* Saving and loading models.
**Regression Problems**:
A regression problem is when the output variable is a real or continuous value, such as “salary” or “weight”. Many different models can be used, the simplest is the linear regression. It tries to fit data with the best hyper-plane which goes through the points.
Examples:
* How much will this house sell for?
* How many people will buy this app?
* How much will my health insurance be?
* How much should I save each week for fuel?
We can also use a regression model to predict where the bounding boxes should be in an object detection problem. Object detection thus involves both regression (locating the box) and classification (labelling what is in the box).
### Regression Inputs and outputs
Architecture of a regression model:
* Hyperparameters:
  * Input layer shape: same as the number of features.
  * Hidden layer(s): problem specific.
  * Neurons per hidden layer: problem specific.
  * Output layer shape: same as the shape of the desired prediction.
  * Hidden activation: usually ReLU (rectified linear unit), sometimes sigmoid.
  * Output activation: None, ReLU, logistic/tanh.
  * Loss function: MSE (mean squared error), MAE (mean absolute error), or a combination of both.
  * Optimizer: SGD (stochastic gradient descent), Adam optimizer.
**Source:** Adapted from page 239 of [Hands-On Machine learning with Scikit-Learn, Keras & TensorFlow](https://www.oreilly.com/library/view/hands-on-machine-learning/9781492032632/)
Example of creating a sample regression model in TensorFlow:
```
# 1. Create a model(specific to your problem)
model = tf.keras.Sequential([
tf.keras.Input(shape = (3,)),
tf.keras.layers.Dense(100, activation = "relu"),
tf.keras.layers.Dense(100, activation = "relu"),
tf.keras.layers.Dense(100, activation = "relu"),
tf.keras.layers.Dense(1, activation = None)
])
# 2. Compile the model
model.compile(loss = tf.keras.losses.mae, optimizer = tf.keras.optimizers.Adam(learning_rate = 0.0001), metrics = ["mae"])
# 3. Fit the model
model.fit(X_train, Y_train, epochs = 100)
```
### Introduction to Regression with Neural Networks in TensorFlow
```
# Import TensorFlow
import tensorflow as tf
print(tf.__version__)
## Creating data to view and fit
import numpy as np
import matplotlib.pyplot as plt
from matplotlib import style
style.use('dark_background')
# create features
X = np.array([-7.0,-4.0,-1.0,2.0,5.0,8.0,11.0,14.0])
# Create labels
y = np.array([3.0,6.0,9.0,12.0,15.0,18.0,21.0,24.0])
# Visualize it
plt.scatter(X,y)
y == X + 10
```
Yayy.. we got the relationship just by looking at the data. Since the dataset is small and the relationship is linear, it was easy to guess.
### Input and Output shapes
```
# Create a demo tensor for the housing price prediction problem
house_info = tf.constant(["bedroom","bathroom", "garage"])
house_price = tf.constant([939700])
house_info, house_price
X[0], y[0]
X[1], y[1]
input_shape = X[0].shape
output_shape = y[0].shape
input_shape, output_shape
X[0].ndim
```
We are specifically looking at scalars here. Scalars have 0 dimensions.
```
# Turn our numpy arrays into tensors
X = tf.cast(tf.constant(X), dtype = tf.float32)
y = tf.cast(tf.constant(y), dtype = tf.float32)
X.shape, y.shape
input_shape = X[0].shape
output_shape = y[0].shape
input_shape, output_shape
plt.scatter(X,y)
```
### Steps in modelling with Tensorflow
1. **Creating a model** - define the input and output layers, as well as the hidden layers of a deep learning model.
2. **Compiling a model** - define the loss function (how wrong the predictions of our model are), the optimizer (tells our model how to improve the patterns it's learning) and evaluation metrics (what we can use to interpret the performance of our model).
3. **Fitting a model** - letting the model try to find the patterns between X & y (features and labels).
```
X,y
X.shape
# Set random seed
tf.random.set_seed(42)
# Create a model using the Sequential API
model = tf.keras.Sequential([
tf.keras.layers.Dense(1)
])
# Compile the model
model.compile(loss=tf.keras.losses.mae, # mae is short for mean absolute error
optimizer=tf.keras.optimizers.SGD(), # SGD is short for stochastic gradient descent
metrics=["mae"])
# Fit the model
# model.fit(X, y, epochs=5) # this will break with TensorFlow 2.7.0+
model.fit(tf.expand_dims(X, axis=-1), y, epochs=5)
# Check out X and y
X, y
# Try and make a prediction using our model
y_pred = model.predict([17.0])
y_pred
```
The output is very far off from the actual value, so our model is not working correctly. Let's go and improve our model in the next section.
### Improving our Model
Let's take another look at the three steps we used to create the above model.
We can improve the model by altering each of those steps.
1. **Creating a model** - here we might add more layers, increase the number of hidden units (also called neurons) within each of the hidden layers, or change the activation function of each layer.
2. **Compiling a model** - here we might change the optimization function or perhaps the learning rate of the optimization function.
3. **Fitting a model** - here we might fit the model for more **epochs** (leave it training for longer) or on more data (give the model more examples to learn from).
```
# Let's rebuild our model with change in the epoch number
# 1. Create the model
model = tf.keras.Sequential([
tf.keras.layers.Dense(1)
])
# 2. Compile the model
model.compile(loss = tf.keras.losses.mae,
optimizer = tf.keras.optimizers.SGD(),
metrics = ["mae"])
# 3. Fit the model to our dataset
model.fit(tf.expand_dims(X, axis=-1), y, epochs=100, verbose = 0)
# Our data
X , y
# Let's see if our model's prediction has improved
model.predict([17.0])
```
We got so close: the actual value is 27, and this prediction is better than our last model's. But we can still improve.
Let's see what else we can change and how close we can get to the actual output.
```
# Let's rebuild our model with changing the optimization function to Adam
# 1. Create the model
model = tf.keras.Sequential([
tf.keras.layers.Dense(1)
])
# 2. Compile the model
model.compile(loss = tf.keras.losses.mae,
              optimizer = tf.keras.optimizers.Adam(learning_rate = 0.0001),
metrics = ["mae"])
# 3. Fit the model to our dataset
model.fit(tf.expand_dims(X, axis=-1), y, epochs=100, verbose = 0)
# Prediction of our newly trained model:
model.predict([17.0]) # we are going to predict for the same input value 17
```
Oh..god!! This result turned out really badly for us.
```
# Let's rebuild our model by adding one extra hidden layer with 100 units
# 1. Create the model
model = tf.keras.Sequential([
tf.keras.layers.Dense(100, activation = "relu"), # only difference we made
tf.keras.layers.Dense(1)
])
# 2. Compile the model
model.compile(loss = "mae",
optimizer = tf.keras.optimizers.SGD(),
metrics = ["mae"])
# 3. Fit the model to our dataset
model.fit(tf.expand_dims(X, axis=-1), y, epochs=100, verbose = 0) # verbose will hide the output from epochs
X , y
# It's prediction time!
model.predict([17.0])
```
Oh, this should be 27, but this prediction is much further off than our previous one.
It seems that our previous model did better than this.
Even though the loss values are much lower than our previous model's, we are still far away from the label value.
**Why is that so??**
The explanation is that our model is overfitting the dataset: it has learned a function that fits the already provided examples well but fails on new examples we give it.
So the `mae` and `loss` values are not the ultimate metrics for judging model improvements, because what we really need is low error on new examples the model has not seen before.
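As an illustration of this point (not from the tutorial itself): driving training error to near zero does not guarantee good predictions on unseen inputs. Fitting a high-degree polynomial to a handful of noisy points makes the training error tiny, yet the model can be wildly wrong away from the training data:

```python
import numpy as np

rng = np.random.default_rng(42)
X = np.linspace(-3, 3, 8)
y = X + 10 + rng.normal(0, 0.5, size=X.shape)   # noisy samples of the true line y = X + 10

x_new, y_new = 5.0, 15.0                        # an unseen point on the true line

for degree in (1, 7):
    coeffs = np.polyfit(X, y, degree)           # least-squares polynomial fit
    train_err = np.mean(np.abs(np.polyval(coeffs, X) - y))
    test_err = abs(np.polyval(coeffs, x_new) - y_new)
    print(f"degree {degree}: train MAE {train_err:.3f}, error at x=5.0: {test_err:.3f}")
```

The degree-7 polynomial passes through every training point (near-zero training error) but typically extrapolates far worse than the simple line, which is exactly why we evaluate on data the model has not seen.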
```
# Let's rebuild our model by using Adam optimizer
# 1. Create the model
model = tf.keras.Sequential([
tf.keras.layers.Dense(100, activation = "relu"), # only difference we made
tf.keras.layers.Dense(1)
])
# 2. Compile the model
model.compile(loss = "mae",
optimizer = tf.keras.optimizers.Adam(),
metrics = ["mae"])
# 3. Fit the model to our dataset
model.fit(tf.expand_dims(X, axis=-1), y, epochs=100, verbose = 0)# verbose will hide the epochs output
model.predict([17.0])
```
Still not better!!
```
# Let's rebuild our model by adding more layers
# 1. Create the model
model = tf.keras.Sequential([
tf.keras.layers.Dense(100, activation = "relu"),
tf.keras.layers.Dense(100, activation = "relu"),
tf.keras.layers.Dense(100, activation = "relu"),# only difference we made
tf.keras.layers.Dense(1)
])
# default value of lr is 0.001
# 2. Compile the model
model.compile(loss = "mae",
              optimizer = tf.keras.optimizers.Adam(learning_rate = 0.01),
metrics = ["mae"])
# 3. Fit the model to our dataset
model.fit(tf.expand_dims(X, axis=-1), y, epochs=100, verbose = 0) # verbose will hide the epochs output
```
The learning rate is the most important hyperparameter for almost all neural networks.
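To see why (a toy illustration, not part of the tutorial): plain gradient descent on f(w) = w² converges, crawls, or diverges depending only on the learning rate:

```python
def gradient_descent(lr, steps=20, w=1.0):
    """Minimise f(w) = w**2 with fixed-step gradient descent (gradient is 2*w)."""
    for _ in range(steps):
        w = w - lr * 2 * w
    return w

for lr in (0.01, 0.1, 1.1):
    # small lr: slow progress; moderate lr: fast convergence; too-large lr: divergence
    print(f"lr={lr}: w after 20 steps = {gradient_descent(lr):.4f}")
```

With `lr=0.01` the weight barely moves, with `lr=0.1` it converges quickly toward the minimum at 0, and with `lr=1.1` each step overshoots and the weight grows without bound.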
### Evaluating our model
In practice, a typical workflow you'll go through when building a neural network is:
```
Build a model -> fit it -> evaluate it -> tweak a model -> fit it -> evaluate it -> tweak it -> fit it
```
Common ways to improve a deep model:
* Adding Layers
* Increase the number of hidden units
* Change the activation functions
* Change the optimization function
* Change the learning rate
* Fitting on more data
* Train for longer (more epochs)
Because we can alter each of these, they are called **hyperparameters**.
When it comes to evaluation.. there are 3 words you should memorize:
> "Visualize, Visualize, Visualize"
It's a good idea to visualize:
* The data - what data are we working with? What does it look like?
* The model itself - what does our model look like?
* The training of a model - how does the model perform while it learns?
* The predictions of the model - how do the predictions line up against the ground truth labels (the original values)?
```
# Make a bigger dataset
X_large = tf.range(-100,100,4)
X_large
y_large = X_large + 10
y_large
import matplotlib.pyplot as plt
plt.scatter(X_large,y_large)
```
### The 3 sets ...
* **Training set** - The model learns from this data, which is typically 70-80% of the total data you have available.
* **Validation set** - The model gets tuned on this data, which is typically 10-15% of the data available.
* **Test set** - The model gets evaluated on this data to test what it has learned. This set is typically 10-15% of the data available.
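A minimal sketch of such a three-way split (the index boundaries and sizes are illustrative; `train_test_split` from scikit-learn or `tf.data` would do the same job):

```python
import numpy as np

data = np.arange(200)            # stand-in dataset
rng = np.random.default_rng(42)
rng.shuffle(data)                # shuffle before splitting so the sets are representative

n = len(data)
train_end = int(0.8 * n)         # 80% training
val_end = int(0.9 * n)           # next 10% validation, final 10% test

train_set = data[:train_end]
val_set = data[train_end:val_end]
test_set = data[val_end:]
print(len(train_set), len(val_set), len(test_set))  # 160 20 20
```

Because the slices are disjoint, no example the model trains on ever leaks into the validation or test sets.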
```
# Check the length of how many samples we have
len(X_large)
# split the data into train and test sets
# since the dataset is small we can skip the validation set
X_train = X_large[:40]
X_test = X_large[40:]
y_train = y_large[:40]
y_test = y_large[40:]
len(X_train), len(X_test), len(y_train), len(y_test)
```
### Visualizing the data
Now we've got our data in training and test sets. Let's visualize it.
```
plt.figure(figsize = (10,7))
# Plot the training data in blue
plt.scatter(X_train, y_train, c= 'b', label = "Training data")
# Plot the test data in green
plt.scatter(X_test, y_test, c = "g", label = "Testing data")
plt.legend();
# Let's have a look at how to build neural network for our data
# 1. Create the model
model = tf.keras.Sequential([
tf.keras.layers.Dense(1)
])
# 2. Compile the model
model.compile(loss = "mae",
              optimizer = tf.keras.optimizers.SGD(),
metrics = ["mae"])
# 3. Fit the model to our dataset
#model.fit(tf.expand_dims(X_train, axis=-1), y_train, epochs=100)
```
Let's visualize it before fitting the model
```
model.summary()
```
`model.summary()` doesn't work until the model has been built or fitted.
```
X[0], y[0]
# Let's create a model which builds automatically by defining the input_shape arguments
tf.random.set_seed(42)
# Create a model(same as above)
model = tf.keras.Sequential([
tf.keras.layers.Dense(1, input_shape = [1]) # input_shape is 1 refer above code cell
])
# Compile the model
model.compile(loss= "mae",
optimizer = tf.keras.optimizers.SGD(),
metrics = ["mae"])
model.summary()
```
* **Total params** - total number of parameters in the model.
* **Trainable parameters**- these are the parameters (patterns) the model can update as it trains.
* **Non-trainable parameters** - these parameters aren't updated during training (this is typical when you bring in parameters from other models during **transfer learning**)
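For a `Dense` layer, the parameter count reported by `model.summary()` is simply `inputs * units + units` — one weight per input-unit pair plus one bias per unit. A quick sanity check (the layer sizes below are illustrative):

```python
def dense_params(n_inputs, n_units):
    """Parameters of a Dense layer: weights (n_inputs * n_units) plus one bias per unit."""
    return n_inputs * n_units + n_units

layer1 = dense_params(1, 10)   # e.g. Dense(10) fed a single feature
layer2 = dense_params(10, 1)   # e.g. Dense(1) stacked on top of it
print(layer1, layer2, layer1 + layer2)  # 20 11 31
```

Checking these hand counts against `model.summary()` is a handy way to confirm the input shape your model actually received.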
```
# Let's have a look at how to build neural network for our data
# 1. Create the model
model = tf.keras.Sequential([
tf.keras.layers.Dense(10, input_shape = [1], name= "input_layer"),
tf.keras.layers.Dense(1, name = "output_layer")
], name = "model_1")
# 2. Compile the model
model.compile(loss = "mae",
              optimizer = tf.keras.optimizers.SGD(),
metrics = ["mae"])
model.summary()
```
We have changed the layer names and added our custom model name.
```
from tensorflow.keras.utils import plot_model
plot_model(model = model, to_file = 'model1.png', show_shapes = True)
# Let's have a look at how to build neural network for our data
# 1. Create the model
model = tf.keras.Sequential([
tf.keras.layers.Dense(100, activation = "relu"),
tf.keras.layers.Dense(100, activation = "relu"),
tf.keras.layers.Dense(100, activation = "relu"),# only difference we made
tf.keras.layers.Dense(1)
], name = "model_2")
# default value of lr is 0.001
# 2. Compile the model
model.compile(loss = "mae",
              optimizer = tf.keras.optimizers.Adam(learning_rate = 0.01),
metrics = ["mae"])
# 3. Fit the model to our dataset
model.fit(tf.expand_dims(X_train, axis=-1), y_train, epochs=100, verbose = 0)
model.predict(X_test)
```
wow, we are so close!!!
```
model.summary()
from tensorflow.keras.utils import plot_model
plot_model(model = model, to_file = 'model.png', show_shapes = True)
```
### Visualizing our model's predictions
To visualize predictions, it's a good idea to plot them against the ground truth labels.
Often you'll see this in the form of `y_test` or `y_true` versus `y_pred`
```
# Set random seed
tf.random.set_seed(42)
# Create a model (same as above)
model = tf.keras.Sequential([
tf.keras.layers.Dense(10, input_shape = [1], name = "input_layer"),
tf.keras.layers.Dense(1, name = "output_layer") # define the input_shape to our model
], name = "revised_model_1")
# Compile model (same as above)
model.compile(loss=tf.keras.losses.mae,
optimizer=tf.keras.optimizers.SGD(),
metrics=["mae"])
model.summary()
model.fit(X_train, y_train, epochs=100, verbose=0)
model.summary()
# Make some predictions
y_pred = model.predict(X_test)
tf.constant(y_pred)
```
These are our predictions!
```
y_test
```
These are the ground truth labels!
```
plot_model(model, show_shapes=True)
```
**Note:** If you feel like you're going to reuse some kind of functionality in the future,
it's a good idea to define a function so that you can reuse it whenever you need.
```
#Let's create a plotting function
def plot_predictions(train_data= X_train,
train_labels = y_train,
test_data = X_test,
test_labels =y_test,
predictions = y_pred):
"""
Plots training data, test data and compares predictions to ground truth labels
"""
plt.figure(figsize = (10,7))
# Plot training data in blue
plt.scatter(train_data, train_labels, c= "b", label = "Training data")
# Plot testing data in green
plt.scatter(test_data, test_labels, c= "g", label = "Testing data")
# Plot model's predictions in red
plt.scatter(test_data, predictions, c= "r", label = "Predictions")
# Show legends
plt.legend();
plot_predictions(train_data=X_train,
train_labels=y_train,
test_data=X_test,
test_labels=y_test,
predictions=y_pred)
```
We tuned our model very well this time. The predictions are really close to the actual values.
### Evaluating our model's predictions with regression evaluation metrics
Depending on the problem you're working on, there will be different evaluation metrics to evaluate your model's performance.
Since we're working on a regression problem, two of the main metrics are:
* **MAE** - mean absolute error, "on average, how wrong is each of my model's predictions"
* TensorFlow code: `tf.keras.losses.MAE()`
* or `tf.metrics.mean_absolute_error()`
$$ MAE = \frac{1}{n} \sum_{i=1}^{n} |Y_i - \hat{Y_i}| $$
* **MSE** - mean squared error, "the average of the squared errors"
* `tf.keras.losses.MSE()`
* `tf.metrics.mean_square_error()`
$$ MSE = \frac{1}{n} \sum_{i=1}^{n} (Y_i - \hat{Y_i})^2 $$
$\hat{Y_i}$ is the prediction our model makes.
$Y_i$ is the label value.
* **Huber** - combination of MSE and MAE; less sensitive to outliers than MSE.
* `tf.keras.losses.Huber()`
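For reference, the Huber formula is quadratic for small errors (like MSE) and linear for large ones (like MAE). A small NumPy sketch with `delta = 1.0`, the Keras default (the function name here is illustrative):

```python
import numpy as np

def huber_np(y_true, y_pred, delta=1.0):
    """Mean Huber loss: 0.5*e**2 if |e| <= delta, else delta*(|e| - 0.5*delta)."""
    err = np.abs(np.asarray(y_true) - np.asarray(y_pred))
    quadratic = 0.5 * err ** 2            # MSE-like branch for small errors
    linear = delta * (err - 0.5 * delta)  # MAE-like branch for large errors
    return np.mean(np.where(err <= delta, quadratic, linear))

print(huber_np([0.0], [0.5]))  # 0.125  (small error: 0.5 * 0.5**2)
print(huber_np([0.0], [3.0]))  # 2.5    (large error: 1.0 * (3.0 - 0.5))
```

Because large errors only grow linearly, a single outlier cannot dominate the loss the way it does with MSE.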
```
# Evaluate the model on test set
model.evaluate(X_test, y_test)
# calculate the mean absolute error
mae = tf.metrics.mean_absolute_error(y_true = y_test,
y_pred = tf.constant(y_pred))
mae
```
We got the wrong metric values.. why did this happen?
```
tf.constant(y_pred)
y_test
```
Notice that the shape of `y_pred` is (10,1) and the shape of `y_test` is (10,)
They might seem the same but they are not of the same shape.
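Here's what actually goes wrong with the mismatched shapes (a NumPy illustration): subtracting a `(10,)` array from a `(10, 1)` array broadcasts to a `(10, 10)` matrix, so the metric averages every pairwise difference instead of the elementwise ones:

```python
import numpy as np

y_true = np.arange(10, dtype=float)   # shape (10,)
y_pred = y_true.reshape(10, 1)        # shape (10, 1), identical values

diff = y_true - y_pred                # broadcasts to shape (10, 10)!
print(diff.shape)                     # (10, 10)
print(np.mean(np.abs(diff)))          # 3.3 — nonzero, even though the values match

print(np.mean(np.abs(y_true - y_pred.squeeze())))  # 0.0 after fixing the shape
```

This is exactly why squeezing `y_pred` before computing the metric gives the correct answer.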
Let's reshape the tensor to make the shapes equal.
```
tf.squeeze(y_pred)
# Calculate the mean absolute error
mae = tf.metrics.mean_absolute_error(y_true = y_test,
y_pred = tf.squeeze(y_pred))
mae
```
Now we get a proper metric value: the mean absolute error of our model is 3.1969407.
Now, let's calculate the mean squared error and see how that goes.
```
# Calculate the mean squared error
mse = tf.metrics.mean_squared_error(y_true = y_test,
y_pred = tf.squeeze(y_pred))
mse
```
Our mean squared error is 13.070143. Remember, the mean squared error squares the error for every example in the test set and averages the values, so the MSE is generally larger than the MAE.
When larger errors are disproportionately more significant than smaller ones, MSE is usually the better choice.
MAE can be used as a great starter metric for any regression problem.
We can also try Huber and see how that goes.
```
# Calculate the Huber metric for our model
huber_metric = tf.losses.huber(y_true = y_test,
y_pred = tf.squeeze(y_pred))
huber_metric
# Make reusable functions for MAE, MSE and Huber
def mae(y_true, y_pred):
  return tf.metrics.mean_absolute_error(y_true = y_true,
                                        y_pred = tf.squeeze(y_pred))

def mse(y_true, y_pred):
  return tf.metrics.mean_squared_error(y_true = y_true,
                                       y_pred = tf.squeeze(y_pred))

def huber(y_true, y_pred):
  return tf.losses.huber(y_true = y_true,
                         y_pred = tf.squeeze(y_pred))
```
### Running experiments to improve our model
```
Build a model -> fit it -> evaluate it -> tweak a model -> fit it -> evaluate it -> tweak it -> fit it
```
1. Get more data - get more examples for your model to train on (more opportunities to learn patterns or relationships between features and labels).
2. Make your model larger (use a more complex model) - this might come in the form of more layers or more hidden units in each layer.
3. Train for longer - give your model more of a chance to find patterns in the data.
Let's do a few modelling experiments:
1. `model_1` - same as original model, 1 layer, trained for 100 epochs.
2. `model_2` - 2 layers, trained for 100 epochs
3. `model_3` - 2 layers, trained for 500 epochs.
You can also design more experiments to improve the model further.
**Build `Model_1`**
```
X_train, y_train
# Set random seed
tf.random.set_seed(42)
# 1. Create the model
model_1 = tf.keras.Sequential([
tf.keras.layers.Dense(1, input_shape = [1])
], name = "Model_1")
# 2. Compile the model
model_1.compile(loss = tf.keras.losses.mae,
optimizer = tf.keras.optimizers.SGD(),
metrics = ["mae"])
# 3. Fit the model
model_1.fit(X_train, y_train ,epochs = 100, verbose = 0)
model_1.summary()
# Make and plot the predictions for model_1
y_preds_1 = model_1.predict(X_test)
plot_predictions(predictions = y_preds_1)
# Calculate model_1 evaluation metrics
mae_1 = mae(y_test, y_preds_1)
mse_1 = mse(y_test, y_preds_1)
mae_1, mse_1
```
**Build `Model_2`**
* 2 dense layers, trained for 100 epochs
```
# Set random seed
tf.random.set_seed(42)
# 1. Create the model
model_2 = tf.keras.Sequential([
tf.keras.layers.Dense(10, input_shape =[1]),
tf.keras.layers.Dense(1)
], name = "model_2")
# 2. Compile the model
model_2.compile(loss = tf.keras.losses.mae,
optimizer = tf.keras.optimizers.SGD(),
metrics = ["mse"]) # Let's build this model with mse as eval metric.
# 3. Fit the model
model_2.fit(X_train, y_train ,epochs = 100, verbose = 0)
model_2.summary()
# Make and plot predictions of model_2
y_preds_2 = model_2.predict(X_test)
plot_predictions(predictions = y_preds_2)
```
We improved quite a bit on the previous model.
If you want to compare, scroll up and look at the `plot_predictions` output of the previous model.
```
# Calculate the model_2 evaluation metrics
mae_2 = mae(y_test, y_preds_2)
mse_2 = mse(y_test, y_preds_2)
mae_2, mse_2
```
**Build `Model_3`**
* 2 layers, trained for 500 epochs
```
# Set random seed
tf.random.set_seed(42)
# 1. Create the model
model_3 = tf.keras.Sequential([
tf.keras.layers.Dense(10, input_shape =[1]),
tf.keras.layers.Dense(1)
], name = "model_3")
# 2. Compile the model
model_3.compile(loss = tf.keras.losses.mae,
optimizer = tf.keras.optimizers.SGD(),
metrics = ["mae"])
# 3. Fit the model
model_3.fit(X_train, y_train, epochs = 500, verbose = 0)
# Make and plot some predictions
y_preds_3 = model_3.predict(X_test)
plot_predictions(predictions = y_preds_3)
```
This is even worse than the first model. We've actually made the model worse. Why?
We overfit the model by training it for much longer than we were supposed to.
```
# Calculate the model_3 evaluation metrics
mae_3 = mae(y_test, y_preds_3)
mse_3 = mse(y_test, y_preds_3)
mae_3, mse_3
```
Whoa, the error is extremely high. The best of our models so far is `model_2`.
The Machine Learning practitioner's motto:
`Experiment, experiment, experiment`
**Note:** You want to start with small experiments (small models), make sure they work, and then increase their scale when necessary.
### Comparing the results of our experiments
We've run a few experiments, let's compare the results now.
```
# Let's compare our models' results using a pandas DataFrame:
import pandas as pd
model_results = [["model_1", mae_1.numpy(), mse_1.numpy()],
["model_2", mae_2.numpy(), mse_2.numpy()],
["model_3", mae_3.numpy(), mse_3.numpy()]]
all_results = pd.DataFrame(model_results, columns =["model", "mae", "mse"])
all_results
```
It looks like model_2 performed the best. Let's take a closer look at model_2.
```
model_2.summary()
```
This is the model that has done the best on our dataset.
**Note:** One of your main goals should be to minimize the time between your experiments. The more experiments you do, the more things you will figure out which don't work and, in turn, get closer to figuring out what does work. Remember the machine learning practitioner's motto: "experiment, experiment, experiment".
## Tracking your experiments:
One really good habit of machine learning modelling is to track the results of your experiments.
And when doing so, it can be tedious if you are running lots of experiments.
Luckily, there are tools to help us!
**Resources:** As you build more models, you'll want to look into using:
* TensorBoard - a component of TensorFlow library to help track modelling experiments. It is integrated into the TensorFlow library.
* Weights & Biases - a tool for tracking all kinds of machine learning experiments (it plugs straight into TensorBoard).
## Saving our models
Saving our models allows us to use them outside of Google Colab(or wherever they were trained) such as in a web application or a mobile app.
There are two main formats we can save our model:
1. The SavedModel format
2. The HDF5 format
`model.save()` allows us to save a model, and we can reload it later to keep using or training it.
```
# Save model using savedmodel format
model_2.save("best_model_SavedModel_format")
```
If we plan to use this model inside the TensorFlow framework, we're better off using the `SavedModel` format. But if we plan to export the model elsewhere and use it outside the TensorFlow framework, the HDF5 format is the better choice.
```
# Save model using HDF5 format
model_2.save("best_model_HDF5_format.h5")
```
Saving a model with SavedModel format will give us a folder with some files regarding our model.
Saving a model with HDF5 format will give us just one file with our model.
### Loading in a saved model
```
# Load in the SavedModel format model
loaded_SavedModel_format = tf.keras.models.load_model("/content/best_model_SavedModel_format")
loaded_SavedModel_format.summary()
# Let's check whether it's the same as model_2
model_2.summary()
# Compare the model_2 predictions with SavedModel format model predictions
model_2_preds = model_2.predict(X_test)
loaded_SavedModel_format_preds = loaded_SavedModel_format.predict(X_test)
model_2_preds == loaded_SavedModel_format_preds
mae(y_true = y_test, y_pred = model_2_preds) == mae(y_true = y_test, y_pred = loaded_SavedModel_format_preds)
# Load in a model using the .h5 format
loaded_h5_model = tf.keras.models.load_model("/content/best_model_HDF5_format.h5")
loaded_h5_model.summary()
model_2.summary()
```
The loaded .h5 model matches our original model_2.
So our model loading worked correctly.
```
# Check to see if loaded .h5 model predictions match model_2
model_2_preds = model_2.predict(X_test)
loaded_h5_model_preds = loaded_h5_model.predict(X_test)
model_2_preds == loaded_h5_model_preds
```
### Download a model (or any other file) from Google Colab
If you want to download your files from Google Colab:
1. You can go to the Files tab, right-click the file you're after and click Download.
2. Use code (see the cell below).
3. You can save it to Google Drive by mounting Google Drive and copying it there.
```
# Download a file from Google Colab
from google.colab import files
files.download("/content/best_model_HDF5_format.h5")
# Save a file from Google Colab to Google Drive(requires mounting google drive)
!cp /content/best_model_HDF5_format.h5 /content/drive/MyDrive/tensor-flow-deep-learning
!ls /content/drive/MyDrive/tensor-flow-deep-learning
```
We have saved our model to our Google Drive!
## A larger example
Let's take a larger dataset and build a regression model on it. We'll forecast medical insurance charges using the [Medical Cost Personal Datasets](https://www.kaggle.com/mirichoi0218/insurance) available from Kaggle.
```
# Import required libraries
import tensorflow as tf
import pandas as pd
import matplotlib.pyplot as plt
# Read in the insurance data set
insurance = pd.read_csv("https://raw.githubusercontent.com/stedy/Machine-Learning-with-R-datasets/master/insurance.csv")
insurance
```
This is quite a bit bigger than the dataset we previously worked with.
```
# one hot encoding on a pandas dataframe
insurance_one_hot = pd.get_dummies(insurance)
insurance_one_hot.head()
# Create X & y values (features and labels)
X = insurance_one_hot.drop("charges", axis =1)
y = insurance_one_hot["charges"]
# View X
X.head()
# View y
y.head()
# Create training and test sets
from sklearn.model_selection import train_test_split
X_train, X_test, y_train, y_test = train_test_split(X, y,test_size = 0.2, random_state = 42)
len(X), len(X_train), len(X_test)
X_train
insurance["smoker"] , insurance["sex"]
# Build a neural network (sort of like model_2 above)
tf.random.set_seed(42)
# 1. Create a model
insurance_model = tf.keras.Sequential([
tf.keras.layers.Dense(10),
tf.keras.layers.Dense(1)
])
# 2. Compile the model
insurance_model.compile(loss = tf.keras.losses.mae,
optimizer = tf.keras.optimizers.SGD(),
metrics = ["mae"])
#3. Fit the model
insurance_model.fit(X_train, y_train,epochs = 100, verbose = 0)
# Check the results of the insurance model on the test data
insurance_model.evaluate(X_test,y_test)
y_train.median(), y_train.mean()
```
Right now it looks like our model is not performing well, so let's try to improve it.
To try to improve our model, we'll run two experiments:
1. Add an extra layer with more hidden units and use the Adam optimizer.
2. Train for longer (like 200 epochs).
You can also design your own experiments to improve it.
```
# Set random seed
tf.random.set_seed(42)
# 1. Create the model
insurance_model_2 = tf.keras.Sequential([
tf.keras.layers.Dense(100),
tf.keras.layers.Dense(10),
tf.keras.layers.Dense(1)
],name = "insurance_model_2")
# 2. Compile the model
insurance_model_2.compile(loss = tf.keras.losses.mae,
optimizer = tf.keras.optimizers.Adam(),
metrics = ["mae"])
# 3. Fit the model
insurance_model_2.fit(X_train, y_train, epochs = 100, verbose = 0)
insurance_model_2.evaluate(X_test, y_test)
# Set random seed
tf.random.set_seed(42)
# 1. Create the model
insurance_model_3 = tf.keras.Sequential([
tf.keras.layers.Dense(100),
tf.keras.layers.Dense(10),
tf.keras.layers.Dense(1)
],name = "insurance_model_3")
# 2. Compile the model
insurance_model_3.compile(loss = tf.keras.losses.mae,
optimizer = tf.keras.optimizers.Adam(),
metrics = ["mae"])
# 3. Fit the model
history = insurance_model_3.fit(X_train, y_train, epochs = 200, verbose = 0)
# Evaluate our third model
insurance_model_3.evaluate(X_test, y_test)
# Plot history (also known as a loss curve or a training curve)
pd.DataFrame(history.history).plot()
plt.ylabel("loss")
plt.xlabel("epochs")
plt.title("Training curve of our model")
```
**Question:** How long should you train for?
It depends. It really depends on the problem you're working on. However, many people have asked this question before, so TensorFlow has a solution: the [EarlyStopping callback](https://www.tensorflow.org/api_docs/python/tf/keras/callbacks/EarlyStopping), a component you can add to your model to stop training once it stops improving on a certain metric.
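What the callback does is easy to sketch in plain Python. This is a hypothetical stand-in for the real training loop, not the Keras implementation; `patience` mirrors the callback's parameter of the same name:

```python
def train_with_early_stopping(losses, patience=3):
    """Stop as soon as the monitored loss fails to improve for `patience` epochs.

    `losses` stands in for the per-epoch validation loss a real training
    loop would produce; returns the number of epochs actually run.
    """
    best = float("inf")
    epochs_without_improvement = 0
    for epoch, loss in enumerate(losses, start=1):
        if loss < best:
            best = loss
            epochs_without_improvement = 0
        else:
            epochs_without_improvement += 1
        if epochs_without_improvement >= patience:
            break  # training would stop here
    return epoch

# Loss plateaus after epoch 4, so training stops at epoch 7 (4 + patience)
print(train_with_early_stopping([5.0, 3.0, 2.0, 1.5, 1.5, 1.6, 1.7, 1.8]))  # 7
```

In Keras you would instead pass `tf.keras.callbacks.EarlyStopping(monitor="val_loss", patience=3)` via the `callbacks` argument of `model.fit()`.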
## Preprocessing data (normalization and standardization)
Short review of our modelling steps in TensorFlow:
1. Get data ready(turn into tensors)
2. Build or pick a pretrained model (to suit your problem)
3. Fit the model to the data and make a prediction.
4. Evaluate the model.
5. Improve through experimentation.
6. Save and reload your trained models.
We're going to focus on step 1 to make our dataset better suited for training.
Some steps involved in getting data ready:
1. Turn all data into numbers(neural networks can't handle strings).
2. Make sure all of your tensors are the right shape.
3. Scale features (normalize or standardize; neural networks tend to prefer normalization) -- this is the one thing we haven't done yet while preparing our data.
**If you're not sure which scaling to use, you could try both and see which performs better.**
```
# Import required libraries
import tensorflow as tf
import pandas as pd
import matplotlib.pyplot as plt
# Read in the insurance dataframe
insurance = pd.read_csv("https://raw.githubusercontent.com/stedy/Machine-Learning-with-R-datasets/master/insurance.csv")
insurance
```
To prepare our data, we can borrow a few classes from Scikit-Learn.
```
from sklearn.compose import make_column_transformer
from sklearn.preprocessing import MinMaxScaler, OneHotEncoder
from sklearn.model_selection import train_test_split
```
**Feature Scaling**:
| **Scaling type** | **What it does** | **Scikit-Learn function** | **When to use** |
| --- | --- | --- | --- |
| Scaling (also referred to as normalization) | Converts all values to between 0 and 1 whilst preserving the original distribution | `MinMaxScaler` | Use as the default scaler with neural networks |
| Standardization | Removes the mean and divides each value by the standard deviation | `StandardScaler` | Transform a feature to have a close-to-normal distribution |
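The arithmetic behind the two scalers is simple enough to sketch with plain NumPy (the same per-column transforms `MinMaxScaler` and `StandardScaler` apply; the column values are hypothetical, and `StandardScaler` uses the population standard deviation, as `np.std` does by default):

```python
import numpy as np

x = np.array([20.0, 30.0, 40.0, 50.0])  # e.g. an "age" column

# Min-max scaling (normalization): squashes values into [0, 1]
x_norm = (x - x.min()) / (x.max() - x.min())
print(x_norm)

# Standardization: zero mean, unit standard deviation
x_std = (x - x.mean()) / x.std()
print(x_std.mean(), x_std.std())  # ~0.0, ~1.0
```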
```
#Create a column transformer
ct = make_column_transformer(
(MinMaxScaler(), ["age", "bmi", "children"]), # Turn all values in these columns between 0 and 1
(OneHotEncoder(handle_unknown = "ignore"), ["sex", "smoker", "region"])
)
# Create our X and Y values
# because we reimported our dataframe
X = insurance.drop("charges", axis = 1)
y = insurance["charges"]
# Build our train and test set
X_train, X_test, y_train, y_test = train_test_split(X,y, test_size = 0.2, random_state = 42)
# Fit the column transformer to our training data (only training data)
ct.fit(X_train)
# Transform training and test data with normalization(MinMaxScaler) and OneHotEncoder
X_train_normal = ct.transform(X_train)
X_test_normal = ct.transform(X_test)
# What does our data look like now??
X_train.loc[0]
X_train_normal[0], X_train_normal[12], X_train_normal[78]
# We have turned all our data into a numerical encoding and also normalized it
X_train.shape, X_train_normal.shape
```
Beautiful! Our data has been normalized and one-hot encoded. Let's build a neural network on it and see how it goes.
```
# Build a neural network model to fit on our normalized data
tf.random.set_seed(42)
# 1. Create the model
insurance_model_4 = tf.keras.Sequential([
tf.keras.layers.Dense(100),
tf.keras.layers.Dense(10),
tf.keras.layers.Dense(1)
])
# 2. Compile the model
insurance_model_4.compile(loss = tf.keras.losses.mae,
optimizer = tf.keras.optimizers.Adam(),
metrics = ["mae"])
# 3. Fit the model
history = insurance_model_4.fit(X_train_normal, y_train, epochs= 100, verbose = 0)
# Evaluate our insurance model trained on normalized data
insurance_model_4.evaluate(X_test_normal, y_test)
insurance_model_4.summary()
pd.DataFrame(history.history).plot()
plt.ylabel("loss")
plt.xlabel("epochs")
plt.title("Training curve of insurance_model_4")
```
Let's plot some graphs, since we've used them the least in this notebook.
```
X["age"].plot(kind = "hist")
X["bmi"].plot(kind = "hist")
X["children"].value_counts()
```
## **External Resources:**
* [MIT introduction deep learning lecture 1](https://youtu.be/njKP3FqW3Sk)
* [Kaggle's datasets](https://www.kaggle.com/data)
* [Lion Bridge's collection of datasets](https://lionbridge.ai/datasets/)
## Bibliography:
* [Learn TensorFlow and Deep Learning fundamentals with Python (code-first introduction) Part 1/2](https://www.youtube.com/watch?v=tpCFfeUEGs8&list=RDCMUCr8O8l5cCX85Oem1d18EezQ&start_radio=1&rv=tpCFfeUEGs8&t=3)
* [Medical cost personal dataset](https://www.kaggle.com/mirichoi0218/insurance)
* [TensorFlow documentation](https://www.tensorflow.org/api_docs/python/tf)
* [TensorFlow and Deep learning Daniel Bourke GitHub Repo](https://github.com/mrdbourke/tensorflow-deep-learning)
# Data Analysis with Python
In this notebook, we'll use automobile data to analyse how a car's characteristics influence its price, and later try to predict a car's sale price. Our data source is a .csv file with data already cleaned in another notebook. If you have questions about how to clean the data, take a look at my Learn-Pandas repository.
```
import pandas as pd
import numpy as np
df = pd.read_csv('clean_auto_df.csv')
df.head()
```
<h4> Using data visualization to check patterns in individual characteristics</h4>
```
# Importing the "Matplotlib" and "Seaborn" libraries
# using "%matplotlib inline" to plot graphs inside the notebook.
import matplotlib.pyplot as plt
import seaborn as sns
%matplotlib inline
```
<h4> How do we choose the right visualization method? </h4>
<p> When visualizing individual variables, it's important to first understand what type of variable you're dealing with. This will help us find the right visualization method for that variable. For example, we can calculate the correlation between variables of type "int64" or "float64" using the "corr" method:</p>
```
df.corr()
```
The diagonal elements are always one (we'll look at this, more precisely the Pearson correlation, at the end of the notebook)
```
# if we want to check the correlation of just a few columns
df[['bore', 'stroke', 'compression-ratio', 'horsepower']].corr()
```
<h2> Continuous numerical variables: </h2>
<p> Continuous numerical variables are variables that can take any value within some range. They can have the type "int64" or "float64". A great way to visualize these variables is with scatter plots with fitted lines. </p>
<p> To begin understanding the (linear) relationship between an individual variable and the price, we can use "regplot", which plots the scatter plot plus the fitted regression line for the data. </p>
<h4> Positive linear relationship </h4>
Let's look at the scatter plot of "engine-size" and "price"
```
# Engine size as potential predictor variable of price
sns.regplot(x="engine-size", y="price", data=df)
plt.ylim(0,)
```
<p> Note that as the engine size goes up, the price goes up: this indicates a direct positive correlation between these two variables. Engine size seems like a good predictor of price, since the regression line is almost a perfect diagonal. </p>
```
# We can examine the correlation between 'engine-size' and 'price' and see that it is approximately 0.87
df[["engine-size", "price"]].corr()
```
<h4> Negative linear relationship </h4>
```
# city-mpg may also be a good predictor of the price variable:
sns.regplot(x="city-mpg", y="price", data=df)
```
<p> As city-mpg goes up, the price goes down: this indicates an inverse/negative relationship between these two variables, so it may be an indicator of price. </p>
```
df[['city-mpg', 'price']].corr()
```
<h4> Weak (or neutral) linear relationship </h4>
```
sns.regplot(x="peak-rpm", y="price", data=df)
```
<p> The peak-rpm variable does not seem to be a good predictor of price, since the regression line is close to horizontal. Also, the data points are very scattered and far from the fitted line, showing great variability. Therefore, it is not a reliable variable. </p>
```
df[['peak-rpm','price']].corr()
```
<h2> Categorical variables: </h2>
<p> These are variables that describe a 'characteristic' of a data unit and are selected from a small group of categories. Categorical variables can be of type "object" or "int64". A good way to visualize categorical variables is with boxplots. </p>
```
sns.boxplot(x="body-style", y="price", data=df)
```
We see that the price distributions across the different body-style categories overlap significantly, so body-style would not be a good predictor of price. Let's examine "engine-location" and "price":
```
sns.boxplot(x="engine-location", y="price", data=df)
```
<p> Here we see that the price distributions for the two engine-location categories, front and rear, are distinct enough to consider engine-location a potentially good indicator of price. </p>
```
# drive-wheels
sns.boxplot(x="drive-wheels", y="price", data=df)
```
<p> Here we see that the price distribution differs across the drive-wheels categories, so drive-wheels may be an indicator of price. </p>
<h2> Descriptive statistics </h2>
<p> Let's first take a look at the variables using a describe method. </p>
<p> The <b> describe </b> function automatically computes basic statistics for all continuous variables. Any NaN values are automatically skipped in these statistics. </p>
This will show:
<ul>
<li> the count of that variable </li>
<li> the mean </li>
<li> the standard deviation (std) </li>
<li> the minimum value </li>
<li> the IQR (interquartile range: 25%, 50% and 75%) </li>
<li> the maximum value </li>
</ul>
```
df.describe()
# The default setting of "describe" skips variables of the object type.
# We can apply "describe" to the 'object'-type variables as follows:
df.describe(include=['object'])
```
<h3>Value Counts</h3>
Value counts are a good way of understanding how many units of each characteristic/variable we have.
We can apply the "value_counts" method to the 'drive-wheels' column.
Don't forget that "value_counts" only works on a pandas Series, not on a pandas DataFrame.
That's why we use single brackets "df['drive-wheels']" and not double brackets "df[['drive-wheels']]".
```
df['drive-wheels'].value_counts()
# we can convert the series to a dataframe:
df['drive-wheels'].value_counts().to_frame()
drive_wheels_counts = df['drive-wheels'].value_counts().to_frame()
drive_wheels_counts.rename(columns={'drive-wheels': 'value_counts'}, inplace=True)
drive_wheels_counts
# let's rename the index to 'drive-wheels':
drive_wheels_counts.index.name = 'drive-wheels'
drive_wheels_counts
# repeating the process for engine-location
engine_loc_counts = df['engine-location'].value_counts().to_frame()
engine_loc_counts.rename(columns={'engine-location': 'value_counts'}, inplace=True)
engine_loc_counts.index.name = 'engine-location'
engine_loc_counts.head()
```
<h2>Grouping</h2>
<p> The "groupby" method groups the data by different categories. The data is grouped based on one or several variables, and the analysis is performed on the individual groups. </p>
<p> For example, let's group by the "drive-wheels" variable. We'll see that there are 3 different categories of drive wheels. </p>
```
df['drive-wheels'].unique()
```
<p> If we want to know, on average, which type of drive-wheels is most valuable, we can group by "drive-wheels" and then average over the groups. </p>
<p> We can select the 'drive-wheels', 'body-style' and 'price' columns and then assign them to the "df_group_one" variable. </p>
```
df_group_one = df[['drive-wheels','body-style','price']]
# We can then calculate the average price for each of the different data categories
df_group_one = df_group_one.groupby(['drive-wheels'],as_index=False).mean()
df_group_one
```
<p> From our data, it seems rear-wheel-drive vehicles are, on average, the most expensive, while 4-wheel and front-wheel drive are roughly equal in price. </p>
<p> You can also group by multiple variables. For example, let's group by both 'drive-wheels' and 'body-style'. This groups the dataframe by the unique combinations of 'drive-wheels' and 'body-style'. We can store the results in the 'grouped_test1' variable. </p>
```
df_gptest = df[['drive-wheels','body-style','price']]
grouped_test1 = df_gptest.groupby(['drive-wheels','body-style'],as_index=False).mean()
grouped_test1
```
This grouped data is much easier to visualize when turned into a pivot table. A pivot table is like an Excel spreadsheet, with one variable along the columns and another along the rows. We can convert the dataframe to a pivot table using the "pivot" method to create a pivot table from the groups.
In this case, we'll keep drive-wheels as the rows of the table and pivot body-style to become the columns of the table:
```
grouped_pivot = grouped_test1.pivot(index='drive-wheels',columns='body-style')
grouped_pivot
```
Sometimes we won't have data for some of the pivot cells. We can fill these missing cells with the value 0, but any other value could also be used. It should be mentioned that missing data is quite a complex subject...
```
grouped_pivot = grouped_pivot.fillna(0) #fill missing values with 0
grouped_pivot
df_gptest2 = df[['body-style','price']]
grouped_test_bodystyle = df_gptest2.groupby(['body-style'],as_index= False).mean()
grouped_test_bodystyle
```
<h2>Data visualization</h2>
Let's use a heat map to visualize the relationship between body-style and price.
```
import matplotlib.pyplot as plt
%matplotlib inline
plt.pcolor(grouped_pivot, cmap='RdBu')
plt.colorbar()
plt.show()
```
<p> The heat map plots the target variable (price) proportional to colour against the 'drive-wheels' and 'body-style' variables on the vertical and horizontal axes, respectively. This lets us visualize how the price is related to 'drive-wheels' and 'body-style'. </p>
<p> The default labels convey no useful information for us. Let's change that: </p>
```
fig, ax = plt.subplots()
im = ax.pcolor(grouped_pivot, cmap='RdBu')
#label names
row_labels = grouped_pivot.columns.levels[1]
col_labels = grouped_pivot.index
#move ticks and labels to the center
ax.set_xticks(np.arange(grouped_pivot.shape[1]) + 0.5, minor=False)
ax.set_yticks(np.arange(grouped_pivot.shape[0]) + 0.5, minor=False)
#insert labels
ax.set_xticklabels(row_labels, minor=False)
ax.set_yticklabels(col_labels, minor=False)
#rotate label if too long
plt.xticks(rotation=90)
fig.colorbar(im)
plt.show()
```
<p> Visualization is very important in data science, and visualization packages offer great freedom</p>
<p> The main question we want to answer in this notebook is "Which characteristics have the biggest impact on the car price?". </p>
<p> To get a better measure of the important characteristics, we look at the correlation of these variables with the car price; in other words: how does the car price depend on each variable? </p>
<h2>Correlation and Causation</h2>
<p> <b> Correlation </b>: a measure of the extent of interdependence between variables. </p>
<p> <b> Causation </b>: the cause-and-effect relationship between two variables. </p>
<p> It is important to know the difference between the two, and that correlation does not imply causation. Determining correlation is much simpler than determining causation, since causation may require independent experimentation. </p>
<h3>Pearson Correlation</h3>
<p> The Pearson Correlation measures the linear dependence between two variables X and Y. </p>
<p> The resulting coefficient is a value between -1 and 1 inclusive, where: </p>
<ul>
<li> <b> 1 </b>: Total positive linear correlation. </li>
<li> <b> 0 </b>: No linear correlation; the two variables most likely do not affect each other. </li>
<li> <b> -1 </b>: Total negative linear correlation. </li>
</ul>
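The two extremes are easy to verify with a small NumPy sketch using made-up data (`np.corrcoef` returns the Pearson correlation matrix of its inputs):

```python
import numpy as np

x = np.array([1.0, 2.0, 3.0, 4.0])

# A perfect increasing linear relationship gives +1
r_pos = np.corrcoef(x, 2 * x + 1)[0, 1]

# A perfect decreasing linear relationship gives -1
r_neg = np.corrcoef(x, -3 * x)[0, 1]

print(r_pos, r_neg)  # ~1.0, ~-1.0
```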
<p> Pearson Correlation is the default method of the "corr" function. As before, we can calculate the Pearson Correlation of the 'int64' or 'float64' variables. </p>
```
df.corr()
```
<b> P-value </b>:
<p>The P-value is the probability that the correlation between these two variables is statistically significant. Normally, we choose a significance level of 0.05, meaning we are 95% confident that the correlation between the variables is significant. </p>
By convention, when
<ul>
<li> the p-value is $<$ 0.001: we say there is strong evidence that the correlation is significant. </li>
<li> the p-value is $<$ 0.05: there is moderate evidence that the correlation is significant. </li>
<li> the p-value is $<$ 0.1: there is weak evidence that the correlation is significant. </li>
<li> the p-value is $>$ 0.1: there is no evidence that the correlation is significant. </li>
</ul>
```
# We can get this information using the "stats" module from the "scipy" library
from scipy import stats
```
<h3>Wheel-base vs Price</h3>
Let's calculate the Pearson correlation coefficient and the P-value between 'wheel-base' and 'price'.
```
pearson_coef, p_value = stats.pearsonr(df['wheel-base'], df['price'])
print('Pearson coefficient', pearson_coef)
print('P-value', p_value)
```
The scientific notation in the result indicates that the value is either very large or very small.
In the case of 8.076488270733218e-20 it means:
8.076488270733218 times 10 to the power of minus 20 (which moves the decimal point 20 places to the left):
0.0000000000000000008076488270733218
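Since scientific notation is just an ordinary Python float, the p-value can be compared against the thresholds above directly; a tiny sketch:

```python
p_value = 8.076488270733218e-20  # the p-value printed above

# Scientific notation is an ordinary float, so threshold comparisons just work
print(p_value < 0.001)  # True -> strong evidence the correlation is significant
```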
<h5> Conclusion: </h5>
<p> Since the P-value is $<$ 0.001, the correlation between wheel-base and price is statistically significant, although the linear relationship is not extremely strong (~0.585) </p>
<h3>Horsepower vs Price</h3>
```
pearson_coef, p_value = stats.pearsonr(df['horsepower'], df['price'])
print('Pearson coefficient', pearson_coef)
print('P-value', p_value)
```
<h5> Conclusion: </h5>
<p> Since the P-value is $<$ 0.001, the correlation between horsepower and price is statistically significant, and the linear relationship is quite strong (~0.809, close to 1) </p>
<h3>Length vs Price</h3>
```
pearson_coef, p_value = stats.pearsonr(df['length'], df['price'])
print('Pearson coefficient', pearson_coef)
print('P-value', p_value)
```
<h5> Conclusion: </h5>
<p> Since the p-value is $<$ 0.001, the correlation between length and price is statistically significant, and the linear relationship is moderately strong (~0.691). </p>
<h3>Width vs Price</h3>
```
pearson_coef, p_value = stats.pearsonr(df['width'], df['price'])
print('Pearson coefficient', pearson_coef)
print('P-value', p_value)
```
##### Conclusion:
Since the p-value is < 0.001, the correlation between width and price is statistically significant, and the linear relationship is quite strong (~0.751).
<h2>ANOVA</h2>
<p> Analysis of Variance (ANOVA) is a statistical method used to test whether there are significant differences between the means of two or more groups. ANOVA returns two parameters: </p>
<p> <b> F-test score </b>: ANOVA assumes the means of all groups are equal, computes how much the actual means deviate from that assumption, and reports it as the F-test score. A larger score means there is a larger difference between the means. </p>
<p> <b> P-value </b>: the P-value tells how statistically significant our computed score value is. </p>
<p> If our price variable is strongly correlated with the variable we are analysing, we expect ANOVA to return a sizeable F-test score and a small P-value. </p>
<h3>Drive Wheels</h3>
<p> Uma vez que ANOVA analisa a diferença entre diferentes grupos da mesma variável, a função groupby será útil. Como o algoritmo ANOVA calcula a média dos dados automaticamente, não precisamos tirar a média antes. </p>
<p> Vamos ver se diferentes tipos de 'drive wheels' afetam o 'price', agrupamos os dados. </ p>
```
grouped_test2=df_gptest[['drive-wheels', 'price']].groupby(['drive-wheels'])
grouped_test2.head(2)
# We can obtain a group's values using the "get_group" method.
grouped_test2.get_group('4wd')['price']
# we can use the 'f_oneway' function in the 'stats' module to obtain the F-test score and the P-value
f_val, p_val = stats.f_oneway(grouped_test2.get_group('fwd')['price'], grouped_test2.get_group('rwd')['price'], grouped_test2.get_group('4wd')['price'])
print( "ANOVA: F=", f_val, ", P =", p_val)
```
This is a great result: a large F-test score showing a strong correlation and a P-value of almost 0 implying near-certain statistical significance. But does this mean all three tested groups are this highly correlated?
```
#### fwd and rwd
f_val, p_val = stats.f_oneway(grouped_test2.get_group('fwd')['price'], grouped_test2.get_group('rwd')['price'])
print( "ANOVA: F=", f_val, ", P =", p_val )
#### 4wd and rwd
f_val, p_val = stats.f_oneway(grouped_test2.get_group('4wd')['price'], grouped_test2.get_group('rwd')['price'])
print( "ANOVA: F=", f_val, ", P =", p_val)
#### 4wd and fwd
f_val, p_val = stats.f_oneway(grouped_test2.get_group('4wd')['price'], grouped_test2.get_group('fwd')['price'])
print("ANOVA: F=", f_val, ", P =", p_val)
```
<h3>Conclusion</h3>
<p> We now have a better idea of what our data look like and which variables are important to take into account when predicting the car price.</p>
<p> As we move on to building machine learning models to automate our analysis, feeding the model with variables that significantly affect our target variable will improve the model's prediction performance. </p>
# That's it!
### This is just one example of data analysis with Python
This notebook is part of a series of notebooks with content drawn from courses I have taken part in as a student, auditor, teacher, and teaching assistant... collected for future reference and for sharing ideas, solutions, and knowledge!
### Thank you very much for reading!
<h4>Anderson Cordeiro</h4>
You can find more content on my Medium<br> or get in touch with me :D
<a href="https://www.linkedin.com/in/andercordeiro/" target="_blank">[LinkedIn]</a>
<a href="https://medium.com/@andcordeiro" target="_blank">[Medium]</a>
```
import pandas as pd
# This is the Richmond USGS data gage
river_richmnd = pd.read_csv('JR_Richmond02037500.csv')
# dropna() returns a new frame rather than modifying in place, so assign the result back
river_richmnd = river_richmnd.dropna()
#Hurricane data for the basin - Names of Relevant Storms - This will be used for getting the storms from the larger set
JR_stormnames = pd.read_csv('gis_match.csv')
# Bring in the Big HURDAT data, from 1950 forward (satellites and data quality, etc.)
HURDAT = pd.read_csv('hurdatcleanva_1950_present.csv')
VA_JR_stormmatch = JR_stormnames.merge(HURDAT)
# Now the common storms for the James Basin have been created. We now have time and storms together for the basin
#checking some things about the data
# How many unique storms within the basin since 1950? 62 here, versus 53 on coast.noaa.gov's website.
#I think we are close enough here, digging may show some other storms, but I think we have at least captured the ones
#from NOAA
len(VA_JR_stormmatch['Storm Number'].unique())
#double ck the lat and long parameters
print(VA_JR_stormmatch['Lat'].min(),
VA_JR_stormmatch['Lon'].min(),
VA_JR_stormmatch['Lat'].max(),
VA_JR_stormmatch['Lon'].max())
#Make a csv of this data
VA_JR_stormmatch.to_csv('storms_in_basin.csv', sep=',',encoding = 'utf-8')
#names of storms
len(VA_JR_stormmatch['Storm Number'].unique())
VA_JR_stormmatch['Storm Number'].unique()
numbers = VA_JR_stormmatch['Storm Number']
# grab a storm from this list and look at the times
#Bill = pd.DataFrame(VA_JR_stormmatch['Storm Number'=='AL032003'])
storm = VA_JR_stormmatch[(VA_JR_stormmatch["Storm Number"] == 'AL061996')]
storm
# so this is the data for the selected storm's path through the basin (AL061996, plotted below as TS Fran)
# plotting for the USGS river Gage data
import matplotlib
import matplotlib.pyplot as plt
from climata.usgs import DailyValueIO
from datetime import datetime
from pandas.plotting import register_matplotlib_converters
import numpy as np
register_matplotlib_converters()
plt.style.use('ggplot')
plt.rcParams['figure.figsize'] = (20.0, 10.0)
# set parameters
nyears = 1
ndays = 365 * nyears
station_id = "02037500"
param_id = "00060"
datelist = pd.date_range(end=datetime.today(), periods=ndays).tolist()
#take an annual average for the river
annual_data = DailyValueIO(
start_date="1996-01-01",
end_date="1997-01-01",
station=station_id,
parameter=param_id,)
for series in annual_data:
flow = [r[1] for r in series.data]
si_flow_annual = np.asarray(flow) * 0.0283168
flow_mean = np.mean(si_flow_annual)
#now for the storm
dischg = DailyValueIO(
start_date="1996-09-03",
end_date="1996-09-17",
station=station_id,
parameter=param_id,)
#create lists of date-flow values
for series in dischg:
flow = [r[1] for r in series.data]
si_flow = np.asarray(flow) * 0.0283168
dates = [r[0] for r in series.data]
plt.plot(dates, si_flow)
plt.axhline(y=flow_mean, color='r', linestyle='-')
plt.xlabel('Date')
plt.ylabel('Discharge (m^3/s)')
plt.title("TS Fran - 1996 (Atlantic)")
plt.xticks(rotation='vertical')
plt.show()
max(si_flow)
percent_incr= (abs(max(si_flow)-flow_mean)/abs(flow_mean))*100
percent_incr
#take an annual average for the river
annual_data = DailyValueIO(
start_date="1996-03-01",
end_date="1996-10-01",
station=station_id,
parameter=param_id,)
for series in annual_data:
flow = [r[1] for r in series.data]
si_flow_annual = np.asarray(flow) * 0.0283168
flow_mean_season = np.mean(si_flow_annual)
print(abs(flow_mean-flow_mean_season))
```
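The factor 0.0283168 used in the cells above converts discharge from cubic feet per second (the units USGS parameter 00060 is reported in) to cubic metres per second. A quick check of the factor from the definition of the foot:

```python
# One foot is defined as exactly 0.3048 m, so one cubic foot is 0.3048**3 cubic metres
ft_to_m = 0.3048
cfs_to_cms = ft_to_m ** 3
print(round(cfs_to_cms, 7))
```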
<table class="ee-notebook-buttons" align="left">
<td><a target="_blank" href="https://github.com/giswqs/earthengine-py-notebooks/tree/master/NAIP/ndwi.ipynb"><img width=32px src="https://www.tensorflow.org/images/GitHub-Mark-32px.png" /> View source on GitHub</a></td>
<td><a target="_blank" href="https://nbviewer.jupyter.org/github/giswqs/earthengine-py-notebooks/blob/master/NAIP/ndwi.ipynb"><img width=26px src="https://upload.wikimedia.org/wikipedia/commons/thumb/3/38/Jupyter_logo.svg/883px-Jupyter_logo.svg.png" />Notebook Viewer</a></td>
<td><a target="_blank" href="https://mybinder.org/v2/gh/giswqs/earthengine-py-notebooks/master?filepath=NAIP/ndwi.ipynb"><img width=58px src="https://mybinder.org/static/images/logo_social.png" />Run in binder</a></td>
<td><a target="_blank" href="https://colab.research.google.com/github/giswqs/earthengine-py-notebooks/blob/master/NAIP/ndwi.ipynb"><img src="https://www.tensorflow.org/images/colab_logo_32px.png" /> Run in Google Colab</a></td>
</table>
## Install Earth Engine API
Install the [Earth Engine Python API](https://developers.google.com/earth-engine/python_install) and [geehydro](https://github.com/giswqs/geehydro). The **geehydro** Python package builds on the [folium](https://github.com/python-visualization/folium) package and implements several methods for displaying Earth Engine data layers, such as `Map.addLayer()`, `Map.setCenter()`, `Map.centerObject()`, and `Map.setOptions()`.
The following script checks if the geehydro package has been installed. If not, it will install geehydro, which automatically installs its dependencies, including earthengine-api and folium.
```
import subprocess
import sys

try:
    import geehydro
except ImportError:
    print('geehydro package not installed. Installing ...')
    # Use the current interpreter so the package lands in the right environment
    subprocess.check_call([sys.executable, '-m', 'pip', 'install', 'geehydro'])
```
Import libraries
```
import ee
import folium
import geehydro
```
Authenticate and initialize Earth Engine API. You only need to authenticate the Earth Engine API once.
```
try:
ee.Initialize()
except Exception as e:
ee.Authenticate()
ee.Initialize()
```
## Create an interactive map
This step creates an interactive map using [folium](https://github.com/python-visualization/folium). The default basemap is OpenStreetMap. Additional basemaps can be added using the `Map.setOptions()` function.
The optional basemaps can be `ROADMAP`, `SATELLITE`, `HYBRID`, `TERRAIN`, or `ESRI`.
```
Map = folium.Map(location=[40, -100], zoom_start=4)
Map.setOptions('HYBRID')
```
## Add Earth Engine Python script
```
collection = ee.ImageCollection('USDA/NAIP/DOQQ')
fromFT = ee.FeatureCollection('ft:1CLldB-ULPyULBT2mxoRNv7enckVF0gCQoD2oH7XP')
polys = fromFT.geometry()
centroid = polys.centroid()
lng, lat = centroid.getInfo()['coordinates']
# print("lng = {}, lat = {}".format(lng, lat))
# lng_lat = ee.Geometry.Point(lng, lat)
naip = collection.filterBounds(polys)
naip_2015 = naip.filterDate('2015-01-01', '2015-12-31')
ppr = naip_2015.mosaic().clip(polys)
# print(naip_2015.size().getInfo()) # count = 120
vis = {'bands': ['N', 'R', 'G']}
Map.setCenter(lng, lat, 10)
# Map.addLayer(naip_2015,vis)
Map.addLayer(ppr,vis)
# Map.addLayer(fromFT)
ndwi = ppr.normalizedDifference(['G', 'N'])
ndwiViz = {'min': 0, 'max': 1, 'palette': ['00FFFF', '0000FF']}
ndwiMasked = ndwi.updateMask(ndwi.gte(0.05))
ndwi_bin = ndwiMasked.gt(0)
Map.addLayer(ndwiMasked, ndwiViz)
patch_size = ndwi_bin.connectedPixelCount(256, True)
# Map.addLayer(patch_size)
patch_id = ndwi_bin.connectedComponents(ee.Kernel.plus(1), 256)
Map.addLayer(patch_id)
```
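The `normalizedDifference(['G', 'N'])` call above computes (G - N) / (G + N) per pixel. A minimal numpy sketch of the same formula on made-up band values (the arrays below are illustrative, not NAIP data):

```python
import numpy as np

# Synthetic green and near-infrared reflectance bands (illustrative values only)
green = np.array([[0.30, 0.10], [0.25, 0.05]])
nir = np.array([[0.10, 0.40], [0.05, 0.45]])

# NDWI = (G - N) / (G + N): water tends toward positive values,
# vegetation and bare soil toward negative ones
ndwi = (green - nir) / (green + nir)
print(ndwi)
```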
## Display Earth Engine data layers
```
Map.setControlVisibility(layerControl=True, fullscreenControl=True, latLngPopup=True)
Map
```
## Release the Kraken!
```
# The next library we're going to look at is called Kraken, which was developed by Université
# PSL in Paris. It's actually based on a slightly older code base, OCRopus. You can see how the
# flexible open-source licenses allow new ideas to grow by building upon older ideas. And, in
# this case, I fully support the idea that the Kraken - a mythical massive sea creature - is the
# natural progression of an octopus!
#
# What we are going to use Kraken for is to detect lines of text as bounding boxes in a given
# image. The biggest limitation of tesseract is the lack of a layout engine inside of it. Tesseract
# expects to be using fairly clean text, and gets confused if we don't crop out other artifacts.
# It's not bad, but Kraken can help us out by segmenting pages. Lets take a look.
# First, we'll take a look at the kraken module itself
import kraken
help(kraken)
# There isn't much of a discussion here, but there are a number of sub-modules that look
# interesting. I spent a bit of time on their website, and I think the pageseg module, which
# handles all of the page segmentation, is the one we want to use. Lets look at it
from kraken import pageseg
help(pageseg)
# So it looks like there are a few different functions we can call, and the segment
# function looks particularly appropriate. I love how expressive this library is on the
# documentation front -- I can see immediately that we are working with PIL.Image files,
# and the author has even indicated that we need to pass in either a binarized (e.g. '1')
# or grayscale (e.g. 'L') image. We can also see that the return value is a dictionary
# object with two keys, "text_direction" which will return to us a string of the
# direction of the text, and "boxes" which appears to be a list of tuples, where each
# tuple is a box in the original image.
#
# Lets try this on the image of text. I have a simple bit of text in a file called
# two_col.png which is from a newspaper on campus here
from PIL import Image
im=Image.open("readonly/two_col.png")
# Lets display the image inline
display(im)
# Lets now convert it to black and white and segment it up into lines with kraken
bounding_boxes=pageseg.segment(im.convert('1'))['boxes']
# And lets print those lines to the screen
print(bounding_boxes)
# Ok, pretty simple two column text and then a list of lists which are the bounding boxes of
# lines of that text. Lets write a little routine to try and see the effects a bit more
# clearly. I'm going to clean up my act a bit and write real documentation too, it's a good
# practice
def show_boxes(img):
'''Modifies the passed image to show a series of bounding boxes on an image as run by kraken
:param img: A PIL.Image object
:return img: The modified PIL.Image object
'''
# Lets bring in our ImageDraw object
from PIL import ImageDraw
# And grab a drawing object to annotate that image
drawing_object=ImageDraw.Draw(img)
# We can create a set of boxes using pageseg.segment
bounding_boxes=pageseg.segment(img.convert('1'))['boxes']
# Now lets go through the list of bounding boxes
for box in bounding_boxes:
# And just draw a nice rectangle
drawing_object.rectangle(box, fill = None, outline ='red')
# And to make it easy, lets return the image object
return img
# To test this, lets use display
display(show_boxes(Image.open("readonly/two_col.png")))
# Not bad at all! It's interesting to see that kraken isn't completely sure what to do with this
# two column format. In some cases, kraken has identified a line in just a single column, while
# in other cases kraken has spanned the line marker all the way across the page. Does this matter?
# Well, it really depends on our goal. In this case, I want to see if we can improve a bit on this.
#
# So we're going to go a bit off script here. While this week of lectures is about libraries, the
# goal of this last course is to give you confidence that you can apply your knowledge to actual
# programming tasks, even if the library you are using doesn't quite do what you want.
#
# I'd like to pause the video for the moment and collect your thoughts. Looking at the image above,
# with the two column example and red boxes, how do you think we might modify this image to improve
# kraken's ability to detect text lines?
# Thanks for sharing your thoughts, I'm looking forward to seeing the breadth of ideas that everyone
# in the course comes up with. Here's my partial solution -- while looking through the kraken docs on
# the pageseg module's segment() function I saw that there are a few parameters we can supply in order to improve
# segmentation. One of these is the black_colseps parameter. If set to True, kraken will assume that
# columns will be separated by black lines. This isn't our case here, but, I think we have all of the
# tools to go through and actually change the source image to have a black separator between columns.
#
# The first step is that I want to update the show_boxes() function. I'm just going to do a quick
# copy and paste from the above but add in the black_colseps=True parameter
def show_boxes(img):
'''Modifies the passed image to show a series of bounding boxes on an image as run by kraken
:param img: A PIL.Image object
:return img: The modified PIL.Image object
'''
# Lets bring in our ImageDraw object
from PIL import ImageDraw
# And grab a drawing object to annotate that image
drawing_object=ImageDraw.Draw(img)
# We can create a set of boxes using pageseg.segment
bounding_boxes=pageseg.segment(img.convert('1'), black_colseps=True)['boxes']
# Now lets go through the list of bounding boxes
for box in bounding_boxes:
# And just draw a nice rectangle
drawing_object.rectangle(box, fill = None, outline ='red')
# And to make it easy, lets return the image object
return img
# The next step is to think of the algorithm we want to apply to detect a white column separator.
# In experimenting a bit I decided that I only wanted to add the separator if the space was
# at least 25 pixels wide, which is roughly the width of a character, and six lines high. The
# width is easy, lets just make a variable
char_width=25
# The height is harder, since it depends on the height of the text. I'm going to write a routine
# to calculate the average height of a line
def calculate_line_height(img):
'''Calculates the average height of a line from a given image
:param img: A PIL.Image object
:return: The average line height in pixels
'''
# Lets get a list of bounding boxes for this image
bounding_boxes=pageseg.segment(img.convert('1'))['boxes']
# Each box is a tuple of (left, top, right, bottom) so the height is just bottom - top
# So lets just calculate this over the set of all boxes
height_accumulator=0
for box in bounding_boxes:
height_accumulator=height_accumulator+box[3]-box[1]
# this is a bit tricky, remember that we start counting at the upper left corner in PIL!
# now lets just return the average height
# lets change it to the nearest full pixel by making it an integer
return int(height_accumulator/len(bounding_boxes))
# And lets test this with the image with have been using
line_height=calculate_line_height(Image.open("readonly/two_col.png"))
print(line_height)
# Ok, so the average height of a line is 31.
# Now, we want to scan through the image - looking at each pixel in turn - to determine if there
# is a block of whitespace. How big of a block should we look for? That's a bit more of an art
# than a science. Looking at our sample image, I'm going to say an appropriate block should be
# one char_width wide, and six line_heights tall. But, I honestly just made this up by eyeballing
# the image, so I would encourage you to play with values as you explore.
# Lets create a new box called gap box that represents this area
gap_box=(0,0,char_width,line_height*6)
gap_box
# It seems we will want to have a function which, given a pixel in an image, can check to see
# if that pixel has whitespace to the right and below it. Essentially, we want to test to see
# if the pixel is the upper left corner of something that looks like the gap_box. If so, then
# we should insert a line to "break up" this box before sending to kraken
#
# Lets call this new function gap_check
def gap_check(img, location):
'''Checks the img in a given (x,y) location to see if it fits the description
of a gap_box
:param img: A PIL.Image file
:param location: A tuple (x,y) which is a pixel location in that image
:return: True if that fits the definition of a gap_box, otherwise False
'''
# Recall that we can get a pixel using the img.getpixel() function. It returns this value
# as a tuple of integers, one for each color channel. Our tools all work with binarized
# images (black and white), so we should just get one value. If the value is 0 it's a black
# pixel, if it's white then the value should be 255
#
# We're going to assume that the image is in the correct mode already, e.g. it has been
# binarized. The algorithm to check our bounding box is fairly easy: we have a single location
# which is our start and then we want to check all the pixels to the right of that location
# up to gap_box[2]
for x in range(location[0], location[0]+gap_box[2]):
# the height is similar, so lets iterate a y variable to gap_box[3]
for y in range(location[1], location[1]+gap_box[3]):
# we want to check if the pixel is white, but only if we are still within the image
if x < img.width and y < img.height:
# if the pixel is white we don't do anything, if it's black, we just want to
# finish and return False
if img.getpixel((x,y)) != 255:
return False
# If we have managed to walk all through the gap_box without finding any non-white pixels
# then we can return true -- this is a gap!
return True
# Alright, we have a function to check for a gap, called gap_check. What should we do once
# we find a gap? For this, lets just draw a line in the middle of it. Lets create a new function
def draw_sep(img,location):
'''Draws a line in img in the middle of the gap discovered at location. Note that
this doesn't draw the line in location, but draws it at the middle of a gap_box
starting at location.
:param img: A PIL.Image file
:param location: A tuple(x,y) which is a pixel location in the image
'''
# First lets bring in all of our drawing code
from PIL import ImageDraw
drawing_object=ImageDraw.Draw(img)
# next, lets decide what the middle means in terms of coordinates in the image
x1=location[0]+int(gap_box[2]/2)
# and our x2 is just the same thing, since this is a one pixel vertical line
x2=x1
# our starting y coordinate is just the y coordinate which was passed in, the top of the box
y1=location[1]
# but we want our final y coordinate to be the bottom of the box
y2=y1+gap_box[3]
drawing_object.rectangle((x1,y1,x2,y2), fill = 'black', outline ='black')
# and we don't have anything we need to return from this, because we modified the image
# Now, lets try it all out. This is pretty easy, we can just iterate through each pixel
# in the image, check if there is a gap, then insert a line if there is.
def process_image(img):
'''Takes in an image of text and adds black vertical bars to break up columns
:param img: A PIL.Image file
:return: A modified PIL.Image file
'''
# we'll start with a familiar iteration process
for x in range(img.width):
for y in range(img.height):
# check if there is a gap at this point
if (gap_check(img, (x,y))):
# then update image to one which has a separator drawn on it
draw_sep(img, (x,y))
# and for good measure we'll return the image we modified
return img
# Lets read in our test image and convert it through binarization
i=Image.open("readonly/two_col.png").convert("L")
i=process_image(i)
display(i)
#Note: This will take some time to run! Be patient!
# Not bad at all! The effect at the bottom of the image is a bit unexpected to me, but it makes
# sense. You can imagine that there are several ways we might try and control this. Lets see how
# this new image works when run through the kraken layout engine
display(show_boxes(i))
# Looks like that is pretty accurate, and fixes the problem we faced. Feel free to experiment
# with different settings for the gap heights and width and share in the forums. You'll notice though
# method we created is really quite slow, which is a bit of a problem if we wanted to use
# this on larger text. But I wanted to show you how you can mix your own logic and work with
# libraries you're using. Just because Kraken didn't work perfectly, doesn't mean we can't
# build something more specific to our use case on top of it.
#
# I want to end this lecture with a pause and to ask you to reflect on the code we've written
# here. We started this course with some pretty simple use of libraries, but now we're
# digging in deeper and solving problems ourselves with the help of these libraries. Before we
# go on to our last library, how well prepared do you think you are to take your python
# skills out into the wild?
```
## Comparing Image Data Structures
```
# OpenCV supports reading of images in most file formats, such as JPEG, PNG, and TIFF. Most image and
# video analysis requires converting images into grayscale first. This simplifies the image and reduces
# noise allowing for improved analysis. Let's write some code that reads an image of a person, Floyd
# Mayweather, and converts it into grayscale.
# First we will import the open cv package cv2
import cv2 as cv
# We'll load the floyd.jpg image
img = cv.imread('readonly/floyd.jpg')
# And we'll convert it to grayscale using the cvtColor function
gray = cv.cvtColor(img, cv.COLOR_BGR2GRAY)
# Now, before we get to the result, lets talk about docs. Just like tesseract, opencv is an external
# package written in C++, and the docs for python are really poor. This is unfortunately quite common
# when python is being used as a wrapper. Thankfully, the web docs for opencv are actually pretty good,
# so hit the website docs.opencv.org when you want to learn more about a particular function. In this
# case cvtColor converts from one color space to another, and we are converting our image to grayscale.
# Of course, we already know at least two different ways of doing this, using binarization and PIL
# color spaces conversions
# Lets inspect this object that has been returned.
import inspect
inspect.getmro(type(gray))
# We see that it is of type ndarray, which is a fundamental list type coming from the numerical
# python project. That's a bit surprising - up until this point we have been used to working with
# PIL.Image objects. OpenCV, however, wants to represent an image as a two dimensional sequence
# of bytes, and the ndarray, which stands for n dimensional array, is the ideal way to do this.
# Lets look at the array contents.
gray
# The array is shown here as a list of lists, where the inner lists are filled with integers.
# The dtype=uint8 definition indicates that each of the items in an array is an 8 bit unsigned
# integer, which is very common for black and white images. So this is a pixel by pixel definition
# of the image.
#
# The display package, however, doesn't know what to do with this image. So lets convert it
# into a PIL object to render it in the browser.
from PIL import Image
# PIL can take an array of data with a given color format and convert this into a PIL object.
# This is perfect for our situation, as the PIL color mode, "L" is just an array of luminance
# values in unsigned integers
image = Image.fromarray(gray, "L")
display(image)
# Lets talk a bit more about images for a moment. Numpy arrays are multidimensional. For
# instance, we can define an array in a single dimension:
import numpy as np
single_dim = np.array([25, 50 , 25, 10, 10])
# In an image, this is analogous to a single row of 5 pixels each in grayscale. But actually,
# all imaging libraries tend to expect at least two dimensions, a width and a height, and to
# show a matrix. So if we put the single_dim inside of another array, this would be a two
# dimensional array with one element in the height direction, and five in the width direction
double_dim = np.array([single_dim])
double_dim
# This should look pretty familiar, it's a lot like a list of lists! Lets see what this new
# two dimensional array looks like if we display it
display(Image.fromarray(double_dim, "L"))
# Pretty unexciting - it's just a little line. Five pixels in a row to be exact, of different
# levels of black. The numpy library has a nice attribute called shape that allows us to see how
# big an array is along each dimension. The shape attribute returns a tuple that shows the height of
# the image, by the width of the image
double_dim.shape
# Lets take a look at the shape of our initial image which we loaded into the img variable
img.shape
# This image has three dimensions! That's because it has a width, a height, and what's called
# a color depth. In this case, the color is represented as an array of three values. Lets take a
# look at the color of the first pixel
first_pixel=img[0][0]
first_pixel
# Here we see that the color value is provided in full RGB using an unsigned integer. This
# means that each color can have one of 256 values, and the total number of unique colors
# that can be represented by this data is 256 * 256 *256 which is roughly 16 million colors.
# We call this 24 bit color, which is 8+8+8.
#
# If you find yourself shopping for a television, you might notice that some expensive models
# are advertised as having 10 bit or even 12 bit panels. These are televisions where each of
# the red, green, and blue color channels are represented by 10 or 12 bits instead of 8. For
# ten bit panels this means that there are 1 billion colors capable, and 12 bit panels are
# capable of over 68 billion colors!
# We're not going to talk much more about color in this course, but it's a fun subject. Instead,
# lets go back to this array representation of images, because we can do some interesting things
# with this.
#
# One of the most common things to do with an ndarray is to reshape it -- to change the number
# of rows and columns that are represented so that we can do different kinds of operations.
# Here is our original two dimensional image
print("Original image")
print(gray)
# If we wanted to represent that as a one dimensional image, we just call reshape
print("New image")
# And reshape takes the image as the first parameter, and a new shape as the second
image1d=np.reshape(gray,(1,gray.shape[0]*gray.shape[1]))
print(image1d)
# So, why are we talking about these nested arrays of bytes, we were supposed to be talking
# about OpenCV as a library. Well, I wanted to show you that often libraries working on the
# same kind of principles, in this case images stored as arrays of bytes, are not representing
# data in the same way in their APIs. But, by exploring a bit you can learn how the internal
# representation of data is stored, and build routines to convert between formats.
#
# For instance, remember in the last lecture when we wanted to look for gaps in an image so
# that we could draw lines to feed into kraken? Well, we use PIL to do this, using getpixel()
# to look at individual pixels and see what the luminosity was, then ImageDraw.rectangle to
# actually fill in a black bar separator. This was a nice high level API, and let us write
# routines to do the work we wanted without having to understand too much about how the images
# were being stored. But it was computationally very slow.
#
# Instead, we could write the code to do this using matrix features within numpy. Lets take
# a look.
import cv2 as cv
# We'll load the 2 column image
img = cv.imread('readonly/two_col.png')
# And we'll convert it to grayscale using the cvtColor function
gray = cv.cvtColor(img, cv.COLOR_BGR2GRAY)
# Now, remember how slicing on a list works: if you have a list of numbers such as
# a=[0,1,2,3,4,5] then a[2:4] will return the sublist of numbers at positions 2 and 3,
# since the end index is exclusive. Don't forget that lists start indexing at 0!
# If we have a two dimensional array, we can slice out a smaller piece of that using the
# format a[2:4,1:3]. You can think of this as first slicing along the rows dimension, then
# in the columns dimension. So in this example, that would be a matrix of rows 2, and 3,
# and columns 1, and 2. Here's a look at our image.
gray[2:4,1:3]
# So we see that it is all white. We can use this as a "window" and move it around our
# our big image.
#
# Finally, the ndarray library has lots of matrix functions which are generally very fast
# to run. One that we want to consider in this case is count_nonzero(), which just returns
# the number of entries in the matrix which are not zero.
np.count_nonzero(gray[2:4,1:3])
# Ok, the last benefit of going to this low level approach to images is that we can change
# pixels very fast as well. Previously we were drawing rectangles and setting a fill and line
# width. This is nice if you want to do something like change the color of the fill from the
# line, or draw complex shapes. But we really just want a line here. That's really easy to
# do - we just want to change a number of luminosity values from 255 to 0.
#
# As an example, lets create a big white matrix
white_matrix=np.full((12,12),255,dtype=np.uint8)
display(Image.fromarray(white_matrix,"L"))
white_matrix
# looks pretty boring, it's just a giant white square we can't see. But if we want, we can
# easily color a column to be black
white_matrix[:,6]=np.full((1,12),0,dtype=np.uint8)
display(Image.fromarray(white_matrix,"L"))
white_matrix
# And that's exactly what we wanted to do. So, why do it this way, when it seems so much
# more low level? Really, the answer is speed. This paradigm of using matrices to store
# and manipulate bytes of data for images is much closer to how low level API and hardware
# developers think about storing files and bytes in memory.
#
# How much faster is it? Well, that's up to you to discover; there's an optional assignment
# for this week to convert our old code over into this new format, to compare both the
# readability and speed of the two different approaches.
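# As a hedged head start on that comparison (a sketch only, not the
# assignment solution; it assumes numpy and the standard timeit module):
# time a pixel-by-pixel loop against a single vectorized slice assignment.
import timeit
big = np.full((512, 512), 255, dtype=np.uint8)
def loop_version():
    for row in range(big.shape[0]):
        big[row, 256] = 0
def slice_version():
    big[:, 256] = 0
print("loop:  ", timeit.timeit(loop_version, number=100))
print("slice: ", timeit.timeit(slice_version, number=100))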
```
## OpenCV
```
# Ok, we're just about at the project for this course. If you reflect on the specialization
# as a whole you'll realize that you started with probably little or no understanding of python,
# progressed through the basic control structures and libraries included with the language
# with the help of a digital textbook, moved on to more high level representations of data
# and functions with objects, and now started to explore third party libraries that exist for
# python which allow you to manipulate and display images. This is quite an achievement!
#
# You have also no doubt found that as you have progressed the demands on you to engage in self-
# discovery have also increased. Where the first assignments were maybe straightforward, the
# ones in this week require you to struggle a bit more with planning and debugging code as
# you develop.
#
# But, you've persisted, and I'd like to share with you just one more set of features before
# we head over to a project. The OpenCV library contains mechanisms to do face detection on
# images. The technique used is based on Haar cascades, which is a machine learning approach.
# Now, we're not going to go into the machine learning bits, we have another specialization on
# Applied Data Science with Python which you can take after this if you're interested in that topic.
# But here we'll treat OpenCV like a black box.
#
# OpenCV comes with trained models for detecting faces, eyes, and smiles which we'll be using.
# You can train models for detecting other things - like hot dogs or flutes - and if you're
# interested in that I'd recommend you check out the OpenCV docs on how to train a cascade
# classifier: https://docs.opencv.org/3.4/dc/d88/tutorial_traincascade.html
# However, in this lecture we just want to use the current classifiers and see if we can detect
# portions of an image which are interesting.
#
# First step is to load opencv and the XML-based classifiers
import cv2 as cv
face_cascade = cv.CascadeClassifier('readonly/haarcascade_frontalface_default.xml')
eye_cascade = cv.CascadeClassifier('readonly/haarcascade_eye.xml')
# Ok, with the classifiers loaded, we now want to try and detect a face. Lets pull in the
# picture we played with last time
img = cv.imread('readonly/floyd.jpg')
# And we'll convert it to grayscale using the cvtColor function
gray = cv.cvtColor(img, cv.COLOR_BGR2GRAY)
# The next step is to use the face_cascade classifier. I'll let you go explore the docs if you
# would like to, but the norm is to use the detectMultiScale() function. This function returns
# the detected objects as a list of rectangles. The first parameter is an ndarray of the image.
faces = face_cascade.detectMultiScale(gray)
# And lets just print those faces out to the screen
faces
faces.tolist()[0]
# The resulting rectangles are in the format of (x,y,w,h) where x and y denote the upper
# left hand corner of the bounding box, and w and h give its width and height. We know
# how to handle this in PIL
from PIL import Image
# Lets create a PIL image object
pil_img=Image.fromarray(gray,mode="L")
# Now lets bring in our drawing object
from PIL import ImageDraw
# And lets create our drawing context
drawing=ImageDraw.Draw(pil_img)
# Now lets pull the rectangle out of the faces object
rec=faces.tolist()[0]
# Now we just draw a rectangle around the bounds
drawing.rectangle(rec, outline="white")
# And display
display(pil_img)
# So, not quite what we were looking for. What do you think went wrong?
# Well, a quick double check of the docs and it is apparent that OpenCV returns the coordinates
# as (x,y,w,h), while PIL.ImageDraw is looking for (x1,y1,x2,y2). Looks like an easy fix
# Wipe our old image
pil_img=Image.fromarray(gray,mode="L")
# Setup our drawing context
drawing=ImageDraw.Draw(pil_img)
# And draw the new box
drawing.rectangle((rec[0],rec[1],rec[0]+rec[2],rec[1]+rec[3]), outline="white")
# And display
display(pil_img)
# We see the face detection works pretty well on this image! Note that it's apparent that this is
# not head detection, but that the haarcascades file we used is looking for eyes and a mouth.
# Lets try this on something a bit more complex, lets read in our MSI recruitment image
img = cv.imread('readonly/msi_recruitment.gif')
# And lets take a look at that image
display(Image.fromarray(img))
# Whoa, what's that error about? It looks like there is an error on a line deep within the PIL
# Image.py file, and it is trying to call an internal private member called __array_interface__
# on the img object, but this object is None
#
# It turns out that the root of this error is that OpenCV can't work with Gif images. This is
# kind of a pain and unfortunate. But we know how to fix that right? One way is that we could
# just open this in PIL and then save it as a png, then open that in opencv.
#
# Lets use PIL to open our image
pil_img=Image.open('readonly/msi_recruitment.gif')
# now lets convert it to greyscale for opencv, and get the bytestream
open_cv_version=pil_img.convert("L")
# now lets just write that to a file
open_cv_version.save("msi_recruitment.png")
# Ok, now that the conversion of format is done, lets try reading this back into opencv
cv_img=cv.imread('msi_recruitment.png')
# We don't need to color convert this, because we saved it as grayscale
# lets try and detect faces in that image
faces = face_cascade.detectMultiScale(cv_img)
# Now, we still have our PIL color version in a gif
pil_img=Image.open('readonly/msi_recruitment.gif')
# Set our drawing context
drawing=ImageDraw.Draw(pil_img)
# For each item in faces, lets surround it with a red box
for x,y,w,h in faces:
    # That might be new syntax for you! Recall that faces is a list of rectangles in (x,y,w,h)
    # format, that is, a list of lists. Instead of having to do an iteration and then manually
    # pull out each item, we can use tuple unpacking to pull out individual items in the sublist
    # directly to variables. A really nice python feature
    #
    # Now we just need to draw our box
    drawing.rectangle((x,y,x+w,y+h), outline="white")
display(pil_img)
# What happened here!? We see that we have detected faces, and that we have drawn boxes
# around those faces on the image, but that the colors have gone all weird! This, it turns
# out, has to do with color limitations for gif images. In short, a gif image has a very
# limited number of colors. This is called a color palette after the palette artists
# use to mix paints. For gifs the palette can only be 256 colors -- but they can be *any*
# 256 colors. When a new color is introduced, it has to take the space of an old color.
# In this case, PIL adds white to the palette but doesn't know which color to replace and
# thus messes up the image.
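# A quick self-contained illustration of the palette idea (this demo image
# is ours, not one of the lecture files): convert a tiny RGB image to "P"
# mode and inspect its palette.
demo_p = Image.new("RGB", (4, 4), (255, 0, 0)).convert("P")
print(demo_p.mode)                    # P
print(len(demo_p.getpalette()) // 3)  # number of palette entries, at most 256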
#
# Who knew there was so much to learn about image formats? We can see what mode the image
# is in with the .mode attribute
pil_img.mode
# We can see a list of modes in the PILLOW documentation, and they correspond with the
# color spaces we have been using. For the moment though, lets change back to RGB, which
# represents color as a three byte tuple instead of in a palette.
# Lets read in the image
pil_img=Image.open('readonly/msi_recruitment.gif')
# Lets convert it to RGB mode
pil_img = pil_img.convert("RGB")
# And lets print out the mode
pil_img.mode
# Ok, now lets go back to drawing rectangles. Lets get our drawing object
drawing=ImageDraw.Draw(pil_img)
# And iterate through the faces sequence, tuple unpacking as we go
for x,y,w,h in faces:
    # And remember this is width and height so we have to add those appropriately.
    drawing.rectangle((x,y,x+w,y+h), outline="white")
display(pil_img)
# Awesome! We managed to detect a bunch of faces in that image. Looks like we have missed
# four faces. In the machine learning world we would call these false negatives - something
# which the machine thought was not a face (so a negative), but that it was incorrect on.
# Consequently, we would call the actual faces that were detected as true positives -
# something that the machine thought was a face and it was correct on. This leaves us with
# false positives - something the machine thought was a face but it wasn't. We see there are
# two of these in the image, picking up shadow patterns or textures in shirts and matching
# them with the haarcascades. Finally, we have true negatives, or the set of all possible
# rectangles the machine learning classifier could consider where it correctly indicated that
# the result was not a face. In this case there are many many true negatives.
# There are a few ways we could try and improve this, and really, it requires a lot of
# experimentation to find good values for a given image. First, lets create a function
# which will plot rectanges for us over the image
def show_rects(faces):
    # Lets read in our gif and convert it
    pil_img=Image.open('readonly/msi_recruitment.gif').convert("RGB")
    # Set our drawing context
    drawing=ImageDraw.Draw(pil_img)
    # And plot all of the rectangles in faces
    for x,y,w,h in faces:
        drawing.rectangle((x,y,x+w,y+h), outline="white")
    # Finally lets display this
    display(pil_img)
# Ok, first up, we could try and binarize this image. It turns out that opencv has a built in
# binarization function called threshold(). You simply pass in the image, the midpoint, and
# the maximum value, as well as a flag which indicates whether the threshold should be
# binary or something else. Lets try this.
cv_img_bin=cv.threshold(img,120,255,cv.THRESH_BINARY)[1] # returns a tuple, we want the second value
# Now do the actual face detection
faces = face_cascade.detectMultiScale(cv_img_bin)
# Now lets see the results
show_rects(faces)
# That's kind of interesting. Not better, but we do see that there is one false positive
# towards the bottom, where the classifier detected the sunglasses as eyes and the dark shadow
# line below as a mouth.
#
# If you're following in the notebook with this video, why don't you pause things and try a
# few different parameters for the thresholding value?
# The detectMultiScale() function from OpenCV also has a couple of parameters. The first of
# these is the scale factor. The scale factor changes the size of rectangles which are
# considered against the model, that is, the haarcascades XML file. You can think of it as if
# it were changing the size of the rectangles which are on the screen.
#
# Lets experiment with the scale factor. Usually it's a small value, lets try 1.05
faces = face_cascade.detectMultiScale(cv_img,1.05)
# Show those results
show_rects(faces)
# Now lets also try 1.15
faces = face_cascade.detectMultiScale(cv_img,1.15)
# Show those results
show_rects(faces)
# Finally lets also try 1.25
faces = face_cascade.detectMultiScale(cv_img,1.25)
# Show those results
show_rects(faces)
# We can see that as we change the scale factor we change the number of true and
# false positives and negatives. With the scale set to 1.05, we have 7 true positives,
# which are correctly identified faces, 3 false negatives, which are faces which
# are there but not detected, and 3 false positives, which are non-faces that
# opencv thinks are faces. When we change this to 1.15 we lose the false positives but
# also lose one of the true positives, the person to the right wearing a hat. And
# when we change this to 1.25 we lose more true positives as well.
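# As a hedged aside (plugging in the counts we just eyeballed for scale
# 1.05: 7 true positives, 3 false negatives, 3 false positives), these
# counts translate into the standard precision and recall metrics:
tp, fn, fp = 7, 3, 3
precision = tp / (tp + fp)  # fraction of detections that really are faces
recall = tp / (tp + fn)     # fraction of real faces we actually detected
print(precision, recall)    # 0.7 0.7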
#
# This is actually a really interesting phenomenon in machine learning and artificial
# intelligence. There is a trade off between not only how accurate a model is, but how
# the inaccuracy actually happens. Which of these three models do you think is best?
# Well, the answer to that question is really, "it depends". It depends why you are trying
# to detect faces, and what you are going to do with them. If you think these issues
# are interesting, you might want to check out the Applied Data Science with Python
# specialization Michigan offers on Coursera.
#
# Ok, beyond an opportunity to advertise, did you notice anything else that happened when
# we changed the scale factor? It's subtle, but the processing took longer at smaller
# scale factors. This is because more subimages are being considered
# for these scales. This could also affect which method we might use.
#
# Jupyter has nice support for timing commands. You might have seen this before, a line
# that starts with a percentage sign in jupyter is called a "magic function". This isn't
# normal python - it's actually a shorthand way of writing a function which Jupyter
# has predefined. It looks a lot like the decorators we talked about in a previous
# lecture, but the magic functions were around long before decorators were part of the
# python language. One of the built-in magic functions in jupyter is called timeit, and this
# runs a piece of python code repeatedly and tells you the average speed it
# took to complete.
#
# Lets time the speed of detectmultiscale when using a scale of 1.05
%timeit face_cascade.detectMultiScale(cv_img,1.05)
# Ok, now lets compare that to the speed at scale = 1.15
%timeit face_cascade.detectMultiScale(cv_img,1.15)
# You can see that this is a dramatic difference, roughly two and a half times slower
# when using the smaller scale!
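# As an aside, outside of Jupyter the standard-library timeit module gives
# the same kind of measurement without magic functions. A self-contained
# sketch (timing a trivial expression, not the face detector):
import timeit
print(timeit.timeit(lambda: sum(range(1000)), number=10_000))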
#
# This wraps up our discussion of detecting faces in opencv. You'll see that, like OCR, this
# is not a foolproof process. But we can build on the work others have done in machine learning
# and leverage powerful libraries to bring us closer to building a turn key python-based
# solution. Remember that the detection mechanism isn't specific to faces, that's just the
# haarcascades training data we used. On the web you'll be able to find other training data
# to detect other objects, including eyes, animals, and so forth.
```
## More Jupyter Widgets
```
# One of the nice things about using the Jupyter notebook systems is that there is a
# rich set of contributed plugins that seek to extend this system. In this lecture I
# want to introduce you to one such plugin, called ipywebrtc. WebRTC is a fairly new
# protocol for real time communication on the web. Yup, I'm talking about chatting.
# The widget brings this to the Jupyter notebook system. Lets take a look.
#
# First, lets import from this library two different classes which we'll use in a
# demo, one for the camera and one for images.
from ipywebrtc import CameraStream, ImageRecorder
# Then lets take a look at the camera stream object
help(CameraStream)
# We see from the docs that it's easy to get a camera facing the user, and we can have
# the audio on or off. We don't need audio for this demo, so lets create a new camera
# instance
camera = CameraStream.facing_user(audio=False)
# The next object we want to look at is the ImageRecorder
help(ImageRecorder)
# The image recorder lets us actually grab images from the camera stream. There are features
# for downloading and using the image as well. We see that the default format is a png file.
# Lets hook up the ImageRecorder to our stream
image_recorder = ImageRecorder(stream=camera)
# Now, the docs are a little unclear how to use this within Jupyter, but if we call the
# download() function it will actually store the results of the camera which is hooked up
# in image_recorder.image. Lets try it out
# First, lets tell the recorder to start capturing data
image_recorder.recording=True
# Now lets download the image
image_recorder.download()
# Then lets inspect the type of the image
type(image_recorder.image)
# Ok, the object that it stores is an ipywidgets.widgets.widget_media.Image. How do we do
# something useful with this? Well, an inspection of the object shows that there is a handy
# value field which actually holds the bytes behind the image. And we know how to display
# those.
# Lets import PIL Image
import PIL.Image
# And lets import io
import io
# And now lets create a PIL image from the bytes
img = PIL.Image.open(io.BytesIO(image_recorder.image.value))
# And render it to the screen
display(img)
# Great, you see a picture! Hopefully you are following along in one of the notebooks
# and have been able to try this out for yourself!
#
# What can you do with this? This is a great way to get started with a bit of computer vision.
# You already know how to identify a face in the webcam picture, or try and capture text
# from within the picture. With OpenCV there are any number of other things you can do, simply
# with a webcam, the Jupyter notebooks, and python!
```
```
# @title Copyright & License (click to expand)
# Copyright 2021 Google LLC
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
```
# Vertex Model Monitoring
<table align="left">
<td>
<a href="https://console.cloud.google.com/ai-platform/notebooks/deploy-notebook?name=Model%20Monitoring&download_url=https%3A%2F%2Fraw.githubusercontent.com%2FGoogleCloudPlatform%2Fai-platform-samples%2Fmaster%2Fai-platform-unified%2Fnotebooks%2Fofficial%2Fmodel_monitoring%2Fmodel_monitoring.ipynb">
<img src="https://cloud.google.com/images/products/ai/ai-solutions-icon.svg" alt="Google Cloud Notebooks"> Open in GCP Notebooks
</a>
</td>
<td>
<a href="https://colab.research.google.com/github/GoogleCloudPlatform/ai-platform-samples/blob/master/ai-platform-unified/notebooks/official/model_monitoring/model_monitoring.ipynb">
<img src="https://cloud.google.com/ml-engine/images/colab-logo-32px.png" alt="Colab logo"> Open in Colab
</a>
</td>
<td>
<a href="https://github.com/GoogleCloudPlatform/ai-platform-samples/blob/master/ai-platform-unified/notebooks/official/model_monitoring/model_monitoring.ipynb">
<img src="https://cloud.google.com/ml-engine/images/github-logo-32px.png" alt="GitHub logo">
View on GitHub
</a>
</td>
</table>
## Overview
### What is Model Monitoring?
Modern applications rely on a well established set of capabilities to monitor the health of their services. Examples include:
* software versioning
* rigorous deployment processes
* event logging
* alerting/notification of situations requiring intervention
* on-demand and automated diagnostic tracing
* automated performance and functional testing
You should be able to manage your ML services with the same degree of power and flexibility with which you can manage your applications. That's what MLOps is all about - managing ML services with the best practices Google and the broader computing industry have learned from generations of experience deploying well engineered, reliable, and scalable services.
Model monitoring is only one piece of the MLOps puzzle - it helps answer the following questions:
* How well do recent service requests match the training data used to build your model? This is called **training-serving skew**.
* How significantly are service requests evolving over time? This is called **drift detection**.
If production traffic differs from training data, or varies substantially over time, that's likely to impact the quality of the answers your model produces. When that happens, you'd like to be alerted automatically and responsively, so that **you can anticipate problems before they affect your customer experiences or your revenue streams**.
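To make the skew idea concrete, here's a minimal self-contained sketch (the function names and the simple max-gap statistic are illustrative assumptions, not the algorithm the monitoring service actually uses) that compares a categorical feature's training-time distribution against recent serving traffic, using the operating-system counts that appear later in this notebook's utility distributions:

```python
# Hedged sketch: normalize two count dictionaries into distributions and
# flag skew when the largest per-category gap exceeds a threshold.
def normalize(counts):
    total = sum(counts.values())
    return {k: v / total for k, v in counts.items()}

def max_gap(train_counts, serving_counts):
    p, q = normalize(train_counts), normalize(serving_counts)
    return max(abs(p.get(k, 0.0) - q.get(k, 0.0)) for k in set(p) | set(q))

train = {"IOS": 3980, "ANDROID": 3798, "null": 253}  # training-time counts
serving = {"IOS": 10, "ANDROID": 85, "null": 5}      # recent request counts
print(max_gap(train, serving) > 0.001)               # True -> would trigger an alert
```

The real service computes richer statistics, but the shape of the question is the same: how far apart are the two distributions, and is that distance above your alerting threshold?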
### Objective
In this notebook, you will learn how to...
* deploy a pre-trained model
* configure model monitoring
* generate some artificial traffic
* understand how to interpret the statistics, visualizations, and other data reported by the model monitoring feature
### Costs
This tutorial uses billable components of Google Cloud:
* Vertex AI
* BigQuery
Learn about [Vertex AI
pricing](https://cloud.google.com/vertex-ai/pricing) and [Cloud Storage
pricing](https://cloud.google.com/storage/pricing), and use the [Pricing
Calculator](https://cloud.google.com/products/calculator/)
to generate a cost estimate based on your projected usage.
### The example model
The model you'll use in this notebook is based on [this blog post](https://cloud.google.com/blog/topics/developers-practitioners/churn-prediction-game-developers-using-google-analytics-4-ga4-and-bigquery-ml). The idea behind this model is that your company has extensive log data describing how your game users have interacted with the site. The raw data contains the following categories of information:
- identity - unique player identity numbers
- demographic features - information about the player, such as the geographic region in which a player is located
- behavioral features - counts of the number of times a player has triggered certain game events, such as reaching a new level
- churn propensity - this is the label or target feature; it provides an estimated probability that this player will churn, i.e. stop being an active player.
The blog article referenced above explains how to use BigQuery to store the raw data, pre-process it for use in machine learning, and train a model. Because this notebook focuses on model monitoring, rather than training models, you're going to reuse a pre-trained version of this model, which has been exported to Google Cloud Storage. In the next section, you will set up your environment and import this model into your own project.
## Before you begin
### Setup your dependencies
```
import os
import sys
assert sys.version_info.major == 3, "This notebook requires Python 3."
# Google Cloud Notebook requires dependencies to be installed with '--user'
IS_GOOGLE_CLOUD_NOTEBOOK = os.path.exists("/opt/deeplearning/metadata/env_version")
USER_FLAG = ""
if IS_GOOGLE_CLOUD_NOTEBOOK:
    USER_FLAG = "--user"
if 'google.colab' in sys.modules:
    from google.colab import auth
    auth.authenticate_user()
# Install Python package dependencies.
! pip3 install {USER_FLAG} --quiet --upgrade google-api-python-client google-auth-oauthlib \
    google-auth-httplib2 oauth2client requests \
    google-cloud-aiplatform google-cloud-storage==1.32.0
if not os.getenv("IS_TESTING"):
    # Automatically restart kernel after installs
    import IPython
    app = IPython.Application.instance()
    app.kernel.do_shutdown(True)
```
### Set up your Google Cloud project
**The following steps are required, regardless of your notebook environment.**
1. [Select or create a Google Cloud project](https://console.cloud.google.com/cloud-resource-manager). When you first create an account, you get a $300 free credit towards your compute/storage costs.
1. Enter your project id in the first line of the cell below.
1. [Make sure that billing is enabled for your project](https://cloud.google.com/billing/docs/how-to/modify-project).
1. If you are running this notebook locally, you will need to install the [Cloud SDK](https://cloud.google.com/sdk).
1. You'll use the *gcloud* command throughout this notebook. In the following cell, enter your project name and run the cell to authenticate yourself with Google Cloud and initialize your *gcloud* configuration settings.
**Model monitoring is currently supported in regions us-central1, europe-west4, asia-east1, and asia-southeast1. To keep things simple for this lab, we're going to use region us-central1 for all our resources (BigQuery training data, Cloud Storage bucket, model and endpoint locations, etc.). You can use any supported region, so long as all resources are co-located.**
```
# Import globally needed dependencies here, after kernel restart.
import copy
import numpy as np
import os
import pprint as pp
import random
import sys
import time
PROJECT_ID = "[your-project-id]" # @param {type:"string"}
REGION = "us-central1" # @param {type:"string"}
SUFFIX = "aiplatform.googleapis.com"
API_ENDPOINT = f"{REGION}-{SUFFIX}"
PREDICT_API_ENDPOINT = f"{REGION}-prediction-{SUFFIX}"
if os.getenv("IS_TESTING"):
    !gcloud --quiet components install beta
    !gcloud --quiet components update
!gcloud config set project $PROJECT_ID
!gcloud config set ai/region $REGION
```
### Login to your Google Cloud account and enable AI services
```
# The Google Cloud Notebook product has specific requirements
IS_GOOGLE_CLOUD_NOTEBOOK = os.path.exists("/opt/deeplearning/metadata/env_version")
# If on Google Cloud Notebooks, then don't execute this code
if not IS_GOOGLE_CLOUD_NOTEBOOK:
    if "google.colab" in sys.modules:
        from google.colab import auth as google_auth
        google_auth.authenticate_user()
    # If you are running this notebook locally, replace the string below with the
    # path to your service account key and run this cell to authenticate your GCP
    # account.
    elif not os.getenv("IS_TESTING"):
        %env GOOGLE_APPLICATION_CREDENTIALS ''
!gcloud services enable aiplatform.googleapis.com
```
### Define utilities
Run the following cells to define some utility functions and distributions used later in this notebook. Although these utilities are not critical to understand the main concepts, feel free to expand the cells
in this section if you're curious or want to dive deeper into how some of your API requests are made.
```
# @title Utility imports and constants
from google.cloud.aiplatform_v1beta1.services.endpoint_service import \
EndpointServiceClient
from google.cloud.aiplatform_v1beta1.services.job_service import \
JobServiceClient
from google.cloud.aiplatform_v1beta1.services.prediction_service import \
PredictionServiceClient
from google.cloud.aiplatform_v1beta1.types.io import BigQuerySource
from google.cloud.aiplatform_v1beta1.types.model_deployment_monitoring_job import (
ModelDeploymentMonitoringJob, ModelDeploymentMonitoringObjectiveConfig,
ModelDeploymentMonitoringScheduleConfig)
from google.cloud.aiplatform_v1beta1.types.model_monitoring import (
ModelMonitoringAlertConfig, ModelMonitoringObjectiveConfig,
SamplingStrategy, ThresholdConfig)
from google.cloud.aiplatform_v1beta1.types.prediction_service import \
PredictRequest
from google.protobuf import json_format
from google.protobuf.duration_pb2 import Duration
from google.protobuf.struct_pb2 import Value
# This is the default value at which you would like the monitoring function to trigger an alert.
# In other words, this value fine tunes the alerting sensitivity. This threshold can be customized
# on a per feature basis but this is the global default setting.
DEFAULT_THRESHOLD_VALUE = 0.001
# @title Utility functions
def create_monitoring_job(objective_configs):
    # Create sampling configuration.
    random_sampling = SamplingStrategy.RandomSampleConfig(sample_rate=LOG_SAMPLE_RATE)
    sampling_config = SamplingStrategy(random_sample_config=random_sampling)
    # Create schedule configuration.
    duration = Duration(seconds=MONITOR_INTERVAL)
    schedule_config = ModelDeploymentMonitoringScheduleConfig(monitor_interval=duration)
    # Create alerting configuration.
    emails = [USER_EMAIL]
    email_config = ModelMonitoringAlertConfig.EmailAlertConfig(user_emails=emails)
    alerting_config = ModelMonitoringAlertConfig(email_alert_config=email_config)
    # Create the monitoring job.
    endpoint = f"projects/{PROJECT_ID}/locations/{REGION}/endpoints/{ENDPOINT_ID}"
    predict_schema = ""
    analysis_schema = ""
    job = ModelDeploymentMonitoringJob(
        display_name=JOB_NAME,
        endpoint=endpoint,
        model_deployment_monitoring_objective_configs=objective_configs,
        logging_sampling_strategy=sampling_config,
        model_deployment_monitoring_schedule_config=schedule_config,
        model_monitoring_alert_config=alerting_config,
        predict_instance_schema_uri=predict_schema,
        analysis_instance_schema_uri=analysis_schema,
    )
    options = dict(api_endpoint=API_ENDPOINT)
    client = JobServiceClient(client_options=options)
    parent = f"projects/{PROJECT_ID}/locations/{REGION}"
    response = client.create_model_deployment_monitoring_job(
        parent=parent, model_deployment_monitoring_job=job
    )
    print("Created monitoring job:")
    print(response)
    return response

def get_thresholds(default_thresholds, custom_thresholds):
    thresholds = {}
    default_threshold = ThresholdConfig(value=DEFAULT_THRESHOLD_VALUE)
    for feature in default_thresholds.split(","):
        feature = feature.strip()
        thresholds[feature] = default_threshold
    for custom_threshold in custom_thresholds.split(","):
        pair = custom_threshold.split(":")
        if len(pair) != 2:
            print(f"Invalid custom skew threshold: {custom_threshold}")
            return
        feature, value = pair
        thresholds[feature] = ThresholdConfig(value=float(value))
    return thresholds

def get_deployed_model_ids(endpoint_id):
    client_options = dict(api_endpoint=API_ENDPOINT)
    client = EndpointServiceClient(client_options=client_options)
    parent = f"projects/{PROJECT_ID}/locations/{REGION}"
    response = client.get_endpoint(name=f"{parent}/endpoints/{endpoint_id}")
    model_ids = []
    for model in response.deployed_models:
        model_ids.append(model.id)
    return model_ids

def set_objectives(model_ids, objective_template):
    # Use the same objective config for all models.
    objective_configs = []
    for model_id in model_ids:
        objective_config = copy.deepcopy(objective_template)
        objective_config.deployed_model_id = model_id
        objective_configs.append(objective_config)
    return objective_configs

def send_predict_request(endpoint, input):
    client_options = {"api_endpoint": PREDICT_API_ENDPOINT}
    client = PredictionServiceClient(client_options=client_options)
    params = {}
    params = json_format.ParseDict(params, Value())
    request = PredictRequest(endpoint=endpoint, parameters=params)
    inputs = [json_format.ParseDict(input, Value())]
    request.instances.extend(inputs)
    response = client.predict(request)
    return response

def list_monitoring_jobs():
    client_options = dict(api_endpoint=API_ENDPOINT)
    parent = f"projects/{PROJECT_ID}/locations/{REGION}"
    client = JobServiceClient(client_options=client_options)
    response = client.list_model_deployment_monitoring_jobs(parent=parent)
    print(response)

def pause_monitoring_job(job):
    client_options = dict(api_endpoint=API_ENDPOINT)
    client = JobServiceClient(client_options=client_options)
    response = client.pause_model_deployment_monitoring_job(name=job)
    print(response)

def delete_monitoring_job(job):
    client_options = dict(api_endpoint=API_ENDPOINT)
    client = JobServiceClient(client_options=client_options)
    response = client.delete_model_deployment_monitoring_job(name=job)
    print(response)
# @title Utility distributions
# This cell contains parameters enabling us to generate realistic test data that closely
# models the feature distributions found in the training data.
DAYOFWEEK = {1: 1040, 2: 1223, 3: 1352, 4: 1217, 5: 1078, 6: 1011, 7: 1110}
LANGUAGE = {
"en-us": 4807,
"en-gb": 678,
"ja-jp": 419,
"en-au": 310,
"en-ca": 299,
"de-de": 147,
"en-in": 130,
"en": 127,
"fr-fr": 94,
"pt-br": 81,
"es-us": 65,
"zh-tw": 64,
"zh-hans-cn": 55,
"es-mx": 53,
"nl-nl": 37,
"fr-ca": 34,
"en-za": 29,
"vi-vn": 29,
"en-nz": 29,
"es-es": 25,
}
OS = {"IOS": 3980, "ANDROID": 3798, "null": 253}
MONTH = {6: 3125, 7: 1838, 8: 1276, 9: 1718, 10: 74}
COUNTRY = {
"United States": 4395,
"India": 486,
"Japan": 450,
"Canada": 354,
"Australia": 327,
"United Kingdom": 303,
"Germany": 144,
"Mexico": 102,
"France": 97,
"Brazil": 93,
"Taiwan": 72,
"China": 65,
"Saudi Arabia": 49,
"Pakistan": 48,
"Egypt": 46,
"Netherlands": 45,
"Vietnam": 42,
"Philippines": 39,
"South Africa": 38,
}
# Means and standard deviations for numerical features...
MEAN_SD = {
"julianday": (204.6, 34.7),
"cnt_user_engagement": (30.8, 53.2),
"cnt_level_start_quickplay": (7.8, 28.9),
"cnt_level_end_quickplay": (5.0, 16.4),
"cnt_level_complete_quickplay": (2.1, 9.9),
"cnt_level_reset_quickplay": (2.0, 19.6),
"cnt_post_score": (4.9, 13.8),
"cnt_spend_virtual_currency": (0.4, 1.8),
"cnt_ad_reward": (0.1, 0.6),
"cnt_challenge_a_friend": (0.0, 0.3),
"cnt_completed_5_levels": (0.1, 0.4),
"cnt_use_extra_steps": (0.4, 1.7),
}
DEFAULT_INPUT = {
"cnt_ad_reward": 0,
"cnt_challenge_a_friend": 0,
"cnt_completed_5_levels": 1,
"cnt_level_complete_quickplay": 3,
"cnt_level_end_quickplay": 5,
"cnt_level_reset_quickplay": 2,
"cnt_level_start_quickplay": 6,
"cnt_post_score": 34,
"cnt_spend_virtual_currency": 0,
"cnt_use_extra_steps": 0,
"cnt_user_engagement": 120,
"country": "Denmark",
"dayofweek": 3,
"julianday": 254,
"language": "da-dk",
"month": 9,
"operating_system": "IOS",
"user_pseudo_id": "104B0770BAE16E8B53DF330C95881893",
}
```
## Import your model
The churn propensity model you'll be using in this notebook has been trained in BigQuery ML and exported to a Google Cloud Storage bucket. This illustrates how easily you can export a trained model and move it from one cloud service to another.
Run the next cell to import this model into your project. **If you've already imported your model, you can skip this step.**
```
MODEL_NAME = "churn"
IMAGE = "us-docker.pkg.dev/cloud-aiplatform/prediction/tf2-cpu.2-4:latest"
ARTIFACT = "gs://mco-mm/churn"
output = !gcloud --quiet beta ai models upload --container-image-uri=$IMAGE --artifact-uri=$ARTIFACT --display-name=$MODEL_NAME --format="value(model)"
MODEL_ID = output[1].split("/")[-1]
if _exit_code == 0:
print(f"Model {MODEL_NAME}/{MODEL_ID} created.")
else:
print(f"Error creating model: {output}")
```
## Deploy your endpoint
Now that you've imported your model into your project, you need to create an endpoint to serve your model. An endpoint can be thought of as a channel through which your model provides prediction services. Once established, you'll be able to make prediction requests on your model via the public internet. Your endpoint is also serverless, in the sense that Google ensures high availability by reducing single points of failure, and scalability by dynamically allocating resources to meet the demand for your service. In this way, you are able to focus on your model quality, freed from administrative and infrastructure concerns.
Run the next cell to deploy your model to an endpoint. **This will take about ten minutes to complete. If you've already deployed a model to an endpoint, you can reuse your endpoint by running the cell after the next one.**
```
ENDPOINT_NAME = "churn"
output = !gcloud --quiet beta ai endpoints create --display-name=$ENDPOINT_NAME --format="value(name)"
if _exit_code == 0:
print("Endpoint created.")
else:
print(f"Error creating endpoint: {output}")
ENDPOINT = output[-1]
ENDPOINT_ID = ENDPOINT.split("/")[-1]
output = !gcloud --quiet beta ai endpoints deploy-model $ENDPOINT_ID --display-name=$ENDPOINT_NAME --model=$MODEL_ID --traffic-split="0=100"
DEPLOYED_MODEL_ID = output[-1].split()[-1][:-1]
if _exit_code == 0:
print(
f"Model {MODEL_NAME}/{MODEL_ID} deployed to Endpoint {ENDPOINT_NAME}/{ENDPOINT_ID}."
)
else:
print(f"Error deploying model to endpoint: {output}")
```
### If you already have a deployed endpoint
You can reuse your existing endpoint by filling in the value of your endpoint ID in the next cell and running it. **If you've just deployed an endpoint in the previous cell, you should skip this step.**
```
# @title Run this cell only if you want to reuse an existing endpoint.
if not os.getenv("IS_TESTING"):
ENDPOINT_ID = "" # @param {type:"string"}
if ENDPOINT_ID:
ENDPOINT = f"projects/{PROJECT_ID}/locations/us-central1/endpoints/{ENDPOINT_ID}"
print(f"Using endpoint {ENDPOINT}")
else:
print("If you want to reuse an existing endpoint, you must specify the endpoint id above.")
```
## Run a prediction test
Now that you have imported a model and deployed that model to an endpoint, you are ready to verify that it's working. Run the next cell to send a test prediction request. If everything works as expected, you should receive a response encoded in a text representation called JSON.
**Try this now by running the next cell and examining the results.**
```
print(ENDPOINT)
print("request:")
pp.pprint(DEFAULT_INPUT)
try:
resp = send_predict_request(ENDPOINT, DEFAULT_INPUT)
print("response")
pp.pprint(resp)
except Exception:
print("prediction request failed")
```
Taking a closer look at the results, we see the following elements:
- **churned_values** - a set of possible values (0 and 1) for the target field
- **churned_probs** - a corresponding set of probabilities for each possible target field value (5x10^-40 and 1.0, respectively)
- **predicted_churn** - based on the probabilities, the predicted value of the target field (1)
This response encodes the model's prediction in a format that is readily digestible by software, which makes this service ideal for automated use by an application.
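Assuming a response shaped like the three fields above (the live payload may nest these under a `predictions` key, and the literal values here are illustrative), an application might pick the predicted class like this:

```python
# Hypothetical response shape built from the fields described above.
resp = {
    "churned_values": ["0", "1"],
    "churned_probs": [5e-40, 1.0],
    "predicted_churn": ["1"],
}

# Pair each candidate value with its probability and take the argmax.
probs = dict(zip(resp["churned_values"], resp["churned_probs"]))
predicted = max(probs, key=probs.get)
print(predicted)  # → 1
```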
## Start your monitoring job
Now that you've created an endpoint to serve prediction requests on your model, you're ready to start a monitoring job to keep an eye on model quality and to alert you if and when input begins to deviate in a way that may impact your model's prediction quality.
In this section, you will configure and create a model monitoring job based on the churn propensity model you imported from BigQuery ML.
### Configure the following fields:
1. User email - The email address to which you would like monitoring alerts sent.
1. Log sample rate - Your prediction requests and responses are logged to BigQuery tables, which are automatically created when you create a monitoring job. This parameter specifies the desired logging frequency for those tables.
1. Monitor interval - The time window over which to analyze your data and report anomalies. The minimum window is one hour (3600 seconds).
1. Target field - The name of the prediction target column in the training dataset.
1. Skew detection threshold - The skew threshold for each feature you want to monitor.
1. Prediction drift threshold - The drift threshold for each feature you want to monitor.
```
USER_EMAIL = "[your-email-address]" # @param {type:"string"}
JOB_NAME = "churn"
# Sampling rate (optional, default=.8)
LOG_SAMPLE_RATE = 0.8 # @param {type:"number"}
# Monitoring Interval in seconds (optional, default=3600).
MONITOR_INTERVAL = 3600 # @param {type:"number"}
# URI to training dataset.
DATASET_BQ_URI = "bq://mco-mm.bqmlga4.train" # @param {type:"string"}
# Prediction target column name in training dataset.
TARGET = "churned"
# Skew and drift thresholds.
SKEW_DEFAULT_THRESHOLDS = "country,language" # @param {type:"string"}
SKEW_CUSTOM_THRESHOLDS = "cnt_user_engagement:.5" # @param {type:"string"}
DRIFT_DEFAULT_THRESHOLDS = "country,language" # @param {type:"string"}
DRIFT_CUSTOM_THRESHOLDS = "cnt_user_engagement:.5" # @param {type:"string"}
```
### Create your monitoring job
The following code uses the Google Python client library to translate your configuration settings into a programmatic request to start a model monitoring job. To do this successfully, you need to specify your alerting thresholds (for both skew and drift) and your training data source, and then apply those settings to all deployed models on your new endpoint (of which there should only be one at this point).
Instantiating a monitoring job can take some time. If everything looks good with your request, you'll get a successful API response. Then, you'll need to check your email to receive a notification that the job is running.
```
# Set thresholds specifying alerting criteria for training/serving skew and create config object.
skew_thresholds = get_thresholds(SKEW_DEFAULT_THRESHOLDS, SKEW_CUSTOM_THRESHOLDS)
skew_config = ModelMonitoringObjectiveConfig.TrainingPredictionSkewDetectionConfig(
skew_thresholds=skew_thresholds
)
# Set thresholds specifying alerting criteria for serving drift and create config object.
drift_thresholds = get_thresholds(DRIFT_DEFAULT_THRESHOLDS, DRIFT_CUSTOM_THRESHOLDS)
drift_config = ModelMonitoringObjectiveConfig.PredictionDriftDetectionConfig(
drift_thresholds=drift_thresholds
)
# Specify training dataset source location (used for schema generation).
training_dataset = ModelMonitoringObjectiveConfig.TrainingDataset(target_field=TARGET)
training_dataset.bigquery_source = BigQuerySource(input_uri=DATASET_BQ_URI)
# Aggregate the above settings into a ModelMonitoringObjectiveConfig object and use
# that object to adjust the ModelDeploymentMonitoringObjectiveConfig object.
objective_config = ModelMonitoringObjectiveConfig(
training_dataset=training_dataset,
training_prediction_skew_detection_config=skew_config,
prediction_drift_detection_config=drift_config,
)
objective_template = ModelDeploymentMonitoringObjectiveConfig(
objective_config=objective_config
)
# Find all deployed model ids on the created endpoint and set objectives for each.
model_ids = get_deployed_model_ids(ENDPOINT_ID)
objective_configs = set_objectives(model_ids, objective_template)
# Create the monitoring job for all deployed models on this endpoint.
monitoring_job = create_monitoring_job(objective_configs)
# Run a prediction request to generate schema, if necessary.
try:
_ = send_predict_request(ENDPOINT, DEFAULT_INPUT)
print("prediction succeeded")
except Exception:
print("prediction failed")
```
After a minute or two, you should receive an email at the address you configured above for USER_EMAIL. This email confirms successful deployment of your monitoring job. Here's a sample of what this email might look like:
<br>
<br>
<img src="https://storage.googleapis.com/mco-general/img/mm6.png" />
<br>
As your monitoring job collects data, measurements are stored in Google Cloud Storage and you are free to examine your data at any time. The circled path in the image above specifies the location of your measurements in Google Cloud Storage. Run the following cell to take a look at your measurements in Cloud Storage.
```
!gsutil ls gs://cloud-ai-platform-fdfb4810-148b-4c86-903c-dbdff879f6e1/*/*
```
You will notice the following components in these Cloud Storage paths:
- **cloud-ai-platform-..** - This is a bucket created for you and assigned to capture your service's prediction data. Each monitoring job you create will trigger creation of a new folder in this bucket.
- **[model_monitoring|instance_schemas]/job-..** - This is your unique monitoring job number, which you can see above in both the response to your job creation request and the email notification.
- **instance_schemas/job-../analysis** - This is the monitoring job's understanding and encoding of your training data's schema (field names, types, etc.).
- **instance_schemas/job-../predict** - This is the first prediction made to your model after the current monitoring job was enabled.
- **model_monitoring/job-../serving** - This folder is used to record data relevant to drift calculations. It contains measurement summaries for every hour your model serves traffic.
- **model_monitoring/job-../training** - This folder is used to record data relevant to training-serving skew calculations. It contains an ongoing summary of prediction data relative to training data.
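A tiny sketch of how an application might sort these artifacts programmatically, using only the folder conventions listed above (the bucket and job names are illustrative):

```python
def classify(path):
    """Classify a monitoring artifact path by its folder conventions."""
    if "/analysis" in path:
        return "training schema"
    if "/predict" in path:
        return "first prediction"
    if "/serving" in path:
        return "drift measurements"
    if "/training" in path:
        return "skew baseline"
    return "other"

print(classify("gs://bucket/model_monitoring/job-123/serving/2021-06-01/stats"))
# → drift measurements
```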
### You can create monitoring jobs with other user interfaces
In the previous cells, you created a monitoring job using the Python client library. You can also use the *gcloud* command line tool to create a model monitoring job and, in the near future, you will be able to use the Cloud Console for this function as well.
## Generate test data to trigger alerting
Now you are ready to test the monitoring function. Run the following cell, which will generate fabricated test predictions designed to trigger the thresholds you specified above. It takes about five minutes to run this cell, and at least an hour to assess and report anomalies in skew or drift, so after running this cell feel free to proceed with the notebook; you'll see how to examine the resulting alert later.
```
def random_uid():
digits = [str(i) for i in range(10)] + ["A", "B", "C", "D", "E", "F"]
return "".join(random.choices(digits, k=32))
def monitoring_test(count, sleep, perturb_num={}, perturb_cat={}):
# Use random sampling and mean/sd with gaussian distribution to model
# training data. Then modify sampling distros for two categorical features
# and mean/sd for two numerical features.
mean_sd = MEAN_SD.copy()
country = COUNTRY.copy()
for k, (mean_fn, sd_fn) in perturb_num.items():
orig_mean, orig_sd = MEAN_SD[k]
mean_sd[k] = (mean_fn(orig_mean), sd_fn(orig_sd))
for k, v in perturb_cat.items():
country[k] = v
for i in range(0, count):
input = DEFAULT_INPUT.copy()
input["user_pseudo_id"] = str(random_uid())
input["country"] = random.choices([*country], list(country.values()))[0]
input["dayofweek"] = random.choices([*DAYOFWEEK], list(DAYOFWEEK.values()))[0]
input["language"] = str(random.choices([*LANGUAGE], list(LANGUAGE.values()))[0])
input["operating_system"] = str(random.choices([*OS], list(OS.values()))[0])
input["month"] = random.choices([*MONTH], list(MONTH.values()))[0]
for key, (mean, sd) in mean_sd.items():
sample_val = round(float(np.random.normal(mean, sd, 1)))
val = max(sample_val, 0)
input[key] = val
print(f"Sending prediction {i}")
try:
send_predict_request(ENDPOINT, input)
except Exception:
print("prediction request failed")
time.sleep(sleep)
print("Test Completed.")
test_time = 300
tests_per_sec = 1
sleep_time = 1 / tests_per_sec
iterations = test_time * tests_per_sec
perturb_num = {"cnt_user_engagement": (lambda x: x * 3, lambda x: x / 3)}
perturb_cat = {"Japan": max(COUNTRY.values()) * 2}
monitoring_test(iterations, sleep_time, perturb_num, perturb_cat)
```
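The generator above leans on `random.choices` with weights proportional to the training-data counts. The same mechanism in isolation (the seed and the reuse of the OS counts here are purely for illustration):

```python
import random
from collections import Counter

# Weighted categorical sampling, as used for country/language/OS above.
OS = {"IOS": 3980, "ANDROID": 3798, "null": 253}

random.seed(0)
draws = random.choices(list(OS), weights=list(OS.values()), k=10_000)

# Empirical frequencies track the weights: "null" should be rare.
counts = Counter(draws)
print(counts["null"] < counts["IOS"])  # → True
```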
## Interpret your results
While waiting for your results, which, as noted, may take up to an hour, you can read ahead to get a sense of the alerting experience.
### Here's what a sample email alert looks like...
<img src="https://storage.googleapis.com/mco-general/img/mm7.png" />
This email is warning you that the *cnt_user_engagement*, *country* and *language* feature values seen in production have skewed above your threshold between training and serving your model. It's also telling you that the *cnt_user_engagement* feature value is drifting significantly over time, again, as per your threshold specification.
### Monitoring results in the Cloud Console
You can examine your model monitoring data from the Cloud Console. Below is a screenshot of those capabilities.
#### Monitoring Status
<img src="https://storage.googleapis.com/mco-general/img/mm1.png" />
#### Monitoring Alerts
<img src="https://storage.googleapis.com/mco-general/img/mm2.png" />
## Clean up
To clean up all Google Cloud resources used in this project, you can [delete the Google Cloud
project](https://cloud.google.com/resource-manager/docs/creating-managing-projects#shutting_down_projects) you used for the tutorial.
Otherwise, you can delete the individual resources you created in this tutorial:
```
out = !gcloud ai endpoints undeploy-model $ENDPOINT_ID --deployed-model-id $DEPLOYED_MODEL_ID
if _exit_code == 0:
print("Model undeployed.")
else:
print("Error undeploying model:", out)
out = !gcloud ai endpoints delete $ENDPOINT_ID --quiet
if _exit_code == 0:
print("Endpoint deleted.")
else:
print("Error deleting endpoint:", out)
out = !gcloud ai models delete $MODEL_ID --quiet
if _exit_code == 0:
print("Model deleted.")
else:
print("Error deleting model:", out)
```
## Learn more about model monitoring
**Congratulations!** You've now learned what model monitoring is, how to configure and enable it, and how to find and interpret the results. Check out the following resources to learn more about model monitoring and ML Ops.
- [TensorFlow Data Validation](https://www.tensorflow.org/tfx/guide/tfdv)
- [Data Understanding, Validation, and Monitoring At Scale](https://blog.tensorflow.org/2018/09/introducing-tensorflow-data-validation.html)
- [Vertex Product Documentation](https://cloud.google.com/vertex)
- [Model Monitoring Reference Docs](https://cloud.google.com/vertex/docs/reference)
- [Model Monitoring blog article]()
RMinimum : Full - Test
```
import math
import random
import queue
```
Testfall : $X = [0, \cdots, n-1]$, $k$
```
# User input
n = 2**10
k = 2**5
# Automatic
X = [i for i in range(n)]
# Show Testcase
print(' Testcase: ')
print('=============================')
print('X = [0, ..., ' + str(n - 1) + ']')
print('k =', k)
```
Algorithmus : Full
```
def rminimum(X, k, cnt = [], rec = 0):
# Generate an empty cnt list if it's not a recursive call
if cnt == []:
cnt = [0 for _ in range(max(X) + 1)]
# Convert parameters if needed
k = int(k)
n = len(X)
# Base case |X| = 3
if len(X) == 3:
if X[0] < X[1]:
cnt[X[0]] += 2
cnt[X[1]] += 1
cnt[X[2]] += 1
if X[0] < X[2]:
mini = X[0]
else:
mini = X[2]
else:
cnt[X[0]] += 1
cnt[X[1]] += 2
cnt[X[2]] += 1
if X[1] < X[2]:
mini = X[1]
else:
mini = X[2]
return mini, cnt, rec
# Run phases
W, L, cnt = phase1(X, cnt)
M, cnt = phase2(L, k, cnt)
Wnew, cnt = phase3(W, k, M, cnt)
mini, cnt, rec = phase4(Wnew, k, n, cnt, rec)
return mini, cnt, rec
# --------------------------------------------------
def phase1(X, cnt):
# Init W, L
W = [0 for _ in range(len(X) // 2)]
L = [0 for _ in range(len(X) // 2)]
# Random pairs
random.shuffle(X)
for i in range(len(X) // 2):
if X[2 * i] > X[2 * i + 1]:
W[i] = X[2 * i + 1]
L[i] = X[2 * i]
else:
W[i] = X[2 * i]
L[i] = X[2 * i + 1]
cnt[X[2 * i + 1]] += 1
cnt[X[2 * i]] += 1
return W, L, cnt
# --------------------------------------------------
def phase2(L, k, cnt):
# Generate subsets
random.shuffle(L)
subsets = [L[i * k:(i + 1) * k] for i in range((len(L) + k - 1) // k)]
# Init M
M = [0 for _ in range(len(subsets))]
# Perfectly balanced tournament tree using a Queue
for i in range(len(subsets)):
q = queue.Queue()
for ele in subsets[i]:
q.put(ele)
while q.qsize() > 1:
a = q.get()
b = q.get()
if a < b:
q.put(a)
else:
q.put(b)
cnt[a] += 1
cnt[b] += 1
M[i] = q.get()
return M, cnt
# --------------------------------------------------
def phase3(W, k, M, cnt):
# Generate subsets
random.shuffle(W)
subsets = [W[i * k:(i + 1) * k] for i in range((len(W) + k - 1) // k)]
subsets_filtered = [0 for _ in range(len(subsets))]
# Filter subsets
for i in range(len(subsets_filtered)):
subsets_filtered[i] = [elem for elem in subsets[i] if elem < M[i]]
cnt[M[i]] += len(subsets[i])
for elem in subsets[i]:
cnt[elem] += 1
# Merge subsets
Wnew = [item for sublist in subsets_filtered for item in sublist]
return Wnew, cnt
# --------------------------------------------------
def phase4(Wnew, k, n0, cnt, rec):
# Recursive call check
if len(Wnew) <= math.log(n0, 2) ** 2:
q = queue.Queue()
for ele in Wnew:
q.put(ele)
while q.qsize() > 1:
a = q.get()
b = q.get()
if a < b:
q.put(a)
else:
q.put(b)
cnt[a] += 1
cnt[b] += 1
mini = q.get()
return mini, cnt, rec
else:
rec += 1
return rminimum(Wnew, k, cnt, rec)
# ==================================================
# Testcase
mini, cnt, rec = rminimum(X, k)
```
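Phases 2 and 4 both rely on the same primitive, a queue-based knockout tournament that finds a minimum while charging each element one comparison per round. In isolation, it looks like this:

```python
import queue

def tournament_min(values):
    """Pairwise knockout: repeatedly compare two elements and requeue the smaller."""
    q = queue.Queue()
    for v in values:
        q.put(v)
    while q.qsize() > 1:
        a = q.get()
        b = q.get()
        q.put(a if a < b else b)
    return q.get()

print(tournament_min([5, 3, 8, 1, 9]))  # → 1
```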
Resultat :
```
def test(X, k, mini, cnt, rec):
print('')
print('Testfall n / k:', len(X), '/', k)
print('====================================')
print('Fragile Complexity:')
print('-------------------')
print('f_min :', cnt[0])
print('f_rem :', max(cnt[1:]))
print('f_n :', max(cnt))
print('Work :', int(sum(cnt)/2))
print('====================================')
print('Process:')
print('--------')
print('Minimum :', mini)
print('n :', len(X))
print('log(n) :', round(math.log(len(X), 2), 2))
print('log(k) :', round(math.log(k, 2), 2))
print('lg / lglg :', round(math.log(len(X), 2) / math.log(math.log(len(X), 2), 2)))
print('n / log(n) :', round(len(X) / math.log(len(X), 2)))
print('====================================')
return
# Testfall
test(X, k, mini, cnt, rec)
```
# Advanced topics
The following material is a deep-dive into Yangson, and is not necessarily representative of how one would perform manipulations in a production environment. Please refer to the other tutorials for a better picture of Rosetta's intended use. Keep in mind that the key feature of Yangson is to be able to manipulate YANG data models in a more human-readable format such as JSON. What lies below digs beneath the higher-level abstractions and should paint a decent picture of the functional nature of Yangson.
# Manipulating models with Rosetta and Yangson
One of the goals of many network operators is to provide abstractions in a multi-vendor environment. This can be done with YANG and OpenConfig data models, but as they say, the devil is in the details. It occurred to me that you should be able to parse configuration from one vendor and translate it to another. Unfortunately, as we all know, these configurations don't always translate well on a 1-to-1 basis. I will demonstrate this process below and show several features of the related libraries along the way.
The following example begins exactly the same as the Cisco parsing tutorial. Let's load up some Juniper config and parse it into a YANG data model. First, we'll read the file.
```
from ntc_rosetta import get_driver
import json
junos = get_driver("junos", "openconfig")
junos_driver = junos()
# Strip any rpc tags before and after `<configuration>...</configuration>`
with open("data/junos/dev_conf.xml", "r") as fp:
config = fp.read()
print(config)
```
## Junos parsing
Now, we parse the config and take a look at the data model.
```
from sys import exc_info
from yangson.exceptions import SemanticError
try:
parsed = junos_driver.parse(
native={"dev_conf": config},
validate=False,
include=[
"/openconfig-interfaces:interfaces",
"/openconfig-network-instance:network-instances/network-instance/name",
"/openconfig-network-instance:network-instances/network-instance/config",
"/openconfig-network-instance:network-instances/network-instance/vlans",
]
)
except SemanticError as e:
print(f"error: {e}")
print(json.dumps(parsed.raw_value(), sort_keys=True, indent=2))
```
## Naive translation
Since we have a valid data model, let's see if Rosetta can translate it as-is.
```
ios = get_driver("ios", "openconfig")
ios_driver = ios()
native = ios_driver.translate(candidate=parsed.raw_value())
print(native)
```
Pretty cool, right?! Rosetta does a great job of parsing and translating, but it is a case of "monkey see, monkey do". Rosetta doesn't have any mechanisms to translate interface names, for example. It is up to the operator to perform this sort of manipulation.
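One way the operator could handle the naming gap is a simple prefix map. This is a hypothetical sketch, not part of Rosetta; a real deployment would maintain a fuller table and handle unit and channel numbering explicitly:

```python
# Hypothetical Junos-to-IOS interface prefix map.
PREFIX_MAP = {"xe-": "TenGigabitEthernet", "ge-": "GigabitEthernet"}

def translate_ifname(name: str) -> str:
    for junos_prefix, ios_prefix in PREFIX_MAP.items():
        if name.startswith(junos_prefix):
            return ios_prefix + name[len(junos_prefix):]
    return name  # pass through names we don't recognize

print(translate_ifname("xe-0/0/1"))  # → TenGigabitEthernet0/0/1
```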
## Down the Yangson rabbit hole
Yangson allows the developer to easily translate between YANG data models and JSON. Almost all of these manipulations can be performed on dictionaries in Python and loaded into data models using [`from_raw`](https://yangson.labs.nic.cz/datamodel.html#yangson.datamodel.DataModel.from_raw). The following examples may appear to be a little obtuse, but the goal is to demonstrate the internals of Yangson.
### And it's mostly functional
It is critical to read the short description of the [zipper](https://yangson.labs.nic.cz/instance.html?highlight=zipper#yangson.instance.InstanceNode) interface in the InstanceNode section of the docs. Yangson never mutates an object in place; instead, it returns a copy with the modified attributes.
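The same idea on plain Python data, to make the contrast with in-place mutation concrete (this is an analogy, not Yangson's API):

```python
orig = {"name": "xe-0/0/1", "config": {"enabled": True}}

def updated(node, key, value):
    """Return a shallow copy of `node` with one key changed; `node` is untouched."""
    new = dict(node)
    new[key] = value
    return new

changed = updated(orig, "name", "Ethernet0/0/1")
print(orig["name"])     # → xe-0/0/1
print(changed["name"])  # → Ethernet0/0/1
```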
### Show me the code!
Let's take a look at fixing up the interface names and how we can manipulate data model attributes. To do that, we need to locate the attribute in the tree using the [`parse_resource_id`](https://yangson.labs.nic.cz/datamodel.html#yangson.datamodel.DataModel.parse_resource_id) method. This method returns an [`InstanceRoute`](https://yangson.labs.nic.cz/instance.html?highlight=arrayentry#yangson.instance.InstanceRoute). The string passed to the method is an xpath.
```
# Locate the interfaces in the tree. We need to modify this one
# Note that we have to URL-escape the forward slashes per https://tools.ietf.org/html/rfc8040#section-3.5.3
irt = parsed.datamodel.parse_resource_id("openconfig-interfaces:interfaces/interface=xe-0%2F0%2F1")
current_data = parsed.root.goto(irt)
print("Current node configuration: ", json.dumps(current_data.raw_value(), sort_keys=True, indent=2))
modify_data = current_data.raw_value()
ifname = 'Ethernet0/0/1'
modify_data['name'] = ifname
modify_data['config']['name'] = ifname
stub = current_data.update(modify_data, raw=True)
print("\n\nCandidate node configuration: ", json.dumps(stub.raw_value(), sort_keys=True, indent=2))
```
### Instance routes
You will notice a `goto` method on child nodes. You _can_ access successors with this method, but you have to build the path from the root `datamodel` attribute as seen in the following example. If you aren't sure where an object is in the tree, you can also rely on its `path` attribute.
Quick tangent... what is the difference between `parse_instance_id` and `parse_resource_id`? The answer can be found in the [Yangson glossary](https://yangson.labs.nic.cz/glossary.html) and the respective RFC's.
```
# TL;DR
irt = parsed.datamodel.parse_instance_id('/openconfig-network-instance:network-instances/network-instance[1]/vlans/vlan[3]')
print(parsed.root.goto(irt).raw_value())
irt = parsed.datamodel.parse_resource_id('openconfig-network-instance:network-instances/network-instance=default/vlans/vlan=10')
print(parsed.root.goto(irt).raw_value())
```
What about the rest of the interfaces in the list? Yangson provides an iterator for array nodes.
```
import re
irt = parsed.datamodel.parse_resource_id("openconfig-interfaces:interfaces/interface")
iface_objs = parsed.root.goto(irt)
# Swap the name as required
p, sub = re.compile(r'xe-'), 'Ethernet'
# There are a couple of challenges here. The first is that Yangson doesn't implement __len__.
# The second is that you cannot modify a list in-place, so we're basically
# hacking this to hijack the index of the current element and looking it up from a "clean"
# instance. This is a pet example! It would be much easier using Python dicts.
new_ifaces = None
for iface in iface_objs:
name_irt = parsed.datamodel.parse_instance_id('/name')
cname_irt = parsed.datamodel.parse_instance_id('/config/name')
if new_ifaces:
name = new_ifaces[iface.index].goto(name_irt)
else:
name = iface.goto(name_irt)
name = name.update(p.sub(sub, name.raw_value()), raw=True)
cname = name.up().goto(cname_irt)
cname = cname.update(p.sub(sub, cname.raw_value()), raw=True)
iface = cname.up().up()
new_ifaces = iface.up()
print(json.dumps(new_ifaces.raw_value(), sort_keys=True, indent=2))
# Translate to Cisco-speak
native = ios_driver.translate(candidate=new_ifaces.top().raw_value())
print(native)
```
Hooray! That should work. One final approach, just to show you different ways of doing things. This is another pet example to demonstrate Yangson methods.
```
import re
from typing import Dict
irt = parsed.datamodel.parse_resource_id("openconfig-interfaces:interfaces")
iface_objs = parsed.root.goto(irt)
# Nuke the whole branch!
iface_objs = iface_objs.delete_item("interface")
def build_iface(data: str) -> Dict:
# Example template, this could be anything you like that conforms to the schema
return {
"name": f"{data['name']}",
"config": {
"name": f"{data['name']}",
"description": f"{data['description']}",
"type": "iana-if-type:ethernetCsmacd",
"enabled": True
},
}
iface_data = [
build_iface({
"name": f"TenGigabitEthernet0/{idx}",
"description": f"This is interface TenGigabitEthernet0/{idx}"
}) for idx in range(10, 0, -1)
]
initial = iface_data.pop()
# Start a new interface list
iface_objs = iface_objs.put_member("interface", [initial], raw=True)
cur_obj = iface_objs[0]
# Yangson exposes `next`, `insert_after`, and `insert_before` methods.
# There is no `append`.
while iface_data:
new_obj = cur_obj.insert_after(iface_data.pop(), raw=True)
cur_obj = new_obj
# Translate to Cisco-speak
native = ios_driver.translate(candidate=cur_obj.top().raw_value())
print(native)
```
### Deleting individual items
Here is an example of deleting an individual item. Navigating the tree can be a bit tricky, but it's not too bad once you get the hang of it.
```
# Locate a vlan by ID and delete it
irt = parsed.datamodel.parse_resource_id("openconfig-network-instance:network-instances/network-instance=default/vlans/vlan=10")
vlan10 = parsed.root.goto(irt)
vlans = vlan10.up().delete_item(vlan10.index)
print(json.dumps(vlans.raw_value(), sort_keys=True, indent=2))
```
# Planning Search Agent
Notebook version of the project [Implement a Planning Search](https://github.com/udacity/AIND-Planning) from [Udacity's Artificial Intelligence Nanodegree](https://www.udacity.com/course/artificial-intelligence-nanodegree--nd889) <br>
**Goal**: Solve deterministic logistics planning problems for an Air Cargo transport system using a planning search agent
All problems are in the Air Cargo domain. They have the same action schema defined, but different initial states and goals:
```
Action(Load(c, p, a),
PRECOND: At(c, a) ∧ At(p, a) ∧ Cargo(c) ∧ Plane(p) ∧ Airport(a)
EFFECT: ¬ At(c, a) ∧ In(c, p))
Action(Unload(c, p, a),
PRECOND: In(c, p) ∧ At(p, a) ∧ Cargo(c) ∧ Plane(p) ∧ Airport(a)
EFFECT: At(c, a) ∧ ¬ In(c, p))
Action(Fly(p, from, to),
PRECOND: At(p, from) ∧ Plane(p) ∧ Airport(from) ∧ Airport(to)
EFFECT: ¬ At(p, from) ∧ At(p, to))
```
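As a minimal illustration of how such a schema grounds into state checks, here is a data-only sketch (not the project's `Action` class; the cargo, plane, and airport names are illustrative):

```python
# Ground instance of the Load schema above, with illustrative arguments.
load = {
    "name": "Load",
    "precond_pos": {"At(C1, SFO)", "At(P1, SFO)"},
    "effect_add": {"In(C1, P1)"},
    "effect_rem": {"At(C1, SFO)"},
}

def apply_action(state, action):
    """Apply an action if its positive preconditions hold; return the new state."""
    if not action["precond_pos"] <= state:
        return state  # not applicable, state unchanged
    return (state - action["effect_rem"]) | action["effect_add"]

state = {"At(C1, SFO)", "At(P1, SFO)"}
new_state = apply_action(state, load)
print(sorted(new_state))  # → ['At(P1, SFO)', 'In(C1, P1)']
```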
## Planning Graph nodes
```
from planning_agent.aimacode.planning import Action
from planning_agent.aimacode.search import Problem
from planning_agent.aimacode.utils import expr
from planning_agent.lp_utils import decode_state
class PgNode():
""" Base class for planning graph nodes.
includes instance sets common to both types of nodes used in a planning graph
parents: the set of nodes in the previous level
children: the set of nodes in the subsequent level
mutex: the set of sibling nodes that are mutually exclusive with this node
"""
def __init__(self):
self.parents = set()
self.children = set()
self.mutex = set()
def is_mutex(self, other) -> bool:
""" Boolean test for mutual exclusion
:param other: PgNode
the other node to compare with
:return: bool
True if this node and the other are marked mutually exclusive (mutex)
"""
if other in self.mutex:
return True
return False
def show(self):
""" helper print for debugging shows counts of parents, children, siblings
:return:
print only
"""
print("{} parents".format(len(self.parents)))
print("{} children".format(len(self.children)))
print("{} mutex".format(len(self.mutex)))
class PgNode_s(PgNode):
"""
A planning graph node representing a state (literal fluent) from a planning
problem.
Args:
----------
symbol : str
A string representing a literal expression from a planning problem
domain.
is_pos : bool
Boolean flag indicating whether the literal expression is positive or
negative.
"""
def __init__(self, symbol: str, is_pos: bool):
""" S-level Planning Graph node constructor
:param symbol: expr
:param is_pos: bool
Instance variables calculated:
literal: expr
fluent in its literal form including negative operator if applicable
Instance variables inherited from PgNode:
parents: set of nodes connected to this node in previous A level; initially empty
children: set of nodes connected to this node in next A level; initially empty
mutex: set of sibling S-nodes that this node has mutual exclusion with; initially empty
"""
PgNode.__init__(self)
self.symbol = symbol
self.is_pos = is_pos
self.literal = expr(self.symbol)
if not self.is_pos:
self.literal = expr('~{}'.format(self.symbol))
def show(self):
"""helper print for debugging shows literal plus counts of parents, children, siblings
:return:
print only
"""
print("\n*** {}".format(self.literal))
PgNode.show(self)
def __eq__(self, other):
"""equality test for nodes - compares only the literal for equality
:param other: PgNode_s
:return: bool
"""
if isinstance(other, self.__class__):
return (self.symbol == other.symbol) \
and (self.is_pos == other.is_pos)
def __hash__(self):
return hash(self.symbol) ^ hash(self.is_pos)
class PgNode_a(PgNode):
"""A-type (action) Planning Graph node - inherited from PgNode
"""
def __init__(self, action: Action):
"""A-level Planning Graph node constructor
:param action: Action
a ground action, i.e. this action cannot contain any variables
Instance variables calculated:
An A-level will always have an S-level as its parent and an S-level as its child.
The preconditions and effects will become the parents and children of the A-level node
However, when this node is created, it is not yet connected to the graph
prenodes: set of *possible* parent S-nodes
effnodes: set of *possible* child S-nodes
is_persistent: bool True if this is a persistence action, i.e. a no-op action
Instance variables inherited from PgNode:
parents: set of nodes connected to this node in previous S level; initially empty
children: set of nodes connected to this node in next S level; initially empty
mutex: set of sibling A-nodes that this node has mutual exclusion with; initially empty
"""
PgNode.__init__(self)
self.action = action
self.prenodes = self.precond_s_nodes()
self.effnodes = self.effect_s_nodes()
self.is_persistent = False
if self.prenodes == self.effnodes:
self.is_persistent = True
def show(self):
"""helper print for debugging shows action plus counts of parents, children, siblings
:return:
print only
"""
print("\n*** {}{}".format(self.action.name, self.action.args))
PgNode.show(self)
def precond_s_nodes(self):
"""precondition literals as S-nodes (represents possible parents for this node).
It is computationally expensive to call this function; it is only called by the
class constructor to populate the `prenodes` attribute.
:return: set of PgNode_s
"""
nodes = set()
for p in self.action.precond_pos:
n = PgNode_s(p, True)
nodes.add(n)
for p in self.action.precond_neg:
n = PgNode_s(p, False)
nodes.add(n)
return nodes
def effect_s_nodes(self):
"""effect literals as S-nodes (represents possible children for this node).
It is computationally expensive to call this function; it is only called by the
class constructor to populate the `effnodes` attribute.
:return: set of PgNode_s
"""
nodes = set()
for e in self.action.effect_add:
n = PgNode_s(e, True)
nodes.add(n)
for e in self.action.effect_rem:
n = PgNode_s(e, False)
nodes.add(n)
return nodes
def __eq__(self, other):
"""equality test for nodes - compares only the action name for equality
:param other: PgNode_a
:return: bool
"""
if isinstance(other, self.__class__):
return (self.action.name == other.action.name) \
and (self.action.args == other.action.args)
def __hash__(self):
return hash(self.action.name) ^ hash(self.action.args)
```
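These equality and hash semantics are what let level sets deduplicate nodes. A minimal standalone sketch of the same contract — `LiteralNode` is a hypothetical stand-in, since the real `PgNode_s` wraps an aimacode `expr`:

```python
# Hypothetical stand-in mirroring PgNode_s's __eq__/__hash__ contract.
class LiteralNode:
    def __init__(self, symbol, is_pos):
        self.symbol = symbol
        self.is_pos = is_pos

    def __eq__(self, other):
        if isinstance(other, self.__class__):
            return self.symbol == other.symbol and self.is_pos == other.is_pos
        return False

    def __hash__(self):
        # XOR of the component hashes, as in PgNode_s.__hash__
        return hash(self.symbol) ^ hash(self.is_pos)

# Equal literals collapse to one set entry; the negated literal stays distinct.
nodes = {LiteralNode("At(C1, SFO)", True),
         LiteralNode("At(C1, SFO)", True),
         LiteralNode("At(C1, SFO)", False)}
assert len(nodes) == 2
```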
## Planning Graph
```
def mutexify(node1: PgNode, node2: PgNode):
""" adds sibling nodes to each other's mutual exclusion (mutex) set. These should be sibling nodes!
:param node1: PgNode (or inherited PgNode_a, PgNode_s types)
:param node2: PgNode (or inherited PgNode_a, PgNode_s types)
:return:
node mutex sets modified
"""
if type(node1) != type(node2):
raise TypeError('Attempted to mutex two nodes of different types')
node1.mutex.add(node2)
node2.mutex.add(node1)
class PlanningGraph():
"""
A planning graph as described in chapter 10 of the AIMA text. The planning
graph can be used to reason about action and literal reachability, and to compute
search heuristics such as the level sum implemented below.
"""
def __init__(self, problem: Problem, state: str, serial_planning=True):
"""
:param problem: PlanningProblem (or subclass such as AirCargoProblem or HaveCakeProblem)
:param state: str (will be in form TFTTFF... representing fluent states)
:param serial_planning: bool (whether or not to assume that only one action can occur at a time)
Instance variable calculated:
fs: FluentState
the state represented as positive and negative fluent literal lists
all_actions: list of the PlanningProblem valid ground actions combined with calculated no-op actions
s_levels: list of sets of PgNode_s, where each set in the list represents an S-level in the planning graph
a_levels: list of sets of PgNode_a, where each set in the list represents an A-level in the planning graph
"""
self.problem = problem
self.fs = decode_state(state, problem.state_map)
self.serial = serial_planning
self.all_actions = self.problem.actions_list + self.noop_actions(self.problem.state_map)
self.s_levels = []
self.a_levels = []
self.create_graph()
def noop_actions(self, literal_list):
"""create persistent action for each possible fluent
"No-Op" actions are virtual actions (i.e., actions that only exist in
the planning graph, not in the planning problem domain) that operate
on each fluent (literal expression) from the problem domain. No op
actions "pass through" the literal expressions from one level of the
planning graph to the next.
The no-op action list requires both a positive and a negative action
for each literal expression. Positive no-op actions require the literal
as a positive precondition and add the literal expression as an effect
in the output, and negative no-op actions require the literal as a
negative precondition and remove the literal expression as an effect in
the output.
This function should only be called by the class constructor.
:param literal_list:
:return: list of Action
"""
action_list = []
for fluent in literal_list:
act1 = Action(expr("Noop_pos({})".format(fluent)), ([fluent], []), ([fluent], []))
action_list.append(act1)
act2 = Action(expr("Noop_neg({})".format(fluent)), ([], [fluent]), ([], [fluent]))
action_list.append(act2)
return action_list
def create_graph(self):
""" build a Planning Graph as described in Russell-Norvig 3rd Ed 10.3 or 2nd Ed 11.4
The S0 initial level has been implemented for you. It has no parents and includes all of
the literal fluents that are part of the initial state passed to the constructor. At the start
of a problem planning search, this will be the same as the initial state of the problem. However,
the planning graph can be built from any state in the Planning Problem
This function should only be called by the class constructor.
:return:
builds the graph by filling s_levels[] and a_levels[] lists with node sets for each level
"""
# the graph should only be built during class construction
if (len(self.s_levels) != 0) or (len(self.a_levels) != 0):
raise Exception(
'Planning Graph already created; construct a new planning graph for each new state in the planning sequence')
# initialize S0 to literals in initial state provided.
leveled = False
level = 0
self.s_levels.append(set()) # S0 set of s_nodes - empty to start
# for each fluent in the initial state, add the correct literal PgNode_s
for literal in self.fs.pos:
self.s_levels[level].add(PgNode_s(literal, True))
for literal in self.fs.neg:
self.s_levels[level].add(PgNode_s(literal, False))
# no mutexes at the first level
# continue to build the graph alternating A, S levels until last two S levels contain the same literals,
# i.e. until it is "leveled"
while not leveled:
self.add_action_level(level)
self.update_a_mutex(self.a_levels[level])
level += 1
self.add_literal_level(level)
self.update_s_mutex(self.s_levels[level])
if self.s_levels[level] == self.s_levels[level - 1]:
leveled = True
def add_action_level(self, level):
""" add an A (action) level to the Planning Graph
:param level: int
the level number alternates S0, A0, S1, A1, S2, .... etc the level number is also used as the
index for the node set lists self.a_levels[] and self.s_levels[]
:return:
adds A nodes to the current level in self.a_levels[level]
"""
self.a_levels.append(set()) # set of a_nodes
for a in self.all_actions:
a_node = PgNode_a(a)
if set(a_node.prenodes).issubset(set(self.s_levels[level])): # True: Valid A node
for s_node in self.s_levels[level]:
if s_node in a_node.prenodes: # search for the right parents
a_node.parents.add(s_node)
s_node.children.add(a_node)
self.a_levels[level].add(a_node)
def add_literal_level(self, level):
""" add an S (literal) level to the Planning Graph
:param level: int
the level number alternates S0, A0, S1, A1, S2, .... etc the level number is also used as the
index for the node set lists self.a_levels[] and self.s_levels[]
:return:
adds S nodes to the current level in self.s_levels[level]
"""
self.s_levels.append(set()) # set of s_nodes
for a in self.a_levels[level-1]:
for s_node in a.effnodes: # Valid S nodes
a.children.add(s_node)
s_node.parents.add(a)
self.s_levels[level].add(s_node)
def update_a_mutex(self, nodeset):
""" Determine and update sibling mutual exclusion for A-level nodes
Mutex action tests section from 3rd Ed. 10.3 or 2nd Ed. 11.4
A mutex relation holds between two actions at a given level
if the planning graph is a serial planning graph and the pair are nonpersistence actions
or if any of the three conditions hold between the pair:
Inconsistent Effects
Interference
Competing needs
:param nodeset: set of PgNode_a (siblings in the same level)
:return:
mutex set in each PgNode_a in the set is appropriately updated
"""
nodelist = list(nodeset)
for i, n1 in enumerate(nodelist[:-1]):
for n2 in nodelist[i + 1:]:
if (self.serialize_actions(n1, n2) or
self.inconsistent_effects_mutex(n1, n2) or
self.interference_mutex(n1, n2) or
self.competing_needs_mutex(n1, n2)):
mutexify(n1, n2)
def serialize_actions(self, node_a1: PgNode_a, node_a2: PgNode_a) -> bool:
"""
Test a pair of actions for mutual exclusion, returning True if the
planning graph is serial, and if either action is persistent; otherwise
return False. Two serial actions are mutually exclusive if they are
both non-persistent.
:param node_a1: PgNode_a
:param node_a2: PgNode_a
:return: bool
"""
#
if not self.serial:
return False
if node_a1.is_persistent or node_a2.is_persistent:
return False
return True
def inconsistent_effects_mutex(self, node_a1: PgNode_a, node_a2: PgNode_a) -> bool:
"""
Test a pair of actions for inconsistent effects, returning True if
one action negates an effect of the other, and False otherwise.
HINT: The Action instance associated with an action node is accessible
through the PgNode_a.action attribute. See the Action class
documentation for details on accessing the effects and preconditions of
an action.
:param node_a1: PgNode_a
:param node_a2: PgNode_a
:return: bool
"""
# Create 1 set with all the adding effects and 1 set with all the removing effects.
# (a single action cannot result in inconsistent effects)
# If the intersection (&) of the two sets is not empty, then at least one effect negates another
effects_add = node_a1.action.effect_add + node_a2.action.effect_add
effects_rem = node_a1.action.effect_rem + node_a2.action.effect_rem
return bool(set(effects_add) & set(effects_rem))
def interference_mutex(self, node_a1: PgNode_a, node_a2: PgNode_a) -> bool:
"""
Test a pair of actions for mutual exclusion, returning True if the
effect of one action is the negation of a precondition of the other.
HINT: The Action instance associated with an action node is accessible
through the PgNode_a.action attribute. See the Action class
documentation for details on accessing the effects and preconditions of
an action.
:param node_a1: PgNode_a
:param node_a2: PgNode_a
:return: bool
"""
# Similar implementation of inconsistent_effects_mutex but crossing the adding/removing effect of each action
# with the negative/positive precondition of the other.
# 4 sets are used for 2 separated intersections. The intersection of 2 large sets (pos_add and neg_rem) would
# also result True for inconsistent_effects
cross_pos = node_a1.action.effect_add + node_a2.action.precond_pos
cross_neg = node_a1.action.precond_neg + node_a2.action.effect_rem
cross_pos2 = node_a2.action.effect_add + node_a1.action.precond_pos
cross_neg2 = node_a2.action.precond_neg + node_a1.action.effect_rem
return bool(set(cross_pos) & set(cross_neg)) or bool(set(cross_pos2) & set(cross_neg2))
def competing_needs_mutex(self, node_a1: PgNode_a, node_a2: PgNode_a) -> bool:
"""
Test a pair of actions for mutual exclusion, returning True if a
precondition of one action is mutex with a precondition of the
other action.
:param node_a1: PgNode_a
:param node_a2: PgNode_a
:return: bool
"""
# Create a list with the parents of one action node that are mutually exclusive with the parents of the other
# and return True if the list is not empty
mutex = [i for i in node_a1.parents for j in node_a2.parents if i.is_mutex(j)]
return bool(mutex)
def update_s_mutex(self, nodeset: set):
""" Determine and update sibling mutual exclusion for S-level nodes
Mutex action tests section from 3rd Ed. 10.3 or 2nd Ed. 11.4
A mutex relation holds between literals at a given level
if either of the two conditions hold between the pair:
Negation
Inconsistent support
:param nodeset: set of PgNode_s (siblings in the same level)
:return:
mutex set in each PgNode_s in the set is appropriately updated
"""
nodelist = list(nodeset)
for i, n1 in enumerate(nodelist[:-1]):
for n2 in nodelist[i + 1:]:
if self.negation_mutex(n1, n2) or self.inconsistent_support_mutex(n1, n2):
mutexify(n1, n2)
def negation_mutex(self, node_s1: PgNode_s, node_s2: PgNode_s) -> bool:
"""
Test a pair of state literals for mutual exclusion, returning True if
one node is the negation of the other, and False otherwise.
HINT: PgNode_s.__eq__ defines the notion of equivalence for
literal expression nodes, and the class tracks whether the literal is
positive or negative.
:param node_s1: PgNode_s
:param node_s2: PgNode_s
:return: bool
"""
# Mutual exclusive nodes have the same 'symbol' and different 'is_pos' attributes
return (node_s1.symbol == node_s2.symbol) and (node_s1.is_pos != node_s2.is_pos)
def inconsistent_support_mutex(self, node_s1: PgNode_s, node_s2: PgNode_s):
"""
Test a pair of state literals for mutual exclusion, returning True if
there are no actions that could achieve the two literals at the same
time, and False otherwise. In other words, the two literal nodes are
mutex if all of the actions that could achieve the first literal node
are pairwise mutually exclusive with all of the actions that could
achieve the second literal node.
HINT: The PgNode.is_mutex method can be used to test whether two nodes
are mutually exclusive.
:param node_s1: PgNode_s
:param node_s2: PgNode_s
:return: bool
"""
# Get a list with the parents of one node that are not mutually exclusive with at least one parent of the other
# Inconsistent support is detected when the list is empty (no pair of actions can achieve
# these two literals at the same time)
compatible_parents_s1 = [a for a in node_s1.parents for b in node_s2.parents if not a.is_mutex(b)]
return not bool(compatible_parents_s1)
def h_levelsum(self) -> int:
"""The sum of the level costs of the individual goals (admissible if goals independent)
:return: int
"""
level_sum = 0
# for each goal in the problem, determine the level cost, then add them together
remaining_goals = set(self.problem.goal) # remaining goals to find to determine the level cost
# Search for all the goals simultaneously from level 0
for level in range(len(self.s_levels)):
literals = set([node.literal for node in self.s_levels[level]]) # literals found in the current level
match = literals & remaining_goals # set of goals found in literals (empty set if none)
level_sum += len(match)*level # add cost of the found goals (0 if none)
remaining_goals -= match # remove found goals from the remaining goals
if not remaining_goals: # return when all goals are found
return level_sum
raise Exception("Goal not found")
```
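The S-level mutex tests reduce to simple set logic. A standalone sketch on bare `(symbol, is_pos)` pairs — illustrative only, since the real methods operate on `PgNode_s` instances and `PgNode.is_mutex`:

```python
# Sketch of the S-level mutex tests on bare (symbol, is_pos) pairs.
def negation_mutex(s1, s2):
    # Mutex when the symbols match but the signs differ.
    return s1[0] == s2[0] and s1[1] != s2[1]

def inconsistent_support_mutex(parents1, parents2, action_mutex):
    # Mutex when every pair of achieving actions is itself mutex.
    return all(action_mutex(a, b) for a in parents1 for b in parents2)

assert negation_mutex(("At(P1, SFO)", True), ("At(P1, SFO)", False))
assert not negation_mutex(("At(P1, SFO)", True), ("At(P1, JFK)", True))

# With a toy action-mutex relation where only "a" and "b" conflict:
conflict = lambda x, y: {x, y} == {"a", "b"}
assert inconsistent_support_mutex(["a"], ["b"], conflict)
assert not inconsistent_support_mutex(["a", "c"], ["b"], conflict)
```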
## Air Cargo Problem
```
from planning_agent.aimacode.logic import PropKB
from planning_agent.aimacode.planning import Action
from planning_agent.aimacode.search import Node, Problem
from planning_agent.aimacode.utils import expr
from planning_agent.lp_utils import FluentState, encode_state, decode_state
class AirCargoProblem(Problem):
def __init__(self, cargos, planes, airports, initial: FluentState, goal: list):
"""
:param cargos: list of str
cargos in the problem
:param planes: list of str
planes in the problem
:param airports: list of str
airports in the problem
:param initial: FluentState object
positive and negative literal fluents (as expr) describing initial state
:param goal: list of expr
literal fluents required for goal test
"""
self.state_map = initial.pos + initial.neg
self.initial_state_TF = encode_state(initial, self.state_map)
Problem.__init__(self, self.initial_state_TF, goal=goal)
self.cargos = cargos
self.planes = planes
self.airports = airports
self.actions_list = self.get_actions()
def get_actions(self):
"""
This method creates concrete actions (no variables) for all actions in the problem
domain action schema and turns them into complete Action objects as defined in the
aimacode.planning module. It is computationally expensive to call this method directly;
however, it is called in the constructor and the results cached in the `actions_list` property.
Returns:
----------
list<Action>
list of Action objects
"""
def load_actions():
"""Create all concrete Load actions and return a list
:return: list of Action objects
"""
loads = []
for c in self.cargos:
for p in self.planes:
for a in self.airports:
precond_pos = [expr("At({}, {})".format(c, a)),
expr("At({}, {})".format(p, a))]
precond_neg = []
effect_add = [expr("In({}, {})".format(c, p))]
effect_rem = [expr("At({}, {})".format(c, a))]
load = Action(expr("Load({}, {}, {})".format(c, p, a)),
[precond_pos, precond_neg],
[effect_add, effect_rem])
loads.append(load)
return loads
def unload_actions():
"""Create all concrete Unload actions and return a list
:return: list of Action objects
"""
unloads = []
for c in self.cargos:
for p in self.planes:
for a in self.airports:
precond_pos = [expr("In({}, {})".format(c, p)),
expr("At({}, {})".format(p, a))]
precond_neg = []
effect_add = [expr("At({}, {})".format(c, a))]
effect_rem = [expr("In({}, {})".format(c, p))]
unload = Action(expr("Unload({}, {}, {})".format(c, p, a)),
[precond_pos, precond_neg],
[effect_add, effect_rem])
unloads.append(unload)
return unloads
def fly_actions():
"""Create all concrete Fly actions and return a list
:return: list of Action objects
"""
flys = []
for fr in self.airports:
for to in self.airports:
if fr != to:
for p in self.planes:
precond_pos = [expr("At({}, {})".format(p, fr)),
]
precond_neg = []
effect_add = [expr("At({}, {})".format(p, to))]
effect_rem = [expr("At({}, {})".format(p, fr))]
fly = Action(expr("Fly({}, {}, {})".format(p, fr, to)),
[precond_pos, precond_neg],
[effect_add, effect_rem])
flys.append(fly)
return flys
return load_actions() + unload_actions() + fly_actions()
def actions(self, state: str) -> list:
""" Return the actions that can be executed in the given state.
:param state: str
state represented as T/F string of mapped fluents (state variables)
e.g. 'FTTTFF'
:return: list of Action objects
"""
possible_actions = []
kb = PropKB()
kb.tell(decode_state(state, self.state_map).pos_sentence())
for action in self.actions_list:
is_possible = True
for clause in action.precond_pos:
if clause not in kb.clauses:
is_possible = False
for clause in action.precond_neg:
if clause in kb.clauses:
is_possible = False
if is_possible:
possible_actions.append(action)
return possible_actions
def result(self, state: str, action: Action):
""" Return the state that results from executing the given
action in the given state. The action must be one of
self.actions(state).
:param state: state entering node
:param action: Action applied
:return: resulting state after action
"""
new_state = FluentState([], [])
# Used the same implementation as cake example:
old_state = decode_state(state, self.state_map)
for fluent in old_state.pos:
if fluent not in action.effect_rem:
new_state.pos.append(fluent)
for fluent in action.effect_add:
if fluent not in new_state.pos:
new_state.pos.append(fluent)
for fluent in old_state.neg:
if fluent not in action.effect_add:
new_state.neg.append(fluent)
for fluent in action.effect_rem:
if fluent not in new_state.neg:
new_state.neg.append(fluent)
return encode_state(new_state, self.state_map)
def goal_test(self, state: str) -> bool:
""" Test the state to see if goal is reached
:param state: str representing state
:return: bool
"""
kb = PropKB()
kb.tell(decode_state(state, self.state_map).pos_sentence())
for clause in self.goal:
if clause not in kb.clauses:
return False
return True
def h_1(self, node: Node):
# note that this is not a true heuristic
h_const = 1
return h_const
def h_pg_levelsum(self, node: Node):
"""
This heuristic uses a planning graph representation of the problem
state space to estimate the sum of all actions that must be carried
out from the current state in order to satisfy each individual goal
condition.
"""
# requires implemented PlanningGraph class
pg = PlanningGraph(self, node.state)
pg_levelsum = pg.h_levelsum()
return pg_levelsum
def h_ignore_preconditions(self, node: Node):
"""
This heuristic estimates the minimum number of actions that must be
carried out from the current state in order to satisfy all of the goal
conditions by ignoring the preconditions required for an action to be
executed.
"""
# Note: We assume that the number of steps required to solve the relaxed ignore preconditions problem
# is equal to the number of unsatisfied goals.
# Thus no action results in multiple goals and no action undoes the effects of other actions
kb = PropKB()
kb.tell(decode_state(node.state, self.state_map).pos_sentence())
# Unsatisfied goals are the ones not found in the clauses of PropKB() for the current state
count = len(set(self.goal) - set(kb.clauses))
# print("Current_state: ", kb.clauses, " Goal state: ", self.goal)
return count
```
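`encode_state` and `decode_state` come from `planning_agent.lp_utils` and are not shown here; a rough sketch of the round trip they are assumed to perform, with the i-th character of the state string marking whether `state_map[i]` holds:

```python
# Assumed behavior of the lp_utils state helpers: the i-th character of the
# T/F string records whether fluent state_map[i] is positive in the state.
def encode(pos_fluents, state_map):
    return "".join("T" if f in pos_fluents else "F" for f in state_map)

def decode(state, state_map):
    pos = [f for f, c in zip(state_map, state) if c == "T"]
    neg = [f for f, c in zip(state_map, state) if c == "F"]
    return pos, neg

state_map = ["At(C1, SFO)", "At(C1, JFK)", "In(C1, P1)"]
s = encode(["At(C1, SFO)"], state_map)
assert s == "TFF"
assert decode(s, state_map) == (["At(C1, SFO)"], ["At(C1, JFK)", "In(C1, P1)"])
```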
## Scenarios
```
def air_cargo_p1() -> AirCargoProblem:
cargos = ['C1', 'C2']
planes = ['P1', 'P2']
airports = ['JFK', 'SFO']
pos = [expr('At(C1, SFO)'),
expr('At(C2, JFK)'),
expr('At(P1, SFO)'),
expr('At(P2, JFK)'),
]
neg = [expr('At(C2, SFO)'),
expr('In(C2, P1)'),
expr('In(C2, P2)'),
expr('At(C1, JFK)'),
expr('In(C1, P1)'),
expr('In(C1, P2)'),
expr('At(P1, JFK)'),
expr('At(P2, SFO)'),
]
init = FluentState(pos, neg)
goal = [expr('At(C1, JFK)'),
expr('At(C2, SFO)'),
]
return AirCargoProblem(cargos, planes, airports, init, goal)
def air_cargo_p2() -> AirCargoProblem:
cargos = ['C1', 'C2', 'C3']
planes = ['P1', 'P2', 'P3']
airports = ['SFO', 'JFK', 'ATL']
pos = [expr('At(C1, SFO)'),
expr('At(C2, JFK)'),
expr('At(C3, ATL)'),
expr('At(P1, SFO)'),
expr('At(P2, JFK)'),
expr('At(P3, ATL)'),
]
neg = [expr('At(C1, JFK)'),
expr('At(C1, ATL)'),
expr('At(C2, SFO)'),
expr('At(C2, ATL)'),
expr('At(C3, SFO)'),
expr('At(C3, JFK)'),
expr('In(C1, P1)'),
expr('In(C1, P2)'),
expr('In(C1, P3)'),
expr('In(C2, P1)'),
expr('In(C2, P2)'),
expr('In(C2, P3)'),
expr('In(C3, P1)'),
expr('In(C3, P2)'),
expr('In(C3, P3)'),
expr('At(P1, JFK)'),
expr('At(P1, ATL)'),
expr('At(P2, SFO)'),
expr('At(P2, ATL)'),
expr('At(P3, SFO)'),
expr('At(P3, JFK)'),
]
init = FluentState(pos, neg)
goal = [expr('At(C1, JFK)'),
expr('At(C2, SFO)'),
expr('At(C3, SFO)'),
]
return AirCargoProblem(cargos, planes, airports, init, goal)
def air_cargo_p3() -> AirCargoProblem:
cargos = ['C1', 'C2', 'C3', 'C4']
planes = ['P1', 'P2']
airports = ['SFO', 'JFK', 'ATL', 'ORD']
pos = [expr('At(C1, SFO)'),
expr('At(C2, JFK)'),
expr('At(C3, ATL)'),
expr('At(C4, ORD)'),
expr('At(P1, SFO)'),
expr('At(P2, JFK)'),
]
neg = [expr('At(C1, JFK)'),
expr('At(C1, ATL)'),
expr('At(C1, ORD)'),
expr('At(C2, SFO)'),
expr('At(C2, ATL)'),
expr('At(C2, ORD)'),
expr('At(C3, JFK)'),
expr('At(C3, SFO)'),
expr('At(C3, ORD)'),
expr('At(C4, JFK)'),
expr('At(C4, SFO)'),
expr('At(C4, ATL)'),
expr('In(C1, P1)'),
expr('In(C1, P2)'),
expr('In(C2, P1)'),
expr('In(C2, P2)'),
expr('In(C3, P1)'),
expr('In(C3, P2)'),
expr('In(C4, P1)'),
expr('In(C4, P2)'),
expr('At(P1, JFK)'),
expr('At(P1, ATL)'),
expr('At(P1, ORD)'),
expr('At(P2, SFO)'),
expr('At(P2, ATL)'),
expr('At(P2, ORD)'),
]
init = FluentState(pos, neg)
goal = [expr('At(C1, JFK)'),
expr('At(C2, SFO)'),
expr('At(C3, JFK)'),
expr('At(C4, SFO)'),
]
return AirCargoProblem(cargos, planes, airports, init, goal)
```
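Counting the ground actions produced by `get_actions` shows how quickly the action space grows: with c cargos, p planes, and a airports there are c·p·a loads, c·p·a unloads, and p·a·(a-1) fly actions.

```python
# Ground-action counts for get_actions(): c*p*a loads, c*p*a unloads,
# and p*a*(a-1) fly actions (ordered airport pairs with fr != to).
def action_count(c, p, a):
    return 2 * c * p * a + p * a * (a - 1)

assert action_count(2, 2, 2) == 20   # Problem 1
assert action_count(3, 3, 3) == 72   # Problem 2
assert action_count(4, 2, 4) == 88   # Problem 3
```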
- Problem 1 initial state and goal:
```
Init(At(C1, SFO) ∧ At(C2, JFK)
∧ At(P1, SFO) ∧ At(P2, JFK)
∧ Cargo(C1) ∧ Cargo(C2)
∧ Plane(P1) ∧ Plane(P2)
∧ Airport(JFK) ∧ Airport(SFO))
Goal(At(C1, JFK) ∧ At(C2, SFO))
```
- Problem 2 initial state and goal:
```
Init(At(C1, SFO) ∧ At(C2, JFK) ∧ At(C3, ATL)
∧ At(P1, SFO) ∧ At(P2, JFK) ∧ At(P3, ATL)
∧ Cargo(C1) ∧ Cargo(C2) ∧ Cargo(C3)
∧ Plane(P1) ∧ Plane(P2) ∧ Plane(P3)
∧ Airport(JFK) ∧ Airport(SFO) ∧ Airport(ATL))
Goal(At(C1, JFK) ∧ At(C2, SFO) ∧ At(C3, SFO))
```
- Problem 3 initial state and goal:
```
Init(At(C1, SFO) ∧ At(C2, JFK) ∧ At(C3, ATL) ∧ At(C4, ORD)
∧ At(P1, SFO) ∧ At(P2, JFK)
∧ Cargo(C1) ∧ Cargo(C2) ∧ Cargo(C3) ∧ Cargo(C4)
∧ Plane(P1) ∧ Plane(P2)
∧ Airport(JFK) ∧ Airport(SFO) ∧ Airport(ATL) ∧ Airport(ORD))
Goal(At(C1, JFK) ∧ At(C3, JFK) ∧ At(C2, SFO) ∧ At(C4, SFO))
```
## Solving the problem
```
import argparse
from timeit import default_timer as timer
from planning_agent.aimacode.search import InstrumentedProblem
from planning_agent.aimacode.search import (breadth_first_search, astar_search,
breadth_first_tree_search, depth_first_graph_search, uniform_cost_search,
greedy_best_first_graph_search, depth_limited_search,
recursive_best_first_search)
PROBLEMS = [["Air Cargo Problem 1", air_cargo_p1],
["Air Cargo Problem 2", air_cargo_p2],
["Air Cargo Problem 3", air_cargo_p3]]
SEARCHES = [["breadth_first_search", breadth_first_search, ""],
['breadth_first_tree_search', breadth_first_tree_search, ""],
['depth_first_graph_search', depth_first_graph_search, ""],
['depth_limited_search', depth_limited_search, ""],
['uniform_cost_search', uniform_cost_search, ""],
['recursive_best_first_search', recursive_best_first_search, 'h_1'],
['greedy_best_first_graph_search', greedy_best_first_graph_search, 'h_1'],
['astar_search', astar_search, 'h_1'],
['astar_search', astar_search, 'h_ignore_preconditions'],
['astar_search', astar_search, 'h_pg_levelsum'],
]
class PrintableProblem(InstrumentedProblem):
""" InstrumentedProblem keeps track of stats during search, and this class modifies the print output of those
statistics for air cargo problems """
def __repr__(self):
return '{:^10d} {:^10d} {:^10d}'.format(self.succs, self.goal_tests, self.states)
def show_solution(node, elapsed_time):
print("Plan length: {} Time elapsed in seconds: {}".format(len(node.solution()), elapsed_time))
for action in node.solution():
print("{}{}".format(action.name, action.args))
def run_search(problem, search_function, parameter=None):
start = timer()
ip = PrintableProblem(problem)
if parameter is not None:
node = search_function(ip, parameter)
else:
node = search_function(ip)
end = timer()
print("\nExpansions Goal Tests New Nodes")
print("{}\n".format(ip))
show_solution(node, end - start)
print()
def main(p_choices, s_choices):
problems = [PROBLEMS[i-1] for i in map(int, p_choices)]
searches = [SEARCHES[i-1] for i in map(int, s_choices)]
for pname, p in problems:
for sname, s, h in searches:
hstring = h if not h else " with {}".format(h)
print("\nSolving {} using {}{}...".format(pname, sname, hstring))
_p = p()
_h = None if not h else getattr(_p, h)
run_search(_p, s, _h)
if __name__=="__main__":
main([1,2,3],[1,9])
```
>>> Work in Progress (The following are my lecture notes of Prof Andrew Ng / Head TA Raphael Townshend - CS229 - Stanford. This is my interpretation of his excellent teaching and I take full responsibility for any misinterpretation/misinformation provided herein.)
## Lecture Notes
#### Outline
- Decision Trees
- Ensemble Methods
- Bagging
- Random Forests
- Boosting
### Decision Trees
- Non-linear model
- A model is called linear if the hypothesis function is of the form $h(x) = \theta^{T}x$
- Ski example - months vs latitude - when you can ski
- we cannot get a linear classifier or use SVM for this
- with decision trees you will have a very natural way of classifying this
- partition this into individual regions, isolating positive and negative examples
#### Selecting Regions - Greedy, Top-Down, Recursive Partitioning
- You ask a question to partition the space, then iteratively keep asking new questions, partitioning the space further
- Is latitude > 30
- Yes
- Is Month < 3
- Yes
- No
- No
- We are looking for a split function
- Region $R_{p}$
- Looking for a split $S_{p}$
> $S_{p}(j,t) = (\{ X|X_{j} \lt t, X \in R_{p}\}, \{ X|X_{j} \ge t, X \in R_{p}\} ) = (R_{1}, R_{2})$
- where j is the feature number and t is the threshold
#### How to choose splits
- isolate space of positives and negatives in this case
- Define L(R): loss on R
- Given classes C, define $\hat{p}_{C}$ to be the __proportion of examples__ in R that are of class C
- Define misclassification loss of any region as
> $L_{misclass}(R) = 1 - \max\limits_{C} \hat{p}_{C}$
- what we are saying here is: for any region that we have subdivided, we predict the most common class there, which is the maximum of $\hat{p}_{C}$; the remainder is the probability of a misclassification error
- We want to pick the split that maximizes the decrease in loss from the parent region $R_{p}$ to the children regions $R_{1}, R_{2}$
> $\max\limits_{j,t} L(R_{p}) - (L(R_{1}) + L(R_{2}))$
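The split selection above can be sketched as a brute-force search over features j and thresholds t (a toy version; real implementations sort each feature once instead of rescanning):

```python
# Toy greedy split search: maximize L(R_p) - (L(R_1) + L(R_2)) over (j, t),
# where L counts misclassified examples in a region (majority-class rule).
def misclass_count(labels):
    return len(labels) - max(labels.count(0), labels.count(1)) if labels else 0

def best_split(X, y):
    parent, best = misclass_count(y), None
    for j in range(len(X[0])):                    # feature index
        for t in sorted({row[j] for row in X}):   # candidate threshold
            left = [yi for row, yi in zip(X, y) if row[j] < t]
            right = [yi for row, yi in zip(X, y) if row[j] >= t]
            gain = parent - (misclass_count(left) + misclass_count(right))
            if best is None or gain > best[0]:
                best = (gain, j, t)
    return best

X = [[1.0], [2.0], [3.0], [4.0]]
y = [0, 0, 1, 1]
assert best_split(X, y) == (2, 0, 3.0)   # splitting at x < 3 separates the classes
```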
#### Why is misclassification loss the right loss
<img src="images/10_misclassificationLoss.png" width=400 height=400>
$\tiny{\text{YouTube-Stanford-CS229-Andrew Ng/Raphael Townshend}}$
- We might argue that the decision boundary in the right scenario is better than the left, because on the right we are isolating out more positives
- Loss of R1 and R2 region = 100 on right scenario
- Loss of R1' and R2' region = 100 on left scenario
- The loss of both parent Rp is also 100
- We can see that the misclassification loss is not sensitive enough
- the loss is not informative enough, because the parent-level loss is the same as the combined child-level loss
- Instead we can define __cross entropy loss__
> $L_{cross}(R) = - \sum\limits_{c}\hat{p}_{c} log_{2}\hat{p}_{c}$
- we are summing over the classes the proportion of elements in that class times the log of proportion in that class
- if a region is purely one class, we don't need to communicate anything: there is a 100% chance that an example is of that class
- if we have an even split, then we need to communicate a lot more information about the class
- Cross-entropy comes from information theory, where it measures the number of bits needed to transmit information, which is why it uses log base 2
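The sensitivity difference between the two losses can be checked numerically: with made-up counts, here is a split where the misclassification loss shows no decrease but cross-entropy does.

```python
import math

# Per-region losses as functions of the positive-class proportion p.
def misclass(p):
    return 1 - max(p, 1 - p)

def cross_entropy(p):
    return -sum(q * math.log2(q) for q in (p, 1 - p) if q > 0)

# Parent: 600 positives, 100 negatives, split into (400+, 100-) and (200+, 0-).
parent_m = misclass(600 / 700)
child_m = (500 / 700) * misclass(400 / 500) + (200 / 700) * misclass(1.0)
assert abs(parent_m - child_m) < 1e-9      # misclassification: no visible gain

parent_h = cross_entropy(600 / 700)
child_h = (500 / 700) * cross_entropy(400 / 500) + (200 / 700) * cross_entropy(1.0)
assert parent_h - child_h > 0.05           # cross-entropy: a real decrease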
#### Misclassification loss vs Cross-entropy loss
- Let the plot be between $\hat{p}$ - the proportion of positives in the set vs the loss
- the cross-entropy loss is a strictly concave curve
- Let $L(R_{1})$ and $L(R_{2})$ be the child loss plotted on the curve
- Let there be equal number of examples in both $R_{1}$ and $R_{2}$, are equally weighted
- the overall loss between the two is the average loss between the two, which is $\frac{L(R_{1}) + L(R_{2})}{2}$
- the parent node loss is the projected loss on the curve $L(R_{p})$
- the projection height is the change in loss
- as we see below, the parent $\hat{p}$ is the average of the child proportions
<img src="images/10_crossEntropyLoss.png" width=400 height=400>
$\tiny{\text{YouTube-Stanford-CS229-Andrew Ng/Raphael Townshend}}$
- the cross-entropy diagram
<img src="images/10_crossEntropyDiagram.png" width=400 height=400>
$\tiny{\text{YouTube-Stanford-CS229-Andrew Ng/Raphael Townshend}}$
- the misclassification loss diagram
- if both child node losses end up on the same side of the curve, there is no change in loss and hence no information gain with this kind of loss
- this is not a strictly concave curve
<img src="images/10_misrepresentationDiagram.png" width=400 height=400>
$\tiny{\text{YouTube-Stanford-CS229-Andrew Ng/Raphael Townshend}}$
- the split criteria that are successfully used in practice are strictly concave curves
- Gini curve
> $\sum\limits_{c}\hat{p}_{c}(1-\hat{p}_{c})$
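Strict concavity is what cross-entropy and Gini share: the loss at the parent proportion (the average of the child proportions, for equal-sized children) sits strictly above the average child loss. A quick midpoint check for Gini:

```python
# Binary Gini impurity: sum over classes of p_c * (1 - p_c) = 2 * p * (1 - p).
def gini(p):
    return 2 * p * (1 - p)

# Strict concavity: the midpoint value exceeds the average of the endpoint
# values whenever the two child proportions differ.
p1, p2 = 0.2, 0.9
assert gini((p1 + p2) / 2) > (gini(p1) + gini(p2)) / 2
```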
#### Regression Tree - Extension for decision tree
- So far we used decision tree for classification
- Decision trees can also be used for regression trees
- Example: Amount of snowfall
- Instead of predicting a class, you predict the mean of the values in the region
- For Region $R_{m}$, the prediction will be
> Predict $\hat{y}_{m} = \frac{\sum\limits_{i \in R_{m}}Y_{i}}{|R_{m}|}$
- sum all the values within the region and average them
<img src="images/10_regressionTrees.png" width=400 height=400>
$\tiny{\text{YouTube-Stanford-CS229-Andrew Ng/Raphael Townshend}}$
The loss will be
> $L_{squared} = \frac{\sum\limits_{i \in R_{m}} (y_{i} - \hat{y}_{m})^{2} }{|R_{m}|}$
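Per these formulas, a leaf region predicts the mean of its targets and is scored by the mean squared error around that mean — e.g. with toy snowfall amounts:

```python
# Regression-tree leaf: predict the mean target in the region, score by the
# mean squared error around that mean.
def leaf_prediction(ys):
    return sum(ys) / len(ys)

def leaf_loss(ys):
    y_hat = leaf_prediction(ys)
    return sum((y - y_hat) ** 2 for y in ys) / len(ys)

snowfall = [10.0, 12.0, 14.0]   # toy targets falling in region R_m
assert leaf_prediction(snowfall) == 12.0
assert abs(leaf_loss(snowfall) - 8 / 3) < 1e-12
```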
#### Categorical Variables
- can ask questions on any form of subset, e.g. is the location in the northern hemisphere?
- $location \in \{N\}$
- if there are q categories, the number of possible splits is $2^{q}$, which very quickly becomes intractable
#### Regularization of DTs
- if you keep splitting, you can end up with one region per data point, which is a clear case of overfitting
- Decision trees are high variance models
- So we need to regularize the decision tree models
- Heuristics for regularization
- minimum leaf size: stop splitting once a node contains fewer than a minimum number of examples
- max depth
- max number of nodes
- min decrease in loss
- Before split, the loss is: $L(R_{p})$
- After split, the loss is: $L(R_{1}) + L(R_{2})$
- if the decrease in loss from the split is not large enough, we might conclude that the split didn't gain us anything
- but this heuristic can miss interactions between variables: a split that looks useless on its own may enable a later split with a large gain
- pruning
- you grow the full tree and then check which nodes to prune out
- using a validation set, you evaluate the misclassification error for each example at each leaf, and prune the leaves whose removal does not hurt validation error
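One concrete way to do validation-based pruning, sketched here with scikit-learn's cost-complexity pruning (an assumption on our part; the lecture describes the idea generically, not this particular API):

```python
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier

X, y = make_classification(n_samples=400, random_state=0)
X_tr, X_val, y_tr, y_val = train_test_split(X, y, random_state=0)

# grow the full tree once to get the candidate pruning levels
path = DecisionTreeClassifier(random_state=0).cost_complexity_pruning_path(X_tr, y_tr)

# refit at each pruning level and keep the tree with the best validation accuracy
best = max(
    (DecisionTreeClassifier(random_state=0, ccp_alpha=max(0.0, a)).fit(X_tr, y_tr)
     for a in path.ccp_alphas),
    key=lambda tree: tree.score(X_val, y_val),
)
print(best.score(X_val, y_val))
```

The `max(0.0, a)` guard is there because the returned alphas can be slightly negative due to floating-point error.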
#### Runtime
- n train examples
- f features
- d depth of tree
##### Test time O(d)
- typically $d < \log_{2} n$ for a reasonably balanced tree
##### Train time
- Each point is part of O(d) nodes
- Cost of point at each node is O(f)
- for binary features, the cost will be f
- for quantitative features, sort and scan linearly, the cost will be f, as well
- Total cost is O(nfd)
- where the data matrix itself has size $nf$
- and the depth is about $\log n$
- so training is only a $\log n$ factor slower than reading the data, which is fairly fast
#### Downside of DT
- it does not have additive structure
- in the example below we get a very rough approximation of the decision boundary
- decision trees struggle when the features interact additively with one another
<img src="images/10_noAdditiveStructure.png" width=400 height=400>
$\tiny{\text{YouTube-Stanford-CS229-Andrew Ng/Raphael Townshend}}$
#### DT - Recap
- Pos
- Easy to explain
- Interpretable
- can deal with categorical variable
- generally fast
- Neg
- high variance problems - generally leads to overfitting
- Not additive
- Low predictive accuracy
- We can make it lot better with ensembling
### Ensembling
- take $X_{i}'s$ which are random variables that are independent identically distributed (i.i.d.)
> $Var(X_{i}) = \sigma^{2}$
> $Var(\bar{X}) = Var\left(\frac{1}{n}\sum\limits_{i}X_{i}\right) = \frac{\sigma^{2}}{n}$
- which means averaging more independent random variables decreases the variance of the mean
- If we drop the independence assumption, the $X_{i}$'s are only identically distributed (i.d.) and pairwise correlated by $\rho$
- So the variance of mean will be:
> $Var(\bar{X}) = \rho \sigma^{2} + \frac{1-\rho}{n} \sigma^{2}$
- if they are fully correlated ($\rho = 1$), it becomes $Var(\bar{X}) = \sigma^{2}$
- if there is no correlation($\rho = 0$), it becomes $Var(\bar{X}) = \frac{\sigma^{2}}{n} $
- so we want a large number of models $n$ to drive down the second term, and decorrelated models to drive down the first term
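Plugging numbers into the variance formula makes the two regimes concrete (hypothetical values):

```python
def var_of_mean(sigma2, rho, n):
    # Var(X_bar) = rho * sigma^2 + (1 - rho) / n * sigma^2
    return rho * sigma2 + (1 - rho) / n * sigma2

sigma2 = 4.0
print(var_of_mean(sigma2, rho=1.0, n=100))  # fully correlated: sigma^2 = 4.0
print(var_of_mean(sigma2, rho=0.0, n=100))  # independent: sigma^2 / n = 0.04
print(var_of_mean(sigma2, rho=0.3, n=100))  # the rho * sigma^2 floor dominates as n grows
```

With any nonzero $\rho$, adding more models only shrinks the second term; the first term is a floor that only decorrelation can lower.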
#### Ways to ensemble
- different algorithms, not really helpful
- use different training sets, not really helpful
- Bagging - Random Forest
- Boosting - Adaboost, xgboost
### Bagging
- Bootstrap aggregation
- bootstrapping is a method used in statistics to measure uncertainty
- Say that a true population is P
- Training set $S \sim P$
- Assume the training sample is the population: $P = S$
- Bootstrap samples $Z \sim S$
- Z is sampled from S: take a training sample S of cardinality N and sample N times from S with replacement, since we are treating S as the population
- Take model and then train on all these separate bootstrap samples
<br>
#### Bootstrap aggregation
- we will train separate models separately and then average their outputs
- Say we have bootstrap samples $Z_{1},...,Z_{M}$
- We train model $G_{m}$ on $Z_{m}$ and define
> Aggregate Predictor $G(x) = \frac{\sum\limits_{m=1}^{M}G_{m}(x)}{M}$
- This process is called bagging
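A minimal sketch of the whole procedure (hypothetical helper names; each base model here just predicts the mean of its bootstrap sample):

```python
import random

def bootstrap_sample(S, rng):
    # draw |S| points from S with replacement, treating S as the population
    return [rng.choice(S) for _ in S]

def bag(S, train, M, seed=0):
    # train one model per bootstrap sample and average their predictions
    rng = random.Random(seed)
    models = [train(bootstrap_sample(S, rng)) for _ in range(M)]
    return lambda x: sum(g(x) for g in models) / M

S = [1.0, 2.0, 3.0, 4.0]
G = bag(S, train=lambda Z: (lambda x: sum(Z) / len(Z)), M=50)
print(G(None))  # close to the overall mean of 2.5
```

Any model that accepts a training set can be plugged in as `train`; the aggregate predictor averages the ensemble exactly as in the formula above.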
#### Bias-Variance Analysis
> $Var(\bar{X}) = \rho \sigma^{2} + \frac{1-\rho}{n} \sigma^{2}$
- Bootstrapping is driving down $\rho$
- But what about the second term
- With more bootstrap samples, $M$ increases, driving down the second term
- A nice property of bagging is that increasing the number of bootstrap models does not cause more overfitting
- More $M$ means less variance
- But the bias of the model increases
- because each bootstrap sample is a random subsample of S, each model effectively sees less of the data, making it less complex and increasing the bias
#### Decision Trees + Bagging
- DT have high variance, low bias
- this makes DT ideal fit for bagging
### Random Forest
- RF is a combination of decision trees and bagging
- the random forest introduces even more randomization into each individual decision tree
- 1st - as we saw earlier, bootstrapping drives down $\rho$
- 2nd - if we can further decorrelate the random variables, we can drive down the variance even further
- At each split, RF considers only a random fraction of the total features
- 1st - this decreases $\rho$ in $Var(\bar{X})$
- 2nd - in a classification problem we may find one very strong predictor (in the ski example, the latitude split) that every tree would use at its first split, making all the models highly correlated; restricting the candidate features forces the trees to decorrelate
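The per-split feature subsampling can be sketched in a few lines (assumed names; a common default is about $\sqrt{f}$ candidate features per split):

```python
import math
import random

def candidate_features(num_features, rng):
    # at each split, consider only a random subset of roughly sqrt(f) features
    k = max(1, int(math.sqrt(num_features)))
    return rng.sample(range(num_features), k)

rng = random.Random(0)
print(candidate_features(16, rng))  # 4 distinct feature indices drawn from 0..15
```

Because a dominant feature is absent from many of these subsets, different trees are forced to split on different features, which is exactly what drives $\rho$ down.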
### Boosting
- In bagging we tried to reduce variance
- Boosting is the opposite: it tries to reduce bias
- Is additive
- In bagging, we took the average of a number of models
- In boosting, we train one model at a time, add it into the ensemble, and keep adding each new model into the prediction
- Decision stump - a tree that asks only one question
- restricting the tree depth to 1 increases the bias and decreases the variance
- boosting then reduces the bias by adding many stumps together
- Say we make a split and make some misclassifications
- we identify those mistakes and increase their weights
- in the next iteration, the learner works on the re-weighted set: because the misclassified samples now carry more weight, the next split is likely to focus on them
<img src="images/10_boosting.png" width=400 height=400>
$\tiny{\text{YouTube-Stanford-CS229-Andrew Ng/Raphael Townshend}}$
#### Adaboost
- Determine for each classifier $G_{m}$ a weight $\alpha_{m}$ proportional to the log odds of its weighted error
> $\log\left( \frac{1-err_{m}}{err_{m}}\right)$
- Total classifier
> $G(x) = \sum\limits_{m}\alpha_{m}G_{m}(x)$
- each $G_{m}$ is trained on re-weighted training set
- A similar mechanism is used to derive algorithms like XGBoost and gradient boosting machines, which dynamically re-weight the examples we get right or wrong and add models to the ensemble in an additive fashion
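One round of the re-weighting can be sketched as follows (a simplified sketch; classic AdaBoost scales the weight as $\alpha_{m} = \frac{1}{2}\log\frac{1-err_{m}}{err_{m}}$, and we use the unscaled log odds here to match the note above):

```python
import math

def adaboost_round(weights, predictions, labels):
    # weighted error of the current weak learner (predictions/labels are +/-1)
    err = sum(w for w, p, y in zip(weights, predictions, labels) if p != y)
    alpha = math.log((1 - err) / err)  # log-odds weight for this learner
    # up-weight the examples the learner got wrong, then renormalize
    new_w = [w * math.exp(alpha) if p != y else w
             for w, p, y in zip(weights, predictions, labels)]
    total = sum(new_w)
    return alpha, [w / total for w in new_w]

w = [0.25] * 4
alpha, w = adaboost_round(w, predictions=[1, 1, -1, -1], labels=[1, 1, 1, -1])
print(round(alpha, 4))  # log(0.75 / 0.25) = log 3 ~ 1.0986
print(w)                # the one misclassified example now carries weight 0.5
```

Repeating this round with a fresh weak learner each time, and summing $\alpha_{m} G_{m}(x)$, gives the total classifier above.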
```
import argparse
import logging
import math
import os
import random
import shutil
import time
from collections import OrderedDict
import numpy as np
import torch
import torch.nn.functional as F
import torch.optim as optim
from torch.optim.lr_scheduler import LambdaLR
from torch.utils.data import DataLoader, RandomSampler, SequentialSampler
from torch.utils.data.distributed import DistributedSampler
from torch.utils.tensorboard import SummaryWriter
from tqdm import tqdm
from dataset.custom import DATASET_GETTERS
from utils import AverageMeter, accuracy
logger = logging.getLogger(__name__)
best_acc = 0
def save_checkpoint(state, is_best, checkpoint, filename='checkpoint.pth.tar'):
filepath = os.path.join(checkpoint, filename)
torch.save(state, filepath)
if is_best:
shutil.copyfile(filepath, os.path.join(checkpoint,
'model_best.pth.tar'))
def set_seed(args):
random.seed(args.seed)
np.random.seed(args.seed)
torch.manual_seed(args.seed)
if args.n_gpu > 0:
torch.cuda.manual_seed_all(args.seed)
def get_cosine_schedule_with_warmup(optimizer,
num_warmup_steps,
num_training_steps,
num_cycles=7./16.,
last_epoch=-1):
def _lr_lambda(current_step):
if current_step < num_warmup_steps:
return float(current_step) / float(max(1, num_warmup_steps))
no_progress = float(current_step - num_warmup_steps) / \
float(max(1, num_training_steps - num_warmup_steps))
return max(0., math.cos(math.pi * num_cycles * no_progress))
return LambdaLR(optimizer, _lr_lambda, last_epoch)
def interleave(x, size):
s = list(x.shape)
return x.reshape([-1, size] + s[1:]).transpose(0, 1).reshape([-1] + s[1:])
def de_interleave(x, size):
s = list(x.shape)
return x.reshape([size, -1] + s[1:]).transpose(0, 1).reshape([-1] + s[1:])
def main():
parser = argparse.ArgumentParser(description='PyTorch FixMatch Training')
parser.add_argument('--gpu-id', default='0', type=int,
help='id(s) for CUDA_VISIBLE_DEVICES')
parser.add_argument('--num-workers', type=int, default=4,
help='number of workers')
parser.add_argument('--dataset', default='cifar10', type=str,
choices=['cifar10', 'cifar100'],
help='dataset name')
parser.add_argument('--num-labeled', type=int, default=4000,
help='number of labeled data')
parser.add_argument("--expand-labels", action="store_true",
help="expand labels to fit eval steps")
parser.add_argument('--arch', default='wideresnet', type=str,
choices=['wideresnet', 'resnext'],
help='dataset name')
parser.add_argument('--total-steps', default=2**20, type=int,
help='number of total steps to run')
parser.add_argument('--eval-step', default=1024, type=int,
help='number of eval steps to run')
parser.add_argument('--start-epoch', default=0, type=int,
help='manual epoch number (useful on restarts)')
parser.add_argument('--batch-size', default=64, type=int,
help='train batchsize')
parser.add_argument('--lr', '--learning-rate', default=0.03, type=float,
help='initial learning rate')
parser.add_argument('--warmup', default=0, type=float,
help='warmup epochs (unlabeled data based)')
parser.add_argument('--wdecay', default=5e-4, type=float,
help='weight decay')
parser.add_argument('--nesterov', action='store_true', default=True,
help='use nesterov momentum')
parser.add_argument('--use-ema', action='store_true', default=True,
help='use EMA model')
parser.add_argument('--ema-decay', default=0.999, type=float,
help='EMA decay rate')
parser.add_argument('--mu', default=7, type=int,
help='coefficient of unlabeled batch size')
parser.add_argument('--lambda-u', default=1, type=float,
help='coefficient of unlabeled loss')
parser.add_argument('--T', default=1, type=float,
help='pseudo label temperature')
parser.add_argument('--threshold', default=0.95, type=float,
help='pseudo label threshold')
parser.add_argument('--out', default='result',
help='directory to output the result')
parser.add_argument('--resume', default='', type=str,
help='path to latest checkpoint (default: none)')
parser.add_argument('--seed', default=None, type=int,
help="random seed")
parser.add_argument("--amp", action="store_true",
help="use 16-bit (mixed) precision through NVIDIA apex AMP")
parser.add_argument("--opt_level", type=str, default="O1",
help="apex AMP optimization level selected in ['O0', 'O1', 'O2', and 'O3']."
"See details at https://nvidia.github.io/apex/amp.html")
parser.add_argument("--local_rank", type=int, default=-1,
help="For distributed training: local_rank")
parser.add_argument('--no-progress', action='store_true',
help="don't use progress bar")
args = parser.parse_args()
global best_acc
def create_model(args):
if args.arch == 'wideresnet':
import models.wideresnet as models
model = models.build_wideresnet(depth=args.model_depth,
widen_factor=args.model_width,
dropout=0,
num_classes=args.num_classes)
elif args.arch == 'resnext':
import models.resnext as models
model = models.build_resnext(cardinality=args.model_cardinality,
depth=args.model_depth,
width=args.model_width,
num_classes=args.num_classes)
logger.info("Total params: {:.2f}M".format(
sum(p.numel() for p in model.parameters())/1e6))
return model
if args.local_rank == -1:
device = torch.device('cuda', args.gpu_id)
args.world_size = 1
args.n_gpu = torch.cuda.device_count()
else:
torch.cuda.set_device(args.local_rank)
device = torch.device('cuda', args.local_rank)
torch.distributed.init_process_group(backend='nccl')
args.world_size = torch.distributed.get_world_size()
args.n_gpu = 1
args.device = device
logging.basicConfig(
format="%(asctime)s - %(levelname)s - %(name)s - %(message)s",
datefmt="%m/%d/%Y %H:%M:%S",
level=logging.INFO if args.local_rank in [-1, 0] else logging.WARN)
logger.warning(
f"Process rank: {args.local_rank}, "
f"device: {args.device}, "
f"n_gpu: {args.n_gpu}, "
f"distributed training: {bool(args.local_rank != -1)}, "
f"16-bits training: {args.amp}",)
logger.info(dict(args._get_kwargs()))
    args.dataset = 'custom'  # assign (not compare): select the custom dataset getter
args.num_classes = 8
if args.seed is not None:
set_seed(args)
if args.local_rank in [-1, 0]:
os.makedirs(args.out, exist_ok=True)
args.writer = SummaryWriter(args.out)
if args.arch == 'wideresnet':
args.model_depth = 28
args.model_width = 2
elif args.arch == 'resnext':
args.model_cardinality = 4
args.model_depth = 28
args.model_width = 4
if args.local_rank not in [-1, 0]:
torch.distributed.barrier()
labeled_dataset, unlabeled_dataset, test_dataset = DATASET_GETTERS[args.dataset](
args, './data')
if args.local_rank == 0:
torch.distributed.barrier()
train_sampler = RandomSampler if args.local_rank == -1 else DistributedSampler
labeled_trainloader = DataLoader(
labeled_dataset,
sampler=train_sampler(labeled_dataset),
batch_size=args.batch_size,
num_workers=args.num_workers,
drop_last=True)
unlabeled_trainloader = DataLoader(
unlabeled_dataset,
sampler=train_sampler(unlabeled_dataset),
batch_size=args.batch_size*args.mu,
num_workers=args.num_workers,
drop_last=True)
test_loader = DataLoader(
test_dataset,
sampler=SequentialSampler(test_dataset),
batch_size=args.batch_size,
num_workers=args.num_workers)
if args.local_rank not in [-1, 0]:
torch.distributed.barrier()
model = create_model(args)
if args.local_rank == 0:
torch.distributed.barrier()
model.to(args.device)
no_decay = ['bias', 'bn']
grouped_parameters = [
{'params': [p for n, p in model.named_parameters() if not any(
nd in n for nd in no_decay)], 'weight_decay': args.wdecay},
{'params': [p for n, p in model.named_parameters() if any(
nd in n for nd in no_decay)], 'weight_decay': 0.0}
]
optimizer = optim.SGD(grouped_parameters, lr=args.lr,
momentum=0.9, nesterov=args.nesterov)
args.epochs = math.ceil(args.total_steps / args.eval_step)
scheduler = get_cosine_schedule_with_warmup(
optimizer, args.warmup, args.total_steps)
if args.use_ema:
from models.ema import ModelEMA
ema_model = ModelEMA(args, model, args.ema_decay)
args.start_epoch = 0
if args.resume:
logger.info("==> Resuming from checkpoint..")
assert os.path.isfile(
args.resume), "Error: no checkpoint directory found!"
args.out = os.path.dirname(args.resume)
checkpoint = torch.load(args.resume)
best_acc = checkpoint['best_acc']
args.start_epoch = checkpoint['epoch']
model.load_state_dict(checkpoint['state_dict'])
if args.use_ema:
ema_model.ema.load_state_dict(checkpoint['ema_state_dict'])
optimizer.load_state_dict(checkpoint['optimizer'])
scheduler.load_state_dict(checkpoint['scheduler'])
if args.amp:
from apex import amp
model, optimizer = amp.initialize(
model, optimizer, opt_level=args.opt_level)
if args.local_rank != -1:
model = torch.nn.parallel.DistributedDataParallel(
model, device_ids=[args.local_rank],
output_device=args.local_rank, find_unused_parameters=True)
logger.info("***** Running training *****")
logger.info(f" Task = {args.dataset}@{args.num_labeled}")
logger.info(f" Num Epochs = {args.epochs}")
logger.info(f" Batch size per GPU = {args.batch_size}")
logger.info(
f" Total train batch size = {args.batch_size*args.world_size}")
logger.info(f" Total optimization steps = {args.total_steps}")
model.zero_grad()
train(args, labeled_trainloader, unlabeled_trainloader, test_loader,
model, optimizer, ema_model, scheduler)
def test(args, test_loader, model, epoch):
batch_time = AverageMeter()
data_time = AverageMeter()
losses = AverageMeter()
top1 = AverageMeter()
top5 = AverageMeter()
end = time.time()
if not args.no_progress:
test_loader = tqdm(test_loader,
disable=args.local_rank not in [-1, 0])
with torch.no_grad():
for batch_idx, (inputs, targets) in enumerate(test_loader):
data_time.update(time.time() - end)
model.eval()
inputs = inputs.to(args.device)
targets = targets.to(args.device)
outputs = model(inputs)
loss = F.cross_entropy(outputs, targets)
prec1, prec5 = accuracy(outputs, targets, topk=(1, 5))
losses.update(loss.item(), inputs.shape[0])
top1.update(prec1.item(), inputs.shape[0])
top5.update(prec5.item(), inputs.shape[0])
batch_time.update(time.time() - end)
end = time.time()
if not args.no_progress:
test_loader.set_description("Test Iter: {batch:4}/{iter:4}. Data: {data:.3f}s. Batch: {bt:.3f}s. Loss: {loss:.4f}. top1: {top1:.2f}. top5: {top5:.2f}. ".format(
batch=batch_idx + 1,
iter=len(test_loader),
data=data_time.avg,
bt=batch_time.avg,
loss=losses.avg,
top1=top1.avg,
top5=top5.avg,
))
if not args.no_progress:
test_loader.close()
logger.info("top-1 acc: {:.2f}".format(top1.avg))
logger.info("top-5 acc: {:.2f}".format(top5.avg))
return losses.avg, top1.avg
if __name__ == '__main__':
main()
```