# Mining Twitter
Twitter implements OAuth 1.0A as its standard authentication mechanism, and in order to use it to make requests to Twitter's API, you'll need to go to https://developer.twitter.com/en/apps and create a sample application. It is possible that Twitter no longer supports sandboxed applications and you may need to submit a request for permission to develop an app on Twitter.
There are four primary identifiers you'll need to note for an OAuth 1.0A workflow: consumer key, consumer secret, access token, and access token secret. Note that you will need an ordinary Twitter account in order to log in, create an app, and get these credentials.
<img src="resources/ch01-twitter/images/Twitter-AppCredentials.png" width="600px">
If you are running this code on Binder or from the Docker container, you should be able to execute the code in this notebook without any worries about installing dependencies. If you are running the code from your own development environment, however, be advised that the examples in this chapter take advantage of a Python package called [twitter](https://github.com/sixohsix/twitter) to make API calls. You can install this package in a terminal with [pip](https://pypi.python.org/pypi/pip) with the command `pip install twitter`, preferably from within a [Python virtual environment](https://pypi.python.org/pypi/virtualenv).
Once installed, you should be able to open up a Python interpreter (or better yet, your [IPython](http://ipython.org/) interpreter) and get rolling.
## Authorizing an application to access Twitter account data
```
import twitter
import os
from dotenv import load_dotenv
load_dotenv()
# Go to https://developer.twitter.com/en/apps to create an app and get values
# for these credentials, which you'll need to provide in place of these
# empty string values that are defined as placeholders.
# See https://developer.twitter.com/en/docs/basics/authentication/overview/oauth
# for more information on Twitter's OAuth implementation.
CONSUMER_KEY = os.getenv("CONSUMER_KEY")
CONSUMER_SECRET = os.getenv("CONSUMER_SECRET")
OAUTH_TOKEN = os.getenv("ACCESS_TOKEN")
OAUTH_TOKEN_SECRET = os.getenv("ACCESS_TOKEN_SECRET")
auth = twitter.oauth.OAuth(OAUTH_TOKEN, OAUTH_TOKEN_SECRET,
                           CONSUMER_KEY, CONSUMER_SECRET)
twitter_api = twitter.Twitter(auth=auth)
# Nothing to see by displaying twitter_api except that it's now a
# defined variable
print(twitter_api)
```
## Retrieving trends
```
# The Yahoo! Where On Earth ID for the entire world is 1.
# See https://dev.twitter.com/docs/api/1.1/get/trends/place and
# http://developer.yahoo.com/geo/geoplanet/
WORLD_WOE_ID = 1
US_WOE_ID = 23424977
# Prefix ID with the underscore for query string parameterization.
# Without the underscore, the twitter package appends the ID value
# to the URL itself as a special case keyword argument.
world_trends = twitter_api.trends.place(_id=WORLD_WOE_ID)
us_trends = twitter_api.trends.place(_id=US_WOE_ID)
print(world_trends)
print()
print(us_trends)
for trend in world_trends[0]['trends']:
    print(trend['name'])

for trend in us_trends[0]['trends']:
    print(trend['name'])

world_trends_set = set([trend['name']
                        for trend in world_trends[0]['trends']])

us_trends_set = set([trend['name']
                     for trend in us_trends[0]['trends']])

common_trends = world_trends_set.intersection(us_trends_set)
print(common_trends)
```
## Anatomy of a Tweet
```
import json
# Set this variable to a trending topic,
# or anything else for that matter. The example query below
# was a trending topic when this content was being developed
# and is used throughout the remainder of this chapter.
q = '#MothersDay'
count = 100
# Import unquote to prevent url encoding errors in next_results
from urllib.parse import unquote
# See https://dev.twitter.com/rest/reference/get/search/tweets
search_results = twitter_api.search.tweets(q=q, count=count)
statuses = search_results['statuses']
# Iterate through 5 more batches of results by following the cursor
for _ in range(5):
    print('Length of statuses', len(statuses))
    try:
        next_results = search_results['search_metadata']['next_results']
    except KeyError:  # No more results when next_results doesn't exist
        break

    # Create a dictionary from next_results, which has the following form:
    # ?max_id=847960489447628799&q=%23RIPSelena&count=100&include_entities=1
    kwargs = dict([ kv.split('=') for kv in unquote(next_results[1:]).split("&") ])

    search_results = twitter_api.search.tweets(**kwargs)
    statuses += search_results['statuses']
# Show one sample search result by slicing the list...
print(json.dumps(statuses[0], indent=1))
for i in range(10):
    print()
    print(statuses[i]['text'])
    print('Favorites: ', statuses[i]['favorite_count'])
    print('Retweets: ', statuses[i]['retweet_count'])
```
## Extracting text, screen names, and hashtags from tweets
```
status_texts = [ status['text']
                 for status in statuses ]

screen_names = [ user_mention['screen_name']
                 for status in statuses
                 for user_mention in status['entities']['user_mentions'] ]

hashtags = [ hashtag['text']
             for status in statuses
             for hashtag in status['entities']['hashtags'] ]

# Compute a collection of all words from all tweets
words = [ w
          for t in status_texts
          for w in t.split() ]
# Explore the first 5 items for each...
print(json.dumps(status_texts[0:5], indent=1))
print(json.dumps(screen_names[0:5], indent=1) )
print(json.dumps(hashtags[0:5], indent=1))
print(json.dumps(words[0:5], indent=1))
```
## Creating a basic frequency distribution from the words in tweets
```
from collections import Counter
for item in [words, screen_names, hashtags]:
    c = Counter(item)
    print(c.most_common()[:10])  # top 10
    print()
```
## Using prettytable to display tuples in a nice tabular format
```
from prettytable import PrettyTable
for label, data in (('Word', words),
                    ('Screen Name', screen_names),
                    ('Hashtag', hashtags)):
    pt = PrettyTable(field_names=[label, 'Count'])
    c = Counter(data)
    [ pt.add_row(kv) for kv in c.most_common()[:10] ]
    pt.align[label], pt.align['Count'] = 'l', 'r'  # Set column alignment
    print(pt)
```
## Calculating lexical diversity for tweets
```
# A function for computing lexical diversity
def lexical_diversity(tokens):
return len(set(tokens))/len(tokens)
# A function for computing the average number of words per tweet
def average_words(statuses):
total_words = sum([ len(s.split()) for s in statuses ])
return total_words/len(statuses)
print(lexical_diversity(words))
print(lexical_diversity(screen_names))
print(lexical_diversity(hashtags))
print(average_words(status_texts))
```
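As a quick sanity check on the metric itself, here's a tiny hypothetical example (the token list is made up, not drawn from the search results above): lexical diversity is just the ratio of unique tokens to total tokens.

```
def lexical_diversity(tokens):
    return len(set(tokens))/len(tokens)

# 5 tokens, 4 of them unique -> 4/5
print(lexical_diversity(['happy', 'mothers', 'day', 'happy', 'mom']))  # 0.8
```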
## Finding the most popular retweets
```
retweets = [
            # Store out a tuple of these values ...
            (status['retweet_count'],
             status['retweeted_status']['user']['screen_name'],
             status['retweeted_status']['id'],
             status['text'])

            # ... for each status ...
            for status in statuses

            # ... so long as the status meets this condition.
            if 'retweeted_status' in status.keys()
           ]

# Slice off the first 5 from the sorted results and display each item in the tuple
pt = PrettyTable(field_names=['Count', 'Screen Name', 'Tweet ID', 'Text'])
[ pt.add_row(row) for row in sorted(retweets, reverse=True)[:5] ]
pt.max_width['Text'] = 50
pt.align = 'l'
print(pt)
```
## Looking up users who have retweeted a status
```
# Get the original tweet id for a tweet from its retweeted_status node
# and insert it here
_retweets = twitter_api.statuses.retweets(id=862359093398261760)
print([r['user']['screen_name'] for r in _retweets])
```
## Plotting frequencies of words
```
import matplotlib.pyplot as plt
%matplotlib inline
word_counts = sorted(Counter(words).values(), reverse=True)
plt.loglog(word_counts)
plt.ylabel("Freq")
plt.xlabel("Word Rank")
```
## Generating histograms of words, screen names, and hashtags
```
for label, data in (('Words', words),
                    ('Screen Names', screen_names),
                    ('Hashtags', hashtags)):

    # Build a frequency map for each set of data
    # and plot the values
    c = Counter(data)
    plt.hist(list(c.values()))

    # Add a title and y-label ...
    plt.title(label)
    plt.ylabel("Number of items in bin")
    plt.xlabel("Bins (number of times an item appeared)")

    # ... and display as a new figure
    plt.figure()
```
## Generating a histogram of retweet counts
```
# Using underscores while unpacking values in
# a tuple is idiomatic for discarding them
counts = [count for count, _, _, _ in retweets]
plt.hist(counts)
plt.title('Retweets')
plt.xlabel('Bins (number of times retweeted)')
plt.ylabel('Number of tweets in bin')
```
## Sentiment Analysis
```
# pip install nltk
import nltk
nltk.download('vader_lexicon')
import numpy as np
from nltk.sentiment.vader import SentimentIntensityAnalyzer
twitter_stream = twitter.TwitterStream(auth=auth)
iterator = twitter_stream.statuses.sample()
tweets = []
for tweet in iterator:
    try:
        if tweet['lang'] == 'en':
            tweets.append(tweet)
    except KeyError:  # some stream items (e.g. deletions) have no 'lang' field
        pass
    if len(tweets) == 100:
        break
analyzer = SentimentIntensityAnalyzer()
analyzer.polarity_scores('Hello')
analyzer.polarity_scores('I really enjoy this video series.')
analyzer.polarity_scores('I REALLY enjoy this video series.')
analyzer.polarity_scores('I REALLY enjoy this video series!!!')
analyzer.polarity_scores('I REALLY did not enjoy this video series!!!')
scores = np.zeros(len(tweets))
for i, t in enumerate(tweets):
    # Extract the text portion of the tweet
    text = t['text']

    # Measure the polarity of the tweet
    polarity = analyzer.polarity_scores(text)

    # Store the normalized, weighted composite score
    scores[i] = polarity['compound']
most_positive = np.argmax(scores)
most_negative = np.argmin(scores)
print('{0:6.3f} : "{1}"'.format(scores[most_positive], tweets[most_positive]['text']))
print('{0:6.3f} : "{1}"'.format(scores[most_negative], tweets[most_negative]['text']))
```
```
"""
Today we will be looking at the 2 Naive Bayes classification algorithms SeaLion has to offer - gaussian and multinomial (more common).
Both of them use the same underlying principles and as usual we'll explain them step by step.
"""
# first import
import sealion as sl
from sealion.naive_bayes import GaussianNaiveBayes, MultinomialNaiveBayes
"""
We'll first start with gaussian naive bayes. The way it works is by creating a normal (gaussian) curve to measure the
probability of any certain feature occuring for a given class. It looks at the probability for a feature to be on
each class possible. The way it makes its predictions on a given data point is by just looking at the probability of
each feature in the point for each class, and as it after aggregating all of the probabilities for all of the features
will predict the class with the highest probability.
"""
# we will use the iris dataset for this
from sklearn.datasets import load_iris
X, y = load_iris()['data'], load_iris()['target']
# and let's split
from sklearn.model_selection import train_test_split
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size = 0.4, random_state = 3) # another thing to note :
# with naive bayes, try to always have as balanced data for all classes as possible.
# we can now set up the model
gnb = GaussianNaiveBayes()
gnb.fit(X_train, y_train) # fit the model
gnb.evaluate(X_test, y_test) # we can evaluate it
# WOAH! Looks like we do pretty well with this model. Let's see how much we got wrong.
y_pred = gnb.predict(X_test)
y_pred == y_test
# 1 wrong. Super simple, right?
# onto multinomial naive bayes
"""
Multinomial Naive Bayes is a type of naive bayes that will work with stuff like text classification, where you have
a dataset where each observation/data point is just a word. This could look like : ["hello", "what", "do", "you", "want", "from", "me"]
for a given data point. Each feature is the exact same here, so what if a model could look split all data into its classes,
and then see the probability of finding a feature (i.e. "hello") for that class. For example if you have a dataset of 100 emails,
50 spam and 50 ham - you can split the 100 into a dataset of 50 spam and 50 ham and then count the number of
times "hello" and all other features show up in each of those 50 class-datasets (doesn't matter where.) Then if you are given a new
data point you can see the probability of seeing each of its features for each class, and choose the class with the
highest probability. This is the underlying idea behind multinomial naive bayes.
"""
# let's get started
# the spam dataset is available here : https://www.kaggle.com/uciml/sms-spam-collection-dataset
import pandas as pd
spam_df = pd.read_csv("spam.csv", engine = "python", encoding='ISO-8859-1') # we need to manually define the encoding
spam_df # print it out
# as usual data manipulation is honestly not as fun as the algorithms, so we're going to have to get our hands dirty
X, y = spam_df['v2'], spam_df['v1']
X, y # let's print this stuff out
# it looks like we have plenty of data
# the first step is to tokenize, where we take the strings in each data point and turn them into unique numbers. This
# will apply throughout, so "hello" as 100 in one data point is the same for another
VOCAB_SIZE = 10000 # we allow 10000 words
from tensorflow.keras.preprocessing.text import Tokenizer
tokenizer = Tokenizer(num_words = VOCAB_SIZE)
tokenizer.fit_on_texts(X)
X_seq = tokenizer.texts_to_sequences(X)
from tensorflow.keras.preprocessing.sequence import pad_sequences
# we'll also want to pad it, meaning that we make sure everything is the same length
X_pad = pad_sequences(X_seq, maxlen = 100, truncating = "post", padding = "post")
# and we will want to split it up now
from sklearn.model_selection import train_test_split
import numpy as np
y = np.array(y)
y[np.where(y == "ham")] = 0
y[np.where(y == "spam")] = 1 # spam is 1
X_train, X_test, y_train, y_test = train_test_split(X_pad, y, test_size = 0.15, random_state = 3)
# let's print out X_train
X_train
# time to start using Multinomial Naive Bayes
mnb = MultinomialNaiveBayes()
mnb.fit(X_train, y_train)
# time to evaluate
mnb.evaluate(X_test, y_test)
# dang ... but hmmm is it just predicting 0s? Is that why?
mnb.predict(X_test)[:10]
# looks like it did phenomenal. And of course, we're going to use a confusion matrix.
from sealion.utils import confusion_matrix
confusion_matrix(mnb.predict(X_test), y_test)
# The only thing we get wrong is thinking something is fine when it's not. I think that's better than
# the opposite, where you miss something important and it goes into your spam folder...
# Looks like that's the end for us. As usual, I hope you enjoyed this tutorial!
```
<a href="https://colab.research.google.com/github/Sid-Oya/DS-Unit-2-Linear-Models/blob/master/DSPT7_LESSON_Unit_2_Sprint_1_Module_1.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a>
Lambda School Data Science
*Unit 2, Sprint 1, Module 1*
---
# Regression 1
- Begin with baselines for regression
- Use scikit-learn to fit a linear regression
- Explain the coefficients from a linear regression
Brandon Rohrer wrote a good blog post, [“What questions can machine learning answer?”](https://brohrer.github.io/five_questions_data_science_answers.html)
We’ll focus on two of these questions in Unit 2. These are both types of “supervised learning.”
- “How Much / How Many?” (Regression)
- “Is this A or B?” (Classification)
This unit, you’ll build supervised learning models with “tabular data” (data in tables, like spreadsheets). Including, but not limited to:
- Predict New York City real estate prices <-- **Today, we'll start this!**
- Predict which water pumps in Tanzania need repairs
- Choose your own labeled, tabular dataset, train a predictive model, and publish a blog post or web app with visualizations to explain your model!
### Setup
Run the code cell below. You can work locally (follow the [local setup instructions](https://lambdaschool.github.io/ds/unit2/local/)) or on Colab.
Libraries:
- ipywidgets
- pandas
- plotly
- scikit-learn
If your **Plotly** visualizations aren't working:
- You must have JavaScript enabled in your browser
- You probably want to use Chrome or Firefox
- You may need to turn off ad blockers
- [If you're using Jupyter Lab locally, you need to install some "extensions"](https://plot.ly/python/getting-started/#jupyterlab-support-python-35)
```
import sys

# If you're on Colab:
if 'google.colab' in sys.modules:
    DATA_PATH = 'https://raw.githubusercontent.com/LambdaSchool/DS-Unit-2-Applied-Modeling/master/data/'

# If you're working locally:
else:
    DATA_PATH = '../data/'

# Ignore this Numpy warning when using Plotly Express:
# FutureWarning: Method .ptp is deprecated and will be removed in a future version. Use numpy.ptp instead.
import warnings
warnings.filterwarnings(action='ignore', category=FutureWarning, module='numpy')
```
# Begin with baselines for regression
## Overview
### Predict how much a NYC condo costs 🏠💸
Regression models output continuous numbers, so we can use regression to answer questions like "How much?" or "How many?"
Often, the question is "How much will this cost? How many dollars?"
For example, here's a fun YouTube video, which we'll use as our scenario for this lesson:
[Amateurs & Experts Guess How Much a NYC Condo With a Private Terrace Costs](https://www.youtube.com/watch?v=JQCctBOgH9I)
> Real Estate Agent Leonard Steinberg just sold a pre-war condo in New York City's Tribeca neighborhood. We challenged three people - an apartment renter, an apartment owner and a real estate expert - to try to guess how much the apartment sold for. Leonard reveals more and more details to them as they refine their guesses.
The condo from the video is **1,497 square feet**, built in 1852, and is in a desirable neighborhood. According to the real estate agent, _"Tribeca is known to be one of the most expensive ZIP codes in all of the United States of America."_
How can we guess what this condo sold for? Let's look at 3 methods:
1. Heuristics
2. Descriptive Statistics
3. Predictive Model
## Follow Along
### 1. Heuristics
Heuristics are "rules of thumb" that people use to make decisions and judgments. The video participants discussed their heuristics:
**Participant 1**, Chinwe, is a real estate amateur. She rents her apartment in New York City. Her first guess was \$8 million, and her final guess was \$15 million.
[She said](https://youtu.be/JQCctBOgH9I?t=465), _"People just go crazy for numbers like 1852. You say **'pre-war'** to anyone in New York City, they will literally sell a kidney. They will just give you their children."_
**Participant 3**, Pam, is an expert. She runs a real estate blog. Her first guess was \$1.55 million, and her final guess was \$2.2 million.
[She explained](https://youtu.be/JQCctBOgH9I?t=280) her first guess: _"I went with a number that I think is kind of the going rate in the location, and that's **a thousand bucks a square foot.**"_
**Participant 2**, Mubeen, is between the others in his expertise level. He owns his apartment in New York City. His first guess was \$1.7 million, and his final guess was also \$2.2 million.
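Pam's thousand-dollars-a-square-foot heuristic can be checked with quick arithmetic (the square footage comes from the video; the snippet itself is just an illustration):

```
square_feet = 1497
price_per_sqft = 1000  # Pam's "thousand bucks a square foot" rule of thumb
estimate = square_feet * price_per_sqft
print(f'${estimate:,}')  # $1,497,000 -- close to her first guess of $1.55 million
```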
### 2. Descriptive Statistics
We can use data to try to do better than these heuristics. How much have other Tribeca condos sold for?
Let's answer this question with a relevant dataset, containing most of the single residential unit, elevator apartment condos sold in Tribeca, from January through April 2019.
We can get descriptive statistics for the dataset's `SALE_PRICE` column.
How many condo sales are in this dataset? What was the average sale price? The median? Minimum? Maximum?
```
import pandas as pd
df = pd.read_csv(DATA_PATH+'condos/tribeca.csv')
pd.options.display.float_format = '{:,.0f}'.format
df['SALE_PRICE'].describe()
import matplotlib.pyplot as plt
import seaborn as sns
sns.distplot(df['SALE_PRICE'], kde=False);
plt.axvline(df['SALE_PRICE'].mean(), color='blue')
plt.axvline(df['SALE_PRICE'].median(), color='red')
```
On average, condos in Tribeca have sold for \$3.9 million. So that could be a reasonable first guess.
In fact, here's the interesting thing: **we could use this one number as a "prediction", if we didn't have any data except for sales price...**
Imagine we didn't have any other information about condos. What would you tell somebody, if you had some sales prices like this but didn't have any of these other columns, and somebody asked you, "How much do you think a condo in Tribeca costs?"
You could say, "Well, I've got 90 sales prices here, and I see that on average they cost \$3.9 million."
So we do this all the time in the real world. We use descriptive statistics for prediction. And that's not wrong or bad, in fact **that's where you should start. This is called the _mean baseline_.**
```
```
**Baseline** is an overloaded term, with multiple meanings:
1. [**The score you'd get by guessing**](https://twitter.com/koehrsen_will/status/1088863527778111488)
2. [**Fast, first models that beat guessing**](https://blog.insightdatascience.com/always-start-with-a-stupid-model-no-exceptions-3a22314b9aaa)
3. **Complete, tuned "simpler" model** (Simpler mathematically, computationally. Or less work for you, the data scientist.)
4. **Minimum performance that "matters"** to go to production and benefit your employer and the people you serve.
5. **Human-level performance**
Baseline type #1 is what we're doing now.
(Linear models can be great for #2, 3, 4, and [sometimes even #5 too!](http://citeseerx.ist.psu.edu/viewdoc/summary?doi=10.1.1.188.5825))
---
Let's go back to our mean baseline for Tribeca condos.
If we just guessed that every Tribeca condo sold for \$3.9 million, how far off would we be, on average?
```
guess = df['SALE_PRICE'].mean()
# guess = 15000000
errors = guess - df['SALE_PRICE']
mean_absolute_error = errors.abs().mean()
print(f'If we just guessed every Tribeca condo sold for ${guess:,.0f},')
print(f'we would be off by ${mean_absolute_error:,.0f} on average.')
```
That sounds like a lot of error!
But fortunately, we can do better than this first baseline — we can use more data. For example, the condo's size.
Could sale price be **dependent** on square feet? To explore this relationship, let's make a scatterplot, using [Plotly Express](https://plot.ly/python/plotly-express/):
```
import plotly.express as px
px.scatter(df, x='GROSS_SQUARE_FEET', y='SALE_PRICE')
```
### 3. Predictive Model
To go from a _descriptive_ [scatterplot](https://www.plotly.express/plotly_express/#plotly_express.scatter) to a _predictive_ regression, just add a _line of best fit:_
```
px.scatter(df, x='GROSS_SQUARE_FEET', y='SALE_PRICE', trendline='ols')
```
Roll over the Plotly regression line to see its equation and predictions for sale price, dependent on gross square feet.
Linear Regression helps us **interpolate.** For example, in this dataset, there's a gap between 4016 sq ft and 4663 sq ft. There were no 4300 sq ft condos sold, but what price would you predict, using this line of best fit?
Linear Regression also helps us **extrapolate.** For example, in this dataset, there were no 6000 sq ft condos sold, but what price would you predict?
The line of best fit tries to summarize the relationship between our x variable and y variable in a way that enables us to use the equation for that line to make predictions.
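For example, with a hypothetical line of best fit (the slope and intercept below are placeholders for illustration, not the fitted values), interpolating and extrapolating are both just plugging x into the line's equation:

```
# Placeholder coefficients for illustration only
slope = 2000        # dollars per square foot (hypothetical)
intercept = 100000  # dollars (hypothetical)

def predict_price(square_feet):
    return slope * square_feet + intercept

print(predict_price(4300))  # interpolation within the observed range: 8700000
print(predict_price(6000))  # extrapolation beyond it: 12100000
```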
**Synonyms for "y variable"**
- **Dependent Variable**
- Response Variable
- Outcome Variable
- Predicted Variable
- Measured Variable
- Explained Variable
- **Label**
- **Target**
**Synonyms for "x variable"**
- **Independent Variable**
- Explanatory Variable
- Regressor
- Covariate
- Correlate
- **Feature**
The bolded terminology will be used most often by your instructors this unit.
## Challenge
In your assignment, you will practice how to begin with baselines for regression, using a new dataset!
# Use scikit-learn to fit a linear regression
## Overview
We can use visualization libraries to do simple linear regression ("simple" means there's only one independent variable).
But during this unit, we'll usually use the scikit-learn library for predictive models, and we'll usually have multiple independent variables.
In [_Python Data Science Handbook,_ Chapter 5.2: Introducing Scikit-Learn](https://jakevdp.github.io/PythonDataScienceHandbook/05.02-introducing-scikit-learn.html#Basics-of-the-API), Jake VanderPlas explains **how to structure your data** for scikit-learn:
> The best way to think about data within Scikit-Learn is in terms of tables of data.
>
> 
>
>The features matrix is often stored in a variable named `X`. The features matrix is assumed to be two-dimensional, with shape `[n_samples, n_features]`, and is most often contained in a NumPy array or a Pandas `DataFrame`.
>
>We also generally work with a label or target array, which by convention we will usually call `y`. The target array is usually one dimensional, with length `n_samples`, and is generally contained in a NumPy array or Pandas `Series`. The target array may have continuous numerical values, or discrete classes/labels.
>
>The target array is the quantity we want to _predict from the data:_ in statistical terms, it is the dependent variable.
VanderPlas also lists a **5 step process** for scikit-learn's "Estimator API":
> Every machine learning algorithm in Scikit-Learn is implemented via the Estimator API, which provides a consistent interface for a wide range of machine learning applications.
>
> Most commonly, the steps in using the Scikit-Learn estimator API are as follows:
>
> 1. Choose a class of model by importing the appropriate estimator class from Scikit-Learn.
> 2. Choose model hyperparameters by instantiating this class with desired values.
> 3. Arrange data into a features matrix and target vector following the discussion above.
> 4. Fit the model to your data by calling the `fit()` method of the model instance.
> 5. Apply the Model to new data: For supervised learning, often we predict labels for unknown data using the `predict()` method.
Let's try it!
## Follow Along
Follow the 5 step process, and refer to [Scikit-Learn LinearRegression documentation](https://scikit-learn.org/stable/modules/generated/sklearn.linear_model.LinearRegression.html).
```
# 1. Import the appropriate estimator class from Scikit-Learn
from sklearn.linear_model import LinearRegression
# 2. Instantiate this class
model = LinearRegression()
# 3. Arrange X features matrix & y target vector
features = ['GROSS_SQUARE_FEET', 'YEAR_BUILT']
# features = ['GROSS_SQUARE_FEET']
target = ['SALE_PRICE']
x_train = df[features]
y_train = df[target]
x_train.shape, y_train.shape
x_train
# 4. Fit the model
model.fit(x_train, y_train)
y_train
# 5. Apply the model to new data
square_feet = 1497
year_built = 1852
x_test = [[ square_feet, year_built ]]
y_pred = model.predict(x_test)
y_pred
```
So, we used scikit-learn to fit a linear regression, and predicted the sales price for a 1,497 square foot Tribeca condo, like the one from the video.
Now, what did that condo actually sell for? ___The final answer is revealed in [the video at 12:28](https://youtu.be/JQCctBOgH9I?t=748)!___
```
y_test = [2800000]
```
What was the error for our prediction, versus the video participants?
Let's use [scikit-learn's mean absolute error function](https://scikit-learn.org/stable/modules/generated/sklearn.metrics.mean_absolute_error.html).
```
chinwe_final_guess = [15000000]
mubeen_final_guess = [2200000]
pam_final_guess = [2200000]
from sklearn.metrics import mean_absolute_error
mae = mean_absolute_error(y_test, y_pred)
print ("Mean Absolute Error of our model", mae)
```
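We can score the video participants' final guesses with the same metric. This snippet restates the values from above so it runs on its own:

```
from sklearn.metrics import mean_absolute_error

y_test = [2800000]
chinwe_final_guess = [15000000]
mubeen_final_guess = [2200000]
pam_final_guess = [2200000]

print(mean_absolute_error(y_test, chinwe_final_guess))  # 12200000.0
print(mean_absolute_error(y_test, mubeen_final_guess))  # 600000.0
print(mean_absolute_error(y_test, pam_final_guess))     # 600000.0
```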
This [diagram](https://ogrisel.github.io/scikit-learn.org/sklearn-tutorial/tutorial/text_analytics/general_concepts.html#supervised-learning-model-fit-x-y) shows what we just did! Don't worry about understanding it all now. But can you start to match some of these boxes/arrows to the corresponding lines of code from above?
<img src="https://ogrisel.github.io/scikit-learn.org/sklearn-tutorial/_images/plot_ML_flow_chart_12.png" width="75%">
Here's [another diagram](https://livebook.manning.com/book/deep-learning-with-python/chapter-1/), which shows how machine learning is a "new programming paradigm":
<img src="https://pbs.twimg.com/media/ECQDlFOWkAEJzlY.jpg" width="70%">
> A machine learning system is "trained" rather than explicitly programmed. It is presented with many "examples" relevant to a task, and it finds statistical structure in these examples which eventually allows the system to come up with rules for automating the task. —[Francois Chollet](https://livebook.manning.com/book/deep-learning-with-python/chapter-1/)
Wait, are we saying that *linear regression* could be considered a *machine learning algorithm*? Maybe it depends? What do you think? We'll discuss throughout this unit.
## Challenge
In your assignment, you will use scikit-learn for linear regression with one feature. For a stretch goal, you can do linear regression with two or more features.
# Explain the coefficients from a linear regression
## Overview
What pattern did the model "learn", about the relationship between square feet & price?
## Follow Along
To help answer this question, we'll look at the `coef_` and `intercept_` attributes of the `LinearRegression` object. (Again, [here's the documentation](https://scikit-learn.org/stable/modules/generated/sklearn.linear_model.LinearRegression.html).)
```
```
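A minimal, self-contained sketch of that inspection (using tiny made-up data rather than the condo dataframe, so the names and numbers below are illustrative only):

```
from sklearn.linear_model import LinearRegression

# Hypothetical data generated by: price = 1000 * square_feet + 50000
X_toy = [[1000], [2000], [3000]]
y_toy = [1050000, 2050000, 3050000]

toy_model = LinearRegression().fit(X_toy, y_toy)
print(toy_model.coef_)       # one slope per feature: approximately [1000.]
print(toy_model.intercept_)  # approximately 50000.0
```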
We can repeatedly apply the model to new/unknown data, and explain the coefficient:
```
def predict(square_feet, year_built=1852):
    # The model above was fit on two features, so we supply both;
    # year_built defaults to the condo's year from the video
    y_pred = model.predict([[square_feet, year_built]])
    estimate = y_pred[0][0]
    coefficient = model.coef_[0][0]
    print('Estimated price for', square_feet, 'square feet is', estimate)
    print('The coefficient (cost per square foot) is', coefficient)
    # result = f'${estimate:,.0f} estimated price for {square_feet:,.0f} square foot condo in Tribeca.'
    # explanation = f'In this linear regression, each additional square foot adds ${coefficient:,.0f}.'
    # return result + '\n' + explanation

predict(1497)

# What does the model predict for low square footage?
predict(500)

# For high square footage?
predict(10000)
```
## Challenge
In your assignment, you will define a function to make new predictions and explain the model coefficient.
# Review
You'll practice these objectives when you do your assignment:
- Begin with baselines for regression
- Use scikit-learn to fit a linear regression
- Make new predictions and explain coefficients
You'll use another New York City real estate dataset. You'll predict how much it costs to rent an apartment, instead of how much it costs to buy a condo.
You've been provided with a separate notebook for your assignment, which has all the instructions and stretch goals. Good luck and have fun!
# Sources
#### NYC Real Estate
- Video: [Amateurs & Experts Guess How Much a NYC Condo With a Private Terrace Costs](https://www.youtube.com/watch?v=JQCctBOgH9I)
- Data: [NYC OpenData: NYC Citywide Rolling Calendar Sales](https://data.cityofnewyork.us/dataset/NYC-Citywide-Rolling-Calendar-Sales/usep-8jbt)
- Glossary: [NYC Department of Finance: Rolling Sales Data](https://www1.nyc.gov/site/finance/taxes/property-rolling-sales-data.page)
#### Baselines
- Will Koehrsen, ["One of the most important steps in a machine learning project is establishing a common sense baseline..."](https://twitter.com/koehrsen_will/status/1088863527778111488)
- Emmanuel Ameisen, [Always start with a stupid model, no exceptions](https://blog.insightdatascience.com/always-start-with-a-stupid-model-no-exceptions-3a22314b9aaa)
- Robyn M. Dawes, [The robust beauty of improper linear models in decision making](http://citeseerx.ist.psu.edu/viewdoc/summary?doi=10.1.1.188.5825)
#### Plotly Express
- [Plotly Express](https://plot.ly/python/plotly-express/) examples
- [plotly_express.scatter](https://www.plotly.express/plotly_express/#plotly_express.scatter) docs
#### Scikit-Learn
- Francois Chollet, [Diagram](https://livebook.manning.com/book/deep-learning-with-python/chapter-1/)
- Jake VanderPlas, [_Python Data Science Handbook,_ Chapter 5.2: Introducing Scikit-Learn](https://jakevdp.github.io/PythonDataScienceHandbook/05.02-introducing-scikit-learn.html#Basics-of-the-API)
- Olivier Grisel, [Diagram](https://ogrisel.github.io/scikit-learn.org/sklearn-tutorial/tutorial/text_analytics/general_concepts.html#supervised-learning-model-fit-x-y)
- [sklearn.linear_model.LinearRegression](https://scikit-learn.org/stable/modules/generated/sklearn.linear_model.LinearRegression.html)
- [sklearn.metrics.mean_absolute_error](https://scikit-learn.org/stable/modules/generated/sklearn.metrics.mean_absolute_error.html)
<a href="https://colab.research.google.com/github/Wee7/FinancialEngineering_IR_xVA/blob/main/FE_xVA_code.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a>
# Lecture 02 - Understanding of Filtrations and Measures
```
#%% Martingale.py
"""
Created on July 05 2021
Simulation of, E(W(t)|F(s)) = W(s) using nested Monte Carlo
This code is purely educational and comes from "Financial Engineering" course by L.A. Grzelak
The course is based on the book "Mathematical Modeling and Computation
in Finance: With Exercises and Python and MATLAB Computer Codes",
by C.W. Oosterlee and L.A. Grzelak, World Scientific Publishing Europe Ltd, 2019.
@author: Lech A. Grzelak
"""
import numpy as np
import matplotlib.pyplot as plt
t = 10
s = 5
NoOfPaths=1000
NoOfSteps=10
# First part: calculate E(W(t)|F(0)) = W(0) = 0
def martingaleA():
W_t = np.random.normal(0.0,pow(t,0.5),[NoOfPaths,1])
E_W_t = np.mean(W_t)
print("mean value equals to: %.2f while the expected value is W(0) =%0.2f " %(E_W_t,0.0))
# Second part requiring nested Monte Carlo simulation E(W(t)|F(s)) = W(s)
def martingaleB():
Z = np.random.normal(0.0,1.0,[NoOfPaths,NoOfSteps])
W = np.zeros([NoOfPaths,NoOfSteps+1])
# time-step from [t0,s]
dt1 = s / float(NoOfSteps)
for i in range(0,NoOfSteps):
# making sure that samples from normal have mean 0 and variance 1
Z[:,i] = (Z[:,i] - np.mean(Z[:,i])) / np.std(Z[:,i])
W[:,i+1] = W[:,i] + pow(dt1,0.5)*Z[:,i]
#W_s is the last column of W
W_s = W[:,-1]
#for every path W(s) we perform sub-simulation until time t and calculate
#the expectation
# time-step from [s,t]
dt2 = (t-s)/float(NoOfSteps);
W_t = np.zeros([NoOfPaths,NoOfSteps+1]);
#Store the results
E_W_t = np.zeros([NoOfPaths])
Error=[]
for i in range(0,NoOfPaths):
#Sub-simulation from time "s" until "t"
W_t[:,0] = W_s[i];
Z = np.random.normal(0.0,1.0,[NoOfPaths,NoOfSteps])
for j in range(0,NoOfSteps):
#this is a scaling that ensures that Z has mean 0 and variance 1
Z[:,j] = (Z[:,j]-np.mean(Z[:,j])) / np.std(Z[:,j]);
#path simulation, from "s" until "t"
W_t[:,j+1] = W_t[:,j] + pow(dt2,0.5)*Z[:,j];
E_W_t[i]=np.mean(W_t[:,-1])
Error.append(E_W_t[i]-W_s[i])
#Generate a plot for the first path
if i==0:
plt.plot(np.linspace(0,s,NoOfSteps+1),W[0,:])
for j in range(0,NoOfPaths):
plt.plot(np.linspace(s,t,NoOfSteps+1),W_t[j,:])
plt.xlabel("time")
plt.ylabel("W(t)")
plt.grid()
print(Error)
error = np.max(np.abs(E_W_t-W_s))
print("The error is equal to: %.18f"%(error))
martingaleA()
martingaleB()
#%% Black_Scholes_Jumps.py
"""
Created on July 05 2021
Impact of conditional expectation pricing (Black-Scholes with Jump volatility)
This code is purely educational and comes from "Financial Engineering" course by L.A. Grzelak
The course is based on the book "Mathematical Modeling and Computation
in Finance: With Exercises and Python and MATLAB Computer Codes",
by C.W. Oosterlee and L.A. Grzelak, World Scientific Publishing Europe Ltd, 2019.
@author: Lech A. Grzelak
"""
import numpy as np
import matplotlib.pyplot as plt
import enum
import scipy.stats as st
# This class defines puts and calls
class OptionType(enum.Enum):
CALL = 1.0
PUT = -1.0
def GeneratePaths(NoOfPaths,NoOfSteps,S0,T,muJ,sigmaJ,r):
# Create empty matrices for Poisson process and for compensated Poisson process
X = np.zeros([NoOfPaths, NoOfSteps+1])
S = np.zeros([NoOfPaths, NoOfSteps+1])
time = np.zeros([NoOfSteps+1])
dt = T / float(NoOfSteps)
X[:,0] = np.log(S0)
S[:,0] = S0
Z = np.random.normal(0.0,1.0,[NoOfPaths,NoOfSteps])
J = np.random.normal(muJ,sigmaJ,[NoOfPaths,NoOfSteps])
for i in range(0,NoOfSteps):
# making sure that samples from normal have mean 0 and variance 1
if NoOfPaths > 1:
Z[:,i] = (Z[:,i] - np.mean(Z[:,i])) / np.std(Z[:,i])
X[:,i+1] = X[:,i] + (r - 0.5*J[:,i]**2.0)*dt +J[:,i]*np.sqrt(dt)* Z[:,i]
time[i+1] = time[i] +dt
S = np.exp(X)
paths = {"time":time,"X":X,"S":S,"J":J}
return paths
def EUOptionPriceFromMCPaths(CP,S,K,T,r):
# S is a vector of Monte Carlo samples at T
if CP == OptionType.CALL:
return np.exp(-r*T)*np.mean(np.maximum(S-K,0.0))
elif CP == OptionType.PUT:
return np.exp(-r*T)*np.mean(np.maximum(K-S,0.0))
def BS_Call_Put_Option_Price(CP,S_0,K,sigma,t,T,r):
K = np.array(K).reshape([len(K),1])
d1 = (np.log(S_0 / K) + (r + 0.5 * np.power(sigma,2.0))
* (T-t)) / (sigma * np.sqrt(T-t))
d2 = d1 - sigma * np.sqrt(T-t)
if CP == OptionType.CALL:
value = st.norm.cdf(d1) * S_0 - st.norm.cdf(d2) * K * np.exp(-r * (T-t))
elif CP == OptionType.PUT:
value = st.norm.cdf(-d2) * K * np.exp(-r * (T-t)) - st.norm.cdf(-d1)*S_0
return value
def CallOption_CondExpectation(NoOfPaths,T,S0,K,J,r):
# Jumps at time T
J_i = J[:,-1]
result = np.zeros([NoOfPaths])
for j in range(0,NoOfPaths):
sigma = J_i[j]
result[j] = BS_Call_Put_Option_Price(OptionType.CALL,S0,[K],sigma,0.0,T,r)
return np.mean(result)
def mainCalculation():
NoOfPaths = 25
NoOfSteps = 500
T = 5
muJ = 0.3
sigmaJ = 0.005
S0 =100
r =0.00
Paths = GeneratePaths(NoOfPaths,NoOfSteps,S0, T,muJ,sigmaJ,r)
timeGrid = Paths["time"]
X = Paths["X"]
S = Paths["S"]
plt.figure(1)
plt.plot(timeGrid, np.transpose(X))
plt.grid()
plt.xlabel("time")
plt.ylabel("X(t)")
plt.figure(2)
plt.plot(timeGrid, np.transpose(S))
plt.grid()
plt.xlabel("time")
plt.ylabel("S(t)")
# Check the convergence for a given strike
K = 80
CP =OptionType.CALL
NGrid = range(100,10000,1000)
NoOfRuns = len(NGrid)
resultMC = np.zeros([NoOfRuns])
resultCondExp = np.zeros([NoOfRuns])
for (i,N) in enumerate(NGrid):
print(N)
Paths = GeneratePaths(N,NoOfSteps,S0, T,muJ,sigmaJ,r)
timeGrid = Paths["time"]
S = Paths["S"]
resultMC[i] = EUOptionPriceFromMCPaths(CP,S[:,-1],K,T,r)
J = Paths["J"]
resultCondExp[i] = CallOption_CondExpectation(N,T,S0,K,J,r)  # use all N simulated jump paths, not just NoOfPaths
plt.figure(3)
plt.plot(NGrid,resultMC)
plt.plot(NGrid,resultCondExp)
plt.legend(['MC','Conditional Expectation'])
plt.title('Call Option Price- Convergence')
plt.xlabel('Number of Paths')
plt.ylabel('Option price for a given strike, K')
plt.grid()
mainCalculation()
```
<h1><center>Deep Learning Helping Navigate Robots</center></h1>
<img src="https://storage.googleapis.com/kaggle-competitions/kaggle/13242/logos/thumb76_76.png?t=2019-03-12-23-33-31" width="300"></img>
### Dependencies
```
import warnings
import cufflinks
import numpy as np
import pandas as pd
import seaborn as sns
import matplotlib.pyplot as plt
from keras import optimizers
from keras.layers import Dense
from keras.utils import to_categorical
from keras.models import Sequential, Model
from sklearn.metrics import confusion_matrix
from sklearn.preprocessing import LabelEncoder
from sklearn.model_selection import train_test_split
%matplotlib inline
warnings.filterwarnings("ignore")
cufflinks.go_offline(connected=True)
# Set seeds to make the experiment more reproducible.
from tensorflow import set_random_seed
from numpy.random import seed
set_random_seed(0)
seed(0)
```
### Load data
```
train = pd.read_csv('../input/X_train.csv')
labels = pd.read_csv('../input/y_train.csv')
test = pd.read_csv('../input/X_test.csv')
print('Train features shape', train.shape)
display(train.head())
print('Train labels shape', labels.shape)
display(labels.head())
print('Test shape', test.shape)
display(test.head())
```
### Join train features with labels
```
train = train.join(labels, on='series_id', rsuffix='_')
train.drop('series_id_', axis=1, inplace=True)
print(train.shape)
display(train.head())
```
### Plotly graphs may take a while to load.
# EDA
## Surface distribution
- Let's see what's the label distribution of our data
```
f, ax = plt.subplots(figsize=(12, 8))
ax = sns.countplot(y='surface', data=train, palette="rocket", order=reversed(train['surface'].value_counts().index))
ax.set_ylabel("Surface type")
plt.show()
```
### Surface distribution by "group_id"
```
group_df = train.groupby(['group_id', 'surface']).size().reset_index(name='count')
group_df.columns = ['group_id', 'surface', 'count']
f, ax = plt.subplots(figsize=(18, 8))
ax = sns.barplot(x="group_id", y="count", data=group_df, palette="GnBu_d")
for index, row in group_df.iterrows():
ax.text(row.name, row['count'], row['surface'], color='black', ha="center", rotation=60)
plt.show()
```
## Features distribution
- Now is a good time to see how each type of feature behaves
### Orientation distribution
```
orientation_features = ['orientation_X', 'orientation_Y', 'orientation_Z', 'orientation_W']
train[orientation_features].iplot(kind='histogram', bins=200, subplots=True, shape=(len(orientation_features), 1))
train[orientation_features].iplot(kind='histogram', barmode='overlay', bins=200)
train[orientation_features].iplot(kind='box')
```
The interesting part here is that "orientation_Y" and "orientation_X" are far more spread out than the other two.
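That visual impression can be checked numerically. The sketch below builds a small synthetic stand-in for the `train` DataFrame (the real notebook would just call `.std()` on the actual data); the point is only that per-column standard deviations quantify "spread":

```python
import numpy as np
import pandas as pd

# Synthetic stand-in for the `train` DataFrame (illustrative values only)
rng = np.random.RandomState(0)
train_demo = pd.DataFrame({
    'orientation_X': rng.normal(0, 0.7, 1000),   # wide, like the plots suggest
    'orientation_Y': rng.normal(0, 0.7, 1000),
    'orientation_Z': rng.normal(0, 0.1, 1000),   # narrow
    'orientation_W': rng.normal(0, 0.1, 1000),
})
# Standard deviation per channel quantifies the spread seen in the histograms
spread = train_demo.std().sort_values(ascending=False)
print(spread)
```

On the real data, `train[orientation_features].std()` gives the same one-line summary.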
### Angular velocity distribution
```
velocity_features = ['angular_velocity_X', 'angular_velocity_Y', 'angular_velocity_Z']
train[velocity_features].iplot(kind='histogram', bins=200, subplots=True, shape=(len(velocity_features), 1))
train[velocity_features].iplot(kind='histogram', barmode='overlay', bins=200)
train[velocity_features].iplot(kind='box')
```
Here all the angular velocity features seem to be centered around 0, but "angular_velocity_Y" is less spread than the others.
### Linear acceleration distribution
```
acceleration_features = ['linear_acceleration_X', 'linear_acceleration_Y', 'linear_acceleration_Z']
train[acceleration_features].iplot(kind='histogram', bins=200, subplots=True, shape=(len(acceleration_features), 1))
train[acceleration_features].iplot(kind='histogram', barmode='overlay', bins=200)
train[acceleration_features].iplot(kind='box')
```
The linear acceleration features differ the most from one another: all three have different means and spreads.
### Preprocess the labels
```
target = train['surface']
n_labels = target.nunique()
labels_names = target.unique()
le = LabelEncoder()
target = le.fit_transform(target.values)
target = to_categorical(target)
train.drop('surface', axis=1, inplace=True)
```
### Train/validation split
```
features = ['orientation_X', 'orientation_Y', 'orientation_Z', 'orientation_W',
'angular_velocity_X', 'angular_velocity_Y', 'angular_velocity_Z',
'linear_acceleration_X', 'linear_acceleration_Y', 'linear_acceleration_Z']
X_train, X_val, Y_train, Y_val = train_test_split(train[features], target, test_size=0.2, random_state=0)
print('Train shape', X_train.shape)
print('Validation shape', X_val.shape)
display(X_train.head())
```
### Model
```
epochs = 70
batch = 128
lr = 0.001
adam = optimizers.Adam(lr)
model = Sequential()
model.add(Dense(20, activation='relu', input_dim=X_train.shape[1]))
model.add(Dense(20, activation='relu'))
model.add(Dense(n_labels, activation="softmax"))
model.compile(loss='categorical_crossentropy', optimizer=adam)
model.summary()
history = model.fit(X_train.values, Y_train, validation_data=(X_val.values, Y_val), epochs=epochs, verbose=2)
```
#### Model loss plot
```
history_pd = pd.DataFrame.from_dict(history.history)
history_pd.iplot(kind='line')
```
#### Model confusion matrix
```
cnf_matrix = confusion_matrix(np.argmax(Y_train, axis=1), model.predict_classes(X_train))
cnf_matrix_norm = cnf_matrix.astype('float') / cnf_matrix.sum(axis=1)[:, np.newaxis]
df_cm = pd.DataFrame(cnf_matrix_norm, index=labels_names, columns=labels_names)
plt.figure(figsize=(20, 7))
ax = plt.axes()
ax.set_title('Train')
sns.heatmap(df_cm, annot=True, fmt='.2f', cmap="Blues", ax=ax)
plt.show()
cnf_matrix = confusion_matrix(np.argmax(Y_val, axis=1), model.predict_classes(X_val))
cnf_matrix_norm = cnf_matrix.astype('float') / cnf_matrix.sum(axis=1)[:, np.newaxis]
df_cm = pd.DataFrame(cnf_matrix_norm, index=labels_names, columns=labels_names)
plt.figure(figsize=(20, 7))
ax = plt.axes()
ax.set_title('Validation')
sns.heatmap(df_cm, annot=True, fmt='.2f', cmap="Blues", ax=ax)
plt.show()
```
### Test predictions
```
predictions = model.predict_classes(test[features].values)
test['surface'] = le.inverse_transform(predictions)
df = test[['series_id', 'surface']]
df = df.groupby('series_id', as_index=False).agg(lambda x:x.value_counts().index[0])
df.to_csv('submission.csv', index=False)
df.head(10)
```
# TV Script Generation
In this project, you'll generate your own [Simpsons](https://en.wikipedia.org/wiki/The_Simpsons) TV scripts using RNNs. You'll be using part of the [Simpsons dataset](https://www.kaggle.com/wcukierski/the-simpsons-by-the-data) of scripts from 27 seasons. The Neural Network you'll build will generate a new TV script for a scene at [Moe's Tavern](https://simpsonswiki.com/wiki/Moe's_Tavern).
## Get the Data
The data is already provided for you. You'll be using a subset of the original dataset. It consists of only the scenes in Moe's Tavern. This doesn't include other versions of the tavern, like "Moe's Cavern", "Flaming Moe's", "Uncle Moe's Family Feed-Bag", etc.
```
"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
import helper
data_dir = './data/simpsons/moes_tavern_lines.txt'
text = helper.load_data(data_dir)
# Ignore notice, since we don't use it for analysing the data
text = text[81:]
```
## Explore the Data
Play around with `view_sentence_range` to view different parts of the data.
```
view_sentence_range = (0, 10)
"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
import numpy as np
print('Dataset Stats')
print('Roughly the number of unique words: {}'.format(len({word: None for word in text.split()})))
scenes = text.split('\n\n')
print('Number of scenes: {}'.format(len(scenes)))
sentence_count_scene = [scene.count('\n') for scene in scenes]
print('Average number of sentences in each scene: {}'.format(np.average(sentence_count_scene)))
sentences = [sentence for scene in scenes for sentence in scene.split('\n')]
print('Number of lines: {}'.format(len(sentences)))
word_count_sentence = [len(sentence.split()) for sentence in sentences]
print('Average number of words in each line: {}'.format(np.average(word_count_sentence)))
print()
print('The sentences {} to {}:'.format(*view_sentence_range))
print('\n'.join(text.split('\n')[view_sentence_range[0]:view_sentence_range[1]]))
```
## Implement Preprocessing Functions
The first thing to do to any dataset is preprocessing. Implement the following preprocessing functions below:
- Lookup Table
- Tokenize Punctuation
### Lookup Table
To create a word embedding, you first need to transform the words to ids. In this function, create two dictionaries:
- Dictionary to go from the words to an id, we'll call `vocab_to_int`
- Dictionary to go from the id to word, we'll call `int_to_vocab`
Return these dictionaries in the following tuple `(vocab_to_int, int_to_vocab)`
```
import numpy as np
import problem_unittests as tests
from collections import Counter
def create_lookup_tables(text):
"""
Create lookup tables for vocabulary
:param text: The text of tv scripts split into words
:return: A tuple of dicts (vocab_to_int, int_to_vocab)
"""
# TODO: Implement Function
word_counts = Counter(text)
sorted_vocab = sorted(word_counts, key=word_counts.get, reverse=True)
int_to_vocab = {ii: word for ii, word in enumerate(sorted_vocab)}
vocab_to_int = {word: ii for ii, word in int_to_vocab.items()}
return vocab_to_int, int_to_vocab
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
tests.test_create_lookup_tables(create_lookup_tables)
```
### Tokenize Punctuation
We'll be splitting the script into a word array using spaces as delimiters. However, punctuation marks like periods and exclamation points make it hard for the neural network to distinguish between the word "bye" and "bye!".
Implement the function `token_lookup` to return a dict that will be used to tokenize symbols like "!" into "||Exclamation_Mark||". Create a dictionary for the following symbols where the symbol is the key and value is the token:
- Period ( . )
- Comma ( , )
- Quotation Mark ( " )
- Semicolon ( ; )
- Exclamation mark ( ! )
- Question mark ( ? )
- Left Parentheses ( ( )
- Right Parentheses ( ) )
- Dash ( -- )
- Return ( \n )
This dictionary will be used to tokenize the symbols and add the delimiter (space) around them. This separates each symbol into its own word, making it easier for the neural network to predict the next word. Make sure you don't use a token that could be confused as a word. Instead of using the token "dash", try using something like "||dash||".
```
def token_lookup():
"""
Generate a dict to turn punctuation into a token.
:return: Tokenize dictionary where the key is the punctuation and the value is the token
"""
# TODO: Implement Function
return {
".": "||period||",
",": "||comma||",
"\"": "||quotation_mark||",
";": "||semicolon||",
"!": "||exclamation_mark||",
"?": "||question_mark||",
"(": "||left_parentheses||",
")": "||right_parentheses||",
"--": "||dash||",
"\n": "||return||",
}
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
tests.test_tokenize(token_lookup)
```
## Preprocess all the data and save it
Running the code cell below will preprocess all the data and save it to file.
```
"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
# Preprocess Training, Validation, and Testing Data
helper.preprocess_and_save_data(data_dir, token_lookup, create_lookup_tables)
```
# Check Point
This is your first checkpoint. If you ever decide to come back to this notebook or have to restart the notebook, you can start from here. The preprocessed data has been saved to disk.
```
"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
import helper
import numpy as np
import problem_unittests as tests
int_text, vocab_to_int, int_to_vocab, token_dict = helper.load_preprocess()
```
## Build the Neural Network
You'll build the components necessary to build a RNN by implementing the following functions below:
- get_inputs
- get_init_cell
- get_embed
- build_rnn
- build_nn
- get_batches
### Check the Version of TensorFlow and Access to GPU
```
"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
from distutils.version import LooseVersion
import warnings
import tensorflow as tf
# Check TensorFlow Version
assert LooseVersion(tf.__version__) >= LooseVersion('1.0'), 'Please use TensorFlow version 1.0 or newer'
print('TensorFlow Version: {}'.format(tf.__version__))
# Check for a GPU
if not tf.test.gpu_device_name():
warnings.warn('No GPU found. Please use a GPU to train your neural network.')
else:
print('Default GPU Device: {}'.format(tf.test.gpu_device_name()))
```
### Input
Implement the `get_inputs()` function to create TF Placeholders for the Neural Network. It should create the following placeholders:
- Input text placeholder named "input" using the [TF Placeholder](https://www.tensorflow.org/api_docs/python/tf/placeholder) `name` parameter.
- Targets placeholder
- Learning Rate placeholder
Return the placeholders in the following tuple `(Input, Targets, LearningRate)`
```
def get_inputs():
"""
Create TF Placeholders for input, targets, and learning rate.
:return: Tuple (input, targets, learning rate)
"""
# TODO: Implement Function
input = tf.placeholder(tf.int32, [None, None], name='input')
targets = tf.placeholder(tf.int32, [None, None], name='targets')
learning_rate = tf.placeholder(tf.float32, name='learning_rate')
return input, targets, learning_rate
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
tests.test_get_inputs(get_inputs)
```
### Build RNN Cell and Initialize
Stack one or more [`BasicLSTMCells`](https://www.tensorflow.org/api_docs/python/tf/contrib/rnn/BasicLSTMCell) in a [`MultiRNNCell`](https://www.tensorflow.org/api_docs/python/tf/contrib/rnn/MultiRNNCell).
- The RNN size should be set using `rnn_size`
- Initialize the cell state using the MultiRNNCell's [`zero_state()`](https://www.tensorflow.org/api_docs/python/tf/contrib/rnn/MultiRNNCell#zero_state) function
- Apply the name "initial_state" to the initial state using [`tf.identity()`](https://www.tensorflow.org/api_docs/python/tf/identity)
Return the cell and initial state in the following tuple `(Cell, InitialState)`
```
def get_init_cell(batch_size, rnn_size):
"""
Create an RNN Cell and initialize it.
:param batch_size: Size of batches
:param rnn_size: Size of RNNs
:return: Tuple (cell, initialize state)
"""
# TODO: Implement Function
lstm = tf.contrib.rnn.BasicLSTMCell(rnn_size)
cell = tf.contrib.rnn.MultiRNNCell([lstm])
zero_state = cell.zero_state(batch_size, tf.float32)
initial_state = tf.identity(zero_state, name='initial_state')
return cell, initial_state
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
tests.test_get_init_cell(get_init_cell)
```
### Word Embedding
Apply embedding to `input_data` using TensorFlow. Return the embedded sequence.
```
def get_embed(input_data, vocab_size, embed_dim):
"""
Create embedding for <input_data>.
:param input_data: TF placeholder for text input.
:param vocab_size: Number of words in vocabulary.
:param embed_dim: Number of embedding dimensions
:return: Embedded input.
"""
# TODO: Implement Function
embedding = tf.Variable(tf.random_uniform((vocab_size, embed_dim), -1, 1))
embed = tf.nn.embedding_lookup(embedding, input_data)
return embed
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
tests.test_get_embed(get_embed)
```
### Build RNN
You created a RNN Cell in the `get_init_cell()` function. Time to use the cell to create a RNN.
- Build the RNN using the [`tf.nn.dynamic_rnn()`](https://www.tensorflow.org/api_docs/python/tf/nn/dynamic_rnn)
- Apply the name "final_state" to the final state using [`tf.identity()`](https://www.tensorflow.org/api_docs/python/tf/identity)
Return the outputs and final_state state in the following tuple `(Outputs, FinalState)`
```
def build_rnn(cell, inputs):
"""
Create a RNN using a RNN Cell
:param cell: RNN Cell
:param inputs: Input text data
:return: Tuple (Outputs, Final State)
"""
# TODO: Implement Function
outputs, state = tf.nn.dynamic_rnn(cell, inputs, dtype=tf.float32)
final_state = tf.identity(state, name='final_state')
return outputs, final_state
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
tests.test_build_rnn(build_rnn)
```
### Build the Neural Network
Apply the functions you implemented above to:
- Apply embedding to `input_data` using your `get_embed(input_data, vocab_size, embed_dim)` function.
- Build RNN using `cell` and your `build_rnn(cell, inputs)` function.
- Apply a fully connected layer with a linear activation and `vocab_size` as the number of outputs.
Return the logits and final state in the following tuple (Logits, FinalState)
```
def build_nn(cell, rnn_size, input_data, vocab_size, embed_dim):
"""
Build part of the neural network
:param cell: RNN cell
:param rnn_size: Size of rnns
:param input_data: Input data
:param vocab_size: Vocabulary size
:param embed_dim: Number of embedding dimensions
:return: Tuple (Logits, FinalState)
"""
# TODO: Implement Function
embedding = get_embed(input_data, vocab_size, rnn_size)
outputs, final_state = build_rnn(cell, embedding)
logits = tf.contrib.layers.fully_connected(outputs, vocab_size, activation_fn=None)
return logits, final_state
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
tests.test_build_nn(build_nn)
```
### Batches
Implement `get_batches` to create batches of input and targets using `int_text`. The batches should be a Numpy array with the shape `(number of batches, 2, batch size, sequence length)`. Each batch contains two elements:
- The first element is a single batch of **input** with the shape `[batch size, sequence length]`
- The second element is a single batch of **targets** with the shape `[batch size, sequence length]`
If you can't fill the last batch with enough data, drop the last batch.
For example, `get_batches([1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20], 3, 2)` would return a Numpy array of the following:
```
[
# First Batch
[
# Batch of Input
[[ 1 2], [ 7 8], [13 14]]
# Batch of targets
[[ 2 3], [ 8 9], [14 15]]
]
# Second Batch
[
# Batch of Input
[[ 3 4], [ 9 10], [15 16]]
# Batch of targets
[[ 4 5], [10 11], [16 17]]
]
# Third Batch
[
# Batch of Input
[[ 5 6], [11 12], [17 18]]
# Batch of targets
[[ 6 7], [12 13], [18 1]]
]
]
```
Notice that the last target value in the last batch is the first input value of the first batch. In this case, `1`. This is a common technique used when creating sequence batches, although it is rather unintuitive.
```
def get_batches(int_text, batch_size, seq_length):
"""
Return batches of input and target
:param int_text: Text with the words replaced by their ids
:param batch_size: The size of batch
:param seq_length: The length of sequence
:return: Batches as a Numpy array
"""
# TODO: Implement Function
n_batches = int(len(int_text) / (batch_size * seq_length))
# Drop the last few characters to make only full batches
xdata = np.array(int_text[: n_batches * batch_size * seq_length])
ydata = np.roll(xdata,-1)
x_batches = np.split(xdata.reshape(batch_size, -1), n_batches, 1)
y_batches = np.split(ydata.reshape(batch_size, -1), n_batches, 1)
return np.array(list(zip(x_batches, y_batches)))
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
tests.test_get_batches(get_batches)
```
## Neural Network Training
### Hyperparameters
Tune the following parameters:
- Set `num_epochs` to the number of epochs.
- Set `batch_size` to the batch size.
- Set `rnn_size` to the size of the RNNs.
- Set `embed_dim` to the size of the embedding.
- Set `seq_length` to the length of sequence.
- Set `learning_rate` to the learning rate.
- Set `show_every_n_batches` to the number of batches the neural network should print progress.
```
# Number of Epochs
num_epochs = 2000
# Batch Size
batch_size = 256
# RNN Size
rnn_size = 128
# Embedding Dimension Size
embed_dim = 256
# Sequence Length
seq_length = 32
# Learning Rate
learning_rate = 0.001
# Show stats for every n number of batches
show_every_n_batches = 100
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
save_dir = './save'
```
### Build the Graph
Build the graph using the neural network you implemented.
```
"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
from tensorflow.contrib import seq2seq
train_graph = tf.Graph()
with train_graph.as_default():
vocab_size = len(int_to_vocab)
input_text, targets, lr = get_inputs()
input_data_shape = tf.shape(input_text)
cell, initial_state = get_init_cell(input_data_shape[0], rnn_size)
logits, final_state = build_nn(cell, rnn_size, input_text, vocab_size, embed_dim)
# Probabilities for generating words
probs = tf.nn.softmax(logits, name='probs')
# Loss function
cost = seq2seq.sequence_loss(
logits,
targets,
tf.ones([input_data_shape[0], input_data_shape[1]]))
# Optimizer
optimizer = tf.train.AdamOptimizer(lr)
# Gradient Clipping
gradients = optimizer.compute_gradients(cost)
capped_gradients = [(tf.clip_by_value(grad, -1., 1.), var) for grad, var in gradients if grad is not None]
train_op = optimizer.apply_gradients(capped_gradients)
```
## Train
Train the neural network on the preprocessed data. If you have a hard time getting a good loss, check the [forums](https://discussions.udacity.com/) to see if anyone is having the same problem.
```
"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
batches = get_batches(int_text, batch_size, seq_length)
with tf.Session(graph=train_graph) as sess:
sess.run(tf.global_variables_initializer())
for epoch_i in range(num_epochs):
state = sess.run(initial_state, {input_text: batches[0][0]})
for batch_i, (x, y) in enumerate(batches):
feed = {
input_text: x,
targets: y,
initial_state: state,
lr: learning_rate}
train_loss, state, _ = sess.run([cost, final_state, train_op], feed)
# Show every <show_every_n_batches> batches
if (epoch_i * len(batches) + batch_i) % show_every_n_batches == 0:
print('Epoch {:>3} Batch {:>4}/{} train_loss = {:.3f}'.format(
epoch_i,
batch_i,
len(batches),
train_loss))
# Save Model
saver = tf.train.Saver()
saver.save(sess, save_dir)
print('Model Trained and Saved')
```
## Save Parameters
Save `seq_length` and `save_dir` for generating a new TV script.
```
"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
# Save parameters for checkpoint
helper.save_params((seq_length, save_dir))
```
# Checkpoint
```
"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
import tensorflow as tf
import numpy as np
import helper
import problem_unittests as tests
_, vocab_to_int, int_to_vocab, token_dict = helper.load_preprocess()
seq_length, load_dir = helper.load_params()
```
## Implement Generate Functions
### Get Tensors
Get tensors from `loaded_graph` using the function [`get_tensor_by_name()`](https://www.tensorflow.org/api_docs/python/tf/Graph#get_tensor_by_name). Get the tensors using the following names:
- "input:0"
- "initial_state:0"
- "final_state:0"
- "probs:0"
Return the tensors in the following tuple `(InputTensor, InitialStateTensor, FinalStateTensor, ProbsTensor)`
```
def get_tensors(loaded_graph):
"""
Get input, initial state, final state, and probabilities tensor from <loaded_graph>
:param loaded_graph: TensorFlow graph loaded from file
:return: Tuple (InputTensor, InitialStateTensor, FinalStateTensor, ProbsTensor)
"""
# TODO: Implement Function
input = loaded_graph.get_tensor_by_name("input:0")
initial_state = loaded_graph.get_tensor_by_name("initial_state:0")
final_state = loaded_graph.get_tensor_by_name("final_state:0")
probs = loaded_graph.get_tensor_by_name("probs:0")
return input, initial_state, final_state, probs
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
tests.test_get_tensors(get_tensors)
```
### Choose Word
Implement the `pick_word()` function to select the next word using `probabilities`.
```
def pick_word(probabilities, int_to_vocab):
"""
Pick the next word in the generated text
:param probabilities: Probabilites of the next word
:param int_to_vocab: Dictionary of word ids as the keys and words as the values
:return: String of the predicted word
"""
# TODO: Implement Function
index = np.argmax(probabilities)
return int_to_vocab[index]
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
tests.test_pick_word(pick_word)
```
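`argmax` is greedy: it always returns the single most likely word, which tends to make generated scripts repetitive. A common alternative, sketched here as an optional variant rather than the required implementation, is to sample the next word in proportion to its predicted probability:

```python
import numpy as np

def pick_word_sampled(probabilities, int_to_vocab):
    """Sample the next word id according to the predicted distribution."""
    idx = np.random.choice(len(probabilities), p=probabilities)
    return int_to_vocab[idx]

# Tiny made-up vocabulary and distribution for demonstration
demo_vocab = {0: 'moe_szyslak', 1: 'homer_simpson', 2: 'barney_gumble'}
demo_probs = np.array([0.7, 0.2, 0.1])
print(pick_word_sampled(demo_probs, demo_vocab))  # usually 'moe_szyslak', but not always
```

Sampling keeps some randomness in the generated text, at the cost of occasionally picking low-probability words.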
## Generate TV Script
This will generate the TV script for you. Set `gen_length` to the length of TV script you want to generate.
```
gen_length = 200
# homer_simpson, moe_szyslak, or Barney_Gumble
prime_word = 'moe_szyslak'
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
loaded_graph = tf.Graph()
with tf.Session(graph=loaded_graph) as sess:
# Load saved model
loader = tf.train.import_meta_graph(load_dir + '.meta')
loader.restore(sess, load_dir)
# Get Tensors from loaded model
input_text, initial_state, final_state, probs = get_tensors(loaded_graph)
# Sentences generation setup
gen_sentences = [prime_word + ':']
prev_state = sess.run(initial_state, {input_text: np.array([[1]])})
# Generate sentences
for n in range(gen_length):
# Dynamic Input
dyn_input = [[vocab_to_int[word] for word in gen_sentences[-seq_length:]]]
dyn_seq_length = len(dyn_input[0])
# Get Prediction
probabilities, prev_state = sess.run(
[probs, final_state],
{input_text: dyn_input, initial_state: prev_state})
pred_word = pick_word(probabilities[dyn_seq_length-1], int_to_vocab)
gen_sentences.append(pred_word)
# Remove tokens
tv_script = ' '.join(gen_sentences)
for key, token in token_dict.items():
ending = ' ' if key in ['\n', '(', '"'] else ''
tv_script = tv_script.replace(' ' + token.lower(), key)
tv_script = tv_script.replace('\n ', '\n')
tv_script = tv_script.replace('( ', '(')
print(tv_script)
```
# The TV Script is Nonsensical
It's OK if the TV script doesn't make any sense. We trained on less than a megabyte of text. In order to get good results, you'll have to use a smaller vocabulary or get more data. Luckily, there's more data! As we mentioned at the beginning of this project, this is a subset of [another dataset](https://www.kaggle.com/wcukierski/the-simpsons-by-the-data). We didn't have you train on all the data because that would take too long. However, you are free to train your neural network on all the data (after you complete the project, of course).
# Submitting This Project
When submitting this project, make sure to run all the cells before saving the notebook. Save the notebook file as "dlnd_tv_script_generation.ipynb" and save it as an HTML file under "File" -> "Download as". Include the "helper.py" and "problem_unittests.py" files in your submission.
# Introduction to Deep Learning with PyTorch
In this notebook, you'll get introduced to [PyTorch](http://pytorch.org/), a framework for building and training neural networks. PyTorch in a lot of ways behaves like the arrays you love from Numpy. These Numpy arrays, after all, are just tensors. PyTorch takes these tensors and makes it simple to move them to GPUs for the faster processing needed when training neural networks. It also provides a module that automatically calculates gradients (for backpropagation!) and another module specifically for building neural networks. Altogether, PyTorch ends up being more coherent with Python and the Numpy/Scipy stack compared to TensorFlow and other frameworks.
## Neural Networks
Deep Learning is based on artificial neural networks which have been around in some form since the late 1950s. The networks are built from individual parts approximating neurons, typically called units or simply "neurons." Each unit has some number of weighted inputs. These weighted inputs are summed together (a linear combination) then passed through an activation function to get the unit's output.
<img src="assets/simple_neuron.png" width=400px>
Mathematically this looks like:
$$
\begin{align}
y &= f(w_1 x_1 + w_2 x_2 + b) \\
y &= f\left(\sum_i w_i x_i +b \right)
\end{align}
$$
With vectors this is the dot/inner product of two vectors:
$$
h = \begin{bmatrix}
x_1 \, x_2 \cdots x_n
\end{bmatrix}
\cdot
\begin{bmatrix}
w_1 \\
w_2 \\
\vdots \\
w_n
\end{bmatrix}
$$
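As a quick check of the two equivalent formulations above, here is a small NumPy sketch with a sigmoid as the activation $f$ (the input and weight values are made up for illustration):

```python
import numpy as np

def sigmoid(z):
    # Squash the linear combination into (0, 1)
    return 1 / (1 + np.exp(-z))

x = np.array([0.5, -1.0, 2.0])   # example inputs
w = np.array([0.1, 0.4, -0.2])   # example weights
b = 0.05                          # bias term

y_sum = sigmoid(np.sum(w * x) + b)  # elementwise multiply, then sum
y_dot = sigmoid(x @ w + b)          # the same value as a dot product
```

Both expressions compute the same linear combination, so `y_sum` and `y_dot` agree up to floating-point precision.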
## Tensors
It turns out neural network computations are just a bunch of linear algebra operations on *tensors*, a generalization of matrices. A vector is a 1-dimensional tensor, a matrix is a 2-dimensional tensor, and an array with three indices is a 3-dimensional tensor (RGB color images, for example). Tensors are the fundamental data structure for neural networks, and PyTorch (like pretty much every other deep learning framework) is built around them.
<img src="assets/tensor_examples.svg" width=600px>
With the basics covered, it's time to explore how we can use PyTorch to build a simple neural network.
```
# First, import PyTorch
import torch
def activation(x):
""" Sigmoid activation function
Arguments
---------
x: torch.Tensor
"""
return 1/(1+torch.exp(-x))
### Generate some data
torch.manual_seed(7) # Set the random seed so things are predictable
# Features are 5 random normal variables
features = torch.randn((1, 5))
# True weights for our data, random normal variables again
weights = torch.randn_like(features)
# and a true bias term
bias = torch.randn((1, 1))
```
Above I generated data we can use to get the output of our simple network. This is all just random for now; going forward we'll start using real data. Going through each relevant line:
`features = torch.randn((1, 5))` creates a tensor with shape `(1, 5)`, one row and five columns, that contains values randomly distributed according to the normal distribution with a mean of zero and standard deviation of one.
`weights = torch.randn_like(features)` creates another tensor with the same shape as `features`, again containing values from a normal distribution.
Finally, `bias = torch.randn((1, 1))` creates a single value from a normal distribution.
PyTorch tensors can be added, multiplied, subtracted, etc., just like Numpy arrays. In general, you'll use PyTorch tensors pretty much the same way you'd use Numpy arrays. They come with some nice benefits, though, such as GPU acceleration, which we'll get to later. For now, use the generated data to calculate the output of this simple single-layer network.
> **Exercise**: Calculate the output of the network with input features `features`, weights `weights`, and bias `bias`. Similar to Numpy, PyTorch has a [`torch.sum()`](https://pytorch.org/docs/stable/torch.html#torch.sum) function, as well as a `.sum()` method on tensors, for taking sums. Use the function `activation` defined above as the activation function.
```
## Calculate the output of this network using the weights and bias tensors
y = activation(torch.sum(features * weights) + bias)
# or y = activation((features * weights).sum() + bias) tensor has .sum method
print(y)
```
You can do the multiplication and sum in the same operation using a matrix multiplication. In general, you'll want to use matrix multiplications since they are more efficient and accelerated using modern libraries and high-performance computing on GPUs.
Here, we want to do a matrix multiplication of the features and the weights. For this we can use [`torch.mm()`](https://pytorch.org/docs/stable/torch.html#torch.mm) or [`torch.matmul()`](https://pytorch.org/docs/stable/torch.html#torch.matmul), which is somewhat more complicated and supports broadcasting. If we try to do it with `features` and `weights` as they are, we'll get an error:
```python
>> torch.mm(features, weights)
---------------------------------------------------------------------------
RuntimeError Traceback (most recent call last)
<ipython-input-13-15d592eb5279> in <module>()
----> 1 torch.mm(features, weights)
RuntimeError: size mismatch, m1: [1 x 5], m2: [1 x 5] at /Users/soumith/minicondabuild3/conda-bld/pytorch_1524590658547/work/aten/src/TH/generic/THTensorMath.c:2033
```
As you're building neural networks in any framework, you'll see this often. Really often. What's happening here is our tensors aren't the correct shapes to perform a matrix multiplication. Remember that for matrix multiplications, the number of columns in the first tensor must equal the number of rows in the second tensor. Both `features` and `weights` have the same shape, `(1, 5)`. This means we need to change the shape of `weights` to get the matrix multiplication to work.
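The same shape rule can be seen in NumPy, whose `@` operator enforces it the same way (a sketch, independent of the PyTorch objects above):

```python
import numpy as np

a = np.random.randn(1, 5)  # like `features`, shape (1, 5)
b = np.random.randn(1, 5)  # like `weights`, shape (1, 5)

# Columns of the first operand (5) don't match rows of the second (1), so this fails
try:
    a @ b
    mismatched = False
except ValueError:
    mismatched = True

# Reshaping the second operand to (5, 1) makes the inner dimensions agree
out = a @ b.reshape(5, 1)  # (1, 5) @ (5, 1) -> (1, 1)
```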
**Note:** To see the shape of a tensor called `tensor`, use `tensor.shape`. If you're building neural networks, you'll be using this attribute often.
There are a few options here: [`weights.reshape()`](https://pytorch.org/docs/stable/tensors.html#torch.Tensor.reshape), [`weights.resize_()`](https://pytorch.org/docs/stable/tensors.html#torch.Tensor.resize_), and [`weights.view()`](https://pytorch.org/docs/stable/tensors.html#torch.Tensor.view).
* `weights.reshape(a, b)` will return a new tensor with the same data as `weights` and size `(a, b)`; sometimes it shares memory with the original tensor, and sometimes it returns a clone, meaning it copies the data to another part of memory.
* `weights.resize_(a, b)` returns the same tensor with a different shape. However, if the new shape results in fewer elements than the original tensor, some elements will be removed from the tensor (but not from memory). If the new shape results in more elements than the original tensor, new elements will be uninitialized in memory. Here I should note that the underscore at the end of the method denotes that this method is performed **in-place**. Here is a great forum thread to [read more about in-place operations](https://discuss.pytorch.org/t/what-is-in-place-operation/16244) in PyTorch.
* `weights.view(a, b)` will return a new tensor with the same data as `weights` with size `(a, b)`.
I usually use `.view()`, but any of the three methods will work for this. So, now we can reshape `weights` to have five rows and one column with something like `weights.view(5, 1)`.
> **Exercise**: Calculate the output of our little network using matrix multiplication.
```
## Calculate the output of this network using matrix multiplication
y = activation(torch.mm(features, weights.view(5,1)) + bias)
print(y)
```
### Stack them up!
That's how you can calculate the output for a single neuron. The real power of this algorithm happens when you start stacking these individual units into layers and stacks of layers, into a network of neurons. The output of one layer of neurons becomes the input for the next layer. With multiple input units and output units, we now need to express the weights as a matrix.
<img src='assets/multilayer_diagram_weights.png' width=450px>
The first layer shown on the bottom here are the inputs, understandably called the **input layer**. The middle layer is called the **hidden layer**, and the final layer (on the right) is the **output layer**. We can express this network mathematically with matrices again and use matrix multiplication to get linear combinations for each unit in one operation. For example, the hidden layer ($h_1$ and $h_2$ here) can be calculated
$$
\vec{h} = [h_1 \, h_2] =
\begin{bmatrix}
x_1 \, x_2 \cdots \, x_n
\end{bmatrix}
\cdot
\begin{bmatrix}
w_{11} & w_{12} \\
w_{21} &w_{22} \\
\vdots &\vdots \\
w_{n1} &w_{n2}
\end{bmatrix}
$$
The output for this small network is found by treating the hidden layer as inputs for the output unit. The network output is expressed simply as
$$
y = f_2 \! \left(\, f_1 \! \left(\vec{x} \, \mathbf{W_1}\right) \mathbf{W_2} \right)
$$
```
### Generate some data
torch.manual_seed(7) # Set the random seed so things are predictable
# Features are 3 random normal variables
features = torch.randn((1, 3))
# Define the size of each layer in our network
n_input = features.shape[1] # Number of input units, must match number of input features
n_hidden = 2 # Number of hidden units
n_output = 1 # Number of output units
# Weights for inputs to hidden layer
W1 = torch.randn(n_input, n_hidden)
# Weights for hidden layer to output layer
W2 = torch.randn(n_hidden, n_output)
# and bias terms for hidden and output layers
B1 = torch.randn((1, n_hidden))
B2 = torch.randn((1, n_output))
```
> **Exercise:** Calculate the output for this multi-layer network using the weights `W1` & `W2`, and the biases, `B1` & `B2`.
```
## Your solution here
h = activation(torch.mm(features,W1) + B1)
output = activation(torch.mm(h, W2) + B2)
print(output)
```
If you did this correctly, you should see the output `tensor([[ 0.3171]])`.
The number of hidden units is a parameter of the network, often called a **hyperparameter** to differentiate it from the weights and bias parameters. As you'll see later when we discuss training a neural network, the more hidden units a network has, and the more layers, the better able it is to learn from data and make accurate predictions.
## Numpy to Torch and back
Special bonus section! PyTorch has a great feature for converting between Numpy arrays and Torch tensors. To create a tensor from a Numpy array, use `torch.from_numpy()`. To convert a tensor to a Numpy array, use the `.numpy()` method.
```
import numpy as np
a = np.random.rand(4,3)
a
b = torch.from_numpy(a)
b
b.numpy()
```
The memory is shared between the Numpy array and Torch tensor, so if you change the values in-place of one object, the other will change as well.
```
# Multiply PyTorch Tensor by 2, in place
b.mul_(2)
# Numpy array matches new values from Tensor
a
```
## Explore The Data: Plot Categorical Features
Using the Titanic dataset from [this](https://www.kaggle.com/c/titanic/overview) Kaggle competition.
This dataset contains information about 891 people who were on board the ship when it sank on April 15th, 1912. As noted in the description on Kaggle's website, some people aboard the ship were more likely to survive the wreck than others. There were not enough lifeboats for everybody, so women, children, and the upper class were prioritized. Using the information about these 891 passengers, the challenge is to build a model to predict which people would survive based on the following fields:
- **Name** (str) - Name of the passenger
- **Pclass** (int) - Ticket class (1st, 2nd, or 3rd)
- **Sex** (str) - Gender of the passenger
- **Age** (float) - Age in years
- **SibSp** (int) - Number of siblings and spouses aboard
- **Parch** (int) - Number of parents and children aboard
- **Ticket** (str) - Ticket number
- **Fare** (float) - Passenger fare
- **Cabin** (str) - Cabin number
- **Embarked** (str) - Port of embarkation (C = Cherbourg, Q = Queenstown, S = Southampton)
**This section focuses on exploring the `Name`, `Sex`, `Ticket`, `Cabin`, and `Embarked` features.**
### Read In Data
```
# Read in our data
import matplotlib.pyplot as plt
import seaborn as sns
%matplotlib inline
import numpy as np
import pandas as pd
titanic = pd.read_csv('titanic.csv',
usecols=['Survived', 'Name', 'Sex', 'Cabin', 'Embarked'])
titanic.head()
```
### Plot Categorical Features
```
# Create a title feature by parsing passenger name and create a cabin indicator variable
titanic['Title_Raw'] = titanic['Name'].apply(lambda x: x.split(',')[1].split('.')[0].strip())
titanic['Title'] = titanic['Title_Raw'].apply(lambda x: x if x in ['Master', 'Miss', 'Mr', 'Mrs'] else 'Other')
titanic['Cabin_ind'] = np.where(titanic['Cabin'].isnull(), 0, 1)
titanic.head()
```
* We built the `Title` column for the sake of visualization: as we saw, the only title groups that are both numerous and strongly related to survival are Mr, Miss, Mrs, and Master.
* The same applies to `Cabin_ind`: whether the cabin value was missing showed a strong relationship with survival rate.
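The title-parsing logic from the cell above can be sanity-checked on its own with a couple of synthetic names (plain Python, no dataset required; `parse_title` and `group_title` are hypothetical helper names mirroring the lambdas above):

```python
def parse_title(name):
    # "Last, Title. First ..." -> "Title", matching the split/strip lambda above
    return name.split(',')[1].split('.')[0].strip()

def group_title(title, keep=('Master', 'Miss', 'Mr', 'Mrs')):
    # Collapse rare titles into a single 'Other' bucket
    return title if title in keep else 'Other'

print(parse_title('Braund, Mr. Owen Harris'))            # a common title
print(group_title(parse_title('Braund, Mr. Owen Harris')))
print(group_title('Dr'))                                  # a rare title
```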
```
# Generate categorical plots for features
for col in ['Title', 'Sex', 'Cabin_ind', 'Embarked']:
sns.catplot(x=col, y='Survived', data=titanic, kind='point', aspect=2, )
plt.ylim(0, 1)
# Split embarked by whether the passenger had a cabin
titanic.pivot_table('Survived', index='Cabin_ind', columns='Embarked', aggfunc='count')
```
```
%matplotlib inline
```
# Tensors
Tensors are a specialized data structure that are very similar to arrays and matrices.
In PyTorch, we use tensors to encode the inputs and outputs of a model, as well as the model’s parameters.
Tensors are similar to [NumPy’s](https://numpy.org/) ndarrays, except that tensors can run on GPUs or other hardware accelerators. In fact, tensors and NumPy arrays can often share the same underlying memory, eliminating the need to copy data (see `bridge-to-np-label`). Tensors are also optimized for automatic differentiation (we'll see more about that later in the Autograd unit). If you’re familiar with `ndarrays`, you’ll be right at home with the Tensor API. If not, follow along!
Let's start by setting up our environment.
```
import torch
import numpy as np
```
# Initializing a Tensor
Tensors can be initialized in various ways. Take a look at the following examples:
## Directly from data
Tensors can be created directly from data. The data type is automatically inferred.
```
data = [[1, 2],[3, 4]]
x_data = torch.tensor(data)
```
## From a NumPy array
Tensors can be created from NumPy arrays (and vice versa - see `bridge-to-np-label`).
```
np_array = np.array(data)
x_np = torch.from_numpy(np_array)
```
## From another tensor:
The new tensor retains the properties (shape, data type) of the argument tensor, unless explicitly overridden.
```
x_ones = torch.ones_like(x_data) # retains the properties of x_data
print(f"Ones Tensor: \n {x_ones} \n")
x_rand = torch.rand_like(x_data, dtype=torch.float) # overrides the datatype of x_data
print(f"Random Tensor: \n {x_rand} \n")
```
## With random or constant values:
``shape`` is a tuple of tensor dimensions. In the functions below, it determines the dimensionality of the output tensor.
```
shape = (2,3,)
rand_tensor = torch.rand(shape)
ones_tensor = torch.ones(shape)
zeros_tensor = torch.zeros(shape)
print(f"Random Tensor: \n {rand_tensor} \n")
print(f"Ones Tensor: \n {ones_tensor} \n")
print(f"Zeros Tensor: \n {zeros_tensor}")
```
# Attributes of a Tensor
Tensor attributes describe their shape, data type, and the device on which they are stored.
```
tensor = torch.rand(3,4)
print(f"Shape of tensor: {tensor.shape}")
print(f"Datatype of tensor: {tensor.dtype}")
print(f"Device tensor is stored on: {tensor.device}")
```
# Operations on Tensors
Over 100 tensor operations, including arithmetic, linear algebra, matrix manipulation (transposing,
indexing, slicing), sampling and more are
comprehensively described [here](https://pytorch.org/docs/stable/torch.html).
Each of these operations can be run on the GPU (at typically higher speeds than on a
CPU).
By default, tensors are created on the CPU. We need to explicitly move tensors to the GPU using
`.to` method (after checking for GPU availability). Keep in mind that copying large tensors
across devices can be expensive in terms of time and memory!
```
# We move our tensor to the GPU if available
if torch.cuda.is_available():
tensor = tensor.to('cuda')
```
Try out some of the operations from the list.
If you're familiar with the NumPy API, you'll find the Tensor API a breeze to use.
## Standard numpy-like indexing and slicing:
```
tensor = torch.ones(4, 4)
print('First row: ',tensor[0])
print('First column: ', tensor[:, 0])
print('Last column:', tensor[..., -1])
tensor[:,1] = 0
print(tensor)
```
## Joining tensors
You can use `torch.cat` to concatenate a sequence of tensors along a given dimension.
See also [torch.stack](https://pytorch.org/docs/stable/generated/torch.stack.html),
another tensor joining op that is subtly different from ``torch.cat``.
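A NumPy sketch of that subtle difference: concatenation grows an existing dimension, while stacking adds a new one (`torch.cat` and `torch.stack` follow the same shape rules):

```python
import numpy as np

t = np.ones((4, 4))

cat0 = np.concatenate([t, t, t], axis=0)  # grow existing dim 0 -> (12, 4)
cat1 = np.concatenate([t, t, t], axis=1)  # grow existing dim 1 -> (4, 12)
stk = np.stack([t, t, t], axis=0)         # insert a new leading dim -> (3, 4, 4)
```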
```
t1 = torch.cat([tensor, tensor, tensor], dim=1)
print(t1)
```
## Arithmetic operations
```
# This computes the matrix multiplication between two tensors. y1, y2, y3 will have the same value
y1 = tensor @ tensor.T
y2 = tensor.matmul(tensor.T)
y3 = torch.rand_like(tensor)
torch.matmul(tensor, tensor.T, out=y3)
# This computes the element-wise product. z1, z2, z3 will have the same value
z1 = tensor * tensor
z2 = tensor.mul(tensor)
z3 = torch.rand_like(tensor)
torch.mul(tensor, tensor, out=z3)
```
## Single-element tensors
If you have a one-element tensor, for example by aggregating all
values of a tensor into one value, you can convert it to a Python
numerical value using `item()`:
```
agg = tensor.sum()
agg_item = agg.item()
print(agg_item, type(agg_item))
```
## In-place operations
Operations that store the result into the operand are called in-place. They are denoted by a ``_`` suffix.
For example, ``x.copy_(y)`` and ``x.t_()`` will change ``x``.
> **Note:** In-place operations save some memory, but can be problematic when computing derivatives because of an immediate loss of history. Hence, their use is discouraged.
```
print(tensor, "\n")
tensor.add_(5)
print(tensor)
```
## Bridge with NumPy
Tensors on the CPU and NumPy arrays can share their underlying memory
locations, and changing one will change the other.
### Tensor to NumPy array
```
t = torch.ones(5)
print(f"t: {t}")
n = t.numpy()
print(f"n: {n}")
```
A change in the tensor reflects in the NumPy array.
```
t.add_(1)
print(f"t: {t}")
print(f"n: {n}")
```
### NumPy array to Tensor
```
n = np.ones(5)
t = torch.from_numpy(n)
```
Changes in the NumPy array reflects in the tensor.
```
np.add(n, 1, out=n)
print(f"t: {t}")
print(f"n: {n}")
```
# Deep Learning & Art: Neural Style Transfer
Welcome to the second assignment of this week. In this assignment, you will learn about Neural Style Transfer. This algorithm was created by Gatys et al. (2015) (https://arxiv.org/abs/1508.06576).
**In this assignment, you will:**
- Implement the neural style transfer algorithm
- Generate novel artistic images using your algorithm
Most of the algorithms you've studied optimize a cost function to get a set of parameter values. In Neural Style Transfer, you'll optimize a cost function to get pixel values!
```
import os
import sys
import scipy.io
import scipy.misc
import matplotlib.pyplot as plt
from matplotlib.pyplot import imshow
from PIL import Image
from nst_utils import *
import numpy as np
import tensorflow as tf
%matplotlib inline
```
## 1 - Problem Statement
Neural Style Transfer (NST) is one of the most fun techniques in deep learning. As seen below, it merges two images, namely, a "content" image (C) and a "style" image (S), to create a "generated" image (G). The generated image G combines the "content" of the image C with the "style" of image S.
In this example, you are going to generate an image of the Louvre museum in Paris (content image C), mixed with a painting by Claude Monet, a leader of the impressionist movement (style image S).
<img src="images/louvre_generated.png" style="width:750px;height:200px;">
Let's see how you can do this.
## 2 - Transfer Learning
Neural Style Transfer (NST) uses a previously trained convolutional network, and builds on top of that. The idea of using a network trained on a different task and applying it to a new task is called transfer learning.
Following the original NST paper (https://arxiv.org/abs/1508.06576), we will use the VGG network. Specifically, we'll use VGG-19, a 19-layer version of the VGG network. This model has already been trained on the very large ImageNet database, and thus has learned to recognize a variety of low level features (at the earlier layers) and high level features (at the deeper layers).
Run the following code to load parameters from the VGG model. This may take a few seconds.
```
model = load_vgg_model("pretrained-model/imagenet-vgg-verydeep-19.mat")
print(model)
```
The model is stored in a python dictionary where each variable name is the key and the corresponding value is a tensor containing that variable's value. To run an image through this network, you just have to feed the image to the model. In TensorFlow, you can do so using the [tf.assign](https://www.tensorflow.org/api_docs/python/tf/assign) function. In particular, you will use the assign function like this:
```python
model["input"].assign(image)
```
This assigns the image as an input to the model. After this, if you want to access the activations of a particular layer, say layer `4_2` when the network is run on this image, you would run a TensorFlow session on the correct tensor `conv4_2`, as follows:
```python
sess.run(model["conv4_2"])
```
## 3 - Neural Style Transfer
We will build the NST algorithm in three steps:
- Build the content cost function $J_{content}(C,G)$
- Build the style cost function $J_{style}(S,G)$
- Put it together to get $J(G) = \alpha J_{content}(C,G) + \beta J_{style}(S,G)$.
### 3.1 - Computing the content cost
In our running example, the content image C will be the picture of the Louvre Museum in Paris. Run the code below to see a picture of the Louvre.
```
content_image = scipy.misc.imread("images/louvre.jpg")
imshow(content_image)
```
The content image (C) shows the Louvre museum's pyramid surrounded by old Paris buildings, against a sunny sky with a few clouds.
**3.1.1 - How do you ensure the generated image G matches the content of the image C?**
As we saw in lecture, the earlier (shallower) layers of a ConvNet tend to detect lower-level features such as edges and simple textures, and the later (deeper) layers tend to detect higher-level features such as more complex textures as well as object classes.
We would like the "generated" image G to have similar content as the input image C. Suppose you have chosen some layer's activations to represent the content of an image. In practice, you'll get the most visually pleasing results if you choose a layer in the middle of the network--neither too shallow nor too deep. (After you have finished this exercise, feel free to come back and experiment with using different layers, to see how the results vary.)
So, suppose you have picked one particular hidden layer to use. Now, set the image C as the input to the pretrained VGG network, and run forward propagation. Let $a^{(C)}$ be the hidden layer activations in the layer you had chosen. (In lecture, we had written this as $a^{[l](C)}$, but here we'll drop the superscript $[l]$ to simplify the notation.) This will be an $n_H \times n_W \times n_C$ tensor. Repeat this process with the image G: set G as the input, and run forward propagation. Let $a^{(G)}$ be the corresponding hidden layer activation. We will define the content cost function as:
$$J_{content}(C,G) = \frac{1}{4 \times n_H \times n_W \times n_C}\sum _{ \text{all entries}} (a^{(C)} - a^{(G)})^2\tag{1} $$
Here, $n_H, n_W$ and $n_C$ are the height, width and number of channels of the hidden layer you have chosen, and appear in a normalization term in the cost. For clarity, note that $a^{(C)}$ and $a^{(G)}$ are the volumes corresponding to a hidden layer's activations. In order to compute the cost $J_{content}(C,G)$, it might also be convenient to unroll these 3D volumes into a 2D matrix, as shown below. (Technically this unrolling step isn't needed to compute $J_{content}$, but it will be good practice for when you do need to carry out a similar operation later for computing the style cost $J_{style}$.)
<img src="images/NST_LOSS.png" style="width:800px;height:400px;">
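Before writing the TensorFlow version, equation (1) can be sketched directly in NumPy with random stand-ins for the activation volumes (the unrolled and non-unrolled computations give the same value):

```python
import numpy as np

np.random.seed(1)
n_H, n_W, n_C = 4, 4, 3
a_C = np.random.randn(n_H, n_W, n_C)  # stand-in for the content activations
a_G = np.random.randn(n_H, n_W, n_C)  # stand-in for the generated activations

# Equation (1): scaled sum of squared differences over all entries
J_content = np.sum((a_C - a_G) ** 2) / (4 * n_H * n_W * n_C)
```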
**Exercise:** Compute the "content cost" using TensorFlow.
**Instructions**: The 3 steps to implement this function are:
1. Retrieve dimensions from a_G:
- To retrieve dimensions from a tensor X, use: `X.get_shape().as_list()`
2. Unroll a_C and a_G as explained in the picture above
- If you are stuck, take a look at [Hint1](https://www.tensorflow.org/versions/r1.3/api_docs/python/tf/transpose) and [Hint2](https://www.tensorflow.org/versions/r1.2/api_docs/python/tf/reshape).
3. Compute the content cost:
- If you are stuck, take a look at [Hint3](https://www.tensorflow.org/api_docs/python/tf/reduce_sum), [Hint4](https://www.tensorflow.org/api_docs/python/tf/square) and [Hint5](https://www.tensorflow.org/api_docs/python/tf/subtract).
```
# GRADED FUNCTION: compute_content_cost
def compute_content_cost(a_C, a_G):
"""
Computes the content cost
Arguments:
a_C -- tensor of dimension (1, n_H, n_W, n_C), hidden layer activations representing content of the image C
a_G -- tensor of dimension (1, n_H, n_W, n_C), hidden layer activations representing content of the image G
Returns:
J_content -- scalar that you compute using equation 1 above.
"""
### START CODE HERE ###
# Retrieve dimensions from a_G (≈1 line)
m, n_H, n_W, n_C = tf.convert_to_tensor(a_C, dtype=tf.float32).get_shape().as_list()
# Reshape a_C and a_G (≈2 lines)
a_C_unrolled = tf.reshape(a_C,[m,-1,n_C])
a_G_unrolled = tf.reshape(a_G,[m,-1,n_C])
# compute the cost with tensorflow (≈1 line)
J_content = tf.reduce_sum(tf.square(tf.subtract(a_C_unrolled, a_G_unrolled)))*(1 / (4 * n_H * n_W * n_C))
### END CODE HERE ###
return J_content
tf.reset_default_graph()
with tf.Session() as test:
tf.set_random_seed(1)
a_C = tf.random_normal([1, 4, 4, 3], mean=1, stddev=4)
a_G = tf.random_normal([1, 4, 4, 3], mean=1, stddev=4)
J_content = compute_content_cost(a_C, a_G)
print("J_content = " + str(J_content.eval()))
```
**Expected Output**:
<table>
<tr>
<td>
**J_content**
</td>
<td>
6.76559
</td>
</tr>
</table>
<font color='blue'>
**What you should remember**:
- The content cost takes a hidden layer activation of the neural network, and measures how different $a^{(C)}$ and $a^{(G)}$ are.
- When we minimize the content cost later, this will help make sure $G$ has similar content as $C$.
### 3.2 - Computing the style cost
For our running example, we will use the following style image:
```
style_image = scipy.misc.imread("images/monet_800600.jpg")
imshow(style_image)
```
This painting was painted in the style of *[impressionism](https://en.wikipedia.org/wiki/Impressionism)*.
Let's see how you can now define a "style" cost function $J_{style}(S,G)$.
### 3.2.1 - Style matrix
The style matrix is also called a "Gram matrix." In linear algebra, the Gram matrix G of a set of vectors $(v_{1},\dots ,v_{n})$ is the matrix of dot products, whose entries are ${\displaystyle G_{ij} = v_{i}^T v_{j} = np.dot(v_{i}, v_{j}) }$. In other words, $G_{ij}$ compares how similar $v_i$ is to $v_j$: If they are highly similar, you would expect them to have a large dot product, and thus for $G_{ij}$ to be large.
Note that there is an unfortunate collision in the variable names used here. We are following common terminology used in the literature, but $G$ is used to denote the Style matrix (or Gram matrix) as well as to denote the generated image $G$. We will try to make sure which $G$ we are referring to is always clear from the context.
In NST, you can compute the Style matrix by multiplying the "unrolled" filter matrix with their transpose:
<img src="images/NST_GM.png" style="width:900px;height:300px;">
The result is a matrix of dimension $(n_C,n_C)$ where $n_C$ is the number of filters. The value $G_{ij}$ measures how similar the activations of filter $i$ are to the activations of filter $j$.
One important property of the Gram matrix is that the diagonal elements $G_{ii}$ also measure how active filter $i$ is. For example, suppose filter $i$ is detecting vertical textures in the image. Then $G_{ii}$ measures how common vertical textures are in the image as a whole: if $G_{ii}$ is large, the image has a lot of vertical texture.
By capturing the prevalence of different types of features ($G_{ii}$), as well as how much different features occur together ($G_{ij}$), the Style matrix $G$ measures the style of an image.
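In NumPy the computation is just $G = AA^T$ on the unrolled filter matrix; a small sketch with random stand-in activations:

```python
import numpy as np

np.random.seed(0)
n_C, n_HW = 3, 8                # 3 filters, 8 spatial positions each
A = np.random.randn(n_C, n_HW)  # unrolled activations, shape (n_C, n_H*n_W)

G = A @ A.T  # Gram matrix, shape (n_C, n_C); G[i, j] = dot(A[i], A[j])
```

Because each entry is a dot product of two rows of `A`, the result is symmetric, with the filter self-correlations on the diagonal.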
**Exercise**:
Using TensorFlow, implement a function that computes the Gram matrix of a matrix A. The formula is: The gram matrix of A is $G_A = AA^T$. If you are stuck, take a look at [Hint 1](https://www.tensorflow.org/api_docs/python/tf/matmul) and [Hint 2](https://www.tensorflow.org/api_docs/python/tf/transpose).
```
# GRADED FUNCTION: gram_matrix
def gram_matrix(A):
"""
Argument:
A -- matrix of shape (n_C, n_H*n_W)
Returns:
GA -- Gram matrix of A, of shape (n_C, n_C)
"""
### START CODE HERE ### (≈1 line)
GA = tf.matmul(A,tf.transpose(A))
### END CODE HERE ###
return GA
tf.reset_default_graph()
with tf.Session() as test:
tf.set_random_seed(1)
A = tf.random_normal([3, 2*1], mean=1, stddev=4)
GA = gram_matrix(A)
print("GA = " + str(GA.eval()))
```
**Expected Output**:
<table>
<tr>
<td>
**GA**
</td>
<td>
[[ 6.42230511 -4.42912197 -2.09668207] <br>
[ -4.42912197 19.46583748 19.56387138] <br>
[ -2.09668207 19.56387138 20.6864624 ]]
</td>
</tr>
</table>
### 3.2.2 - Style cost
After generating the Style matrix (Gram matrix), your goal will be to minimize the distance between the Gram matrix of the "style" image S and that of the "generated" image G. For now, we are using only a single hidden layer $a^{[l]}$, and the corresponding style cost for this layer is defined as:
$$J_{style}^{[l]}(S,G) = \frac{1}{4 \times {n_C}^2 \times (n_H \times n_W)^2} \sum _{i=1}^{n_C}\sum_{j=1}^{n_C}(G^{(S)}_{ij} - G^{(G)}_{ij})^2\tag{2} $$
where $G^{(S)}$ and $G^{(G)}$ are respectively the Gram matrices of the "style" image and the "generated" image, computed using the hidden layer activations for a particular hidden layer in the network.
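As a NumPy sketch of equation (2) with random stand-ins for the activation volumes (the `gram` helper mirrors the Gram-matrix definition from the previous section):

```python
import numpy as np

np.random.seed(2)
n_H, n_W, n_C = 4, 4, 3
a_S = np.random.randn(n_H, n_W, n_C)  # stand-in for the style activations
a_G = np.random.randn(n_H, n_W, n_C)  # stand-in for the generated activations

def gram(a):
    # Unroll to shape (n_C, n_H*n_W), then G = A A^T
    A = a.reshape(-1, a.shape[-1]).T
    return A @ A.T

GS, GG = gram(a_S), gram(a_G)

# Equation (2): scaled squared distance between the two Gram matrices
J_style_layer = np.sum((GS - GG) ** 2) / (4 * n_C**2 * (n_H * n_W)**2)
```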
**Exercise**: Compute the style cost for a single layer.
**Instructions**: The 3 steps to implement this function are:
1. Retrieve dimensions from the hidden layer activations a_G:
- To retrieve dimensions from a tensor X, use: `X.get_shape().as_list()`
2. Unroll the hidden layer activations a_S and a_G into 2D matrices, as explained in the picture above.
- You may find [Hint1](https://www.tensorflow.org/versions/r1.3/api_docs/python/tf/transpose) and [Hint2](https://www.tensorflow.org/versions/r1.2/api_docs/python/tf/reshape) useful.
3. Compute the Style matrix of the images S and G. (Use the function you had previously written.)
4. Compute the Style cost:
- You may find [Hint3](https://www.tensorflow.org/api_docs/python/tf/reduce_sum), [Hint4](https://www.tensorflow.org/api_docs/python/tf/square) and [Hint5](https://www.tensorflow.org/api_docs/python/tf/subtract) useful.
```
# GRADED FUNCTION: compute_layer_style_cost
def compute_layer_style_cost(a_S, a_G):
"""
Arguments:
a_S -- tensor of dimension (1, n_H, n_W, n_C), hidden layer activations representing style of the image S
a_G -- tensor of dimension (1, n_H, n_W, n_C), hidden layer activations representing style of the image G
Returns:
J_style_layer -- tensor representing a scalar value, style cost defined above by equation (2)
"""
### START CODE HERE ###
# Retrieve dimensions from a_G (≈1 line)
m, n_H, n_W, n_C = a_G.get_shape().as_list()
# Reshape the images to have them of shape (n_C, n_H*n_W) (≈2 lines)
a_S = tf.transpose(tf.reshape(a_S, [n_H*n_W, n_C]))
a_G = tf.transpose(tf.reshape(a_G, [n_H*n_W, n_C]))
# Computing gram_matrices for both images S and G (≈2 lines)
GS = gram_matrix(a_S)
GG = gram_matrix(a_G)
# Computing the loss (≈1 line)
J_style_layer = tf.reduce_sum(tf.square(tf.subtract(GS, GG))) * (1 / (4 * n_C**2 * (n_H * n_W)**2))
### END CODE HERE ###
return J_style_layer
tf.reset_default_graph()
with tf.Session() as test:
tf.set_random_seed(1)
a_S = tf.random_normal([1, 4, 4, 3], mean=1, stddev=4)
a_G = tf.random_normal([1, 4, 4, 3], mean=1, stddev=4)
J_style_layer = compute_layer_style_cost(a_S, a_G)
print("J_style_layer = " + str(J_style_layer.eval()))
```
**Expected Output**:
<table>
<tr>
<td>
**J_style_layer**
</td>
<td>
9.19028
</td>
</tr>
</table>
### 3.2.3 - Style Weights
So far you have captured the style from only one layer. We'll get better results if we "merge" style costs from several different layers. After completing this exercise, feel free to come back and experiment with different weights to see how it changes the generated image $G$. But for now, this is a pretty reasonable default:
```
STYLE_LAYERS = [
('conv1_1', 0.2),
('conv2_1', 0.2),
('conv3_1', 0.2),
('conv4_1', 0.2),
('conv5_1', 0.2)]
```
You can combine the style costs for different layers as follows:
$$J_{style}(S,G) = \sum_{l} \lambda^{[l]} J^{[l]}_{style}(S,G)$$
where the values for $\lambda^{[l]}$ are given in `STYLE_LAYERS`.
We've implemented a `compute_style_cost(...)` function. It simply calls your `compute_layer_style_cost(...)` several times, weighting the results using the values in `STYLE_LAYERS`. Read over it to make sure you understand what it's doing.
<!--
2. Loop over (layer_name, coeff) from STYLE_LAYERS:
a. Select the output tensor of the current layer. As an example, to call the tensor from the "conv1_1" layer you would do: out = model["conv1_1"]
b. Get the style of the style image from the current layer by running the session on the tensor "out"
c. Get a tensor representing the style of the generated image from the current layer. It is just "out".
d. Now that you have both styles. Use the function you've implemented above to compute the style_cost for the current layer
e. Add (style_cost x coeff) of the current layer to overall style cost (J_style)
3. Return J_style, which should now be the sum of the (style_cost x coeff) for each layer.
-->
```
def compute_style_cost(model, STYLE_LAYERS):
"""
Computes the overall style cost from several chosen layers
Arguments:
model -- our tensorflow model
STYLE_LAYERS -- A python list containing:
- the names of the layers we would like to extract style from
- a coefficient for each of them
Returns:
J_style -- tensor representing a scalar value, style cost defined above by equation (2)
"""
# initialize the overall style cost
J_style = 0
for layer_name, coeff in STYLE_LAYERS:
# Select the output tensor of the currently selected layer
out = model[layer_name]
# Set a_S to be the hidden layer activation from the layer we have selected, by running the session on out
a_S = sess.run(out)
# Set a_G to be the hidden layer activation from same layer. Here, a_G references model[layer_name]
# and isn't evaluated yet. Later in the code, we'll assign the image G as the model input, so that
# when we run the session, this will be the activations drawn from the appropriate layer, with G as input.
a_G = out
# Compute style_cost for the current layer
J_style_layer = compute_layer_style_cost(a_S, a_G)
# Add coeff * J_style_layer of this layer to overall style cost
J_style += coeff * J_style_layer
return J_style
```
**Note**: In the body of the for-loop above, `a_G` is a tensor and hasn't been evaluated yet. It will be evaluated and updated at each iteration when we run the TensorFlow graph in model_nn() below.
<!--
How do you choose the coefficients for each layer? The deeper layers capture higher-level concepts, and the features in the deeper layers are less localized in the image relative to each other. So if you want the generated image to softly follow the style image, try choosing larger weights for deeper layers and smaller weights for the first layers. In contrast, if you want the generated image to strongly follow the style image, try choosing smaller weights for deeper layers and larger weights for the first layers
-->
<font color='blue'>
**What you should remember**:
- The style of an image can be represented using the Gram matrix of a hidden layer's activations. However, we get even better results combining this representation from multiple different layers. This is in contrast to the content representation, where usually using just a single hidden layer is sufficient.
- Minimizing the style cost will cause the image $G$ to follow the style of the image $S$.
</font>
### 3.3 - Defining the total cost to optimize
Finally, let's create a cost function that minimizes both the style and the content cost. The formula is:
$$J(G) = \alpha J_{content}(C,G) + \beta J_{style}(S,G)$$
**Exercise**: Implement the total cost function which includes both the content cost and the style cost.
```
# GRADED FUNCTION: total_cost
def total_cost(J_content, J_style, alpha = 10, beta = 40):
"""
Computes the total cost function
Arguments:
J_content -- content cost coded above
J_style -- style cost coded above
alpha -- hyperparameter weighting the importance of the content cost
beta -- hyperparameter weighting the importance of the style cost
Returns:
J -- total cost as defined by the formula above.
"""
### START CODE HERE ### (≈1 line)
J = alpha * J_content + beta * J_style
### END CODE HERE ###
return J
tf.reset_default_graph()
with tf.Session() as test:
np.random.seed(3)
J_content = np.random.randn()
J_style = np.random.randn()
J = total_cost(J_content, J_style)
print("J = " + str(J))
```
**Expected Output**:
<table>
<tr>
<td>
**J**
</td>
<td>
35.34667875478276
</td>
</tr>
</table>
<font color='blue'>
**What you should remember**:
- The total cost is a linear combination of the content cost $J_{content}(C,G)$ and the style cost $J_{style}(S,G)$
- $\alpha$ and $\beta$ are hyperparameters that control the relative weighting between content and style
</font>
## 4 - Solving the optimization problem
Finally, let's put everything together to implement Neural Style Transfer!
Here's what the program will have to do:
<font color='purple'>
1. Create an Interactive Session
2. Load the content image
3. Load the style image
4. Randomly initialize the image to be generated
5. Load the VGG-19 model
6. Build the TensorFlow graph:
- Run the content image through the VGG-19 model and compute the content cost
- Run the style image through the VGG-19 model and compute the style cost
- Compute the total cost
- Define the optimizer and the learning rate
7. Initialize the TensorFlow graph and run it for a large number of iterations, updating the generated image at every step.
</font>
Let's go through the individual steps in detail.
You've previously implemented the overall cost $J(G)$. We'll now set up TensorFlow to optimize this with respect to $G$. To do so, your program has to reset the graph and use an "[Interactive Session](https://www.tensorflow.org/api_docs/python/tf/InteractiveSession)". Unlike a regular session, the "Interactive Session" installs itself as the default session to build a graph. This allows you to run variables without constantly needing to refer to the session object, which simplifies the code.
Let's start the interactive session.
```
# Reset the graph
tf.reset_default_graph()
# Start interactive session
sess = tf.InteractiveSession()
```
Let's load, reshape, and normalize our "content" image (the Louvre museum picture):
```
content_image = scipy.misc.imread("images/louvre_small.jpg")
content_image = reshape_and_normalize_image(content_image)
```
Let's load, reshape and normalize our "style" image (Claude Monet's painting):
```
style_image = scipy.misc.imread("images/monet.jpg")
style_image = reshape_and_normalize_image(style_image)
```
Now, we initialize the "generated" image as a noisy image created from the content_image. Initializing the pixels of the generated image to be mostly noise but still slightly correlated with the content image helps the content of the "generated" image more rapidly match the content of the "content" image. (Feel free to look in `nst_utils.py` to see the details of `generate_noise_image(...)`; to do so, click "File-->Open..." at the upper-left corner of this Jupyter notebook.)
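For reference, a common implementation of this kind of initialization looks like the sketch below (the noise range and the `noise_ratio` default are assumptions for illustration; check `nst_utils.py` for the actual `generate_noise_image(...)` details):

```python
import numpy as np

def generate_noise_image_sketch(content_image, noise_ratio=0.6):
    """Blend uniform noise with the content image (assumed shape (1, n_H, n_W, n_C))."""
    # Uniform noise in the same shape as the content image
    noise_image = np.random.uniform(-20, 20, content_image.shape).astype('float32')
    # Mostly noise, but still slightly correlated with the content image
    return noise_image * noise_ratio + content_image * (1 - noise_ratio)

content = np.zeros((1, 300, 400, 3), dtype='float32')
generated = generate_noise_image_sketch(content)
print(generated.shape)  # (1, 300, 400, 3)
```

Raising `noise_ratio` toward 1 makes the starting point purer noise; lowering it keeps the initialization closer to the content image.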
```
generated_image = generate_noise_image(content_image)
imshow(generated_image[0])
```
Next, as explained in part (2), let's load the VGG-19 model.
```
model = load_vgg_model("pretrained-model/imagenet-vgg-verydeep-19.mat")
```
To get the program to compute the content cost, we will now assign `a_C` and `a_G` to be the appropriate hidden layer activations. We will use layer `conv4_2` to compute the content cost. The code below does the following:
1. Assign the content image to be the input to the VGG model.
2. Set a_C to be the tensor giving the hidden layer activation for layer "conv4_2".
3. Set a_G to be the tensor giving the hidden layer activation for the same layer.
4. Compute the content cost using a_C and a_G.
```
# Assign the content image to be the input of the VGG model.
sess.run(model['input'].assign(content_image))
# Select the output tensor of layer conv4_2
out = model['conv4_2']
# Set a_C to be the hidden layer activation from the layer we have selected
a_C = sess.run(out)
# Set a_G to be the hidden layer activation from same layer. Here, a_G references model['conv4_2']
# and isn't evaluated yet. Later in the code, we'll assign the image G as the model input, so that
# when we run the session, this will be the activations drawn from the appropriate layer, with G as input.
a_G = out
# Compute the content cost
J_content = compute_content_cost(a_C, a_G)
```
**Note**: At this point, a_G is a tensor and hasn't been evaluated. It will be evaluated and updated at each iteration when we run the TensorFlow graph in model_nn() below.
```
# Assign the input of the model to be the "style" image
sess.run(model['input'].assign(style_image))
# Compute the style cost
J_style = compute_style_cost(model, STYLE_LAYERS)
```
**Exercise**: Now that you have J_content and J_style, compute the total cost J by calling `total_cost()`. Use `alpha = 10` and `beta = 40`.
```
### START CODE HERE ### (1 line)
J = total_cost(J_content, J_style, 10, 40)
### END CODE HERE ###
```
You've previously learned how to set up the Adam optimizer in TensorFlow. Let's do that here, using a learning rate of 2.0. [See reference](https://www.tensorflow.org/api_docs/python/tf/train/AdamOptimizer)
```
# define optimizer (1 line)
optimizer = tf.train.AdamOptimizer(2.0)
# define train_step (1 line)
train_step = optimizer.minimize(J)
```
**Exercise**: Implement the model_nn() function, which initializes the variables of the TensorFlow graph, assigns the input image (initial generated image) as the input of the VGG-19 model, and runs the train_step for a large number of steps.
```
def model_nn(sess, input_image, num_iterations = 200):
# Initialize global variables (you need to run the session on the initializer)
### START CODE HERE ### (1 line)
sess.run(tf.global_variables_initializer())
### END CODE HERE ###
# Run the noisy input image (initial generated image) through the model. Use assign().
### START CODE HERE ### (1 line)
sess.run(model['input'].assign(input_image))
### END CODE HERE ###
for i in range(num_iterations):
# Run the session on the train_step to minimize the total cost
### START CODE HERE ### (1 line)
_ = sess.run(train_step)
### END CODE HERE ###
# Compute the generated image by running the session on the current model['input']
### START CODE HERE ### (1 line)
generated_image = sess.run(model['input'])
### END CODE HERE ###
# Print every 20 iterations.
if i%20 == 0:
Jt, Jc, Js = sess.run([J, J_content, J_style])
print("Iteration " + str(i) + " :")
print("total cost = " + str(Jt))
print("content cost = " + str(Jc))
print("style cost = " + str(Js))
# save current generated image in the "/output" directory
save_image("output/" + str(i) + ".png", generated_image)
# save last generated image
save_image('output/generated_image.jpg', generated_image)
return generated_image
```
Run the following cell to generate an artistic image. It should take about 3 minutes on CPU for every 20 iterations, but you'll start observing attractive results after ≈140 iterations. Neural Style Transfer is generally trained using GPUs.
```
model_nn(sess, generated_image)
```
**Expected Output**:
<table>
<tr>
<td>
**Iteration 0 : **
</td>
<td>
total cost = 5.05035e+09 <br>
content cost = 7877.67 <br>
style cost = 1.26257e+08
</td>
</tr>
</table>
You're done! After running this, in the upper bar of the notebook click on "File" and then "Open". Go to the "/output" directory to see all the saved images. Open "generated_image" to see the generated image! :)
You should see something like the image presented below on the right:
<img src="images/louvre_generated.png" style="width:800px;height:300px;">
We didn't want you to wait too long to see an initial result, and so set the hyperparameters accordingly. To get the best-looking results, run the optimization algorithm longer (and perhaps with a smaller learning rate). After completing and submitting this assignment, we encourage you to come back and play more with this notebook, and see if you can generate even better looking images.
Here are few other examples:
- The beautiful ruins of the ancient city of Persepolis (Iran) with the style of Van Gogh (The Starry Night)
<img src="images/perspolis_vangogh.png" style="width:750px;height:300px;">
- The tomb of Cyrus the Great in Pasargadae with the style of a ceramic Kashi tile from Isfahan.
<img src="images/pasargad_kashi.png" style="width:750px;height:300px;">
- A scientific study of a turbulent fluid with the style of an abstract blue fluid painting.
<img src="images/circle_abstract.png" style="width:750px;height:300px;">
## 5 - Test with your own image (Optional/Ungraded)
Finally, you can also rerun the algorithm on your own images!
To do so, go back to part 4 and change the content image and style image with your own pictures. In detail, here's what you should do:
1. Click on "File -> Open" in the upper tab of the notebook
2. Go to "/images" and upload your images (requirement: WIDTH = 300, HEIGHT = 225), renaming them, for example, "my_content.jpg" and "my_style.jpg".
3. Change the code in part (3.4) from :
```python
content_image = scipy.misc.imread("images/louvre.jpg")
style_image = scipy.misc.imread("images/claude-monet.jpg")
```
to:
```python
content_image = scipy.misc.imread("images/my_content.jpg")
style_image = scipy.misc.imread("images/my_style.jpg")
```
4. Rerun the cells (you may need to restart the Kernel in the upper tab of the notebook).
You can also tune your hyperparameters:
- Which layers are responsible for representing the style? STYLE_LAYERS
- How many iterations do you want to run the algorithm? num_iterations
- What is the relative weighting between content and style? alpha/beta
## 6 - Conclusion
Great job on completing this assignment! You are now able to use Neural Style Transfer to generate artistic images. This is also your first time building a model in which the optimization algorithm updates the pixel values rather than the neural network's parameters. Deep learning has many different types of models and this is only one of them!
<font color='blue'>
What you should remember:
- Neural Style Transfer is an algorithm that given a content image C and a style image S can generate an artistic image
- It uses representations (hidden layer activations) based on a pretrained ConvNet.
- The content cost function is computed using one hidden layer's activations.
- The style cost function for one layer is computed using the Gram matrix of that layer's activations. The overall style cost function is obtained using several hidden layers.
- Optimizing the total cost function results in synthesizing new images.
</font>
This was the final programming exercise of this course. Congratulations, you've finished all the programming exercises of this course on Convolutional Networks! We hope to also see you in Course 5, on Sequence models!
### References:
The Neural Style Transfer algorithm is due to Gatys et al. (2015). Harish Narayanan and GitHub user "log0" also have highly readable write-ups from which we drew inspiration. The pre-trained network used in this implementation is a VGG network, due to Simonyan and Zisserman (2015). Pre-trained weights came from the work of the MatConvNet team.
- Leon A. Gatys, Alexander S. Ecker, Matthias Bethge, (2015). A Neural Algorithm of Artistic Style (https://arxiv.org/abs/1508.06576)
- Harish Narayanan, Convolutional neural networks for artistic style transfer. https://harishnarayanan.org/writing/artistic-style-transfer/
- Log0, TensorFlow Implementation of "A Neural Algorithm of Artistic Style". http://www.chioka.in/tensorflow-implementation-neural-algorithm-of-artistic-style
- Karen Simonyan and Andrew Zisserman (2015). Very deep convolutional networks for large-scale image recognition (https://arxiv.org/pdf/1409.1556.pdf)
- MatConvNet. http://www.vlfeat.org/matconvnet/pretrained/
## Evaluate CNTK Fast-RCNN model directly from python
This notebook demonstrates how to evaluate a single image using a CNTK Fast-RCNN model.
For a full description of the model and the algorithm, please see the following <a href="https://docs.microsoft.com/en-us/cognitive-toolkit/Object-Detection-using-Fast-R-CNN" target="_blank">tutorial</a>.
Below, you will see sample code for:
1. Preparing the input data for the network (including image size adjustments)
2. Evaluation of the input data using the model
3. Processing the evaluation result and presenting the selected regions back on the image.
<b>Important</b>: Before running this notebook, please make sure that:
<ol>
<li>You have version >= 2.0 RC 1 of CNTK installed. Installation instructions are available <a href="https://docs.microsoft.com/en-us/cognitive-toolkit/Setup-CNTK-on-your-machine" target="_blank">here</a>.
<li>This notebook uses the CNTK python APIs and should be run from the CNTK python environment.</li>
<li>OpenCV and the other required python packages for the Fast-RCNN scenario are installed. Please follow the instructions <a href="https://docs.microsoft.com/en-us/cognitive-toolkit/Object-Detection-using-Fast-R-CNN#setup" target="_blank">in here</a> to install the required packages.
</ol>
##### 1. Download the sample dataset and make sure that the model exists
First things first: we download the sample Grocery dataset (if it's not already there), and we also make sure that the Fast-RCNN model file exists. The script uses your locally trained model if one is available; otherwise it downloads and uses the pre-trained model.
In case we run inside the CNTK test environment, the model and data are copied from the test data directory.
We also set the device to cpu / gpu for the test environment. If you have both CPU and GPU on your machine, you can optionally switch the devices. By default, we choose the best available device.
```
%matplotlib inline
# the above line enable us to draw the images inside the notebooks
import os
import sys
from os import path
import cntk
# Check for an environment variable defined in CNTK's test infrastructure
def is_test(): return 'CNTK_EXTERNAL_TESTDATA_SOURCE_DIRECTORY' in os.environ
# Select the right target device when this notebook is being tested
# Currently supported only for GPU
# Setup data environment for pre-built data sources for testing
if is_test():
if 'TEST_DEVICE' in os.environ:
if os.environ['TEST_DEVICE'] == 'cpu':
cntk.device.try_set_default_device(cntk.device.cpu())
else:
cntk.device.try_set_default_device(cntk.device.gpu(0))
sys.path.append(os.path.join(*"../../../../Tests/EndToEndTests/CNTKv2Python/Examples".split("/")))
import prepare_test_data as T
T.prepare_Grocery_data()
T.prepare_fastrcnn_grocery_100_model()
#Make sure the grocery dataset is installed
sys.path.append('../../DataSets/Grocery')
from install_grocery import download_grocery_data
download_grocery_data()
# Make sure the FRCNN model exists - check if the model was trained and exists, if not - download the existing model
sys.path.append('../../PretrainedModels')
from models_util import download_model_by_name
download_model_by_name("Fast-RCNN_grocery100")
model_path = '../../PretrainedModels/Fast-RCNN_grocery100.model'
```
### 3. Load the model and prepare it for evaluation
As a first step for using the Fast-RCNN model, we load the trained model file.
The trained model accepts 3 inputs: The image data, the bounding box (region of interest, or ROI) proposals and the ground truth labels of the ROIs. Since we are evaluating a new image - we probably don't have the ground truth labels for the image, hence - we need to adjust the network to accept only the image and the ROIs as input.
In order to do that we use the CNTK APIs to clone the network and change its input nodes.
More information and examples regarding cloning nodes of a network are available in the <a href="https://docs.microsoft.com/en-us/cognitive-toolkit/Build-your-own-image-classifier-using-Transfer-Learning" target="_blank">Transfer Learning</a> tutorial.
```
from cntk import load_model
from cntk import placeholder
from cntk.logging.graph import find_by_name, get_node_outputs
from cntk.ops import combine
from cntk.ops.sequence import input_variable
from cntk.ops.functions import CloneMethod
# load the trained model
trained_frcnn_model = load_model(model_path)
# find the original features and rois input nodes
features_node = find_by_name(trained_frcnn_model, "features")
rois_node = find_by_name(trained_frcnn_model, "rois")
# find the output "z" node
z_node = find_by_name(trained_frcnn_model, 'z')
# define new input nodes for the features (image) and rois
image_input = input_variable(features_node.shape, name='features')
roi_input = input_variable(rois_node.shape, name='rois')
# Clone the desired layers with fixed weights and place holder for the new input nodes
cloned_nodes = combine([z_node.owner]).clone(
CloneMethod.freeze,
{features_node: placeholder(name='features'), rois_node: placeholder(name='rois')})
# apply the cloned nodes to the input nodes
frcnn_model = cloned_nodes(image_input, roi_input)
print("Fast-RCNN Grocery model loaded successfully!")
```
### 4. Load an image and convert it to the network format
Next, we load an image from the test set using OpenCV and resize it according to the network input dimensions (which are set when the network is trained).
When resizing, we preserve scale and pad the border areas with a constant value (114), which is later used for normalization by the network.
```
import cv2
import numpy as np
import matplotlib.pyplot as plt
image_height = 1000
image_width = 1000
def resize_and_pad(img, width, height, pad_value=114):
# port of the c++ code from CNTK: https://github.com/Microsoft/CNTK/blob/f686879b654285d06d75c69ee266e9d4b7b87bc4/Source/Readers/ImageReader/ImageTransformers.cpp#L316
img_width = len(img[0])
img_height = len(img)
scale_w = img_width > img_height
target_w = width
target_h = height
if scale_w:
target_h = int(np.round(img_height * float(width) / float(img_width)))
else:
target_w = int(np.round(img_width * float(height) / float(img_height)))
resized = cv2.resize(img, (target_w, target_h), 0, 0, interpolation=cv2.INTER_NEAREST)
top = int(max(0, np.round((height - target_h) / 2)))
left = int(max(0, np.round((width - target_w) / 2)))
bottom = height - top - target_h
right = width - left - target_w
resized_with_pad = cv2.copyMakeBorder(resized, top, bottom, left, right,
cv2.BORDER_CONSTANT, value=[pad_value, pad_value, pad_value])
# transpose(2,0,1) converts the image from HWC to the CHW format which CNTK accepts
model_arg_rep = np.ascontiguousarray(np.array(resized_with_pad, dtype=np.float32).transpose(2,0,1))
return resized_with_pad, model_arg_rep
def load_image_and_scale(image_path, width, height, pad_value=114):
img = cv2.imread(image_path)
return resize_and_pad(img, width, height, pad_value), img
test_image_path = r"../../DataSets/Grocery/testImages/WIN_20160803_11_28_42_Pro.jpg"
(test_img, test_img_model_arg), original_img = load_image_and_scale(test_image_path, image_width, image_height)
plt.imshow(cv2.cvtColor(test_img, cv2.COLOR_BGR2RGB))
plt.axis("off")
```
### 5. Generate ROIs for testing
Now, we produce region-of-interest (ROI) proposals using selective search and grid methods, following the same approach as the script `A1_GenerateInputROIs.py`.
Each ROI is in the format [x, y, w, h], where the coordinates are real numbers in the range 0 to 1, scaled according to the resized and padded image.
The ROIs array is padded with [0, 0, 0, 0] regions at the end to match the fixed number of ROIs the model expects as input (`cntk_nrRois`, which is 100 for this model).
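The conversion and padding just described can be sketched in isolation (a simplified illustration; the hypothetical `normalize_and_pad_rois` below ignores the pad offsets that the full implementation computes via `roiTransformPadScaleParams`):

```python
import numpy as np

def normalize_and_pad_rois(rois_xyxy, img_w, img_h, n_rois=100):
    """Convert [x1, y1, x2, y2] pixel boxes to relative [x, y, w, h] and zero-pad."""
    out = []
    for x1, y1, x2, y2 in rois_xyxy:
        out.append([x1 / img_w, y1 / img_h, (x2 - x1) / img_w, (y2 - y1) / img_h])
    # Zero-pad (or truncate) to the fixed ROI count the model expects
    out += [[0.0, 0.0, 0.0, 0.0]] * max(0, n_rois - len(out))
    return np.array(out[:n_rois])

rois = normalize_and_pad_rois([(100, 100, 300, 200)], img_w=1000, img_h=1000)
print(rois.shape)  # (100, 4)
print(rois[0])     # [0.1 0.1 0.2 0.1]
```

The zero rows at the end are the padding regions; the evaluation code later trims them back off using `roi_padding_index`.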
```
# Parameters taken from PARAMETERS.py
# ROI generation
roi_minDimRel = 0.04
roi_maxDimRel = 0.4
roi_minNrPixelsRel = 2 * roi_minDimRel * roi_minDimRel
roi_maxNrPixelsRel = 0.33 * roi_maxDimRel * roi_maxDimRel
roi_maxAspectRatio = 4.0 # maximum aspect ratio of a ROI, vertically and horizontally
roi_maxImgDim = 200 # image size used for ROI generation
ss_scale = 100 # selective search ROIs: parameter controlling cluster size for segmentation
ss_sigma = 1.2 # selective search ROIs: width of Gaussian kernel for segmentation
ss_minSize = 20 # selective search ROIs: minimum component size for segmentation
grid_nrScales = 7 # uniform grid ROIs: number of iterations from largest possible ROI to smaller ROIs
grid_aspectRatios = [1.0, 2.0, 0.5] # uniform grid ROIs: aspect ratio of ROIs
cntk_nrRois = 100 # how many ROIs to zero-pad
cntk_padWidth = 1000
cntk_padHeight = 1000
from cntk_helpers import imArrayWidthHeight, getSelectiveSearchRois, imresizeMaxDim
from cntk_helpers import getGridRois, filterRois, roiTransformPadScaleParams, roiTransformPadScale
def get_rois_for_image(img, use_selective_search=True, use_grid_rois=True):
roi_minDim = roi_minDimRel * roi_maxImgDim
roi_maxDim = roi_maxDimRel * roi_maxImgDim
roi_minNrPixels = roi_minNrPixelsRel * roi_maxImgDim*roi_maxImgDim
roi_maxNrPixels = roi_maxNrPixelsRel * roi_maxImgDim*roi_maxImgDim
imgOrig = img.copy()
# get rois
if use_selective_search:
print ("Calling selective search..")
rects, scaled_img, scale = getSelectiveSearchRois(imgOrig, ss_scale, ss_sigma, ss_minSize, roi_maxImgDim) #interpolation=cv2.INTER_AREA
print ("Number of rois detected using selective search: " + str(len(rects)))
else:
rects = []
scaled_img, scale = imresizeMaxDim(imgOrig, roi_maxImgDim, boUpscale=True, interpolation=cv2.INTER_AREA)
imgWidth, imgHeight = imArrayWidthHeight(scaled_img)
# add grid rois
if use_grid_rois:
rectsGrid = getGridRois(imgWidth, imgHeight, grid_nrScales, grid_aspectRatios)
print ("Number of rois on grid added: " + str(len(rectsGrid)))
rects += rectsGrid
# run filter
print ("Number of rectangles before filtering = " + str(len(rects)))
rois = filterRois(rects, imgWidth, imgHeight, roi_minNrPixels, roi_maxNrPixels, roi_minDim, roi_maxDim, roi_maxAspectRatio)
if len(rois) == 0: #make sure at least one roi returned per image
rois = [[5, 5, imgWidth-5, imgHeight-5]]
print ("Number of rectangles after filtering = " + str(len(rois)))
# scale rois back up to the original image size
# note: each rectangle is in original image format with [x,y,x2,y2]
original_rois = np.int32(np.array(rois) / scale)
img_width = len(img[0])
img_height = len(img)
# all rois need to be scaled + padded to cntk input image size
targetw, targeth, w_offset, h_offset, scale = roiTransformPadScaleParams(img_width, img_height,
cntk_padWidth, cntk_padHeight)
rois = []
for original_roi in original_rois:
x, y, x2, y2 = roiTransformPadScale(original_roi, w_offset, h_offset, scale)
xrel = float(x) / (1.0 * targetw)
yrel = float(y) / (1.0 * targeth)
wrel = float(x2 - x) / (1.0 * targetw)
hrel = float(y2 - y) / (1.0 * targeth)
rois.append([xrel, yrel, wrel, hrel])
# pad rois if needed:
if len(rois) < cntk_nrRois:
rois += [[0, 0, 0, 0]] * (cntk_nrRois - len(rois))
elif len(rois) > cntk_nrRois:
rois = rois[:cntk_nrRois]
return np.array(rois), original_rois
test_rois, original_rois = get_rois_for_image(original_img)
roi_padding_index = len(original_rois)
print("Number of rois for evaluation:", len(test_rois))
```
### 6. Evaluate the sample
Here, we prepare the data in CNTK's expected arguments format and run it through the model using the model's **eval** method.
We then process the result by trimming off the padded ROIs and calculating the predicted labels and their probabilities.
```
from cntk_helpers import softmax2D
# a dummy variable for labels that would be given as an input to the network but is ignored
dummy_labels = np.zeros((2000,17))
#Index the names of the arguments so we can get them by name
args_indices = {}
for i,arg in enumerate(frcnn_model.arguments):
args_indices[arg.name] = i
# prepare the arguments
arguments = {
frcnn_model.arguments[args_indices['features']]: [test_img_model_arg],
frcnn_model.arguments[args_indices['rois']]: [test_rois],
}
# run it through the model
output = frcnn_model.eval(arguments)
# we now extract the "z" values from the output, which are the values of the layer that is just before
# the softmax layer.
# we take just the relevant part from that array
rois_values = output[0][0][:roi_padding_index]
# get the prediction for each roi by taking the index with the maximal value in each row
rois_labels_predictions = np.argmax(rois_values, axis=1)
# calculate the probabilities using softmax
rois_probs = softmax2D(rois_values)
# print the number of ROIs that were detected as non-background
print("Number of detections: %d"%np.sum(rois_labels_predictions > 0))
```
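The `softmax2D` helper used above can be approximated by a row-wise softmax, normalizing each ROI's class scores independently (a sketch under that assumption; see `cntk_helpers` for the actual implementation):

```python
import numpy as np

def softmax2d_sketch(z):
    """Row-wise softmax over a (n_rois, n_classes) score matrix."""
    # Subtract the row max for numerical stability, then normalize each row
    e = np.exp(z - z.max(axis=1, keepdims=True))
    return e / e.sum(axis=1, keepdims=True)

scores = np.array([[1.0, 2.0, 3.0], [0.0, 0.0, 0.0]])
probs = softmax2d_sketch(scores)
print(probs.sum(axis=1))  # each row sums to 1
```

Each row of the result is a probability distribution over the classes for one ROI, which is what the NMS step below uses as confidence scores.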
### 7. Merge overlapping regions using Non-Maxima-Suppression
Before inspecting the predictions, we merge overlapping detected regions using the Non-Maxima-Suppression (NMS) algorithm implemented in the `cntk_helpers` module.
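For intuition, the core of non-maxima suppression can be sketched as a greedy IoU-based filter (a simplified illustration, not the exact `applyNonMaximaSuppression` implementation, which also takes the predicted labels into account):

```python
import numpy as np

def nms_sketch(boxes, scores, iou_threshold=0.1):
    """Greedy NMS on [x1, y1, x2, y2] boxes; returns indices of kept boxes."""
    order = np.argsort(scores)[::-1]  # highest score first
    keep = []
    while len(order) > 0:
        i = order[0]
        keep.append(i)
        rest = order[1:]
        # Intersection of the kept box with all remaining boxes
        xx1 = np.maximum(boxes[i, 0], boxes[rest, 0])
        yy1 = np.maximum(boxes[i, 1], boxes[rest, 1])
        xx2 = np.minimum(boxes[i, 2], boxes[rest, 2])
        yy2 = np.minimum(boxes[i, 3], boxes[rest, 3])
        inter = np.maximum(0, xx2 - xx1) * np.maximum(0, yy2 - yy1)
        area_i = (boxes[i, 2] - boxes[i, 0]) * (boxes[i, 3] - boxes[i, 1])
        area_r = (boxes[rest, 2] - boxes[rest, 0]) * (boxes[rest, 3] - boxes[rest, 1])
        iou = inter / (area_i + area_r - inter)
        # Drop remaining boxes that overlap the kept box above the threshold
        order = rest[iou <= iou_threshold]
    return keep

boxes = np.array([[0, 0, 10, 10], [1, 1, 10, 10], [50, 50, 60, 60]], dtype=float)
scores = np.array([0.9, 0.8, 0.7])
print(nms_sketch(boxes, scores))
```

With the low `nms_threshold = 0.1` used below, even moderately overlapping detections are merged into a single region.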
```
from cntk_helpers import applyNonMaximaSuppression
nms_threshold = 0.1
non_padded_rois = test_rois[:roi_padding_index]
max_probs = np.amax(rois_probs, axis=1).tolist()
rois_prediction_indices = applyNonMaximaSuppression(nms_threshold, rois_labels_predictions, max_probs, non_padded_rois)
print("Indices of selected regions:",rois_prediction_indices)
```
### 8. Visualize the results
As a final step, we use the OpenCV **rectangle** and **putText** methods in order to draw the selected regions on the original image alongside their corresponding predicted labels.
```
rois_with_prediction = test_rois[rois_prediction_indices]
rois_prediction_labels = rois_labels_predictions[rois_prediction_indices]
rois_predicion_scores = rois_values[rois_prediction_indices]
original_rois_predictions = original_rois[rois_prediction_indices]
# class names taken from PARAMETERS.py:
classes = ('__background__', # always index 0
'avocado', 'orange', 'butter', 'champagne', 'eggBox', 'gerkin', 'joghurt', 'ketchup',
'orangeJuice', 'onion', 'pepper', 'tomato', 'water', 'milk', 'tabasco', 'mustard')
original_img_cpy = original_img.copy()
for roi,label in zip(original_rois_predictions, rois_prediction_labels):
(x1,y1,x2,y2) = roi
cv2.rectangle(original_img_cpy, (x1, y1), (x2, y2), (0, 255, 0), 5)
cv2.putText(original_img_cpy,classes[label],(x1,y2 + 30), cv2.FONT_HERSHEY_DUPLEX, 2,(200,0,255),3,cv2.LINE_AA)
print("Evaluation result:")
plt.figure(figsize=(10, 10))
plt.imshow(cv2.cvtColor(original_img_cpy, cv2.COLOR_BGR2RGB), interpolation='nearest')
plt.axis("off")
```
# Predicting Review rating from review text
# <span style="color:dodgerblue"> Naive Bayes Classifier Using 5 Classes (1,2,3,4 and 5 Rating)</span>
```
%pylab inline
import warnings
warnings.filterwarnings('ignore')
from IPython.core.interactiveshell import InteractiveShell
InteractiveShell.ast_node_interactivity = "all"
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
import nltk
from nltk.corpus import stopwords
# Importing the reviews dataset
reviews_dataset = pd.read_csv('reviews_restaurants_text.csv')
# Creating X and Y for the classifier. X is the review text and Y is the rating
x = reviews_dataset['text']
y = reviews_dataset['stars']
# Text preprocessing
import string
def text_preprocessing(text):
    no_punctuation = [ch for ch in text if ch not in string.punctuation]
    no_punctuation = ''.join(no_punctuation)
    return [w for w in no_punctuation.split() if w.lower() not in stopwords.words('english')]
%%time
# Estimated time: 30 min
# Vectorization
# Converting each review into a vector using bag-of-words approach
from sklearn.feature_extraction.text import CountVectorizer
vector = CountVectorizer(analyzer=text_preprocessing).fit(x)
x = vector.transform(x)
# Splitting data into training and test set
from sklearn.model_selection import train_test_split
X_train, X_test, Y_train, Y_test = train_test_split(x, y, test_size=0.20, random_state=0, shuffle=False)
# Building a Multinomial Naive Bayes model and fitting it to our training set
from sklearn.naive_bayes import MultinomialNB
classifier = MultinomialNB()
classifier.fit(X_train, Y_train)
# Using our trained classifier to predict the ratings from text
# Testing our model on the test set
preds = classifier.predict(X_test)
print("Actual Ratings(Stars): ",end = "")
display(Y_test[:15])
print("Predicted Ratings: ",end = "")
print(preds[:15])
```
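To make the bag-of-words step above concrete, here is a tiny pure-Python sketch of what the vectorizer computes (toy sentences, illustrative only): build a vocabulary over the corpus, then represent each document as a vector of word counts.

```python
def bag_of_words(corpus):
    # Vocabulary: sorted set of all lowercase tokens in the corpus
    vocab = sorted({w for doc in corpus for w in doc.lower().split()})
    index = {w: i for i, w in enumerate(vocab)}
    vectors = []
    for doc in corpus:
        counts = [0] * len(vocab)
        for w in doc.lower().split():
            counts[index[w]] += 1
        vectors.append(counts)
    return vocab, vectors

vocab, vectors = bag_of_words(["great food great service", "bad service"])
print(vocab)    # ['bad', 'food', 'great', 'service']
print(vectors)  # [[0, 1, 2, 1], [1, 0, 0, 1]]
```

`CountVectorizer` does the same thing (plus our custom `text_preprocessing` analyzer) and returns a sparse matrix instead of plain lists.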
## Evaluating the model
## <span style="color:orangered"> Accuracy </span>
```
# Accuracy of the model
from sklearn.metrics import accuracy_score
accuracy_score(Y_test, preds)
```
## <span style="color:orangered"> Precision and Recall of the model</span>
```
from sklearn.metrics import precision_score
from sklearn.metrics import recall_score
print ('Precision: ' + str(precision_score(Y_test, preds, average='weighted')))
print ('Recall: ' + str(recall_score(Y_test,preds, average='weighted')))
```
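For intuition, the per-class precision and recall behind these weighted scores can be computed by hand on a toy example (hypothetical labels, not our test set):

```python
def precision_recall(y_true, y_pred, positive):
    # Precision: of the predictions for this class, how many were right.
    # Recall: of the true members of this class, how many we found.
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == p == positive)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t != positive and p == positive)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == positive and p != positive)
    return tp / (tp + fp), tp / (tp + fn)

y_true = [5, 5, 1, 1, 5]
y_pred = [5, 1, 1, 1, 5]
print(precision_recall(y_true, y_pred, positive=5))  # precision 1.0, recall 2/3
```

The `average='weighted'` option used above computes these per class and averages them, weighting each class by its support.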
## <span style="color:orangered"> Classification Report </span>
```
# Evaluating the model
from sklearn.metrics import confusion_matrix, classification_report
print(confusion_matrix(Y_test, preds))
print('\n')
print(classification_report(Y_test, preds))
```
## <span style="color:orangered">Confusion Matrix of the model</span>
```
# citation: http://scikit-learn.org/stable/auto_examples/model_selection/plot_confusion_matrix.html#sphx-glr-auto-examples-model-selection-plot-confusion-matrix-py
import itertools
def plot_confusion_matrix(cm, classes,
                          normalize=False,
                          title='Confusion matrix',
                          cmap=plt.cm.Blues):
    """
    This function prints and plots the confusion matrix.
    Normalization can be applied by setting `normalize=True`.
    """
    if normalize:
        cm = cm.astype('float') / cm.sum(axis=1)[:, np.newaxis]
        print("Normalized confusion matrix")
    else:
        print('Confusion matrix, without normalization')
    print(cm)

    plt.imshow(cm, interpolation='nearest', cmap=cmap)
    plt.title(title)
    plt.colorbar()
    tick_marks = np.arange(len(classes))
    plt.xticks(tick_marks, classes, rotation=45)
    plt.yticks(tick_marks, classes)

    fmt = '.2f' if normalize else 'd'
    thresh = cm.max() / 2.
    for i, j in itertools.product(range(cm.shape[0]), range(cm.shape[1])):
        plt.text(j, i, format(cm[i, j], fmt),
                 horizontalalignment="center",
                 color="white" if cm[i, j] > thresh else "black")

    plt.tight_layout()
    plt.ylabel('True label')
    plt.xlabel('Predicted label')
from sklearn import metrics
class_names = ['1','2','3','4','5']
# Compute confusion matrix
cnf_matrix = metrics.confusion_matrix(Y_test, preds)
np.set_printoptions(precision=2)
# Plot non-normalized confusion matrix
plt.figure()
plot_confusion_matrix(cnf_matrix, classes=class_names,
title='Confusion matrix, without normalization')
# Plot normalized confusion matrix
plt.figure()
plot_confusion_matrix(cnf_matrix, classes=class_names, normalize=True,
title='Normalized confusion matrix')
plt.show()
```
# <span style="color:dodgerblue"> Naive Bayes Classifier Using 2 Classes (1 and 5 Rating: Positive & Negative Reviews)</span>
```
# Importing the datasets
reviews = pd.read_csv('reviews_restaurants_text.csv')
reviews['text'] = reviews['text'].str[2:-2]
# Reducing the dataset to 2 classes, i.e. 1 and 5 star ratings
reviews.loc[reviews.stars == 3, 'stars'] = 1
reviews.loc[reviews.stars == 2, 'stars'] = 1
reviews.loc[reviews.stars == 4, 'stars'] = 5
# Undersampling the dataset to get a balanced dataset
review1 = reviews[reviews['stars'] == 1]
review5 = reviews[reviews['stars'] == 5][0:34062]
frames = [review1, review5]
reviews = pd.concat(frames)
# Creating X and Y for the classifier. X is the review text and Y is the rating
x2 = reviews['text']
y2 = reviews['stars']
# Vectorization
# Converting each review into a vector using bag-of-words approach
from sklearn.feature_extraction.text import CountVectorizer
vector2 = CountVectorizer(analyzer=text_preprocessing).fit(x2)
x2 = vector2.transform(x2)
# Splitting data into training and test set
from sklearn.model_selection import train_test_split
X2_train, X2_test, Y2_train, Y2_test = train_test_split(x2, y2, test_size=0.20, random_state=0)
# Building a Multinomial Naive Bayes model and fitting it to our training set
from sklearn.naive_bayes import MultinomialNB
classifier2 = MultinomialNB()
classifier2.fit(X2_train, Y2_train)
# Testing our model on the test set
Y2_pred = classifier2.predict(X2_test)
```
## <span style="color:orangered"> Classification Report </span>
```
# Evaluating the model
from sklearn.metrics import confusion_matrix, classification_report
print(confusion_matrix(Y2_test, Y2_pred))
print('\n')
print(classification_report(Y2_test, Y2_pred))
```
## <span style="color:orangered"> Accuracy of the model </span>
```
# Accuracy of the model
from sklearn.metrics import accuracy_score
accuracy_score(Y2_test, Y2_pred)
```
## <span style="color:orangered"> Precision and Recall of the model</span>
```
from sklearn.metrics import precision_score
from sklearn.metrics import recall_score
print ('Precision: ' + str(precision_score(Y2_test, Y2_pred, average='weighted')))
print ('Recall: ' + str(recall_score(Y2_test, Y2_pred, average='weighted')))
```
## <span style="color:orangered"> Confusion Matrix of the model </span>
```
class_names = ['Negative','Positive']
# Compute confusion matrix
cnf_matrix = metrics.confusion_matrix(Y2_test, Y2_pred)
np.set_printoptions(precision=2)
# Plot non-normalized confusion matrix
plt.figure()
plot_confusion_matrix(cnf_matrix, classes=class_names,
title='Confusion matrix, without normalization')
# Plot normalized confusion matrix
plt.figure()
plot_confusion_matrix(cnf_matrix, classes=class_names, normalize=True,
title='Normalized confusion matrix')
plt.show()
```
# The Exact Cover Problem
We first explain the Exact Cover problem.
Consider a set U of natural numbers, together with several groups $V_{1}, V_{2}, \ldots, V_{N}$ of those numbers. A number may belong to more than one group. The Exact Cover problem asks us to pick some of the groups $V_{i}$ such that no natural number appears more than once among the picked groups and, taken together, they contain exactly the numbers in U.
The variant that additionally minimizes the number of picked groups is called Smallest Exact Cover.
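A tiny brute-force sketch makes the definition concrete (toy instance `U_demo`/`V_demo` chosen here for illustration; not part of the QUBO pipeline below):

```python
from itertools import combinations, chain

U_demo = {1, 2, 3, 4, 5}
V_demo = [[1, 2], [3, 4], [5], [1, 4, 5]]

def exact_covers(U, V):
    # Yield every subset of group indices that covers U with no duplicates
    for r in range(1, len(V) + 1):
        for combo in combinations(range(len(V)), r):
            picked = list(chain.from_iterable(V[i] for i in combo))
            if len(picked) == len(set(picked)) and set(picked) == U:
                yield combo

print(list(exact_covers(U_demo, V_demo)))  # [(0, 1, 2)]
```

Here only the combination of groups {1,2}, {3,4} and {5} covers U exactly; {1,4,5} cannot participate because it would duplicate elements.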
## Setup
```
%matplotlib inline
import numpy as np
import matplotlib.pyplot as plt
import blueqat.wq as wq
from blueqat import vqe
```
## Building the QUBO
We construct the QUBO matrix for the problem we want to solve.
Let the set of natural numbers be $U = \{1, \ldots, n\}$ and the groups be $V_{i} \subseteq U(i=1, \ldots, N)$. Let $x_{i} \in \{1, 0\}$ indicate whether the i-th group is picked: 1 if picked, 0 otherwise. We now look for a cost function $E_{A}$ that is minimized when each natural number (call it α) is contained in exactly one of the picked groups.
If we define
$E_{A} = A \sum _ { \alpha = 1 } ^ { n } \left( 1 - \sum _ { i : \alpha \in V _ { i } } x _ { i } \right) ^ { 2 }$
then $E_{A} = 0$ exactly when, for each natural number α, exactly one group containing it is picked.
We now convert this into QUBO form. First, expand the square:
$E_{A} = A \sum _ { \alpha = 1 } ^ { n } \{ 1 - 2\sum _ { i : \alpha \in V _ { i } } x _ { i } + ( \sum _ { i : \alpha \in V _ { i } } x _ { i } ) ^ { 2 } \} $
Since we are minimizing $E_{A}$, the constant first term inside the braces can be ignored.
Using $x_{i} \in \{1,0\}$ (so $x_{i}^{2} = x_{i}$), the second term can be rewritten as
$ - 2\sum _ { i : \alpha \in V _ { i } } x _ { i } = - 2\sum _ { i = j, i : \alpha \in V _ { i }, j : \alpha \in V _ { j } } x _ { i } x _ {j}$
Splitting the third term into the cases i = j and $i \neq j$ gives
$ ( \sum _ { i : \alpha \in V _ { i } } x _ { i } ) ^ { 2 } = \sum _ { i = j, i : \alpha \in V _ { i }, j : \alpha \in V _ { j } } x _ { i } x _ {j} + 2 \sum _ { i \neq j, i : \alpha \in V _ { i }, j : \alpha \in V _ { j } } x _ { i } x _ {j} $
Putting this together,
$E_{A} = A \sum _ { \alpha = 1 } ^ { n } ( - \sum _ { i = j, i : \alpha \in V _ { i }, j : \alpha \in V _ { j } } x _ { i } x _ {j} + 2 \sum _ { i \neq j, i : \alpha \in V _ { i }, j : \alpha \in V _ { j } } x _ { i } x _ {j} )$
which is in QUBO form.
```
U = [1,2,3,4,5,6,7,8,9,10]
A = 1
def get_qubo(V):
    Q = np.zeros((len(V), len(V)))
    for i in range(len(V)):
        for j in range(len(V)):
            for k in range(len(U)):
                alpha = U[k]
                in_Vi = V[i].count(alpha) > 0  # whether alpha is in V[i]
                in_Vj = V[j].count(alpha) > 0  # whether alpha is in V[j]
                if i == j and in_Vi:
                    Q[i][j] += -1
                elif i < j and in_Vi and in_Vj:
                    Q[i][j] += 2
    return Q * A
```
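Before running QAOA, we can sanity-check the QUBO construction by brute force on a small instance (a standalone check with its own toy `U_toy`/`V_toy` and a parameterized copy of the construction, so it does not disturb the notebook's `U` and `V`):

```python
import itertools
import numpy as np

U_toy = [1, 2, 3, 4]
V_toy = [[1, 2], [3, 4], [1, 3], [2, 4]]

def get_qubo_toy(V, U, A=1):
    # Same construction as get_qubo above, parameterized by U
    Q = np.zeros((len(V), len(V)))
    for i in range(len(V)):
        for j in range(len(V)):
            for alpha in U:
                in_Vi = alpha in V[i]
                in_Vj = alpha in V[j]
                if i == j and in_Vi:
                    Q[i][j] += -1
                elif i < j and in_Vi and in_Vj:
                    Q[i][j] += 2
    return Q * A

Q_toy = get_qubo_toy(V_toy, U_toy)
# Enumerate all bitstrings and pick the minimum-energy assignment
best = min(itertools.product([0, 1], repeat=len(V_toy)),
           key=lambda x: np.array(x) @ Q_toy @ np.array(x))
print(best)  # (0, 0, 1, 1): picking [1, 3] and [2, 4] covers U_toy exactly
```

The minimum-energy bitstring indeed corresponds to an exact cover (here the tie between {[1,2],[3,4]} and {[1,3],[2,4]} is broken by enumeration order).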
We also define a function to display the results.
```
def display_answer(list_x, energies=None, show_graph=False):
    print("Result x:", list_x)
    text = ""
    for i in range(len(list_x)):
        if list_x[i]:
            text += str(V[i])
    print("Picked {} group(s): {}".format(sum(list_x), text))
    if energies is not None:
        print("Energy:", energies[-1])
    if show_graph:
        plt.plot(energies)
        plt.show()
```
Running the following, we can see that the correct answer is obtained.
```
V = [ [1,2], [3,4,5,6], [7,8,9,10], [1,3,5], [10] ]
qubo = get_qubo(V)
result = vqe.Vqe(vqe.QaoaAnsatz(wq.pauli(qubo), step=4)).run()
answer = result.most_common(12)
print(answer)
display_answer(answer[0][0])
```
## Making V a bit more complex
We make V a bit more complex (adding two groups) and run again.
```
V = [ [1,2], [3,4,5,6], [7,8,9,10], [1,3,5], [10], [7,9], [2,4,6,8] ]
qubo = get_qubo(V)
result = vqe.Vqe(vqe.QaoaAnsatz(wq.pauli(qubo), step=2)).run()
answer = result.most_common(12)
print(answer)
display_answer(answer[0][0])
```
We can see that the correct answer is obtained.
### An adversarial case
Finally, we try an adversarial case.
The correct answer is to pick {1,2}{3}{4}{5}{6}{7}{8}{9}{10}.
Looking at the results, the correct answer is chosen most of the time, but occasionally a slightly higher-energy incorrect answer is selected instead.
```
V = [ [1,2], [3], [4], [5], [6], [7], [8], [9], [10], [2,3,4,5,6,7,8,9,10]]
for i in range(5):
    print("--- Run {}".format(i+1))
    qubo = get_qubo(V)
    result = vqe.Vqe(vqe.QaoaAnsatz(wq.pauli(qubo), step=6)).run()
    answer = result.most_common(12)
    display_answer(answer[0][0])
```
# Preprocessing
Source: https://www.kaggle.com/c/GiveMeSomeCredit/
```
import os
import numpy as np
import pandas as pd
import config as cfg
from sklearn.model_selection import train_test_split
from imblearn.under_sampling import RandomUnderSampler
from pandas_profiling import ProfileReport
pd.set_option("display.max_columns", None)
```
### Train/test split
```
df = pd.read_csv(os.path.join("Data", "data_original", "cs-training.csv")).drop(['Unnamed: 0'], axis=1)
df["BAD"] = df["SeriousDlqin2yrs"]
df = df.drop(["SeriousDlqin2yrs"], axis=1)
df
print("Bad rate:", df["BAD"].mean())
X = df.drop(['BAD'], axis=1)
y = df['BAD']
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=cfg.TEST_SIZE, random_state=cfg.SEED, stratify=y)
X_train = pd.get_dummies(X_train)
X_test = pd.get_dummies(X_test)
rus = RandomUnderSampler(sampling_strategy=cfg.SAMPLING_STRATEGY)
X_train, y_train = rus.fit_resample(X_train, y_train)
X_train.to_csv(os.path.join("Data", "data_preprocessed", "X_train.csv"), index=False)
X_test.to_csv(os.path.join("Data", "data_preprocessed", "X_test.csv"), index=False)
y_train.to_csv(os.path.join("Data", "data_preprocessed", "y_train.csv"), index=False)
y_test.to_csv(os.path.join("Data", "data_preprocessed", "y_test.csv"), index=False)
ProfileReport(X_train, minimal=True).to_file(os.path.join("Results", "X_train.html"))
print("X_train:", X_train.shape)
print("X_test:", X_test.shape)
print("Bad rate:", y_train.mean())
```
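What `RandomUnderSampler` does can be sketched in a few lines of NumPy (an illustrative stand-in, not the imblearn implementation): keep every minority-class row and randomly drop majority rows until the classes match.

```python
import numpy as np

def undersample(X, y, random_state=0):
    rng = np.random.default_rng(random_state)
    classes, counts = np.unique(y, return_counts=True)
    n_min = counts.min()
    # For each class, sample without replacement down to the minority size
    keep = np.concatenate([
        rng.choice(np.where(y == c)[0], size=n_min, replace=False)
        for c in classes
    ])
    return X[keep], y[keep]

X_toy = np.arange(20).reshape(10, 2)
y_toy = np.array([0] * 8 + [1] * 2)
X_bal, y_bal = undersample(X_toy, y_toy)
print(np.bincount(y_bal))  # [2 2]
```

The `sampling_strategy` parameter used above generalizes this to a target ratio rather than exact equality.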
### Train/test split with binning
```
df_binned = df.copy()
df_binned['age'] = pd.qcut(df['age'], 10)
df_binned['RevolvingUtilizationOfUnsecuredLines'] = pd.qcut(df['RevolvingUtilizationOfUnsecuredLines'], 10)
df_binned['NumberOfTime30-59DaysPastDueNotWorse'] = pd.cut(df_binned['NumberOfTime30-59DaysPastDueNotWorse'], bins=[0, 1, 100], right=False)
df_binned['DebtRatio'] = pd.qcut(df_binned['DebtRatio'], 10)
df_binned['MonthlyIncome'] = pd.qcut(df_binned['MonthlyIncome'], 10)
df_binned['NumberOfOpenCreditLinesAndLoans'] = pd.qcut(df_binned['NumberOfOpenCreditLinesAndLoans'], 10)
df_binned['NumberOfTimes90DaysLate'] = pd.cut(df_binned['NumberOfTimes90DaysLate'], bins=[0, 1, 100], right=False)
df_binned['NumberRealEstateLoansOrLines'] = pd.cut(df_binned['NumberRealEstateLoansOrLines'], bins=[0, 1, 2, 100], right=False)
df_binned['NumberOfTime60-89DaysPastDueNotWorse'] = pd.cut(df_binned['NumberOfTime60-89DaysPastDueNotWorse'], bins=[0, 1, 100], right=False)
df_binned['NumberOfDependents'] = pd.cut(df_binned['NumberOfDependents'], bins=[0, 1, 2, 3, 100], right=False)
df_binned
print("Bad rate:", df_binned["BAD"].mean())
X = df_binned.drop(['BAD'], axis=1)
y = df_binned['BAD']
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=cfg.TEST_SIZE, random_state=cfg.SEED, stratify=y)
rus = RandomUnderSampler(sampling_strategy=cfg.SAMPLING_STRATEGY)
X_train, y_train = rus.fit_resample(X_train, y_train)
X_train.to_csv(os.path.join("Data", "data_preprocessed_binned", "X_train.csv"), index=False)
X_test.to_csv(os.path.join("Data", "data_preprocessed_binned", "X_test.csv"), index=False)
y_train.to_csv(os.path.join("Data", "data_preprocessed_binned", "y_train.csv"), index=False)
y_test.to_csv(os.path.join("Data", "data_preprocessed_binned", "y_test.csv"), index=False)
print("X_train:", X_train.shape)
print("X_test:", X_test.shape)
print("Bad rate:", y_train.mean())
```
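As a quick illustration of the `pd.qcut`/`pd.cut` binning used above (toy data, not the credit dataset): `qcut` builds equal-frequency bins from quantiles, while `cut` uses explicit edges, here left-closed as in the notebook.

```python
import pandas as pd

s = pd.Series([1, 2, 3, 4, 5, 6, 7, 8, 9, 10])

# qcut: equal-frequency (quantile) bins
q = pd.qcut(s, 2)
print(q.value_counts(sort=False).tolist())  # [5, 5]

# cut: explicit bin edges, right=False makes intervals left-closed
c = pd.cut(s, bins=[0, 5, 100], right=False)
print(c.value_counts(sort=False).tolist())  # [4, 6]
```

With `right=False`, the value 5 falls into [5, 100) rather than [0, 5), which matters for the delinquency-count features above where 0 and "1 or more" are deliberately separated.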
# Bayesian optimization
## Introduction
Many optimization problems in machine learning are black box optimization problems where the objective function $f(\mathbf{x})$ is a black box function<sup>[1][2]</sup>. We do not have an analytical expression for $f$ nor do we know its derivatives. Evaluation of the function is restricted to sampling at a point $\mathbf{x}$ and getting a possibly noisy response.
If $f$ is cheap to evaluate we could sample at many points e.g. via grid search, random search or numeric gradient estimation. However, if function evaluation is expensive e.g. tuning hyperparameters of a deep neural network, probe drilling for oil at given geographic coordinates or evaluating the effectiveness of a drug candidate taken from a chemical search space then it is important to minimize the number of samples drawn from the black box function $f$.
This is the domain where Bayesian optimization techniques are most useful. They attempt to find the global optimum in a minimum number of steps. Bayesian optimization incorporates prior belief about $f$ and updates the prior with samples drawn from $f$ to get a posterior that better approximates $f$. The model used for approximating the objective function is called the *surrogate model*. Bayesian optimization also uses an *acquisition function* that directs sampling to areas where an improvement over the current best observation is likely.
### Surrogate model
A popular surrogate model for Bayesian optimization are [Gaussian processes](https://en.wikipedia.org/wiki/Gaussian_process) (GPs). I wrote about Gaussian processes in a [previous post](https://krasserm.github.io/2018/03/19/gaussian-processes/). If you are not familiar with GPs I recommend reading it first. GPs define a prior over functions and we can use them to incorporate prior beliefs about the objective function (smoothness, ...). The GP posterior is cheap to evaluate and is used to propose points in the search space where sampling is likely to yield an improvement.
### Acquisition functions
Proposing sampling points in the search space is done by acquisition functions. They trade off exploitation and exploration. Exploitation means sampling where the surrogate model predicts a high objective and exploration means sampling at locations where the prediction uncertainty is high. Both correspond to high acquisition function values and the goal is to maximize the acquisition function to determine the next sampling point.
More formally, the objective function $f$ will be sampled at $\mathbf{x}_t = \mathrm{argmax}_{\mathbf{x}} u(\mathbf{x} \lvert \mathcal{D}_{1:t-1})$ where $u$ is the acquisition function and $\mathcal{D}_{1:t-1} = \{(\mathbf{x}_1, y_1),...,(\mathbf{x}_{t-1}, y_{t-1})\}$ are the $t-1$ samples drawn from $f$ so far. Popular acquisition functions are *maximum probability of improvement* (MPI), *expected improvement* (EI) and *upper confidence bound* (UCB)<sup>[1]</sup>. In the following, we will use the expected improvement (EI) which is most widely used and described further below.
### Optimization algorithm
The Bayesian optimization procedure is as follows. For $t = 1,2,...$ repeat:
- Find the next sampling point $\mathbf{x}_{t}$ by optimizing the acquisition function over the GP: $\mathbf{x}_t = \mathrm{argmax}_{\mathbf{x}} u(\mathbf{x} \lvert \mathcal{D}_{1:t-1})$
- Obtain a possibly noisy sample $y_t = f(\mathbf{x}_t) + \epsilon_t$ from the objective function $f$.
- Add the sample to previous samples $\mathcal{D}_{1:t} = \{\mathcal{D}_{1:t-1}, (\mathbf{x}_t,y_t)\}$ and update the GP.
### Expected improvement
Expected improvement is defined as
$$\mathrm{EI}(\mathbf{x}) = \mathbb{E}\max(f(\mathbf{x}) - f(\mathbf{x}^+), 0)\tag{1}$$
where $f(\mathbf{x}^+)$ is the value of the best sample so far and $\mathbf{x}^+$ is the location of that sample i.e. $\mathbf{x}^+ = \mathrm{argmax}_{\mathbf{x}_i \in \mathbf{x}_{1:t}} f(\mathbf{x}_i)$. The expected improvement can be evaluated analytically under the GP model<sup>[3]</sup>:
$$
\mathrm{EI}(\mathbf{x}) =
\begin{cases}
(\mu(\mathbf{x}) - f(\mathbf{x}^+) - \xi)\Phi(Z) + \sigma(\mathbf{x})\phi(Z) &\text{if}\ \sigma(\mathbf{x}) > 0 \\
0 & \text{if}\ \sigma(\mathbf{x}) = 0
\end{cases}\tag{2}
$$
where
$$
Z =
\begin{cases}
\frac{\mu(\mathbf{x}) - f(\mathbf{x}^+) - \xi}{\sigma(\mathbf{x})} &\text{if}\ \sigma(\mathbf{x}) > 0 \\
0 & \text{if}\ \sigma(\mathbf{x}) = 0
\end{cases}
$$
where $\mu(\mathbf{x})$ and $\sigma(\mathbf{x})$ are the mean and the standard deviation of the GP posterior predictive at $\mathbf{x}$, respectively. $\Phi$ and $\phi$ are the CDF and PDF of the standard normal distribution, respectively. The first summation term in Equation (2) is the exploitation term and the second summation term is the exploration term.
Parameter $\xi$ in Equation (2) determines the amount of exploration during optimization and higher $\xi$ values lead to more exploration. In other words, with increasing $\xi$ values, the importance of improvements predicted by the GP posterior mean $\mu(\mathbf{x})$ decreases relative to the importance of potential improvements in regions of high prediction uncertainty, represented by large $\sigma(\mathbf{x})$ values. A recommended default value for $\xi$ is $0.01$.
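As a quick numeric illustration of Equation (2) (a standalone sketch with hand-picked values for $\mu(\mathbf{x})$, $\sigma(\mathbf{x})$ and $f(\mathbf{x}^+)$, not used in the implementation below):

```python
from scipy.stats import norm

def expected_improvement_scalar(mu, sigma, f_best, xi=0.01):
    # Closed-form EI from Equation (2); assumes sigma > 0
    imp = mu - f_best - xi
    Z = imp / sigma
    return imp * norm.cdf(Z) + sigma * norm.pdf(Z)

# A candidate predicted slightly above the incumbent, with high uncertainty
print(expected_improvement_scalar(mu=1.0, sigma=0.5, f_best=0.9))
# A larger xi discounts the predicted improvement, so EI shrinks
print(expected_improvement_scalar(mu=1.0, sigma=0.5, f_best=0.9, xi=0.5))
```

Note that EI stays positive even when $\mu(\mathbf{x}) < f(\mathbf{x}^+)$, as long as $\sigma(\mathbf{x}) > 0$: uncertainty alone can make a point worth sampling.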
With this minimum of theory we can start implementing Bayesian optimization. The next section shows a basic implementation with plain NumPy and SciPy, later sections demonstrate how to use existing libraries. Finally, Bayesian optimization is used to tune the hyperparameters of a tree-based regression model.
## Implementation with NumPy and SciPy
In this section, we will implement the acquisition function and its optimization in plain NumPy and SciPy and use scikit-learn for the Gaussian process implementation. Although we have an analytical expression of the optimization objective `f` in the following example, we treat it as a black box and iteratively approximate it with a Gaussian process during Bayesian optimization. Furthermore, samples drawn from the objective function are noisy and the noise level is given by the `noise` variable. Optimization is done within given `bounds`. We also assume that there exist two initial samples in `X_init` and `Y_init`.
```
import numpy as np
%matplotlib inline
bounds = np.array([[-1.0, 2.0]])
noise = 0.2
def f(X, noise=noise):
    return -np.sin(3*X) - X**2 + 0.7*X + noise * np.random.randn(*X.shape)
X_init = np.array([[-0.9], [1.1]])
Y_init = f(X_init)
```
The following plot shows the noise-free objective function, the amount of noise by plotting a large number of samples and the two initial samples.
```
import matplotlib.pyplot as plt
# Dense grid of points within bounds
X = np.arange(bounds[0, 0], bounds[0, 1], 0.01).reshape(-1, 1)
# Noise-free objective function values at X
Y = f(X,0)
# Plot optimization objective with noise level
plt.plot(X, Y, 'y--', lw=2, label='Noise-free objective')
plt.plot(X, f(X), 'bx', lw=1, alpha=0.1, label='Noisy samples')
plt.plot(X_init, Y_init, 'kx', mew=3, label='Initial samples')
plt.legend();
```
The goal is to find the global optimum on the left in a small number of steps. The next step is to implement the acquisition function defined in Equation (2) as the `expected_improvement` function.
```
from scipy.stats import norm
def expected_improvement(X, X_sample, Y_sample, gpr, xi=0.01):
    '''
    Computes the EI at points X based on existing samples X_sample
    and Y_sample using a Gaussian process surrogate model.

    Args:
        X: Points at which EI shall be computed (m x d).
        X_sample: Sample locations (n x d).
        Y_sample: Sample values (n x 1).
        gpr: A GaussianProcessRegressor fitted to samples.
        xi: Exploitation-exploration trade-off parameter.

    Returns:
        Expected improvements at points X.
    '''
    mu, sigma = gpr.predict(X, return_std=True)
    mu_sample = gpr.predict(X_sample)

    sigma = sigma.reshape(-1, X_sample.shape[1])

    # Needed for noise-based model,
    # otherwise use np.max(Y_sample).
    # See also section 2.4 in [...]
    mu_sample_opt = np.max(mu_sample)

    with np.errstate(divide='warn'):
        imp = mu - mu_sample_opt - xi
        Z = imp / sigma
        ei = imp * norm.cdf(Z) + sigma * norm.pdf(Z)
        ei[sigma == 0.0] = 0.0

    return ei
```
We also need a function that proposes the next sampling point by computing the location of the acquisition function maximum. Optimization is restarted `n_restarts` times to avoid local optima.
```
from scipy.optimize import minimize
def propose_location(acquisition, X_sample, Y_sample, gpr, bounds, n_restarts=25):
    '''
    Proposes the next sampling point by optimizing the acquisition function.

    Args:
        acquisition: Acquisition function.
        X_sample: Sample locations (n x d).
        Y_sample: Sample values (n x 1).
        gpr: A GaussianProcessRegressor fitted to samples.
        bounds: Optimization bounds, one (min, max) row per dimension.
        n_restarts: Number of random restarts.

    Returns:
        Location of the acquisition function maximum.
    '''
    dim = X_sample.shape[1]
    min_val = 1
    min_x = None

    def min_obj(X):
        # Minimization objective is the negative acquisition function
        return -acquisition(X.reshape(-1, dim), X_sample, Y_sample, gpr)

    # Find the best optimum by starting from n_restarts different random points.
    for x0 in np.random.uniform(bounds[:, 0], bounds[:, 1], size=(n_restarts, dim)):
        res = minimize(min_obj, x0=x0, bounds=bounds, method='L-BFGS-B')
        if res.fun < min_val:
            min_val = res.fun[0]
            min_x = res.x

    return min_x.reshape(-1, 1)
```
Now we have all components needed to run Bayesian optimization with the [algorithm](#Optimization-algorithm) outlined above. The Gaussian process in the following example is configured with a [Matérn kernel](http://scikit-learn.org/stable/modules/gaussian_process.html#matern-kernel) which is a generalization of the squared exponential kernel or RBF kernel. The known noise level is configured with the `alpha` parameter.
Bayesian optimization runs for 10 iterations. In each iteration, a row with two plots is produced. The left plot shows the noise-free objective function, the surrogate function which is the GP posterior predictive mean, the 95% confidence interval of the mean and the noisy samples obtained from the objective function so far. The right plot shows the acquisition function. The vertical dashed line in both plots shows the proposed sampling point for the next iteration which corresponds to the maximum of the acquisition function.
```
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import ConstantKernel, Matern
from bayesian_optimization_util import plot_approximation, plot_acquisition
# Gaussian process with Matérn kernel as surrogate model
m52 = ConstantKernel(1.0) * Matern(length_scale=1.0, nu=2.5)
gpr = GaussianProcessRegressor(kernel=m52, alpha=noise**2)
# Initialize samples
X_sample = X_init
Y_sample = Y_init
# Number of iterations
n_iter = 10
plt.figure(figsize=(12, n_iter * 3))
plt.subplots_adjust(hspace=0.4)
for i in range(n_iter):
    # Update Gaussian process with existing samples
    gpr.fit(X_sample, Y_sample)

    # Obtain next sampling point from the acquisition function (expected_improvement)
    X_next = propose_location(expected_improvement, X_sample, Y_sample, gpr, bounds)

    # Obtain next noisy sample from the objective function
    Y_next = f(X_next, noise)

    # Plot samples, surrogate function, noise-free objective and next sampling location
    plt.subplot(n_iter, 2, 2 * i + 1)
    plot_approximation(gpr, X, Y, X_sample, Y_sample, X_next, show_legend=i==0)
    plt.title(f'Iteration {i+1}')

    plt.subplot(n_iter, 2, 2 * i + 2)
    plot_acquisition(X, expected_improvement(X, X_sample, Y_sample, gpr), X_next, show_legend=i==0)

    # Add sample to previous samples
    X_sample = np.vstack((X_sample, X_next))
    Y_sample = np.vstack((Y_sample, Y_next))
```
Note how the two initial samples initially drive search into the direction of the local maximum on the right side but exploration allows the algorithm to escape from that local optimum and find the global optimum on the left side. Also note how sampling point proposals often fall within regions of high uncertainty (exploration) and are not only driven by the highest surrogate function values (exploitation).
A convergence plot reveals how many iterations are needed to find a maximum and whether the sampling point proposals stay around that maximum, i.e. converge to small proposal differences between consecutive steps.
```
from bayesian_optimization_util import plot_convergence
plot_convergence(X_sample, Y_sample)
```
## Bayesian optimization libraries
There are numerous Bayesian optimization libraries out there and giving a comprehensive overview is not the goal of this article. Instead, I'll pick two that I used in the past and show the minimum setup needed to get the previous example running.
### Scikit-optimize
[Scikit-optimize](https://scikit-optimize.github.io/) is a library for sequential model-based optimization that is based on [scikit-learn](http://scikit-learn.org/). It also supports Bayesian optimization using Gaussian processes. The API is designed around minimization, hence, we have to provide negative objective function values. The results obtained here slightly differ from previous results because of non-deterministic optimization behavior and different noisy samples drawn from the objective function.
```
from sklearn.base import clone
from skopt import gp_minimize
from skopt.learning import GaussianProcessRegressor
from skopt.learning.gaussian_process.kernels import ConstantKernel, Matern
# Use custom kernel and estimator to match previous example
m52 = ConstantKernel(1.0) * Matern(length_scale=1.0, nu=2.5)
gpr = GaussianProcessRegressor(kernel=m52, alpha=noise**2)
r = gp_minimize(lambda x: -f(np.array(x))[0],
bounds.tolist(),
base_estimator=gpr,
acq_func='EI', # expected improvement
xi=0.01, # exploitation-exploration trade-off
n_calls=10, # number of iterations
n_random_starts=0, # initial samples are provided
x0=X_init.tolist(), # initial samples
y0=-Y_init.ravel())
# Fit GP model to samples for plotting results
gpr.fit(r.x_iters, -r.func_vals)
# Plot the fitted model and the noisy samples
plot_approximation(gpr, X, Y, r.x_iters, -r.func_vals, show_legend=True)
plot_convergence(np.array(r.x_iters), -r.func_vals)
```
## GPyOpt
[GPyOpt](http://sheffieldml.github.io/GPyOpt/) is a Bayesian optimization library based on [GPy](https://sheffieldml.github.io/GPy/). The abstraction level of the API is comparable to that of scikit-optimize. The `BayesianOptimization` API provides a `maximize` parameter to configure whether the objective function shall be maximized or minimized (default). In version 1.2.1, this seems to be ignored when providing initial samples, so we have to negate their target values manually in the following example. Also, the built-in `plot_acquisition` and `plot_convergence` methods display the minimization result in any case. Again, the results obtained here slightly differ from previous results because of non-deterministic optimization behavior and different noisy samples drawn from the objective function.
```
import GPy
import GPyOpt
from GPyOpt.methods import BayesianOptimization
kernel = GPy.kern.Matern52(input_dim=1, variance=1.0, lengthscale=1.0)
bds = [{'name': 'X', 'type': 'continuous', 'domain': bounds.ravel()}]
optimizer = BayesianOptimization(f=f,
domain=bds,
model_type='GP',
kernel=kernel,
acquisition_type ='EI',
acquisition_jitter = 0.01,
X=X_init,
Y=-Y_init,
noise_var = noise**2,
exact_feval=False,
normalize_Y=False,
maximize=True)
optimizer.run_optimization(max_iter=10)
optimizer.plot_acquisition()
optimizer.plot_convergence()
```
## Application
This section demonstrates how to optimize the hyperparameters of an `XGBRegressor` with GPyOpt and how Bayesian optimization performance compares to random search. `XGBRegressor` is part of [XGBoost](https://xgboost.readthedocs.io/), a flexible and scalable gradient boosting library. `XGBRegressor` implements the scikit-learn estimator API and can be applied to regression problems. Regression is performed on a small [toy dataset](http://scikit-learn.org/stable/modules/generated/sklearn.datasets.load_diabetes.html#sklearn.datasets.load_diabetes) that is part of scikit-learn.
```
from sklearn import datasets
from sklearn.model_selection import RandomizedSearchCV, cross_val_score
from scipy.stats import uniform
from xgboost import XGBRegressor
# Load the diabetes dataset (for regression)
X, Y = datasets.load_diabetes(return_X_y=True)
# Instantiate an XGBRegressor with default hyperparameter settings
xgb = XGBRegressor()
# and compute a baseline to beat with hyperparameter optimization
baseline = cross_val_score(xgb, X, Y, scoring='neg_mean_squared_error').mean()
```
### Hyperparameter tuning with random search
For hyperparameter tuning with random search, we use `RandomizedSearchCV` of scikit-learn and compute a cross-validation score for each randomly selected point in hyperparameter space. Results will be discussed below.
```
# Hyperparameters to tune and their ranges
param_dist = {"learning_rate": uniform(0, 1),
"gamma": uniform(0, 5),
"max_depth": range(1,50),
"n_estimators": range(1,300),
"min_child_weight": range(1,10)}
rs = RandomizedSearchCV(xgb, param_distributions=param_dist,
scoring='neg_mean_squared_error', n_iter=25)
# Run random search for 25 iterations
rs.fit(X, Y);
```
### Hyperparameter tuning with Bayesian optimization
To tune hyperparameters with Bayesian optimization we implement an objective function `cv_score` that takes hyperparameters as input and returns a cross-validation score. Here, we assume that cross-validation at a given point in hyperparameter space is deterministic and therefore set the `exact_feval` parameter of `BayesianOptimization` to `True`. Depending on model fitting and cross-validation details this might not be the case but we ignore that here.
```
bds = [{'name': 'learning_rate', 'type': 'continuous', 'domain': (0, 1)},
{'name': 'gamma', 'type': 'continuous', 'domain': (0, 5)},
{'name': 'max_depth', 'type': 'discrete', 'domain': (1, 50)},
{'name': 'n_estimators', 'type': 'discrete', 'domain': (1, 300)},
{'name': 'min_child_weight', 'type': 'discrete', 'domain': (1, 10)}]
# Optimization objective
def cv_score(parameters):
parameters = parameters[0]
score = cross_val_score(
 XGBRegressor(learning_rate=parameters[0],
 gamma=parameters[1],
 max_depth=int(parameters[2]),
 n_estimators=int(parameters[3]),
 min_child_weight=int(parameters[4])),
X, Y, scoring='neg_mean_squared_error').mean()
score = np.array(score)
return score
optimizer = BayesianOptimization(f=cv_score,
domain=bds,
model_type='GP',
acquisition_type ='EI',
acquisition_jitter = 0.05,
exact_feval=True,
maximize=True)
# Only 20 iterations because we have 5 initial random points
optimizer.run_optimization(max_iter=20)
```
### Results
On average, Bayesian optimization finds a better optimum in a smaller number of steps than random search and beats the baseline in almost every run. This trend becomes even more prominent in higher-dimensional search spaces. Here, the search space is only 5-dimensional, which is rather low to profit substantially from Bayesian optimization. One advantage of random search is that it is trivial to parallelize. Parallelizing Bayesian optimization is much harder and an active research topic (see \[4\], for example).
```
y_rs = np.maximum.accumulate(rs.cv_results_['mean_test_score'])
y_bo = np.maximum.accumulate(-optimizer.Y).ravel()
print(f'Baseline neg. MSE = {baseline:.2f}')
print(f'Random search neg. MSE = {y_rs[-1]:.2f}')
print(f'Bayesian optimization neg. MSE = {y_bo[-1]:.2f}')
plt.plot(y_rs, 'ro-', label='Random search')
plt.plot(y_bo, 'bo-', label='Bayesian optimization')
plt.xlabel('Iteration')
plt.ylabel('Neg. MSE')
plt.ylim(-5000, -3000)
plt.title('Value of the best sampled CV score');
plt.legend();
```
## References
\[1\] Eric Brochu, Vlad M. Cora, Nando de Freitas, [A Tutorial on Bayesian Optimization of Expensive Cost Functions](https://arxiv.org/abs/1012.2599).
\[2\] Jonas Mockus, [Application of Bayesian approach to numerical methods of global and stochastic optimization](https://link.springer.com/article/10.1007/BF01099263).
\[3\] Donald R. Jones, Matthias Schonlau, William J. Welch, [Efficient Global Optimization of Expensive Black-Box Functions](https://link.springer.com/article/10.1023/A:1008306431147).
\[4\] Jialei Wang, Scott C. Clark, Eric Liu, Peter I. Frazier, [Parallel Bayesian Global Optimization of Expensive Functions](https://arxiv.org/abs/1602.05149).
# Simple genetic algorithm
## Step-by-step implementation
```
import numpy as np
# initiate random number generator
seed = 1
rng = np.random.default_rng(seed)
# population number
population_size = 4
# initialize the population
population = list()
for i in range(population_size):
gene = rng.integers(low=0, high=2, size=5, dtype=np.uint8)
population.append(gene)
population
# gene decoding function
def gene_decode(gene):
n = gene.shape[0]
b = 2**np.arange(n)
x = np.sum(b * np.flip(gene))
return x
# decode the genotype
genotype_decode = [gene_decode(g) for g in population]
genotype_decode
# calculate fitness for each individual
fitness = np.square(genotype_decode)
fitness
# calculate the probability that an individual will be chosen to become a parent
parenting_probability = fitness/np.sum(fitness)
parenting_probability
# calculate the expected number of copies of each individual after selection
expected_count = fitness/np.mean(fitness)
expected_count
# calculate the actual number of copies in the mating pool
actual_count = np.around(expected_count, decimals=0).astype(int)
while sum(actual_count) < population_size:
actual_count[np.argmax(expected_count)] += 1
actual_count
# form the mating pool
mating_pool = list()
for i, count in enumerate(actual_count):
for j in range(count):
mating_pool.append(population[i])
mating_pool
# form pairs at random
arranged_mates = list(rng.permutation(mating_pool))
formed_pairs = [(arranged_mates[i], arranged_mates[i+1]) for i in range(len(arranged_mates)) if i%2 == 0]
formed_pairs
# select the crossover point at random
children = list()
for pair in formed_pairs:
xover = rng.integers(1, 5)
print(xover)
child1 = np.concatenate((pair[0][:xover], pair[1][xover:]))
child2 = np.concatenate((pair[1][:xover], pair[0][xover:]))
children.append(child1)
children.append(child2)
children
# mutate the genes with mutation rate 0.001
mutation_rate = 0.001
for child in children:
for i, gene in enumerate(child):
if rng.uniform() < mutation_rate:
print('Flipped in', child, i)
if gene == 0:
child[i] = 1
else:
child[i] = 0
children
# replace the population with descendants
population = children
population
```
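The decoding step above treats each 5-bit gene as a big-endian binary number. A quick sanity check of `gene_decode` against hand-computed values (the function is reproduced here so the check is self-contained):

```python
import numpy as np

def gene_decode(gene):
    # Interpret the gene as a big-endian binary number
    n = gene.shape[0]
    b = 2**np.arange(n)
    return np.sum(b * np.flip(gene))

# 10110 in binary is 16 + 4 + 2 = 22
assert gene_decode(np.array([1, 0, 1, 1, 0], dtype=np.uint8)) == 22
# The all-ones gene decodes to the maximum value 2**5 - 1 = 31
assert gene_decode(np.ones(5, dtype=np.uint8)) == 31
```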
## Putting it all together
```
import numpy as np
# initiate random number generator
seed = 1
rng = np.random.default_rng(seed)
# population number
population_size = 4
overall_fitness = list()
maximum_fitness = list()
# initialize the population
population = list()
for i in range(population_size):
gene = rng.integers(low=0, high=2, size=5, dtype=np.uint8)
population.append(gene)
def sga(population):
# gene decoding function
def gene_decode(gene):
n = gene.shape[0]
b = 2**np.arange(n)
x = np.sum(b * np.flip(gene))
return x
# decode the genotype
genotype_decode = [gene_decode(g) for g in population]
# calculate fitness for each individual
fitness = np.square(genotype_decode)
# calculate the probability that an individual will be chosen to become a parent
parenting_probability = fitness/np.sum(fitness)
# calculate the expected number of copies of each individual after selection
expected_count = fitness/np.mean(fitness)
# calculate the actual number of copies in the mating pool
actual_count = np.around(expected_count, decimals=0).astype(int)
while sum(actual_count) < population_size:
actual_count[np.argmax(expected_count)] += 1
# form the mating pool
mating_pool = list()
for i, count in enumerate(actual_count):
for j in range(count):
mating_pool.append(population[i])
# form pairs at random
arranged_mates = list(rng.permutation(mating_pool))
formed_pairs = [(arranged_mates[i], arranged_mates[i+1]) for i in range(len(arranged_mates)) if i%2 == 0]
# select the crossover point at random
children = list()
for pair in formed_pairs:
xover = rng.integers(1, 5)
child1 = np.concatenate((pair[0][:xover], pair[1][xover:]))
child2 = np.concatenate((pair[1][:xover], pair[0][xover:]))
children.append(child1)
children.append(child2)
# mutate the genes with mutation rate 0.001
mutation_rate = 0.001
for child in children:
for i, gene in enumerate(child):
if rng.uniform() < mutation_rate:
if gene == 0:
child[i] = 1
else:
child[i] = 0
# replace the population with descendants
population = children
# decode the new genotype
genotype_decode = [gene_decode(g) for g in population]
# calculate fitness for each new individual
fitness = np.square(genotype_decode)
return (population, fitness)
for i in range(10):
population, fitness = sga(population)
overall_fitness.append(fitness.sum())
maximum_fitness.append(fitness.max())
overall_fitness
maximum_fitness
population
```
# Introduction to optimization
The basic components
* The objective function (also called the 'cost' function)
```
import numpy as np
objective = np.poly1d([1.3, 4.0, 0.6])
print(objective)
```
* The "optimizer"
```
import scipy.optimize as opt
x_ = opt.fmin(objective, [3])
print("solved: x={}".format(x_))
%matplotlib notebook
x = np.linspace(-4,1,101)
import matplotlib.pylab as mpl
mpl.plot(x, objective(x))
mpl.plot(x_, objective(x_), 'ro')
```
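For a quadratic objective we can also check the solver against the analytic minimum, a sanity check worth doing whenever a closed form is available:

```python
import numpy as np
import scipy.optimize as opt

objective = np.poly1d([1.3, 4.0, 0.6])

# The minimum of a*x**2 + b*x + c sits at the root of the derivative,
# x = -b / (2*a) = -4.0 / 2.6
analytic_min = objective.deriv().roots[0]

numeric_min = opt.fmin(objective, [3], disp=0)[0]
assert abs(numeric_min - analytic_min) < 1e-3
```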
Additional components
* "Box" constraints
```
import scipy.special as ss
import scipy.optimize as opt
import numpy as np
import matplotlib.pylab as mpl
x = np.linspace(2, 7, 200)
# 1st order Bessel
j1x = ss.j1(x)
mpl.plot(x, j1x)
# use scipy.optimize's more modern "results object" interface
result = opt.minimize_scalar(ss.j1, method="bounded", bounds=[2, 4])
j1_min = ss.j1(result.x)
mpl.plot(result.x, j1_min,'ro')
```
* The gradient and/or hessian
```
import mystic.models as models
print(models.rosen.__doc__)
import mystic
mystic.model_plotter(mystic.models.rosen, kwds='-f -d -x 1 -b "-3:3:.1, -1:5:.1, 1"')
import scipy.optimize as opt
import numpy as np
# initial guess
x0 = [1.3, 1.6, -0.5, -1.8, 0.8]
result = opt.minimize(opt.rosen, x0)
print(result.x)
# number of function evaluations
print(result.nfev)
# again, but this time provide the derivative
result = opt.minimize(opt.rosen, x0, jac=opt.rosen_der)
print(result.x)
# number of function evaluations and derivative evaluations
print(result.nfev, result.njev)
print('')
# however, note for a different x0...
for i in range(5):
x0 = np.random.randint(-20,20,5)
result = opt.minimize(opt.rosen, x0, jac=opt.rosen_der)
print("{} @ {} evals".format(result.x, result.nfev))
```
* The penalty functions
$\psi(x) = f(x) + k*p(x)$
```
# http://docs.scipy.org/doc/scipy/reference/tutorial/optimize.html#tutorial-sqlsp
'''
Maximize: f(x) = 2*x0*x1 + 2*x0 - x0**2 - 2*x1**2
Subject to: x0**3 - x1 == 0
x1 >= 1
'''
import numpy as np
def objective(x, sign=1.0):
return sign*(2*x[0]*x[1] + 2*x[0] - x[0]**2 - 2*x[1]**2)
def derivative(x, sign=1.0):
dfdx0 = sign*(-2*x[0] + 2*x[1] + 2)
dfdx1 = sign*(2*x[0] - 4*x[1])
return np.array([ dfdx0, dfdx1 ])
# unconstrained
result = opt.minimize(objective, [-1.0,1.0], args=(-1.0,),
jac=derivative, method='SLSQP', options={'disp': True})
print("unconstrained: {}".format(result.x))
cons = ({'type': 'eq',
'fun' : lambda x: np.array([x[0]**3 - x[1]]),
'jac' : lambda x: np.array([3.0*(x[0]**2.0), -1.0])},
{'type': 'ineq',
'fun' : lambda x: np.array([x[1] - 1]),
'jac' : lambda x: np.array([0.0, 1.0])})
# constrained
result = opt.minimize(objective, [-1.0,1.0], args=(-1.0,), jac=derivative,
constraints=cons, method='SLSQP', options={'disp': True})
print("constrained: {}".format(result.x))
```
Optimizer classifications
* Constrained versus unconstrained (and importantly LP and QP)
```
# from scipy.optimize.minimize documentation
'''
**Unconstrained minimization**
Method *Nelder-Mead* uses the Simplex algorithm [1]_, [2]_. This
algorithm has been successful in many applications but other algorithms
using the first and/or second derivatives information might be preferred
for their better performances and robustness in general.
Method *Powell* is a modification of Powell's method [3]_, [4]_ which
is a conjugate direction method. It performs sequential one-dimensional
minimizations along each vector of the directions set (`direc` field in
`options` and `info`), which is updated at each iteration of the main
minimization loop. The function need not be differentiable, and no
derivatives are taken.
Method *CG* uses a nonlinear conjugate gradient algorithm by Polak and
Ribiere, a variant of the Fletcher-Reeves method described in [5]_ pp.
120-122. Only the first derivatives are used.
Method *BFGS* uses the quasi-Newton method of Broyden, Fletcher,
Goldfarb, and Shanno (BFGS) [5]_ pp. 136. It uses the first derivatives
only. BFGS has proven good performance even for non-smooth
optimizations. This method also returns an approximation of the Hessian
inverse, stored as `hess_inv` in the OptimizeResult object.
Method *Newton-CG* uses a Newton-CG algorithm [5]_ pp. 168 (also known
as the truncated Newton method). It uses a CG method to compute the
search direction. See also *TNC* method for a box-constrained
minimization with a similar algorithm.
Method *Anneal* uses simulated annealing, which is a probabilistic
metaheuristic algorithm for global optimization. It uses no derivative
information from the function being optimized.
Method *dogleg* uses the dog-leg trust-region algorithm [5]_
for unconstrained minimization. This algorithm requires the gradient
and Hessian; furthermore the Hessian is required to be positive definite.
Method *trust-ncg* uses the Newton conjugate gradient trust-region
algorithm [5]_ for unconstrained minimization. This algorithm requires
the gradient and either the Hessian or a function that computes the
product of the Hessian with a given vector.
**Constrained minimization**
Method *L-BFGS-B* uses the L-BFGS-B algorithm [6]_, [7]_ for bound
constrained minimization.
Method *TNC* uses a truncated Newton algorithm [5]_, [8]_ to minimize a
function with variables subject to bounds. This algorithm uses
gradient information; it is also called Newton Conjugate-Gradient. It
differs from the *Newton-CG* method described above as it wraps a C
implementation and allows each variable to be given upper and lower
bounds.
Method *COBYLA* uses the Constrained Optimization BY Linear
Approximation (COBYLA) method [9]_, [10]_, [11]_. The algorithm is
based on linear approximations to the objective function and each
constraint. The method wraps a FORTRAN implementation of the algorithm.
Method *SLSQP* uses Sequential Least SQuares Programming to minimize a
function of several variables with any combination of bounds, equality
and inequality constraints. The method wraps the SLSQP Optimization
subroutine originally implemented by Dieter Kraft [12]_. Note that the
wrapper handles infinite values in bounds by converting them into large
floating values.
'''
```
The typical optimization algorithm (local or global) is unconstrained. Constrained algorithms tend strongly to be local, and often use LP/QP approximations. Hence, most optimization algorithms are good either for quick linear/quadratic approximation under some constraints, or are intended for nonlinear functions without constraints. Any information about the problem that impacts the potential solution can be seen as constraining information. Constraining information is typically applied as a penalty, or as a box constraint on an input. The user is thus typically forced to choose between applying constraints and treating the problem as an LP/QP approximation, or ignoring the constraining information in exchange for a nonlinear solver.
```
import scipy.optimize as opt
# constrained: linear (i.e. A*x + b)
print(opt.cobyla.fmin_cobyla)
print(opt.linprog)
# constrained: quadratic programming (i.e. up to x**2)
print(opt.fmin_slsqp)
# http://cvxopt.org/examples/tutorial/lp.html
'''
minimize: f = 2*x0 + x1
subject to:
-x0 + x1 <= 1
x0 + x1 >= 2
x1 >= 0
x0 - 2*x1 <= 4
'''
import cvxopt as cvx
from cvxopt import solvers as cvx_solvers
A = cvx.matrix([ [-1.0, -1.0, 0.0, 1.0], [1.0, -1.0, -1.0, -2.0] ])
b = cvx.matrix([ 1.0, -2.0, 0.0, 4.0 ])
cost = cvx.matrix([ 2.0, 1.0 ])
sol = cvx_solvers.lp(cost, A, b)
print(sol['x'])
# http://cvxopt.org/examples/tutorial/qp.html
'''
minimize: f = 2*x1**2 + x2**2 + x1*x2 + x1 + x2
subject to:
x1 >= 0
x2 >= 0
x1 + x2 == 1
'''
import cvxopt as cvx
from cvxopt import solvers as cvx_solvers
Q = 2*cvx.matrix([ [2, .5], [.5, 1] ])
p = cvx.matrix([1.0, 1.0])
G = cvx.matrix([[-1.0,0.0],[0.0,-1.0]])
h = cvx.matrix([0.0,0.0])
A = cvx.matrix([1.0, 1.0], (1,2))
b = cvx.matrix(1.0)
sol = cvx_solvers.qp(Q, p, G, h, A, b)
print(sol['x'])
```
Notice how much nicer it is to see the optimizer "trajectory". Now, instead of a single number, we have the path the optimizer took in finding the solution. `scipy.optimize` has a version of this, with `options={'retall':True}`, which returns the solver trajectory.
**EXERCISE:** Solve the constrained programming problem by any of the means above.
Minimize: f = -1*x[0] + 4*x[1]
Subject to: <br>
-3*x[0] + 1*x[1] <= 6 <br>
1*x[0] + 2*x[1] <= 4 <br>
x[1] >= -3 <br>
where: -inf <= x[0] <= inf
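One possible approach uses scipy's LP solver: `opt.linprog` minimizes `c @ x` subject to `A_ub @ x <= b_ub`, so the `x[1] >= -3` constraint is expressed as a variable bound.

```python
import scipy.optimize as opt

c = [-1, 4]                          # minimize -x0 + 4*x1
A_ub = [[-3, 1], [1, 2]]             # -3*x0 + x1 <= 6,  x0 + 2*x1 <= 4
b_ub = [6, 4]
bounds = [(None, None), (-3, None)]  # x0 free, x1 >= -3

res = opt.linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=bounds)
print(res.x, res.fun)  # -> roughly [10. -3.], -22.0
```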
* Local versus global
```
import scipy.optimize as opt
# probabilistic solvers that use random hopping/mutations
print(opt.differential_evolution)
print(opt.basinhopping)
import scipy.optimize as opt
# bounds instead of an initial guess
bounds = [(-10., 10)]*5
for i in range(10):
result = opt.differential_evolution(opt.rosen, bounds)
# result and number of function evaluations
print(result.x, '@ {} evals'.format(result.nfev))
```
Global optimizers tend to be much slower than local optimizers, and often use randomness to pick points within some box constraints instead of starting with an initial guess. The choice then is between algorithms that are non-deterministic and algorithms that are deterministic but depend very strongly on the selected starting point.
Local optimization algorithms have names like "gradient descent" and "steepest descent", while global optimizers tend to use things like "stochastic" and "genetic" algorithms.
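A quick sketch of the stochastic side: `basinhopping` alternates local minimization with random perturbations of the iterate, so it takes an initial guess like a local solver but can escape local minima. The `seed` fixes the randomness for reproducibility.

```python
import numpy as np
import scipy.optimize as opt

# Global optimization of the 2-D Rosenbrock function: basinhopping
# runs a local minimizer, then randomly "hops" and runs it again.
result = opt.basinhopping(opt.rosen, x0=[0.0, 0.0], niter=50, seed=4)
print(result.x)  # close to the global minimum at [1, 1]
```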
* Not covered: other exotic types
Other important special cases:
* Least-squares fitting
```
import scipy.optimize as opt
import scipy.stats as stats
import numpy as np
# Define the function to fit.
def function(x, a, b, f, phi):
result = a * np.exp(-b * np.sin(f * x + phi))
return result
# Create a noisy data set around the actual parameters
true_params = [3, 2, 1, np.pi/4]
print("target parameters: {}".format(true_params))
x = np.linspace(0, 2*np.pi, 25)
exact = function(x, *true_params)
noisy = exact + 0.3*stats.norm.rvs(size=len(x))
# Use curve_fit to estimate the function parameters from the noisy data.
initial_guess = [1,1,1,1]
estimated_params, err_est = opt.curve_fit(function, x, noisy, p0=initial_guess)
print("solved parameters: {}".format(estimated_params))
# err_est is an estimate of the covariance matrix of the estimates
print("covarance: {}".format(err_est.diagonal()))
import matplotlib.pylab as mpl
mpl.plot(x, noisy, 'ro')
mpl.plot(x, function(x, *estimated_params))
```
Least-squares tends to be chosen when the user wants a measure of the covariance, typically as an error estimate.
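The usual way to turn that covariance matrix into per-parameter error bars is to take the square root of its diagonal. A sketch with a simple linear model (the noise level and data here are arbitrary illustrations):

```python
import numpy as np
import scipy.optimize as opt

def line(x, a, b):
    return a*x + b

# Noisy data around a known slope and intercept
rng = np.random.default_rng(0)
x = np.linspace(0, 10, 50)
y = line(x, 2.0, 1.0) + 0.1*rng.standard_normal(50)

params, cov = opt.curve_fit(line, x, y)

# One-standard-deviation error estimates are the square roots of the
# diagonal of the covariance matrix
perr = np.sqrt(np.diag(cov))
print(params, perr)
```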
* Integer programming
Integer programming (IP) or mixed-integer programming (MIP) requires special optimizers that only select parameter values from the set of integers (for some or all variables). These optimizers are typically used for problems like scheduling and resource allocation, or other optimizations over a discrete set of possible solutions.
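A minimal sketch of what such a solver looks like in practice, using scipy's `milp` (available in SciPy 1.9+) on a tiny 0/1 knapsack; the weights, values, and capacity are made-up illustration data:

```python
import numpy as np
from scipy.optimize import milp, LinearConstraint, Bounds

# Pick items with weights w and values v, total weight at most 7,
# to maximize total value; each x[i] is forced to be 0 or 1.
v = np.array([10, 13, 18, 31])
w = np.array([2, 3, 4, 6])

res = milp(c=-v,  # milp minimizes, so negate the values
           constraints=LinearConstraint(w[np.newaxis, :], ub=7),
           integrality=np.ones_like(v),
           bounds=Bounds(0, 1))
print(res.x, -res.fun)
```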
Typical uses
* Function minimization
* Data fitting
* Root finding
```
import numpy as np
import scipy.optimize as opt
def system(x,a,b,c):
x0, x1, x2 = x
eqs= [
3 * x0 - np.cos(x1*x2) + a, # == 0
x0**2 - 81*(x1+0.1)**2 + np.sin(x2) + b, # == 0
np.exp(-x0*x1) + 20*x2 + c # == 0
]
return eqs
# coefficients
a = -0.5
b = 1.06
c = (10 * np.pi - 3.0) / 3
# initial guess
x0 = [0.1, 0.1, -0.1]
# Solve the system of non-linear equations.
result = opt.root(system, x0, args=(a, b, c))
print("root:", result.x)
print("solution:", result.fun)
```
* Parameter estimation
```
import numpy as np
import scipy.stats as stats
# Create clean data.
x = np.linspace(0, 4.0, 100)
y = 1.5 * np.exp(-0.2 * x) + 0.3
# Add a bit of noise.
noise = 0.1 * stats.norm.rvs(size=100)
noisy_y = y + noise
# Fit noisy data with a linear model.
linear_coef = np.polyfit(x, noisy_y, 1)
linear_poly = np.poly1d(linear_coef)
linear_y = linear_poly(x)
# Fit noisy data with a quadratic model.
quad_coef = np.polyfit(x, noisy_y, 2)
quad_poly = np.poly1d(quad_coef)
quad_y = quad_poly(x)
import matplotlib.pylab as mpl
mpl.plot(x, noisy_y, 'ro')
mpl.plot(x, linear_y)
mpl.plot(x, quad_y)
#mpl.plot(x, y)
```
Standard diagnostic tools
* Eyeball the plotted solution against the objective
* Run several times and take the best result
* Analyze a log of intermediate results, per iteration
* Rare: look at the covariance matrix
* Issue: how can you really be sure you have the results you were looking for?
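For the "log of intermediate results" item, most scipy solvers accept a `callback` that fires once per iteration; recording the iterates gives you a trajectory to inspect afterwards:

```python
import numpy as np
import scipy.optimize as opt

trajectory = []  # one entry per solver iteration

result = opt.minimize(opt.rosen, [1.3, 0.7, 0.8, 1.9, 1.2],
                      callback=lambda xk: trajectory.append(np.copy(xk)))

print(len(trajectory), 'iterations logged')
print(result.x)  # close to the minimum at [1, 1, 1, 1, 1]
```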
**EXERCISE:** Use any of the solvers we've seen thus far to find the minimum of the `zimmermann` function (i.e. use `mystic.models.zimmermann` as the objective). Use the bounds suggested below, if your choice of solver allows it.
```
import mystic.models as models
print(models.zimmermann.__doc__)
```
**EXERCISE:** Do the same for the `fosc3d` function found at `mystic.models.fosc3d`, using the bounds suggested by the documentation, if your chosen solver accepts bounds or constraints.
More to ponder: what about high-dimensional and nonlinear constraints?
Let's look at optimization "redesigned" in [mystic](mystic.ipynb)...
# Other programming languages
**Today we talk about various programming languages:** If you have learned one programming language, it is easy to learn the next.
**Different kinds** of programming languages:
1. **Low-level, compiled (C/C++, Fortran):** You are in full control, but need to specify types, allocate memory and clean up after yourself
2. **High-level, interpreted (MATLAB, Python, Julia, R):** Types are inferred, memory is allocated automatically, and there is automatic garbage collection
**Others:**
1. **[Wolfram Mathematica](https://www.wolfram.com/mathematica/)**: A mathematical programming language. The inspiration for **sympy**.
2. **[STATA](https://www.stata.com/)**: For many economists still the preferred statistical program, because it is so good at panel data and provides standard errors for a lot of the commonly used estimators.
> **Note:** Data cleaning and structuring is increasingly done in **R** or **Python**, and **STATA** is then only used for estimation.
**Comparison:** We solve the same Simulated Minimum Distance (SMD) problem in MATLAB, Python and Julia.
**Observations:**
1. Any language can typically be used to solve a task. But some have a **comparative advantage**.
2. If a **syntax** in a language irritates you, you will write worse code.
3. A **community** in your field around a language is important.
4. **No language is the best at everything**.
**Comparisons:**
- Coleman et al. (2020): MATLAB, [Python and Julia: What to choose in economics?](https://lmaliar.ws.gc.cuny.edu/files/2019/01/CEPR-DP13210.pdf)
- Fernández-Villaverde and Valencia (2019): [A Practical Guide to Parallelization in Economics](https://www.sas.upenn.edu/~jesusfv/Guide_Parallel.pdf)
# High-level programming languages
## MATLAB
The **godfather** of high-level scientific programming. *The main source of inspiration for numpy and Julia*.
The **good** things:
1. Full scientific programming language
2. Especially good at optimization and (sparse) matrix algebra
3. Well-developed interface (IDE) and debugger
4. Integration with C++ through mex functions
The **bad** things:
1. Not open source and costly outside of academia
2. Not always easy to parallelize natively
3. Not a complete programming language
4. Not in JupyterLab
**Download:** Available in the Absalon software library.
**Example:** `SMD_MATLAB.mlx`
**More:**
1. **Mini-course in MATLAB:** See the folder `\MATLAB_course`
2. [NumPy for Matlab users](https://docs.scipy.org/doc/numpy/user/numpy-for-matlab-users.html)
## Python
The **swiss-knife** of programming languages.
The **good** things:
1. Allround programming language
2. Full scientific programming (numpy+scipy)
3. Good at statistics (in particular data handling and machine learning)
4. Just-in-time (jit) compilation available (numba)
5. Easy to integrate with C++ (ctypes, cffi)
The **bad** things:
1. Messy package system at times
2. Sometimes hard to jit-compile and parallelize
**Example:** `SMD_Python.ipynb`
## Julia
The **newcomer** of scientific programming languages.
The **good** things:
1. All-round programming language
2. Automatic just-in-time compilation with native parallelization - almost as fast as C++
3. Focused on scientific computing and high performance computing
The **bad** things:
1. Young language, with smallish, but growing, community
2. Sometimes hard to ensure that the just-in-time compilation works efficiently
**Example:** `SMD_Julia.ipynb`
**Download Julia:**
- [Open source version](https://julialang.org/downloads/)
- [JuliaPro from Julia Computing (bundled with IDE and notebook support)](https://juliacomputing.com/products/juliapro)
- [Documentation (language and about 1900 packages)](https://pkg.julialang.org/docs/)
**Julia community:**
- [Discourse](https://discourse.julialang.org)
- [Slack](https://julialang.slack.com)
For **introductory material on Julia for economists**, see [https://lectures.quantecon.org/jl/](https://lectures.quantecon.org/jl/).
## R
The **statistician's favorite choice** of programming language.
The **good** things:
1. Great package system
2. The best statistical packages
3. Well-developed interface (IDE) (Rstudio)
4. Easy to integrate with C++ (Rcpp)
The **bad** things:
1. Not designed to be a scientific programming language
2. Not a complete programming language
**Download:** https://www.rstudio.com/
# Low-level programming languages
## Fortran
What I have nightmares about...
In the old days, it was a bit faster than C++. This is no longer true.
## C/C++
**The fastest you can get.** A very powerful tool, but hard to learn, and impossible to master.
```
import numpy as np
import ctypes as ct
import callcpp # local library
import psutil
CPUs = psutil.cpu_count()
CPUs_list = set(np.sort([1,2,4,*np.arange(8,CPUs+1,4)]))
print(f'this computer has {CPUs} CPUs')
```
## Calling C++ from Python
> **Note I:** This section can only be run on a Windows computer with the free **Microsoft Visual Studio 2017 Community Edition** ([download here](https://visualstudio.microsoft.com/downloads/)) installed.
>
> **Note II:** Learning C++ is somewhat hard. These [tutorials](http://www.cplusplus.com/doc/tutorial/) are helpful.
Python provides multiple ways of calling functions written in C++. Here I use **ctypes**.
**C++ file:** example.cpp in the current folder.
**Step 1:** Compile C++ to a .dll file
```
callcpp.compile_cpp('example') # compiles example.cpp
```
> **Details:** Under the hood, this writes a file called ``compile.bat`` and runs it in a terminal.
**Step 2:** Link to .dll file
```
# funcs (list): list of functions with elements (functionname,[argtype1,argtype2,etc.])
funcs = [('myfun_cpp',[ct.POINTER(ct.c_double),ct.POINTER(ct.c_double),ct.POINTER(ct.c_double),
ct.c_long,ct.c_long,ct.c_long])]
# ct.POINTER(ct.c_double) is a pointer to a double
# ct.c_long is a long integer
cppfile = callcpp.link_cpp('example',funcs)
```
**Step 3:** Call function
```
def myfun_numpy_vec(x1,x2):
y = np.empty((1,x1.size))
I = x1 < 0.5
y[I] = np.sum(np.exp(x2*x1[I]),axis=0)
y[~I] = np.sum(np.log(x2*x1[~I]),axis=0)
return y
# setup
x1 = np.random.uniform(size=10**6)
x2 = np.random.uniform(size=int(100*CPUs/8)) # adjust the size of the problem
x1_np = x1.reshape((1,x1.size))
x2_np = x2.reshape((x2.size,1))
# timing
%timeit myfun_numpy_vec(x1_np,x2_np)
def myfun_cpp(x1,x2,threads):
y = np.empty(x1.size)
p_x1 = np.ctypeslib.as_ctypes(x1) # pointer to x1
p_x2 = np.ctypeslib.as_ctypes(x2) # pointer to x2
p_y = np.ctypeslib.as_ctypes(y) # pointer to y
cppfile.myfun_cpp(p_x1,p_x2,p_y,x1.size,x2.size,threads)
return y
assert np.allclose(myfun_numpy_vec(x1_np,x2_np),myfun_cpp(x1,x2,1))
for threads in CPUs_list:
print(f'threads = {threads}')
%timeit myfun_cpp(x1,x2,threads)
print('')
```
**Observation:** Compare with results in lecture 12. Numba is roughly as fast as C++ here (I get different results across different computers). In larger problems, C++ is usually faster, and while Numba is limited in terms of which Python and Numpy features it supports, everything can be coded in C++.
**Step 4:** Delink .dll file
```
callcpp.delink_cpp(cppfile,'example')
```
**More information:** See the folder "Numba and C++" in the [ConsumptionSavingNotebooks](https://github.com/NumEconCopenhagen/ConsumptionSavingNotebooks) repository. Includes an explanation of how to use the **NLopt optimizers** in C++.
```
import pickle
import codecs
import numpy as np
import pandas as pd
import tensorflow as tf
from tensorflow.python.layers.core import Dense
import time
from nltk.corpus import stopwords
from os import listdir
import re
class BasePreprocessor:
"""The abstract class for a preprocessor. You should subclass
this and implement the methods actions and result, and possibly
__init__, goal_test, and path_cost. Then you will create instances
of your subclass and solve them with the various search functions."""
# List of contractions.
CONTRACTION_LIST = {
"ain't": "is not",
"aren't": "are not",
"can't": "cannot",
"can't've": "cannot have",
"'cause": "because",
"could've": "could have",
"couldn't": "could not",
"couldn't've": "could not have",
"didn't": "did not",
"doesn't": "does not",
"don't": "do not",
"hadn't": "had not",
"hadn't've": "had not have",
"hasn't": "has not",
"haven't": "have not",
"he'd": "he would",
"he'd've": "he would have",
"he'll": "he will",
"he'll've": "he he will have",
"he's": "he is",
"how'd": "how did",
"how'd'y": "how do you",
"how'll": "how will",
"how's": "how is",
"I'd": "I would",
"I'd've": "I would have",
"I'll": "I will",
"I'll've": "I will have",
"I'm": "I am",
"I've": "I have",
"i'd": "i would",
"i'd've": "i would have",
"i'll": "i will",
"i'll've": "i will have",
"i'm": "i am",
"i've": "i have",
"isn't": "is not",
"it'd": "it would",
"it'd've": "it would have",
"it'll": "it will",
"it'll've": "it will have",
"it's": "it is",
"let's": "let us",
"ma'am": "madam",
"mayn't": "may not",
"might've": "might have",
"mightn't": "might not",
"mightn't've": "might not have",
"must've": "must have",
"mustn't": "must not",
"mustn't've": "must not have",
"needn't": "need not",
"needn't've": "need not have",
"o'clock": "of the clock",
"oughtn't": "ought not",
"oughtn't've": "ought not have",
"shan't": "shall not",
"sha'n't": "shall not",
"shan't've": "shall not have",
"she'd": "she would",
"she'd've": "she would have",
"she'll": "she will",
"she'll've": "she will have",
"she's": "she is",
"should've": "should have",
"shouldn't": "should not",
"shouldn't've": "should not have",
"so've": "so have",
"so's": "so as",
"that'd": "that would",
"that'd've": "that would have",
"that's": "that is",
"there'd": "there would",
"there'd've": "there would have",
"there's": "there is",
"they'd": "they would",
"they'd've": "they would have",
"they'll": "they will",
"they'll've": "they will have",
"they're": "they are",
"they've": "they have",
"to've": "to have",
"wasn't": "was not",
"we'd": "we would",
"we'd've": "we would have",
"we'll": "we will",
"we'll've": "we will have",
"we're": "we are",
"we've": "we have",
"weren't": "were not",
"what'll": "what will",
"what'll've": "what will have",
"what're": "what are",
"what's": "what is",
"what've": "what have",
"when's": "when is",
"when've": "when have",
"where'd": "where did",
"where's": "where is",
"where've": "where have",
"who'll": "who will",
"who'll've": "who will have",
"who's": "who is",
"who've": "who have",
"why's": "why is",
"why've": "why have",
"will've": "will have",
"won't": "will not",
"won't've": "will not have",
"would've": "would have",
"wouldn't": "would not",
"wouldn't've": "would not have",
"y'all": "you all",
"y'all'd": "you all would",
"y'all'd've": "you all would have",
"y'all're": "you all are",
"y'all've": "you all have",
"you'd": "you would",
"you'd've": "you would have",
"you'll": "you will",
"you'll've": "you will have",
"you're": "you are",
"you've": "you have"
}
def __init__(self):
"""The constructor. Your subclass's constructor can add
other arguments."""
def cleanData(self, text, removeStopwords = True):
"""
This method is a standard implementation to clean any text that are
passed in as parameter. Here the text is split into sentences and each
sentence is in turn cleaned by invoking the cleanSentence() method.
Any custom cleaning needs to be done at the subclass Preprocessor and
the invoke this method.
 Parameters
 ----------
 text : string
 The text to be cleaned.
 removeStopwords : bool, optional
 Whether stop words should be removed. The default is True.
 Returns
 -------
 string
 The cleaned text.
"""
cleanedSentences = list()
sentences = text.split('\n')
for sentence in sentences:
# Cleaning the sentence here
sentence = self.cleanSentence(sentence, removeStopwords)
if len(sentence) > 0:
cleanedSentences.append(sentence)
return ' '.join(cleanedSentences).lower()
def cleanSentence(self, sentence, removeStopwords):
"""
The method cleans a passed in sentence parameter by:
i. removing all whitespace characters.
ii. removing all punctuations.
Parameters
----------
 sentence : string
 The sentence to be cleaned.
 removeStopwords : bool
 Whether stop words should be removed.
Returns
-------
string
The cleaned sentence.
"""
sentence = sentence.lower()
sentence = self.fixContractions(sentence)
sentence = self.removeUnwantedCharacters(sentence)
if removeStopwords:
sentence = self.removeStopWords(sentence)
return sentence
def fixContractions(self, text, contractionList=CONTRACTION_LIST):
"""
 Expands contractions by finding a match in the contraction list
 using regular expression pattern matching.
Parameters
----------
text : string
The text where contractions need to be fixed.
contraction_list : dictionary, optional
The dictionary which tells the mapping for different types of
contractions. The default is CONTRACTION_LIST.
Returns
-------
string
The expanded text.
"""
text = re.findall(r"[\w']+", text)
new_text = []
for word in text:
if word in contractionList:
new_text.append(contractionList[word])
else:
new_text.append(word)
return ' '.join(new_text)
def removeUnwantedCharacters(self, text):
"""
Removes all unwanted characters from the text.
This includes any URLs, HTML tags, punctuations, line breaks.
Parameters
----------
text : string
The text that needs to be cleaned.
Returns
-------
text : string
The cleaned text.
"""
text = text.strip()
text = re.sub(r'https?:\/\/.*[\r\n]*', '', text, flags=re.MULTILINE)# remove links
text = re.sub(r'\<a href', ' ', text)# remove html link tag
text = re.sub(r'&amp;', '', text)# remove HTML-escaped ampersands
text = re.sub(r'[_"\-;%()|+&=*%.,!?:#$@\[\]/]', ' ', text)
text = re.sub(r'<br />', ' ', text)
text = re.sub(r'\'', ' ', text)
return text
def removeStopWords(self, text):
"""
Removes the stop words.
Parameters
----------
text : string
The text where the stop words need to be removed.
Returns
-------
string
The text with the stop words removed.
"""
text = text.split()
stops = set(stopwords.words("english"))
text = [w for w in text if not w in stops]
return ' '.join(text)
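# --- Illustrative sketch (not part of the original pipeline) ---
# The contraction expansion above tokenizes on word characters plus
# apostrophes and swaps any token found in the contraction dictionary;
# the same idea on a tiny hand-made dictionary:
import re
toy_contractions = {"won't": "will not", "y'all": "you all"}
toy_words = re.findall(r"[\w']+", "won't y'all come")
toy_expanded = ' '.join(toy_contractions.get(w, w) for w in toy_words)
# toy_expanded == "will not you all come"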
class CnnPreprocessor(BasePreprocessor):
"""This is a preprocessor class which implements CNN dataset specific
cleaning methods."""
def __init__(self):
"""
The constructor method to do any initial value setting.
Returns
-------
CnnProcessor class object.
"""
super().__init__()
def stripOffNewsSource(self, text):
"""
This method helps to strip off the news source from the text.
Parameters
----------
text : string
The news text.
Returns
-------
text : string
The news text with any news source stripped off.
"""
closingBracketIndex = text.find(')')
firstWord = ''
if closingBracketIndex > -1:
firstWordToBeExcluded = False
countOfSpaceChar = 0
for i in range(closingBracketIndex-1,-1,-1):
if text[i] == ' ':
if countOfSpaceChar < 4:
countOfSpaceChar += 1
continue
else:
firstWordToBeExcluded = False
break
elif text[i] == '(' and not firstWordToBeExcluded:
countOfSpaceChar = 0
firstWordToBeExcluded = True
if firstWordToBeExcluded:
firstWord = text[:closingBracketIndex + 1]
text = text[len(firstWord):].strip()
return text
def cleanData(self, text, isSummary):
"""
This method helps to clean any text by calling cleanData() from the base
class.
The CNN dataset files can have the source of the news at the start of
the file in brackets. It is wise to remove this as part of the cleaning,
as the source name doesn't help with the actual summarisation task.
Hence another method called stripOffNewsSource() is invoked before
calling the cleanData() method in the base class.
Parameters
----------
text : string
The text to be cleaned.
isSummary : boolean
Denotes whether the text to be cleaned is actual News text or
the summary.
Returns
-------
string
The cleaned text.
"""
# If the text is not a summary, then strip off the news source from
# the text
if not isSummary:
text = self.stripOffNewsSource(text)
# Invoking the standard cleanData method.
return super().cleanData(text, not isSummary)
"""
Implementation of base class for the data loader.
"""
class DataLoader:
"""
Class to help with the loading of data
"""
def __init__(self, cleanDataOp):
"""
The constructor method to do any initial value setting.
Parameters
----------
cleanDataOp : callable
The cleaning operation applied to each loaded text and summary.
Returns
-------
DataLoader class object.
"""
self.cleanDataOp = cleanDataOp
def loadSourceDocument(self, filePath):
"""
Loads the contents of a single source document
Parameters
----------
filePath : string
The file path of the source document.
Returns
-------
text : string
The loaded text.
"""
with open(filePath, encoding='utf-8') as file:
    text = file.read()
return text
def loadSourceDocuments(self, sourceDirectoryPath, refreshSourceDocs):
"""
This method helps to load the source documents.
Parameters
----------
sourceDirectoryPath : string
Directory path where the source files reside.
refreshSourceDocs : bool
If true, all the source files are read fresh from disk; otherwise an
empty structure is returned and the previously pickled data should be
loaded instead.
Returns
-------
A dictionary holding the lists of loaded texts and summaries.
"""
all_text = {}
all_text['Text'] = []
all_text['Summary'] = []
if refreshSourceDocs:
fileIndex = 1
for name in listdir(sourceDirectoryPath):
if not name.startswith('._'):
filePath = sourceDirectoryPath + '/' + name
# load document
doc = self.loadSourceDocument(filePath)
text, summary = self.retrieveTextAndSummary(doc)
all_text['Text'].append(self.cleanDataOp(text, False))
all_text['Summary'].append(self.cleanDataOp(summary, True))
print('Extracted and cleaned file number', fileIndex, '=>', name)
fileIndex += 1
return all_text
def retrieveTextAndSummary(self, document):
"""
This method helps separate the actual text and summary from the whole
CNN news document.
Parameters
----------
document : string
The content of the news story file from which the actual text and
summary needs to be separated.
Returns
-------
(string, string)
The actual text and the summaries joined into one string.
"""
# All the summaries in the document start with the '@highlight'
# phrase.
textIndex = document.find('@highlight')
# Splitting the actual text content and the summary lines
text, summaries = document[:textIndex], document[textIndex:].split('@highlight')
# Stripping all the whitespaces from each of the summary lines.
summaries = [s.strip() for s in summaries if len(s) > 0]
# Returning the actual text and the summaries joined into one string
return text, ' '.join(summaries)
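# --- Illustrative sketch (not part of the original pipeline) ---
# The split performed by retrieveTextAndSummary() above, shown on a made-up
# miniature story: everything before the first '@highlight' is the body, and
# each '@highlight' block is one summary line.
toy_doc = "Story body here.\n@highlight\nFirst point\n@highlight\nSecond point"
toy_idx = toy_doc.find('@highlight')
toy_text, toy_parts = toy_doc[:toy_idx], toy_doc[toy_idx:].split('@highlight')
toy_summaries = [s.strip() for s in toy_parts if len(s) > 0]
# toy_text.strip() == "Story body here."
# toy_summaries == ["First point", "Second point"]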
"""
Implementation of base class for the Word Embedding framework.
"""
class WordEmbeddingBase:
"""The base class for Word Embedding framework.
"""
def __init__(self, embeddingsDimension, specialTokens):
"""The constructor. Your subclass's constructor can add
other arguments.
Returns
-------
WordEmbeddingBase object.
"""
self.embeddingsDimension = embeddingsDimension
self.specialTokens = specialTokens
def constructEmbeddingsIndex(self):
"""
The method to build the embedding index using the vector file
Returns
-------
embeddingsIndex : dictionary
The word to vector data mapping.
"""
embeddingsIndex = {}
with codecs.open(self.vectorFilePath, 'r', 'utf-8') as f:
for i, line in enumerate(f):
sr = line.split()
word = sr[0]
embedding = np.asarray(sr[1:], dtype='float32')
embeddingsIndex[word] = embedding
return embeddingsIndex
def buildEmbeddingsVectorMatrix(self, wordToIntDict, embeddingsIndex):
"""
The method to build the embedding index using the vector file
Parameters
----------
embeddingDimension : number
The dimension of embedding used.
Returns
-------
embeddingMatrix : dictionary
The mapping from integer representation of the word to the
embedding vector.
"""
embeddingsMatrix = np.zeros((len(wordToIntDict), self.embeddingsDimension), dtype=np.float32)
for word, i in wordToIntDict.items():
embeddingsVector = embeddingsIndex.get(word)
if embeddingsVector is not None:
embeddingsMatrix[i] = embeddingsVector
else:
# Words not found in the embeddings index get a random vector instead
randomGeneratedEmbeddingsVector = np.array(np.random.uniform(-1.0, 1.0, self.embeddingsDimension))
embeddingsIndex[word] = randomGeneratedEmbeddingsVector
embeddingsMatrix[i] = randomGeneratedEmbeddingsVector
return embeddingsMatrix
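# --- Illustrative sketch (not part of the original pipeline) ---
# buildEmbeddingsVectorMatrix() above fills one row per vocabulary word:
# a known word gets its pretrained vector, an unknown word gets a random
# fallback vector. The same idea in miniature, with a 2-word vocabulary:
import numpy as np
toy_index = {'cat': np.array([1.0, 0.0], dtype=np.float32)}
toy_vocab = {'cat': 0, 'zzzz': 1}
toy_matrix = np.zeros((2, 2), dtype=np.float32)
for w, i in toy_vocab.items():
    vec = toy_index.get(w)
    toy_matrix[i] = vec if vec is not None else np.random.uniform(-1.0, 1.0, 2)
# toy_matrix[0] is the known 'cat' vector; toy_matrix[1] is a random fallback.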
"""
Implementation of custom class for the Glove Word Embedding framework.
"""
class GloveEmbedding(WordEmbeddingBase):
"""The custom class for Glove Word Embedding framework.
"""
def __init__(self, embeddingsDimension, specialTokens):
"""
The constructor to do any initial value setting.
Returns
-------
GloveEmbedding class object.
"""
self.vectorFilePath = 'embeddings/frameworks/glove.6B.50d.txt'
super().__init__(embeddingsDimension, specialTokens)
"""
Implementation of custom class for the Conceptnet Numberbatch's Embedding framework.
"""
class ConceptNetEmbedding(WordEmbeddingBase):
"""The custom class for Coneptnet Numberbatch's Embedding framework.
"""
def __init__(self, embeddingsDimension, specialTokens):
"""
The constructor to do any initial value setting.
Returns
-------
ConceptNetEmbedding class object.
"""
self.vectorFilePath = 'embeddings/frameworks/numberbatch-en-19.08.txt'
super().__init__(embeddingsDimension, specialTokens)
class Utils:
"""A Utility class for some static helper methods"""
@staticmethod
def pickle(filename, contents):
"""
This method pickles the contents to a file
Parameters
----------
filename : string
The pickle file location.
contents : object
The contents to be pickled.
Returns
-------
None.
"""
with open(filename, "wb") as file:
    pickle.dump(contents, file)
@staticmethod
def unPickle(filename):
"""
This method loads the contents from a pickled file
Parameters
----------
filename : string
The pickle file location.
Returns
-------
The contents from a pickled file.
"""
with open(filename, "rb") as file:
    contents = pickle.load(file)
return contents
@staticmethod
def countWords(wordsCountDict, text):
"""
This method updates the passed-in dictionary with the word to number of
occurrences mapping for the given text.
Parameters
----------
wordsCountDict : dictionary
Word to number of occurrences mapping.
text : string
The text.
Returns
-------
None.
"""
for sentence in text:
for word in sentence.split():
if word not in wordsCountDict:
wordsCountDict[word] = 1
else:
wordsCountDict[word] += 1
@staticmethod
def buildWordToNumberRepresentations(wordsCountDict, specialTokens, embeddingsIndex, thresholdForRareWordsCount):
"""
This method returns two dictionaries with a word to number mapping and another one with number to word
mapping.
Parameters
----------
wordsCountDict : dictionary
Word to number of occurrences mapping.
specialTokens: dictionary
Special tokens to number mapping
embeddingsIndex: dictionary
The dictionary which has the mapping from a word to corresponding embedding vector. This dictionary
is normally constructed from a word embeddings vector file.
thresholdForRareWordsCount : int
Only those words with frequencies above this threshold are considered if they are not part of
the embeddings index dictionary.
Returns
-------
Two dictionaries:
i. Word to Number mapping
ii. Number to Word mapping
"""
wordToIntDict = {}
intToWordDict = {}
wordIndex = 0
for word, count in wordsCountDict.items():
if count >= thresholdForRareWordsCount or word in embeddingsIndex:
wordToIntDict[word] = wordIndex
intToWordDict[wordIndex] = word
wordIndex += 1
for token in specialTokens.values():
wordToIntDict[token] = wordIndex
intToWordDict[wordIndex] = token
wordIndex += 1
return wordToIntDict, intToWordDict
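# --- Illustrative sketch (not part of the original pipeline) ---
# The vocabulary filter above keeps a word if it is frequent enough OR has a
# pretrained embedding; genuinely rare out-of-embedding words are dropped and
# will later map to the unknown token. In miniature:
toy_counts = {'the': 100, 'qwxz': 1, 'cat': 12}
toy_emb_index = {'the', 'cat'}
toy_vocab2 = {}
for w, c in toy_counts.items():
    if c >= 10 or w in toy_emb_index:
        toy_vocab2[w] = len(toy_vocab2)
# 'qwxz' is dropped: it is rare and has no pretrained embedding.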
@staticmethod
def convertTextToNumberSequence(text, wordToIntDict, unknownToken, eosToken = None, applyEos = False):
"""
This method converts a text to a sequence of numbers based on the word to integer mapping dictionary.
If a word does not exist in the word to integer mapping dictionary, a number representation of 'Unknown'
special token is used instead.
Parameters
----------
text : list
The text, as a list of sentences.
wordToIntDict : dictionary
Word to number mapping.
unknownToken: string
The 'Unknown' special token string.
eosToken: string
The 'End of Sequence' special token string.
applyEos : boolean
If true, at the end of the number sequence the number corresponding to 'End of Sequence' special token
shall be appended.
Returns
-------
i. The sequence of numbers
ii. Total words count
iii. Total unknown words count
"""
numberSequenceForText = []
wordsCount = 0
unknownWordsCount = 0
for sentence in text:
numberSequenceForSentence = []
for word in sentence.split():
wordsCount += 1
if word in wordToIntDict:
numberSequenceForSentence.append(wordToIntDict[word])
else:
numberSequenceForSentence.append(wordToIntDict[unknownToken])
unknownWordsCount += 1
if applyEos and eosToken is not None:
numberSequenceForSentence.append(wordToIntDict[eosToken])
numberSequenceForText.append(numberSequenceForSentence)
return numberSequenceForText, wordsCount, unknownWordsCount
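# --- Illustrative sketch (not part of the original pipeline) ---
# convertTextToNumberSequence() above maps each word to its integer id and
# falls back to the unknown token's id for out-of-vocabulary words:
toy_word_to_int = {'<UNK>': 0, 'cat': 1, 'sat': 2}
toy_seq = [toy_word_to_int.get(w, toy_word_to_int['<UNK>'])
           for w in 'cat sat down'.split()]
# toy_seq == [1, 2, 0]; 'down' is out of vocabulary.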
@staticmethod
def applyFilterAndSort(summariesAndTextZippedList, summaryAndTextAttributes):
"""
Filter method to filter out summary and text zipped entry based on maximum Summary Length,
maximum Text length, unknown word limit in summaries and unknown word limit in text.
Parameters
----------
summariesAndTextZippedList: list
List of zipped version of Summary and Text
summaryAndTextAttributes : dictionary
Carries:
i. The maximum number of words allowed in a Summary
ii. The maximum number of words allowed in a Text
iii. The minimum number of words required in a Summary
iv. The minimum number of words required in a Text
v. The maximum number of unknown words allowed in a Summary
vi. The maximum number of unknown words allowed in a Text
vii. The number representation of the unknown token
Returns
-------
Two lists:
i. The filtered summaries, sorted by the length of their paired texts
ii. The filtered texts, sorted shortest-first
"""
maximumSummaryLength = summaryAndTextAttributes['maximumSummaryLength']
maximumTextLength = summaryAndTextAttributes['maximumTextLength']
minimumSummaryLength = summaryAndTextAttributes['minimumSummaryLength']
minimumTextLength = summaryAndTextAttributes['minimumTextLength']
unknownsInSummaryLimit = summaryAndTextAttributes['unknownsInSummaryLimit']
unknownsInTextLimit = summaryAndTextAttributes['unknownsInTextLimit']
unknownTokenNumberRepresentation = summaryAndTextAttributes['unknownTokenNumberRepresentation']
def countUnknowns(sentence, unknownTokenNumberRepresentation):
'''Counts the number of times the UNK token appears in a sentence.'''
unknownsCount = 0
for word in sentence:
if word == unknownTokenNumberRepresentation:
unknownsCount += 1
return unknownsCount
def filterCondition(item):
"""
Filters an item based on certain conditions.
"""
summarySeq = item[0]
textSeq = item[1]
if(len(summarySeq) <= maximumSummaryLength and
len(textSeq) <= maximumTextLength and
len(summarySeq) >= minimumSummaryLength and
len(textSeq) >= minimumTextLength and
countUnknowns(summarySeq, unknownTokenNumberRepresentation) <= unknownsInSummaryLimit and
countUnknowns(textSeq, unknownTokenNumberRepresentation) <= unknownsInTextLimit):
return True
else:
return False
filteredSummariesAndText = list(filter(filterCondition, summariesAndTextZippedList))
summariesAndTextSorted = sorted(filteredSummariesAndText, key=lambda entry: len(entry[1]))
summariesAndTextSorted = list(zip(*summariesAndTextSorted))
return list(summariesAndTextSorted[0]), list(summariesAndTextSorted[1])
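# --- Illustrative sketch (not part of the original pipeline) ---
# After filtering, applyFilterAndSort() above sorts the (summary, text) pairs
# by text length and unzips them. Sorting shortest-first keeps similar-length
# sequences together, so each batch needs less padding:
toy_pairs = [([1], [1, 2, 3]), ([2], [4])]
toy_pairs = sorted(toy_pairs, key=lambda p: len(p[1]))
toy_sums, toy_texts = map(list, zip(*toy_pairs))
# toy_texts == [[4], [1, 2, 3]] -- ordered shortest-first.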
@staticmethod
def computeSequenceLengthsIntoDataFrame(textToNumberSequences):
'''Create a data frame of the sentence lengths from a text'''
lengths = []
for textToNumberSequence in textToNumberSequences:
lengths.append(len(textToNumberSequence))
return pd.DataFrame(lengths, columns=['counts'])
class Seq2SeqModel:
"""
The implementation for Sequence to sequence modelling
"""
def __init__(self):
"""The constructor. Your subclass's constructor can add
other arguments."""
def createModelInputsPlaceholders(self):
inputData = tf.placeholder(tf.int32, [None, None], name='inputData')
targetData = tf.placeholder(tf.int32, [None, None], name='targetData')
learningRate = tf.placeholder(tf.float32, name='learningRate')
dropoutRate = tf.placeholder(tf.float32, name='dropoutRate')
inputSummaryLengths = tf.placeholder(tf.int32, (None,), name='inputSummaryLengths')
maximumSummaryLength = tf.reduce_max(inputSummaryLengths, name='maximumSummaryLength')
inputTextLengths = tf.placeholder(tf.int32, (None,), name='inputTextLengths')
return inputData, targetData, learningRate, dropoutRate, inputSummaryLengths, maximumSummaryLength, inputTextLengths
def createLSTMCell(self, rnnPerCellUnitsCount, requireDropoutLayer = False, dropoutRate = 0.95):
# Creating the RNN cell
cell = tf.contrib.rnn.LSTMCell(rnnPerCellUnitsCount,
initializer=tf.random_uniform_initializer(-0.1, 0.1, seed=2))
# Attaching a dropout layer for the cell if required
if requireDropoutLayer:
cell = tf.contrib.rnn.DropoutWrapper(cell, input_keep_prob = dropoutRate)
return cell
def doEncoding(self, rnnPerCellUnitsCount, inputTextLengths, rnnCellsCount, embeddedEncoderInput, dropoutRate):
"""
This is the implementation of an encoding process.
"""
for rnnCellIndex in range(rnnCellsCount):
with tf.variable_scope('encoder_{}'.format(rnnCellIndex)):
# Creating the forward RNN cell for the Bi-directional RNN
forwardCell = self.createLSTMCell(rnnPerCellUnitsCount,
requireDropoutLayer = True,
dropoutRate = dropoutRate)
# Creating the backward RNN cell for the Bi-directional RNN
backwardCell = self.createLSTMCell(rnnPerCellUnitsCount,
requireDropoutLayer = True,
dropoutRate = dropoutRate)
# Connecting the forward and backward cells to create a Bi-directional RNN
encoderOutput, encoderStates = tf.nn.bidirectional_dynamic_rnn(forwardCell,
backwardCell,
embeddedEncoderInput,
inputTextLengths,
dtype=tf.float32)
encoderOutput = tf.concat(encoderOutput, 2)
# The current layer's output is being fed into next layer's input
embeddedEncoderInput = encoderOutput
return encoderOutput, encoderStates
def processDecoderInput(self, targetData, wordToIntDict, batchSize, startToken):
"""
Remove the last word id from each batch and concatenate the id of the
STARTOFSEQUENCE token to the beginning of each batch.
"""
ending = tf.strided_slice(targetData, [0, 0], [batchSize, -1], [1, 1])
decoderInput = tf.concat([tf.fill([batchSize, 1], wordToIntDict[startToken]), ending], 1)
return decoderInput
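# --- Illustrative sketch (not part of the original pipeline) ---
# processDecoderInput() above implements the standard teacher-forcing shift:
# drop the last id of each target row and prepend the <GO> id. The same
# operation with plain lists instead of tf.strided_slice/tf.concat:
toy_go_id = 1
toy_targets = [[10, 11, 12]]
toy_decoder_input = [[toy_go_id] + row[:-1] for row in toy_targets]
# toy_decoder_input == [[1, 10, 11]]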
def processTrainingLayerForDecoder(self, embeddedDecoderInput, inputSummaryLengths, decoderCell,
outputLayer, totalWordsCountInVocab, maximumSummaryLength,
batchSize):
"""
This is the implementation for a Training decoding layer.
"""
trainingHelper = tf.contrib.seq2seq.TrainingHelper(inputs = embeddedDecoderInput,
sequence_length = inputSummaryLengths,
time_major = False)
trainingDecoder = tf.contrib.seq2seq.BasicDecoder(cell = decoderCell,
helper = trainingHelper,
initial_state = decoderCell.zero_state(
dtype=tf.float32, batch_size=batchSize),
output_layer = outputLayer)
trainingLogits = tf.contrib.seq2seq.dynamic_decode(trainingDecoder,
output_time_major = False,
impute_finished = True,
maximum_iterations = maximumSummaryLength)
return trainingLogits
def processInferenceLayerForDecoder(self, embeddingsMatrix, startOfSequenceToken, endOfSequenceToken,
decoderCell, outputLayer, maximumSummaryLength, batchSize):
"""
This is the implementation for an Inference decoding layer.
"""
startTokens = tf.tile(tf.constant([startOfSequenceToken], dtype=tf.int32),
[batchSize],
name='start_tokens')
inferenceHelper = tf.contrib.seq2seq.GreedyEmbeddingHelper(embeddingsMatrix,
startTokens,
endOfSequenceToken)
inferenceDecoder = tf.contrib.seq2seq.BasicDecoder(decoderCell,
inferenceHelper,
decoderCell.zero_state(
dtype=tf.float32, batch_size=batchSize),
outputLayer)
inferenceLogits = tf.contrib.seq2seq.dynamic_decode(inferenceDecoder,
output_time_major=False,
impute_finished=True,
maximum_iterations=maximumSummaryLength)
return inferenceLogits
def doDecoding(self, embeddedDecoderInput, embeddingsMatrix, encoderOutput, encoderStates,
totalWordsCountInVocab, inputTextLengths, inputSummaryLengths, maximumSummaryLength,
rnnPerCellUnitsCount, wordToIntDict, dropoutRate, batchSize, rnnCellsCount,
enableAttention = True):
# Creating the RNN cell for the decoder
decoderCell = tf.contrib.rnn.MultiRNNCell([self.createLSTMCell(rnnPerCellUnitsCount, requireDropoutLayer = True, dropoutRate = dropoutRate) for _ in range(rnnCellsCount)])
# If an additional Attention layer needs to be applied
if enableAttention:
attentionMechanism = tf.contrib.seq2seq.BahdanauAttention(rnnPerCellUnitsCount,
encoderOutput,
inputTextLengths,
normalize = False,
name = 'BahdanauAttention')
decoderCell = tf.contrib.seq2seq.AttentionWrapper(decoderCell, attentionMechanism, rnnPerCellUnitsCount)
outputLayer = Dense(totalWordsCountInVocab,
kernel_initializer=tf.truncated_normal_initializer(mean=0.0, stddev=0.1))
with tf.variable_scope("decode"):
trainingLogits = self.processTrainingLayerForDecoder(embeddedDecoderInput,
inputSummaryLengths,
decoderCell,
outputLayer,
totalWordsCountInVocab,
maximumSummaryLength,
batchSize)
with tf.variable_scope("decode", reuse=True):
inferenceLogits = self.processInferenceLayerForDecoder(embeddingsMatrix,
wordToIntDict[embedding.specialTokens['STARTOFSEQUENCE']],
wordToIntDict[embedding.specialTokens['ENDOFSEQUENCE']],
decoderCell,
outputLayer,
maximumSummaryLength,
batchSize)
return trainingLogits, inferenceLogits
def process(self, inputData, targetData, dropoutRate, inputTextLengths, inputSummaryLengths,
maximumSummaryLength, totalWordsCountInVocab, rnnPerCellUnitsCount,
rnnCellsCount, wordToIntDict, batchSize, embeddingsMatrix):
# Performing parallel lookups of inputData on the embeddingMatrix
embeddedEncoderInput = tf.nn.embedding_lookup(embeddingsMatrix, inputData)
# Performing the encoding
encoderOutput, encoderStates = self.doEncoding(rnnPerCellUnitsCount,
inputTextLengths,
rnnCellsCount,
embeddedEncoderInput,
dropoutRate)
# Process the decoder input before passing to decoding layer
decoderInput = self.processDecoderInput(targetData,
wordToIntDict,
batchSize,
embedding.specialTokens['STARTOFSEQUENCE'])
# Performing parallel lookups of decoder input on the embeddingMatrix
embeddedDecoderInput = tf.nn.embedding_lookup(embeddingsMatrix, decoderInput)
# Performing the decoding
trainingLogits, inferenceLogits = self.doDecoding(embeddedDecoderInput,
embeddingsMatrix,
encoderOutput,
encoderStates,
totalWordsCountInVocab,
inputTextLengths,
inputSummaryLengths,
maximumSummaryLength,
rnnPerCellUnitsCount,
wordToIntDict,
dropoutRate,
batchSize,
rnnCellsCount)
return trainingLogits, inferenceLogits
class BatchDataGenerator:
"""
A class which helps in the generation of batches of data
"""
@staticmethod
def generateBatches(summaries, texts, batchSize, paddingToken):
def padBatchContents(contents, paddingToken):
maxContentLength = max([len(content) for content in contents])
return [content + [paddingToken] * (maxContentLength - len(content)) for content in contents]
possibleBatchCount = len(texts)//batchSize
for batchIndex in range(0, possibleBatchCount):
batchStartPoint = batchIndex * batchSize
summariesBatch = summaries[batchStartPoint: batchStartPoint + batchSize]
textBatch = texts[batchStartPoint: batchStartPoint + batchSize]
paddedSummariesBatch = np.array(padBatchContents(summariesBatch, paddingToken))
paddedTextBatch = np.array(padBatchContents(textBatch, paddingToken))
# Need the lengths for the lengths parameters
paddedSummariesLength = []
for summary in paddedSummariesBatch:
paddedSummariesLength.append(len(summary))
paddedTextLength = []
for text in paddedTextBatch:
paddedTextLength.append(len(text))
yield paddedSummariesBatch, paddedTextBatch, paddedSummariesLength, paddedTextLength
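# --- Illustrative sketch (not part of the original pipeline) ---
# The padBatchContents() helper above right-pads every sequence in a batch to
# the length of the longest one, so the batch can become a rectangular array:
toy_pad = 0
toy_batch = [[5, 6], [7]]
toy_maxlen = max(len(s) for s in toy_batch)
toy_padded = [s + [toy_pad] * (toy_maxlen - len(s)) for s in toy_batch]
# toy_padded == [[5, 6], [7, 0]]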
# load text
sourceDirectoryPath = '../data/cnn/stories'
refreshSourceDocs = False
pickledFilePath = '../data/cnn_dataset.pkl'
if refreshSourceDocs:
preprocessor = CnnPreprocessor()
dataLoader = DataLoader(preprocessor.cleanData)
loadedContent = dataLoader.loadSourceDocuments(sourceDirectoryPath, refreshSourceDocs)
# save to file
Utils.pickle(pickledFilePath, loadedContent)
print('Pickled the cleaned data into the file:', pickledFilePath)
# load from file
news = Utils.unPickle(pickledFilePath)
print('Loaded Texts %d' % len(news['Text']))
cleanedText = news['Text']
cleanedSummaries = news['Summary']
# Creating the word embedding class
embeddingsDimension = 50
specialTokens = {
'UNKNOWN': '<UNK>',
'PADDING': '<PAD>',
'ENDOFSEQUENCE': '<EOS>',
'STARTOFSEQUENCE': '<GO>'
}
embedding = GloveEmbedding(embeddingsDimension, specialTokens)
# Creating a dictionary with word to frequency mapping
wordsCountDict = {}
Utils.countWords(wordsCountDict, cleanedText)
Utils.countWords(wordsCountDict, cleanedSummaries)
print("Size of Vocabulary:", len(wordsCountDict))
# Constructing a word embeddings index
# This is simply a word to word vector mapping dictionary
embeddingsIndex = embedding.constructEmbeddingsIndex()
print(len(embeddingsIndex))
# This value defines the threshold of the minimum number of occurrences of an Unknown word for that word
# to be included in the word to number representation dictionary.
thresholdForRareWordsCount = 10
# Building the word to number representation dictionary for representing a big text as a sequence of numbers
# when passed to the RNN.
# Along with this, a number to word representation dictionary is also built, which helps convert the final
# output sequence of numbers back into the corresponding predicted summary text.
wordToIntDict, intToWordDict = Utils.buildWordToNumberRepresentations(
wordsCountDict, embedding.specialTokens, embeddingsIndex, thresholdForRareWordsCount
)
# Building the Embeddings vector matrix which is basically a two dimensional matrix with
# Number of rows = number of words in wordtoIntDict above
# Number of columns = the dimensionality of chosen word embedding framework (each word vector will be of this size)
# Also, if there are any words in wordToIntDict which are not in the embeddingsIndex constructed from
# the word embedding file, a random word vector shall be generated and inserted as a row in the
# Embeddings vector matrix.
embeddingsMatrix = embedding.buildEmbeddingsVectorMatrix(wordToIntDict, embeddingsIndex)
print('Total number of embeddings:', len(embeddingsMatrix))
# Converting all the summaries to corresponding number sequences
summariesToNumberSequence, summaryWordsCount, summaryUnknownWordsCount = Utils.convertTextToNumberSequence(
cleanedSummaries,
wordToIntDict,
embedding.specialTokens['UNKNOWN']
)
# Converting all the text to corresponding number sequences
textToNumberSequence, textWordsCount, textUnknownWordsCount = Utils.convertTextToNumberSequence(
cleanedText,
wordToIntDict,
embedding.specialTokens['UNKNOWN'],
eosToken = embedding.specialTokens['ENDOFSEQUENCE'],
applyEos = True
)
totalWordsCount = summaryWordsCount + textWordsCount
totalUnknownWordsCount = summaryUnknownWordsCount + textUnknownWordsCount
unknownPercentage = round(totalUnknownWordsCount/totalWordsCount,4) * 100
print("Total number of words:", totalWordsCount)
print("Total number of UNKs:", totalUnknownWordsCount)
print("Percent of words that are UNK: {}%".format(unknownPercentage))
lengthSummaries = Utils.computeSequenceLengthsIntoDataFrame(summariesToNumberSequence)
lengthText = Utils.computeSequenceLengthsIntoDataFrame(textToNumberSequence)
# Inspect the length of texts
print(np.percentile(lengthText.counts, 70))
print(np.percentile(lengthText.counts, 90))
print(np.percentile(lengthText.counts, 95))
print(np.percentile(lengthText.counts, 99))
# Inspect the length of summaries
print(np.percentile(lengthSummaries.counts, 70))
print(np.percentile(lengthSummaries.counts, 90))
print(np.percentile(lengthSummaries.counts, 95))
print(np.percentile(lengthSummaries.counts, 99.5))
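# --- Illustrative note (not part of the original pipeline) ---
# The hard caps below (maximumTextLength, maximumSummaryLength) are chosen
# from percentiles like those printed above, so that only the longest
# outliers are filtered out. np.percentile interpolates linearly:
import numpy as np
toy_lengths = [3, 5, 8, 13, 21, 34, 55, 89]
toy_p90 = np.percentile(toy_lengths, 90)
# toy_p90 is about 65.2: 90% of these toy lengths fall at or below it.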
maximumTextLength = 464
maximumSummaryLength = 67
minimumTextLength = 2
minimumSummaryLength = 2
unknownsInSummaryLimit = 4
unknownsInTextLimit = 10
summariesAndTextSequence = list(zip(summariesToNumberSequence, textToNumberSequence))
sortedSummaries, sortedText = Utils.applyFilterAndSort(summariesAndTextSequence, {
'maximumTextLength': maximumTextLength,
'maximumSummaryLength': maximumSummaryLength,
'minimumTextLength': minimumTextLength,
'minimumSummaryLength': minimumSummaryLength,
'unknownsInSummaryLimit': unknownsInSummaryLimit,
'unknownsInTextLimit': unknownsInTextLimit,
'unknownTokenNumberRepresentation': embedding.specialTokens['UNKNOWN']
})
# Compare lengths to ensure they match
print(len(sortedSummaries))
print(len(sortedText))
Utils.pickle("../data/sorted_summaries.pkl",sortedSummaries)
Utils.pickle("../data/sorted_text.pkl",sortedText)
Utils.pickle("../data/embeddings_matrix.pkl",embeddingsMatrix)
Utils.pickle("../data/word_to_int.pkl",wordToIntDict)
Utils.pickle("../data/int_to_word.pkl",intToWordDict)
sortedSummaries = Utils.unPickle("../data/sorted_summaries.pkl")
sortedText = Utils.unPickle("../data/sorted_text.pkl")
embeddingsMatrix = Utils.unPickle("../data/embeddings_matrix.pkl")
wordToIntDict = Utils.unPickle("../data/word_to_int.pkl")
intToWordDict = Utils.unPickle("../data/int_to_word.pkl")
# Set the Hyperparameters
epochs = 100
batchSize = 15
rnnPerCellUnitsCount = 128
rnnCellsCount = 2
learningRate = 0.001
dropoutRate = 0.95
seq2seqModel = Seq2SeqModel()
# Build the graph
train_graph = tf.Graph()
# Set the graph to default to ensure that it is ready for training
with train_graph.as_default():
# Load the model inputs
input_data, targets, lr, dropout_rate, summary_length, max_summary_length, text_length = seq2seqModel.createModelInputsPlaceholders()
# Create the training and inference logits
trainingLogits, inferenceLogits = seq2seqModel.process(tf.reverse(input_data, [-1]),
targets,
dropout_rate,
text_length,
summary_length,
max_summary_length,
len(wordToIntDict)+1,
rnnPerCellUnitsCount,
rnnCellsCount,
wordToIntDict,
batchSize,
embeddingsMatrix)
# Create tensors for the training logits and inference logits
trainingLogits = tf.identity(trainingLogits[0].rnn_output, 'logits')
inferenceLogits = tf.identity(inferenceLogits[0].sample_id, name='predictions')
# Create the weights for sequence_loss; they should be True up to each summary's length since each batch is padded
masks = tf.sequence_mask(summary_length, max_summary_length, dtype=tf.float32, name='masks')
with tf.name_scope("optimization"):
# Loss function
cost = tf.contrib.seq2seq.sequence_loss(
trainingLogits,
targets,
masks)
# Optimizer
optimizer = tf.train.AdamOptimizer(learningRate)
# Gradient Clipping
gradients = optimizer.compute_gradients(cost)
capped_gradients = [(tf.clip_by_value(grad, -5., 5.), var) for grad, var in gradients if grad is not None]
train_op = optimizer.apply_gradients(capped_gradients)
print("Graph is built.")
graph_location = "../modelrun/graph"
print(graph_location)
train_writer = tf.summary.FileWriter(graph_location)
train_writer.add_graph(train_graph)
# Subset the data for training
start = 150
end = start + 45000
print(len(sortedSummaries))
sampledSortedSummaries = sortedSummaries[start:end:15]
sampledSortedText = sortedText[start:end:15]
print(len(sampledSortedSummaries))
print("The shortest text length:", len(sampledSortedText[0]))
print("The longest text length:",len(sampledSortedText[-1]))
# Train the Model
learning_rate_decay = 0.95
min_learning_rate = 0.0005
display_step = 10 # Check training loss after every 10 batches
stop_early = 0
stop = 2 # If the update loss does not decrease in 2 consecutive update checks, stop training
per_epoch = 2 # Make 2 update checks per epoch
update_check = (len(sampledSortedText)//batchSize//per_epoch)
update_loss = 0
batch_loss = 0
summary_update_loss = [] # Record the update losses for saving improvements in the model
paddingToken = wordToIntDict[embedding.specialTokens['PADDING']]
checkpoint = "../modelrun/best_model.ckpt"
with tf.Session(graph=train_graph) as sess:
    sess.run(tf.global_variables_initializer())
    # If we want to continue training a previous session
    #loader = tf.train.import_meta_graph("./" + checkpoint + '.meta')
    #loader.restore(sess, checkpoint)
    for epoch_i in range(1, epochs+1):
        update_loss = 0
        batch_loss = 0
        for batch_i, (summaries_batch, texts_batch, summaries_lengths, texts_lengths) in enumerate(
                BatchDataGenerator.generateBatches(sampledSortedSummaries, sampledSortedText, batchSize, paddingToken)):
            start_time = time.time()
            _, loss = sess.run(
                [train_op, cost],
                {input_data: texts_batch,
                 targets: summaries_batch,
                 lr: learningRate,
                 summary_length: summaries_lengths,
                 text_length: texts_lengths,
                 dropout_rate: dropoutRate})
            batch_loss += loss
            update_loss += loss
            end_time = time.time()
            batch_time = end_time - start_time
            if (batch_i+1) % display_step == 0 and batch_i > 0:
                print('Epoch {:>3}/{} Batch {:>4}/{} - Loss: {:>6.3f}, Seconds: {:>4.2f}'
                      .format(epoch_i,
                              epochs,
                              batch_i+1,
                              len(sampledSortedText) // batchSize,
                              batch_loss / display_step,
                              batch_time*display_step))
                batch_loss = 0
            if (batch_i+1) % update_check == 0 and batch_i > 0:
                print("Average loss for this update:", round(update_loss/update_check, 3))
                summary_update_loss.append(update_loss)
                # If the update loss is at a new minimum, save the model
                if update_loss <= min(summary_update_loss):
                    print('New Record!')
                    stop_early = 0
                    saver = tf.train.Saver()
                    saver.save(sess, checkpoint)
                else:
                    print("No Improvement.")
                    stop_early += 1
                    if stop_early == stop:
                        break
                update_loss = 0
        # Reduce learning rate, but not below its minimum value
        learningRate *= learning_rate_decay
        if learningRate < min_learning_rate:
            learningRate = min_learning_rate
        if stop_early == stop:
            print("Stopping Training.")
            break
newsIndex = 165
totalNewsCount = len(textToNumberSequence)
testNews = [textToNumberSequence[newsIndex]]
maxSummaryLength = len(news['Summary'][newsIndex])
print(testNews)
checkpoint = "./best_model.ckpt"
loaded_graph = tf.Graph()
with tf.Session(graph=loaded_graph) as sess:
    # Load the saved model
    loader = tf.train.import_meta_graph(checkpoint + '.meta')
    loader.restore(sess, checkpoint)
    input_data = loaded_graph.get_tensor_by_name('inputData:0')
    logits = loaded_graph.get_tensor_by_name('predictions:0')
    text_length = loaded_graph.get_tensor_by_name('inputTextLengths:0')
    summary_length = loaded_graph.get_tensor_by_name('inputSummaryLengths:0')
    dropout_rate = loaded_graph.get_tensor_by_name('dropoutRate:0')
    # Multiply by batch_size to match the model's input parameters
    for i, text in enumerate(testNews):
        answer_logits = sess.run(logits, {input_data: [text]*batchSize,
                                          summary_length: [maxSummaryLength],  #summary_length: [np.random.randint(5,8)],
                                          text_length: [len(text)]*batchSize,
                                          dropout_rate: 1.0})[0]
# Remove the padding from the summaries
pad = wordToIntDict["<PAD>"]
#print('- News:\n\r {}\n\r\n\r'.format(" ".join([intToWordDict[j] for j in testNews[i] if j != pad])))
print('- News:\n\r {}\n\r\n\r'.format(news['Text'][newsIndex]))
print('- Actual Summary:\n\r {}\n\r\n\r'.format(news['Summary'][newsIndex]))
print('- Predicted Summary:\n\r {}\n\r\n\r'.format(" ".join([intToWordDict[j] for j in answer_logits if j != pad])))
```
| github_jupyter |
```
import networkx as nx
from custom import load_data as cf
from networkx.algorithms import bipartite
from nxviz import CircosPlot
import numpy as np
import matplotlib.pyplot as plt
%load_ext autoreload
%autoreload 2
%matplotlib inline
%config InlineBackend.figure_format = 'retina'
```
# Introduction
Bipartite graphs are graphs that have two (bi-) partitions (-partite) of nodes. Nodes within each partition are not allowed to be connected to one another; rather, they can only be connected to nodes in the other partition.
Bipartite graphs can be useful for modelling relations between two sets of entities. We will explore the construction and analysis of bipartite graphs here.

Let's load a [crime data](http://konect.uni-koblenz.de/networks/moreno_crime) bipartite graph and quickly explore it.
> This bipartite network contains persons who appeared in at least one crime case as either a suspect, a victim, a witness or both a suspect and victim at the same time. A left node represents a person and a right node represents a crime. An edge between two nodes shows that the left node was involved in the crime represented by the right node.
```
G = cf.load_crime_network()
list(G.edges(data=True))[0:5]
list(G.nodes(data=True))[0:10]
```
# Projections
Bipartite graphs can be projected down to either one of their two node partitions. For example, we can generate a person-person graph from the person-crime graph by declaring that any two person nodes that share a crime node are joined by an edge.

## Exercise
Find the bipartite projection function in the NetworkX `bipartite` module [docs](https://networkx.github.io/documentation/networkx-1.10/reference/algorithms.bipartite.html), and use it to obtain the `unipartite` projection of the bipartite graph. (5 min.)
```
person_nodes = [n for n in G.nodes() if G.nodes[n]['bipartite'] == 'person']
pG = bipartite.projection.projected_graph(G, person_nodes)
list(pG.nodes(data=True))[0:5]
```
## Exercise
Try visualizing the person-person crime network by using a Circos plot. Ensure that the nodes are grouped by gender and then by number of connections. (5 min.)
Again, recapping the Circos Plot API:
```python
c = CircosPlot(graph_object, node_color='metadata_key1', node_grouping='metadata_key2', node_order='metadata_key3')
c.draw()
plt.show() # or plt.savefig('...')
```
```
for n, d in pG.nodes(data=True):
    pG.nodes[n]['connectivity'] = len(list(pG.neighbors(n)))
c = CircosPlot(pG, node_color='gender', node_grouping='gender', node_order='connectivity')
c.draw()
plt.savefig('images/crime-person.png', dpi=300)
```
## Exercise
Use a similar logic to extract crime links. (2 min.)
```
crime_nodes = [n for n in G.nodes() if G.nodes[n]['bipartite'] == 'crime']
cG = bipartite.projection.projected_graph(G, crime_nodes)
```
## Exercise
Can you plot how the crimes are connected, using a Circos plot? Try ordering it by number of connections. (5 min.)
```
for n in cG.nodes():
    cG.nodes[n]['connectivity'] = float(len(list(cG.neighbors(n))))
c = CircosPlot(cG, node_order='connectivity', node_color='connectivity')
c.draw()
plt.savefig('images/crime-crime.png', dpi=300)
```
## Exercise
NetworkX also implements centrality measures for bipartite graphs, which allows you to obtain their metrics without first converting to a particular projection. This is useful for exploratory data analysis.
Try the following challenges, referring to the [API documentation](https://networkx.github.io/documentation/networkx-1.9/reference/algorithms.bipartite.html) to help you:
1. Which crimes involve the most people?
1. Which people are involved in the most crimes?
Exercise total: 5 min.
```
# Degree Centrality
bpdc = bipartite.degree_centrality(G, person_nodes)
sorted(bpdc.items(), key=lambda x: x[1], reverse=True)[0:5]
bpdc['p1']
nx.degree_centrality(G)['p1']
```
```
import numpy as np
from scipy.stats import binom, norm, multinomial
from scipy.special import comb
```
### Solution 1
```
# Initialize variables
n = 25
p = 0.1
## Direct calculation
# a)
probs = [(comb(n, i) * (p**i) * ((1-p)**(n-i))) for i in range(4)]
prob = 1 - sum(probs)
print(f"a) At least 4 cars are black: {prob:.4f}")
# b)
probs = [(comb(n, i) * (p**i) * ((1-p)**(n-i))) for i in range(7)]
prob = sum(probs)
print(f"b) At most 6 cars are black: {prob:.4f}")
# c)
probs = [(comb(n, i) * (p**i) * ((1-p)**(n-i))) for i in range(4)]
prob = 1 - sum(probs)
print(f"c) 4 or more cars are black: {prob:.4f}")
# d)
prob = comb(n, 4) * (p**4) * ((1-p)**(n-4))
print(f"d) Exactly 4 cars are black: {prob:.4f}")
# e)
probs = [(comb(n, i) * (p**i) * ((1-p)**(n-i))) for i in (3, 4)]
prob = sum(probs)
print(f"e) Between 3 and 4 cars are black: {prob:.4f}")
## Using scipy's pmf/cdf functions
# a)
prob = 1 - binom.cdf(3, 25, 0.1)
print(f"a) At least 4 cars are black: {prob:.4f}")
# b)
prob = binom.cdf(6, 25, 0.1)
print(f"b) At most 6 cars are black: {prob:.4f}")
# c)
prob = 1 - binom.cdf(3, 25, 0.1)
print(f"c) 4 or more cars are black: {prob:.4f}")
# d)
prob = binom.pmf(4, 25, 0.1)
print(f"d) Exactly 4 cars are black: {prob:.4f}")
# e)
prob = binom.pmf(3, 25, 0.1) + binom.pmf(4, 25, 0.1)
print(f"e) Between 3 and 4 cars are black: {prob:.4f}")
```
* * *
### Solution 2
```
## Direct calculation
# a)
prob = 0.25**5
print(f"a) Probability that a student answers every question correctly: {prob:.4f}")
# b)
prob = (1 - 0.25)**5
print(f"b) Probability that a student gets every question wrong: {prob:.4f}")
## Using scipy's pmf function
# a)
prob = binom.pmf(5, 5, 0.25)
print(f"a) Probability that a student answers every question correctly: {prob:.4f}")
# b)
prob = binom.pmf(0, 5, 0.25)
print(f"b) Probability that a student gets every question wrong: {prob:.4f}")
```
* * *
### Solution 3
```
## Direct calculation
# a)
prob = 1 - (0.5**3)
print(f"a) Probability that at least one child is a daughter: {prob:.4f}")
# b)
daughter_2 = (0.5**3) * comb(3, 2)
daughter_3 = (0.5**3) * comb(3, 3)
prob = daughter_2 + daughter_3
print(f"b) Probability that at least two children are daughters: {prob:.4f}")
## Using scipy's cdf function
# b)
prob = 1 - binom.cdf(1, 3, 0.5)
print(f"b) Probability that at least two children are daughters: {prob:.4f}")
## Computation via simulation
two_more = 0
n = 100000
for _ in range(n):
    daughter_count = np.random.binomial(3, 0.5)
    if daughter_count >= 2:
        two_more += 1
p = two_more / n
print(f"Probability that at least two children are daughters: {p:.4f}")
```
* * *
### Solution 4
```
## Using scipy
mean_kor = binom.mean(100, 0.3)
var_kor = binom.var(100, 0.3)
print(f"a) Mean number of students out of 100 choosing Korean: {mean_kor}, variance: {var_kor}")
mean_not_math = binom.mean(100, (1-0.5))
var_not_math = binom.var(100, (1-0.5))
print(f"b) Mean number of students out of 100 choosing a subject other than math: {mean_not_math}, variance: {var_not_math}")
## Using the binomial mean and variance formulas: np, np(1-p)
mean_kor = 100*0.3
var_kor = 100*0.3*(1-0.3)
print(f"a) Mean number of students out of 100 choosing Korean: {mean_kor}, variance: {var_kor}")
mean_not_math = 100*0.5
var_not_math = 100*0.5*(1-0.5)
print(f"b) Mean number of students out of 100 choosing a subject other than math: {mean_not_math}, variance: {var_not_math}")
## Computation via simulation
kor = []
math = []
for _ in range(10000):
    samples = multinomial.rvs(1, [0.3, 0.2, 0.5], 100)
    kor.append(sum(sample[0] == 1 for sample in samples))
    math.append(sum(sample[2] == 0 for sample in samples))
mean_kor = np.mean(kor)
var_kor = np.var(kor)
print(f"a) Mean number of students out of 100 choosing Korean: {mean_kor:.2f}, variance: {var_kor:.2f}")
mean_not_math = np.mean(math)
var_not_math = np.var(math)
print(f"b) Mean number of students out of 100 choosing a subject other than math: {mean_not_math:.2f}, variance: {var_not_math:.2f}")
```
# Introduction
In the [Intro to SQL micro-course](https://www.kaggle.com/learn/intro-to-sql), you learned how to use [**INNER JOIN**](https://www.kaggle.com/dansbecker/joining-data) to consolidate information from two different tables. Now you'll learn about a few more types of **JOIN**, along with how to use **UNIONs** to pull information from multiple tables.
Along the way, we'll work with two imaginary tables, called `owners` and `pets`.

Each row of the `owners` table identifies a different pet owner, where the `ID` column is a unique identifier. The `Pet_ID` column (in the `owners` table) contains the ID for the pet that belongs to the owner (this number matches the ID for the pet from the `pets` table).
For example,
- the `pets` table shows that Dr. Harris Bonkers is the pet with ID 1.
- The `owners` table shows that Aubrey Little is the owner of the pet with ID 1.
Putting these two facts together, Dr. Harris Bonkers is owned by Aubrey Little. Likewise, since Veronica Dunn does not have a corresponding `Pet_ID`, she does not have a pet. And, since 5 does not appear in the `Pet_ID` column, Maisie does not have an owner.
# JOINs
Recall that we can use an **INNER JOIN** to pull rows from both tables where the value in the `Pet_ID` column in the `owners` table has a match in the `ID` column of the `pets` table.

In this case, Veronica Dunn and Maisie are not included in the results. But what if we instead want to create a table containing all pets, regardless of whether they have owners? Or, what if we want to combine all of the rows in both tables? In these cases, we need only use a different type of **JOIN**.
For instance, to create a table containing all rows from the `owners` table, we use a **LEFT JOIN**. In this case, "left" refers to the table that appears before the **JOIN** in the query. ("Right" refers to the table that is after the **JOIN**.)

Replacing **INNER JOIN** in the query above with **LEFT JOIN** returns all rows where the two tables have matching entries, along with all of the rows in the left table (whether there is a match or not).
If we instead use a **RIGHT JOIN**, we get the matching rows, along with all rows in the right table (whether there is a match or not).
Finally, a **FULL JOIN** returns all rows from both tables. Note that in general, any row that does not have a match in both tables will have NULL entries for the missing values. You can see this in the image below.

# UNIONs
As you've seen, **JOINs** horizontally combine results from different tables. If you instead would like to vertically concatenate columns, you can do so with a **UNION**. The example query below combines the `Age` columns from both tables.

Note that with a **UNION**, the data types of both columns must be the same, but the column names can be different. (So, for instance, we cannot take the **UNION** of the `Age` column from the `owners` table and the `Pet_Name` column from the `pets` table.)
We use **UNION ALL** to include duplicate values - you'll notice that `9` appears in both the `owners` table and the `pets` table, and shows up twice in the concatenated results. If you'd like to drop duplicate values, you need only change **UNION ALL** in the query to **UNION DISTINCT**.
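The UNION behavior can likewise be sketched with `sqlite3` (SQLite's plain `UNION` corresponds to BigQuery's `UNION DISTINCT`; the ages below are made up):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE owners (age INTEGER);
    CREATE TABLE pets (age INTEGER);
    INSERT INTO owners VALUES (9), (25);
    INSERT INTO pets VALUES (9), (2);
""")

# UNION ALL keeps duplicates; plain UNION (i.e. UNION DISTINCT) drops them.
union_all = conn.execute(
    "SELECT age FROM owners UNION ALL SELECT age FROM pets").fetchall()
union_distinct = conn.execute(
    "SELECT age FROM owners UNION SELECT age FROM pets").fetchall()
print(sorted(union_all))       # [(2,), (9,), (9,), (25,)] -- 9 appears twice
print(sorted(union_distinct))  # [(2,), (9,), (25,)]
```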
# Example
We'll work with the [Hacker News](https://www.kaggle.com/hacker-news/hacker-news) dataset. We begin by reviewing the first several rows of the `comments` table. (_The corresponding code is hidden, but you can un-hide it by clicking on the "Code" button below._)
```
#$HIDE_INPUT$
from google.cloud import bigquery
# Create a "Client" object
client = bigquery.Client()
# Construct a reference to the "hacker_news" dataset
dataset_ref = client.dataset("hacker_news", project="bigquery-public-data")
# API request - fetch the dataset
dataset = client.get_dataset(dataset_ref)
# Construct a reference to the "comments" table
table_ref = dataset_ref.table("comments")
# API request - fetch the table
table = client.get_table(table_ref)
# Preview the first five lines of the table
client.list_rows(table, max_results=5).to_dataframe()
```
You'll also work with the `stories` table.
```
# Construct a reference to the "stories" table
table_ref = dataset_ref.table("stories")
# API request - fetch the table
table = client.get_table(table_ref)
# Preview the first five lines of the table
client.list_rows(table, max_results=5).to_dataframe()
```
Since you are already familiar with **JOINs** from the [Intro to SQL micro-course](https://www.kaggle.com/learn/intro-to-sql), we'll work with a relatively complex example of a JOIN that uses a [common table expression (CTE)](https://www.kaggle.com/dansbecker/as-with).
The query below pulls information from the `stories` and `comments` tables to create a table showing all stories posted on January 1, 2012, along with the corresponding number of comments. We use a **LEFT JOIN** so that the results include stories that didn't receive any comments.
```
# Query to select all stories posted on January 1, 2012, with number of comments
join_query = """
WITH c AS
(
SELECT parent, COUNT(*) as num_comments
FROM `bigquery-public-data.hacker_news.comments`
GROUP BY parent
)
SELECT s.id as story_id, s.by, s.title, c.num_comments
FROM `bigquery-public-data.hacker_news.stories` AS s
LEFT JOIN c
ON s.id = c.parent
WHERE EXTRACT(DATE FROM s.time_ts) = '2012-01-01'
ORDER BY c.num_comments DESC
"""
# Run the query, and return a pandas DataFrame
join_result = client.query(join_query).result().to_dataframe()
join_result.head()
```
Since the results are ordered by the `num_comments` column, stories without comments appear at the end of the DataFrame. (Remember that **NaN** stands for "not a number".)
```
# None of these stories received any comments
join_result.tail()
```
Next, we write a query to select all usernames corresponding to users who wrote stories or comments on January 1, 2014. We use **UNION DISTINCT** (instead of **UNION ALL**) to ensure that each user appears in the table at most once.
```
# Query to select all users who posted stories or comments on January 1, 2014
union_query = """
SELECT c.by
FROM `bigquery-public-data.hacker_news.comments` AS c
WHERE EXTRACT(DATE FROM c.time_ts) = '2014-01-01'
UNION DISTINCT
SELECT s.by
FROM `bigquery-public-data.hacker_news.stories` AS s
WHERE EXTRACT(DATE FROM s.time_ts) = '2014-01-01'
"""
# Run the query, and return a pandas DataFrame
union_result = client.query(union_query).result().to_dataframe()
union_result.head()
```
To get the number of users who posted on January 1, 2014, we need only take the length of the DataFrame.
```
# Number of users who posted stories or comments on January 1, 2014
len(union_result)
```
# Your turn
Use what you've learned to **[pull information from multiple tables](#$NEXT_NOTEBOOK_URL$)**.
<a name='main'></a>
# **AI IN PRACTICE : HOW TO TRAIN AN IMAGE CLASSIFIER**
### **Author: Sheetal Reddy**
### **Contact : sheetal.reddy@ai.se**
---
**Introduction**
The training "AI in Practice" will give you, at a basic level, knowledge about how to train a pre-trained model, the prerequisites, which techniques are used, and how to continue experimenting when finetuning the model.
In this training we are going to use image classification, an open dataset, a pre-trained model and Colab* to train your model.
A pre-trained model gives you the possibility to finetune an existing model, trained on a large amount of data, to better fit your purposes, thereby also saving you time. We will go through more of the advantages later in the training.
There are many pre-trained models available for different purposes; you can find some of them here: https://pytorch.org/docs/stable/torchvision/models.html
**The objective**
The objective of this training is to give you enough knowledge to feel confident when entering an AI project.
By understanding the steps required to train a model, you will have an advantage when working in AI-related projects.
The training gives you theoretical knowledge, but also lets you practice hands-on how to train a model.
**Learning objectives**
After the training you will be able to:
* Describe the necessary steps to train a model
* Use a Jupyter Notebook - Google Colab
* Be able to train a model
- Prepare datasets
- Finetune pre-trained models
- Visualize and quantify results
**Pre-requisites**
To be able to get the most out of this training we expect you to be aware of:
* The subject of AI
* The importance of data
**Training instructions**
The training is primarily performed individually, but you will be placed in a group.
There will be some group questions and exercises, but you are expected to perform the tasks yourself.
There is a Common Terminology section at the end of your Colab document. The concepts or wording covered in the Common Terminology section are marked with an (*)
There are also some links in the document if you want to learn more about the different sections
If you have any questions, let us know or ask your group members – **but first google it!** "Googling" is one of the most common ways that data scientists work with understanding new techniques and ways of working.
**Duration**
* Expected time to finish the training is in total 3 hours.
**The challenge**
* The challenge in this training is to finetune the pre-trained model to the use case and dataset - making it capable of **image classification**, see below for an explanation. Later on in this training we will also go through more of the benefits of working with a pre-trained model.
* In this case you will work with improving/training the model using a data set containing different images including scenes.
* The outcome of your work will result in a model that can classify "nature scenes" with a higher accuracy.
**Image classification**
So why did we choose image classification for this training?
* Image classification is a technique used to classify or predict the class of a specific object in an image. Image classification is one of the most important applications of computer vision. The main goal of this technique is to accurately identify the features in an image. Its applications range from classifying objects in self-driving cars to identifying blood cells in the healthcare industry, and from identifying defective items in the manufacturing industry to building a system that can classify whether persons are wearing masks or not.
* Computer vision is an interdisciplinary scientific field that deals with how computers can gain high-level understanding from digital images or videos. From the perspective of engineering, it seeks to understand and automate tasks that the human visual system can do. To learn more:
https://en.wikipedia.org/wiki/Computer_vision#:~:text=Computer%20vision%20is%20an%20interdisciplinary,human%20visual%20system%20can%20do.
# **Lets start the training !**
**How to train a pre-trained model**
To train a model you usually need to plan according to the steps below. The first three steps will set the foundation for what you will be able to train your model on and what results you will be able to expect.
We will use this structure and go through the steps in the training one by one.
1. [Define adequately our problem (objective, desired outputs…).](#main)
2. [Setup the computing environment](#computing_env)
3. [Gather data](#computing_env)
4. [Prepare the data](#data_preparation)
5. [Train the model and choose a measure of success.](#training) - In this training the measure of success is to have a model with a low error rate.
6. [An overview of how a model learns](#results).
## **1.Define the problem**
A problem well defined is a problem half-solved.
Understanding the problem and developing the requirements isn't something you typically get right on the first attempt; this is often an iterative process where we initially define a set of rough requirements and refine the detail as we gain more information.
By asking and answering the five questions below, you are well on your way to defining a problem.
1. What is the nature of the problem that requires solving?
2. Why does the problem require a solution?
3. How should solutions to the problem be approached?
4. What aspect of the problem will a deep learning model solve?
5. How is the solution to the problem intended to be interacted with?
Since we already have a defined problem or in this case a challenge: **To finetune a pre-trained model with the aim to classify "scenes" with a high accuracy.**
We will go ahead with setting the computing environment
<a name='computing_env'></a>
## **2. Setting up the computing environment**
**Change the runtime setting of your colab notebook to GPU*:**
Graphics Processing Units (GPUs) provide computing power that can significantly accelerate the training process for many deep learning models. Training models for tasks like image classification, video analysis, and natural language processing involves compute-intensive matrix multiplication and other operations that can take advantage of a GPU's massively parallel architecture.
Training a deep learning model that involves intensive compute tasks on extremely large datasets can take days to run on a single processor. However, if you design your program to offload those tasks to one or more GPUs, you can reduce training time to hours instead of days.
**How to change your runtime setting to GPU* in your environment**
The first thing you want to do is to go to the menu bar on this Colab page and follow these steps: "Runtime > Change runtime type > select GPU" (in Swedish: "Körning > Ändra körningstyp > Välj GPU"). This will set the Google Colab environment up with a free GPU that will be used to train your models. If you have CPU selected it will still work, only much slower.
## **3. Gather the Dataset**
Gathering and preparing data requires great care. It usually involves taking the steps below into consideration.
1. Determine what information you want or need to Collect to solve the problem
2. Set a timeframe for data collection
3. Determine your data collection method
4. Collect the data
5. Analyze the data and implement your findings
The correct gathering of data is completely dependent on the problem you would like or need to solve.
**Domain of the problem**
Depending upon the domain of your problem, you may either use standard datasets collected by others or start collecting your own data. As you intend to use neural networks, you should be aware that your dataset should be large, or else those techniques may not be very useful.
What is the domain of your problem? Is it related to Computer Vision, Natural Language Processing, Sensor data, or some XYZ?
In our case it is related to Computer Vision; for that reason we need to gather a large set of images. There are various ways to gather image data, and you need to specify which images are relevant for solving the problem.
It is important to plan ahead for how much data you may acquire. You cannot just store it on a hard disk, save it in directories, and assume you are ready to go. A lot of effort goes into data storage, organization, annotation and pre-processing.
**Data Privacy**
Data privacy is an important consideration if individual people's personal information is to be stored. Some data can be stored in simple text files, but for other data you may want to develop a database (or a light version of one) for faster access. If the data is too big to fit in memory, then big data techniques may need to be adopted (e.g. the Hadoop framework).
For this training we have chosen not to include any personal data, and we have also chosen a fairly small dataset, so it is possible to store it on a laptop. You will learn more about the data for this training as we go along.
**Instructions to add the dataset to your drive**
1. Download the dataset from the dropbox folder by clicking here
https://www.dropbox.com/s/gf6d2t1zbogjjgg/AI_IN_PRACTICE.zip?dl=1
2. Upload the **AI_IN_PRACTICE.zip** file to your google drive.
3. Make sure you have a file called **AI_IN_PRACTICE.zip** in your **Mydrive** (In swedish **Min enhet**) in google drive
You will learn about the data traits later in the training.
Now you are all set to start running the code cells one by one! The cells are the grey "boxes" that you will find throughout the Colab document. The fast and cool way to run a cell is to press Shift+Enter/Ctrl+Enter.
```
#The code in this cell connects your google drive space to the jupyter notebook and sets up fastai in your colab environment.
#This will enable the code in your jupyter notebook to access the dataset in your google drive.
#Install fastbook(contains fastai setup) in the colab environment.
!pip install -Uqq torchtext==0.8.1
!pip install -Uqq fastbook
#Importing fastai into the jupyter notebook
import fastbook
#setup fastai and mounts your google drive space in /content/gdrive
fastbook.setup_book()
print('Setup complete')
```
Now your Google Drive is mounted at /content/gdrive/MyDrive. It is only accessible through your own Jupyter notebook, for your view.
Click on the above link to make sure your drive is mounted in the right location.
If you experience any error, let the organizer know.
Now you should run the next cell to unzip/extract the dataset.
```
#When pressing the run button the code in this cell will unzip the AI_IN_PRACTICE.zip dataset and create a scenes folder in your google drive in MyDrive.
#The code below Unzips the AI_IN_PRACTICE.zip file
!unzip -q '/content/gdrive/MyDrive/AI_IN_PRACTICE.zip' -d '/content/gdrive/MyDrive/'
print('The unzip is complete now and you can move to the next cell !')
#This might take a while - Do not rerun the cell in between
#When the code is executed correctly you will see this message "The unzip is complete now and you can move to the next cell !"
#If you still do a rerun you will get the following message: "replace /content/gdrive/MyDrive/AI_IN_PRACTICE/scenes/train/sea/1.jpg? [y]es, [n]o, [A]ll, [N]one, [r]ename:" press "A" and press Enter
```
Now we have the unziped dataset in the location /content/gdrive/MyDrive/AI_IN_PRACTICE/scenes
Click on the link above to make sure you have scenes folder in your MyDrive. You should be able to see the different folders in the scenes dataset such as models, train, train_medium and valid.
If you experience any error when you click the link, it means that the dataset is not at the right location.
**Import the necessary packages**
In Python, which fastai* uses as a building block, we import packages (containing code) into our code using an import statement, e.g. `import os`.
It is a convenient way to import all the open source packages that are interesting and important for solving the challenge. There are many open source packages being produced, and which ones to use for a specific problem needs to be explored.
The importance of the packages we are using is described below in the code cell. We are going to work with the fastai library, which sits on top of PyTorch*. The fastai library provides many useful functions that enable us to quickly and easily build neural networks (NN) and train our models. To learn more about NN, please watch the video through this link: https://www.youtube.com/watch?v=bfmFfD2RIcg
```
#The code in this cell imports all the necesssary packages useful for training your model.
from fastbook import *
# imports fastai vision package to work with images
from fastai.vision.all import *
# imports fastai metrics like error_rate
from fastai.metrics import error_rate # 1-accuracy
#import numpy libraries for matrix manipulations
import numpy as np
import os
from sklearn.metrics import confusion_matrix
from sklearn.utils import shuffle
#import plotting and visualization libraries
import matplotlib.pyplot as plt
#import libraries to read and write images
import cv2
matplotlib.rc('image', cmap='Greys')
print('Good Job ! You are on the right track')
```
<a name='data_preparation'></a>
# **4. Data Preparation**
Data preparation is the process of cleaning and transforming raw data prior to processing and analysis. It is an important step prior to processing and often involves reformatting data, making corrections to data and the combining of data sets to enrich data.
Data preparation is often a lengthy undertaking for data professionals or business users, but it is essential as a prerequisite to put data in context in order to turn it into insights and eliminate bias resulting from poor data quality.
For example, the data preparation process usually includes standardizing data formats, enriching source data, and/or removing outliers.
Dataset preparation can be divided into five steps
1. [Data Exploration](#data_exploration)
2. [Data Cleaning](#data_cleaning)
3. [Data Augmentation](#data_augmentation)
4. [Data Splitting](#data_splitting)
5. [Visualize data](#data_visualization)
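As a tiny illustration of the data splitting step, a plain-Python sketch of a random train/validation split (the 80/20 ratio and the file names are just placeholders, not the split this training uses):

```python
import random

def train_valid_split(items, valid_frac=0.2, seed=42):
    """Shuffle the items and split them into train and validation lists."""
    items = list(items)
    rng = random.Random(seed)  # fixed seed so the split is reproducible
    rng.shuffle(items)
    n_valid = int(len(items) * valid_frac)
    return items[n_valid:], items[:n_valid]

files = [f"img_{i}.jpg" for i in range(100)]  # hypothetical file names
train, valid = train_valid_split(files)
print(len(train), len(valid))  # 80 20
```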
<a name='data_exploration'></a>
## **4.1. Data Exploration**
In the data exploration stage, we try to understand and answer some basic questions about the dataset. The questions are listed below and give you a fast overview of the dataset you are handling. In some cases this will give you enough information to understand whether your dataset will be able to solve your problem or not.
1. How big is the dataset?
2. How many train files and validation/test files do we have?
3. How many classes are there in the dataset ?
4. How many data samples are there per class ?
To be able to answer the above questions, we need to let our code know where our dataset is located. We do that by running the below code cell.
**Location of Scenes Dataset**
```
#The code in this cell adds the location where the data exists to a path variable.
path = '/content/gdrive/MyDrive/AI_IN_PRACTICE/scenes'
print('Cell execution Completed')
#The code in this cell stores all the locations of the train and test images in the dataset.
#gets image locations from scenes/train folder and save them to train_files
train_files=get_image_files(path+'/train')
#get image locations from scenes/valid folder and save them to test_files
test_files=get_image_files(path+'/valid')
print('Cell execution Completed')
```
If you need more information about the code in the code cells, Use doc() for more documentation. An example of how to use doc() is given below.
```
doc(get_image_files)
```
**Amount of files in the scenes dataset**
```
#The code in this cell prints the number of images used for training and test/validation. The numbers are fixed to the dataset.
print('Number of images used for training '+ str(len(train_files)))
print('Number of images used for validation '+ str(len(test_files)))
```
**Amount of Classes**
```
#The code in this cell prints the classes in our dataset
labels = os.listdir(path+'/train')
print(labels)
#The code in this cell counts the number of samples per class in the train dataset, plotted below in the chart.
counts = [0]*len(labels)
for i in train_files:
    for j in range(0, len(labels)):
        if labels[j] in str(i):
            counts[j] = counts[j] + 1
print('Counts extracted')
#The code below defines a function for plotting the number of samples per class
def plot_bar_counts():
# this is for plotting purpose
index = np.arange(len(labels))
plt.bar(labels, counts)
plt.xlabel('labels', fontsize=5)
plt.ylabel('No of data samples', fontsize=15)
plt.xticks(index, labels, fontsize=15, rotation=30)
plt.title('Train data analysis')
plt.show()
#Plots the bar code of the training samples
plot_bar_counts()
```
#[**4.2. Data Cleaning**](#data_cleaning)
In this training, we will do the data cleaning in the next pilot session.
<a name='data_augmentation'></a>
## **4.3. Data Augmentation**
Data augmentation is the technique of increasing the amount of training data by creating modified copies of existing samples that reflect real-life variations. For reliable predictions, deep learning models often require a lot of training data, which is not always available. Therefore, the existing data is augmented in order to build a model that generalizes better.
Although data augmentation can be applied in various domains, it's commonly used in computer vision. Some of the most common data augmentation techniques used for images are:
**Position augmentation**
* Scaling
* Cropping
* Flipping
* Padding
* Rotation
* Translation
* Affine transformation (e.g. warping)
**Color augmentation**
* Brightness
* Contrast
* Saturation
* Hue
**Fun fact**: Color augmentations are the basis for the **Instagram filters** we use to make us look picture perfect :)
Below we go through some of the techniques and visualize different augmentations using one sample image
```
import random
num = random.randint(0, len(train_files)-1)
#Load a random image to visualize the image augmentations
img = PILImage(PILImage.create(train_files[num]))
#show the image
show_image(img)
```
## **Random Crop Augmentation**
Random crop is a data augmentation technique wherein we create a random subset of an original image. This helps our model generalize better, because the object(s) of interest we want our models to learn are not always wholly visible in the image, or at the same scale, in our training data.
```
# The code in this cell applies Randomized crop to the image loaded above
'''
RandomResizedCrop(n): Randomly crops an image to size (nxn)
'''
n=224
crop = RandomResizedCrop(n)
_,axs = plt.subplots(3,3,figsize=(9,9))
for ax in axs.flatten():
cropped = crop(img)
show_image(cropped, ctx=ax);
```
## **Crop pad**
Crop pad is an additional augmentation technique that increases the scenes dataset by padding an image.
```
# The code in this cell applies crop_pad to the image loaded above
_,axs = plt.subplots(1,3,figsize=(12,4))
for ax,sz in zip(axs.flatten(), [150, 300, 500]):
show_image(img.crop_pad(sz), ctx=ax, title=f'Size {sz}');
```
## **Rotation Augmentation**
A source image is randomly rotated clockwise or counterclockwise by some number of degrees, changing the position of the object in the frame.
Random Rotate is a useful augmentation in particular because it changes the angles that objects appear in your dataset during training. Random rotation can improve your model without you having to collect and label more data.
```
# The code in this cell applies the given rotations to the image.
timg = TensorImage(array(img)).permute(2,0,1).float()/255.
def _batch_ex(bs): return TensorImage(timg[None].expand(bs, *timg.shape).clone())
'''
thetas - Angles which the original image is rotated to.
For ex: thetas = [-15,0,15]
Displays three images rotated to -15 degrees, 0 degrees and 15 degrees respectively
'''
thetas = [-30,-15,0,15,30]
imgs = _batch_ex(5)
deflt = Rotate()
listy = Rotate(p=1.,draw=thetas)
show_images( listy(imgs) ,suptitle='Manual List Rotate',titles=[f'{i} Degrees' for i in thetas])
```
## **Warping Augmentation**
Applying the warping technique adds distorted images to the scenes dataset.
```
scales = [-0.4, -0.2, 0., 0.2, 0.4]
imgs=_batch_ex(5)
vert_warp = Warp(p=1., draw_y=scales, draw_x=0.)
horz_warp = Warp(p=1., draw_x=scales, draw_y=0.)
show_images( vert_warp(imgs) ,suptitle='Vertical warping', titles=[f'magnitude {i}' for i in scales])
show_images( horz_warp(imgs) ,suptitle='Horizontal warping', titles=[f'magnitude {i}' for i in scales])
```
## **Flip Augmentation**
Flips a batch of images.
```
with no_random(32):
imgs = _batch_ex(2)
deflt = Flip()
show_images( deflt(imgs) ,suptitle='Default Flip')
```
Let's now batch all these augmentation/transformation together and apply them in the code cell below.
We also resize the images to make sure every image has the same shape and size. This allows the GPU to apply the same instructions to all the images.
When we normalize the images, each pixel channel is shifted by the dataset mean and scaled by the dataset standard deviation, which helps models train. If you have problems training your model, one thing to check is whether you have normalized the data.
***NOTE: The types of data augmentation are very specific to the dataset. In our case we only rotate the images by a small degree to maintain representability of the real world. If we consider medical images (e.g. cell images), it is okay to rotate them by a larger degree (e.g. 180 degrees).***
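To make the normalization step concrete, here is a minimal NumPy sketch (not the fastai implementation) of what `Normalize.from_stats(*imagenet_stats)` does per channel. The ImageNet mean/std values below are the commonly used ones and are an assumption here:

```python
import numpy as np

# Commonly used ImageNet per-channel statistics (assumed values)
IMAGENET_MEAN = np.array([0.485, 0.456, 0.406])
IMAGENET_STD = np.array([0.229, 0.224, 0.225])

def normalize(img):
    """Shift and scale each channel: (pixel - mean) / std.

    img: float array of shape (H, W, 3) with values in [0, 1].
    """
    return (img - IMAGENET_MEAN) / IMAGENET_STD

# A dummy 2x2 RGB image where every pixel equals the channel means
img = np.tile(IMAGENET_MEAN, (2, 2, 1))
out = normalize(img)
print(out.mean())  # ~0: pixels equal to the channel mean map to zero
```

After this transform each channel has roughly zero mean and unit variance across the dataset, which is what makes training more stable.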
```
#tfms = None
#The code in this cell collects all the data augmentations into one variable which can be applied to our dataset in the later stages.
tfms =[*aug_transforms(size=224, min_scale=0.75, max_rotate=10, max_zoom=1.05, max_warp=.1, do_flip=True), Normalize.from_stats(*imagenet_stats)]
#if you are running on GPU instance , this code cell will work, Otherwise it will throw an error !
#if you are not running on GPU, comment the second line (y = y.to(device=torch.device("cuda:0"))).
y = _batch_ex(9)
y = y.to(device=torch.device("cuda:0"))
for t in tfms: y = t(y, split_idx=0)
_,axs = plt.subplots(1,5, figsize=(12,3))
for i,ax in enumerate(axs.flatten()):
show_image(y[i], ctx=ax)
```
<a name='data_splitting'></a>
## **4.4 Data Splitting**
Now it's time to split your data into training and validation sets. The training set usually contains 70% of the image dataset and the validation set the remaining 30%.
Run the code below to perform the splitting.
```
#The code in this cell loads the whole train and valid images into a data variable. Also applies the tfms variable that we created in the previous cells.
np.random.seed(42)
'''
The method below loads train and valid subfolders in the code (data =)
train : name of the train subfolder
valid : name of the valid subfolder
item_tfms : transforms performed on the individual image
batch_tfms : transforms performed on the batch
bs : batch size
'''
data = ImageDataLoaders.from_folder(path,train='train', valid ='valid', item_tfms=Resize(224), batch_tfms=tfms, bs=10)
```
Before we move on to the next code cell, we need to be clear about the question below. Believe me, it's important!
**What Is Batch Size?**
To refresh your memory, please look at the video explaining neural networks here: https://www.youtube.com/watch?v=bfmFfD2RIcg
* The batch size is a hyperparameter that defines the number of samples to work through before updating the internal model parameters.
* Think of a batch as a for-loop iterating over one or more samples and making predictions. At the end of the batch, the predictions are compared to the expected output variables and an error is calculated. From this error, the update algorithm is used to improve the model, e.g. move down along the error gradient.
* A training dataset can be divided into one or more batches.
* Batch size (bs) can be changed in the code cell [here](#data_splitting). Its value is currently set to 10.
To get more information about a Batch Size please follow the link: https://www.youtube.com/watch?v=U4WB9p6ODjM
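The bullet points above can be sketched in a few lines of plain Python (hypothetical data, not the fastai training loop): a dataset of 25 samples with a batch size of 10 yields three batches per epoch, and the model parameters would be updated once per batch:

```python
# Minimal sketch of mini-batching: 25 samples, batch size 10 -> 3 batches.
samples = list(range(25))
bs = 10

batches = [samples[i:i + bs] for i in range(0, len(samples), bs)]

for step, batch in enumerate(batches, start=1):
    # In real training: forward pass, compute loss, update weights here.
    print(f"update {step}: batch of {len(batch)} samples")
# One full pass over all batches = one epoch.
```

Note the last batch may be smaller than `bs` when the dataset size is not an exact multiple of the batch size.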
## **4.5 Visualize Data**
By visualizing the data you can confirm that you are on the right track, e.g. regarding the labeling.
Do your images match the correct labels?
If yes, then you have succeeded !
```
#The below line of code shows a random batch of images
data.show_batch(figsize=(10,10))
```
<a name='training'></a>
# **5. Training/Fine-Tuning the Model Using Transfer Learning**
**Welcome back!**
Now we will start with fine-tuning our pretrained model. This means that we are building a model which will take images as input and will output the predicted probability for each of the categories; in this case, it will output 6 probabilities, and the class with the maximum probability is chosen as the label. For this task we will use a technique called transfer learning. To learn more about transfer learning please follow this link: https://www.youtube.com/watch?v=5T-iXNNiwIs
**What is Transfer Learning?**
* Transfer learning is a technique where you use a model trained on a very large dataset (usually ImageNet in computer vision) and then adapt it to your own dataset.
* The idea is that the model has learned to recognize many features on all of this data, like ImageNet, and that you will benefit from this knowledge, especially if your dataset is small.
* In practice, you need to change the last part of the model to be adapted to your own number of classes.
* Most convolutional models end with a few linear layers (a part we will call the head).
* The last convolutional layer will have analyzed features in the image that went through the model, and the job of the head is to convert those in predictions for each of your classes.
* In transfer learning one keeps all the convolutional layers (called the body or the backbone of the model) with their weights pretrained on ImageNet but will define a new head initialized randomly.
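As a rough illustration of the idea (a toy NumPy sketch, not how fastai or `cnn_learner` is implemented): the pretrained body is kept frozen while a freshly initialized head is the only part whose weights change during phase one:

```python
import numpy as np

rng = np.random.default_rng(0)

# "Pretrained body": a frozen feature extractor (weights never updated).
body_w = rng.normal(size=(8, 4))   # maps 8 raw inputs -> 4 features
# New "head": randomly initialized, maps features -> our own 6 classes.
head_w = rng.normal(size=(4, 6))

def forward(x):
    features = np.tanh(x @ body_w)  # body: extract features
    return features @ head_w        # head: convert features to class scores

x = rng.normal(size=(1, 8))
body_before = body_w.copy()

# Phase-one "training step": only the head weights are nudged
# (a stand-in for a real gradient update).
head_w -= 0.1 * rng.normal(size=head_w.shape)

assert np.array_equal(body_w, body_before)  # the frozen body is untouched
print(forward(x).shape)  # (1, 6): one score per class
```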
**Two-Phase Training of the model**
* We will train the model in two phases: first we freeze the body weights and only train the head (to convert those analyzed features into predictions for our own data). In the second phase we unfreeze the layers of the backbone (gradually if necessary) and fine-tune the whole model (possibly using differential learning rates).
For this training we have chosen a pretrained model called resnet34, which has previously been trained on about 1.3 million images. This means that we don't have to start with a model that knows nothing; we start with a model that already knows something about recognizing images. The 34 stands for the number of layers in the network; a smaller model trains faster. There is a bigger version called resnet50.
With the code below, our model will be able to train with resnet34.
```
#The code in this cell will use a cnn_learner method. With this line of code we tell a learner to create a cnn model for us, in this case it's a resnet34.
#The cnn_learner method helps you to automatically get a pretrained model from a given architecture, in this case resnet34
learn = cnn_learner(data, models.resnet34, loss_func=CrossEntropyLossFlat(), metrics=[error_rate, accuracy])
```
### **Wait !!!**
There seems to be a lot of terms in the code that are complicated in the previous cell. Let's review each of them a bit
* **CNN**: Convolutional Neural Networks are a class of neural networks that are widely used in image recognition, image classification, object detection, face recognition, etc. A convolution is the basic operation of a CNN. For more explanation, watch the video below.
https://www.youtube.com/watch?v=YRhxdVk_sIs&t=419s
* **Cross-Entropy Loss** : Cross-entropy loss is a loss function used for this dataset. It has two benefits:
> 1. It works even when our dependent variable has more than two categories.
> 2. It results in faster and more reliable training.
* **Error rate**:
error_rate = 1 - accuracy
accuracy = no of correctly classified samples / all samples
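To make these two definitions concrete, here is a small NumPy sketch (hypothetical numbers) computing accuracy, error rate, and the cross-entropy loss of one prediction:

```python
import numpy as np

# Hypothetical predictions vs. ground truth for 5 samples
preds  = np.array([0, 2, 1, 1, 3])
labels = np.array([0, 2, 1, 0, 3])

accuracy = (preds == labels).mean()   # correctly classified / all samples
error_rate = 1 - accuracy
print(accuracy, error_rate)           # 0.8 0.2

# Cross-entropy loss for one sample: -log(probability of the true class)
probs = np.array([0.7, 0.1, 0.1, 0.1])  # model's predicted class probabilities
true_class = 0
loss = -np.log(probs[true_class])
print(round(loss, 4))                 # 0.3567
```

A confident correct prediction (probability near 1 on the true class) gives a loss near 0, while a confident wrong prediction gives a large loss.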
The code cell below shows the detailed architecture of the deep neural network model we are training (in our case resnet34). Knowing the architecture of a DNN (deep neural network) is useful when designing better neural network architectures for more advanced use cases.
```
#The code in this cell shows the architecture of the model( in our case CNN) that is being trained.
learn.model
```
## **Phase 1: Finetune the head of the model**
Now we enter the first phase of the training, which means we first freeze the body weights and only train the head (to convert those analyzed features into predictions for our own data). We will train our model by letting it cycle through all our data 6 times; that number is how many times the model goes through all the data. The training loss tells us how much the model is learning from the data. The validation loss tells us how generalizable the model is.
In both the cases, training and validation loss, it's good to have a decreasing trend.
1 cycle = 1 epoch
It will take some time to train your model.
Sit back and relax after running the cell below! :) You did a great job!
Or you can read [here](#cycles) on how to choose the number of cycles/epochs.
```
#The code in this cell will run the training job for 6 epochs.
learn.fit_one_cycle(6)
```
Ideally, if your model is learning something, you should see a certain trend: your train_loss, valid_loss, and error_rate should be decreasing while accuracy should be increasing.
```
#Plots the loss for both training and validation dataset
learn.recorder.plot_loss()
#The code in this code cell is saving the model to the disk with name stage-1
learn.save('stage-1')
```
Observe the decreasing trend in the plots above !!
<a name='cycles'></a>
### **How do we select the number of epochs?**
* Often you will find that you are limited by time, rather than generalization and accuracy, when choosing how many epochs to train for. So your first approach to training should be to simply pick a number of epochs that will train in the amount of time that you are happy to wait for. Then look at the training and validation loss plots, as shown above, and in particular your metrics, and if you see that they are still getting better even in your final epochs, then you know that you have not trained for too long. In this situation you can increase the number of epochs you are training for.
* If you have the time to train for more epochs, you may also want to instead use that time to train more parameters—that is, use a deeper architecture.
Now we successfully finetuned our model. In order not to lose our progress, let's save our trained model in preset location. The model will be saved on your google drive at /content/gdrive/MyDrive/scenes/models
## **Phase 2: Unfreezing and fine-tuning**
As mentioned above, training is a two-phase process. In the first phase, we train only the last layer of the model. This rarely overfits and gives good results, but to really make the best use of the model, we unfreeze and fine-tune all the layers to train it better.
Fine-tuning all the layers lets the weights of every layer adapt to the features of the scenes dataset, which makes the model perform better on it.
```
#The code in this code cell unfreezes and trains the whole resnet34 model. We now allow for the whole model to be trained, not just the last layer.
learn.unfreeze()
```
**Finding the best learning rate**
Finding a good learning rate is an important problem in the machine learning community. The learning rate decides how fast the model weights are updated. It is mostly trial-and-error based, but fastai provides a tool called the learning rate finder which can suggest an appropriate learning rate.
For a more intuitive explanation on how the learning rate finder works, refer to the below link
(https://sgugger.github.io/how-do-you-find-a-good-learning-rate.html)
The cell below plots a curve showing the learning rate versus loss.
```
#The code in the code cell runs the learning rate finder provided by fastai
learn.lr_find()
```
Now you will see text above the plot which suggests a learning rate range. Change the lr_min value in the code cell below to the suggested lr_min value from the plot.
For example, if the suggested LRs are given as:
SuggestedLRs(lr_min=0.004786301031708717, lr_steep=0.0014454397605732083)
then change the lr_min value to 0.0047 in the code cell below.
```
#Change the value of lr_min to the value suggested in the previous plot.
lr_min = 1e-4
```
Now, we train the model again after unfreezing all the layers of the pretrained model and also using the learning rate from the learning rate finder.
```
#The code in the code cell here runs a training for 5 epochs.
learn.fit_one_cycle(5, lr_max=slice(1e-6,lr_min))
```
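`lr_max=slice(1e-6, lr_min)` asks fastai for *discriminative* learning rates: the earliest layer group trains with the smallest rate and the head with the largest. A sketch of one plausible way to spread rates across layer groups (fastai's exact scheme may differ) is geometric spacing between the two endpoints:

```python
import numpy as np

lr_lo, lr_hi = 1e-6, 1e-4  # slice(1e-6, lr_min) with a hypothetical lr_min = 1e-4
n_groups = 3               # hypothetical number of layer groups

# Geometrically spaced rates: early layers get small steps, the head large ones.
lrs = np.geomspace(lr_lo, lr_hi, n_groups)
print(lrs)  # [1.e-06 1.e-05 1.e-04]
```

The intuition: the pretrained early layers already encode generic features and should only be nudged gently, while the newly initialized head needs larger updates.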
Now we successfully finished phase-2 training of our model.
In order not to lose our progress, let's save our trained model in preset location. The model will be saved on your google drive at /content/gdrive/MyDrive/scenes/models
```
#The code in this cell is saving the model to the disk with name stage-2
learn.save('stage-2')
```
<a name='results'></a>
# **Results Interpretation and Analysis**
*Now comes the most interesting part!*
We will first see which were the categories that the model was most confused with. We will try to see if what the model predicted is reasonable or not. Furthermore, we will plot a confusion matrix where we can see and learn more about the mistakes that the model made. We will explain the confusion matrix a bit further down.
```
#The code in this cell, when executed, performs an analysis of the model's performance on all the classes. The results of the analysis are shown in the next code cells.
interp = ClassificationInterpretation.from_learner(learn)
losses,idxs = interp.top_losses()
print('Interpretation and Analysis of Results done ! ')
# The code in this code cell shows some sample images, actual ground truth used for training and the predicted label.
# If the predicted label and the ground truth match, the labels are shown in green.
# If the predicted label and the ground truth do not match, the labels are shown in red.
learn.show_results()
```
So, one of the most interesting things we can do is called plot top losses. This plots the cases where the model was very certain about a class but was wrong, which means a high loss. In other words: the model was confident about an answer, but answered wrong. The title of each image shows: prediction, actual, loss, probability of the actual class.
```
#The code in this cell shows the images the model is most confused on.
'''For every image, it shows
1. Prediction: The label predicted by the model.
2. Actual: The actual label in the dataset.
3. Loss : The cross entropy loss of the image. More loss means the model is very certain about a wrong prediction.
4. Probability : How certain is the model's prediction
'''
interp.plot_top_losses(9, figsize=(15,11))
```
The confusion matrix is a way to visualize your results and get an understanding of where your model makes mistakes and how frequent they are.
The confusion matrix is so interesting that we want everyone to understand it properly. We gather in the main group to discuss it.
If you see that people are still working, grab a coffee and come back! :)
```
interp.plot_confusion_matrix(figsize=(12,12), dpi=60)
```
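To see what the plot above is built from, here is a minimal sketch (hypothetical labels, not the notebook's data) of how a confusion matrix is tallied: rows are actual classes, columns are predicted classes, and off-diagonal entries are the mistakes:

```python
import numpy as np

classes = ["buildings", "glacier", "mountain"]
actual    = np.array([0, 1, 1, 2, 2, 1])
predicted = np.array([0, 1, 2, 2, 2, 2])  # two glaciers predicted as mountain

cm = np.zeros((len(classes), len(classes)), dtype=int)
for a, p in zip(actual, predicted):
    cm[a, p] += 1  # row = actual class, column = predicted class

print(cm)
# The diagonal counts correct predictions; cm[1, 2] counts
# glacier images mistaken for mountains.
```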
most_confused pulls the most common wrong predictions out of the confusion matrix. As a domain expert, this allows you to judge, based on your expertise, whether this is something the model should reasonably be confused about. We can all understand that a glacier may in many cases be easy to confuse with a mountain, as glaciers often exist in mountains.
```
#The code in this code cell gives us the classes on which the model is confused in descending order.
'''
For example:
('glacier', 'mountain', 131)
What we can infer from the above line is that 131 glacier images have been predicted as mountain images.
'''
interp.most_confused(min_val=10)
```
## **Let's see if you can get better Accuracy ! Try it out**
<a name='data_cleaning'></a>
## **Data Cleaning**
Oops! It seems like the organizers have mixed up two datasets in a rush :P.
Can you try to clean it and see if that gives any accuracy gains ?
**TIP** : The mix up happened with mostly the glacier and building classes.
Other suggestions which might help in the accuracy gain:
* Use the train_medium dataset in /content/gdrive/MyDrive/AI_IN_PRACTICE/scenes provided which has more data
* Increase the batch size
---
**Congratulations!!**
---
You have completed the training :)
Please return to the main group.
Please be ready to let us know your error rate.
# **Common Terminology used in this training**
* **CPU**: A central processing unit, also called a central processor, main processor or just processor, is the electronic circuitry within a computer that executes instructions that make up a computer program. The CPU performs basic arithmetic, logic, controlling, and input/output (I/O) operations specified by the instructions in the program.
* **GPU**: A graphics processing unit, is a specialized, electronic circuit designed to rapidly manipulate and alter memory to accelerate the creation of images in a frame buffer intended for output to a display device. GPUs are used in embedded systems, mobile phones, personal computers, workstations, and game consoles. Modern GPUs are very efficient at manipulating computer graphics and image processing. Their highly parallel structure makes them more efficient than general-purpose central processing units (CPUs) for algorithms that process large blocks of data in parallel.
* **fastai** : is a deep learning library which provides practitioners with high-level components that can quickly and easily provide state-of-the-art results in standard deep learning domains, and provides researchers with low-level components that can be mixed and matched to build new approaches. To learn more follow this link: https://docs.fast.ai/
* **PyTorch** : Is a Python-based scientific computing and deep learning framework. It's a replacement for NumPy to use the power of GPUs. It's a deep learning research platform that provides maximum flexibility and speed.
* **Google Colab**: At this moment you are in a Google Colab environment, and you will be using this platform to run code and start learning about AI. Colab is a cloud-based working environment that allows you to collaborate and train your models. A great environment to try things out and test.
* **Python**: Python is an interpreted programming language currently used for many machine learning projects. Many open-source machine learning packages are available primarily in Python, which is why it became a go-to language for machine learning prototyping.
* **epoch** : An epoch refers to one cycle of training through the full training dataset.
* **ImageNet**: ImageNet is a dataset consisting of 1.3 million images of various sizes (around 500 pixels across) in 1,000 categories; training a model on it from scratch took a few days.
* **Pretrained model** : The model that has been trained from scratch on a very large dataset(usually ImageNet in computer vision) is called the pretrained model. To learn more about pretrained models, check the link below.
https://towardsdatascience.com/how-do-pretrained-models-work-11fe2f64eaa2
## **Acknowledgements**
1. A huge thanks to Fastai for providing a framework for fast prototyping.
2. Thanks to Kaggle and Intel for providing the scenes classification dataset
```
!pip install transformers datasets tweet-preprocessor ray[tune] hyperopt
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
import wordcloud
import preprocessor as p # tweet-preprocessor
import nltk
import re
import seaborn as sns
import torch
from transformers import BertTokenizer, BertForSequenceClassification, AdamW, get_linear_schedule_with_warmup
from sklearn.metrics import accuracy_score, roc_auc_score, confusion_matrix
from sklearn.model_selection import train_test_split, StratifiedKFold
from scipy.special import softmax
from torch.utils.data import TensorDataset, DataLoader, RandomSampler, SequentialSampler
from tqdm.notebook import tqdm
from ray import tune
from ray.tune import CLIReporter
from ray.tune.schedulers import ASHAScheduler
from ray.tune.suggest.hyperopt import HyperOptSearch
from google.colab import drive
drive.mount('/content/drive')
# dataset_dem = pd.read_csv('/content/drive/MyDrive/democrat_tweets_v2.csv')
# dataset_gop = pd.read_csv('/content/drive/MyDrive/republican_tweets_v2.csv')
# dataset_dem["label"] = "Democrat"
# dataset_gop["label"] = "Republican"
# dataset_final = pd.concat([dataset_dem, dataset_gop])
# dataset_final.reset_index(drop=True, inplace=True)
dataset_final = pd.read_csv("/content/drive/MyDrive/Copy of 2020_labled_political_tweets.csv.zip")
# dataset_final=dataset_final[(dataset_final["party"].any()=="D")]
dataset_final = dataset_final.iloc[0:2000]
for index, row in dataset_final.iterrows():
if str(row['party']) !="D":
if str(row["party"])!="R":
dataset_final.drop(index, inplace=True)
dataset_final.head()
dataset_final.count()
# dataset=pd.read_csv("/content/drive/MyDrive/Copy of 2020_labled_political_tweets.csv.zip")
# X=dataset.drop(["party"],axis=1)
# y = dataset[["party"]]
# X_train, X_val, y_train, y_val = train_test_split(X,
# y,
# test_size=0.20,
# random_state=42)
LABEL_MAP = {
"D": 0,
"R": 1
}
def buildLabels(row):
return LABEL_MAP.get(row["party"])
# def cleanTweet(row):
# tweet = row["text"]
# tweet = str(p.clean(tweet))
# tweet = re.sub(r'[^\w\s]', '', tweet) # punctuation
# tweet = re.sub("^\d+\s|\s\d+\s|\s\d+$", " ", tweet) # numbers
# return tweet
dataset_final["party"] = dataset_final.apply(lambda row: buildLabels(row), axis=1)
# dataset_final["clean_text"] = dataset_final.apply(lambda row: cleanTweet(row),
# axis=1)
dataset_final.head()
dataset_clf = dataset_final[["text", "party"]]
dataset_clf.reset_index(drop=True, inplace=True)
X_train, X_val, y_train, y_val = train_test_split(dataset_clf.index.values,
dataset_clf.party.values,
test_size=0.20,
random_state=42,
stratify=dataset_clf.party.values)
dataset_clf['data_type'] = ['not_set']*dataset_final.shape[0]
dataset_clf.loc[X_train, 'data_type'] = 'train'
dataset_clf.loc[X_val, 'data_type'] = 'test'
dataset_train = dataset_clf.loc[dataset_clf.data_type == 'train']
dataset_test = dataset_clf.loc[dataset_clf.data_type == 'test']
dataset_train.head()
def get_dataloaders(data, batch_size):
tokenizer = BertTokenizer.from_pretrained('bert-base-uncased',
do_lower_case=True)
# tokenize train and test data so BERT can understand it
encoded_data_train = tokenizer.batch_encode_plus(
data[data.data_type=='train'].text.values,
add_special_tokens=True,
return_attention_mask=True,
padding=True,
max_length=64,
return_tensors='pt'
)
encoded_data_test = tokenizer.batch_encode_plus(
data[data.data_type=='test'].text.values,
add_special_tokens=True,
return_attention_mask=True,
padding=True,
max_length=64,
return_tensors='pt'
)
# destructure out the input_ids, attention masks, and labels from tokenizer & encoder output
input_ids_train = encoded_data_train['input_ids']
attention_masks_train = encoded_data_train['attention_mask']
labels_train = torch.tensor(data[data.data_type=='train'].party.values)
input_ids_test = encoded_data_test['input_ids']
attention_masks_test = encoded_data_test['attention_mask']
labels_test = torch.tensor(data[data.data_type=='test'].party.values)
train_data = TensorDataset(input_ids_train, attention_masks_train, labels_train)
test_data = TensorDataset(input_ids_test, attention_masks_test, labels_test)
train_dataloader = DataLoader(train_data,
sampler=RandomSampler(train_data),
batch_size=batch_size)
test_dataloader = DataLoader(test_data,
sampler=SequentialSampler(test_data),
batch_size=batch_size)
return train_dataloader, test_dataloader
def auc_score(preds, labels):
soft_preds = softmax(preds, axis=1) # logit -> probability
if np.shape(preds)[1] > 2: # check for multi-class
return roc_auc_score(labels, soft_preds, multi_class='ovr')
else:
soft_preds = soft_preds[:,1]
return roc_auc_score(labels, soft_preds)
def acc_score_by_class(preds, labels):
label_dict_inverse = {v: k for k, v in LABEL_MAP.items()}
preds_flat = np.argmax(preds, axis=1).flatten()
labels_flat = labels.flatten()
for label in np.unique(labels_flat):
y_preds = preds_flat[labels_flat==label]
y_true = labels_flat[labels_flat==label]
print(f'Class: {label_dict_inverse[label]}')
print(f'Accuracy: {len(y_preds[y_preds==label])}/{len(y_true)}\n')
def evaluate(model, dataloader, device):
model.eval()
loss_val_total = 0
predictions, true_vals = [], []
for batch in dataloader:
# convert data to CUDA
batch = tuple(b.to(device) for b in batch)
inputs = {
'input_ids': batch[0],
'attention_mask': batch[1],
'labels': batch[2],
}
with torch.no_grad():
outputs = model(**inputs) # get predictions
loss = outputs[0]
logits = outputs[1]
loss_val_total += loss.item()
logits = logits.detach().cpu().numpy()
label_ids = inputs['labels'].cpu().numpy()
predictions.append(logits)
true_vals.append(label_ids)
loss_val_avg = loss_val_total/len(dataloader)
predictions = np.concatenate(predictions, axis=0)
true_vals = np.concatenate(true_vals, axis=0)
return loss_val_avg, predictions, true_vals
def train_and_hyperparam_search(config,
model_init, # function to init a clean version of the net
data, # data as Pandas array
cv # rounds of cross-validation
):
losses = []
aucs = []
skf = StratifiedKFold(n_splits=cv, shuffle=True)
for train_idx, test_idx in skf.split(data.text, data.party):
model = model_init()
device = torch.device('cuda' if torch.cuda.is_available() else 'cpu')
model.to(device)
print(f"Device: {device}")
optimizer = AdamW(model.parameters(),
lr=config['lr'],
eps=config['eps'],
weight_decay=config['weight_decay'])
data.loc[train_idx, 'data_type'] = 'train'
data.loc[test_idx, 'data_type'] = 'test'
train_dataloader, test_dataloader = get_dataloaders(data,
config['batch_size'])
for epoch in range(1, config['epochs']+1):
model.train() # enter training mode
loss_train_total = 0
for batch in train_dataloader:
model.zero_grad()
# get CUDA data
batch = tuple(b.to(device) for b in batch)
inputs = {
'input_ids': batch[0],
'attention_mask': batch[1],
'labels': batch[2],
}
outputs = model(**inputs) # evaluate
# for reference, we are using cross-entropy loss here,
# as implemented in https://huggingface.co/transformers/_modules/transformers/modeling_bert.html
loss = outputs[0]
loss_train_total += loss.item()
loss.backward() # do backprop
torch.nn.utils.clip_grad_norm_(model.parameters(), 1.0)
optimizer.step()
loss_train_avg = loss_train_total/len(train_dataloader)
print(f"Training loss for epoch {epoch}: {loss_train_avg}")
val_loss, predictions, true_vals = evaluate(model, test_dataloader, device)
auc = auc_score(predictions, true_vals)
losses.append(val_loss)
aucs.append(auc)
tune.report(loss=np.mean(losses), auc=np.mean(aucs))
from functools import partial
def model_init():
return BertForSequenceClassification.from_pretrained('bert-base-uncased',
num_labels=2,
output_attentions=False,
output_hidden_states=False)
config = {
"lr": tune.choice([5e-5,3e-5,2e-5]),
"eps": tune.loguniform(1e-10, 1e-7),
"weight_decay": tune.loguniform(1e-10, 1e-5),
"batch_size": tune.choice([4,8,16, 32]),
"epochs": tune.choice([2, 3, 4])
}
scheduler = ASHAScheduler(
metric="auc",
mode="max",
max_t=10,
grace_period=1,
reduction_factor=2
)
reporter = CLIReporter(metric_columns=["loss", "auc", "training_iteration"])
hyperopt_search = HyperOptSearch(metric="auc", mode="max")
result = tune.run(
partial(train_and_hyperparam_search, model_init=model_init, data=dataset_clf, cv=3),
resources_per_trial={"cpu": 2, "gpu": 1},
config=config,
num_samples=8,
scheduler=scheduler,
search_alg=hyperopt_search,
progress_reporter=reporter
)
```
```
import pandas as pd
from sklearn.tree import DecisionTreeClassifier # Import Decision Tree Classifier
from sklearn.model_selection import train_test_split # Import train_test_split function
from sklearn import metrics #Import scikit-learn metrics module for accuracy calculation
from fastapi import FastAPI
import uvicorn
data = pd.read_csv("clothing_weather.csv")
data
app = FastAPI()
@app.get("/")
async def root():
"""Weather Advisor Welcome"""
    return {"message": "Hello, welcome to Weather Advisor! Provide temp, rain, and snow to get an outfit recommendation."}
@app.get("/weatheradvisor/{temp}/{rain}/{snow}")
async def weatheradvisor(temp: int,rain:int,snow:int):
y=predict(temp,rain,snow)
message=getMessage(y[0], rain, snow)
return "You should wear {0}".format(message))
async def predict(temp: int,rain:int,snow:int):
data["rain"] = data["rain"].replace("no", 0)
data["rain"] = data["rain"].replace("yes", 1)
    data["snow"] = data["snow"].replace("no", 0)
    data["snow"] = data["snow"].replace("yes", 1)
feature_cols = ['temp_f','rain','snow']
X = data[feature_cols] # Features
    y = data.overall # Target variable
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3, random_state=1)
clf = DecisionTreeClassifier(criterion="entropy", max_depth=4)
# Train Decision Tree Classifer
clf = clf.fit(X_train,y_train)
#Predict the response for test dataset
y_pred = clf.predict(X_test)
print("Accuracy:",metrics.accuracy_score(y_test, y_pred))
    y_pred = clf.predict([[temp, rain, snow]])
#print the predicted outfit code
return y_pred
def getMessage(pred, rain, snow):
ans=""
outfit_code = {
1: "a short sleeve shirt and shorts.",
2: "a short sleeve shirt and long pants.",
3: "a short sleeve shirt, shorts and a light jacket or sweatshirt.",
4: "a short sleeve shirt, long pants, and a light jacket or sweatshirt.",
5: "a long sleeve shirt, long pants, and a light jacket or sweatshirt.",
6: "a short sleeve shirt, long pants, and a heavy jacket.",
7: "a long sleeve shirt or sweater, long pants, and a heavy jacket.",
8: "a long sleeve shirt and shorts."
}
if pred in outfit_code:
ans=ans+outfit_code[pred]
else:
return "an error occurred"
if rain == 1:
ans=ans+ " You may also want a rain jacket, rain boots, and/or an umbrella."
if snow == 1:
ans=ans+ " You should also bring a scarf, gloves, and snow boots!"
return ans
```
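The yes/no encoding inside `predict` can be expressed more robustly with `Series.map`, which maps each column from its own values so columns cannot be accidentally mixed up. A minimal sketch on a hypothetical frame (illustrative values, not `clothing_weather.csv`):

```python
import pandas as pd

# hypothetical frame mirroring the categorical columns
df = pd.DataFrame({"rain": ["yes", "no"], "snow": ["no", "yes"]})
for col in ("rain", "snow"):
    # each column is mapped from its *own* values
    df[col] = df[col].map({"no": 0, "yes": 1})
# rain -> [1, 0], snow -> [0, 1]
```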
```
import pandas as pd
fin = pd.read_pickle('fin.pkl')
mc = pd.read_pickle('mc.pkl')
info = pd.read_pickle('info.pkl')
```
# Strategy
* input = a date
* output = target investment weight per stock
```
date = '2018-12-31' # input
fisyear = 2017
position = fin['매출액'].xs(fisyear, level=1).nlargest(10)
position[:] = 1/len(position); position # output; would this still work without the [:]?
```
### date → fisyear
```
date = pd.Timestamp(date)
if date.month >=6:
fisyear = date.year - 1
else:
fisyear = date.year - 2
def get_fisyear(date):
date = pd.Timestamp(date)
if date.month >=6:
return date.year - 1
else:
return date.year - 2
def 매출상위(date, fin, n=10):
fisyear = get_fisyear(date)
position = fin['매출액'].xs(fisyear, level=1).nlargest(n)
position[:] = 1/len(position)
return position
```
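The date-to-fiscal-year rule can be sanity-checked on a few dates. The sketch below re-states `get_fisyear` so it is self-contained (the June cutoff assumes financials for fiscal year Y become available around June of Y+1):

```python
import pandas as pd

def get_fisyear(date):
    # from June onward, use last year's financials; before June, the year before
    date = pd.Timestamp(date)
    return date.year - 1 if date.month >= 6 else date.year - 2

assert get_fisyear("2018-12-31") == 2017
assert get_fisyear("2019-03-31") == 2017  # still before the June cutoff
assert get_fisyear("2019-06-30") == 2018
```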
# Backtester
At each rebalancing date, take a position; then periodically value that position.
```
dates = mc.index[8:]
for date in dates:
# print(date)
pass
pos = {}
nav = {}
for date in dates:
pos[date] = 매출상위(date, fin)
    # nav[date] = the value of my account on this date. HOW?
```
### How do we compute the account value (NAV)?
* total position value as of the previous rebalancing date = nav_prev (given)
* position weights at the previous rebalancing date = pos_prev (given)
* value of each position at the previous rebalancing date = nav_prev * pos_prev
* change in each position's value since then = current market cap / market cap at the previous rebalancing date
* current value of the previous positions = nav_prev * pos_prev * current market cap / previous market cap
* total position value at the current rebalancing date = sum(current value of the previous positions)
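In code, the NAV update chain above is one multiplication followed by a sum. A tiny numeric sketch with two hypothetical assets:

```python
import pandas as pd

nav_prev = 1.0
pos_prev = pd.Series({"A": 0.5, "B": 0.5})      # weights at last rebalance
mc_prev = pd.Series({"A": 100.0, "B": 200.0})   # market cap at last rebalance
mc_now = pd.Series({"A": 110.0, "B": 190.0})    # market cap today

# current value of each old position, then total account value
nav_now = (nav_prev * pos_prev * mc_now / mc_prev).sum()
# 0.5 * 1.10 + 0.5 * 0.95 = 1.025
```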
```
pos = {}
nav = {}
for i, date in enumerate(dates):
pos[date] = 매출상위(date, fin)
date_prev = dates[i-1]
nav_prev = nav[date_prev]
pos_prev = pos[date_prev]
assets_prev = pos_prev.index
mc_chg = mc.loc[date, assets_prev] / mc.loc[date_prev, assets_prev]
nav_pos_prev = nav_prev * pos_prev * mc_chg
nav[date] = nav_pos_prev.sum()
pos = {}
nav = {}
for i, date in enumerate(dates):
pos[date] = 매출상위(date, fin)
if i==0:
nav[date] = 1
else:
date_prev = dates[i-1]
nav_prev = nav[date_prev]
pos_prev = pos[date_prev]
assets_prev = pos_prev.index
mc_chg = mc.loc[date, assets_prev] / mc.loc[date_prev, assets_prev]
nav_pos_prev = nav_prev * pos_prev * mc_chg
nav[date] = nav_pos_prev.sum()
nav;
from IPython.core.debugger import set_trace
pos = {}
nav = {}
def 내계좌는얼마(nav, pos, date, date_prev, mc):
nav_prev = nav[date_prev]
pos_prev = pos[date_prev]
assets_prev = pos_prev.index
mc_chg = mc.loc[date, assets_prev] / mc.loc[date_prev, assets_prev]
nav_pos_prev = nav_prev * pos_prev * mc_chg
return nav_pos_prev.sum()
for i, date in enumerate(dates):
# set_trace()
pos[date] = 매출상위(date, fin)
if i==0:
nav[date] = 1
else:
date_prev = dates[i-1]
nav[date] = 내계좌는얼마(nav, pos, date, date_prev, mc)
pd.DataFrame({'Model':nav}).plot()
```
### Let's also build a benchmark (BM) strategy
Cap-weight the top 200 stocks by market capitalization.
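Cap-weighting is just normalizing the selected market caps so the weights sum to one. A tiny sketch with hypothetical caps:

```python
import pandas as pd

mc_today = pd.Series({"A": 300.0, "B": 200.0, "C": 100.0})  # hypothetical caps
position = mc_today.nlargest(2)        # top-n universe by market cap
position = position / position.sum()   # cap weights
# A: 300/500 = 0.6, B: 200/500 = 0.4
```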
```
position = mc.loc[date].nlargest(200)
position / position.sum();
def BM(date, fin=None, mc=None, n=200):
position = mc.loc[date].nlargest(n)
position = position / position.sum()
return position
def 매출상위(date, fin=None, mc=None, n=10):
fisyear = get_fisyear(date)
position = fin['매출액'].xs(fisyear, level=1).nlargest(n)
position[:] = 1/len(position)
return position
BM('2018-12-31', mc=mc);
pos = {}
nav = {}
pos_bm = {}
nav_bm = {}
n = 10
for i, date in enumerate(dates):
pos[date] = 매출상위(date, fin=fin, mc=mc, n=n)
pos_bm[date] = BM(date, fin=fin, mc=mc)
if i==0:
nav[date] = 1
nav_bm[date] = 1
else:
date_prev = dates[i-1]
nav[date] = 내계좌는얼마(nav, pos, date, date_prev, mc)
nav_bm[date] = 내계좌는얼마(nav_bm, pos_bm, date, date_prev, mc)
pd.DataFrame({'Model':nav, 'BM':nav_bm}).plot()
```
# A more polished backtester
* strategy = the blueprint
* backtester = Backtest(some strategy, BM, DB, other options...)
* backtester.run()
* backtester.plot() ...
```
class Backtest:
def __init__(self, model=None, bm=None, fin=None, mc=None, n=10):
self.model = model
self.bm = bm
self.fin = fin
self.mc = mc
self.n = n
bt = Backtest(model=매출상위, bm=BM, fin=fin, mc=mc, n=10)
bt.bm
import inspect
inspect.getsource(bt.bm)
class Backtest:
def __init__(self, model=None, bm=None, fin=None, mc=None, n=10):
self.model = model
self.bm = bm
self.fin = fin
self.mc = mc
self.n = n
        # move fin, mc, n, and 내계좌는얼마 onto self
        # store pos, nav, pos_bm, nav_bm on self
def run(self):
dates = mc.index[8:]
pos = {}
nav = {}
pos_bm = {}
nav_bm = {}
for i, date in enumerate(tqdm_notebook(dates)):
pos[date] = self.model(date, fin=self.fin, mc=self.mc, n=self.n)
pos_bm[date] = self.bm(date, fin=self.fin, mc=self.mc)
if i==0:
nav[date] = 1
nav_bm[date] = 1
else:
date_prev = dates[i-1]
nav[date] = self.내계좌는얼마(nav, pos, date, date_prev)
nav_bm[date] = self.내계좌는얼마(nav_bm, pos_bm, date, date_prev)
self.pos = pos
self.nav = nav
self.pos_bm = pos_bm
self.nav_bm = nav_bm
    # use self.mc instead of the global mc
def 내계좌는얼마(self, nav, pos, date, date_prev):
nav_prev = nav[date_prev]
pos_prev = pos[date_prev]
assets_prev = pos_prev.index
mc_chg = self.mc.loc[date, assets_prev] / self.mc.loc[date_prev, assets_prev]
nav_pos_prev = nav_prev * pos_prev * mc_chg
return nav_pos_prev.sum()
def navs(self):
return pd.DataFrame({'Model':self.nav, 'BM':self.nav_bm})
    # add plot_perf()
def plot_perf(self):
self.navs().plot()
    # add performance stats
def stats(self):
_navs = self.navs()
ndays = (_navs.index[-1]-_navs.index[0]).days
ann_rtn = (_navs.iloc[-1]**(365/ndays)) - 1
vol = _navs.pct_change().std() * (4**0.5)
return pd.DataFrame({
'Annual return': ann_rtn,
'Volatility': vol,
'Sharpe': ann_rtn/vol
})
```
### Add plot_perf()
```
bt = Backtest(model=매출상위, bm=BM, fin=fin, mc=mc, n=10)
bt.run()
bt.plot_perf()
```
### Add tqdm
```
from tqdm import tqdm_notebook
bt = Backtest(model=매출상위, bm=BM, fin=fin, mc=mc, n=10)
bt.run()
bt.plot_perf()
```
### Performance evaluation
* (1+R)^(number of years) = final nav
* R = (final nav)^(1/number of years) - 1
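Solving the compounding identity for the annualized return R, as the bullets above do, is a one-liner; a quick numeric check:

```python
# if NAV grows to 1.5 over exactly two years:
final_nav = 1.5
years = 2.0
ann_return = final_nav ** (1 / years) - 1  # roughly 22.5% per year

# compounding the annualized return back over the period recovers the final NAV
recovered_nav = (1 + ann_return) ** years
```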
```
bt = Backtest(model=매출상위, bm=BM, fin=fin, mc=mc, n=10)
bt.run()
bt.stats()
```
# Creating a new strategy
```
def 시총높고PB저평가(date, fin=None, mc=None, n=10):
fisyear = get_fisyear(date)
marketcap = mc.loc[date].nlargest(100)
univ = marketcap.index
bv = fin['자본총계'].xs(fisyear, level=1).loc[univ]
bp = bv / marketcap
position = bp.nlargest(n)
position[:] = 1/len(position)
return position
bt = Backtest(model=시총높고PB저평가, bm=BM, fin=fin, mc=mc, n=10)
bt.run()
bt.plot_perf()
```
# Finding cellular regions with superpixel analysis
**Overview:**
Whole-slide images often contain artifacts like marker or acellular regions that
need to be avoided during analysis. In this example we show how HistomicsTK can
be used to develop saliency detection algorithms that segment the slide at low
magnification to generate a map to guide higher magnification analyses. Here we
show how superpixel analysis can be used to locate hypercellular regions that
correspond to tumor-rich content.
This uses Simple Linear Iterative Clustering (SLIC) to get superpixels at a low
slide magnification to detect cellular regions. The first step of this pipeline
detects tissue regions (i.e. individual tissue pieces) using the `get_tissue_mask`
method of the `histomicstk.saliency` module. Then, each tissue piece is processed
separately for accuracy and disk space efficiency. It is important to keep in
mind that this does NOT rely on a tile iterator, but loads the entire tissue
region (but NOT the whole slide) in memory and passes it on to
`skimage.segmentation.slic` method. Not using a tile iterator helps keep the
superpixel sizes large enough to correspond to tissue boundaries.
Once superpixels are segmented, the image is deconvolved and features are extracted from the hematoxylin channel. Features include intensity and possibly also texture features. Then a multi-component Gaussian mixture model is fit to the features, and median intensity is used to rank superpixel clusters by 'cellularity' (since we are working with the hematoxylin channel).
Note that the decision to fit a Gaussian mixture model instead of using K-means clustering is a design choice. If you'd like to experiment, feel free to try other approaches for grouping superpixels into clusters.
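The cluster-ranking idea, fit a mixture model to superpixel features and order clusters by median hematoxylin intensity, can be sketched with synthetic 1-D features. This is a toy illustration assuming scikit-learn, not the HistomicsTK implementation:

```python
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(0)
# synthetic hematoxylin-intensity features: a dim cluster and a dark (dense) cluster
intensity = np.concatenate([rng.normal(0.2, 0.02, 200),
                            rng.normal(0.8, 0.02, 200)])[:, None]

gmm = GaussianMixture(n_components=2, random_state=0).fit(intensity)
labels = gmm.predict(intensity)

# rank clusters by median intensity; higher median = more "cellular"
medians = {k: float(np.median(intensity[labels == k])) for k in (0, 1)}
most_cellular = max(medians, key=medians.get)
```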
Additional functionality includes contour extraction to get the final segmentation boundaries of cellular regions and to visualize them in HistomicsUI using one's preferred colormap.
**Here are some sample results:**
From left to right: Slide thumbnail, superpixel classifications, contiguous cellular/acellular regions

**Where to look?**
```
|_ histomicstk/
|_saliency/
|_cellularity_detection.py
|_tests/
|_test_saliency.py
```
```
import tempfile
import girder_client
import numpy as np
from histomicstk.annotations_and_masks.annotation_and_mask_utils import (
delete_annotations_in_slide)
from histomicstk.saliency.cellularity_detection_superpixels import (
Cellularity_detector_superpixels)
import matplotlib.pylab as plt
from matplotlib.colors import ListedColormap
%matplotlib inline
# color map
vals = np.random.rand(256,3)
vals[0, ...] = [0.9, 0.9, 0.9]
cMap = ListedColormap(1 - vals)
```
## Prepwork
```
APIURL = 'http://candygram.neurology.emory.edu:8080/api/v1/'
SAMPLE_SLIDE_ID = "5d586d76bd4404c6b1f286ae"
# SAMPLE_SLIDE_ID = "5d8c296cbd4404c6b1fa5572"
gc = girder_client.GirderClient(apiUrl=APIURL)
gc.authenticate(apiKey='kri19nTIGOkWH01TbzRqfohaaDWb6kPecRqGmemb')
# This is where the run logs will be saved
logging_savepath = tempfile.mkdtemp()
# color normalization values from TCGA-A2-A3XS-DX1
cnorm_thumbnail = {
'mu': np.array([9.24496373, -0.00966569, 0.01757247]),
'sigma': np.array([0.35686209, 0.02566772, 0.02500282]),
}
# from the ROI in Amgad et al, 2019
cnorm_main = {
'mu': np.array([8.74108109, -0.12440419, 0.0444982]),
'sigma': np.array([0.6135447, 0.10989545, 0.0286032]),
}
# deleting existing annotations in target slide (if any)
delete_annotations_in_slide(gc, SAMPLE_SLIDE_ID)
```
## Initialize the cellularity detector
```
print(Cellularity_detector_superpixels.__init__.__doc__)
```
In this example, and as the default behavior, we use a handful of informative intensity features extracted from the hematoxylin channel after color deconvolution to fit a Gaussian mixture model. Empirically (on a few test slides), this seems to give better results than using the full suite of intensity and texture features available. Feel free to experiment with this and find the optimum combination of features for your application.
```
# init cellularity detector
cds = Cellularity_detector_superpixels(
gc, slide_id=SAMPLE_SLIDE_ID,
MAG=3.0, compactness=0.1, spixel_size_baseMag=256 * 256,
max_cellularity=40,
visualize_spixels=True, visualize_contiguous=True,
get_tissue_mask_kwargs={
'deconvolve_first': False,
'n_thresholding_steps': 2,
'sigma': 1.5,
'min_size': 500, },
verbose=2, monitorPrefix='test',
logging_savepath=logging_savepath)
```
## Set the color normalization values
You can choose to Reinhard color-normalize the slide thumbnail and/or the tissue image at the target magnification. You can either provide the mu and sigma values directly or provide the path to an image from which to infer these values. Please refer to the *color_normalization* module for Reinhard normalization implementation details. In this example, we use a "high-sensitivity, low-specificity" strategy to detect tissue, followed by the more specific cellularity detection module. In other words, the *tissue_detection* module is used to detect all tissue and only exclude whitespace and marker. Here we do NOT perform color normalization before tissue detection (empirically, it gives worse results), but we do normalize when detecting the cellular regions within the tissue.
```
# set color normalization for thumbnail
# cds.set_color_normalization_values(
# mu=cnorm_thumbnail['mu'],
# sigma=cnorm_thumbnail['sigma'], what='thumbnail')
# set color normalization values for main tissue
cds.set_color_normalization_values(
mu=cnorm_main['mu'], sigma=cnorm_main['sigma'], what='main')
```
## Run the detector
```
print(cds.run.__doc__)
tissue_pieces = cds.run()
```
## Check the results
The resulting list contains one object per "tissue piece" detected in the slide. You may explore various attributes such as the offset coordinates, tissue mask, superpixel labeled mask, superpixel feature data, and superpixel cluster properties.
```
plt.imshow(tissue_pieces[0].tissue_mask, cmap=cMap)
plt.imshow(tissue_pieces[0].spixel_mask, cmap=cMap)
tissue_pieces[0].fdata.head()
tissue_pieces[0].cluster_props
```
## Check the visualization on HistomicsUI
Now you may go to the slide on Digital Slide Archive and check the posted annotations.
```
import matplotlib.pyplot as plt
import numpy as np
import os
import PIL
import tensorflow as tf
from tensorflow import keras
from tensorflow.keras import layers
from tensorflow.keras.models import Sequential
from tensorflow.keras.preprocessing.image import ImageDataGenerator
import pathlib
from tqdm import tqdm
from abc import ABCMeta, abstractmethod
```
I have downloaded the dataset locally and set the paths below. Since the dataset is huge (~30 GB), I am not pushing it to the repository. Place the dataset's `data` dir adjacent to this Jupyter notebook in order to run it successfully.
```
train_data_dir = 'data/train'
test_data_dir = 'data/test'
train_data_path = pathlib.Path(train_data_dir)
test_data_path = pathlib.Path(test_data_dir)
```
Below are all the classes given for tissue samples in `train` and `test` dataset.
```
tissue_classes = [
'spleen',
'skin_1',
'skin_2',
'pancreas',
'lymph_node',
'small_intestine',
'endometrium_1',
'endometrium_2',
'liver',
'kidney',
'lung',
'colon'
]
```
Let us display an example image from each of the `12` classes of tissues in our dataset.
```
fig, ax = plt.subplots(nrows=4, ncols=3, figsize=(10, 10))
counter = 0
for row in ax:
for col in row:
images = list(train_data_path.glob(tissue_classes[counter] + '/*'))
image = np.array(PIL.Image.open(str(images[0])))
col.set_title(tissue_classes[counter])
col.imshow(image)
counter += 1
fig.tight_layout()
plt.show()
```
The dataset provides **1119** unique images for **training** and **600** unique images for **testing**.
Since the dataset is very large, it is neither advisable nor feasible to load all the data into memory at once. That is why we created a data generator that produces training/testing examples on demand, one batch at a time.
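The on-demand batching idea is independent of TensorFlow. A framework-free sketch of the same indexing logic (hypothetical items standing in for image paths):

```python
import math

class BatchGenerator:
    """Yield index-addressable batches instead of materializing all data."""

    def __init__(self, items, batch_size):
        self.items = list(items)
        self.batch_size = batch_size

    def __len__(self):
        # number of *full* batches; the trailing partial batch is dropped
        return math.floor(len(self.items) / self.batch_size)

    def __getitem__(self, index):
        start = index * self.batch_size
        return self.items[start:start + self.batch_size]

gen = BatchGenerator(range(10), batch_size=4)  # 2 full batches of 4
```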
The class below is the custom data generator we created to ingest images into the ML pipeline.
```
class TissueDataGenerator(tf.keras.utils.Sequence):
def __init__(self,
data_dir,
batch_size,
class_labels,
img_height=128,
img_width=128,
img_channels=3,
preprocess_func=None,
shuffle=True):
self.file_ds = tf.data.Dataset.list_files(str(data_dir + '/*/*'))
self.batch_size = batch_size
self.class_labels = class_labels
self.n_classes = len(class_labels)
self.img_size = (img_height, img_width)
self.img_n_channels = img_channels
self.shuffle = shuffle
self.preprocess_func = preprocess_func
self.label_mapping = self.find_label_mappings()
self.labeled_ds = self.file_ds.map(lambda f: tf.py_function(func=self.process_example,
inp=[f],
Tout=[tf.float32, tf.int32]))
self.labeled_ds = self.labeled_ds.batch(self.batch_size)
self.on_epoch_end()
def find_label_mappings(self):
mp = {}
for i, label in enumerate(self.class_labels):
mp[label] = i
return mp
def process_example(self, file_path):
label = tf.strings.split(file_path, os.sep)[-2]
label_map = self.label_mapping[str(label.numpy().decode('utf-8'))]
label_encode = tf.keras.utils.to_categorical(label_map, self.n_classes)
image = np.array(PIL.Image.open(str(file_path.numpy().decode('utf-8'))))
image = tf.image.resize(image, self.img_size)
if self.preprocess_func is not None:
image = self.preprocess_func(image)
return image, label_encode
def __getitem__(self, index):
        'Generate one batch of data (batches are served sequentially; index is unused)'
batch = next(self.iterator, None)
if batch is None:
self.on_epoch_end()
batch = next(self.iterator)
return batch
def on_epoch_end(self):
self.iterator = iter(self.labeled_ds)
def __len__(self):
return len(self.file_ds) // self.batch_size
```
While searching for the best image-classification model, we typically experiment with several different kinds of models, which leads to redundant code. To prevent that, we created the abstract model class below. Any model we want to experiment with can inherit this class to reuse the functionality common to all of our model classes, such as compiling and training the model, testing it, and plotting metrics.
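This is the classic template-method pattern built on `abc`. A framework-free sketch of the same structure (hypothetical class and model names, no TensorFlow required):

```python
from abc import ABC, abstractmethod

class BaseModel(ABC):
    """Shared behavior lives here; subclasses supply build_model."""

    def __init__(self):
        self.model = None

    @abstractmethod
    def build_model(self):
        ...

    def train(self):
        # shared guard, analogous to raise_if_not_built above
        if self.model is None:
            raise ValueError("build_model must set self.model first")
        return "trained " + self.model

class TinyModel(BaseModel):
    def build_model(self):
        self.model = "tiny-cnn"

m = TinyModel()
m.build_model()
```

Because `build_model` is abstract, `BaseModel` itself cannot be instantiated, which forces every experiment class to define its own architecture while inheriting the shared train/test/plot plumbing.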
```
class ModifiedModel:
__metaclass__ = ABCMeta
def __init__(self,
input_shape,
num_classes,
optimizer='adam',
loss='categorical_crossentropy',
metrics=['accuracy'],
verbose=True):
if not isinstance(input_shape, list) and not isinstance(input_shape, tuple):
raise TypeError('input_shape must be of type list or tuple.')
input_shape = tuple(input_shape)
if len(input_shape) != 3:
raise TypeError('input_shape must contain exactly 3 dimensions.')
self.input_shape = input_shape
self.num_classes = num_classes
self.optimizer = optimizer
self.loss = loss
self.metrics = metrics
self.verbose = verbose
self.history = None
self.model = None
@abstractmethod
def build_model(self):
pass
def compile_model(self, **kwargs):
self.raise_if_not_built()
self.model.compile(optimizer=self.optimizer,
loss=self.loss,
metrics=self.metrics, **kwargs)
def raise_if_not_built(self):
if self.model is None:
raise ValueError('object of model class has not created instance yet.')
def train(self, train_generator, epochs, **kwargs):
self.raise_if_not_built()
self.history = self.model.fit(train_generator, epochs=epochs, **kwargs)
def test(self, test_generator, **kwargs):
self.raise_if_not_built()
return self.model.evaluate(test_generator, **kwargs)
def plot_metrics(self):
if self.history is None:
raise ValueError('model must be trained to generate metric plot.')
if 'loss' not in self.history.history:
raise ValueError('history must contain loss information.')
if 'accuracy' not in self.history.history:
raise ValueError('history must contain accuracy information')
fig, ax = plt.subplots(nrows=1, ncols=2, figsize=(10, 5))
attrs = ['loss', 'accuracy']
counter = 0
for col in ax:
info = self.history.history[attrs[counter]]
col.plot(range(len(info)), info)
col.set_title(attrs[counter])
col.set_xlabel('Epochs')
col.set_ylabel(attrs[counter])
counter += 1
fig.tight_layout()
plt.show()
def display_score(self, score):
if len(score) < 2:
            raise ValueError('score must have at least 2 values')
print('Loss: {}, Accuracy: {}'.format(score[0], score[1]))
```
Below are some of the parameters which will be common across all the experiments and that is why we have decided to initialize them at the top and all other experiments will consume these three parameters.
**Note:** We have not fixed a single input image shape, because the required shape differs by model. We also did not use the original `(3000, 3000, 3)` resolution because of computational power restrictions; instead, we feed smaller images sized as each model requires.
```
batch_size = 4
num_channels = 3
epochs = 15
```
## Training Custom CNN model for image classification
The custom model inherits the `ModifiedModel` class defined above. We use multiple Conv + MaxPooling blocks followed by a softmax output. The input images are resized to shape `(128, 128, 3)`.
```
custom_img_height = 128
custom_img_width = 128
custom_train_gen = TissueDataGenerator(train_data_dir,
batch_size=batch_size,
class_labels=tissue_classes,
img_height=custom_img_height,
img_width=custom_img_width)
custom_test_gen = TissueDataGenerator(test_data_dir,
batch_size=batch_size,
class_labels=tissue_classes,
img_height=custom_img_height,
img_width=custom_img_width)
class CustomModel(ModifiedModel):
def __init__(self,
input_shape,
num_classes,
optimizer='adam',
loss='categorical_crossentropy',
metrics=['accuracy'],
verbose=True):
super().__init__(input_shape,
num_classes,
optimizer,
loss,
metrics,
verbose)
self.build_model()
self.compile_model()
def build_model(self):
self.model = Sequential([
layers.Rescaling(1./255, input_shape=self.input_shape),
layers.Conv2D(16, 3, padding='same', activation='relu'),
layers.MaxPooling2D(),
layers.Conv2D(32, 3, padding='same', activation='relu'),
layers.MaxPooling2D(),
layers.Conv2D(64, 3, padding='same', activation='relu'),
layers.MaxPooling2D(),
layers.Flatten(),
layers.Dense(128, activation='relu'),
layers.Dense(self.num_classes, activation = 'softmax')
])
customModel = CustomModel(input_shape=(custom_img_height, custom_img_width, num_channels),
num_classes=len(tissue_classes))
customModel.model.summary()
customModel.train(custom_train_gen, epochs=epochs)
customModel.plot_metrics()
custom_score = customModel.test(custom_test_gen)
customModel.display_score(custom_score)
```
We also experiment with pretrained models such as VGG, InceptionNet, and EfficientNet. The single `PretrainedModel` class below takes an instance of a pretrained model and uses it as a functional unit at the start of the classification model, followed by multiple fully connected layers and a softmax output.
```
class PretrainedModel(ModifiedModel):
def __init__(self,
input_shape,
num_classes,
pretrainedModel,
optimizer='adam',
loss='categorical_crossentropy',
metrics=['accuracy'],
verbose=True):
super().__init__(input_shape,
num_classes,
optimizer,
loss,
metrics,
verbose)
self.pretrained = pretrainedModel
self.build_model()
self.compile_model()
def build_model(self):
for layer in self.pretrained.layers:
layer.trainable = False
self.model = Sequential([
self.pretrained,
layers.Flatten(),
layers.Dense(512, activation='relu'),
layers.Dense(128, activation='relu'),
layers.Dense(self.num_classes, activation = 'softmax')
])
```
## Transfer Learning on VGG16
We use the pretrained `VGG16` model as the first layer of our model and retrain only the newly added layers. The input images are resized to shape `(224, 224, 3)`.
```
vgg_img_height = 224
vgg_img_width = 224
vgg_train_gen = TissueDataGenerator(train_data_dir,
batch_size=batch_size,
class_labels=tissue_classes,
img_height=vgg_img_height,
img_width=vgg_img_width,
preprocess_func=tf.keras.applications.vgg16.preprocess_input)
vgg_test_gen = TissueDataGenerator(test_data_dir,
batch_size=batch_size,
class_labels=tissue_classes,
img_height=vgg_img_height,
img_width=vgg_img_width,
preprocess_func=tf.keras.applications.vgg16.preprocess_input)
vggModel = PretrainedModel(input_shape=(vgg_img_height, vgg_img_width, num_channels),
num_classes=len(tissue_classes),
pretrainedModel=tf.keras.applications.vgg16.VGG16())
vggModel.model.summary()
vggModel.train(vgg_train_gen, epochs=epochs)
vggModel.plot_metrics()
vgg_score = vggModel.test(vgg_test_gen)
vggModel.display_score(vgg_score)
```
## Transfer Learning on InceptionV3
We use the pretrained `InceptionV3` model as the first layer of our model and retrain only the newly added layers. The input images are resized to shape `(299, 299, 3)`.
```
inception_img_height = 299
inception_img_width = 299
inception_train_gen = TissueDataGenerator(train_data_dir,
batch_size=batch_size,
class_labels=tissue_classes,
img_height=inception_img_height,
img_width=inception_img_width,
preprocess_func=tf.keras.applications.inception_v3.preprocess_input)
inception_test_gen = TissueDataGenerator(test_data_dir,
batch_size=batch_size,
class_labels=tissue_classes,
img_height=inception_img_height,
img_width=inception_img_width,
preprocess_func=tf.keras.applications.inception_v3.preprocess_input)
inceptionModel = PretrainedModel(input_shape=(inception_img_height, inception_img_width, num_channels),
num_classes=len(tissue_classes),
pretrainedModel=tf.keras.applications.inception_v3.InceptionV3())
inceptionModel.model.summary()
inceptionModel.train(inception_train_gen, epochs=epochs)
inceptionModel.plot_metrics()
inception_score = inceptionModel.test(inception_test_gen)
inceptionModel.display_score(inception_score)
```
## Transfer Learning on EfficientNetB7
We use the pretrained `EfficientNetB7` model as the first layer of our model and retrain only the newly added layers. The input images are resized to shape `(128, 128, 3)`.
```
effnet_img_height = 128
effnet_img_width = 128
effnet_train_gen = TissueDataGenerator(train_data_dir,
batch_size=batch_size,
class_labels=tissue_classes,
img_height=effnet_img_height,
img_width=effnet_img_width,
preprocess_func=tf.keras.applications.efficientnet.preprocess_input)
effnet_test_gen = TissueDataGenerator(test_data_dir,
batch_size=batch_size,
class_labels=tissue_classes,
img_height=effnet_img_height,
img_width=effnet_img_width,
preprocess_func=tf.keras.applications.efficientnet.preprocess_input)
effnetModel = PretrainedModel(input_shape=(effnet_img_height, effnet_img_width, num_channels),
num_classes=len(tissue_classes),
pretrainedModel=tf.keras.applications.efficientnet.EfficientNetB7())
effnetModel.model.summary()
effnetModel.train(effnet_train_gen, epochs=epochs)
effnetModel.plot_metrics()
effnet_score = effnetModel.test(effnet_test_gen)
effnetModel.display_score(effnet_score)
```
Note that the accuracy of the three pretrained models above would improve with more training epochs, but we could not train longer because of limited computational power and time constraints.
## t-SNE plot for visualizing data distributions
Let us draw t-SNE plot of image features w.r.t. `customModel` that we created.
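t-SNE projects high-dimensional feature vectors down to 2-D so that nearby points stay nearby, which makes class structure visible. A minimal scikit-learn sketch on synthetic features (hypothetical shapes, not the notebook's model outputs):

```python
import numpy as np
from sklearn.manifold import TSNE

rng = np.random.default_rng(41)
features = rng.normal(size=(50, 16))  # 50 hypothetical 16-d feature vectors

# perplexity must be smaller than the number of samples
emb = TSNE(n_components=2, perplexity=10, random_state=41).fit_transform(features)
```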
```
img_height = 128
img_width = 128
model = customModel
label2int = {}
for i, t in enumerate(tissue_classes):
label2int[t] = i
def process_path(file_path):
label = tf.strings.split(file_path, os.sep)[-2]
label_map = label2int[str(label.numpy().decode('utf-8'))]
image = np.array(PIL.Image.open(str(file_path.numpy().decode('utf-8'))))
image = tf.image.resize(image, (img_height, img_width))
feature = model.model(np.array([image]))
return feature.numpy()[0], label_map
train_gen = TissueDataGenerator(train_data_dir,
batch_size=batch_size,
class_labels=tissue_classes,
img_height=img_height,
img_width=img_width)
train_ds = train_gen.file_ds.map(lambda f: tf.py_function(func=process_path,
inp=[f],
Tout=[tf.float32, tf.int32]))
test_gen = TissueDataGenerator(test_data_dir,
batch_size=batch_size,
class_labels=tissue_classes,
img_height=img_height,
img_width=img_width)
test_ds = test_gen.file_ds.map(lambda f: tf.py_function(func=process_path,
inp=[f],
Tout=[tf.float32, tf.int32]))
def extract_data(ds):
images = None
labels = None
for img, lab in tqdm(ds):
if images is None:
images = np.array([img])
labels = np.array([lab])
else:
images = np.append(images, [img], axis=0)
labels = np.append(labels, [lab], axis=0)
return images, labels
train_images, train_labels = extract_data(train_ds)
test_images, test_labels = extract_data(test_ds)
from sklearn.manifold import TSNE
import seaborn as sns
import matplotlib.patheffects as PathEffects
train_tsne = TSNE(n_components=2, random_state=41).fit_transform(train_images)
test_tsne = TSNE(n_components=2, random_state=41).fit_transform(test_images)
def tissue_scatter(x, colors):
num_classes = len(np.unique(colors))
palette = np.array(sns.color_palette("hls", num_classes))
# create a scatter plot.
f = plt.figure(figsize=(8, 8))
ax = plt.subplot(aspect='equal')
    sc = ax.scatter(x[:,0], x[:,1], lw=0, s=40, c=palette[colors.astype(int)])
plt.xlim(-25, 25)
plt.ylim(-25, 25)
ax.axis('off')
ax.axis('tight')
# add the labels for each digit corresponding to the label
txts = []
for i in range(num_classes):
# Position of each label at median of data points.
xtext, ytext = np.median(x[colors == i, :], axis=0)
txt = ax.text(xtext, ytext, str(i), fontsize=24)
txt.set_path_effects([
PathEffects.Stroke(linewidth=5, foreground="w"),
PathEffects.Normal()])
txts.append(txt)
return f, ax, sc, txts
tissue_scatter(train_tsne, train_labels)
tissue_scatter(test_tsne, test_labels)
```
## Reasons behind misclassification
- One possible reason is mixed pixels: several objects contributing to a single pixel make it harder to identify the genuine class.
- The original images are `(3000, 3000, 3)`, but we resized them down to the much smaller `(128, 128, 3)` for the model, so many details in the image data may be lost.
- We trained for only 15 epochs because of limited time and computational power.
```
# General imports
import numpy as np
import pandas as pd
import os, sys, gc, time, warnings, pickle, psutil, random
warnings.filterwarnings('ignore')
# Seed everything to make all processes deterministic
def seed_everything(seed=0):
random.seed(seed)
np.random.seed(seed)
# Read data
def get_data_by_store(store):
    # Read and concat basic features
df = pd.concat([pd.read_pickle(BASE),
pd.read_pickle(PRICE).iloc[:,2:],
pd.read_pickle(CALENDAR).iloc[:,2:]],
axis=1)
# Leave only relevant store
df = df[df['store_id']==store]
# With memory limits we have to read
# lags and mean encoding features
# separately and drop items that we don't need.
# As our Features Grids are aligned
# we can use index to keep only necessary rows
# Alignment is good for us as concat uses less memory than merge.
df2 = pd.read_pickle(MEAN_ENC)[mean_features]
df2 = df2[df2.index.isin(df.index)]
df3 = pd.read_pickle(LAGS).iloc[:,3:]
df3 = df3[df3.index.isin(df.index)]
df = pd.concat([df, df2], axis=1)
del df2 # to not reach memory limit
df = pd.concat([df, df3], axis=1)
del df3 # to not reach memory limit
    if store in ['CA_1', 'CA_2', 'CA_3','CA_4','TX_1','TX_2','TX_3']:
remove_features = ['id','state_id','store_id','date','wm_yr_wk','d',TARGET,'cluster','snow_m',
'rolling_quantile_97_28', 'rolling_quantile_87.5_28', 'rolling_quantile_50_28', 'rolling_quantile_22.5_28', 'rolling_quantile_3_28', 'rolling_quantile_97_56', 'rolling_quantile_87.5_56', 'rolling_quantile_50_56', 'rolling_quantile_22.5_56', 'rolling_quantile_3_56', 'rolling_quantile_97_168', 'rolling_quantile_87.5_168', 'rolling_quantile_50_168', 'rolling_quantile_22.5_168', 'rolling_quantile_3_168']
else:
remove_features = ['id','state_id','store_id','date','wm_yr_wk','d',TARGET,'cluster',
'rolling_quantile_97_28', 'rolling_quantile_87.5_28', 'rolling_quantile_50_28', 'rolling_quantile_22.5_28', 'rolling_quantile_3_28', 'rolling_quantile_97_56', 'rolling_quantile_87.5_56', 'rolling_quantile_50_56', 'rolling_quantile_22.5_56', 'rolling_quantile_3_56', 'rolling_quantile_97_168', 'rolling_quantile_87.5_168', 'rolling_quantile_50_168', 'rolling_quantile_22.5_168', 'rolling_quantile_3_168']
# Create features list
features = [col for col in list(df) if col not in remove_features]
df = df[['id','d',TARGET]+features]
# Skipping first n rows
df = df[df['d']>=START_TRAIN].reset_index(drop=True)
return df, features
# Recombine Test set after training
def get_base_test():
    base_test = pd.DataFrame()
    for store_id in STORES_IDS:
        temp_df = pd.read_pickle('test_' + store_id + str(VER) + '.pkl')
        temp_df['store_id'] = store_id
        base_test = pd.concat([base_test, temp_df]).reset_index(drop=True)
    return base_test
########################### Helper to make dynamic rolling lags
#################################################################################
def make_lag(LAG_DAY):
    lag_df = base_test[['id', 'd', TARGET]]
    col_name = 'sales_lag_' + str(LAG_DAY)
    lag_df[col_name] = lag_df.groupby(['id'])[TARGET].transform(
        lambda x: x.shift(LAG_DAY)).astype(np.float16)
    return lag_df[[col_name]]

def make_lag_roll(LAG_DAY, lag_df_new):
    lag_df = base_test[['id', 'd', TARGET]]
    lag_df = lag_df.sort_values(by=["d"])
    for i in range(0, len(LAG_DAY)):
        shift_day = LAG_DAY[i][0]
        roll_wind = LAG_DAY[i][1]
        col_name = 'rolling_mean_tmp_' + str(shift_day) + '_' + str(roll_wind)
        lag_df[col_name] = (lag_df.groupby(['id'])[TARGET]).transform(
            lambda x: x.shift(shift_day).rolling(roll_wind).mean())
    lag_df_new = lag_df.drop(columns=["sales"])
    return lag_df_new
import lightgbm as lgb
lgb_params = {
'boosting_type': 'gbdt',
'objective': 'tweedie',
'tweedie_variance_power': 1.1,
'metric': 'rmse',
'subsample': 0.5,
'subsample_freq': 1,
'learning_rate': 0.03,
"lambda":0.1,
'num_leaves': 2**11-1,
'min_data_in_leaf': 2**12-1,
'feature_fraction': 0.5,
'max_bin': 100,
'n_estimators': 1400,
'boost_from_average': False,
'verbose': -1,
}
# lgb_params ={
# "objective" : "tweedie",
# "metric" :"rmse",
# "force_row_wise" : True,
# "learning_rate" : 0.075,
# "sub_feature" : 0.8,
# "sub_row" : 0.75,
# "bagging_freq" : 1,
# "lambda_l2" : 0.1,
# "metric": ["rmse"],
# "nthread": -1,
# "tweedie_variance_power":1.1,
# 'verbosity': 1,
# # 'num_iterations' : 1500,
# 'num_leaves': 128,
# "min_data_in_leaf": 104,
# }
# Let's look closer at the params
## 'boosting_type': 'gbdt'
# we have 'goss' option for faster training
# but it normally leads to underfit.
# Also there is good 'dart' mode
# but it takes forever to train
# and model performance depends
# a lot on random factor
# https://www.kaggle.com/c/home-credit-default-risk/discussion/60921
## 'objective': 'tweedie'
# Tweedie Gradient Boosting for Extremely
# Unbalanced Zero-inflated Data
# https://arxiv.org/pdf/1811.10192.pdf
# and many more articles about Tweedie
#
# Strange (for me) but Tweedie is close in results
# to my own ugly loss.
# My advice here - make OWN LOSS function
# https://www.kaggle.com/c/m5-forecasting-accuracy/discussion/140564
# https://www.kaggle.com/c/m5-forecasting-accuracy/discussion/143070
# I think many of you already using it (after poisson kernel appeared)
# (kagglers are very good with "params" testing and tuning).
# Try to figure out why Tweedie works.
# probably it will show you new features options
# or data transformation (Target transformation?).
## 'tweedie_variance_power': 1.1
# default = 1.5
# set this closer to 2 to shift towards a Gamma distribution
# set this closer to 1 to shift towards a Poisson distribution
# my CV shows 1.1 is optimal
# but you can make your own choice
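# An illustrative aside (not from the original kernel): the Tweedie unit
# deviance for 1 < p < 2 is what the 'tweedie' objective minimises, and it
# interpolates between Poisson (p -> 1) and Gamma (p -> 2). It is zero
# exactly when the prediction equals the target:
def tweedie_deviance(y, mu, p=1.1):
    # valid for 1 < p < 2, mu > 0, y >= 0
    return 2 * (y ** (2 - p) / ((1 - p) * (2 - p))
                - y * mu ** (1 - p) / (1 - p)
                + mu ** (2 - p) / (2 - p))

assert abs(tweedie_deviance(3.0, 3.0)) < 1e-9  # perfect prediction -> zero deviance
assert tweedie_deviance(3.0, 1.0) > 0          # any miss -> positive deviance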
## 'metric': 'rmse'
# Doesn't mean anything to us
# as competition metric is different
# and we don't use early stopping here.
# So rmse serves just for general
# model performance overview.
# Also we use "fake" validation set
# (as it makes part of the training set)
# so even general rmse score doesn't mean anything))
# https://www.kaggle.com/c/m5-forecasting-accuracy/discussion/133834
## 'subsample': 0.5
# Serves to fight with overfit
# this will randomly select part of data without resampling
# Chosen by CV (my CV can be wrong!)
# Next kernel will be about CV
##'subsample_freq': 1
# frequency for bagging
# default value - seems ok
## 'learning_rate': 0.03
# Chosen by CV
# Smaller - longer training
# but there is an option to stop
# in "local minimum"
# Bigger - faster training
# but there is a chance to
# not find the "global minimum"
## 'num_leaves': 2**11-1
## 'min_data_in_leaf': 2**12-1
# Force model to use more features
# We need it to reduce "recursive"
# error impact.
# Also it leads to overfit
# that's why we use small
# 'max_bin': 100
## l1, l2 regularizations
# https://towardsdatascience.com/l1-and-l2-regularization-methods-ce25e7fc831c
# Good tiny explanation
# l2 can work with bigger num_leaves
# but my CV doesn't show boost
## 'n_estimators': 1400
# CV shows that there should be
# different values for each state/store.
# Current value was chosen
# for general purpose.
# As we don't use any early stopping,
# be careful not to overfit the Public LB.
##'feature_fraction': 0.5
# LightGBM will randomly select
# part of features on each iteration (tree).
# We have maaaany features
# and many of them are "duplicates"
# and many just "noise"
# good values here - 0.5-0.7 (by CV)
## 'boost_from_average': False
# There is some "problem"
# to code boost_from_average for
# custom loss
# 'True' makes training faster
# BUT use it carefully
# https://github.com/microsoft/LightGBM/issues/1514
VER = 5 # Our model version
SEED = 42 # We want all things
seed_everything(SEED) # to be as deterministic
lgb_params['seed'] = SEED # as possible
N_CORES = psutil.cpu_count() # Available CPU cores
#LIMITS and const
TARGET = 'sales' # Our target
START_TRAIN = 0 # We can skip some rows (Nans/faster training)
END_TRAIN = 1941 # End day of our train set, change this part for final
P_HORIZON = 28 # Prediction horizon
#FEATURES to remove
## These features lead to overfit
## or values not present in test set
mean_features = ['enc_cat_id_mean','enc_cat_id_std',
'enc_dept_id_mean','enc_dept_id_std',
'enc_item_id_mean','enc_item_id_std']
#PATHS for Features
BASE = 'grid_part_1.pkl'
PRICE = 'grid_part_2.pkl'
CALENDAR = 'grid_part_3.pkl'
LAGS = 'lags_df_28_v3.pkl'
MEAN_ENC = 'mean_encoding_df.pkl'
# AUX(pretrained) Models paths
#STORES ids
STORES_IDS = pd.read_csv('sales_train_evaluation.csv')['store_id']#change this part for final
STORES_IDS = list(STORES_IDS.unique())
#SPLITS for lags creation
SHIFT_DAY = 28
N_LAGS = 15
LAGS_SPLIT = [col for col in range(SHIFT_DAY,SHIFT_DAY+N_LAGS)]
ROLS_SPLIT = []
for i in [1, 7, 14]:
    for j in [7, 14, 28, 56]:
        ROLS_SPLIT.append([i, j])
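# Quick sanity check (illustrative, not from the original kernel): the nested
# loops above build the cross product of shift days [1, 7, 14] and rolling
# windows [7, 14, 28, 56], i.e. 12 (shift, window) pairs, matching the 12
# rolling_mean_tmp_* features used at prediction time.
from itertools import product
_expected_rols = [[i, j] for i, j in product([1, 7, 14], [7, 14, 28, 56])]
assert len(_expected_rols) == 12
assert _expected_rols[0] == [1, 7] and _expected_rols[-1] == [14, 56]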
for store_id in STORES_IDS:
    print('Train', store_id)

    # Get grid for the current store
    grid_df, features_columns = get_data_by_store(store_id)
    print(features_columns)

    # Masks for
    # Train (all data up to END_TRAIN)
    # "Validation" (last 28 days - not a real validation set)
    # Test (all data after END_TRAIN-100,
    # with some gap for recursive features)
    train_mask = grid_df['d'] <= END_TRAIN
    valid_mask = train_mask & (grid_df['d'] > (END_TRAIN - P_HORIZON))
    preds_mask = grid_df['d'] > (END_TRAIN - 100)

    # Apply masks and save lgb dataset as bin
    # to reduce memory spikes during dtype conversions
    # https://github.com/Microsoft/LightGBM/issues/1032
    # "To avoid any conversions, you should always use np.float32"
    # or save to bin before starting training
    # https://www.kaggle.com/c/talkingdata-adtracking-fraud-detection/discussion/53773
    train_data = lgb.Dataset(grid_df[train_mask][features_columns],
                             label=grid_df[train_mask][TARGET],
                             weight=grid_df[train_mask]['sell_price'])
    valid_data = lgb.Dataset(grid_df[valid_mask][features_columns],
                             label=grid_df[valid_mask][TARGET],
                             weight=grid_df[valid_mask]['sell_price'])

    # Save part of the dataset for later predictions,
    # removing features that we need to calculate recursively
    grid_df = grid_df[preds_mask].reset_index(drop=True)
    keep_cols = [col for col in list(grid_df) if '_tmp_' not in col]
    grid_df = grid_df[keep_cols]
    grid_df.to_pickle('test_' + store_id + str(VER) + '.pkl')
    del grid_df
    gc.collect()

    # Launch the seeder again to make lgb training 100% deterministic:
    # with each "code line" np.random "evolves",
    # so we need (may want) to "reset" it
    seed_everything(SEED)
    estimator = lgb.train(lgb_params,
                          train_data,
                          valid_sets=[valid_data],
                          verbose_eval=100,
                          )

    imp_type = "gain"
    features = estimator.feature_name()
    importances = estimator.feature_importance(imp_type)
    importance_df = pd.DataFrame(features, columns=['features'])
    importance_df['importances'] = importances
    importance_df = importance_df.sort_values(by='importances', ascending=False)
    importance_df.to_csv(store_id + '_fe_imp_' + str(VER) + '.csv', index=False)
    del importance_df
    gc.collect()

    # Save model - it's not a real '.bin' but a pickle file.
    # estimator = lgb.Booster(model_file='model.txt')
    # can only predict with the best iteration (or the saved iteration);
    # pickle.dump gives us more flexibility,
    # like estimator.predict(TEST, num_iteration=100)
    # num_iteration - number of iterations to predict with;
    # NULL or <= 0 means use the best iteration
    model_name = 'lgb_model_' + store_id + '_v' + str(VER) + '.bin'
    pickle.dump(estimator, open(model_name, 'wb'))

    # Remove temporary files and objects
    # to free some hdd space and ram memory
    # !rm train_data.bin
    del train_data, valid_data, estimator
    gc.collect()
# Create Dummy DataFrame to store predictions
all_preds = pd.DataFrame()
# Join back the Test dataset with
# a small part of the training data
# to make recursive features
base_test = get_base_test()
# Timer to measure predictions time
main_time = time.time()
# Loop over each prediction day.
# As rolling lags are the most time-consuming,
# we will calculate them for the whole day at once
for PREDICT_DAY in range(1, 29):
    print('Predict | Day:', PREDICT_DAY)
    start_time = time.time()

    # Make a temporary grid to calculate rolling lags
    grid_df = base_test.copy()
    lag_df_new = pd.DataFrame()
    lag_df_new = make_lag_roll(ROLS_SPLIT, lag_df_new)
    grid_df = grid_df.merge(lag_df_new, on=['id', 'd'], how='left')

    for store_id in STORES_IDS:
        # The two store groups were trained on slightly different feature
        # sets: 'snow_m' was dropped for the CA/TX stores
        MODEL_FEATURES = ['item_id', 'dept_id', 'cat_id', 'release', 'sell_price', 'price_max',
                          'price_min', 'price_std', 'price_mean', 'price_norm', 'price_rank_dept',
                          'price_nunique', 'item_nunique', 'price_momentum', 'price_momentum_m',
                          'price_momentum_y', 'temperature_high', 'temperature_con', 'rainfall_m', 'snow_m',
                          'event_name_1', 'event_type_1', 'event_name_2', 'event_type_2', 'snap_CA',
                          'snap_TX', 'snap_WI', 'is_first_half_month', 'event_bef_weekend', 'event_after_weekend',
                          'NBA', 'event_attention_after', 'event_attention_bef', 'event_attention_sum', 'tm_d',
                          'tm_w', 'tm_m', 'tm_q', 'tm_y', 'tm_wm', 'tm_dw', 'tm_w_end', 'enc_cat_id_mean',
                          'enc_cat_id_std', 'enc_dept_id_mean', 'enc_dept_id_std', 'enc_item_id_mean',
                          'enc_item_id_std', 'sales_lag_28', 'sales_lag_29', 'sales_lag_30', 'sales_lag_31',
                          'sales_lag_32', 'sales_lag_33', 'sales_lag_34', 'sales_lag_35', 'sales_lag_36',
                          'sales_lag_37', 'sales_lag_38', 'sales_lag_39', 'sales_lag_40', 'sales_lag_41',
                          'sales_lag_42', 'rolling_mean_7', 'rolling_std_7', 'rolling_mean_14', 'rolling_std_14',
                          'rolling_mean_28', 'rolling_std_28', 'rolling_mean_56', 'rolling_std_56',
                          'rolling_mean_168', 'rolling_std_168', 'rolling_mean_tmp_1_7', 'rolling_mean_tmp_1_14',
                          'rolling_mean_tmp_1_28', 'rolling_mean_tmp_1_56', 'rolling_mean_tmp_7_7',
                          'rolling_mean_tmp_7_14', 'rolling_mean_tmp_7_28', 'rolling_mean_tmp_7_56',
                          'rolling_mean_tmp_14_7', 'rolling_mean_tmp_14_14', 'rolling_mean_tmp_14_28', 'rolling_mean_tmp_14_56']
        if store_id in ['CA_1', 'CA_2', 'CA_3', 'CA_4', 'TX_1', 'TX_2', 'TX_3']:
            MODEL_FEATURES = [f for f in MODEL_FEATURES if f != 'snow_m']

        # Read the model for this store and make predictions
        # for each day/store pair
        model_path = 'lgb_model_' + store_id + '_v' + str(VER) + '.bin'
        estimator = pickle.load(open(model_path, 'rb'))
        day_mask = base_test['d'] == (END_TRAIN + PREDICT_DAY)
        store_mask = base_test['store_id'] == store_id
        mask = (day_mask) & (store_mask)
        base_test.loc[mask, TARGET] = estimator.predict(grid_df[mask][MODEL_FEATURES])

    # Make good column naming and add
    # to the all_preds DataFrame
    temp_df = base_test[day_mask][['id', TARGET]]
    temp_df.columns = ['id', 'F' + str(PREDICT_DAY)]
    if 'id' in list(all_preds):
        all_preds = all_preds.merge(temp_df, on=['id'], how='left')
    else:
        all_preds = temp_df.copy()

    print('#' * 10, ' %0.2f min round |' % ((time.time() - start_time) / 60),
          ' %0.2f min total |' % ((time.time() - main_time) / 60),
          ' %0.2f day sales |' % (temp_df['F' + str(PREDICT_DAY)].sum()))
    del temp_df, lag_df_new
all_preds = all_preds.reset_index(drop=True)
all_preds.head()
all_preds.tail()
all_preds.shape
all_preds.describe()
# all the following is changed
# replace validation part
train_df = pd.read_csv('sales_train_evaluation.csv')
train_df=train_df[['id','d_1914','d_1915','d_1916','d_1917','d_1918','d_1919','d_1920','d_1921','d_1922','d_1923',
'd_1924','d_1925','d_1926','d_1927','d_1928','d_1929','d_1930','d_1931','d_1932','d_1933',
'd_1934','d_1935','d_1936','d_1937','d_1938','d_1939','d_1940','d_1941']]
train_df.head()
submission = pd.read_csv('sample_submission.csv')
submission.head()
submission.tail()
train_df['id']=train_df['id'].str.replace('evaluation','validation')
train_df.head()
train_df.columns=submission.columns
train_df.head()
train_df.tail()
train_df.shape
submission.shape
submission = submission[['id']]
sub1 = submission.merge(train_df, on=['id'], how='left')
sub1.head()
sub1.tail()
sub1=sub1[:30490]
sub1.head()
sub1.tail()
sub2 = submission.merge(all_preds, on=['id'], how='left')
sub2.head()
sub2.tail()
sub2=sub2[30490:]
sub2.head()
sub2.tail()
final_sub=pd.concat([sub1,sub2],axis=0)
final_sub.head()
final_sub.tail()
final_sub.describe()
final_sub.to_csv('lgb_bystore_final3.csv',index=False)
```
### Bag of words model
```
# load all necessary libraries
import pandas as pd
from nltk.tokenize import word_tokenize
from nltk.corpus import stopwords
from sklearn.feature_extraction.text import CountVectorizer
pd.set_option('display.max_colwidth', 100)
```
#### Let's build a basic bag of words model on three sample documents
```
documents = ["Gangs of Wasseypur is a great movie.", "The success of a movie depends on the performance of the actors.", "There are no new movies releasing this week."]
print(documents)
def preprocess(document):
    'changes document to lower case and removes stopwords'

    # change sentence to lower case
    document = document.lower()

    # tokenize into words
    words = word_tokenize(document)

    # remove stop words
    words = [word for word in words if word not in stopwords.words("english")]

    # join words to make sentence
    document = " ".join(words)

    return document
documents = [preprocess(document) for document in documents]
print(documents)
```
#### Creating bag of words model using count vectorizer function
```
vectorizer = CountVectorizer()
bow_model = vectorizer.fit_transform(documents)
print(bow_model) # prints the (row, column) coordinates and counts of the non-zero cells in the sparse matrix
# print the full sparse matrix
print(bow_model.toarray())
print(bow_model.shape)
print(vectorizer.get_feature_names())
```
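Under the hood, `CountVectorizer` builds a vocabulary over the whole corpus and counts token occurrences per document. A minimal pure-Python sketch of the same idea (whitespace tokenisation only, ignoring `CountVectorizer`'s token pattern and other options):

```python
from collections import Counter

def bag_of_words(docs):
    """Build a sorted vocabulary and per-document count vectors."""
    tokenized = [doc.lower().split() for doc in docs]
    vocab = sorted(set(tok for doc in tokenized for tok in doc))
    vectors = []
    for doc in tokenized:
        counts = Counter(doc)
        vectors.append([counts.get(tok, 0) for tok in vocab])
    return vocab, vectors

vocab, vectors = bag_of_words(["gangs wasseypur great movie",
                               "success movie depends performance actors"])
print(vocab)
print(vectors)
```

Each row of `vectors` corresponds to one document, each column to one vocabulary token, just like `bow_model.toarray()` above.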
### Let's create a bag of words model on the spam dataset.
```
# load data
spam = pd.read_csv("SMSSpamCollection.txt", sep = "\t", names=["label", "message"])
spam.head()
```
##### Let's take a subset of data (first 50 rows only) and create bag of word model on that.
```
spam = spam.iloc[0:50,:]
print(spam)
# extract the messages from the dataframe
messages = spam.message
print(messages)
# convert messages into list
messages = [message for message in messages]
print(messages)
# preprocess messages using the preprocess function
messages = [preprocess(message) for message in messages]
print(messages)
# bag of words model
vectorizer = CountVectorizer()
bow_model = vectorizer.fit_transform(messages)
print(bow_model.toarray())
print(bow_model.shape)
print(vectorizer.get_feature_names())
```
* A lot of duplicate tokens such as 'win' and 'winner'; 'reply' and 'replying'; 'want' and 'wanted' etc.
## Stemming and lemmatising
```
from nltk.stem.porter import PorterStemmer
from nltk.stem import WordNetLemmatizer
stemmer = PorterStemmer()
wordnet_lemmatizer = WordNetLemmatizer()
# add stemming and lemmatisation in the preprocess function
def preprocess(document, stem=True):
    'changes document to lower case, removes stopwords, and stems or lemmatises the words'

    # change sentence to lower case
    document = document.lower()

    # tokenize into words
    words = word_tokenize(document)

    # remove stop words
    words = [word for word in words if word not in stopwords.words("english")]

    if stem:
        words = [stemmer.stem(word) for word in words]
    else:
        words = [wordnet_lemmatizer.lemmatize(word, pos='v') for word in words]

    # join words to make sentence
    document = " ".join(words)

    return document
```
### Bag of words model on stemmed messages
```
# stem messages
messages = [preprocess(message, stem=True) for message in spam.message]
# bag of words model
vectorizer = CountVectorizer()
bow_model = vectorizer.fit_transform(messages)
# look at the dataframe
pd.DataFrame(bow_model.toarray(), columns = vectorizer.get_feature_names())
# token names
print(vectorizer.get_feature_names())
```
### 359 tokens after stemming the messages as compared to 381 tokens without stemming.
### Let's try lemmatizing the messages.
```
# lemmatise messages
messages = [preprocess(message, stem=False) for message in spam.message]
# bag of words model
vectorizer = CountVectorizer()
bow_model = vectorizer.fit_transform(messages)
# look at the dataframe
pd.DataFrame(bow_model.toarray(), columns = vectorizer.get_feature_names())
# token names
print(vectorizer.get_feature_names())
```
### 363 tokens after lemmatising the messages, compared to 381 tokens without lemmatising. The stemmer, on the other hand, reduces the token count further, to 359. Lemmatisation doesn't work as well as expected here because the data is very noisy.
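To see why stemming tends to collapse more tokens than lemmatisation, here is a toy suffix-stripper (deliberately crude — NOT the Porter algorithm used above) applied to some of the duplicate tokens noted earlier. A real stemmer works the same way in spirit: it chops inflectional suffixes without checking that the result is a dictionary word, which is aggressive but shrinks the vocabulary:

```python
def toy_stem(word):
    """Crude illustrative stemmer: strip a common suffix if the stem stays long enough."""
    for suffix in ("ing", "ed", "er", "s"):
        if word.endswith(suffix) and len(word) > len(suffix) + 2:
            return word[: -len(suffix)]
    return word

tokens = ["reply", "replying", "want", "wanted", "talk", "talking", "talks"]
stemmed = sorted(set(toy_stem(t) for t in tokens))
print(len(set(tokens)), "tokens ->", len(stemmed), "stems:", stemmed)
```

A lemmatiser would instead map each word to a valid dictionary lemma, which is safer but misses forms it cannot analyse, especially in noisy text like these SMS messages.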
# 📃 Solution for Exercise M2.01
The aim of this exercise is to make the following experiments:
* train and test a support vector machine classifier through
cross-validation;
* study the effect of the parameter gamma of this classifier using a
validation curve;
* study whether adding new samples to the dataset would be useful in terms
of classification performance, using a learning curve.
To make these experiments we will first load the blood transfusion dataset.
<div class="admonition note alert alert-info">
<p class="first admonition-title" style="font-weight: bold;">Note</p>
<p class="last">If you want a deeper overview regarding this dataset, you can refer to the
Appendix - Datasets description section at the end of this MOOC.</p>
</div>
```
import pandas as pd
blood_transfusion = pd.read_csv("../datasets/blood_transfusion.csv")
data = blood_transfusion.drop(columns="Class")
target = blood_transfusion["Class"]
```
We will use a support vector machine classifier (SVM). In its simplest
form, an SVM classifier is a linear classifier behaving similarly to a
logistic regression. Indeed, the optimization used to find the optimal
weights of the linear model is different, but we don't need to know these
details for the exercise.
Also, this classifier can become more flexible/expressive by using a
so-called kernel that makes the model non-linear. Again, no knowledge of
the underlying mathematics is required to accomplish this exercise.
We will use an RBF kernel, where a parameter `gamma` allows us to tune the
flexibility of the model.
First let's create a predictive pipeline made of:
* a [`sklearn.preprocessing.StandardScaler`](https://scikit-learn.org/stable/modules/generated/sklearn.preprocessing.StandardScaler.html)
with default parameter;
* a [`sklearn.svm.SVC`](https://scikit-learn.org/stable/modules/generated/sklearn.svm.SVC.html)
where the parameter `kernel` could be set to `"rbf"`. Note that this is the
default.
```
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC
model = make_pipeline(StandardScaler(), SVC())
```
Evaluate the statistical performance of your model by cross-validation with a
`ShuffleSplit` scheme. Thus, you can use
[`sklearn.model_selection.cross_validate`](https://scikit-learn.org/stable/modules/generated/sklearn.model_selection.cross_validate.html)
and pass a [`sklearn.model_selection.ShuffleSplit`](https://scikit-learn.org/stable/modules/generated/sklearn.model_selection.ShuffleSplit.html)
to the `cv` parameter. Only fix `random_state=0` in the `ShuffleSplit`
and leave the other parameters at their defaults.
```
from sklearn.model_selection import cross_validate, ShuffleSplit
cv = ShuffleSplit(random_state=0)
cv_results = cross_validate(model, data, target, cv=cv, n_jobs=-1)
cv_results = pd.DataFrame(cv_results)
cv_results
print(
f"Accuracy score of our model:\n"
f"{cv_results['test_score'].mean():.3f} +/- "
f"{cv_results['test_score'].std():.3f}"
)
```
As previously mentioned, the parameter `gamma` is one of the parameter
controlling under/over-fitting in support vector machine with an RBF kernel.
Compute the validation curve to evaluate the effect of the parameter `gamma`.
You can vary its value between `1e-3` and `1e2` by generating samples on a
logarithmic scale: use `np.logspace(-3, 2, num=30)`.
Since we are manipulating a `Pipeline` the parameter name will be set to
`svc__gamma` instead of only `gamma`. You can retrieve the parameter name
using `model.get_params().keys()`. We will go more into details regarding
accessing and setting hyperparameter in the next section.
```
import numpy as np
from sklearn.model_selection import validation_curve
gammas = np.logspace(-3, 2, num=30)
param_name = "svc__gamma"
train_scores, test_scores = validation_curve(
model, data, target, param_name=param_name, param_range=gammas, cv=cv,
n_jobs=-1)
```
Plot the validation curve for the train and test scores.
```
import matplotlib.pyplot as plt
plt.errorbar(gammas, train_scores.mean(axis=1),
yerr=train_scores.std(axis=1), label='Training error')
plt.errorbar(gammas, test_scores.mean(axis=1),
yerr=test_scores.std(axis=1), label='Testing error')
plt.legend()
plt.xscale("log")
plt.xlabel(r"Value of hyperparameter $\gamma$")
plt.ylabel("Accuracy score")
_ = plt.title("Validation score of support vector machine")
```
Looking at the curve, we can clearly identify the over-fitting regime of
the SVC classifier when `gamma > 1`.
The best setting is around `gamma = 1`, while for `gamma < 1` it is not
entirely clear whether the classifier is under-fitting, but the testing
score is worse than for `gamma = 1`.
Now, you can perform an analysis to check whether adding new samples to the
dataset could help our model to better generalize. Compute the learning curve
(using [`sklearn.model_selection.learning_curve`](https://scikit-learn.org/stable/modules/generated/sklearn.model_selection.learning_curve.html))
by computing the train and test scores for different training dataset size.
Plot the train and test scores with respect to the number of samples.
```
from sklearn.model_selection import learning_curve
train_sizes = np.linspace(0.1, 1, num=10)
results = learning_curve(
model, data, target, train_sizes=train_sizes, cv=cv, n_jobs=-1)
train_size, train_scores, test_scores = results[:3]
plt.errorbar(train_size, train_scores.mean(axis=1),
yerr=train_scores.std(axis=1), label='Training error')
plt.errorbar(train_size, test_scores.mean(axis=1),
yerr=test_scores.std(axis=1), label='Testing error')
plt.legend()
plt.xlabel("Number of samples in the training set")
plt.ylabel("Accuracy")
_ = plt.title("Learning curve for support vector machine")
```
We observe that adding new samples to the dataset does not improve the
testing score. We can only conclude that the standard deviation of the
training error decreases as more samples are added, which is no surprise.
# Improve accuracy of pdf batch processing with Amazon Textract and Amazon A2I
In this chapter and its accompanying notebook, you will learn, through an example, how to use Amazon Textract in asynchronous mode to extract content from multiple PDF files in batch, send specific content from these PDF documents to an Amazon A2I human review loop where you can review and modify the values, and then send them to an Amazon DynamoDB table for downstream processing.
**Important Note:** This is an accompanying notebook for Chapter 16 - Improve accuracy of pdf batch processing with Amazon Textract and Amazon A2I from the Natural Language Processing with AWS AI Services book. Please make sure to read the instructions provided in the book prior to attempting this notebook.
### Step 0 - Create a private human review workforce
This step requires you to use the AWS Console. However, we highly recommend that you follow it, especially when creating your own task with the custom template we will use for this notebook. We will create a private workteam and add only one user (you) to it.
To create a private team:
1. Go to AWS Console > Amazon SageMaker > Labeling workforces
1. Click "Private" and then "Create private team".
1. Enter the desired name for your private workteam.
1. Enter your own email address in the "Email addresses" section.
1. Enter the name of your organization and a contact email to administer the private workteam.
1. Click "Create Private Team".
1. The AWS Console should now return to AWS Console > Amazon SageMaker > Labeling workforces. Your newly created team should be visible under "Private teams". Next to it you will see an ARN which is a long string that looks like arn:aws:sagemaker:region-name-123456:workteam/private-crowd/team-name. Please copy this ARN to paste in the cell below.
1. You should get an email from no-reply@verificationemail.com that contains your workforce username and password.
1. In AWS Console > Amazon SageMaker > Labeling workforces, click on the URL in Labeling portal sign-in URL. Use the email/password combination from Step 8 to log in (you will be asked to create a new, non-default password).
1. This is your private worker's interface. When we create a verification task in Verify your task using a private team below, your task should appear in this window. You can invite your colleagues to participate in the labeling job by clicking the "Invite new workers" button.
Please refer to the [Amazon SageMaker documentation](https://docs.aws.amazon.com/sagemaker/latest/dg/sms-workforce-management.html) if you need more details.
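Before pasting the ARN into the notebook, a quick format check can catch copy-paste mistakes. This is an illustrative sketch that assumes the private-workteam ARN format shown in step 7 above:

```python
import re

# Format assumed from the example ARN in step 7:
# arn:aws:sagemaker:<region>:<12-digit account>:workteam/private-crowd/<team-name>
WORKTEAM_ARN_RE = re.compile(
    r"^arn:aws:sagemaker:[a-z0-9-]+:\d{12}:workteam/private-crowd/[\w-]+$"
)

def looks_like_workteam_arn(arn):
    """Return True if the string matches the expected private-workteam ARN shape."""
    return bool(WORKTEAM_ARN_RE.match(arn))

print(looks_like_workteam_arn(
    "arn:aws:sagemaker:us-east-1:123456789012:workteam/private-crowd/my-team"))
```

This is only a sanity check on the string's shape; the authoritative validation is whether Amazon A2I accepts the ARN when you create the flow definition.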
### Step 1 - Import libraries and initialize variables
```
# Step 1 - Cell 1
import urllib
import boto3
import os
import json
import time
import uuid
import sagemaker
import pandas as pd
from sagemaker import get_execution_role
from sagemaker.s3 import S3Uploader, S3Downloader
textract = boto3.client('textract')
s3 = boto3.resource('s3')
bucket = "<S3-bucket-name>"
prefix = 'chapter16/input'
# Enter the Workteam ARN you created from point 7 in Step 0 above
WORKTEAM_ARN= '<your-private-workteam-arn>'
# Step 1 - Cell 2
# Upload the SEC registration documents
s3_client = boto3.client('s3')
for secfile in os.listdir():
    if secfile.endswith('pdf'):
        response = s3_client.upload_file(secfile, bucket, prefix + '/' + secfile)
        print("Uploaded {} to S3 bucket {} in folder {}".format(secfile, bucket, prefix))
```
### Step 2 - Start Amazon Textract Text Detection Job
```
# Step 2 - Cell 1
input_bucket = s3.Bucket(bucket)
jobids = {}
# Step 2 - Cell 2
for doc in input_bucket.objects.all():
    if doc.key.startswith(prefix) and doc.key.endswith('pdf'):
        tres = textract.start_document_text_detection(
            DocumentLocation={
                "S3Object": {
                    "Bucket": bucket,
                    "Name": doc.key
                }
            }
        )
        jobids[doc.key.split('/')[2]] = tres['JobId']
# Step 2 - Cell 3
for j in jobids:
    print("Textract detection Job ID for {} is {}".format(j, str(jobids[j])))
```
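Note that `start_document_text_detection` is asynchronous: the jobs started above need time to complete before their results can be fetched in Step 3. A small polling helper can wait for each job (illustrative sketch; `fetch_status` is assumed to wrap `textract.get_document_text_detection(JobId=...)['JobStatus']`):

```python
import time

def wait_for_job(fetch_status, delay=5, max_tries=60):
    """Poll until the job leaves IN_PROGRESS; return the final status string."""
    for _ in range(max_tries):
        status = fetch_status()
        if status != 'IN_PROGRESS':
            return status  # e.g. SUCCEEDED or FAILED
        time.sleep(delay)
    return 'TIMED_OUT'
```

For example: `wait_for_job(lambda: textract.get_document_text_detection(JobId=jobids[doc])['JobStatus'])` before moving on to Step 3.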
### Step 3 - Get Amazon Textract Text Detection Results
```
# Step 3 - Cell 1
class TextExtractor():
    def extract_text(self, jobId):
        """Extract text from the document corresponding to jobId and
        generate a dict of pages containing the text.
        """
        textract_result = self.__get_textract_result(jobId)
        pages = {}
        self.__extract_all_pages(jobId, textract_result, pages, [])
        return pages

    def __get_textract_result(self, jobId):
        """Retrieve the textract result for a job Id."""
        result = textract.get_document_text_detection(
            JobId=jobId
        )
        return result

    def __extract_all_pages(self, jobId, textract_result, pages, page_numbers):
        """Extract page content: build the pages dict, and
        recurse if the response is too big (when NextToken is provided by textract).
        """
        blocks = [x for x in textract_result['Blocks'] if x['BlockType'] == "LINE"]
        content = {}
        line = 0
        for block in blocks:
            line += 1
            content['Text' + str(line)] = block['Text']
            content['Confidence' + str(line)] = block['Confidence']
            if block['Page'] not in page_numbers:
                page_numbers.append(block['Page'])
                pages[block['Page']] = {
                    "Number": block['Page'],
                    "Content": content
                }
            else:
                pages[block['Page']]['Content'] = content
        nextToken = textract_result.get("NextToken", "")
        if nextToken != '':
            textract_result = textract.get_document_text_detection(
                JobId=jobId,
                NextToken=nextToken
            )
            self.__extract_all_pages(jobId,
                                     textract_result,
                                     pages,
                                     page_numbers)
# Step 3 - Cell 2
text_extractor = TextExtractor()
indoc = {}
df_indoc = pd.DataFrame(columns = ['DocName','LineNr','DetectedText','Confidence', 'CorrectedText', 'Comments'])
for x in jobids:
    pages = text_extractor.extract_text(jobids[x])
    contdict = pages[1]['Content']
    for row in range(1, (int(len(contdict) / 2)) + 1):
        df_indoc.loc[len(df_indoc.index)] = [x, row, contdict['Text' + str(row)],
                                             round(contdict['Confidence' + str(row)], 1), '', '']
# Uncomment the line below if you want to review the contents of this dataframe
#df_indoc.to_csv('extract.csv')
# Step 3 - Cell 3
# The lines in each document that are of importance for the human loop to review
bounding_dict = {'lines': '9:11:12:13:15:16:17:18:19:20:21:22:23:24:25'}
# Step 3 - Cell 4
# Let us now create a new dataframe that only contains the subset of lines we need from the bounding_dict
df_newdoc = pd.DataFrame(columns = ['DocName','LineNr','DetectedText','Confidence','CorrectedText','Comments'])
for idx, row in df_indoc.iterrows():
    if str(row['LineNr']) in bounding_dict['lines'].split(':'):
        df_newdoc.loc[len(df_newdoc.index)] = [row['DocName'], row['LineNr'], row['DetectedText'],
                                               row['Confidence'], row['CorrectedText'], row['Comments']]
df_newdoc
```
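The Step 4 task template below iterates over `task.input.document`, so each row of `df_newdoc` has to be shaped into an object with `linenr`, `detectedtext` and `confidence` fields (the field names follow the template). A minimal sketch of that mapping with plain dicts — the sample rows here are made up for illustration:

```python
# Hypothetical extracted rows, standing in for df_newdoc records
rows = [
    {"LineNr": 9, "DetectedText": "FORM D", "Confidence": 99.1},
    {"LineNr": 11, "DetectedText": "Notice of Exempt Offering", "Confidence": 98.7},
]

# Shape each row into the pair object the Liquid template expects
document_pairs = [
    {"linenr": r["LineNr"],
     "detectedtext": r["DetectedText"],
     "confidence": r["Confidence"]}
    for r in rows
]
print(document_pairs[0])
```

When starting the human loop, this list would go into the `InputContent` JSON under the `document` key, alongside the page image URI the template renders.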
### Step 4 - Create the Amazon A2I human review Task UI
We will customize a sample tabular template from the Amazon A2I sample Task UI template page - https://github.com/aws-samples/amazon-a2i-sample-task-uis
```
# Step 4 - Cell 1
# Initialize A2I variables
a2i_prefix = "chapter16/a2i-results"
# Define IAM role
role = get_execution_role()
print("RoleArn: {}".format(role))
timestamp = time.strftime("%Y-%m-%d-%H-%M-%S", time.gmtime())
# Amazon SageMaker client
sagemaker_client = boto3.client('sagemaker')
# Amazon Augment AI (A2I) client
a2i = boto3.client('sagemaker-a2i-runtime')
# Flow definition name - this value is unique per account and region. You can also provide your own value here.
flowDefinitionName = 'fd-pdf-docs-' + timestamp
# Task UI name - this value is unique per account and region. You can also provide your own value here.
taskUIName = 'ui-pdf-docs-' + timestamp
# Flow definition outputs
OUTPUT_PATH = f's3://{bucket}/{a2i_prefix}'
# Step 4 - Cell 2
# We will use the tabular Liquid template and customize it for our requirements
template = r"""
<script src="https://assets.crowd.aws/crowd-html-elements.js"></script>
<style>
table, tr, th, td {
border: 1px solid black;
border-collapse: collapse;
padding: 5px;
}
</style>
<crowd-form>
<div>
<h1>Instructions</h1>
<p>Please review the SEC registration form inputs, and make corrections where appropriate. </p>
</div>
<div>
<h3>Original Registration Form - Page 1</h3>
<classification-target>
<img style="width: 70%; max-height: 40%; margin-bottom: 10px" src="{{ task.input.image | grant_read_access }}"/>
</classification-target>
</div>
<br>
<h1> Please enter your modifications below </h1>
<table>
<tr>
<th>Line Nr</th>
<th style="width:500px">Detected Text</th>
<th style="width:500px">Confidence</th>
<th>Change Required</th>
<th style="width:500px">Corrected Text</th>
<th>Comments</th>
</tr>
{% for pair in task.input.document %}
<tr>
<td>{{ pair.linenr }}</td>
<td><crowd-text-area name="predicteddoc{{ pair.linenr }}" value="{{ pair.detectedtext }}"></crowd-text-area></td>
<td><crowd-text-area name="confidence{{ pair.linenr }}" value="{{ pair.confidence }}"></crowd-text-area></td>
<td>
<p>
<input type="radio" id="agree{{ pair.linenr }}" name="rating{{ pair.linenr }}" value="agree" required>
<label for="agree{{ pair.linenr }}">Correct</label>
</p>
<p>
<input type="radio" id="disagree{{ pair.linenr }}" name="rating{{ pair.linenr }}" value="disagree" required>
<label for="disagree{{ pair.linenr }}">Incorrect</label>
</p>
</td>
<td>
<p>
<input style="width:500px" rows="3" type="text" name="correcteddoc{{ pair.linenr }}" value="{{pair.detectedtext}}" required/>
</p>
</td>
<td>
<p>
<input style="width:500px" rows="3" type="text" name="comments{{ pair.linenr }}" placeholder="Explain why you changed the value"/>
</p>
</td>
</tr>
{% endfor %}
</table>
<br>
<br>
</crowd-form>
"""
# Step 4 - Cell 3
# Define the method to initialize and create the Task UI
def create_task_ui():
response = sagemaker_client.create_human_task_ui(
HumanTaskUiName=taskUIName,
UiTemplate={'Content': template})
return response
# Step 4 - Cell 4
# Execute the method to create the Task UI
humanTaskUiResponse = create_task_ui()
humanTaskUiArn = humanTaskUiResponse['HumanTaskUiArn']
print(humanTaskUiArn)
```
### Step 5 - Create the Amazon A2I flow definition
In this section, we're going to create a flow definition. Flow definitions allow us to specify:
* The workforce that your tasks will be sent to.
* The instructions that your workforce will receive. This is called a worker task template.
* Where your output data will be stored.
This notebook is going to use the API, but you can optionally create this workflow definition in the console as well.
For more details and instructions, see: https://docs.aws.amazon.com/sagemaker/latest/dg/a2i-create-flow-definition.html.
```
# Step 5 - Cell 1
create_workflow_definition_response = sagemaker_client.create_flow_definition(
FlowDefinitionName=flowDefinitionName,
RoleArn=role,
HumanLoopConfig= {
"WorkteamArn": WORKTEAM_ARN,
"HumanTaskUiArn": humanTaskUiArn,
"TaskCount": 1,
"TaskDescription": "Review the contents and correct values as indicated",
"TaskTitle": "SEC Registration Form Review"
},
OutputConfig={
"S3OutputPath" : OUTPUT_PATH
}
)
flowDefinitionArn = create_workflow_definition_response['FlowDefinitionArn'] # let's save this ARN for future use
# Step 5 - Cell 2
for x in range(60):
describeFlowDefinitionResponse = sagemaker_client.describe_flow_definition(FlowDefinitionName=flowDefinitionName)
print(describeFlowDefinitionResponse['FlowDefinitionStatus'])
if (describeFlowDefinitionResponse['FlowDefinitionStatus'] == 'Active'):
print("Flow Definition is active")
break
time.sleep(2)
```
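The polling loop in Cell 2 can be factored into a small reusable helper. The sketch below is generic: the `describe` callable is a stand-in for `sagemaker_client.describe_flow_definition`, and the stub at the bottom exists only to demonstrate the helper:

```python
import time

def wait_until_active(describe, timeout_s=120, interval_s=2):
    """Poll a describe() callable until it reports an 'Active' status.

    Returns True if the status became 'Active' before the timeout.
    """
    deadline = time.monotonic() + timeout_s
    while time.monotonic() < deadline:
        if describe()['FlowDefinitionStatus'] == 'Active':
            return True
        time.sleep(interval_s)
    return False

# Demo with a stub that becomes Active on the third call.
states = iter(['Initializing', 'Initializing', 'Active'])
ok = wait_until_active(lambda: {'FlowDefinitionStatus': next(states)}, interval_s=0)
print(ok)  # True
```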
### Step 6 - Activate the Amazon A2I flow definition
```
# Step 6 - Cell 1
# We will display the first page of each PDF for reference on what is being edited by the human loop
reg_images = {}
for image in os.listdir():
if image.endswith('png'):
reg_images[image.split('_')[0]] = S3Uploader.upload(image, 's3://{}/{}'.format(bucket, prefix))
# Step 6 - Cell 2
# Activate human loops for all three documents. These will be delivered for review sequentially in the Task UI.
# We will also send only low-confidence detections to A2I so the human team can update the text to what it should actually be
humanLoopName = {}
docs = df_newdoc.DocName.unique()
# confidence threshold
confidence_threshold = 95
for doc in docs:
doc_list = []
humanLoopName[doc] = str(uuid.uuid4())
for idx, line in df_newdoc.iterrows():
# Send only those lines whose confidence score is less than threshold
if line['DocName'] == doc and line['Confidence'] <= confidence_threshold:
doc_list.append({'linenr': line['LineNr'], 'detectedtext': line['DetectedText'], 'confidence':line['Confidence']})
ip_content = {"document": doc_list,
'image': reg_images[doc.split('.')[0]]
}
start_loop_response = a2i.start_human_loop(
HumanLoopName=humanLoopName[doc],
FlowDefinitionArn=flowDefinitionArn,
HumanLoopInput={
"InputContent": json.dumps(ip_content)
}
)
# Step 6 - Cell 3
completed_human_loops = []
for doc in humanLoopName:
resp = a2i.describe_human_loop(HumanLoopName=humanLoopName[doc])
print(f'HumanLoop Name: {humanLoopName[doc]}')
print(f'HumanLoop Status: {resp["HumanLoopStatus"]}')
print(f'HumanLoop Output Destination: {resp["HumanLoopOutput"]}')
print('\n')
# Step 6 - Cell 4
workteamName = WORKTEAM_ARN[WORKTEAM_ARN.rfind('/') + 1:]
print("Navigate to the private worker portal and do the tasks. Make sure you've invited yourself to your workteam!")
print('https://' + sagemaker_client.describe_workteam(WorkteamName=workteamName)['Workteam']['SubDomain'])
# Step 6 - Cell 5
completed_human_loops = []
for doc in humanLoopName:
resp = a2i.describe_human_loop(HumanLoopName=humanLoopName[doc])
print(f'HumanLoop Name: {humanLoopName[doc]}')
print(f'HumanLoop Status: {resp["HumanLoopStatus"]}')
print(f'HumanLoop Output Destination: {resp["HumanLoopOutput"]}')
print('\n')
if resp["HumanLoopStatus"] == "Completed":
completed_human_loops.append(resp)
# Step 6 - Cell 7
import re
import pandas as pd
for resp in completed_human_loops:
splitted_string = re.split('s3://' + bucket + '/', resp['HumanLoopOutput']['OutputS3Uri'])
output_bucket_key = splitted_string[1]
response = s3_client.get_object(Bucket=bucket, Key=output_bucket_key)
content = response["Body"].read()
json_output = json.loads(content)
loop_name = json_output['humanLoopName']
for i in json_output['humanAnswers']:
x = i['answerContent']
docname = list(humanLoopName.keys())[list(humanLoopName.values()).index(loop_name)]
for i, r in df_newdoc.iterrows():
if r['DocName'] == docname:
df_newdoc.at[i,'CorrectedText'] = x['correcteddoc'+str(r['LineNr'])] if 'correcteddoc'+str(r['LineNr']) in x else ''
df_newdoc.at[i,'Comments'] = x['comments'+str(r['LineNr'])] if 'comments'+str(r['LineNr']) in x else ''
# Step 6 - Cell 8
df_newdoc.head(30)
```
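Cell 7 above maps keys like `correcteddoc<LineNr>` and `comments<LineNr>` from the A2I `answerContent` payload back onto the dataframe rows. The same mapping, sketched on plain dictionaries (the answer payload below is invented for illustration; the key naming follows the Task UI defined earlier):

```python
# Hypothetical answerContent payload from a completed human loop.
answer_content = {
    'correcteddoc9': 'Acme Capital LLC',
    'comments9': 'Fixed OCR error',
    'correcteddoc11': 'New York',
}
rows = [{'LineNr': 9, 'CorrectedText': '', 'Comments': ''},
        {'LineNr': 10, 'CorrectedText': '', 'Comments': ''},
        {'LineNr': 11, 'CorrectedText': '', 'Comments': ''}]
for r in rows:
    n = str(r['LineNr'])
    r['CorrectedText'] = answer_content.get('correcteddoc' + n, '')
    r['Comments'] = answer_content.get('comments' + n, '')
print(rows[0]['CorrectedText'])  # Acme Capital LLC
```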
### Step 7 - Save changes to Amazon DynamoDB
```
# Step 7 - Cell 1
# Create the Amazon DynamoDB table - note that a new DynamoDB table is created every time you execute this cell
# Get the service resource.
dynamodb = boto3.resource('dynamodb')
tablename = "SEC-registration-"+str(uuid.uuid4())
# Create the DynamoDB table.
table = dynamodb.create_table(
TableName=tablename,
KeySchema=[
{
'AttributeName': 'row_nr',
'KeyType': 'HASH'
}
],
AttributeDefinitions=[
{
'AttributeName': 'row_nr',
'AttributeType': 'N'
},
],
ProvisionedThroughput={
'ReadCapacityUnits': 5,
'WriteCapacityUnits': 5
}
)
# Wait until the table exists; this can take a minute or so
table.meta.client.get_waiter('table_exists').wait(TableName=tablename)
# Print out some data about the table.
print("Table successfully created")
# Step 7 - Cell 2
# Load the Amazon DynamoDB table
for idx, row in df_newdoc.iterrows():
table.put_item(
Item={
'row_nr': idx,
'doc_name': str(row['DocName']) ,
'line_nr': str(row['LineNr']),
'detected_line': str(row['DetectedText']),
'confidence': str(row['Confidence']),
'corrected_line': str(row['CorrectedText']),
'change_comments': str(row['Comments'])
}
)
print("Items were successfully created in DynamoDB table")
```
### End of Notebook
Please return to Chapter 16, *Improve accuracy of PDF batch processing with Amazon Textract and Amazon A2I*, in the *Natural Language Processing with AWS AI Services* book to proceed further.
import modules and get command-line parameters if running as script
```
from probrnn import models, data, inference
import numpy as np
import json
from matplotlib import pyplot as plt
from IPython.display import clear_output
```
parameters for the model and training
```
params = \
{
"N_ITERATIONS": 10 ** 5,
"VALIDATE_EACH": 100,
"SAVE_EACH": 1000,
"LOG_EVERY": 50,
"LEARNING_RATE": 0.0001,
"N_HIDDEN": 256,
"N_BINS": 50,
"BATCH_SIZE": 50,
}
```
Get some correlated toy data
```
datastruct = data.CoupledToyData(n_bins=params["N_BINS"])
x, _ = next(datastruct._gen(1))
x = datastruct.get_readable(x)
plt.figure()
plt.plot(x)
plt.show()
```
do some training
```
model = models.NADE(datastruct, params=params)
training = models.Training(
model,
"../models/toy_nade_bivariate",
"../models/toy_nade_bivariate_training.json",
)
def print_function(trer, i, batch):
if i % 10 == 0:
clear_output()
        print("loss: {}; iteration {}".format(np.mean(trer[-100:]), i))
training.train(print_function)
```
visualize the training errors
```
with open("../models/toy_nade_bivariate_training.json") as f:
errs = json.load(f)
plt.figure()
plt.plot(np.array(errs["training_error"])[:, 0],
np.array(errs["training_error"])[:, 1])
plt.plot(np.array(errs["validation_error"])[:, 0],
np.array(errs["validation_error"])[:, 1], 'r')
plt.legend(["training", "validation"])
plt.show()
```
plot some weight traces
```
for x in errs.keys():
if x != "training_error" and x != "validation_error" and "train" not in x:
plt.figure()
for key in errs[x].keys():
if key == "mean":
plt.plot(errs[x][key], 'b', linewidth=5.0)
elif key == "random":
plt.plot(errs[x][key], 'c')
else:
plt.plot(errs[x][key], 'b', linestyle='--')
plt.title("variable: {}".format(x))
plt.show()
```
load trained model
```
load_name = "../models/toy_nade_bivariate_12000"
model = models.NADE(datastruct, fn=load_name)
print(json.dumps(model.params, indent=4))
```
try some sampling
```
x = model.sample(200)
plt.plot(x[::2])
plt.plot(x[1::2])
plt.show()
```
try some imputation
```
x = datastruct.simulate()
x_missing = np.zeros(x.shape[0] * 2)
x_missing[::2] = x[:, 0]
x_missing[1::2] = np.nan
estimate = inference.NaiveSIS(model, x_missing, 1000, binned=False, quiet=False).estimate()
plt.figure()
plt.plot(estimate[::2])
plt.plot(estimate[1::2])
plt.show()
```
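The imputation setup above interleaves the two channels so that even indices hold the observed channel and odd indices are `NaN` placeholders for the model to fill in. A stripped-down sketch of that layout without NumPy:

```python
from math import isnan

# Even slots: observed channel; odd slots: missing values to be imputed.
x = [0.1, 0.2, 0.3]                        # observed channel
x_missing = [0.0] * (len(x) * 2)
x_missing[::2] = x                         # even indices get the data
x_missing[1::2] = [float('nan')] * len(x)  # odd indices are marked missing
print(x_missing[::2])  # [0.1, 0.2, 0.3]
```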
```
import numpy as np
import pandas as pd
import os
import json
import time
from IPython.display import clear_output
from IPython.display import HTML
import matplotlib.pyplot as plt
from matplotlib import animation
from matplotlib import colors
import numpy as np
from skimage.segmentation import flood, flood_fill
%matplotlib inline
# define solver
class ARCSolver:
def __init__(self, task_filename):
# load task and extract input and output pairs
self.task_filename = task_filename
self.task = self.load_task(task_filename)
self.train_inputs, self.train_outputs, self.test_inputs, self.test_outputs = \
self.extract_io_pairs()
self.test_pred = np.zeros((5, 5))
self.test_pred_height, self.test_pred_width = self.test_pred.shape
self.solved = False # have we solved the task yet?
self.selected_colour = 0
self.clipboard = None
self.description = ''
# variables for plotting
self.cmap = colors.ListedColormap(
['#000000', '#0074D9','#FF4136','#2ECC40','#FFDC00',
'#AAAAAA', '#F012BE', '#FF851B', '#7FDBFF', '#870C25'])
self.colour_to_num = {'black': 0, 'blue': 1, 'red': 2, 'green': 3, 'yellow': 4,
'grey': 5, 'magenta': 6, 'orange': 7, 'light_blue': 8,
'maroon': 9}
self.num_to_colour = {0: 'black', 1: 'blue', 2: 'red', 3: 'green', 4: 'yellow',
                              5: 'grey', 6: 'magenta', 7: 'orange', 8: 'light_blue',
9: 'maroon'}
def load_task(self, task_filename):
with open(task_filename, 'r') as f:
task = json.load(f)
return task
def plot_task(self):
"""
Plots the first train and test pairs of a specified task,
using same color scheme as the ARC app
"""
norm = colors.Normalize(vmin=0, vmax=9)
n_train = len(self.task['train'])
fig, axs = plt.subplots(n_train+1, 2, figsize=(10, 10))
for i in range(n_train):
axs[i, 0].imshow(self.task['train'][i]['input'], cmap=self.cmap, norm=norm)
axs[i, 0].axis('off')
axs[i, 0].set_title('Train Input')
axs[i, 1].imshow(self.task['train'][i]['output'], cmap=self.cmap, norm=norm)
axs[i, 1].axis('off')
axs[i, 1].set_title('Train Output')
axs[n_train, 0].imshow(self.task['test'][0]['input'], cmap=self.cmap, norm=norm)
axs[n_train, 0].axis('off')
axs[n_train, 0].set_title('Test Input')
axs[n_train, 1].imshow(self.task['test'][0]['output'], cmap=self.cmap, norm=norm)
axs[n_train, 1].axis('off')
axs[n_train, 1].set_title('Test Output')
plt.tight_layout()
plt.show()
    def plot_grid(self, grid):
        """
        Plots a single grid
        """
        norm = colors.Normalize(vmin=0, vmax=9)
        plt.figure(figsize=(4, 4))
        plt.imshow(grid, cmap=self.cmap, norm=norm)
        plt.axis('off')
        plt.show()
def plot_grids(self, grids):
"""
Plots a list of grids
"""
n_grids = len(grids)
norm = colors.Normalize(vmin=0, vmax=9)
fig, axs = plt.subplots(1, n_grids, figsize=(6, 6), squeeze=False)
for i in range(n_grids):
axs[0, i].imshow(grids[i], cmap=self.cmap, norm=norm)
axs[0, i].axis('off')
plt.tight_layout()
plt.show()
def extract_io_pairs(self):
train = self.task['train']
test = self.task['test']
n_train = len(train)
n_test = len(test)
train_inputs = np.array([train[i]['input'] for i in range(n_train)])
train_outputs = np.array([train[i]['output'] for i in range(n_train)])
test_inputs = np.array([test[i]['input'] for i in range(n_test)])
test_outputs = np.array([test[i]['output'] for i in range(n_test)])
return train_inputs, train_outputs, test_inputs, test_outputs
def copy_from_input(self):
# copy over the first test input
self.test_pred = self.test_inputs[0].copy()
self.test_pred_height, self.test_pred_width = self.test_inputs[0].shape
self.description = 'copy from input'
def reset(self):
# resets grid to all zeros with size of the grid based on current settings
self.test_pred = np.zeros((self.test_pred_height, self.test_pred_width))
self.description = 'reset'
def resize(self):
# resizes the grid
prev_test_pred = self.test_pred.copy()
prev_test_pred_width = self.test_pred_width
prev_test_pred_height = self.test_pred_height
# sample new grid size
new_test_pred_width = np.random.choice(np.arange(1, 5))
new_test_pred_height = np.random.choice(np.arange(1, 5))
new_test_pred = np.zeros((new_test_pred_height, new_test_pred_width))
# copy over values
for i in range(min(prev_test_pred_height, new_test_pred_height)):
for j in range(min(prev_test_pred_width, new_test_pred_width)):
new_test_pred[i, j] = prev_test_pred[i, j]
self.test_pred = new_test_pred
self.test_pred_width = new_test_pred_width
self.test_pred_height = new_test_pred_height
self.description = f'resize: ({new_test_pred_height}, {new_test_pred_width})'
def change_colour(self):
self.selected_colour = np.random.choice(np.arange(10))
self.description = f'change colour: {self.num_to_colour[self.selected_colour]}'
def edit(self):
# select a random location
x = np.random.choice(np.arange(self.test_pred_width))
y = np.random.choice(np.arange(self.test_pred_height))
self.test_pred[y, x] = self.selected_colour
self.description = f'edit: ({y}, {x})'
def edit_rectangle(self):
        # changes the colour of every cell in a randomly selected rectangle
x_start = np.random.choice(np.arange(self.test_pred_width))
x_end = np.random.choice(np.arange(x_start+1, self.test_pred_width+1))
y_start = np.random.choice(np.arange(self.test_pred_height))
y_end = np.random.choice(np.arange(y_start+1, self.test_pred_height+1))
# select a new colour
self.selected_colour = np.random.choice(np.arange(10))
self.test_pred[y_start:y_end, x_start:x_end] = self.selected_colour
self.description = f'edit rectangle from ({y_start}:{y_end}, {x_start}:{x_end}) to {self.selected_colour}'
def copy(self):
# copies a randomly selected region
x_start = np.random.choice(np.arange(self.test_pred_width))
x_end = np.random.choice(np.arange(x_start+1, self.test_pred_width+1))
y_start = np.random.choice(np.arange(self.test_pred_height))
y_end = np.random.choice(np.arange(y_start+1, self.test_pred_height+1))
self.clipboard = self.test_pred[y_start:y_end, x_start:x_end].copy()
self.description = f'copy from ({y_start}:{y_end}, {x_start}:{x_end})'
#print(f'clipboard: {self.clipboard}')
def paste(self):
# pastes clipboard value into randomly selected location
clipboard_height, clipboard_width = self.clipboard.shape
x_start = np.random.choice(np.arange(self.test_pred_width))
x_width = min(clipboard_width, self.test_pred_width - x_start)
y_start = np.random.choice(np.arange(self.test_pred_height))
y_height = min(clipboard_height, self.test_pred_height - y_start)
self.test_pred[y_start:y_start+y_height, x_start:x_start+x_width] = self.clipboard[:y_height, :x_width]
self.description = f'pasting from ({y_start}:{y_start+y_height}, {x_start}:{x_start+x_width})'
def flood_fill(self):
# flood fill at a random location
x = np.random.choice(self.test_pred_width)
y = np.random.choice(self.test_pred_height)
self.test_pred = flood_fill(self.test_pred, (y, x),
self.selected_colour)
self.description = f'flood fill from: ({y}, {x})'
def solve(self):
fig = plt.figure(figsize=(6, 6))
plt.ion()
plt.show()
norm = colors.Normalize(vmin=0, vmax=9)
while not self.solved:
clear_output()
# randomly select available function
if np.random.choice([0, 1]) == 0:
self.change_colour()
else:
self.edit()
plt.imshow(self.test_pred, cmap=self.cmap, norm=norm)
plt.axis('off')
plt.tight_layout()
plt.pause(1)
# check accuracy
training_path = "/Users/aysjajohnson/Desktop/ARC-master/data/training/"
solver = ARCSolver(task_filename=os.path.join(training_path, '6e02f1e3.json'))
solver.plot_grids(solver.train_inputs)
solver.plot_grids(solver.train_outputs)
solver = ARCSolver(task_filename=os.path.join(training_path, '6e02f1e3.json'))
fig = plt.figure(figsize=(5, 5))
ax = plt.axes(xlim=(-.5, 4.5), ylim=(-0.5, 4.5))
norm = colors.Normalize(vmin=0, vmax=9)
im = plt.imshow(solver.test_pred, cmap=solver.cmap, norm=norm)
plt.gca().invert_yaxis()
plt.xticks([])
plt.yticks([])
# initialization function: plot the background of each frame
def init():
# TODO: modify initialization
im.set_data(solver.test_pred)
return [im]
# animation function. This is called sequentially
def animate(i):
# TODO: replace the two function calls below with a generic next() function
# or something like that
r = np.random.choice([0, 1, 2, 3, 4, 5, 6, 7, 8])
if r == 0:
solver.change_colour()
elif r == 1:
solver.edit()
elif r == 2:
solver.resize()
elif r == 3:
solver.reset()
elif r == 4:
solver.flood_fill()
elif r == 5:
solver.copy()
elif r == 6:
if solver.clipboard is not None:
solver.paste()
elif r == 7:
solver.copy_from_input()
elif r == 8:
solver.edit_rectangle()
#print(solver.description)
#print(solver.test_pred.shape)
#plt.gcf().set_size_inches(solver.test_pred_height, solver.test_pred_width, forward=True)
plt.rcParams["figure.figsize"] = (solver.test_pred_height, solver.test_pred_width)
im.set_data(solver.test_pred)
ax.set_title(solver.description)
return [im]
# call the animator. blit=True means only re-draw the parts that have changed.
anim = animation.FuncAnimation(fig, animate, init_func=init,
frames=100, interval=200, blit=False)
# save the animation as an mp4. This requires ffmpeg or mencoder to be
# installed. The extra_args ensure that the x264 codec is used, so that
# the video can be embedded in html5. You may need to adjust this for
# your system: for more information, see
# http://matplotlib.sourceforge.net/api/animation_api.html
anim.save('basic_animation.mp4', fps=5, extra_args=['-vcodec', 'libx264'])
HTML(anim.to_html5_video())
np.zeros((3, 2)).shape
for i in range(1):
print(i)
```
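The `flood_fill` action above delegates to `skimage.segmentation.flood_fill`. A minimal pure-Python equivalent (4-connected, stack-based) shows what that call does on a small grid:

```python
def flood_fill_grid(grid, start, new_colour):
    """Minimal 4-connected flood fill, analogous to skimage's flood_fill."""
    h, w = len(grid), len(grid[0])
    y0, x0 = start
    old = grid[y0][x0]
    if old == new_colour:
        return grid
    stack = [(y0, x0)]
    while stack:
        y, x = stack.pop()
        if 0 <= y < h and 0 <= x < w and grid[y][x] == old:
            grid[y][x] = new_colour
            # visit the four orthogonal neighbours
            stack.extend([(y + 1, x), (y - 1, x), (y, x + 1), (y, x - 1)])
    return grid

g = [[0, 0, 1],
     [0, 1, 1],
     [1, 1, 1]]
print(flood_fill_grid(g, (0, 0), 2))  # [[2, 2, 1], [2, 1, 1], [1, 1, 1]]
```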
# Inference and Validation
Now that you have a trained network, you can use it for making predictions. This is typically called **inference**, a term borrowed from statistics. However, neural networks have a tendency to perform *too well* on the training data and aren't able to generalize to data that hasn't been seen before. This is called **overfitting** and it impairs inference performance. To test for overfitting while training, we measure the performance on data not in the training set called the **validation** set. We avoid overfitting through regularization such as dropout while monitoring the validation performance during training. In this notebook, I'll show you how to do this in PyTorch.
As usual, let's start by loading the dataset through torchvision. You'll learn more about torchvision and loading data in a later part. This time we'll be taking advantage of the test set which you can get by setting `train=False` here:
```python
testset = datasets.FashionMNIST('~/.pytorch/F_MNIST_data/', download=True, train=False, transform=transform)
```
The test set contains images just like the training set. Typically you'll see 10-20% of the original dataset held out for testing and validation with the rest being used for training.
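FashionMNIST ships with a prebuilt train/test split, but when a dataset doesn't, holding out a fraction looks like the generic sketch below (plain Python for clarity; in PyTorch you would typically reach for `torch.utils.data.random_split` instead):

```python
import random

def train_val_split(items, val_fraction=0.2, seed=0):
    """Shuffle and hold out a validation fraction. Purely illustrative:
    FashionMNIST's train/test split is already built in."""
    rng = random.Random(seed)
    shuffled = items[:]
    rng.shuffle(shuffled)
    n_val = int(len(shuffled) * val_fraction)
    return shuffled[n_val:], shuffled[:n_val]

train, val = train_val_split(list(range(100)))
print(len(train), len(val))  # 80 20
```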
```
import torch
from torchvision import datasets, transforms
# Define a transform to normalize the data
transform = transforms.Compose([transforms.ToTensor(),
transforms.Normalize((0.5,), (0.5,))])
# Download and load the training data
trainset = datasets.FashionMNIST('~/.pytorch/F_MNIST_data/', download=True, train=True, transform=transform)
trainloader = torch.utils.data.DataLoader(trainset, batch_size=64, shuffle=True)
# Download and load the test data
testset = datasets.FashionMNIST('~/.pytorch/F_MNIST_data/', download=True, train=False, transform=transform)
testloader = torch.utils.data.DataLoader(testset, batch_size=64, shuffle=True)
```
Here I'll create a model like normal, using the same one from my solution for part 4.
```
from torch import nn, optim
import torch.nn.functional as F
class Classifier(nn.Module):
def __init__(self):
super().__init__()
self.fc1 = nn.Linear(784, 256)
self.fc2 = nn.Linear(256, 128)
self.fc3 = nn.Linear(128, 64)
self.fc4 = nn.Linear(64, 10)
def forward(self, x):
# make sure input tensor is flattened
x = x.view(x.shape[0], -1)
x = F.relu(self.fc1(x))
x = F.relu(self.fc2(x))
x = F.relu(self.fc3(x))
x = F.log_softmax(self.fc4(x), dim=1)
return x
```
The goal of validation is to measure the model's performance on data that isn't part of the training set. Performance here is up to the developer to define though. Typically this is just accuracy, the percentage of classes the network predicted correctly. Other options are [precision and recall](https://en.wikipedia.org/wiki/Precision_and_recall#Definition_(classification_context)) and top-5 error rate. We'll focus on accuracy here. First I'll do a forward pass with one batch from the test set.
```
model = Classifier()
images, labels = next(iter(testloader))
# Get the class probabilities
ps = torch.exp(model(images))
# Make sure the shape is appropriate, we should get 10 class probabilities for 64 examples
print(ps.shape)
```
With the probabilities, we can get the most likely class using the `ps.topk` method. This returns the $k$ highest values. Since we just want the most likely class, we can use `ps.topk(1)`. This returns a tuple of the top-$k$ values and the top-$k$ indices. If the highest value is the fifth element, we'll get back 4 as the index.
```
top_p, top_class = ps.topk(1, dim=1)
# Look at the most likely classes for the first 10 examples
print(top_class[:10,:])
```
Now we can check if the predicted classes match the labels. This is simple to do by equating `top_class` and `labels`, but we have to be careful of the shapes. Here `top_class` is a 2D tensor with shape `(64, 1)` while `labels` is 1D with shape `(64)`. To get the equality to work out the way we want, `top_class` and `labels` must have the same shape.
If we do
```python
equals = top_class == labels
```
`equals` will have shape `(64, 64)`, try it yourself. What it's doing is comparing the one element in each row of `top_class` with each element in `labels`, which returns 64 True/False boolean values for each row.
```
equals = top_class == labels.view(*top_class.shape)
```
Now we need to calculate the percentage of correct predictions. `equals` has binary values, either 0 or 1. This means that if we just sum up all the values and divide by the number of values, we get the percentage of correct predictions. This is the same operation as taking the mean, so we can get the accuracy with a call to `torch.mean`. If only it was that simple. If you try `torch.mean(equals)`, you'll get an error
```
RuntimeError: mean is not implemented for type torch.ByteTensor
```
This happens because `equals` has type `torch.ByteTensor` but `torch.mean` isn't implemented for tensors of that type. So we'll need to convert `equals` to a float tensor. Note that `torch.mean` returns a scalar tensor; to get the actual value as a Python float we'll need to call `accuracy.item()`.
```
accuracy = torch.mean(equals.type(torch.FloatTensor))
print(f'Accuracy: {accuracy.item()*100}%')
```
The network is untrained so it's making random guesses and we should see an accuracy around 10%. Now let's train our network and include our validation pass so we can measure how well the network is performing on the test set. Since we're not updating our parameters in the validation pass, we can speed up the pass by turning off gradients using `torch.no_grad()`:
```python
# turn off gradients
with torch.no_grad():
# validation pass here
for images, labels in testloader:
...
```
>**Exercise:** Implement the validation loop below. You can largely copy and paste the code from above, but I suggest typing it in because writing it out yourself is essential for building the skill. In general you'll always learn more by typing it rather than copy-pasting.
```
model = Classifier()
criterion = nn.NLLLoss()
optimizer = optim.Adam(model.parameters(), lr=0.003)
epochs = 30
steps = 0
train_losses, test_losses = [], []
for e in range(epochs):
running_loss = 0
for images, labels in trainloader:
optimizer.zero_grad()
log_ps = model(images)
loss = criterion(log_ps, labels)
loss.backward()
optimizer.step()
running_loss += loss.item()
else:
test_loss = 0
accuracy = 0
# Turn off gradients for validation, saves memory and computations
with torch.no_grad():
for images, labels in testloader:
log_ps = model(images)
test_loss += criterion(log_ps, labels)
ps = torch.exp(log_ps)
top_p, top_class = ps.topk(1, dim=1)
equals = top_class == labels.view(*top_class.shape)
accuracy += torch.mean(equals.type(torch.FloatTensor))
train_losses.append(running_loss/len(trainloader))
test_losses.append(test_loss/len(testloader))
print("Epoch: {}/{}.. ".format(e+1, epochs),
"Training Loss: {:.3f}.. ".format(running_loss/len(trainloader)),
"Test Loss: {:.3f}.. ".format(test_loss/len(testloader)),
"Test Accuracy: {:.3f}".format(accuracy/len(testloader)))
%matplotlib inline
%config InlineBackend.figure_format = 'retina'
import matplotlib.pyplot as plt
plt.plot(train_losses, label='Training loss')
plt.plot(test_losses, label='Validation loss')
plt.legend(frameon=False)
```
## Overfitting
If we look at the training and validation losses as we train the network, we can see a phenomenon known as overfitting.
<img src='assets/overfitting.png' width=450px>
The network learns the training set better and better, resulting in lower training losses. However, it starts having problems generalizing to data outside the training set leading to the validation loss increasing. The ultimate goal of any deep learning model is to make predictions on new data, so we should strive to get the lowest validation loss possible. One option is to use the version of the model with the lowest validation loss, here the one around 8-10 training epochs. This strategy is called *early-stopping*. In practice, you'd save the model frequently as you're training then later choose the model with the lowest validation loss.
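Selecting the early-stopping checkpoint then boils down to picking the epoch with the lowest validation loss. A minimal sketch, with made-up loss values that dip and then rise again as overfitting sets in:

```python
def best_epoch(val_losses):
    """Return the (1-based) epoch with the lowest validation loss, as if a
    checkpoint had been saved after every epoch."""
    return min(range(len(val_losses)), key=val_losses.__getitem__) + 1

# Hypothetical per-epoch validation losses:
val_losses = [0.52, 0.45, 0.41, 0.40, 0.43, 0.47]
print(best_epoch(val_losses))  # 4
```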
The most common method to reduce overfitting (outside of early-stopping) is *dropout*, where we randomly drop input units. This forces the network to share information between weights, increasing its ability to generalize to new data. Adding dropout in PyTorch is straightforward using the [`nn.Dropout`](https://pytorch.org/docs/stable/nn.html#torch.nn.Dropout) module.
```python
class Classifier(nn.Module):
def __init__(self):
super().__init__()
self.fc1 = nn.Linear(784, 256)
self.fc2 = nn.Linear(256, 128)
self.fc3 = nn.Linear(128, 64)
self.fc4 = nn.Linear(64, 10)
# Dropout module with 0.2 drop probability
self.dropout = nn.Dropout(p=0.2)
def forward(self, x):
# make sure input tensor is flattened
x = x.view(x.shape[0], -1)
# Now with dropout
x = self.dropout(F.relu(self.fc1(x)))
x = self.dropout(F.relu(self.fc2(x)))
x = self.dropout(F.relu(self.fc3(x)))
# output so no dropout here
x = F.log_softmax(self.fc4(x), dim=1)
return x
```
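Under the hood, `nn.Dropout` implements *inverted* dropout: during training it zeroes each unit with probability `p` and rescales the survivors by `1/(1-p)`, and in eval mode it is the identity. A pure-Python sketch of that behaviour:

```python
import random

def dropout(xs, p=0.2, training=True, seed=0):
    """Inverted-dropout sketch: zero units with probability p during
    training and rescale survivors by 1/(1-p); identity in eval mode.
    (This mirrors what model.train()/model.eval() toggle for nn.Dropout.)"""
    if not training:
        return xs[:]
    rng = random.Random(seed)
    keep = 1.0 - p
    return [x / keep if rng.random() >= p else 0.0 for x in xs]

activations = [1.0, 1.0, 1.0, 1.0]
print(dropout(activations, p=0.5, training=False))  # [1.0, 1.0, 1.0, 1.0]
print(dropout(activations, p=0.5))                  # survivors become 2.0
```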
During training we want to use dropout to prevent overfitting, but during inference we want to use the entire network. So, we need to turn off dropout during validation, testing, and whenever we're using the network to make predictions. To do this, you use `model.eval()`. This sets the model to evaluation mode where the dropout probability is 0. You can turn dropout back on by setting the model to train mode with `model.train()`. In general, the pattern for the validation loop will look like this, where you turn off gradients, set the model to evaluation mode, calculate the validation loss and metric, then set the model back to train mode.
```python
# turn off gradients
with torch.no_grad():
# set model to evaluation mode
model.eval()
# validation pass here
for images, labels in testloader:
...
# set model back to train mode
model.train()
```
> **Exercise:** Add dropout to your model and train it on Fashion-MNIST again. See if you can get a lower validation loss.
```
class Classifier(nn.Module):
def __init__(self):
super().__init__()
self.fc1 = nn.Linear(784, 256)
self.fc2 = nn.Linear(256, 128)
self.fc3 = nn.Linear(128, 64)
self.fc4 = nn.Linear(64, 10)
# Dropout module with 0.2 drop probability
self.dropout = nn.Dropout(p=0.2)
def forward(self, x):
# make sure input tensor is flattened
x = x.view(x.shape[0], -1)
# Now with dropout
x = self.dropout(F.relu(self.fc1(x)))
x = self.dropout(F.relu(self.fc2(x)))
x = self.dropout(F.relu(self.fc3(x)))
# output so no dropout here
x = F.log_softmax(self.fc4(x), dim=1)
return x
model = Classifier()
criterion = nn.NLLLoss()
optimizer = optim.Adam(model.parameters(), lr=0.003)
epochs = 30
steps = 0
train_losses, test_losses = [], []
for e in range(epochs):
running_loss = 0
for images, labels in trainloader:
optimizer.zero_grad()
log_ps = model(images)
loss = criterion(log_ps, labels)
loss.backward()
optimizer.step()
running_loss += loss.item()
else:
test_loss = 0
accuracy = 0
# Turn off gradients for validation, saves memory and computations
with torch.no_grad():
model.eval()
for images, labels in testloader:
log_ps = model(images)
test_loss += criterion(log_ps, labels)
ps = torch.exp(log_ps)
top_p, top_class = ps.topk(1, dim=1)
equals = top_class == labels.view(*top_class.shape)
accuracy += torch.mean(equals.type(torch.FloatTensor))
model.train()
train_losses.append(running_loss/len(trainloader))
test_losses.append(test_loss/len(testloader))
print("Epoch: {}/{}.. ".format(e+1, epochs),
"Training Loss: {:.3f}.. ".format(train_losses[-1]),
"Test Loss: {:.3f}.. ".format(test_losses[-1]),
"Test Accuracy: {:.3f}".format(accuracy/len(testloader)))
%matplotlib inline
%config InlineBackend.figure_format = 'retina'
import matplotlib.pyplot as plt
plt.plot(train_losses, label='Training loss')
plt.plot(test_losses, label='Validation loss')
plt.legend(frameon=False)
```
## Inference
Now that the model is trained, we can use it for inference. We've done this before, but now we need to remember to set the model in inference mode with `model.eval()`. You'll also want to turn off autograd with the `torch.no_grad()` context.
```
# Import helper module (should be in the repo)
import helper
# Test out your network!
model.eval()
dataiter = iter(testloader)
images, labels = next(dataiter)
img = images[0]
# Convert 2D image to 1D vector
img = img.view(1, 784)
# Calculate the class probabilities (softmax) for img
with torch.no_grad():
output = model.forward(img)
ps = torch.exp(output)
# Plot the image and probabilities
helper.view_classify(img.view(1, 28, 28), ps, version='Fashion')
```
## Next Up!
In the next part, I'll show you how to save your trained models. In general, you won't want to train a model every time you need it. Instead, you'll train once, save it, then load the model when you want to train more or use it for inference.
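The core pattern, sketched here briefly (the filename `checkpoint.pth` is just a placeholder): save the model's `state_dict`, then rebuild the same architecture later and load the weights back into it.

```python
import torch
from torch import nn

# Build a small network (stand-in for the trained classifier)
model = nn.Sequential(nn.Linear(784, 256), nn.ReLU(), nn.Linear(256, 10))

# Save only the learned parameters (the state dict), not the whole object
torch.save(model.state_dict(), 'checkpoint.pth')

# Later: rebuild the same architecture, then load the weights into it
restored = nn.Sequential(nn.Linear(784, 256), nn.ReLU(), nn.Linear(256, 10))
restored.load_state_dict(torch.load('checkpoint.pth'))
```

Saving the state dict (rather than the whole model object) is the usual choice because it decouples the saved weights from the Python class definition.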
# Point Spread Function Photometry with Photutils
The PSF photometry module of photutils is intended to be a fully modular tool, such that users are able to completely customize the photometry procedure, e.g., by using different source detection algorithms, background estimators, PSF models, etc. Photutils provides implementations for each subtask involved in the photometry process; however, users are still able to plug in their own implementations without having to modify the photutils core classes!
This modularity is accomplished through an object-oriented approach, which provides a more convenient user experience while allowing the developers to think in terms of classes and objects rather than isolated functions.
Photutils provides three basic classes to perform PSF photometry: `BasicPSFPhotometry`, `IterativelySubtractedPSFPhotometry`, and `DAOPhotPSFPhotometry`. In this notebook, we will go through them, explaining their differences and particular uses.
# Artificial Starlist
First things first! Let's create an artificial list of stars using photutils in order to explain the PSF procedures through examples.
```
from photutils.datasets import make_random_gaussians
from photutils.datasets import make_noise_image
from photutils.datasets import make_gaussian_sources
num_sources = 150
min_flux = 500
max_flux = 5000
min_xmean = 16
max_xmean = 240
sigma_psf = 2.0
starlist = make_random_gaussians(num_sources, [min_flux, max_flux],
[min_xmean, max_xmean],
[min_xmean, max_xmean],
[sigma_psf, sigma_psf],
[sigma_psf, sigma_psf],
random_state=1234)
shape = (256, 256)
image = (make_gaussian_sources(shape, starlist) +
make_noise_image(shape, type='poisson', mean=6., random_state=1234) +
make_noise_image(shape, type='gaussian', mean=0., stddev=2., random_state=1234))
```
Note that we also added Poisson and Gaussian background noise with the function `make_noise_image`.
Let's keep in mind this fact:
```
type(starlist)
starlist
```
Pretty much all lists of sources in `photutils` are returned or passed in as `astropy` `Table` objects, so this is something to get used to.
Let's also plot our list of stars.
```
%matplotlib inline
from matplotlib import rcParams
import matplotlib.pyplot as plt
rcParams['image.cmap'] = 'magma'
rcParams['image.aspect'] = 1 # to get images with square pixels
rcParams['figure.figsize'] = (20,10)
rcParams['image.interpolation'] = 'nearest'
rcParams['image.origin'] = 'lower'
rcParams['font.size'] = 14
plt.imshow(image)
plt.title('Simulated data')
plt.colorbar(orientation='horizontal', fraction=0.046, pad=0.04)
```
# The `BasicPSFPhotometry` class
As the name suggests, this is a basic class which provides the minimum tools necessary to perform photometry in crowded (or uncrowded) fields. Let's take a look at its attributes and methods.
BasicPSFPhotometry has the following mandatory attributes:
* group_maker : callable or instance of any GroupStarsBase subclass
* bkg_estimator : callable, instance of any BackgroundBase subclass, or None
* psf_model : astropy.modeling.Fittable2DModel instance
* fitshape : integer or length-2 array-like
And the following optional attributes:
* finder : callable or instance of any StarFinderBase subclasses or None
* fitter : Astropy Fitter instance
* aperture_radius : float or int
## Group Maker
`group_maker` can be instantiated using any GroupStarBase subclass, such as `photutils.psf.DAOGroup` or `photutils.psf.DBSCANGroup`, or even using a `callable` provided by the user.
`photutils.psf.DAOGroup` is a class which implements the `GROUP` algorithm proposed by Stetson and used in DAOPHOT. This class takes a single attribute at initialization, namely:
* crit_separation : int or float
Distance, in units of pixels, such that any two stars separated by less than this distance will be placed in the same group.
As its description shows, `crit_separation` plays a crucial role in deciding whether or not a given star belongs to a given group of stars. Usually, `crit_separation` is set to a positive real number multiplied by the FWHM of the PSF.
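Conceptually, the grouping step just links stars that lie within `crit_separation` of each other and propagates those links; a toy union-find sketch of that idea (an illustration only, not the photutils implementation):

```python
import numpy as np

def toy_group(x, y, crit_separation):
    """Toy sketch of Stetson-style grouping: stars closer than
    crit_separation end up in the same group (not the photutils code)."""
    n = len(x)
    parent = list(range(n))

    def find(i):
        # follow parent links to the group representative, with path compression
        while parent[i] != i:
            parent[i] = parent[parent[i]]
            i = parent[i]
        return i

    # link every pair of stars closer than the critical separation
    for i in range(n):
        for j in range(i + 1, n):
            if np.hypot(x[i] - x[j], y[i] - y[j]) < crit_separation:
                parent[find(i)] = find(j)

    return [find(i) for i in range(n)]

# Two close stars and one isolated star
print(toy_group([0.0, 1.0, 50.0], [0.0, 0.0, 0.0], crit_separation=5.0))
```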
`photutils.psf.DBSCANGroup` is a generalized case of `photutils.psf.DAOGroup`; in fact, it is a wrapper around the `sklearn.cluster.DBSCAN` class. Its usage is very similar to `photutils.psf.DAOGroup`, and we refer the reader to the photutils API documentation for more information: https://photutils.readthedocs.io/en/latest/api/photutils.psf.DBSCANGroup.html#photutils.psf.DBSCANGroup
The user is welcome to check the narrative docs on the photutils RTD webpage: https://photutils.readthedocs.io/en/latest/photutils/grouping.html
Now, let's instantiate a `group_maker` from `DAOGroup`:
```
from photutils import psf
from astropy.stats import gaussian_sigma_to_fwhm
daogroup = psf.DAOGroup(crit_separation=2.*sigma_psf*gaussian_sigma_to_fwhm)
```
Now, the object `daogroup` is ready to be passed to `BasicPSFPhotometry`.
## Background Estimation
Background estimation is needed in the photometry process in order to reduce the bias that the background (primarily Poisson noise) adds to the flux estimation.
Photutils provides several classes to perform both scalar background estimation, i.e., when the background is flat and does not vary strongly across the image, and spatially varying background estimation, i.e., when there exists a gradient field associated with the background.
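For intuition, DAOPHOT's MMM ("mean, median, mode") estimator approximates the sky mode as 3 × median − 2 × mean of the sigma-clipped pixels; a toy pure-NumPy version of that idea (an illustration, not the photutils code):

```python
import numpy as np

def mmm_background(data, sigma=3.0, iters=5):
    """Toy DAOPHOT-style scalar background: 3*median - 2*mean of
    sigma-clipped pixels (an illustration, not the photutils class)."""
    d = np.asarray(data, dtype=float).ravel()
    for _ in range(iters):
        med, std = np.median(d), np.std(d)
        if std == 0:
            break
        keep = np.abs(d - med) < sigma * std
        if keep.all():
            break
        d = d[keep]  # discard clipped pixels and iterate
    return 3.0 * np.median(d) - 2.0 * np.mean(d)

# Flat sky of ~10 counts plus a few bright "star" pixels
sky = np.array([9.0, 10.0, 11.0] * 50 + [100.0] * 5)
print(mmm_background(sky))
```

The clipping removes the bright star pixels, so the estimate recovers the sky level rather than being biased upward by the sources.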
The user is welcome to refer to the Background Estimation narrative docs on the photutils webpage for a detailed explanation: https://photutils.readthedocs.io/en/latest/photutils/background.html
In this notebook, we will use the class `MMMBackground` which is intended to estimate scalar background. This class is based on the background estimator used in `DAOPHOT`.
`MMMBackground` gets a `SigmaClip` object as an attribute, which is used to sigma clip the image before performing background estimation. For our scenario, we will just instantiate an object of `MMMBackground` with default attribute values:
```
from photutils import MMMBackground
mmm_bkg = MMMBackground()
mmm_bkg.sigma_clip.sigma
mmm_bkg.sigma_clip.iters
```
## PSF Models
The attribute ``psf_model`` represents an analytical function with unknown parameters (e.g., peak center and flux) which describes the underlying point spread function. ``psf_model`` is usually a subclass of `astropy.modeling.Fittable2DModel`. In this notebook, we will use `photutils.psf.IntegratedGaussianPRF` as our underlying PSF model.
Note that the underlying PSF model has to have parameters with the following names ``x_0``, ``y_0``, and ``flux``, to describe the center peak position and the flux, respectively.
```
from photutils.psf import IntegratedGaussianPRF
gaussian_psf = IntegratedGaussianPRF(sigma=2.0)
```
## Finder
Finder is an optional attribute, meaning that if it is `None`, then the user should provide a table with the center positions of each star when calling the `BasicPSFPhotometry` object.
Later, we will see examples of both cases, i.e., when Finder is `None` and when it is not.
The finder attribute is used to perform source detection. It can be any subclass of `photutils.StarFinderBase`, such as `photutils.DAOStarFinder` or `photutils.IRAFStarFinder`, which implement DAOPHOT-like and IRAF-like source detection algorithms, respectively. Users can also supply their own source detection algorithm, as long as its input/output formats are compatible with `photutils.StarFinderBase`.
`photutils.DAOStarFinder`, for instance, receives the following mandatory attributes:
* threshold : float
The absolute image value above which to select sources.
* fwhm : float
The full-width half-maximum (FWHM) of the major axis of the Gaussian kernel in units of pixels.
Now, let's instantiate our `DAOStarFinder` object:
```
from photutils.detection import DAOStarFinder
daofinder = DAOStarFinder(threshold=2.5*mmm_bkg(image), fwhm=sigma_psf*gaussian_sigma_to_fwhm)
```
Note that we chose the `threshold` to be a multiple of the background level and assumed the `fwhm` to be known from our list of stars.
More details about source detection can be found on the `photutils.detection` narrative docs: https://photutils.readthedocs.io/en/latest/photutils/detection.html
## Fitter
Fitter should be an instance of a fitter implemented in `astropy.modeling.fitting`. Since the PSF model is almost always nonlinear, the fitter should be able to handle nonlinear optimization problems. In this notebook, we will use the `LevMarLSQFitter`, which combines the Levenberg-Marquardt optimization algorithm with the least-squares statistic. The default value for fitter is `LevMarLSQFitter()`.
Look at http://docs.astropy.org/en/stable/modeling/index.html for more details on fitting.
NOTE: At this point it should be stated that photutils does not yet have a standard way to compute uncertainties on the fitted parameters. However, this should change in the near future with the addition of a new Astropy affiliated package, `SABA: Sherpa-Astropy Bridge`, which makes it possible to use astropy models together with Sherpa fitters.
## Fitshape and Aperture Radius
There are two attributes left: `fitshape` (mandatory) and `aperture_radius` (optional).
`fitshape` corresponds to the size of the rectangular region necessary to enclose one single source. The pixels inside that region will be used in the fitting process. `fitshape` should be an odd integer or a tuple of odd integers.
```
import numpy as np
fitshape = 11
```
The aperture radius corresponds to the radius used to compute initial guesses for the fluxes of the sources. If this value is `None`, one FWHM will be used, if it can be determined from the `psf_model`.
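Conceptually, that initial guess is just the sum of the pixel values inside a circular aperture centered on the source's initial position; a toy NumPy version (not the photutils aperture machinery):

```python
import numpy as np

def toy_aperture_flux(image, x0, y0, radius):
    """Toy initial flux guess: sum of pixels within `radius` of (x0, y0)."""
    yy, xx = np.mgrid[0:image.shape[0], 0:image.shape[1]]
    mask = (xx - x0) ** 2 + (yy - y0) ** 2 <= radius ** 2
    return image[mask].sum()

img = np.zeros((9, 9))
img[4, 4] = 100.0  # a single bright pixel at the center
print(toy_aperture_flux(img, 4, 4, 2.0))
```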
## Example with unknown positions and unknown fluxes
Now we are ready to take a look at an actual example. Let's first create our `BasicPSFPhotometry` object putting together the pieces that we defined along the way:
```
from photutils.psf import BasicPSFPhotometry
basic_photometry = BasicPSFPhotometry(group_maker=daogroup, bkg_estimator=mmm_bkg,
psf_model=gaussian_psf, fitshape=fitshape,
finder=daofinder)
```
To actually perform photometry on our image that we defined previously, we should use `basic_photometry` as a function call:
```
photometry_results = basic_photometry(image)
photometry_results
```
Let's plot the residual image along with the original image:
```
fig, (ax1, ax2) = plt.subplots(1,2)
im1 = ax1.imshow(basic_photometry.get_residual_image())
ax1.set_title('Residual Image')
plt.colorbar(orientation='horizontal', fraction=0.046, pad=0.04,
ax=ax1, mappable=im1)
im2 = ax2.imshow(image)
ax2.set_title('Simulated data')
plt.colorbar(orientation='horizontal', fraction=0.046, pad=0.04,
ax=ax2, mappable=im2)
```
Looking at the residual image, we observe that the photometry process was able to fit many stars, but not all. This is probably due to the inability of the source detection algorithm to decide the number of sources in every crowded group. Therefore, let's play with the source detection classes to see whether we can improve the photometry process.
Let's use `IRAFStarFinder` and play with the optional parameters. A complete description of these parameters can be found in the `photutils.detection` API documentation: https://photutils.readthedocs.io/en/latest/api/photutils.detection.IRAFStarFinder.html#photutils.detection.IRAFStarFinder
```
from photutils.detection import IRAFStarFinder
iraffind = IRAFStarFinder(threshold=2.5*mmm_bkg(image),
fwhm=sigma_psf*gaussian_sigma_to_fwhm,
minsep_fwhm=0.01, roundhi=5.0, roundlo=-5.0,
sharplo=0.0, sharphi=2.0)
```
Now let's set the `finder` attribute of our `BasicPSFPhotometry` object with `iraffind`:
```
basic_photometry.finder = iraffind
```
Let's repeat the photometry process:
```
photometry_results = basic_photometry(image)
photometry_results
plt.subplot(1,2,1)
plt.imshow(basic_photometry.get_residual_image())
plt.title('Residual Image')
plt.colorbar(orientation='horizontal', fraction=0.046, pad=0.04)
plt.subplot(1,2,2)
plt.imshow(image)
plt.title('Simulated data')
plt.colorbar(orientation='horizontal', fraction=0.046, pad=0.04)
```
As we can see, the residual now looks much closer to pure noise, with only three groups that were not fitted well. The likely reason is that those sources are too close together to be distinguished by the source detection algorithm.
## Example with known positions and unknown fluxes
Let's assume that somehow we know the true positions of the stars and would only like to fit the fluxes. Then we should use the optional argument `positions` when calling the photometry object:
```
from astropy.table import Table
positions = Table(names=['x_0', 'y_0'], data=[starlist['x_mean'], starlist['y_mean']])
photometry_results = basic_photometry(image=image, positions=positions)
plt.subplot(1,2,1)
plt.imshow(basic_photometry.get_residual_image())
plt.title('Residual Image')
plt.colorbar(orientation='horizontal', fraction=0.046, pad=0.04)
plt.subplot(1,2,2)
plt.imshow(image)
plt.title('Simulated data')
plt.colorbar(orientation='horizontal', fraction=0.046, pad=0.04)
```
Let's do a scatter plot between ground-truth fluxes and estimated fluxes:
```
photometry_results.sort('id')
plt.scatter(starlist['flux'], photometry_results['flux_fit'])
plt.xlabel('Ground-truth fluxes')
plt.ylabel('Estimated fluxes')
```
Let's also plot the relative error of the flux estimates as a function of the ground-truth fluxes.
```
plt.scatter(starlist['flux'], (photometry_results['flux_fit'] - starlist['flux'])/starlist['flux'])
plt.xlabel('Ground-truth flux')
plt.ylabel('Estimate Relative Error')
```
As we can see, the relative error becomes smaller as the flux increases.
# `IterativelySubtractedPSFPhotometry`
`IterativelySubtractedPSFPhotometry` is a subclass of `BasicPSFPhotometry` which adds iteration functionality to the photometry procedure. It has the same attributes as `BasicPSFPhotometry`, except that it includes an additional `niters`, which represents the number of times to loop through the photometry process, subtracting the best-fit stars each time.
Hence, the process implemented in `IterativelySubtractedPSFPhotometry` resembles the loop used by DAOPHOT: `FIND`, `GROUP`, `NSTAR`, `SUBTRACT`, `FIND`. On its own `IterativelySubtractedPSFPhotometry` doesn't implement the specific algorithms used in DAOPHOT, but it does implement the *structure* to enable this (and `DAOPhotPSFPhotometry`, discussed below, does).
The attribute `niters` can be `None`, which means that the photometry procedure will continue until no more sources are detected.
One final detail: the attribute `finder` (specifying the star-finder algorithm) for `IterativelySubtractedPSFPhotometry` cannot be `None` (as it can be for `BasicPSFPhotometry`). This is because it would not make sense to have an iterative process where the star finder changes completely at each step. If you want to do that you're better off manually looping over a series of calls to different `BasicPSFPhotometry` objects.
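Such a manual loop might look like the following structural sketch, where `photometers` is any sequence of photometry objects that can be called on an image and expose `get_residual_image()` (the function name here is illustrative, not a photutils API):

```python
def manual_iterative_photometry(photometers, image):
    """Sketch: run each photometry object on the residual left by the
    previous one, mimicking the subtract-and-repeat structure."""
    tables = []
    residual = image
    for phot in photometers:
        tables.append(phot(residual))        # fit sources on current residual
        residual = phot.get_residual_image() # subtract the best-fit stars
    return tables, residual
```

Each step in the sequence can use a different finder, group maker, or PSF model, which is exactly the flexibility `IterativelySubtractedPSFPhotometry` deliberately does not offer.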
## Example with unknown positions and unknown fluxes
Let's instantiate an object of `IterativelySubtractedPSFPhotometry`:
```
from photutils.psf import IterativelySubtractedPSFPhotometry
itr_phot = IterativelySubtractedPSFPhotometry(group_maker=daogroup, bkg_estimator=mmm_bkg,
psf_model=gaussian_psf, fitshape=fitshape,
finder=iraffind, niters=2)
```
Let's now perform photometry on our artificial image:
```
photometry_results = itr_phot(image)
photometry_results
```
Observe that there is a new column, `iter_detected`, which shows the iteration in which each source was detected.
Let's plot the residual image:
```
plt.subplot(1,2,1)
plt.imshow(itr_phot.get_residual_image())
plt.title('Residual Image')
plt.colorbar(orientation='horizontal', fraction=0.046, pad=0.04)
plt.subplot(1,2,2)
plt.imshow(image)
plt.title('Simulated data')
plt.colorbar(orientation='horizontal', fraction=0.046, pad=0.04)
```
# `DAOPhotPSFPhotometry`
There is also a class called `DAOPhotPSFPhotometry` that is a subclass of `IterativelySubtractedPSFPhotometry`. `DAOPhotPSFPhotometry` essentially implements the DAOPHOT photometry algorithm using `IterativelySubtractedPSFPhotometry`. So instead of giving it arguments like `finder`, you provide parameters specific for the DAOPhot-like sub-tasks (e.g., the FWHM the star-finder is optimized for).
We leave the use of this class as an exercise for the reader: play with the parameters to see which choices optimize the photometry procedure.
```
from photutils.psf import DAOPhotPSFPhotometry
dao_phot = DAOPhotPSFPhotometry(...)
photometry_results = dao_phot(image)
photometry_results
```
## Documentation
Narrative and API docs of the classes used here can be found in https://photutils.readthedocs.io/en/latest/
# Future Work
The PSF photometry module in photutils is still under development and feedback from users is much appreciated. Please open an issue on the GitHub issue tracker of photutils with any suggestions for improvements, desired functionality, bugs, etc.
Near future implementations in the photutils.psf module include:
* FWHM estimation: a Python equivalent to DAOPHOT's psfmeasure.
* Uncertainty computation: uncertainties are very important, and it is likely that we will use the astropy SABA package to integrate uncertainty computation into photutils.psf.
```
import math
import random
import re
import pdb
from datetime import datetime, timedelta

import numpy as np
import pandas as pd
import scipy
from scipy import integrate
from scipy.integrate import quad
from math import e, pi, sqrt
from numpy import exp

import matplotlib.pyplot as plt
import matplotlib.colors as colors
import seaborn as sns
import emcee
import corner
import powerlaw

from astropy.io import fits
from astropy.table import Table, Column, MaskedColumn, join, vstack
from astropy.time import Time
#Reading in data file
M51_raw=Table.read('M51_Messa_2018_CSV.csv')
M51_raw
#Messa+ 2018 only used masses greater than 5000 solar masses
M51_used_masses_ind=np.where(M51_raw['Best_Mass_Msolar']>5000)
M51_used_masses=M51_raw[M51_used_masses_ind]
M51_used_masses
#Only used Ages Less than 200 Myr
M51_age_cut=np.where(M51_used_masses['Best_Age_yr']<200000000)
M51_used_ages_masses=M51_used_masses[M51_age_cut]
M51_used_ages_masses
log_masses=np.log10(M51_used_ages_masses['Best_Mass_Msolar'])
log_max_mass=np.log10(M51_used_ages_masses['Max_Mass_Msolar'])
log_min_mass=np.log10(M51_used_ages_masses['Min_Mass_Msolar'])
#20 Clusters with no upper or lower Estimates
no_max_min_estimate=np.where(log_max_mass<0)
M51_used_ages_masses.remove_rows(no_max_min_estimate[0])
M51_use=M51_used_ages_masses
M51_use
#Making The Histogram Anil wanted to see
log_masses=np.log10(M51_use['Best_Mass_Msolar'])
log_max_mass=np.log10(M51_use['Max_Mass_Msolar'])
log_min_mass=np.log10(M51_use['Min_Mass_Msolar'])
mass_error=[]
for i in range(len(log_max_mass)):
mass_error.append((log_max_mass[i]-log_min_mass[i])/2)
plt.hist(log_max_mass-log_masses, color='b', histtype='step', bins=20, label='Upper-Est')
plt.hist(log_masses-log_min_mass, color='r', histtype='step', bins=20, label='Est-Lower')
plt.yscale('log')
plt.legend()
plt.show()
plt.hist((log_max_mass-log_masses)-(log_masses-log_min_mass), color='k', histtype='step', bins=20)
#plt.hist(log_masses-log_min_mass, color='r', histtype='step', bins=20, label='Est-Lower')
plt.yscale('log')
plt.hist(log_masses, histtype='step', color='k')
plt.yscale('log')
#Running their Sample
def lnZ(theta, M):
alpha, M_c = theta
lin_M_c= 10**M_c
def f(M):
return (M**alpha)*exp(-M/lin_M_c)
ans, err = quad(f, 5000, np.inf)
return np.log(ans)
def lnlike(theta, M):
alpha, M_c = theta
lin_M= 10**M
lin_M_c= 10**M_c
return (np.sum(-lin_M/lin_M_c + alpha*np.log(lin_M) - lnZ(theta, lin_M)))
def lnprior(theta):
alpha, M_c = theta
if -3 <= alpha <= -1 and 3 <= M_c <= 8:
return 0.0
return -np.inf
def lnprob(theta, M):
lp = lnprior(theta)
if not np.isfinite(lp):
return -np.inf
return lp + lnlike(theta, M)
starting_point=np.array([-1.99, 5.00])
ndim, nwalkers = 2, 500
nsteps= 600
burnin=100
pos = starting_point + 1e-2*np.random.randn(nwalkers, ndim)
sampler = emcee.EnsembleSampler(nwalkers, ndim, lnprob, args=([log_masses]))
sampler.run_mcmc(pos, nsteps)
#plot chain
plt.plot(np.transpose(sampler.chain[:,:,1]))
plt.show()
sampler.get_chain(thin=5)
samples = sampler.chain[:, burnin:, :].reshape((-1, ndim))
fig = corner.corner(samples, labels=["Alpha", "Log(M_c)"], label_kwargs={"fontsize": 18},
quantiles=[0.16, 0.5, 0.84], show_titles=True, title_kwargs={"fontsize": 18})
fig.show()
#Trying to generate random samples that follow a power law distribution with an upper mass truncation published
#in Messa+2018
theoretical_distribution = powerlaw.Power_Law(xmin=5000, xmax=100000, parameters = [2], discrete=True)
simulated_data=theoretical_distribution.generate_random(3200)
fake_M_l=[]
for i in range(len(simulated_data)):
fake_M_l.append(simulated_data[i])
A3_fml=[]
for i in range(len(fake_M_l)):
if fake_M_l[i] >=5000 and fake_M_l[i] < 10**6.2:
A3_fml.append(fake_M_l[i])
A3_fml.sort()
fake_M=np.array(A3_fml)
fake_M
print(np.where(fake_M>100000))
random_ints = np.array(random.sample(range(2991, 3200), 190))
#random_ints2 = np.array(random.sample(range(2940, 3180), 150))
new_fake_M=np.delete(fake_M, [random_ints])
#new_fake_M2=np.delete(new_fake_M, [random_ints2])
log_FMl=[3.7 for i in range(93)]
for i in range(len(new_fake_M)):
log_FMl.append(np.log10(new_fake_M[i]))
log_FM= np.array(log_FMl)
log_FM
print(len(log_FM))
#x=[3,5.3]
#y=[461,1]
plt.hist(log_FM, histtype='step', bins=10)
#plt.plot(x,y, c='r', label='alpha= -2')
#plt.xlim(2.99,5)
plt.yscale('log')
plt.ylim(1)
plt.xlabel('logM')
plt.ylabel('N Clusters')
plt.legend()
def lnZ(theta, M):
alpha, M_c = theta
lin_M_c= 10**M_c
def f(M):
return (M**alpha)*exp(-M/lin_M_c)
ans, err = quad(f, 5000, np.inf)
return np.log(ans)
def lnlike(theta, M):
alpha, M_c = theta
lin_M= 10**M
lin_M_c= 10**M_c
return (np.sum(-lin_M/lin_M_c + alpha*np.log(lin_M) - lnZ(theta, lin_M)))
def lnprior(theta):
alpha, M_c = theta
if -3 <= alpha <= -1 and 3 <= M_c <= 8:
return 0.0
return -np.inf
def lnprob(theta, M):
lp = lnprior(theta)
if not np.isfinite(lp):
return -np.inf
return lp + lnlike(theta, M)
starting_point=np.array([-2.00, 5.00])
ndim, nwalkers = 2, 500
nsteps= 600
burnin=100
pos = starting_point + 1e-2*np.random.randn(nwalkers, ndim)
sampler = emcee.EnsembleSampler(nwalkers, ndim, lnprob, args=([log_FM]))
sampler.run_mcmc(pos, nsteps)
#plot chain
plt.plot(np.transpose(sampler.chain[:,:,1]))
plt.show()
sampler.get_chain(thin=5)
samples = sampler.chain[:, burnin:, :].reshape((-1, ndim))
fig = corner.corner(samples, labels=["Alpha", "Log(M_c)"], label_kwargs={"fontsize": 18},
quantiles=[0.16, 0.5, 0.84], show_titles=True, title_kwargs={"fontsize": 18})
fig.show()
def uncertainty(mass_error, log_FM):
spread_masses=[]
for i in range(len(mass_error)):
rand_spread=(np.random.normal(0, mass_error[i]))
spread_masses.append(log_FM[i]+rand_spread)
spread_masses=np.array(spread_masses)
sampler = emcee.EnsembleSampler(nwalkers, ndim, lnprob, args=([spread_masses]))
sampler.run_mcmc(pos, nsteps)
#plot chain
# plt.plot(np.transpose(sampler.chain[:,:,1]))
# plt.show()
sampler.get_chain(thin=5)
samples = sampler.chain[:, burnin:, :].reshape((-1, ndim))
fig = corner.corner(samples, labels=["Alpha", "Log(M_c)"], label_kwargs={"fontsize": 18},
quantiles=[0.16, 0.5, 0.84], show_titles=True, title_kwargs={"fontsize": 18})
fig.show()
alpha=[i[0] for i in samples]
Mc= [i[1] for i in samples]
med_a=np.median(alpha)
upper_sig_a= np.percentile(alpha, 84)
lower_sig_a= np.percentile(alpha, 16)
med_Mc=np.median(Mc)
upper_sig_Mc= np.percentile(Mc, 84)
lower_sig_Mc= np.percentile(Mc, 16)
return np.array((med_a, lower_sig_a, upper_sig_a, med_Mc, lower_sig_Mc, upper_sig_Mc))
round1=uncertainty(mass_error, log_FM)
round2=uncertainty(mass_error, log_FM)
round3=uncertainty(mass_error, log_FM)
round4=uncertainty(mass_error, log_FM)
round5=uncertainty(mass_error, log_FM)
round6=uncertainty(mass_error, log_FM)
round7=uncertainty(mass_error, log_FM)
round8=uncertainty(mass_error, log_FM)
round9=uncertainty(mass_error, log_FM)
round10=uncertainty(mass_error, log_FM)
alphas=[round1[0], round2[0], round3[0], round4[0], round5[0], round6[0], round7[0], round8[0], round9[0], round10[0]]
Mcs= [round1[3], round2[3], round3[3], round4[3], round5[3], round6[3], round7[3], round8[3], round9[3], round10[3],]
print("Median:", np.median(Mcs))
print("1 Sigma:", np.percentile(Mcs, 16))
print("1 Sigma:", np.percentile(Mcs, 84))
```
# Inexact Move Function
Let's see how we can incorporate **uncertain** motion into our motion update. We include the `sense` function that you've seen, which updates an initial distribution based on whether a robot senses a grid color: red or green.
Next, you're tasked with modifying the `move` function so that it incorporates uncertainty in motion.
<img src='images/uncertain_motion.png' width=50% height=50% />
First let's include our usual resource imports and display function.
```
# importing resources
import matplotlib.pyplot as plt
import numpy as np
```
A helper function for visualizing a distribution.
```
def display_map(grid, bar_width=1):
if(len(grid) > 0):
x_labels = range(len(grid))
plt.bar(x_labels, height=grid, width=bar_width, color='b')
plt.xlabel('Grid Cell')
plt.ylabel('Probability')
plt.ylim(0, 1) # range of 0-1 for probability values
plt.title('Probability of the robot being at each cell in the grid')
plt.xticks(np.arange(min(x_labels), max(x_labels)+1, 1))
plt.show()
else:
print('Grid is empty')
```
You are given the initial variables and the complete `sense` function, below.
```
# given initial variables
p=[0, 1, 0, 0, 0]
# the color of each grid cell in the 1D world
world=['green', 'red', 'red', 'green', 'green']
# Z, the sensor reading ('red' or 'green')
Z = 'red'
pHit = 0.6
pMiss = 0.2
# You are given the complete sense function
def sense(p, Z):
''' Takes in a current probability distribution, p, and a sensor reading, Z.
Returns a *normalized* distribution after the sensor measurement has been made, q.
This should be accurate whether Z is 'red' or 'green'. '''
q=[]
# loop through all grid cells
for i in range(len(p)):
# check if the sensor reading is equal to the color of the grid cell
# if so, hit = 1
# if not, hit = 0
hit = (Z == world[i])
q.append(p[i] * (hit * pHit + (1-hit) * pMiss))
# sum up all the components
s = sum(q)
# divide all elements of q by the sum to normalize
for i in range(len(p)):
q[i] = q[i] / s
return q
# Commented out code for measurements
# for k in range(len(measurements)):
# p = sense(p, measurements[k])
```
### QUIZ: Modify the move function to accommodate the added probabilities of overshooting or undershooting the intended destination.
This function should shift a distribution with the motion, U, with some probability of under/overshooting. For the given, initial `p`, you should see the result for U = 1 and incorporated uncertainties: `[0.0, 0.1, 0.8, 0.1, 0.0]`.
```
## TODO: Modify the move function to accommodate the added probabilities of overshooting or undershooting
pExact = 0.8
pOvershoot = 0.1
pUndershoot = 0.1
# Complete the move function
def move(p, U):
q=[]
# iterate through all values in p
for i in range(len(p)):
# use the modulo operator to find the new location for a p value
# this finds an index that is shifted by the correct amount
index = (i-U) % len(p)
nextIndex = (index+1) % len(p)
prevIndex = (index-1) % len(p)
s = pExact * p[index]
s = s + pOvershoot * p[nextIndex]
s = s + pUndershoot * p[prevIndex]
# append the correct, modified value of p to q
q.append(s)
return q
## TODO: try this for U = 2 and see the result
p = move(p,1)
print(p)
display_map(p)
```
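For U = 2 with the same starting distribution, the probability mass should land two cells to the right of the start; here is a self-contained copy of the function (same probability values) to check that:

```python
pExact, pOvershoot, pUndershoot = 0.8, 0.1, 0.1

def move(p, U):
    q = []
    for i in range(len(p)):
        # the value at cell i comes from the cell U steps back (cyclic world)
        index = (i - U) % len(p)
        nextIndex = (index + 1) % len(p)
        prevIndex = (index - 1) % len(p)
        s = pExact * p[index]
        s += pOvershoot * p[nextIndex]
        s += pUndershoot * p[prevIndex]
        q.append(s)
    return q

print(move([0, 1, 0, 0, 0], 2))  # → [0.0, 0.0, 0.1, 0.8, 0.1]
```

The peak moves from cell 1 to cell 3, with 10% of the mass spilled onto each neighboring cell.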
# 🦌 RuDOLPH 350M
<b><font color="white" size="+2">Official colab of [RuDOLPH: One Hyper-Modal Transformer can be creative as DALL-E and smart as CLIP](https://github.com/sberbank-ai/ru-dolph)</font></b>
<font color="white" size="-0.75."><b>RuDOLPH</b> is a fast and light text-image-text transformer (350M GPT-3) for generating text like <b>GPT</b>, generating image (e.g.: image by text, image by image prompt) like <b>DALL-E</b>, generating image captions, image classification in Zero-Shot mode and image ranking like <b>CLIP</b>.
<b>RuDOLPH 350M</b> is designed for a quick and easy fine-tuning setup for solving various tasks: from generating images by text description and image classification, to visual question answering and more. This colab demonstrates the power of Hyper-Modal Transformers.</font>
Hyper-modality means generalized multi-modality: e.g., a model that consists of two multi-modal parts (text-2-image and image-2-text) becomes a text-and-image hyper-modal model.
<font color="white" size="-0.75."><b>RuDOLPH for fast zero-shot text-to-image generation.</b> In the first phase, we generate 288 images from text in 5 minutes! The diffusion decoder is based on the solution by [Jack000](https://github.com/Jack000/), and ESRGAN-Real is used for high-quality image rendering.</font>
# Install all
```
!pip install rudolph==0.0.1rc8 > /dev/null
!pip install bitsandbytes-cuda111 > /dev/null
!pip install wandb > /dev/null
!pip install pytorch-lightning > /dev/null
```
# Download data
```
!pip install --upgrade gdown
import gdown
# a file
url = "http://drive.google.com/uc?id=17bPt7G3N_vGKCCxppIOPbPlhv1qUnv0o"
output = "food.zip"
gdown.download(url, output, quiet=False)
!unzip /content/food.zip
```
# Train this deer🦌🦌🦌
```
import os
import sys
import random
from collections import Counter
import PIL
import torch
import numpy as np
import pandas as pd
import bitsandbytes as bnb
import torchvision.transforms as T
import torchvision.transforms.functional as TF
from tqdm import tqdm
from wordcloud import WordCloud
from matplotlib import pyplot as plt
from torch.utils.data import Dataset, DataLoader
from rudalle import get_tokenizer, get_vae
from rudalle.utils import seed_everything
import pytorch_lightning as pl
from rudolph.model.utils import get_attention_mask
from rudolph.model import get_rudolph_model, ruDolphModel, FP16Module
from rudolph.pipelines import generate_codebooks, self_reranking_by_image, self_reranking_by_text, show, generate_captions, generate_texts, zs_clf
from rudolph import utils
device = 'cuda'
model = get_rudolph_model('350M', fp16=True, device=device)
tokenizer = get_tokenizer()
vae = get_vae(dwt=False).to(device)
class Args():
def __init__(self, model):
self.device = model.get_param('device')
self.l_text_seq_length = model.get_param('l_text_seq_length')
self.r_text_seq_length = model.get_param('r_text_seq_length')
self.image_tokens_per_dim = model.get_param('image_tokens_per_dim')
self.image_seq_length = model.get_param('image_seq_length')
self.epochs = 5
self.save_path='checkpoints/'
self.model_name = 'awesomemodel_'
self.save_every = 500
self.bs = 2
self.clip = 1.0
self.lr = 2e-5
self.freeze = False
self.wandb = False
self.train_steps = 10
self.lt_loss_weight = 0.01
self.img_loss_weight = 1
self.rt_loss_weight = 7
self.image_size = self.image_tokens_per_dim * 8
args = Args(model)
if not os.path.exists(args.save_path):
os.makedirs(args.save_path)
class FoodDataset(Dataset):
def __init__(self, file_path, csv_path, tokenizer, shuffle=True):
self.tokenizer = tokenizer
self.samples = []
self.image_transform = T.Compose([
T.Lambda(lambda img: img.convert('RGB') if img.mode != 'RGB' else img),
T.RandomResizedCrop(args.image_size, scale=(1., 1.), ratio=(1., 1.)),
T.ToTensor()
])
df = pd.read_csv(csv_path)
df.columns = ['index', 'belok', 'fats', 'uglevod', 'kkal', 'name', 'path']
for belok, fats, uglevod, kkal, caption, f_path in zip(
df['belok'],df['fats'], df['uglevod'], df['kkal'], df['name'], df['path']
):
caption = f'блюдо: {caption}; белков: {belok}; жиров: {fats}; углеводов: {uglevod}; ккал: {kkal};'
if len(caption)>10 and len(caption)<100 and os.path.isfile(f'{file_path}/{f_path}'):
self.samples.append([file_path, f_path, caption.lower()])
if shuffle:
np.random.shuffle(self.samples)
print('Shuffled')
def __len__(self):
return len(self.samples)
def load_image(self, file_path, img_name):
return PIL.Image.open(f'{file_path}/{img_name}')
def __getitem__(self, item):
item = item % len(self.samples)
file_path, img_name, text = self.samples[item]
try:
image = self.load_image(file_path, img_name)
image = self.image_transform(image)
except Exception as err:
print(err)
random_item = random.randint(0, len(self.samples) - 1)
return self.__getitem__(random_item)
text = text.lower().strip()
encoded = self.tokenizer.encode_text(text, text_seq_length=args.r_text_seq_length)
return encoded, image
```
# Let's look at what is inside the food dataset 🤔
```
dataset = FoodDataset(file_path='/content/food' ,csv_path ='/content/food/food.csv',tokenizer=tokenizer)
args.train_steps = len(dataset)//args.bs
class FoodDataModule(pl.LightningDataModule):
    def __init__(self, file_path, csv_path, tokenizer):
        super().__init__()
        # store the arguments instead of silently relying on globals
        self.file_path, self.csv_path, self.tokenizer = file_path, csv_path, tokenizer
    def setup(self, stage=None):
        self.train_dataset = FoodDataset(file_path=self.file_path,
                                         csv_path=self.csv_path,
                                         tokenizer=self.tokenizer)
def train_dataloader(self):
return DataLoader(
self.train_dataset,
batch_size=args.bs,
shuffle=True,
)
data_module = FoodDataModule(file_path='/content/food' ,csv_path ='/content/food/food.csv',tokenizer=tokenizer)
idx = random.randint(0, len(dataset)-1)
encoded, image = dataset[idx]
print(tokenizer.decode_text(encoded))
plt.imshow(image.permute(1,2,0).cpu().numpy());
idx = random.randint(0, len(dataset)-1)
encoded, image = dataset[idx]
print(tokenizer.decode_text(encoded))
plt.imshow(image.permute(1,2,0).cpu().numpy());
df = pd.read_csv('/content/food/food.csv')
wc, c = WordCloud(), Counter()
for text in df['name']:
try:
c.update(wc.process_text(text))
except:
continue
wc.fit_words(c)
plt.figure(figsize=(7,7));
plt.imshow(wc, interpolation='bilinear');
plt.axis("off");
import seaborn as sns
text_value_counts = pd.DataFrame(df['name'].value_counts())
ax = sns.histplot(data=text_value_counts, x="name");
ax.set_title('Duplicated text count histogram');
ax.set_xlabel('duplicates count');
```
# Train this deer 🦌🎄☃️
```
class Rudolph_(pl.LightningModule):
def __init__(self, args, vae):
super().__init__()
self.model = get_rudolph_model('350M', fp16=False, device=self.device)
#self.vae = get_vae(dwt=False).to(self.device)
print(self.device)
def forward(self,
input_ids,
lt_loss_weight=0.1,
img_loss_weight=0.8,
rt_loss_weight=0.1,
return_loss=True):
        total_seq_length = args.l_text_seq_length + args.image_seq_length + args.r_text_seq_length
masks = torch.ones(args.bs, args.r_text_seq_length, dtype=torch.int32)
attention_mask = get_attention_mask(masks, args.bs, args.l_text_seq_length, args.image_tokens_per_dim,
args.r_text_seq_length, self.device)
loss, loss_values = self.model.forward(input_ids,
attention_mask,
lt_loss_weight=lt_loss_weight,
img_loss_weight=img_loss_weight,
rt_loss_weight=rt_loss_weight,
return_loss=True)
return loss
    def training_step(self, batch, batch_idx):
text, images = batch[0], batch[1]
image_input_ids = vae.get_codebook_indices(images).to(self.device)
r_text = text.to(self.device)
l_text = torch.zeros((args.bs, args.l_text_seq_length), dtype=torch.long).to(self.device)
input_ids = torch.cat((l_text, image_input_ids, r_text), dim=1)
loss = self.forward(input_ids,
lt_loss_weight=args.lt_loss_weight,
img_loss_weight=args.img_loss_weight,
rt_loss_weight=args.rt_loss_weight,
return_loss=True)
self.log("train_loss", loss, prog_bar=True, logger=True)
return {"loss": loss}
def training_epoch_end(self, outputs):
pass
    def _freeze(self,
                named_params,
                freeze_emb=False,
                freeze_ln=False,
                freeze_attn=True,
                freeze_ff=True,
                freeze_other=False):
        # named_params is an iterable of (name, parameter) pairs,
        # e.g. self.named_parameters()
        for name, p in named_params:
            name = name.lower()
            if 'ln' in name or 'norm' in name:
                p.requires_grad = not freeze_ln
            elif 'embeddings' in name:
                p.requires_grad = not freeze_emb
            elif 'mlp' in name:
                p.requires_grad = not freeze_ff
            elif 'attn' in name:
                p.requires_grad = not freeze_attn
            else:
                p.requires_grad = not freeze_other
        # hand back only the parameters that remain trainable
        return [p for p in self.parameters() if p.requires_grad]
    def configure_optimizers(self):
        if args.freeze:
            optimizer = torch.optim.Adam(self._freeze(self.named_parameters()), lr=args.lr)
        else:
            optimizer = torch.optim.Adam(self.parameters(), lr=args.lr)
#bnb.optim.Adam8bit(self.parameters(), lr=args.lr)
        scheduler = torch.optim.lr_scheduler.OneCycleLR(
            optimizer,
            max_lr=args.lr,
            final_div_factor=500,
            steps_per_epoch=args.train_steps,
            epochs=args.epochs
        )
        # return the scheduler as well, so that Lightning actually steps it
        return [optimizer], [{"scheduler": scheduler, "interval": "step"}]
from pytorch_lightning.loggers import WandbLogger
# I use wandb as the logger; replace it with TensorBoard if needed
wandb_logger = WandbLogger(project="rudolf")
from pytorch_lightning.callbacks import ModelCheckpoint, EarlyStopping
from pytorch_lightning.loggers import TensorBoardLogger
checkpoint_callback = ModelCheckpoint(
    dirpath="checkpoints",
    filename="best-checkpoint",
    save_top_k=1,
    verbose=True,
    monitor="train_loss",
    mode="min"
)
model = Rudolph_(args,vae)
data_module = FoodDataModule(file_path='/content/food' ,csv_path ='/content/food/food.csv',tokenizer=tokenizer)
trainer = pl.Trainer(
logger=wandb_logger,
checkpoint_callback=checkpoint_callback,
max_epochs=2,
accelerator="gpu",
progress_bar_refresh_rate=30
)
trainer.fit(model,data_module)
trainer.save_checkpoint('/rudolf')
```
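As a side note, `training_step` above flattens every example into a single token sequence `[left text | image codebook indices | right text]` before the forward pass. A toy sketch of that layout (all lengths and token ids here are made up for illustration; the real model takes them from its config):

```python
# Toy illustration of the hyper-modal input layout used in training_step.
# All lengths and token ids below are invented; the real model uses
# l_text_seq_length / image_seq_length / r_text_seq_length from its config.
l_text_len, image_len, r_text_len = 4, 9, 6

l_text = [0] * l_text_len                    # empty (padded) left text
image = list(range(100, 100 + image_len))    # VQ codebook indices for the image
r_text = list(range(1, 1 + r_text_len))      # caption token ids

input_ids = l_text + image + r_text          # one flat sequence, as in torch.cat
print(len(input_ids))  # 19
```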
# 🖼2✍ Let's test the trained model
```
def _fix_pl(path):
d = torch.load(path)["state_dict"]
checkpoint = {}
for key in d.keys():
checkpoint[key.replace('model.','')] = d[key]
torch.save(checkpoint,'fixed.pt')
template = 'блюдо:'
import requests
from PIL import Image
import torch
device = 'cuda'
model = get_rudolph_model('350M', fp16=True, device=device)
tokenizer = get_tokenizer()
vae = get_vae(dwt=False).to(device)
# the checkpoint path may differ, since PyTorch Lightning generates it automatically
_fix_pl('/content/rudolf/1033wc66/checkpoints/epoch=1-step=474-v1.ckpt')
model.load_state_dict(torch.load('fixed.pt'))
img_by_url = 'https://kulinarenok.ru/img/steps/31445/1-7.jpg' #@param {type:"string"}
# img_by_url = 'https://img.delo-vcusa.ru/2020/11/Borshh-s-yablokami.jpg'
img_by_url = Image.open(requests.get(img_by_url, stream=True).raw).resize((128, 128))
#@markdown number of images
captions_num = 4 #@param{type:'slider'}
display(img_by_url)
texts = generate_captions(img_by_url, tokenizer, model, vae, template=template,
top_k=16, captions_num=captions_num, bs=16, top_p=0.6, seed=43,
temperature=0.8, limit_eos=False)
ppl_text, ppl_image = self_reranking_by_image(texts, img_by_url, tokenizer, model, vae, bs=16, seed=42)
for idx in ppl_image.argsort()[:8]:
print(texts[idx])
```
# Quantitative omics
The exercises in this notebook correspond to different steps in the analysis of quantitative omics data. We use data from transcriptomics and proteomics experiments.
## Installation of libraries and necessary software
Copy the files *me_bestprobes.csv* and _AllQuantProteinsInAllSamples.csv_ into the folder that contains this jupyter notebook or upload them to http://localhost:8888/tree
Install the necessary libraries (only needed once) by executing (shift-enter) the following cell:
```
install.packages("DAAG", repos='http://cran.us.r-project.org')
install.packages("MASS", repos='http://cran.us.r-project.org')
install.packages("matrixStats", repos='http://cran.us.r-project.org')
if (!requireNamespace("BiocManager", quietly = TRUE))
install.packages("BiocManager")
BiocManager::install(c("Biobase","preprocessCore","qvalue","limma"))
```
## Loading data and libraries
This requires that the installation above has finished without errors.
```
library("MASS")
library("DAAG")
library("matrixStats")
library("Biobase")
library("preprocessCore")
library("qvalue")
library("limma")
me_Kalinka <- read.csv("me_bestprobes.csv",row.names=1)
CanceriTRAQ <- read.csv("AllQuantProteinsInAllSamples.csv",row.names=1)
```
### Exercise 1
We apply different ways of normalization to a typical microarray data set.
Get the data ```geneData``` from the ```Biobase``` package. Normalize the columns (by division on the normal scale or subtraction on the log scale) by a) mean, b) median, c) mean of log-values, and d) median of log-values. Inspect the results thoroughly by comparing the distributions in histograms, density plots, ranked plots and ```qqnorm```. Also compare replicates directly using scatter plots.
```
data(geneData)
geneData[geneData<=0] <- NA
logDat <- log2(geneData)
```
##### Question I: <u>Would you plot the data on log-scale or on normal scale?</u>
_Answer_
##### Question II: <u>What does qqnorm tell us?</u>
_Answer_
##### Question III: <u>What is the problem when normalizing by the mean on normal scale?</u>
_Answer_
##### Question IV: <u>What is the difference between normalization b) and d)?</u>
_Answer_
### Exercise 2
Here, we will determine differentially regulated genes from the comparison between different sample groups of geneData.
a) Take the log-transformed ```geneData``` set and perform t-tests for all genes between sample groups (B, I, K, N, P, T) and (C, G, J, O, R, U, V). You can copy and modify the code from the lecture. Do not forget to correct for multiple testing. Plot a histogram of the p-values and generate a volcano plot.
b) In order to see whether the t-tests also provide results for any comparison, take randomly chosen samples of 6 versus 6 groups and redo the statistical tests.
c) Carry out a principal component analysis on the entire data set and look at how the groups that you tested in a) separate in the loading plot.
```
data(geneData)
geneData[geneData<=0] <- NA
logDat <- log2(geneData)
logDat <- logDat[complete.cases(logDat),]
pvals <- vector("numeric", nrow(logDat))
for(i in 1:nrow(logDat)) {
pvals[i] <- t.test(logDat[i, c("B", "I", "K", "N", "P", "T")], logDat[i, c("C", "G", "J", "O", "R", "U", "V")])$p.value
}
pvals2 <- apply(logDat, 1, function(x) t.test(x[c("B", "I", "K", "N", "P", "T")] , x[c("C", "G", "J", "O", "R", "U", "V")])$p.value)
hist(pvals, 100)
fdrs <- p.adjust(pvals, method = "BH")
plot(rowMeans(logDat[, c("B", "I", "K", "N", "P", "T")]) -
rowMeans(logDat[, c("C", "G", "J", "O", "R", "U", "V")]),
-log10(fdrs))
abline(h=1)
abline(v=c(-2,2))
samples <- sample(LETTERS, 12)
g1 <- samples[1:6]
g2 <- samples[7:12]
pvals <- vector("numeric", nrow(logDat))
for(i in 1:nrow(logDat)) {
pvals[i] <- t.test(logDat[i, g1], logDat[i, g2])$p.value
}
pvals2 <- apply(logDat, 1, function(x) t.test(x[g1] , x[g2])$p.value)
hist(pvals, 100)
fdrs <- p.adjust(pvals, method = "BH")
plot(rowMeans(logDat[, g1]) -
rowMeans(logDat[, g2]),
-log10(fdrs))
abline(h=1)
abline(v=c(-2,2))
pca.out <- princomp(logDat)
plot(pca.out$loadings)
text(pca.out$loadings, colnames(logDat), pos=2)
# ...
```
##### Question I: <u>How many differentially regulated genes do you find in a) and in b) (p-value below 0.01)?</u>
_Answer_
##### Question II: <u>Why does a volcano plot look like a volcano?</u>
_Answer_
##### Question III: <u>What does the PCA tell you about part a) of this exercise?</u>
_Answer_
### Exercise 3
In bottom-up LC-MS experiments, the outputs are peptides, which can be shared between different proteins. This is why the results usually report protein groups instead of single proteins. Here, you will apply different operations to the reported protein groups.
Read the file _ExampleFile.csv_ and extract the column with the protein accession numbers.
a) Pick out one of the values and apply ```strsplit``` to separate database name (e.g. TREMBL, SWISS-PROT) from accession id.
b) Take a value with multiple protein accessions and extract only the accession ids.
c) Operate ```strsplit``` on the entire column and try to extract the accession ids.
d) Count the number of proteins per protein group and plot their distribution as histogram.
```
A <- read.csv("ExampleFile.csv")
protaccs <- A$Protein.Accessions
protaccs[60:65]
# a)
example_str <- strsplit(as.character(protaccs[63]),":",fixed = T)
example_str[[1]][2]
# b)
unlist(strsplit(strsplit(as.character(protaccs[63]),":",fixed = T)[[1]][2],";",fixed=T))
# c) Still some SWISS-PROT in the array though
allprots <- list()
for (i in 1:length(protaccs)) {
str1 <- strsplit(as.character(protaccs[i]),":",fixed = T)
# print(str1[[1]])
if (length(str1[[1]])>1)
allprots[[i]] <- unlist(strsplit(str1[[1]][2],";",fixed=T))
}
# d) Count the proteins per group and plot the distribution
hist(sapply(allprots, length), 50)
table(sapply(allprots, length))
```
##### Question I: <u>What is the difference between TREMBL and SWISS-PROT annotations?</u>
_Answer_
##### Question II: <u>What is the advantage of measuring multiple peptides of a protein?</u>
_Answer_
##### Question III: <u>How many proteins does the largest protein group contain?</u>
_Answer_
### Exercise 4
We will test different normalization methods on micro-array data from _Drosophila melanogaster_ development (https://www.nature.com/articles/nature09634).
a) Make a boxplot and compare the different developmental stages.
Make a scatter plot and change the sample numbers to see how the samples compare quantitatively.
Look at the MA plot and understand what it shows.
b) Carry out median normalization and look at the plots of the normalized data.
c) Carry out quantile normalization with ```normalize.quantiles(microarray)``` and look at the plots again.
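For reference, the MA plot compares two samples $x_1$ and $x_2$ feature by feature:

$$A = \tfrac{1}{2}\left(\log_2 x_1 + \log_2 x_2\right), \qquad M = \log_2 x_2 - \log_2 x_1,$$

so intensity-dependent biases appear as a trend of $M$ against $A$. The code below plots the row means over all samples on the $A$-axis, a common variant of the same idea.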
```
microarray <- me_Kalinka[,2:ncol(me_Kalinka)]
#boxplot(microarray)
sample1 <- 1
sample2 <- 7
plot(rowMeans(microarray,na.rm=T),microarray[,sample2]-microarray[,sample1],cex=0.5,pch=15, col="#00000033",
     xlab="Mean over samples (A)", ylab=paste("Sample",sample2,"- Sample",sample1,"(M)"))
abline(h=0)
# add different normalizations here
# plot again
```
##### Question I: <u>Can you spot the difference between the developmental states from the boxplot?</u>
_Answer_
##### Question II: <u>What complicates normalization of such a data set with large differences?</u>
_Answer_
##### Question III: <u>What explains the sometimes rather drastic changes in the data when using quantile normalization?</u>
_Answer_
##### Question IV: <u>Which normalization would you recommend?</u>
_Answer_
### Exercise 5
In this exercise, you will apply statistical tests to proteomics data.
Carry out t-tests between the two cancer subtypes of the ```CanceriTRAQ``` data (from https://journals.plos.org/plosone/article?id=10.1371/journal.pone.0137048). Plot the p-values (corrected for multiple testing) in a volcano plot and compare the results to the ones in the _IsoProt_ paper (https://pubs.acs.org/doi/10.1021/acs.jproteome.8b00968)
Compare the results for the two types of correction for multiple testing "Benjamini-Hochberg" and the ```qvalue``` library ("Storey" method). You can make a scatter plot of the FDRs (corrected p-values) on log-scale and also compare by making two volcano plots.
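As a compact reminder of the two correction procedures (writing $p_{(1)} \le \dots \le p_{(m)}$ for the sorted p-values of $m$ tests):

$$\mathrm{FDR}^{\mathrm{BH}}_{(i)} = \min_{j \ge i} \frac{m}{j}\, p_{(j)}, \qquad q_{(i)} = \hat{\pi}_0 \cdot \min_{j \ge i} \frac{m}{j}\, p_{(j)}.$$

Storey's method additionally estimates $\hat{\pi}_0$, the proportion of true null hypotheses, from the p-value distribution; Benjamini-Hochberg implicitly sets $\hat{\pi}_0 = 1$, so q-values are never larger than the corresponding BH-adjusted p-values.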
```
CanceriTRAQRed <- CanceriTRAQ[rowSums(is.na(CanceriTRAQ))<3,]
# Add your code here:
```
##### Question I: <u>What does the first line of code do?</u>
_Answer_
##### Question II: <u>How many p-values <0.05 and 0.1 do you get? How many after correction for multiple testing?</u>
_Answer_
##### Question III: <u>What would be needed to increase the number of significantly changing proteins?</u>
_Answer_
##### Question IV: <u>How many p-values below 0.05 would a randomized data set of the same size give without correction for multiple testing?</u>
_Answer_
##### Question V: <u>Name the difference you observe when comparing the two methods ("Benjamini-Hochberg" and "Storey")</u>
_Answer_
### Exercise 6
The ```limma``` package provides better estimates of the p-values by adjusting the observed variances of the features to the generally observed trends in the data. We will further use different tools for biological interpretation.
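Concretely, limma's moderated t-statistic shrinks each feature's sample variance $s_g^2$ (with $d_g$ degrees of freedom) towards a prior variance $s_0^2$ (with $d_0$ degrees of freedom) estimated from all features:

$$\tilde{s}_g^2 = \frac{d_0 s_0^2 + d_g s_g^2}{d_0 + d_g}, \qquad \tilde{t}_g = \frac{\bar{x}_{g1} - \bar{x}_{g2}}{\tilde{s}_g \sqrt{\tfrac{1}{n_1} + \tfrac{1}{n_2}}},$$

and $\tilde{t}_g$ is referred to a t-distribution with $d_0 + d_g$ degrees of freedom, which stabilizes the tests for features whose small sample variances are unreliable.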
Carry out limma testing on the cancer data and compare the results to the ones from the t-tests.
Take the 50 most regulated proteins and upload them to the following two web services for biological interpretation:
- DAVID: http://david.ncifcrf.gov
- GOrilla http://cbl-gorilla.cs.technion.ac.il/
```
## limma
# Set replicate numbers
Reps <- c(1,1,1,1,2,2,2,2)
Data <- CanceriTRAQ
NumCond <- max(Reps)
design <- model.matrix(~0+factor(Reps-1))
colnames(design)<-paste("i",c(1:NumCond),sep="")
contrasts<-NULL
First <- 1
for (i in (1:NumCond)[-First]) contrasts<-append(contrasts,paste(colnames(design)[i],"-",colnames(design)[First],sep=""))
contrast.matrix<-makeContrasts(contrasts=contrasts,levels=design)
print(dim(Data))
lm.fitted <- lmFit(Data,design)
lm.contr <- contrasts.fit(lm.fitted,contrast.matrix)
lm.bayes<-eBayes(lm.contr)
#topTable(lm.bayes)
# These are the (uncorrected) p-values from the moderated t-test from the limma package:
plvalues <- lm.bayes$p.value
head(sort(p.adjust(plvalues, method="BH")))
```
##### Question I: <u>How many regulated proteins do you find this time (FDR < 0.05)?</u>
_Answer_
##### Question II: <u>Which are the most enriched Gene ontology terms (GO terms, BP) in both web sites?</u>
_Answer_
##### Question III: <u>Which pathways are likely to distinguish the two cancer subtypes?</u>
_Answer_
<table><tr>
<td style="background-color:#ffffff;text-align:left;"><a href="http://qworld.lu.lv" target="_blank"><img src="../images/qworld.jpg" width="30%" align="left"></a></td>
<td style="background-color:#ffffff;"> </td>
<td style="background-color:#ffffff;vertical-align:text-middle;text-align:right;">
<table><tr style="background-color:white;">
<td> Visit</td>
<td><a href="http://qworld.lu.lv" target="_blank"><img src="../images/web-logo.png" width="35px"></a></td>
<td width="10pt"></td>
<td> Join</td>
<td><a href="https://qworldworkspace.slack.com/" target="_blank"><img src="../images/slack-icon.png" width="80px"></a></td>
<td width="10pt"></td>
<td>Follow</td>
<td><a href="https://www.facebook.com/qworld19/" target="_blank"><img src="../images/facebook-icon.png" width="40px"></a></td>
<td><a href="https://twitter.com/QWorld19" target="_blank"><img src="../images/twitter-icon.png" width="40px"></a></td>
</tr></table>
</td>
</tr></table>
<h2> Credits </h2>
<font style="color: #cd7f32;"><b>Bronze</b></font> was created by <a href="http://abu.lu.lv" target="_blank"><b>Dr. Abuzer Yakaryilmaz</b></a> (<a href="http://qworld.lu.lv/index.php/qlatvia/" target="_blank">QLatvia</a>) in October 2018, and most of it has been developed by him.
<b>Dr. Maksims Dimitrijevs</b> (<a href="http://qworld.lu.lv/index.php/qlatvia/" target="_blank">QLatvia</a>) and <b>Dr. Özlem Salehi Köken</b> (<a href="http://qworld.lu.lv/index.php/qturkey/" target="_blank">QTurkey</a>) have revised all notebooks, proposed certain changes, and prepared a couple of new notebooks.
The first recorded lectures were prepared by <b>Dr. Abuzer Yakaryilmaz</b>, <b>Dr. Özlem Salehi Köken</b>, and <b>Anastasija Trizna</b> (<a href="http://qworld.lu.lv/index.php/qlatvia/" target="_blank">QLatvia</a>).
Starting from <b>July 7, 2019</b>, Bronze has been hosted in a public gitlab repository (https://gitlab.com/qkitchen/basics-of-quantum-computing), and it is expected to receive contributions from the public as well.
<hr>
<h3>Bronze 2020</h3>
Bronze has been revised throughout 2020.
We thank the participants of the [QTraining for Bronze program](https://qworld.lu.lv/index.php/qtraining-for-bronze-2020/) for their corrections and suggestions.
<hr>
<h3>Bronze 2019</h3>
We thank the <b><i><a href="https://qworld.lu.lv/index.php/qdrive/" target="_blank">QDrive</a> mentors and participants</i></b> for their very helpful corrections and suggestions.
We thank <b><i><a href="https://pl.linkedin.com/in/adamglos92" target="_blank">Adam Glos</a></i></b> (<a href="http://qworld.lu.lv/index.php/qpoland/" target="_blank">QPoland</a>) for his comments on Bronze 2018.
<hr>
<h3>Bronze 2018</h3>
We thank <b><i>Katrina Kizenbaha</i></b> from Riga TechGirls for her revisions of our notebooks on Python.
We thank <b><i>Martins Kalis</i></b> (QLatvia) for his technical comments on Python, Qiskit, and our notebooks.
We thank <b><i>Maksims Dimitrijevs</i></b> (QLatvia) for his careful reading of and corrections to our notebooks.
We thank the QLatvia members and former members <b><i>Martins Kalis</i></b>, <b><i>Maksims Dimitrijevs</i></b>, <b><i>Aleksejs Naumovs</i></b>, <b><i>Andis Draguns</i></b>, and <b><i>Matiss Apinis</i></b> for their help and support.
We thank <b><i>the students (<a href="https://www.df.lu.lv">DF@LU</a>) attending the quantum programming meetings</i></b> held every Friday (Fall 2018) for their comments while working with our notebooks.
<hr>
<img src="https://s8.hostingkartinok.com/uploads/images/2018/08/308b49fcfbc619d629fe4604bceb67ac.jpg" width=500, height=450>
<h3 style="text-align: center;"><b>Phystech School of Applied Mathematics and Informatics (FPMI), MIPT</b></h3>
***Some parts of this notebook are almost exact copies of the [ML-MIPT course](https://github.com/girafe-ai/ml-mipt) materials. Special thanks to the ML-MIPT team for making them publicly available. [Original notebook](https://github.com/girafe-ai/ml-mipt/blob/advanced_f20/week1_05_BERT_and_GPT/week05_BERT_for_text_classification.ipynb).***
## Practice: A Visual Notebook to Using BERT for the First Time
*Credits: first part of this notebook belongs to Jay Alammar and his [great blog post](http://jalammar.github.io/a-visual-guide-to-using-bert-for-the-first-time/) (while it has minor changes). His blog is a great way to dive into the DL and NLP concepts.*
<img src="https://jalammar.github.io/images/distilBERT/bert-distilbert-sentence-classification.png" />
In this notebook, we will use a pre-trained deep learning model to process some text. We will then use the output of that model to classify the text. The text is a list of sentences from film reviews, and we will classify each sentence as speaking either "positively" or "negatively" about its subject.
### Models: Sentence Sentiment Classification
Our goal is to create a model that takes a sentence (just like the ones in our dataset) and produces either 1 (indicating the sentence carries a positive sentiment) or a 0 (indicating the sentence carries a negative sentiment). We can think of it as looking like this:
<img src="https://jalammar.github.io/images/distilBERT/sentiment-classifier-1.png" />
Under the hood, the model is actually made up of two models.
* DistilBERT processes the sentence and passes along some information it extracted from it on to the next model. DistilBERT is a smaller version of BERT developed and open sourced by the team at HuggingFace. It’s a lighter and faster version of BERT that roughly matches its performance.
* The next model, a basic Logistic Regression model from scikit learn will take in the result of DistilBERT’s processing, and classify the sentence as either positive or negative (1 or 0, respectively).
The data we pass between the two models is a vector of size 768. We can think of this vector as an embedding of the sentence that we can use for classification.
<img src="https://jalammar.github.io/images/distilBERT/distilbert-bert-sentiment-classifier.png" />
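A shape-level sketch of this hand-off, with random vectors standing in for the real DistilBERT sentence embeddings (all names and sizes here are illustrative only):

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n_sentences, embedding_dim = 100, 768  # one 768-dim vector per sentence

# Stand-in for the output of DistilBERT: one embedding per sentence.
features = rng.normal(size=(n_sentences, embedding_dim))
labels = rng.integers(0, 2, size=n_sentences)  # 1 = positive, 0 = negative

# The second model: plain logistic regression on top of the embeddings.
clf = LogisticRegression(max_iter=1000)
clf.fit(features, labels)
predictions = clf.predict(features)
print(predictions.shape)  # (100,)
```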
## Dataset
The dataset we will use in this example is [SST2](https://nlp.stanford.edu/sentiment/index.html), which contains sentences from movie reviews, each labeled as either positive (has the value 1) or negative (has the value 0):
<table class="features-table">
<tr>
<th class="mdc-text-light-green-600">
sentence
</th>
<th class="mdc-text-purple-600">
label
</th>
</tr>
<tr>
<td class="mdc-bg-light-green-50" style="text-align:left">
a stirring , funny and finally transporting re imagining of beauty and the beast and 1930s horror films
</td>
<td class="mdc-bg-purple-50">
1
</td>
</tr>
<tr>
<td class="mdc-bg-light-green-50" style="text-align:left">
apparently reassembled from the cutting room floor of any given daytime soap
</td>
<td class="mdc-bg-purple-50">
0
</td>
</tr>
<tr>
<td class="mdc-bg-light-green-50" style="text-align:left">
they presume their audience won't sit still for a sociology lesson
</td>
<td class="mdc-bg-purple-50">
0
</td>
</tr>
<tr>
<td class="mdc-bg-light-green-50" style="text-align:left">
this is a visually stunning rumination on love , memory , history and the war between art and commerce
</td>
<td class="mdc-bg-purple-50">
1
</td>
</tr>
<tr>
<td class="mdc-bg-light-green-50" style="text-align:left">
jonathan parker 's bartleby should have been the be all end all of the modern office anomie films
</td>
<td class="mdc-bg-purple-50">
1
</td>
</tr>
</table>
## Installing the transformers library
Let's start by installing the huggingface transformers library so we can load our deep learning NLP model.
```
!pip install transformers
```
[Transformers library doc](https://huggingface.co/transformers/)

```
import numpy as np
import pandas as pd
from sklearn.model_selection import train_test_split
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import GridSearchCV
from sklearn.model_selection import cross_val_score
import torch
import transformers as ppb
import warnings
warnings.filterwarnings('ignore')
```
## Using BERT for text classification.
### Importing the dataset
We'll use pandas to read the dataset and load it into a dataframe.
```
df = pd.read_csv(
'https://github.com/clairett/pytorch-sentiment-classification/raw/master/data/SST2/train.tsv',
delimiter='\t',
header=None
)
```
For performance reasons, we'll only use 2,000 sentences from the dataset
```
batch_1 = df[:2000]
batch_1.head()
```
We can ask pandas how many sentences are labeled as "positive" (value 1) and how many are labeled "negative" (having the value 0)
```
batch_1[1].value_counts()
```
## Loading the Pre-trained BERT model
Let's now load a pre-trained BERT model.

```
# For DistilBERT:
model_class, tokenizer_class, pretrained_weights = (ppb.DistilBertModel, ppb.DistilBertTokenizer, 'distilbert-base-uncased')
## Want BERT instead of distilBERT? Uncomment the following line:
#model_class, tokenizer_class, pretrained_weights = (ppb.BertModel, ppb.BertTokenizer, 'bert-base-uncased')
# Load pretrained model/tokenizer
tokenizer = tokenizer_class.from_pretrained(pretrained_weights)
model = model_class.from_pretrained(pretrained_weights)
```
Right now, the variable `model` holds a pretrained [distilBERT](https://medium.com/huggingface/distilbert-8cf3380435b5) model -- a version of BERT that is smaller, but much faster and requiring a lot less memory.
### Step #1: Preparing the Dataset
Before we can hand our sentences to BERT, we need to do some minimal processing to put them in the format it requires.
### Tokenization
Our first step is to tokenize the sentences -- break them up into words and subwords in the format BERT is comfortable with.
```
tokenized = batch_1[0].apply((lambda x: tokenizer.encode(x, add_special_tokens=True)))
print(tokenized[0])
text = batch_1.loc[1, 0]
print(text)
print(tokenizer.encode(text))
text_encode = tokenizer.encode(text)
print(tokenizer.encode(text, add_special_tokens=False))
print(tokenizer.decode(text_encode))
print(' '.join([tokenizer.ids_to_tokens[i] for i in text_encode]))
tokenizer.cls_token_id, tokenizer.sep_token_id, tokenizer.pad_token_id
tokenizer.vocab_size
```
<img src="https://jalammar.github.io/images/distilBERT/bert-distilbert-tokenization-2-token-ids.png" />
### Padding
After tokenization, `tokenized` is a list of sentences -- each sentence is represented as a list of tokens. We want BERT to process our examples all at once (as one batch). It's just faster that way. For that reason, we need to pad all lists to the same size, so we can represent the input as one 2-d array, rather than a list of lists (of different lengths).
```
import matplotlib.pyplot as plt
plt.hist(list(map(len, tokenized.values)))
plt.show()
max_len = max(len(i) for i in tokenized.values)
padded = np.array([i + [0]*(max_len-len(i)) for i in tokenized.values])
```
Our dataset is now in the `padded` variable, we can view its dimensions below:
```
np.array(padded).shape
```
### Masking
If we directly send `padded` to BERT, that would slightly confuse it. We need to create another variable to tell it to ignore (mask) the padding we've added when it's processing its input. That's what attention_mask is:
```
attention_mask = np.where(padded != 0, 1, 0)
attention_mask.shape
```
### Step #2: And Now, Deep Learning!
Now that we have our model and inputs ready, let's run our model!
<img src="https://jalammar.github.io/images/distilBERT/bert-distilbert-tutorial-sentence-embedding.png" />
The `model()` function runs our sentences through BERT. The results of the processing will be returned into `last_hidden_states`.
```
device = torch.device('cuda' if torch.cuda.is_available() else 'cpu')
device
model.eval()
model = model.to(device)
padded.shape, attention_mask.shape
from tqdm.notebook import tqdm
input_ids = torch.tensor(padded)
attention_mask = torch.tensor(attention_mask)
batch_size = 20
output = []
for idx in tqdm(range(0, len(padded), batch_size)):
batch = input_ids[idx:idx + batch_size].to(device)
print(batch.shape)
part_attention_mask = attention_mask[idx:idx + batch_size].to(device)
print(part_attention_mask.shape)
with torch.no_grad():
last_hidden_states = model(batch, attention_mask=part_attention_mask)
output.append(last_hidden_states[0].cpu())
```
Let's slice only the part of the output that we need: the output corresponding to the first token of each sentence. The way BERT does sentence classification is that it adds a token called `[CLS]` (for classification) at the beginning of every sentence. The output corresponding to that token can be thought of as an embedding for the entire sentence.
<img src="https://jalammar.github.io/images/distilBERT/bert-output-tensor-selection.png" />
We'll save those in the `features` variable, as they'll serve as the features for our logistic regression model.
$Z_{[CLS]} = \sum\limits_{token \: \in \: \text{sequence}} \operatorname{softmax}\left(\frac{Q_{[CLS]} \cdot K_{token}}{\sqrt{d_k}}\right) \cdot V_{token}$
----------
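A minimal NumPy sketch of this single-query attention pooling (illustrative only, not the actual HuggingFace internals):

```python
import numpy as np

def attention_pool(q, K, V):
    """Scaled dot-product attention for a single query vector.

    q: (d,) query (e.g. the [CLS] position), K: (n, d) keys, V: (n, d_v) values.
    Returns the softmax-weighted sum of the value vectors.
    """
    scores = K @ q / np.sqrt(q.shape[0])     # (n,) similarity scores
    weights = np.exp(scores - scores.max())  # numerically stable softmax
    weights = weights / weights.sum()
    return weights @ V                       # (d_v,) pooled representation

rng = np.random.default_rng(0)
q, K, V = rng.normal(size=4), rng.normal(size=(6, 4)), rng.normal(size=(6, 4))
z = attention_pool(q, K, V)
print(z.shape)  # (4,)
```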
```
input_ids.shape
# change output
output = torch.cat(output, dim=0)
output.shape
features = output[:,0,:].numpy()
```
The labels indicating which sentence is positive and negative now go into the `labels` variable
```
labels = batch_1[1]
features.shape, labels.shape
```
### Step #3: Train/Test Split
Let's now split our dataset into a training set and testing set (even though we're using 2,000 sentences from the SST2 training set).
```
train_features, test_features, train_labels, test_labels = train_test_split(features, labels, random_state=0)
```
<img src="https://jalammar.github.io/images/distilBERT/bert-distilbert-train-test-split-sentence-embedding.png" />
### [Extra] Grid Search for Parameters
We can dive into Logistic regression directly with the Scikit Learn default parameters, but sometimes it's worth searching for the best value of the C parameter, which determines regularization strength.
```
parameters = {'C': np.linspace(0.0001, 10, 20)}
grid_search = GridSearchCV(LogisticRegression(), parameters)
grid_search.fit(train_features, train_labels)
print('best parameters: ', grid_search.best_params_)
print('best score: ', grid_search.best_score_)
```
We now train the LogisticRegression model. If you've chosen to do the grid search, you can plug the value of C into the model declaration (e.g. `LogisticRegression(C=5.2)`).
```
lr_clf = LogisticRegression(C=1.052721052631579)
lr_clf.fit(train_features, train_labels)
```
<img src="https://jalammar.github.io/images/distilBERT/bert-training-logistic-regression.png" />
### Step #4: Evaluating Model
So how well does our model do in classifying sentences? One way is to check the accuracy against the testing dataset:
```
lr_clf.score(test_features, test_labels)
```
How good is this score? What can we compare it against? Let's first look at a dummy classifier:
```
from sklearn.dummy import DummyClassifier
clf = DummyClassifier()
scores = cross_val_score(clf, train_features, train_labels)
print("Dummy classifier score: %0.3f (+/- %0.2f)" % (scores.mean(), scores.std() * 2))
```
So our model clearly does better than a dummy classifier. But how does it compare against the best models?
### Proper SST2 scores
For reference, the [highest accuracy score](http://nlpprogress.com/english/sentiment_analysis.html) for this dataset is currently **96.8**. DistilBERT can be trained to improve its score on this task – a process called **fine-tuning** which updates BERT’s weights to make it achieve a better performance in this sentence classification task (which we can call the downstream task). The fine-tuned DistilBERT turns out to achieve an accuracy score of **90.7**. The full size BERT model achieves **94.9**.
And that’s it! That’s a good first contact with BERT. The next step would be to head over to the documentation and try your hand at [fine-tuning](https://huggingface.co/transformers/examples.html#glue). You can also go back and switch from distilBERT to BERT and see how that works.
So, how does it look? Did we achieve better results?
Here are some further ideas:
* Try using the larger BERT (e.g. BERT-base or BERT-large) and compare the results (be careful, they require more memory).
* Using BERT output for translation? Why not ;)
# Lab: Convolutional Neural Networks
In this lab, we will work with Convolutional Neural Networks to solve an image classification problem. In particular, we will classify images of characters from the well-known TV series The Simpsons.
Since deep CNNs are a fairly advanced and computationally expensive type of model, we recommend doing this exercise on Google Colaboratory with GPU support. [This link](https://medium.com/deep-learning-turkey/google-colab-free-gpu-tutorial-e113627b9f5d) explains how to enable a GPU-backed environment. *Note: the opencv library is used to read the images and standardize them to the same size. This library is already installed in the Colab environment, but if you work locally you will have to install it.*
<center><img src="https://i.imgur.com/i8zIGqX.jpg" style="text-align: center" height="300px"></center>
The dataset consists of images of Simpsons characters taken directly from episodes of the series. It was compiled by [Alexandre Attia](http://www.alexattia.fr/) and is more complex than the Fashion MNIST dataset we have used so far. Besides having more classes (we will use the 18 characters with the most images), the characters can appear in different poses, in different positions in the image, or with other characters on screen (although the character to classify always appears in the predominant position).
The training dataset can be downloaded here:
[Training data](https://onedrive.live.com/download?cid=C506CF0A4F373B0F&resid=C506CF0A4F373B0F%219337&authkey=AMzI92bJPx8Sd60) (~500MB)
The test dataset can be downloaded here:
[Test data](https://onedrive.live.com/download?cid=C506CF0A4F373B0F&resid=C506CF0A4F373B0F%219341&authkey=ANnjK3Uq1FhuAe8) (~10MB)
Before starting the exercise, we recommend downloading the images and taking a look at them.
## Loading the Data
```
import cv2
import os
import numpy as np
import keras
import matplotlib.pyplot as plt
import glob
# First, download the training data
keras.utils.get_file(fname="simpsons_train.tar.gz",
origin="https://onedrive.live.com/download?cid=C506CF0A4F373B0F&resid=C506CF0A4F373B0F%219337&authkey=AMzI92bJPx8Sd60")
# Unpack the archive
!tar -xzf /root/.keras/datasets/simpsons_train.tar.gz -C /root/.keras/datasets
# Do the same with the test data
keras.utils.get_file(fname="simpsons_test.tar.gz",
origin="https://onedrive.live.com/download?cid=C506CF0A4F373B0F&resid=C506CF0A4F373B0F%219341&authkey=ANnjK3Uq1FhuAe8")
!tar -xzf /root/.keras/datasets/simpsons_test.tar.gz -C /root/.keras/datasets
# This variable maps class number to character.
# We only use the 18 characters in the dataset with the most images.
MAP_CHARACTERS = {
0: 'abraham_grampa_simpson', 1: 'apu_nahasapeemapetilon', 2: 'bart_simpson',
3: 'charles_montgomery_burns', 4: 'chief_wiggum', 5: 'comic_book_guy', 6: 'edna_krabappel',
7: 'homer_simpson', 8: 'kent_brockman', 9: 'krusty_the_clown', 10: 'lisa_simpson',
11: 'marge_simpson', 12: 'milhouse_van_houten', 13: 'moe_szyslak',
14: 'ned_flanders', 15: 'nelson_muntz', 16: 'principal_skinner', 17: 'sideshow_bob'
}
# We will standardize all images to size 64x64
IMG_SIZE = 64
def load_train_set(dirname, map_characters, verbose=True):
"""Loads the training data as images.
Since the images come in different sizes, we use the opencv library
to resize them all to IMG_SIZE x IMG_SIZE.
Args:
dirname: full path of the directory to read the data from
map_characters: mapping between labels and characters
verbose: if True, prints information about the loaded images
Returns:
X, y: X is an array with all the loaded images, each of size
IMG_SIZE x IMG_SIZE
y is an array with the label corresponding to each image
"""
X_train = []
y_train = []
for label, character in map_characters.items():
files = os.listdir(os.path.join(dirname, character))
images = [file for file in files if file.endswith("jpg")]
if verbose:
print("Reading {} images found for {}".format(len(images), character))
for image_name in images:
image = cv2.imread(os.path.join(dirname, character, image_name))
X_train.append(cv2.resize(image,(IMG_SIZE, IMG_SIZE)))
y_train.append(label)
return np.array(X_train), np.array(y_train)
def load_test_set(dirname, map_characters, verbose=True):
"""Works the same way as load_train_set,
but loads the test data."""
X_test = []
y_test = []
reverse_dict = {v: k for k, v in map_characters.items()}
for filename in glob.glob(dirname + '/*.*'):
char_name = "_".join(filename.split('/')[-1].split('_')[:-1])
if char_name in reverse_dict:
image = cv2.imread(filename)
image = cv2.resize(image, (IMG_SIZE, IMG_SIZE))
X_test.append(image)
y_test.append(reverse_dict[char_name])
if verbose:
print("Read {} test images".format(len(X_test)))
return np.array(X_test), np.array(y_test)
# Load the data. If you are not working on Colab, change the paths
# to those of the files where you downloaded the data.
DATASET_TRAIN_PATH_COLAB = "/root/.keras/datasets/simpsons"
DATASET_TEST_PATH_COLAB = "/root/.keras/datasets/simpsons_testset"
X, y = load_train_set(DATASET_TRAIN_PATH_COLAB, MAP_CHARACTERS)
X_t, y_t = load_test_set(DATASET_TEST_PATH_COLAB, MAP_CHARACTERS)
# Shuffle the data randomly. This matters because otherwise, if we
# take e.g. the final 20% of the data as a validation set, we would be
# using only a small number of characters, since the images are read
# sequentially, character by character.
perm = np.random.permutation(len(X))
X, y = X[perm], y[perm]
```
## Deliverable
Using Convolutional Neural Networks in Keras, train a classifier able to recognize Simpsons characters in images with a test-set accuracy of **85%**. Write a report analyzing several of the alternatives you tried and the results obtained.
Below are some aspects that could be analyzed in your report (you need not cover all of them by any means; these are just ideas for things you could explore):
* Analysis of the data to be used.
* Analysis of the results, computing per-class *precision* and *recall* metrics and discussing which classes perform best or worst.
* Visual analysis of the network's errors. Which kinds of images or which characters give our model the most trouble?
* Comparison of CNN models against a fully connected model on this problem.
* Use of different CNN architectures, commenting on aspects such as depth, hyperparameters, optimizer, regularization techniques, *batch normalization*, etc.
* [ *somewhat harder* ] Use of *data augmentation*. This can be achieved with Keras's [ImageDataGenerator](https://keras.io/preprocessing/image/#imagedatagenerator-class) class.
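For per-class precision and recall, `sklearn.metrics.classification_report` is the usual tool, but the computation itself is simple enough to sketch in numpy; the labels below are a made-up 3-class toy case, not the Simpsons data:

```python
import numpy as np

# hypothetical predictions for a 3-class toy problem
y_true = np.array([0, 0, 1, 1, 2, 2])
y_pred = np.array([0, 1, 1, 1, 2, 0])

for c in range(3):
    tp = np.sum((y_pred == c) & (y_true == c))      # true positives for class c
    precision = tp / max(np.sum(y_pred == c), 1)    # of those predicted c, how many were right
    recall = tp / max(np.sum(y_true == c), 1)       # of the actual c samples, how many were found
    print(f"class {c}: precision={precision:.2f} recall={recall:.2f}")
```

Comparing the two metrics per character is a quick way to see whether a class is being over-predicted (low precision) or missed (low recall).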
Notes:
* Remember to split the data into training/validation sets in order to get a good estimate of how the model will perform on the test data, and to check that you are not overfitting. An 80/20 split is one possibility.
* It is not necessary to show the training traces of every model you trained in the notebook, although saving plots of those runs for the analysis is a good idea. However, **you must show the full training of the best model obtained and its evaluation on the test data**.
* The images are **not normalized**. They must be normalized as we have done in previous assignments.
* The test set for this problem has somewhat "easier" images, so you may well see considerably better metrics on the test set than on the training set.
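As a sketch of the normalization note above (the array shapes are hypothetical, standing in for the loaded Simpsons data), scaling pixel values to [0, 1] and one-hot encoding the 18 labels could look like:

```python
import numpy as np

# hypothetical loaded data: 4 RGB images of 64x64, labels in 0..17
X = np.random.randint(0, 256, size=(4, 64, 64, 3)).astype("float32")
y = np.array([0, 7, 11, 17])

X_norm = X / 255.0            # scale pixel values to [0, 1]
y_onehot = np.eye(18)[y]      # one-hot encode the 18 classes
```

`keras.utils.to_categorical` would do the one-hot step equivalently.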
```
import torch
import gym
import time
import numpy as np
import matplotlib
import matplotlib.pyplot as plt
%matplotlib inline
device = torch.device("cuda:0" if torch.cuda.is_available() else "cpu")
env = gym.make('Acrobot-v1')
env.seed(0)
print('State shape: ', env.observation_space.shape)
print('Number of actions: ', env.action_space.n)
import torch
import torch.nn as nn
import torch.nn.functional as F
class Critic(nn.Module): # scores how good or bad an action is
"""Critic (Value) Model."""
def __init__(self, state_size, action_size, seed= 12):
super(Critic, self).__init__()
self.seed = torch.manual_seed(seed)
"*** YOUR CODE HERE ***"
self.fc1 = nn.Linear(state_size, 32)
self.fc2 = nn.Linear(32, 32)
self.fc3 = nn.Linear(32, action_size)
# def forward(self, state):
# """Build a network that maps state -> action values."""
# x = self.fc1(state)
# x = torch.tanh(x)
# x = self.fc2(x)
# x = torch.tanh(x)
# x = self.fc3(x)
# x = torch.tanh(x) #using tanh for giving score of how good is action
# return x
def forward(self, state):
"""Build a network that maps state -> action values."""
x = self.fc1(state)
x = F.relu(x)
x = self.fc2(x)
x = F.relu(x)
x = self.fc3(x)
x = F.relu(x) # scores how good the action is
return x
class Actor(nn.Module): #Policy Network
"""Actor (Policy) Model."""
def __init__(self, state_size, action_size, seed= 12):
super(Actor, self).__init__()
self.seed = torch.manual_seed(seed)
"*** YOUR CODE HERE ***"
self.fc1 = nn.Linear(state_size, 32)
self.fc2 = nn.Linear(32, 32)
self.fc3 = nn.Linear(32,action_size)
self.final = nn.Sigmoid()
def forward(self, state):
"""Build a network that maps state -> action values."""
x = self.fc1(state)
x = F.relu(x)
x = self.fc2(x)
x = F.relu(x)
x = self.fc3(x)
x = self.final(x) # sigmoid squashes each action score to (0, 1)
return x
device = 'cuda' if torch.cuda.is_available() else 'cpu'
actor = Actor(6,3,12).to(device)
critic = Critic(6,3,12).to(device)
import torch.optim as optim
optimizer = optim.Adam(actor.parameters(), lr=1e-4)
optimizer_critic = optim.Adam(critic.parameters(), lr=1e-4)
print(actor)
print(critic)
# Testing the network
for _ in range(5):
state = env.reset()
for i in range(100):
env.render()
state_tensor = torch.from_numpy(state).float().to(device)
prob = actor.forward(state_tensor)
action = prob.argmax().item()  # env.step expects a plain Python int
prob = prob.max()
action_baseline = critic.forward(state_tensor)
next_state, reward, done, _ = env.step(action)
state = next_state
print('\rReward {} with action {} with score {}'.format(reward, action, action_baseline), end = ' ')
if done:
break
```
### Building and Training the Policy Network with PPO
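Before the full cell below, it may help to see the reward bookkeeping in isolation: per-step rewards are discounted, then a reversed cumulative sum turns them into the future return at each step (the numbers here are toy values):

```python
import numpy as np

rewards = np.array([1.0, 1.0, 1.0, 1.0])
discount = 0.5 ** np.arange(len(rewards))   # [1, 0.5, 0.25, 0.125]
discounted = rewards * discount

# future return at each step = sum of discounted rewards from that step on
future = discounted[::-1].cumsum()[::-1]
# → [1.875, 0.875, 0.375, 0.125]
```

The same `[::-1].cumsum(...)[::-1]` trick appears in `clipped_surrogate` below, applied column-wise.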
```
def clipped_surrogate(policy, old_probs, states, actions, rewards, next_states,
discount=0.995,
epsilon=0.1, beta=0.01,
gamma = 0.1):
states = torch.from_numpy(np.array(states)).float().to(device)
next_states = torch.from_numpy(np.array(next_states)).float().to(device)
discount = discount**np.arange(len(rewards))
rewards_te = np.multiply(rewards, discount).reshape(len(rewards),1)
rewards_future = rewards_te[::-1].cumsum(axis=0)[::-1]
actions = np.array(actions, dtype=np.int8)
actions_final = torch.LongTensor(actions.reshape(len(actions),1))
# # adding contribution of actor
# f1 = critic.forward(next_states).argmax(1).reshape(len(next_states),1)
# f2 = torch.LongTensor(f1.cpu().reshape(f1.size()[0],1))
# f3 = torch.gather(f1,1,f2.to(device))
# # f1 = critic.forward(states).argmax(1).reshape(len(next_states),1)
# # f2 = torch.LongTensor(f1.cpu().reshape(f1.size()[0],1))
# # f4 = torch.gather(f1,1,f2.to(device))
# f1 = critic.forward(states)
# f4 = torch.gather(f1,1,actions_final.to(device))
# rewards_future = rewards_future + gamma*f3.detach().cpu().numpy() - f4.detach().cpu().numpy()
# ##end
mean = np.mean(rewards_future, axis = 0)
std = np.std(rewards_future, axis = 0)
rewards_normalized = (rewards_future - mean)/std
old_probs = torch.tensor(old_probs, dtype=torch.float, device=device).reshape(len(old_probs),1)
rewards = torch.tensor(rewards_normalized, dtype=torch.float, device=device)
g = actor.forward(states)
new_probs = torch.gather(g,1,actions_final.to(device))
ratio = new_probs/old_probs
# # clipped function
clip = torch.clamp(ratio, 1-epsilon, 1+epsilon)
clipped_surrogate = torch.min(ratio*rewards, clip*rewards)
# include a regularization term
# this steers new_policy towards 0.5
# add in 1.e-10 to avoid log(0) which gives nan
entropy = -(new_probs*torch.log(old_probs+1.e-10)+ \
(1.0-new_probs)*torch.log(1.0-old_probs+1.e-10))
return torch.mean(clipped_surrogate + beta*entropy)
def update_baseline(next_state, reward, state):
next_state = torch.from_numpy(np.array(next_state)).to(device).float()
reward = torch.from_numpy(np.array(reward)).to(device)
state = torch.from_numpy(np.array(state)).to(device).float()
Loss = F.mse_loss(critic.forward(state), reward + critic.forward(next_state))
optimizer_critic.zero_grad()
Loss.backward()
optimizer_critic.step()
def collect_trajectories(envs, policy, tmax=200):
state = env.reset()
states = []
actions = []
rewards = []
probs = []
next_states = []
for _ in range(tmax):
prob = actor(torch.from_numpy(state).float().to(device)) #for converting state to torch variable
prob_new = prob.max()
probs.append(prob_new)
states.append(state)
action = prob.argmax().item()  # plain int for env.step and later np.array
next_state, reward, done , _ = env.step(action)
# update_baseline(next_state, reward,state)
next_states.append(next_state)
rewards.append(reward)
actions.append(action)
state = next_state
if done:
break
return probs, states, actions, rewards, next_states
probs, states, actions, rewards, next_states = collect_trajectories(env, actor, tmax=200)
discount_rate = .99
epsilon = 0.1
beta = .01
tmax = 200
SGD_epoch = 4
episode = 1000
import progressbar as pb
widget = ['training loop: ', pb.Percentage(), ' ',
pb.Bar(), ' ', pb.ETA() ]
timer = pb.ProgressBar(widgets=widget, maxval=episode).start()
# create the environment instance used for collecting trajectories
envs = gym.make('Acrobot-v1')
mean_rewards = []
for e in range(episode):
# collect trajectories
old_probs, states, actions, rewards, next_states = \
collect_trajectories(envs, actor, tmax=tmax)
total_rewards = np.sum(rewards, axis=0)
# this is the SOLUTION!
# use your own surrogate function
# L = -surrogate(policy, old_probs, states, actions, rewards, beta=beta)
for _ in range(SGD_epoch):
L = -1*clipped_surrogate(actor, old_probs, states, actions, rewards, next_states, epsilon=epsilon, beta=beta)
optimizer.zero_grad()
L.backward()
optimizer.step()
del L
epsilon*=0.999
# the regulation term also reduces
# this reduces exploration in later runs
beta*=.995
# get the average reward of the parallel environments
mean_rewards.append(np.mean(total_rewards))
# display some progress every 20 iterations
if (e+1)%20 ==0 :
print("Episode: {0:d}, score: {1:f}".format(e+1,np.mean(total_rewards)))
print(total_rewards)
# update progress widget bar
timer.update(e+1)
if(np.mean(total_rewards) == 200):
break
timer.finish()
plt.plot(mean_rewards)
```
### Testing
```
actor.forward(state_tensor)
# Testing the network
for _ in range(5):
state = env.reset()
for i in range(100):
env.render()
state_tensor = torch.from_numpy(state).float().to(device)
prob = actor.forward(state_tensor)
action_baseline = critic.forward(state_tensor)
action = prob.argmax().item()
next_state, reward, done, _ = env.step(action)
state = next_state
print('\rReward {} with action {} with critic baseline {} {}'.format(reward, action, action_baseline, prob), end = ' ')
if done:
break
env.close()
torch.save(actor.state_dict(), 'actor.pth')  # save the trained policy weights (filename is arbitrary)
```
```
#STA 663 Final Project
#Juncheng Dong, Xiaoqiao Xing
#May 2020
import numpy as np
import pandas as pd
import math
import matplotlib.pyplot as plt
from numpy import linalg as la
import random
from sklearn.cross_decomposition import PLSRegression
from sklearn.linear_model import Ridge
from sklearn.model_selection import GridSearchCV
from sklearn import model_selection
from sklearn.model_selection import KFold
from sklearn.decomposition import PCA
from sklearn.metrics import mean_squared_error
```
# Partial Least Squares Function
```
def Normalize(X):
'''Center and normalize the dataset column-wise; dataset should be a numpy array'''
return (X - np.mean(X, axis = 0))/(np.std(X, axis = 0))
def norm(x):
    '''Euclidean norm of a vector'''
    return np.sqrt(np.sum(np.square(x)))
def PLS(X,Y,ncomponents,tol=1e-6):
E,F=X,Y
T = []
W = []
Q = []
U = []
P = []
B = []
rY, cY = Y.shape
rX, cX = X.shape
for i in range(ncomponents):
index=np.random.choice(range(Y.shape[1]))
#u=Y[:,index]
u=np.random.rand(rY)
counter = 0
while(True):
w = E.T@u
w = w/norm(w)
t = E@w
t = t/norm(t)
q = F.T@t
q = q/norm(q)
u = F@q
if counter==0:
tlast=t
elif norm(tlast-t)<tol:
break
else:
tlast=t
counter=counter+1
b = t.T@u
p = E.T@t
B.append(b)
T.append(t)
P.append(p)
W.append(w)
Q.append(q)
U.append(u)
E = E-t.reshape(-1,1)@p.reshape(1,-1)
F = F-b*t.reshape(-1,1)@q.reshape(1,-1)
return (np.array(T),np.array(P),np.array(W),np.array(Q),np.array(U),np.diag(B))
```
# Test Function on Wine Data
```
#Example1 Data : Wine
X1 = np.array([[7, 7, 13, 7],
[4, 3, 14, 7],
[10, 5, 12, 5],
[16, 7, 11, 3],
[13, 3, 10, 3]])
Y1 = np.array([[14, 7, 8],
[10, 7, 6],
[8, 5, 5],
[2, 4, 7],
[6, 2, 4]])
X1=Normalize(X1)
Y1=Normalize(Y1)
[T, P, W, Q, U, B] = PLS(X1,Y1,2)
P = P.T
Q = Q.T
P
BPLS = la.pinv(P.T)@B@Q.T
BPLS
```
# Compare OLS and PLS when there is only one solution
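The comparison below relies on the fact that a square, full-rank `X` admits exactly one solution, which the pseudoinverse recovers; here is a quick self-contained check of that claim (the random matrices are toy data, not the simulation used below):

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.standard_normal((5, 5))   # square, almost surely full rank
Y = rng.standard_normal((5, 1))

B = np.linalg.pinv(X) @ Y         # OLS solution via the pseudoinverse
assert np.allclose(X @ B, Y)      # exact fit: the solution is unique
```

Since the fit is exact and unique, PLS with all 5 components has no choice but to land on the same coefficients, which is what the `np.allclose(B_O, B_P)` check below confirms.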
```
# OLS vs PLS
X_sim = np.random.randn(5, 5)
X_sim
Y_sim = np.random.randn(5,1)
Y_sim
X_sim = Normalize(X_sim)
Y_sim = Normalize(Y_sim)
from sklearn.linear_model import LinearRegression
OLS = LinearRegression()
B_O = OLS.fit(X_sim, Y_sim).coef_.T
B_O
[T, P, W, Q, U, B] = PLS(X_sim,Y_sim,5)
P = P.T
Q = Q.T
B_P = la.pinv(P.T)@B@Q.T
B_P
np.allclose(B_O, B_P)
pls = PLSRegression(n_components=5)
pls.fit(X_sim, Y_sim).coef_
```
# PLS Application & Comparison
```
#Import cars data
df = pd.read_excel("/Users/rachelxing/Desktop/STA663/cars_pls_regression.xls")
df.head()
X = df.iloc[:,:-3].to_numpy()
Y = df.iloc[:, -3:].to_numpy()
X.shape, Y.shape
#normalize X and Y
X = Normalize(X)
Y = Normalize(Y)
#PLS + leave one out (20 fold)
kf = KFold(n_splits=20, random_state=None, shuffle=False)
kf.get_n_splits(X)
y_predict_pls = []
y_test_pls = []
for train_index, test_index in kf.split(X):
X_train, X_test = X[train_index], X[test_index]
y_train, y_test = Y[train_index], Y[test_index]
[T, P, W, Q, U, B] = PLS(X_train,y_train,7)
P = P.T
Q = Q.T
BPLS = la.pinv(P.T)@B@Q.T
y_test_pls.append(y_test)
y_predict_pls.append(X_test@BPLS)
y_predict_pls = np.array(y_predict_pls).reshape(20,3)
y_test_pls = np.array(y_test_pls).reshape(20,3)
mean_squared_error(y_test_pls, y_predict_pls)
#OLS on cars data + leave one out
y_predict_ols = []
y_test_ols = []
for train_index, test_index in kf.split(X):
X_train, X_test = X[train_index], X[test_index]
y_train, y_test = Y[train_index], Y[test_index]
reg1 = LinearRegression().fit(X_train, y_train[:,0])
reg2 = LinearRegression().fit(X_train, y_train[:,1])
reg3 = LinearRegression().fit(X_train, y_train[:,2])
p1 = reg1.predict(X_test)
p2 = reg2.predict(X_test)
p3 = reg3.predict(X_test)
y_test_ols.append(y_test)
y_predict_ols.append([p1 ,p2, p3])
y_predict_ols = np.array(y_predict_ols).reshape(20,3)
y_test_ols = np.array(y_test_ols).reshape(20,3)
mean_squared_error(y_test_ols, y_predict_ols)
#Ridge Regression
#Select best parameter alpha
ridge = Ridge()
parameters = {'alpha' : [1e-10, 1e-8, 1e-4, 1e-3, 1e-2, 1, 5, 10, 20]}
ridge_reg = GridSearchCV(ridge, parameters, scoring = 'neg_mean_squared_error', cv = 20)
ridge_reg.fit(X, Y)
print(ridge_reg.best_params_)
print(ridge_reg.best_score_)
#Ridge Regression
y_predict_ridge = []
y_test_ridge = []
for train_index, test_index in kf.split(X):
X_train, X_test = X[train_index], X[test_index]
y_train, y_test = Y[train_index], Y[test_index]
reg = Ridge(alpha=5)
reg.fit(X_train, y_train)
y_test_ridge.append(y_test)
y_predict_ridge.append(reg.predict(X_test))
y_predict_ridge = np.array(y_predict_ridge).reshape(20,3)
y_test_ridge = np.array(y_test_ridge).reshape(20,3)
mean_squared_error(y_test_ridge, y_predict_ridge)
#Principal Component Regression
pca = PCA(n_components=7)
pca.fit(X.T)
print(pca.explained_variance_ratio_)
Z = pca.components_.T
X.shape, pca.components_.T.shape
#Regress on Principal components
y_predict_pcr = []
y_test_pcr = []
for train_index, test_index in kf.split(Z):
X_train, X_test = Z[train_index], Z[test_index]
y_train, y_test = Y[train_index], Y[test_index]
reg1 = LinearRegression().fit(X_train, y_train[:,0])
reg2 = LinearRegression().fit(X_train, y_train[:,1])
reg3 = LinearRegression().fit(X_train, y_train[:,2])
p1 = reg1.predict(X_test)
p2 = reg2.predict(X_test)
p3 = reg3.predict(X_test)
y_test_pcr.append(y_test)
y_predict_pcr.append([p1 ,p2, p3])
y_predict_pcr = np.array(y_predict_pcr).reshape(20,3)
y_test_pcr = np.array(y_test_pcr).reshape(20,3)
mean_squared_error(y_test_pcr, y_predict_pcr)
```
# Visualization
```
df_test =pd.DataFrame(Y, columns=['N_conscity', 'N_price', 'N_symboling'] )
df_test.head()
df_test[['PLS_conscity', 'PLS_price', 'PLS_symboling']] = pd.DataFrame(y_predict_pls)
df_test.head()
fig, axs = plt.subplots(1,3, figsize = (15, 5))
fig.suptitle('PLS Performance', fontsize=20)
axs[0].scatter(df_test["N_conscity"], df_test["PLS_conscity"] , c = 'black')
axs[0].plot([0, 1], [0, 1], transform=axs[0].transAxes, c = 'black', linestyle='dashed')
axs[0].set_xlabel('Conscity (test)')
axs[0].set_ylabel('Conscity (predict)')
axs[1].scatter(df_test["N_price"], df_test["PLS_price"] , c = 'black')
axs[1].plot([0, 1], [0, 1], transform=axs[1].transAxes, c = 'black', linestyle='dashed')
axs[1].set_xlabel('Price (test)')
axs[1].set_ylabel('Price (predict)')
axs[2].scatter(df_test["N_symboling"], df_test["PLS_symboling"] , c = 'black')
axs[2].plot([0, 1], [0, 1], transform=axs[2].transAxes, c = 'black', linestyle='dashed')
axs[2].set_xlabel('Symboling (test)')
axs[2].set_ylabel('Symboling (predict)')
df_test[['OLS_conscity', 'OLS_price', 'OLS_symboling']] = pd.DataFrame(y_predict_ols)
df_test.head()
fig, axs = plt.subplots(1,3, figsize = (15, 5))
fig.suptitle('OLS Performance', fontsize=20)
axs[0].scatter(df_test["N_conscity"], df_test["OLS_conscity"])
axs[0].plot([0, 1], [0, 1], transform=axs[0].transAxes, linestyle='dashed')
axs[0].set_xlabel('Conscity (test)')
axs[0].set_ylabel('Conscity (predict)')
axs[1].scatter(df_test["N_price"], df_test["OLS_price"] )
axs[1].plot([0, 1], [0, 1], transform=axs[1].transAxes, linestyle='dashed')
axs[1].set_xlabel('Price (test)')
axs[1].set_ylabel('Price (predict)')
axs[2].scatter(df_test["N_symboling"], df_test["OLS_symboling"] )
axs[2].plot([0, 1], [0, 1], transform=axs[2].transAxes, linestyle='dashed')
axs[2].set_xlabel('Symboling (test)')
axs[2].set_ylabel('Symboling (predict)')
df_test[['Ridge_conscity', 'Ridge_price', 'Ridge_symboling']] = pd.DataFrame(y_predict_ridge)
df_test.head()
fig, axs = plt.subplots(1,3, figsize = (15, 5))
fig.suptitle('Ridge Performance', fontsize=20)
axs[0].scatter(df_test["N_conscity"], df_test["Ridge_conscity"], c = 'orange' )
axs[0].plot([0, 1], [0, 1], transform=axs[0].transAxes, c = 'orange', linestyle='dashed')
axs[0].set_xlabel('Conscity (test)')
axs[0].set_ylabel('Conscity (predict)')
axs[1].scatter(df_test["N_price"], df_test["Ridge_price"], c = 'orange' )
axs[1].plot([0, 1], [0, 1], transform=axs[1].transAxes, c = 'orange', linestyle='dashed')
axs[1].set_xlabel('Price (test)')
axs[1].set_ylabel('Price (predict)')
axs[2].scatter(df_test["N_symboling"], df_test["Ridge_symboling"], c = 'orange' )
axs[2].plot([0, 1], [0, 1], transform=axs[2].transAxes, c = 'orange', linestyle='dashed')
axs[2].set_xlabel('Symboling (test)')
axs[2].set_ylabel('Symboling (predict)')
df_test[['PCR_conscity', 'PCR_price', 'PCR_symboling']] = pd.DataFrame(y_predict_pcr)
df_test.head()
fig, axs = plt.subplots(1,3, figsize = (15, 5))
fig.suptitle('PCR Performance', fontsize=20)
axs[0].scatter(df_test["N_conscity"], df_test["PCR_conscity"], c = 'navy' )
axs[0].plot([0, 1], [0, 1], transform=axs[0].transAxes, c = 'navy', linestyle='dashed')
axs[0].set_xlabel('Conscity (test)')
axs[0].set_ylabel('Conscity (predict)')
axs[1].scatter(df_test["N_price"], df_test["PCR_price"], c = 'navy' )
axs[1].plot([0, 1], [0, 1], transform=axs[1].transAxes, c = 'navy', linestyle='dashed')
axs[1].set_xlabel('Price (test)')
axs[1].set_ylabel('Price (predict)')
axs[2].scatter(df_test["N_symboling"], df_test["PCR_symboling"], c = 'navy' )
axs[2].plot([0, 1], [0, 1], transform=axs[2].transAxes, c = 'navy', linestyle='dashed')
axs[2].set_xlabel('Symboling (test)')
axs[2].set_ylabel('Symboling (predict)')
```
## 15.9.1 Loading the IMDb Movie Reviews Dataset (1 of 2)
* Contains **25,000 training samples** and **25,000 testing samples**, each **labeled** with its positive (1) or negative (0) sentiment
```
from tensorflow.keras.datasets import imdb
```
* **Over 88,000 unique words** in the dataset
* Can specify **number of unique words to import** when loading **training and testing data**
* We'll use top **10,000 most frequently occurring words**
* Due to **system memory limitations** and **training on a CPU** (intentionally)
* Most people don't have systems with Tensorflow-compatible **GPUs** or **TPUs**
* **More data** takes **longer to train**, but may produce **better models**
## 15.9.1 Loading the IMDb Movie Reviews Dataset (2 of 2)
* **`load_data`** **replaces** any words **outside the top 10,000** with a **placeholder** value (discussed shortly)
```
number_of_words = 10000
```
**NOTE:** Following cell was added to work around a **known issue with TensorFlow/Keras and NumPy**—this issue is already fixed in a forthcoming version. [See this cell's code on StackOverflow.](https://stackoverflow.com/questions/55890813/how-to-fix-object-arrays-cannot-be-loaded-when-allow-pickle-false-for-imdb-loa)
```
import numpy as np
# save np.load
np_load_old = np.load
# modify the default parameters of np.load
np.load = lambda *a,**k: np_load_old(*a, allow_pickle=True, **k)
(X_train, y_train), (X_test, y_test) = imdb.load_data(
num_words=number_of_words)
# This cell completes the workaround mentioned above
# restore np.load for future normal usage
np.load = np_load_old
```
<hr style="height:2px; border:none; color:black; background-color:black;">
## 15.9.2 Data Exploration (1 of 2)
* Check sample and target dimensions
* **Note that `X_train` and `X_test` appear to be one-dimensional**
* They're actually **NumPy arrays of objects** (lists of integers)
```
X_train.shape
y_train.shape
X_test.shape
y_test.shape
```
<hr style="height:2px; border:none; color:black; background-color:black;">
## 15.9.2 Data Exploration (2 of 2)
* The **arrays `y_train` and `y_test`** are **one-dimensional** arrays containing **1s and 0s**, indicating whether each review is **positive** or **negative**
* `X_train` and `X_test` are **lists** of integers, each representing one review’s contents
* **Keras models require numeric data** — **IMDb dataset is preprocessed for you**
```
%pprint # toggle pretty printing, so elements don't display vertically
X_train[123]
```
<hr style="height:2px; border:none; color:black; background-color:black;">
### Movie Review Encodings (1 of 2)
* Because the **movie reviews** are **numerically encoded**, to view their original text, you need to know the word to which each number corresponds
* **Keras’s IMDb dataset** provides a **dictionary** that **maps the words to their indexes**
* **Each word’s value** is its **frequency ranking** among all words in the dataset
* **Ranking 1** is the **most frequently occurring word**
* **Ranking 2** is the **second most frequently occurring word**
* ...
<hr style="height:2px; border:none; color:black; background-color:black;">
### Movie Review Encodings (2 of 2)
* Ranking values are **offset by 3** in the training/testing samples
* **Most frequently occurring word has the value 4** wherever it appears in a review
* **0, 1 and 2** in each encoded review are **reserved**:
* **padding (0)**
* All training/testing samples **must have same dimensions**
* Some reviews may need to be padded with **0** and some shortened
* **start of a sequence (1)** — a **token** that Keras uses internally for learning purposes
* **unknown word (2)** — typically a word that was **not loaded**
* **`load_data`** uses **2** for words with **frequency rankings greater than `num_words`**
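A toy example of the offset (the dictionary here is made up, not the real IMDb index):

```python
index_to_word = {1: 'the', 2: 'and', 3: 'a'}   # hypothetical tiny rank->word index

encoded = [1, 2, 4, 5, 6]   # 0-2 are reserved, so real ranks start at value 4
decoded = [index_to_word.get(i - 3, '?') for i in encoded]
# → ['?', '?', 'the', 'and', 'a']
```

The first two values decode to `'?'` because they fall in the reserved range; every real word's value is its frequency rank plus 3.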
<hr style="height:2px; border:none; color:black; background-color:black;">
### Decoding a Movie Review (1 of 3)
* Must account for offset when **decoding reviews**
* Get the **word-to-index dictionary**
```
word_to_index = imdb.get_word_index()
```
* The word `'great'` might appear in a positive movie review:
```
word_to_index['great'] # 84th most frequent word
```
<hr style="height:2px; border:none; color:black; background-color:black;">
### Decoding a Movie Review (2 of 3)
* **Reverse `word_to_index` mapping**, so we can **look up words** by **frequency rating**
```
index_to_word = {index: word for (word, index) in word_to_index.items()}
```
* **Top 50 words**—**most frequent word** has the key **1** in the **new dictionary**
```
[index_to_word[i] for i in range(1, 51)]
```
<hr style="height:2px; border:none; color:black; background-color:black;">
### Decoding a Movie Review (3 of 3)
* Now, we can **decode a review**
* **`i - 3`** accounts for the **frequency ratings offsets** in the encoded reviews
* For `i` values `0`–`2`, `get` returns `'?'`; otherwise, `get` returns the word with the **key `i - 3`** in the **`index_to_word` dictionary**
```
' '.join([index_to_word.get(i - 3, '?') for i in X_train[123]])
```
* Can see from **`y_train[123]`** that this **review** is **classified as positive**
```
y_train[123]
```
<hr style="height:2px; border:none; color:black; background-color:black;">
## 15.9.3 Data Preparation (1 of 2)
* Number of words per review varies
* Keras **requires all samples to have the same dimensions**
* **Prepare data** for learning
* Restrict every review to the **same number of words**
* **Pad** some with **0s**, **truncate** others
* **`pad_sequences` function** reshapes samples and **returns a 2D array**
```
words_per_review = 200
from tensorflow.keras.preprocessing.sequence import pad_sequences
X_train = pad_sequences(X_train, maxlen=words_per_review)
X_train.shape
```
## 15.9.3 Data Preparation (2 of 2)
* Must also **reshape `X_test`** for evaluating the model later
```
X_test = pad_sequences(X_test, maxlen=words_per_review)
X_test.shape
```
<hr style="height:2px; border:none; color:black; background-color:black;">
### Splitting the Test Data into Validation and Test Data
* Split the **25,000 test samples** into **20,000 test samples** and **5,000 validation samples**
* We'll pass validation samples to the model’s `fit` method via **`validation_data`** argument
* Use **Scikit-learn’s `train_test_split` function**
```
from sklearn.model_selection import train_test_split
X_test, X_val, y_test, y_val = train_test_split(
X_test, y_test, random_state=11, test_size=0.20)
```
* Confirm the split by checking `X_test`’s and `X_val`’s shapes:
```
X_test.shape
X_val.shape
```
<hr style="height:2px; border:none; color:black; background-color:black;">
## 15.9.4 Creating the Neural Network
* Begin with a **`Sequential` model** and import the other layers
```
from tensorflow.keras.models import Sequential
rnn = Sequential()
from tensorflow.keras.layers import Dense, LSTM, Embedding
```
<hr style="height:2px; border:none; color:black; background-color:black;">
### Adding an Embedding Layer (1 of 2)
* RNNs that process **text sequences** typically begin with an **embedding layer**
* Encodes each word in a **dense-vector representation**
* These capture the **word’s context**—how a given word **relates to words around it**
* Help **RNN learn word relationships**
* **Predefined word embeddings**, such as **Word2Vec** and **GloVe**
* Can **load** into neural networks to **save training time**
* Sometimes used to **add basic word relationships** to a model when **smaller amounts of training data** are available
* **Improve model accuracy** by **building upon previously learned word relationships**, rather than trying to learn those relationships with insufficient data
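* As a rough sketch, pretrained vectors could be copied into an embedding matrix like this (the two-dimensional vectors and ranks below are invented for illustration; a real Word2Vec or GloVe file would supply them):
```
import numpy as np

pretrained = {'great': np.array([0.1, 0.2]),   # stand-in for a real GloVe file
              'movie': np.array([0.3, 0.4])}
number_of_words, embedding_dim = 10, 2
word_to_index = {'great': 1, 'movie': 2}  # invented frequency ranks

embedding_matrix = np.zeros((number_of_words, embedding_dim))
for word, vec in pretrained.items():
    embedding_matrix[word_to_index[word] + 3] = vec  # + 3 for reserved indices 0-2

embedding_matrix[4]  # row for 'great'
```
* In Keras, such a matrix would then be passed to the `Embedding` layer via `weights=[embedding_matrix]`, typically with `trainable=False`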
<hr style="height:2px; border:none; color:black; background-color:black;">
### Adding an `Embedding` Layer (2 of 2)
```
rnn.add(Embedding(input_dim=number_of_words, output_dim=128,
input_length=words_per_review))
```
* **`input_dim=number_of_words`**—Number of **unique words**
* **`output_dim=128`**—Size of each word embedding
* If you [load pre-existing embeddings](https://blog.keras.io/using-pre-trained-word-embeddings-in-a-keras-model.html) like **Word2Vec** and **GloVe**, you must set this to **match the size of the word embeddings you load**
* **`input_length=words_per_review`**—Number of words in each input sample
<hr style="height:2px; border:none; color:black; background-color:black;">
### Adding an LSTM Layer
```
rnn.add(LSTM(units=128, dropout=0.2, recurrent_dropout=0.2))
```
* **`units`**—**number of neurons** in the layer
* **More neurons** means **network can remember more**
* [**Guideline**](https://towardsdatascience.com/choosing-the-right-hyperparameters-for-a-simple-lstm-using-keras-f8e9ed76f046): Value between **length of the sequences** (200 in this example) and **number of classes to predict** (2 in this example)
* **`dropout`**—**percentage of neurons to randomly disable** when processing the layer’s input and output
* Like **pooling layers** in a **convnet**, **dropout** is a proven technique that **reduces overfitting**
* Gal, Yarin, and Zoubin Ghahramani. “A Theoretically Grounded Application of Dropout in Recurrent Neural Networks.” October 05, 2016. https://arxiv.org/abs/1512.05287
* Srivastava, Nitish, Geoffrey Hinton, Alex Krizhevsky, Ilya Sutskever, and Ruslan Salakhutdinov. “Dropout: A Simple Way to Prevent Neural Networks from Overfitting.” _Journal of Machine Learning Research_ 15 (June 14, 2014): 1929-1958. http://jmlr.org/papers/volume15/srivastava14a/srivastava14a.pdf
* Keras also provides a **`Dropout`** layer that you can add to your models
* **`recurrent_dropout`**—**percentage of neurons to randomly disable** when the **layer’s output** is **fed back into the layer** again to allow the network to **learn from what it has seen previously**
* **Mechanics of how the LSTM layer performs its task are beyond scope**.
* Chollet says: “you don’t need to understand anything about the specific architecture of an LSTM cell; **as a human, it shouldn’t be your job to understand it**. Just keep in mind what the LSTM cell is meant to do: allow past information to be reinjected at a later time.”
* Chollet, François. _Deep Learning with Python_. p. 204. Shelter Island, NY: Manning Publications, 2018.
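* The effect of dropout can be sketched without Keras; a minimal numpy illustration of "inverted" dropout (an illustration only; Keras's internals differ in detail):
```
import numpy as np

rng = np.random.default_rng(seed=11)

def dropout(activations, rate=0.2):
    keep_mask = rng.random(activations.shape) >= rate  # keep ~80% of neurons
    return activations * keep_mask / (1.0 - rate)      # rescale to preserve the mean

dropped = dropout(np.ones(10_000), rate=0.2)
dropped.mean()          # close to 1.0
(dropped == 0).mean()   # roughly 0.2 of the activations disabled
```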
<hr style="height:2px; border:none; color:black; background-color:black;">
### Adding a Dense Output Layer
* Reduce the **LSTM layer’s output** to **one result** indicating whether a review is **positive** or **negative**, thus the value **`1` for the `units` argument**
* The **`'sigmoid'` activation function** is preferred for **binary classification**
* Chollet, François. _Deep Learning with Python_. p.114. Shelter Island, NY: Manning Publications, 2018.
* Reduces arbitrary values into the range **0.0–1.0**, producing a probability
```
rnn.add(Dense(units=1, activation='sigmoid'))
```
<hr style="height:2px; border:none; color:black; background-color:black;">
### Compiling the Model and Displaying the Summary
* **Two possible outputs**, so we use the **`binary_crossentropy` loss function**:
```
rnn.compile(optimizer='adam',
loss='binary_crossentropy',
metrics=['accuracy'])
```
* **Fewer layers** than our **convnet**, but nearly **three times as many parameters** (the network’s **weights**)
* **More parameters means more training time**
* The large number of parameters primarily comes from the **number of words in the vocabulary** (we loaded 10,000) **times the number of neurons in the `Embedding` layer’s output (128)**
```
rnn.summary()
```
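* The parameter counts that `summary` reports can be checked by hand. A sketch of the arithmetic for this architecture, using the standard Keras LSTM parameter formula:
```
vocabulary_size, embedding_dim, lstm_units = 10_000, 128, 128

embedding_params = vocabulary_size * embedding_dim  # 1,280,000
# LSTM: 4 gates, each with input weights, recurrent weights and a bias
lstm_params = 4 * ((embedding_dim + lstm_units + 1) * lstm_units)  # 131,584
dense_params = lstm_units * 1 + 1  # 129

embedding_params + lstm_params + dense_params  # 1,411,713
```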
<hr style="height:2px; border:none; color:black; background-color:black;">
## 15.9.5 Training and Evaluating the Model (1 of 2)
* For each **epoch** the **RNN model** takes **significantly longer to train** than our **convnet**
* Due to the **larger numbers of parameters** (weights) our **RNN model** needs to learn
```
rnn.fit(X_train, y_train, epochs=10, batch_size=32,
validation_data=(X_val, y_val))
```
<!--
```
Train on 25000 samples, validate on 20000 samples
WARNING:tensorflow:From /Users/pauldeitel/anaconda3/envs/tf_env/lib/python3.6/site-packages/tensorflow/python/ops/math_ops.py:3066: to_int32 (from tensorflow.python.ops.math_ops) is deprecated and will be removed in a future version.
Instructions for updating:
Use tf.cast instead.
Epoch 1/10
25000/25000 [==============================] - 297s 12ms/sample - loss: 0.4827 - acc: 0.7673 - val_loss: 0.3925 - val_acc: 0.8324
Epoch 2/10
25000/25000 [==============================] - 291s 12ms/sample - loss: 0.3327 - acc: 0.8618 - val_loss: 0.3614 - val_acc: 0.8461
Epoch 3/10
25000/25000 [==============================] - 272s 11ms/sample - loss: 0.2662 - acc: 0.8937 - val_loss: 0.3503 - val_acc: 0.8492
Epoch 4/10
25000/25000 [==============================] - 272s 11ms/sample - loss: 0.2066 - acc: 0.9198 - val_loss: 0.3695 - val_acc: 0.8623
Epoch 5/10
25000/25000 [==============================] - 271s 11ms/sample - loss: 0.1612 - acc: 0.9403 - val_loss: 0.3802 - val_acc: 0.8587
Epoch 6/10
25000/25000 [==============================] - 291s 12ms/sample - loss: 0.1218 - acc: 0.9556 - val_loss: 0.4103 - val_acc: 0.8421
Epoch 7/10
25000/25000 [==============================] - 295s 12ms/sample - loss: 0.1023 - acc: 0.9634 - val_loss: 0.4634 - val_acc: 0.8582
Epoch 8/10
25000/25000 [==============================] - 273s 11ms/sample - loss: 0.0789 - acc: 0.9732 - val_loss: 0.5103 - val_acc: 0.8555
Epoch 9/10
25000/25000 [==============================] - 273s 11ms/sample - loss: 0.0676 - acc: 0.9775 - val_loss: 0.5071 - val_acc: 0.8526
Epoch 10/10
25000/25000 [==============================] - 273s 11ms/sample - loss: 0.0663 - acc: 0.9787 - val_loss: 0.5156 - val_acc: 0.8536
<tensorflow.python.keras.callbacks.History object at 0x141462e48>
```
-->
## 15.9.5 Training and Evaluating the Model (2 of 2)
* Function **`evaluate`** returns the **loss and accuracy values**
```
results = rnn.evaluate(X_test, y_test)
results
```
* **Accuracy seems low** compared to our **convnet**, but this is a **much more difficult problem**
* Many **IMDb sentiment-analysis binary-classification studies** show results **in the high 80s**
* We did **reasonably well** with our **small recurrent neural network** of only **three layers**
* We have not tried to tune our model
<hr style="height:2px; border:none; color:black; background-color:black;">
```
##########################################################################
# (C) Copyright 2019 by Deitel & Associates, Inc. and #
# Pearson Education, Inc. All Rights Reserved. #
# #
# DISCLAIMER: The authors and publisher of this book have used their #
# best efforts in preparing the book. These efforts include the #
# development, research, and testing of the theories and programs #
# to determine their effectiveness. The authors and publisher make #
# no warranty of any kind, expressed or implied, with regard to these #
# programs or to the documentation contained in these books. The authors #
# and publisher shall not be liable in any event for incidental or #
# consequential damages in connection with, or arising out of, the #
# furnishing, performance, or use of these programs. #
##########################################################################
```
```
from google.colab import drive
drive.mount('/content/drive', force_remount=True)
cd 'drive/My Drive/Colab Notebooks/machine_translation'
from dataset import MTDataset
from model import Encoder, Decoder
from language import Language
from utils import preprocess
from train import train
from eval import validate
from translate import translate
sentences_inp_train, sentences_trg_train = preprocess('datasets/train/train.en', 'datasets/train/train.vi', max_len=20)
sentences_inp_val, sentences_trg_val = preprocess('datasets/dev/tst2012.en', 'datasets/dev/tst2012.vi', max_len=20)
train_inp = Language(sentences_inp_train)
train_trg = Language(sentences_trg_train)
val_inp = Language(sentences_inp_val, train=False, word2id=train_inp.word2id, id2word=train_inp.id2word)
val_trg = Language(sentences_trg_val, train=False, word2id=train_trg.word2id, id2word=train_trg.id2word)
train_set = MTDataset(train_inp.wordvec, train_trg.wordvec)
val_set = MTDataset(val_inp.wordvec, val_trg.wordvec)
from torch.utils.data import DataLoader
import torch
import torch.nn as nn
from torch.optim.lr_scheduler import StepLR
train_loader = DataLoader(train_set, batch_size=64, shuffle=True)
val_loader = DataLoader(val_set, batch_size=64)
Tx, Ty = train_inp.max_len, train_trg.max_len
vocab_size_inp, vocab_size_trg = train_inp.vocab_size, train_trg.vocab_size
embedding_dim = 256
hidden_size = 1024
if torch.cuda.is_available():
    device = 'cuda'
else:
    device = 'cpu'
encoder = Encoder(vocab_size_inp, embedding_dim, hidden_size).to(device=device)
decoder = Decoder(hidden_size, vocab_size_trg, embedding_dim).to(device=device)
optimizer = torch.optim.Adam(params=list(encoder.parameters()) + list(decoder.parameters()))
criterion = nn.CrossEntropyLoss()
scheduler = StepLR(optimizer, step_size=2, gamma=0.5)
train(encoder, decoder, train_loader, val_loader, optimizer, criterion, train_trg.id2word, scheduler, 10, 200, device)
torch.save(encoder.state_dict(), 'encoder.pth')
torch.save(decoder.state_dict(), 'decoder.pth')
import string
exclude = list(string.punctuation) + list(string.digits)
test_sen = 'hello i am a student'
test_sen = ''.join([char for char in test_sen if char not in exclude]).strip().lower()
test_sen = '<START> ' + test_sen + ' <END>'
length = len(test_sen.split())
diff = train_inp.max_len - length
test_sen = test_sen + ''.join([' <PAD>']*diff)
test_vec = [train_inp.word2id[s] for s in test_sen.split()]
test_tensor = torch.Tensor(test_vec).to(device=device, dtype=torch.long).unsqueeze(0)
with torch.no_grad():
    encoder.eval()
    decoder.eval()
    enc_out, enc_hidden_backward, enc_hidden_forward = encoder(test_tensor)
    dec_hidden = enc_hidden_backward
    dec_input = torch.Tensor([train_trg.word2id['<START>']]).to(device=device, dtype=torch.long)
    for t in range(1, Ty):
        out, dec_hidden = decoder(dec_input, dec_hidden, enc_out)
        dec_input = torch.max(out, dim=-1)[1].squeeze(1)
        next_id = dec_input.squeeze().clone().cpu().numpy()
        next_word = train_trg.id2word[next_id]
        if next_word == '<END>':
            break
        print(next_word)
translate('i am a student', train_inp.word2id, train_trg.word2id, train_trg.id2word, encoder, decoder, 20, device)
decoder.load_state_dict(torch.load('decoder.pth'))
train_inp.id2word[4112]
train_trg.sentences[0]
from nltk.translate.bleu_score import corpus_bleu, sentence_bleu, SmoothingFunction
ref, hyp, bleu = validate()
hyp[0]
ref1 = 'the cat is on the mat'.split()
ref2 = 'there is a cat on the mat'.split()
hyp = 'the cat the cat on the mat'.split()
corpus_bleu([[ref1, ref2]], [hyp])
ref3 = 'i am student ngo anh tu'.split()
ref4 = 'my name is student ngo anh tu'.split()
hyp2 = 'there is a student ngo anh tu'.split()
corpus_bleu([[ref1, ref2], [ref3, ref4]], [hyp, hyp2])
sentence_bleu([ref1, ref2], hyp)
sentence_bleu([ref3, ref4], hyp2)
validate()
```
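The manual preprocessing in the cell above (strip punctuation and digits, add `<START>`/`<END>` tokens, pad with `<PAD>`, map words to ids) can be sketched in isolation with a toy vocabulary. The ids below are invented; in the notebook they come from `train_inp.word2id`:
```
import string

word2id = {'<PAD>': 0, '<START>': 1, '<END>': 2, 'hello': 3,
           'i': 4, 'am': 5, 'a': 6, 'student': 7}  # toy vocabulary
exclude = set(string.punctuation) | set(string.digits)
max_len = 10

def to_ids(sentence):
    cleaned = ''.join(ch for ch in sentence if ch not in exclude).strip().lower()
    tokens = ['<START>'] + cleaned.split() + ['<END>']
    tokens += ['<PAD>'] * (max_len - len(tokens))  # pad to a fixed length
    return [word2id[t] for t in tokens]

to_ids('Hello, I am a student!')  # [1, 3, 4, 5, 6, 7, 2, 0, 0, 0]
```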
<a href="https://colab.research.google.com/github/gathoni/hypothesis_testing/blob/master/Hypthesis_Testing_Redo.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a>
# **Autolib Dataset**
## **1.1 INTRODUCTION**
### **1.1.1 Defining the question**
Investigating the usage of electric cars (blue cars) in Paris during weekdays.
We test a hypothesis: whether there is a difference in the mean number of blue cars taken in two different postal codes, selected randomly, on weekdays.
### **1.1.2 Metric of Success**
Our metric for success will be based on the analysis of the number of blue cars taken at different stations.
We will select two postal code areas using simple random sampling and then compare their usage.
### **1.1.3 Understanding the context**
In this project we will seek to understand electric car usage by solving for another research question.
We will work as a Data Scientist for the Autolib electric car-sharing service company to investigate a claim about the blue cars from the provided Autolib dataset.
To do this, we need to identify some areas and periods of interest via sampling, stating the reason for the choice of method, then perform hypothesis testing with regard to the claim that we will have made.
An example of a claim to test would be "Is the number of Bluecars taken in area X different than in area Y? Is it greater in area X than in area Z?" The selected periods of interest should be either weekdays or weekends, but not a mix of both. We can also consider postal codes as some of the areas of interest.
### **1.1.4 Experimental Design**
Exploratory Data Analysis
Data Cleaning
Univariate, Bivariate Analysis
Visualizations
Testing a Hypothesis
Challenge our solution by providing insights on how we can make improvements.
### **1.1.5 Appropriateness of Data**
The dataset and glossary to use for this project can be found here [http://bit.ly/DSCoreAutolibDataset].
The provided dataset is a daily aggregation, by date and postal code, of the number of events on the Autolib network (car-sharing and recharging)
## **1.2 EXPLORATORY DATA ANALYSIS**
### **1.2.1 Importing Libraries**
```
# Import Libraries
import pandas as pd
import numpy as np
import seaborn as sns
import matplotlib.pyplot as plt
%matplotlib inline
import pandas_profiling as pp
from scipy import stats
```
### **1.2.2 Loading the Dataset**
```
# call our dataset autolib
autolib = pd.read_csv("http://bit.ly/DSCoreAutolibDataset")
```
### **1.2.3 Viewing the dataset**
```
# Viewing the first 5 rows
autolib.head()
# Viewing the last 5 rows
autolib.tail()
# Checking the dataset shape i.e. number of rows and columns
print('The Autolib dataset has ' + str(autolib.shape[0]) +
' rows and ' + str(autolib.shape[1]) + ' columns' )
# Check the data types of each column
autolib.dtypes
# Checking the dataset information
autolib.info()
# Checking number of unique items in each column
autolib.nunique()
# Summary description of our dataset
autolib.describe()
# Using Pandas Profiling to get a detailed summary report of our dataset
pp.ProfileReport(autolib)
```
## **1.3 DATA CLEANING**
### **1.3.1 Fixing column names**
```
# Removing spaces from the column names and converting them to lowercase
autolib.columns = autolib.columns.str.lower().str.replace(" ", "")
# confirming the columns names
autolib.columns
# Dropping columns we do not need for this analysis
# We are only dealing with Blue cars only for this analysis.
autolib.drop(['utilib_taken_sum', 'utilib_returned_sum', 'utilib_14_taken_sum',
'utilib_14_returned_sum'], axis = 1, inplace = True)
# confirming that we only have the relevant columns
autolib.head()
```
### **1.3.2 Missing values**
```
# Missing values
autolib.isnull().sum()
```
We have no missing values in our dataset
### **1.3.3 Anomalies**
```
# Checking for Anomalies
# duplicates
autolib_duplicate = autolib[autolib.duplicated()]
autolib_duplicate.shape
```
There are no duplicated rows in the dataset
## **1.4 UNIVARIATE ANALYSIS**
```
#Description of all the numerical data columns
autolib.describe()
# mean,std,min,max and the IQR of Blue cars taken and returned
auto= autolib[['postalcode','bluecars_taken_sum', 'bluecars_returned_sum','day_type']].describe()
auto
# Variance, Kurtosis and Skewness
print('Variance, Kurtosis and Skewness for Blue cars taken')
print("The Variance: ",autolib.bluecars_taken_sum.var())
print("The Kurtosis: ",autolib.bluecars_taken_sum.kurt())
print("The Skewness: ",autolib.bluecars_taken_sum.skew())
print('Variance, Kurtosis and Skewness for Blue cars returned')
print("The Variance: ",autolib.bluecars_returned_sum.var())
print("The Kurtosis: ",autolib.bluecars_returned_sum.kurt())
print("The Skewness: ",autolib.bluecars_returned_sum.skew())
```
### **1.4.1 Visualizations**
#### **1.4.1.1 Boxplots**
```
# Boxplots
a = sns.boxplot(autolib['bluecars_taken_sum'],showmeans = True)
b = sns.boxplot(autolib['bluecars_returned_sum'],showmeans = True)
```
#### **1.4.1.1 Histogram**
```
#Plot histogram showing distribution of the BlueCars taken column
sns.set(style='ticks', color_codes=True)
bt_hist = sns.FacetGrid(autolib)
bt_hist.map(plt.hist, 'bluecars_taken_sum', bins=20)
#Plot histogram showing distribution of the BlueCars returned column
sns.set(style='ticks', color_codes=True)
bt_hist = sns.FacetGrid(autolib)
bt_hist.map(plt.hist, 'bluecars_returned_sum', bins=20)
```
## **1.5 BIVARIATE ANALYSIS**
```
sns.pairplot(autolib,hue = 'day_type')
# Using Matplotlib: Plotting a scatterplot to compare the two numerical variables
plt.figure(dpi = 100)
plt.scatter(autolib['bluecars_taken_sum'], autolib['bluecars_returned_sum'], color = 'purple')
plt.title('A scatter plot of Bluecars returned vs Bluecars taken', color = 'black')
plt.xlabel('bluecars_taken_sum')
plt.ylabel('bluecars_returned_sum')
plt.show()
```
There is strong positive correlation between Bluecars returned vs taken.
As the blue cars taken increases, the bluecar returned also increases.
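A quick way to quantify this relationship is the Pearson correlation coefficient. The counts below are invented for illustration, not taken from the Autolib data:
```
import numpy as np

taken = np.array([10, 25, 40, 55, 80])      # made-up blue cars taken
returned = np.array([12, 24, 41, 53, 79])   # made-up blue cars returned
r = np.corrcoef(taken, returned)[0, 1]
r  # close to 1, indicating a strong positive correlation
```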
## **1.7 MULTIVARIATE ANALYSIS**
Here, a model will try to predict the postal code given 'bluecars_taken_sum', 'bluecars_returned_sum' and 'day_type'.
```
cols = ['postalcode', 'bluecars_taken_sum', 'bluecars_returned_sum', 'day_type']
df = pd.DataFrame(autolib[cols])
df.head()
# label encoding
from sklearn.preprocessing import LabelEncoder
label_encoder= LabelEncoder()
df['postalcode']=label_encoder.fit_transform(df['postalcode'])
df['bluecars_taken_sum']=label_encoder.fit_transform(df['bluecars_taken_sum'])
df['bluecars_returned_sum']=label_encoder.fit_transform(df['bluecars_returned_sum'])
df.head()
#Separating features and labels
X = df.drop('postalcode', axis=1)
y = df['postalcode']
#Split the data into a training set and testing set.
from sklearn.model_selection import train_test_split
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=0)
```
## **1.6 HYPOTHESIS TESTING**
### **Hypothesis Testing**
We would like to test and see whether there is a day on the weekend where more blue cars are taken.
**Null Hypothesis**
**Ho:** The number of blue cars taken on *Saturday* is greater than the number taken on Sunday
**Alternative Hypothesis**
**Ha:** The number of blue cars taken on Saturday is not greater than the number taken on Sunday
Our level of significance shall be 0.05.
This allows a 5% margin of error, meaning there is a 5% risk that we will reject the null hypothesis when it is true.
### **Sampling**
Separate data into weekend entries
```
weekend=autolib[(autolib['day_type']=='weekend')]
weekend
# Simple Random Sampling
weekend_sample = weekend.sample(n=10, replace=False)
weekend_sample
# Flag each sampled row by weekend day (5 = Saturday, 6 = Sunday)
weekend_sample["day_5"] = weekend_sample["dayofweek"] == 5
weekend_sample["day_6"] = weekend_sample["dayofweek"] == 6
weekend_sample
# Find sum of the blue cars taken for the different days
df2 = weekend_sample.groupby(weekend_sample["dayofweek"]).bluecars_taken_sum.sum()
df2
# Sum of blue cars returned
df2 = weekend_sample.groupby(weekend_sample["dayofweek"]).bluecars_returned_sum.sum()
df2
# Mean of blue cars taken
df2 = weekend_sample.groupby(weekend_sample["dayofweek"]).bluecars_taken_sum.mean()
df2
# Mean of blue cars returned
df2 = weekend_sample.groupby(weekend_sample["dayofweek"]).bluecars_returned_sum.mean()
df2
# Std deviation of blue cars taken
df2 = weekend_sample.groupby(weekend_sample["dayofweek"]).bluecars_taken_sum.std()
df2
# Std deviation of blue cars returned
df2 = weekend_sample.groupby(weekend_sample["dayofweek"]).bluecars_returned_sum.std()
df2
```
### **Test Statistics**
The sample we are working with has fewer than 30 observations, so a t-test will be used.
```
# Saturday Blue cars taken
x = (107.2 - 110)/89.510893
# Saturday blue cars returned
t = (106.6-110)/88.455073
# Sunday blue cars taken
y = (172.2 - 175)/ 244.060853
# Sunday blue cars returned
h = (173.0 - 175)/ 247.313566
```
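For comparison, a two-sample (Welch) t statistic can also be computed directly from its formula. This is only a sketch on made-up counts, not the sampled Autolib data above:
```
import math
import statistics

saturday = [90, 120, 85, 130, 111]   # made-up counts of blue cars taken
sunday = [150, 180, 160, 200, 170]

def welch_t(a, b):
    # t = (mean_a - mean_b) / sqrt(var_a/n_a + var_b/n_b)
    var_a, var_b = statistics.variance(a), statistics.variance(b)
    return (statistics.mean(a) - statistics.mean(b)) / math.sqrt(var_a / len(a) + var_b / len(b))

t_stat = welch_t(saturday, sunday)
t_stat  # negative, since the Sunday mean is larger in this toy sample
```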
### **P Value**
```
#Blue cars taken
from scipy import stats
from scipy.stats import norm
prob = stats.norm.cdf(x)
prob
prob = stats.norm.cdf(t)
prob
```
The computed probabilities are approximately 0.49, which is greater than the 0.05 level of significance, so on this basis we fail to reject the null hypothesis.
```
## P value
prob = stats.norm.cdf(y)
prob
prob = stats.norm.cdf(h)
prob
```
The computed probabilities are approximately 0.49, which is greater than the 0.05 level of significance, so on this basis we fail to reject the null hypothesis.
### **CONCLUSION**
At the 0.05 significance level, the computed probabilities do not support rejecting the null hypothesis. The sample means nonetheless suggest that more blue cars are taken on Sunday than on Saturday.
### **RECOMMENDATION**
The company should make blue cars readily available for consumers on Sundays. This could increase the company's profit margin.
```
```
```
import warnings
import pprint
import skrebate
import imblearn
from imblearn import under_sampling, over_sampling, combine
from imblearn.pipeline import Pipeline as imbPipeline
from sklearn import (preprocessing, svm, linear_model, ensemble, naive_bayes,
tree, neighbors, decomposition, kernel_approximation, cluster)
from sklearn.pipeline import Pipeline
from sklearn.base import clone
from sklearn.compose import TransformedTargetRegressor
from sklearn.model_selection import (KFold, GroupKFold, StratifiedKFold,
LeaveOneGroupOut, cross_validate,
cross_val_predict, learning_curve,
GridSearchCV)
from sklearn.feature_selection import SelectKBest, f_regression, SelectFromModel, VarianceThreshold, f_classif
from sklearn.metrics import (r2_score, auc, roc_auc_score, balanced_accuracy_score,
average_precision_score, confusion_matrix, roc_curve,
precision_recall_curve)
from sklearn.metrics.scorer import roc_auc_scorer
from sklearn.preprocessing import QuantileTransformer, quantile_transform, StandardScaler, MinMaxScaler
from sklearn.utils.class_weight import compute_class_weight, compute_sample_weight
from sklearn.utils.validation import check_memory
from xgboost import XGBRegressor, XGBClassifier
from sklearn.ensemble import RandomForestClassifier
warnings.simplefilter('ignore')
import os
import sys
import numpy as np
import pandas as pd
import re
import plotly.plotly as py
import plotly.graph_objs as go
from plotly import __version__
from plotly.offline import download_plotlyjs, init_notebook_mode, plot, iplot
init_notebook_mode(connected=True)
```
## Results
```
work_dir = './drug_respond/results/smmart_proten_rna_tissue10/'
sub1 = 'Hyperparameter Search on collection 18 _ randomforest/'
sub2 = 'Hyperparameter Search on collection 18 _xgboost_2/'
sub3 = 'Hyperparameter Search on collection 19 _iraps/'
sub4 = 'Hyperparameter Search on collection 19 _xgbregressor'
def concate_best_result(folder, file_name, scorer, classifier, results):
    path = os.path.join(folder, file_name)
    res = pd.read_csv(path, sep='\t')
    res_sort = res.sort_values(['mean_test_' + scorer, 'std_test_' + scorer], ascending=[False, True])
    res_best = res_sort[['mean_test_' + scorer, 'std_test_' + scorer, 'params']].head(1).reset_index(drop=True)
    res_best.insert(loc=0, column='dataset', value=file_name[:-11])
    res_best.insert(loc=0, column='classifier', value=classifier)
    if results is None:
        results = res_best
    else:
        results = results.append(res_best, ignore_index=True)
    return results
# best AP scores
files1 = os.listdir(work_dir+sub1)
files2 = os.listdir(work_dir+sub2)
files3 = os.listdir(work_dir+sub3)
files4 = os.listdir(work_dir+sub4)
results = None
scorer = 'binarize_average_precision_scorer'
for fl in files1:
results = concate_best_result(work_dir+sub1, fl, scorer, 'RandomForestClassifier', results)
for fl in files2:
results = concate_best_result(work_dir+sub2, fl, scorer, 'XGBClassifier', results)
for fl in files3:
results = concate_best_result(work_dir+sub3, fl, scorer, 'IRAPSClassifier', results)
for fl in files4:
results = concate_best_result(work_dir+sub4, fl, scorer, 'XGBRegressor', results)
results = results.sort_values(['classifier', 'dataset'])
results
# best ROC-AUC scores
files1 = os.listdir(work_dir+sub1)
files2 = os.listdir(work_dir+sub2)
files3 = os.listdir(work_dir+sub3)
files4 = os.listdir(work_dir+sub4)
results_auc = None
scorer = 'binarize_auc_scorer'
for fl in files1:
results_auc = concate_best_result(work_dir+sub1, fl, scorer, 'RandomForestClassifier', results_auc)
for fl in files2:
results_auc = concate_best_result(work_dir+sub2, fl, scorer, 'XGBClassifier', results_auc)
for fl in files3:
results_auc = concate_best_result(work_dir+sub3, fl, scorer, 'IRAPSClassifier', results_auc)
for fl in files4:
results_auc = concate_best_result(work_dir+sub4, fl, scorer, 'XGBRegressor', results_auc)
results_auc = results_auc.sort_values(['classifier', 'dataset'])
results_auc
data1 = go.Bar(
x = results[results['classifier'] == 'IRAPSClassifier']['dataset'],
y = results[results['classifier'] == 'IRAPSClassifier']['mean_test_binarize_average_precision_scorer'],
name = 'IRAPS_AP'
)
data2 = go.Bar(
x = results[results['classifier'] == 'RandomForestClassifier']['dataset'],
y = results[results['classifier'] == 'RandomForestClassifier']['mean_test_binarize_average_precision_scorer'],
name = 'RF_AP'
)
data3 = go.Bar(
x = results[results['classifier'] == 'XGBClassifier']['dataset'],
y = results[results['classifier'] == 'XGBClassifier']['mean_test_binarize_average_precision_scorer'],
name = 'XGBC_AP'
)
data4 = go.Bar(
x = results[results['classifier'] == 'XGBRegressor']['dataset'],
y = results[results['classifier'] == 'XGBRegressor']['mean_test_binarize_average_precision_scorer'],
name = 'XGBRegr_AP'
)
data5 = go.Bar(
x = results_auc[results_auc['classifier'] == 'IRAPSClassifier']['dataset'],
y = results_auc[results_auc['classifier'] == 'IRAPSClassifier']['mean_test_binarize_auc_scorer'],
name = 'IRAPS_ROC-AUC'
)
data6 = go.Bar(
x = results_auc[results_auc['classifier'] == 'RandomForestClassifier']['dataset'],
y = results_auc[results_auc['classifier'] == 'RandomForestClassifier']['mean_test_binarize_auc_scorer'],
name = 'RF_ROC-AUC'
)
data7 = go.Bar(
x = results_auc[results_auc['classifier'] == 'XGBClassifier']['dataset'],
y = results_auc[results_auc['classifier'] == 'XGBClassifier']['mean_test_binarize_auc_scorer'],
name = 'XGBC_ROC-AUC'
)
data8 = go.Bar(
x = results_auc[results_auc['classifier'] == 'XGBRegressor']['dataset'],
y = results_auc[results_auc['classifier'] == 'XGBRegressor']['mean_test_binarize_auc_scorer'],
name = 'XGBRegr_ROC-AUC'
)
layout = go.Layout(
xaxis=dict(
title='Dataset'
),
yaxis=dict(
title='Performance score'
),
barmode = 'group'
)
fig = go.Figure(data=[data1, data2, data3, data4], layout=layout)
iplot(fig)
fig = go.Figure(data=[data5, data6, data7,data8], layout=layout)
iplot(fig)
# To show plot, paste the link to this GitHub notebook into http://nbviewer.jupyter.org/
trace1 = {
"type": 'violin',
"x": results['classifier'],
"y": results['mean_test_binarize_average_precision_scorer'],
"legendgroup": 'AP',
"scalegroup": 'AP',
"name": 'AP',
"box": {
"visible": True
},
"meanline": {
"visible": True
},
"line": {
"color": 'blue'
}
}
trace2 = {
"type": 'violin',
"x": results_auc['classifier'],
"y": results_auc['mean_test_binarize_auc_scorer'],
"legendgroup": 'ROC-AUC',
"scalegroup": 'ROC-AUC',
"name": 'ROC-AUC',
"box": {
"visible": True
},
"meanline": {
"visible": True
},
"line": {
"color": 'pink'
}
}
layout = {
"yaxis": {
"zeroline": False,
},
"violinmode": 'group'
}
fig = go.Figure(data=[trace1, trace2], layout=layout)
iplot(fig)
# To show plot, paste the link to this GitHub notebook into http://nbviewer.jupyter.org/
```
<a href="https://hub.callysto.ca/jupyter/hub/user-redirect/git-pull?repo=https%3A%2F%2Fgithub.com%2Fcallysto%2Fcurriculum-notebooks&branch=master&subPath=Mathematics/FractionMultiplication/FractionMultiplication.ipynb&depth=1" target="_parent"><img src="https://raw.githubusercontent.com/callysto/curriculum-notebooks/master/open-in-callysto-button.svg?sanitize=true" width="123" height="24" alt="Open in Callysto"/></a>
```
import uiButtons
%uiButtons
```
# Fractions and Multiplication
## Visualizing Fraction Multiplication
## Introduction
An important skill to have when it comes to fractions is knowing how to multiply them together.<br>
As we know, fractions are of the form $\frac{a}{b}$ with $a$ and $b$ integers and $b\neq 0$. <br>
You can think of $\frac{a}{b}$ as the number you get when you do $a\div b$. <br>
If we think of a fraction as a division problem then it makes sense that it works well with multiplication.<br>
Unlike addition, multiplying fractions is easy and straightforward. <br>
In this notebook we will look into two forms of fraction multiplication:
- multiplying two fractions together (e.g. $\dfrac{4}{7} \times \dfrac{2}{3}$ )
- multiplying a fraction by an integer (e.g. $\dfrac{4}{7} \times 3$ )
## Procedure
As mentioned earlier, multiplying two fractions together is simple.<br>
Let's say we want to multiply the fractions $\dfrac{4}{7}$ and $\dfrac{2}{3}$.<br>
All we have to do is multiply the numerators (top numbers) together, then multiply the denominators (bottom numbers) together. Let's take a look:
$$
\frac{4}{7} \times \frac{2}{3}=\frac{4\times 2}{7\times 3}=\frac{8}{21}
$$
Let's try another example. Take the fractions $\dfrac{3}{5}$ and $\dfrac{2}{3}$. To multiply them we multiply the numerators together and the denominators together:
$$
\frac{3\times 2}{5\times 3}=\frac{6}{15}
$$
In this example, you might notice that the result is not in lowest terms: both 6 and 15 are divisible by 3, so we get $\dfrac{6}{15} = \dfrac{2}{5}$. In a later notebook, we'll focus on mechanics like this. For now, we want to focus on a visual understanding of the problem.
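As a quick arithmetic check (a side note for the curious, not part of the lesson), Python's `fractions` module multiplies fractions exactly this way and reduces the result to lowest terms automatically:
```
from fractions import Fraction

Fraction(4, 7) * Fraction(2, 3)  # 8/21
Fraction(3, 5) * Fraction(2, 3)  # 2/5, i.e. 6/15 reduced to lowest terms
```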
Now that we know how to multiply two fractions, let's think about what it actually means.<br>
Recall that a fraction simply represents a part of something. We can think of multiplying fractions together as taking a part of another part. In other words $\dfrac{1}{2}\times\dfrac{1}{2}$ is like saying $\dfrac{1}{2}$ of $\dfrac{1}{2}$ (one half **of** one half). If we have $\dfrac{1}{2}$ of a pizza and we want $\dfrac{1}{2}$ of that half what do we end up with?<br>
<img src="./images/pizza.png" width="400px">
We get $\dfrac{1}{4}$ because $\dfrac{1}{2}\times\dfrac{1}{2}=\dfrac{1}{4}$.<br>
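The pizza picture can be double-checked the same way with the standard library's `fractions` module (again, a quick optional sketch):

```python
from fractions import Fraction

half = Fraction(1, 2)
# "Half of a half": multiplying fractions takes a part of a part.
print(half * half)  # 1/4
```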
Watch the video below to help us further visualize this concept.
```
%%html
<div align="middle">
<iframe id="vid1" width="640" height="360" src="https://www.youtube.com/embed/hr_mTd-oJ-M" frameborder="0" allow="autoplay; encrypted-media" allowfullscreen></iframe>
<p><a href="https://www.youtube.com/channel/UC4a-Gbdw7vOaccHmFo40b9g" target="_blank">Click here</a> for more videos by Khan Academy</p>
</div>
<script>
$(function() {
var reachable = false;
var myFrame = $('#vid1');
var videoSrc = myFrame.attr("src");
myFrame.attr("src", videoSrc)
.on('load', function(){reachable = true;});
setTimeout(function() {
if(!reachable) {
var ifrm = myFrame[0];
ifrm = (ifrm.contentWindow) ? ifrm.contentWindow : (ifrm.contentDocument.document) ? ifrm.contentDocument.document : ifrm.contentDocument;
ifrm.document.open();
ifrm.document.write('If the video does not start click <a href="' + videoSrc + '" target="_blank">here</a>');
ifrm.document.close();
}
}, 2000)
});
</script>
```
## Interactive visualization
The widget below allows you to visualize fraction multiplication as shown in the video. To begin, enter a fraction in the boxes below.
```
%%html
<script src="./d3/d3.min.js"></script>
<!-- <script src="https://d3js.org/d3.v3.min.js"></script> -->
<script type="text/x-mathjax-config">
MathJax.Hub.Config({
tex2jax: {inlineMath: [['$','$'], ['\\(','\\)']]}
});
</script>
<script src="https://code.jquery.com/jquery-1.10.2.js"></script>
<style>
.fractionInput {
max-width: 40px;
}
.fractionBar {
width: 40px;
height: 3px;
background-color: #000000;
}
.ingredientsInput {
margin-left: 10px;
margin-right: 10px;
max-width: 40px;
/* float: right; */
}
#speech {
margin: 50px;
font-size: 150%;
}
li {
margin-bottom: 15px;
}
</style>
%%html
<div class="fractionInputs" style="margin:20px">
<h1 id="leftInputFractionText" style="float: left; display: none"></h1>
<div id="opperandInput" style="float: left; display: block">
<input type="text" class="fractionInput form-control form-control-sm" id="oppNumerator" placeholder="0" style="margin-bottom: -10px;">
<hr align="left" class="fractionBar">
<input type="text" class="fractionInput form-control form-control-sm" id="oppDenominator" placeholder="1" style="margin-top: -10px;">
</div>
<button type="button" id="continueBtn" class="btn btn-primary buttons" style="margin: 30px">Continue</button>
</div>
<div class="canvasDiv" style="clear: left">
<svg height="500" width="500" viewBox="0 0 500 500" xmlns="http://www.w3.org/2000/svg" id="mainCanvas" style="float: left">
<rect id="mainBox" height="480" width="480" x="10" y="10" style="outline: solid #000000 3px; fill:#ffffff"></rect>
<rect id="leftOpperand" height="480" width="0" x="10" y="10"></rect>
<rect id="rightOpperand" height="0" width="480" x="10" y="10"></rect>
</svg>
</div>
<div>
<p id="speech">Enter a fraction inside the boxes provided then click continue.</p>
</div>
<div style="clear: left; margin-left: 10px">
<button type="button" id="resetFractionBoxBtn" class="btn btn-primary buttons">Reset</button>
</div>
```
## Multiplying a fraction by an integer
In this section we will talk about multiplying a fraction, such as $\dfrac{4}{7}$, by an integer such as $3$. A good example of when this could be useful is when you need to double a recipe. <br>
Doing multiplication of this form is simply a special case of multiplying two fractions together since any integer, such as $3$ in this case, can be rewritten as $\dfrac{3}{1}$. On a calculator, try inputting any number divided by $1$, and you will always get back the original number. <br>
Let's demonstrate this with an example. To multiply the fraction $\dfrac{4}{7}$ and the integer $3$, remember that we can write $3$ as $\dfrac31$. We get
$$
\frac{4}{7}\times\frac{3}{1} = \frac{4\times 3}{7\times 1}= \frac{12}{7}
$$
**Note that $\dfrac{3}{1}$ is an improper fraction. Improper fractions follow all the same rules for multiplication as proper fractions.**
The big takeaway is that the denominator does not change, since it is simply multiplied by $1$. This means we did not change the "whole", we only changed how many parts of the "whole" we have (the numerator). In effect all we did was triple our fraction, since our constant was 3. <br>
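In Python terms (a sketch using the standard `fractions` module), multiplying by an integer behaves exactly like multiplying by the fraction $\dfrac{c}{1}$:

```python
from fractions import Fraction

f = Fraction(4, 7)
# Multiplying by the integer 3 and by the fraction 3/1 give the same result:
print(f * 3)  # 12/7
print(f * Fraction(3, 1))  # 12/7
```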
Let's practice what we just learned with a recipe example. Below you will see the ingredient list for the famous **Fresh Tomato and Basil Pasta Salad** recipe. This recipe makes enough for 4 servings, but we would like to double the recipe in order to serve 8 people. Apply what we have learned so far to double the ingredients list for the **tomato and basil pasta salad** in order to make 8 servings.
(Enter your answers in the provided boxes. Fractions should be written using the _forward slash_ key "/", e.g. 5/8. When you're done, click _Check Answers_ to see if you are correct!)
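If you would rather script the doubling than do it by hand, here is a sketch using the standard library's `fractions` module; the amounts are copied from the recipe below (spoiler: running it reveals the answers, so try the boxes first):

```python
from fractions import Fraction

servings_multiplier = 2  # double the recipe (4 servings -> 8)

# Amounts from the recipe, expressed as fractions of their units
recipe = {
    "tomatoes": Fraction(3),
    "basil (cups)": Fraction(1, 3),
    "olive oil (Tbsp)": Fraction(2),
    "garlic (cloves)": Fraction(1),
    "salt (tsp)": Fraction(1, 2),
    "pepper (tsp)": Fraction(1, 4),
    "pasta (oz)": Fraction(8),
    "parmesan (cups)": Fraction(3, 4),
}

for ingredient, amount in recipe.items():
    print(ingredient, "->", amount * servings_multiplier)
```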
```
%%html
<div class="ingredientsList">
<h1>Fresh Tomato and Basil Pasta Salad</h1>
<img src="./images/pastaSalad.jpg" width=250 style="float: left; margin-right: 50px; box-shadow: 5px 6px 25px 3px grey">
<ul style="max-width: 700px; margin-bottom">
<li><label>3 medium ripe tomatoes, chopped --></label><input id="tomatoes" class="ingredientsInput"></input><label>tomatoes</label></li>
<li><label>1/3 cup thinly sliced fresh basil --></label><input id="basil" class="ingredientsInput"></input><label>cup</label></li>
<li><label>2 Tbsp. olive oil --></label><input id="olivOil" class="ingredientsInput"></input><label>Tbsp.</label></li>
<li><label>1 clove garlic, minced --></label><input id="garlic" class="ingredientsInput"></input><label>clove</label></li>
<li><label>1/2 tsp. salt --></label><input id="salt" class="ingredientsInput"></input><label>tsp.</label></li>
<li><label>1/4 tsp. pepper --></label><input id="pepper" class="ingredientsInput"></input><label>tsp.</label></li>
<li><label>8 oz. rotini pasta pasta, uncooked --></label><input id="pasta" class="ingredientsInput"></input><label>oz.</label></li>
<li><label>3/4 cup Parmesan Style Grated Topping --></label><input id="parmesan" class="ingredientsInput"></input><label>cup</label></li>
</ul>
<button type="button" id="checkAnswerBtn">Check Answers</button>
<button type="button" id="resetBtn">Reset</button>
</div>
<div>
<h2 id="answerStatus"></h2>
</div>
```
## Conclusion
Throughout this notebook we looked at how easy multiplying fractions together really is. We also looked at how to multiply a fraction by a constant. Let's recap what we have learned:
- When multiplying two fractions together we multiply the numerators together and the denominators together: $\dfrac{a}{b}\times\dfrac{c}{d}=\dfrac{a \times c}{b \times d} = \dfrac{ac}{bd}$
- A constant can always be rewritten as the constant over 1: $c = \dfrac{c}{1}$
- When multiplying a fraction by a constant, multiply the numerator by the constant and keep the denominator the same: $\dfrac{a}{b}\times c=\dfrac{a\times c}{b}=\dfrac{ac}{b}$
- Multiplying two fractions together is the same as saying _a part of a part_: $\dfrac{a}{b}\times\dfrac{c}{d}$ is like saying $\dfrac{a}{b}$ **of** $\dfrac{c}{d}$ (The equation $\dfrac{3}{5}\times\dfrac{1}{4}$ is the same as _three fifths **of** one quarter_)
```
%%html
<script>
var leftOpperand = {
id: 'leftOpperand',
numerator: Number(0),
denominator: Number(0),
colour: '#ff0066'
};
var rightOpperand = {
id: 'rightOpperand',
numerator: Number(0),
denominator: Number(0),
colour: '#0000ff'
};
var currentState = 0;
var getOpperandInput = function(numeratorInput, denominatorInput, opperand) {
opperand.numerator = document.getElementById(numeratorInput).value;
opperand.denominator = document.getElementById(denominatorInput).value;
}
var verticalDivide = function(xVal, lineNum) {
var i = xVal;
while(lineNum > 0){
addLine(Number(i + 10), Number(i + 10), 10, Number(d3.select('#mainBox').attr('height')) + 10);
i += xVal;
lineNum --;
}
};
var horizontalDivide = function(xVal, lineNum) {
var i = Number(xVal);
while(lineNum > 0){
addLine(10, Number(d3.select('#mainBox').attr('width')) + 10, Number(i + 10), Number(i +10));
i += xVal;
lineNum --;
}
};
var addLine = function (x1, x2, y1, y2) {
var dashed = '0,0';
var stroke = 2;
d3.select('#mainCanvas').append('line')
.attr('class', 'divLine ')
.attr('x1', x1)
.attr('x2', x2)
.attr('y1', y1)
.attr('y2', y2)
.style('stroke', 'black')
.style('stroke-width', stroke);
};
var fillBox = function(box, width, height, colour, opacity) {
d3.select('#' + box.id)
.style('fill', colour)
.style('opacity', opacity)
.transition().delay(function (d, i) {
return i * 300;
}).duration(500)
.attr('width', width)
.attr('height', height);
};
var changeOpacity = function(box, opacity) {
d3.select('#' + box.id).transition().delay(function (d, i) {
return i * 300;
}).duration(500)
.style('opacity', opacity);
d3.selectAll('.divLine').transition().delay(function (d, i) {
return i * 100;
}).duration(200)
.style('opacity', opacity);
};
var resetInputs = function() {
d3.select('#continueBtn').attr('disabled', null);
d3.selectAll('.divLine').remove();
d3.select('#leftOpperand').attr('width', 0);
d3.select('#rightOpperand').attr('height', 0);
d3.select('#leftInputFractionText').text('').style('display', 'none');
clearInput('oppNumerator');
clearInput('oppDenominator');
leftOpperand.numerator = Number(0);
leftOpperand.denominator = Number(0);
rightOpperand.numerator = Number(0);
rightOpperand.denominator = Number(0);
};
var isValid = function(numerator, denominator) {
if (numerator < 0 || numerator > 12) {
return false;
}
if (denominator <= 0 || denominator > 12) {
return false;
}
return (numerator < denominator);
};
var updateMathJax = function() {
MathJax.Hub.Queue(["Typeset",MathJax.Hub]);
};
var showInputBox = function(inputId) {
d3.select('#' + inputId).style('display', 'block');
};
var hideInputBox = function(inputId) {
d3.select('#' + inputId).style('display', 'none');
};
var clearInput = function(inputId) {
document.getElementById(inputId).value = '';
}
var stateControler = function(state) {
currentState = state;
setSpeech(state);
switch(state) {
case 0 :
resetInputs();
showInputBox('opperandInput');
break;
case 1 :
getOpperandInput('oppNumerator', 'oppDenominator', leftOpperand);
d3.select('#leftInputFractionText')
.text('$\\frac{'+leftOpperand.numerator+'}{'+leftOpperand.denominator+'} \\times$')
.style('display', 'block');
updateMathJax();
verticalDivide(Number(d3.select('#mainBox').attr('width')/leftOpperand.denominator), Number(leftOpperand.denominator - 1));
hideInputBox('opperandInput');
break;
case 2 :
fillBox(leftOpperand, Number(d3.select('#mainBox').attr('width')/leftOpperand.denominator) * leftOpperand.numerator, Number(d3.select('#mainBox').attr('height')), leftOpperand.colour, 1);
clearInput('oppNumerator');
clearInput('oppDenominator');
showInputBox('opperandInput');
break;
case 3 :
getOpperandInput('oppNumerator', 'oppDenominator', rightOpperand);
d3.select('#leftInputFractionText')
.text('$\\frac{'+leftOpperand.numerator+'}{'+leftOpperand.denominator+'} \\times$' + '$\\frac{'+rightOpperand.numerator+'}{'+rightOpperand.denominator+'}$');
updateMathJax();
changeOpacity(leftOpperand, 0);
horizontalDivide(Number(d3.select('#mainBox').attr('height')/rightOpperand.denominator), Number(rightOpperand.denominator - 1));
hideInputBox('opperandInput');
break;
case 4 :
fillBox(rightOpperand, Number(d3.select('#mainBox').attr('width')), Number(d3.select('#mainBox').attr('height')/rightOpperand.denominator) * rightOpperand.numerator, rightOpperand.colour, 0.5);
break;
case 5 :
changeOpacity(leftOpperand, 1);
d3.select('#continueBtn').attr('disabled', true);
break;
default:
console.log('not a valid state, returning to state 0');
stateControler(0);
}
};
var speech = [
"Enter a fraction in the boxes provided, then click continue.",
"Great! Now we see that the square has been divided into rectangles of equal size. The number of rectangles is given by the denominator. Click continue when ready.",
"Some of the equal parts have been filled in with pink. The numerator equals the number of pink rectangles. The ratio of the area in pink to the total area is our fraction. Enter another fraction to multiply then click continue.",
"Let’s focus on the second fraction. The first one is temporarily hidden for clarity. As before, the number of rectangles we see equals the denominator. Click continue when ready.",
"Now we have a blue section representing the numerator of the second fraction. Click continue to multiply these two fractions.",
"Awesome! The first fraction is back and overlaid with the second fraction. The number of rectangles in the purple section is the numerator of our answer. Notice that this is the product of the numerators. The total number of rectangles is the denominator of the product, and this is just the product of the two denominators!"
];
function setSpeech(state) {
d3.select('#speech').text(speech[state]);
};
document.getElementById('continueBtn').onclick = function() {
if(!isValid(Number(document.getElementById('oppNumerator').value), Number(document.getElementById('oppDenominator').value))){
alert('Make sure your fractions are proper and the denominators are less than or equal to 12');
}
else {
stateControler(currentState + 1);
}
};
document.getElementById('resetFractionBoxBtn').onclick = function() {
resetInputs();
stateControler(0);
};
</script>
%%html
<script type="text/javascript">
var x = 2; // Recipe multiplier
getInput('checkAnswerBtn').onclick = function() {
if(checkAnswers()) {
d3.select('#answerStatus').text('Correct!! Good job.');
} else {
d3.select('#answerStatus').text('Not quite, keep trying!');
}
};
getInput('resetBtn').onclick = function() {
var inputs = document.getElementsByClassName('ingredientsInput');
for(var i = 0; i < inputs.length; i++) {
inputs[i].value = '';
}
d3.selectAll('.ingredientsInput').style('background-color', '#ffffff');
d3.select('#answerStatus').text('');
};
function checkAnswers() {
var isCorrect = true;
if(!checkAnswer('tomatoes', x*3))
isCorrect = false;
if(!checkAnswer('basil', x*(1/3)))
isCorrect = false;
if(!checkAnswer('olivOil', x*2))
isCorrect = false;
if(!checkAnswer('garlic', x*1))
isCorrect = false;
if(!checkAnswer('salt', x*(1/2)))
isCorrect = false;
if(!checkAnswer('pepper', x*(1/4)))
isCorrect = false;
if(!checkAnswer('pasta', x*8))
isCorrect = false;
if(!checkAnswer('parmesan', x*(3/4)))
isCorrect = false;
return isCorrect;
};
function checkAnswer(id, ans) {
if(eval(getInput(id).value) === ans) {
return answerCorrect(id);
}
return answerIncorrect(id);
};
function answerCorrect(id) {
d3.select('#' + id).style('background-color', '#76D177');
return true;
}
function answerIncorrect(id) {
d3.select('#' + id).style('background-color', '#BB4646');
return false;
}
function getInput(id) {
return document.getElementById(id);
};
</script>
```
[](https://github.com/callysto/curriculum-notebooks/blob/master/LICENSE.md)
# Transfer Learning Template
```
%load_ext autoreload
%autoreload 2
%matplotlib inline
import os, json, sys, time, random
import numpy as np
import torch
from torch.optim import Adam
from easydict import EasyDict
import matplotlib.pyplot as plt
from steves_models.steves_ptn import Steves_Prototypical_Network
from steves_utils.lazy_iterable_wrapper import Lazy_Iterable_Wrapper
from steves_utils.iterable_aggregator import Iterable_Aggregator
from steves_utils.ptn_train_eval_test_jig import PTN_Train_Eval_Test_Jig
from steves_utils.torch_sequential_builder import build_sequential
from steves_utils.torch_utils import get_dataset_metrics, ptn_confusion_by_domain_over_dataloader
from steves_utils.utils_v2 import (per_domain_accuracy_from_confusion, get_datasets_base_path)
from steves_utils.PTN.utils import independent_accuracy_assesment
from torch.utils.data import DataLoader
from steves_utils.stratified_dataset.episodic_accessor import Episodic_Accessor_Factory
from steves_utils.ptn_do_report import (
get_loss_curve,
get_results_table,
get_parameters_table,
get_domain_accuracies,
)
from steves_utils.transforms import get_chained_transform
```
# Allowed Parameters
These are the allowed parameters, not defaults.
Each of these values needs to be present in the injected parameters (the notebook will raise an exception if any are missing).
Papermill uses the cell tag "parameters" to inject the real parameters below this cell.
Enable cell tags in Jupyter to see what I mean.
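The validation described above boils down to set arithmetic over the parameter names; a minimal sketch, with a made-up illustrative parameter set rather than the full one used below:

```python
# Illustrative subset of the required parameter names
required_parameters = {"experiment_name", "lr", "seed"}

# What papermill (or a caller) might inject
injected = {"experiment_name": "demo", "lr": 0.001, "seed": 1337}

supplied_keys = set(injected.keys())
extra = supplied_keys - required_parameters    # keys we shouldn't have
missing = required_parameters - supplied_keys  # keys we still need

# The notebook raises if either set is non-empty
if extra or missing:
    raise RuntimeError(f"Parameters are incorrect: extra={extra}, missing={missing}")
```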
```
required_parameters = {
"experiment_name",
"lr",
"device",
"seed",
"dataset_seed",
"n_shot",
"n_query",
"n_way",
"train_k_factor",
"val_k_factor",
"test_k_factor",
"n_epoch",
"patience",
"criteria_for_best",
"x_net",
"datasets",
"torch_default_dtype",
"NUM_LOGS_PER_EPOCH",
"BEST_MODEL_PATH",
"x_shape",
}
from steves_utils.CORES.utils import (
ALL_NODES,
ALL_NODES_MINIMUM_1000_EXAMPLES,
ALL_DAYS
)
from steves_utils.ORACLE.utils_v2 import (
ALL_DISTANCES_FEET_NARROWED,
ALL_RUNS,
ALL_SERIAL_NUMBERS,
)
standalone_parameters = {}
standalone_parameters["experiment_name"] = "STANDALONE PTN"
standalone_parameters["lr"] = 0.001
standalone_parameters["device"] = "cuda"
standalone_parameters["seed"] = 1337
standalone_parameters["dataset_seed"] = 1337
standalone_parameters["n_way"] = 8
standalone_parameters["n_shot"] = 3
standalone_parameters["n_query"] = 2
standalone_parameters["train_k_factor"] = 1
standalone_parameters["val_k_factor"] = 2
standalone_parameters["test_k_factor"] = 2
standalone_parameters["n_epoch"] = 50
standalone_parameters["patience"] = 10
standalone_parameters["criteria_for_best"] = "source_loss"
standalone_parameters["datasets"] = [
{
"labels": ALL_SERIAL_NUMBERS,
"domains": ALL_DISTANCES_FEET_NARROWED,
"num_examples_per_domain_per_label": 100,
"pickle_path": os.path.join(get_datasets_base_path(), "oracle.Run1_framed_2000Examples_stratified_ds.2022A.pkl"),
"source_or_target_dataset": "source",
"x_transforms": ["unit_mag", "minus_two"],
"episode_transforms": [],
"domain_prefix": "ORACLE_"
},
{
"labels": ALL_NODES,
"domains": ALL_DAYS,
"num_examples_per_domain_per_label": 100,
"pickle_path": os.path.join(get_datasets_base_path(), "cores.stratified_ds.2022A.pkl"),
"source_or_target_dataset": "target",
"x_transforms": ["unit_power", "times_zero"],
"episode_transforms": [],
"domain_prefix": "CORES_"
}
]
standalone_parameters["torch_default_dtype"] = "torch.float32"
standalone_parameters["x_net"] = [
{"class": "nnReshape", "kargs": {"shape":[-1, 1, 2, 256]}},
{"class": "Conv2d", "kargs": { "in_channels":1, "out_channels":256, "kernel_size":(1,7), "bias":False, "padding":(0,3), },},
{"class": "ReLU", "kargs": {"inplace": True}},
{"class": "BatchNorm2d", "kargs": {"num_features":256}},
{"class": "Conv2d", "kargs": { "in_channels":256, "out_channels":80, "kernel_size":(2,7), "bias":True, "padding":(0,3), },},
{"class": "ReLU", "kargs": {"inplace": True}},
{"class": "BatchNorm2d", "kargs": {"num_features":80}},
{"class": "Flatten", "kargs": {}},
{"class": "Linear", "kargs": {"in_features": 80*256, "out_features": 256}}, # 80 units per IQ pair
{"class": "ReLU", "kargs": {"inplace": True}},
{"class": "BatchNorm1d", "kargs": {"num_features":256}},
{"class": "Linear", "kargs": {"in_features": 256, "out_features": 256}},
]
# Parameters relevant to results
# These parameters will basically never need to change
standalone_parameters["NUM_LOGS_PER_EPOCH"] = 10
standalone_parameters["BEST_MODEL_PATH"] = "./best_model.pth"
# Parameters
parameters = {
"experiment_name": "tl_3-jitter1v2:cores -> oracle.run1.framed",
"device": "cuda",
"lr": 0.0001,
"x_shape": [2, 256],
"n_shot": 3,
"n_query": 2,
"train_k_factor": 3,
"val_k_factor": 2,
"test_k_factor": 2,
"torch_default_dtype": "torch.float32",
"n_epoch": 50,
"patience": 3,
"criteria_for_best": "target_accuracy",
"x_net": [
{"class": "nnReshape", "kargs": {"shape": [-1, 1, 2, 256]}},
{
"class": "Conv2d",
"kargs": {
"in_channels": 1,
"out_channels": 256,
"kernel_size": [1, 7],
"bias": False,
"padding": [0, 3],
},
},
{"class": "ReLU", "kargs": {"inplace": True}},
{"class": "BatchNorm2d", "kargs": {"num_features": 256}},
{
"class": "Conv2d",
"kargs": {
"in_channels": 256,
"out_channels": 80,
"kernel_size": [2, 7],
"bias": True,
"padding": [0, 3],
},
},
{"class": "ReLU", "kargs": {"inplace": True}},
{"class": "BatchNorm2d", "kargs": {"num_features": 80}},
{"class": "Flatten", "kargs": {}},
{"class": "Linear", "kargs": {"in_features": 20480, "out_features": 256}},
{"class": "ReLU", "kargs": {"inplace": True}},
{"class": "BatchNorm1d", "kargs": {"num_features": 256}},
{"class": "Linear", "kargs": {"in_features": 256, "out_features": 256}},
],
"NUM_LOGS_PER_EPOCH": 10,
"BEST_MODEL_PATH": "./best_model.pth",
"n_way": 16,
"datasets": [
{
"labels": [
"1-10.",
"1-11.",
"1-15.",
"1-16.",
"1-17.",
"1-18.",
"1-19.",
"10-4.",
"10-7.",
"11-1.",
"11-14.",
"11-17.",
"11-20.",
"11-7.",
"13-20.",
"13-8.",
"14-10.",
"14-11.",
"14-14.",
"14-7.",
"15-1.",
"15-20.",
"16-1.",
"16-16.",
"17-10.",
"17-11.",
"17-2.",
"19-1.",
"19-16.",
"19-19.",
"19-20.",
"19-3.",
"2-10.",
"2-11.",
"2-17.",
"2-18.",
"2-20.",
"2-3.",
"2-4.",
"2-5.",
"2-6.",
"2-7.",
"2-8.",
"3-13.",
"3-18.",
"3-3.",
"4-1.",
"4-10.",
"4-11.",
"4-19.",
"5-5.",
"6-15.",
"7-10.",
"7-14.",
"8-18.",
"8-20.",
"8-3.",
"8-8.",
],
"domains": [1, 2, 3, 4, 5],
"num_examples_per_domain_per_label": -1,
"pickle_path": "/root/csc500-main/datasets/cores.stratified_ds.2022A.pkl",
"source_or_target_dataset": "source",
"x_transforms": ["jitter_256_1", "lowpass_+/-10MHz", "take_200"],
"episode_transforms": [],
"domain_prefix": "C_",
},
{
"labels": [
"3123D52",
"3123D65",
"3123D79",
"3123D80",
"3123D54",
"3123D70",
"3123D7B",
"3123D89",
"3123D58",
"3123D76",
"3123D7D",
"3123EFE",
"3123D64",
"3123D78",
"3123D7E",
"3124E4A",
],
"domains": [32, 38, 8, 44, 14, 50, 20, 26],
"num_examples_per_domain_per_label": 2000,
"pickle_path": "/root/csc500-main/datasets/oracle.Run1_framed_2000Examples_stratified_ds.2022A.pkl",
"source_or_target_dataset": "target",
"x_transforms": ["jitter_256_1", "take_200", "resample_20Msps_to_25Msps"],
"episode_transforms": [],
"domain_prefix": "O_",
},
],
"seed": 500,
"dataset_seed": 500,
}
# Set this to True if you want to run this template directly
STANDALONE = False
if STANDALONE:
print("parameters not injected, running with standalone_parameters")
parameters = standalone_parameters
if not 'parameters' in locals() and not 'parameters' in globals():
raise Exception("Parameter injection failed")
#Use an easy dict for all the parameters
p = EasyDict(parameters)
if "x_shape" not in p:
p.x_shape = [2,256] # Default to this if we don't supply x_shape
supplied_keys = set(p.keys())
if supplied_keys != required_parameters:
print("Parameters are incorrect")
if len(supplied_keys - required_parameters)>0: print("Shouldn't have:", str(supplied_keys - required_parameters))
if len(required_parameters - supplied_keys)>0: print("Need to have:", str(required_parameters - supplied_keys))
raise RuntimeError("Parameters are incorrect")
###################################
# Set the RNGs and make it all deterministic
###################################
np.random.seed(p.seed)
random.seed(p.seed)
torch.manual_seed(p.seed)
torch.use_deterministic_algorithms(True)
###########################################
# The stratified datasets honor this
###########################################
torch.set_default_dtype(eval(p.torch_default_dtype))
###################################
# Build the network(s)
# Note: It's critical to do this AFTER setting the RNG
###################################
x_net = build_sequential(p.x_net)
start_time_secs = time.time()
p.domains_source = []
p.domains_target = []
train_original_source = []
val_original_source = []
test_original_source = []
train_original_target = []
val_original_target = []
test_original_target = []
# global_x_transform_func = lambda x: normalize(x.to(torch.get_default_dtype()), "unit_power") # unit_power, unit_mag
# global_x_transform_func = lambda x: normalize(x, "unit_power") # unit_power, unit_mag
def add_dataset(
labels,
domains,
pickle_path,
x_transforms,
episode_transforms,
domain_prefix,
num_examples_per_domain_per_label,
source_or_target_dataset:str,
iterator_seed=p.seed,
dataset_seed=p.dataset_seed,
n_shot=p.n_shot,
n_way=p.n_way,
n_query=p.n_query,
train_val_test_k_factors=(p.train_k_factor,p.val_k_factor,p.test_k_factor),
):
if x_transforms == []: x_transform = None
else: x_transform = get_chained_transform(x_transforms)
if episode_transforms == []: episode_transform = None
else: raise Exception("episode_transforms not implemented")
episode_transform = lambda tup, _prefix=domain_prefix: (_prefix + str(tup[0]), tup[1])
eaf = Episodic_Accessor_Factory(
labels=labels,
domains=domains,
num_examples_per_domain_per_label=num_examples_per_domain_per_label,
iterator_seed=iterator_seed,
dataset_seed=dataset_seed,
n_shot=n_shot,
n_way=n_way,
n_query=n_query,
train_val_test_k_factors=train_val_test_k_factors,
pickle_path=pickle_path,
x_transform_func=x_transform,
)
train, val, test = eaf.get_train(), eaf.get_val(), eaf.get_test()
train = Lazy_Iterable_Wrapper(train, episode_transform)
val = Lazy_Iterable_Wrapper(val, episode_transform)
test = Lazy_Iterable_Wrapper(test, episode_transform)
if source_or_target_dataset=="source":
train_original_source.append(train)
val_original_source.append(val)
test_original_source.append(test)
p.domains_source.extend(
[domain_prefix + str(u) for u in domains]
)
elif source_or_target_dataset=="target":
train_original_target.append(train)
val_original_target.append(val)
test_original_target.append(test)
p.domains_target.extend(
[domain_prefix + str(u) for u in domains]
)
else:
raise Exception(f"invalid source_or_target_dataset: {source_or_target_dataset}")
for ds in p.datasets:
add_dataset(**ds)
# from steves_utils.CORES.utils import (
# ALL_NODES,
# ALL_NODES_MINIMUM_1000_EXAMPLES,
# ALL_DAYS
# )
# add_dataset(
# labels=ALL_NODES,
# domains = ALL_DAYS,
# num_examples_per_domain_per_label=100,
# pickle_path=os.path.join(get_datasets_base_path(), "cores.stratified_ds.2022A.pkl"),
# source_or_target_dataset="target",
# x_transform_func=global_x_transform_func,
# domain_modifier=lambda u: f"cores_{u}"
# )
# from steves_utils.ORACLE.utils_v2 import (
# ALL_DISTANCES_FEET,
# ALL_RUNS,
# ALL_SERIAL_NUMBERS,
# )
# add_dataset(
# labels=ALL_SERIAL_NUMBERS,
# domains = list(set(ALL_DISTANCES_FEET) - {2,62}),
# num_examples_per_domain_per_label=100,
# pickle_path=os.path.join(get_datasets_base_path(), "oracle.Run2_framed_2000Examples_stratified_ds.2022A.pkl"),
# source_or_target_dataset="source",
# x_transform_func=global_x_transform_func,
# domain_modifier=lambda u: f"oracle1_{u}"
# )
# from steves_utils.ORACLE.utils_v2 import (
# ALL_DISTANCES_FEET,
# ALL_RUNS,
# ALL_SERIAL_NUMBERS,
# )
# add_dataset(
# labels=ALL_SERIAL_NUMBERS,
# domains = list(set(ALL_DISTANCES_FEET) - {2,62,56}),
# num_examples_per_domain_per_label=100,
# pickle_path=os.path.join(get_datasets_base_path(), "oracle.Run2_framed_2000Examples_stratified_ds.2022A.pkl"),
# source_or_target_dataset="source",
# x_transform_func=global_x_transform_func,
# domain_modifier=lambda u: f"oracle2_{u}"
# )
# add_dataset(
# labels=list(range(19)),
# domains = [0,1,2],
# num_examples_per_domain_per_label=100,
# pickle_path=os.path.join(get_datasets_base_path(), "metehan.stratified_ds.2022A.pkl"),
# source_or_target_dataset="target",
# x_transform_func=global_x_transform_func,
# domain_modifier=lambda u: f"met_{u}"
# )
# # from steves_utils.wisig.utils import (
# # ALL_NODES_MINIMUM_100_EXAMPLES,
# # ALL_NODES_MINIMUM_500_EXAMPLES,
# # ALL_NODES_MINIMUM_1000_EXAMPLES,
# # ALL_DAYS
# # )
# import steves_utils.wisig.utils as wisig
# add_dataset(
# labels=wisig.ALL_NODES_MINIMUM_100_EXAMPLES,
# domains = wisig.ALL_DAYS,
# num_examples_per_domain_per_label=100,
# pickle_path=os.path.join(get_datasets_base_path(), "wisig.node3-19.stratified_ds.2022A.pkl"),
# source_or_target_dataset="target",
# x_transform_func=global_x_transform_func,
# domain_modifier=lambda u: f"wisig_{u}"
# )
###################################
# Build the dataset
###################################
train_original_source = Iterable_Aggregator(train_original_source, p.seed)
val_original_source = Iterable_Aggregator(val_original_source, p.seed)
test_original_source = Iterable_Aggregator(test_original_source, p.seed)
train_original_target = Iterable_Aggregator(train_original_target, p.seed)
val_original_target = Iterable_Aggregator(val_original_target, p.seed)
test_original_target = Iterable_Aggregator(test_original_target, p.seed)
# For CNN We only use X and Y. And we only train on the source.
# Properly form the data using a transform lambda and Lazy_Iterable_Wrapper. Finally wrap them in a dataloader
transform_lambda = lambda ex: ex[1] # Original is (<domain>, <episode>) so we strip down to episode only
train_processed_source = Lazy_Iterable_Wrapper(train_original_source, transform_lambda)
val_processed_source = Lazy_Iterable_Wrapper(val_original_source, transform_lambda)
test_processed_source = Lazy_Iterable_Wrapper(test_original_source, transform_lambda)
train_processed_target = Lazy_Iterable_Wrapper(train_original_target, transform_lambda)
val_processed_target = Lazy_Iterable_Wrapper(val_original_target, transform_lambda)
test_processed_target = Lazy_Iterable_Wrapper(test_original_target, transform_lambda)
datasets = EasyDict({
"source": {
"original": {"train":train_original_source, "val":val_original_source, "test":test_original_source},
"processed": {"train":train_processed_source, "val":val_processed_source, "test":test_processed_source}
},
"target": {
"original": {"train":train_original_target, "val":val_original_target, "test":test_original_target},
"processed": {"train":train_processed_target, "val":val_processed_target, "test":test_processed_target}
},
})
from steves_utils.transforms import get_average_magnitude, get_average_power
print(set([u for u,_ in val_original_source]))
print(set([u for u,_ in val_original_target]))
s_x, s_y, q_x, q_y, _ = next(iter(train_processed_source))
print(s_x)
# for ds in [
# train_processed_source,
# val_processed_source,
# test_processed_source,
# train_processed_target,
# val_processed_target,
# test_processed_target
# ]:
# for s_x, s_y, q_x, q_y, _ in ds:
# for X in (s_x, q_x):
# for x in X:
# assert np.isclose(get_average_magnitude(x.numpy()), 1.0)
# assert np.isclose(get_average_power(x.numpy()), 1.0)
###################################
# Build the model
###################################
# easfsl only wants a tuple for the shape
model = Steves_Prototypical_Network(x_net, device=p.device, x_shape=tuple(p.x_shape))
optimizer = Adam(params=model.parameters(), lr=p.lr)
###################################
# train
###################################
jig = PTN_Train_Eval_Test_Jig(model, p.BEST_MODEL_PATH, p.device)
jig.train(
train_iterable=datasets.source.processed.train,
source_val_iterable=datasets.source.processed.val,
target_val_iterable=datasets.target.processed.val,
num_epochs=p.n_epoch,
num_logs_per_epoch=p.NUM_LOGS_PER_EPOCH,
patience=p.patience,
optimizer=optimizer,
criteria_for_best=p.criteria_for_best,
)
total_experiment_time_secs = time.time() - start_time_secs
###################################
# Evaluate the model
###################################
source_test_label_accuracy, source_test_label_loss = jig.test(datasets.source.processed.test)
target_test_label_accuracy, target_test_label_loss = jig.test(datasets.target.processed.test)
source_val_label_accuracy, source_val_label_loss = jig.test(datasets.source.processed.val)
target_val_label_accuracy, target_val_label_loss = jig.test(datasets.target.processed.val)
history = jig.get_history()
total_epochs_trained = len(history["epoch_indices"])
val_dl = Iterable_Aggregator((datasets.source.original.val,datasets.target.original.val))
confusion = ptn_confusion_by_domain_over_dataloader(model, p.device, val_dl)
per_domain_accuracy = per_domain_accuracy_from_confusion(confusion)
# Add a key to per_domain_accuracy indicating whether the domain was a source domain
for domain, accuracy in per_domain_accuracy.items():
per_domain_accuracy[domain] = {
"accuracy": accuracy,
"source?": domain in p.domains_source
}
# Do an independent accuracy assessment JUST TO BE SURE!
# _source_test_label_accuracy = independent_accuracy_assesment(model, datasets.source.processed.test, p.device)
# _target_test_label_accuracy = independent_accuracy_assesment(model, datasets.target.processed.test, p.device)
# _source_val_label_accuracy = independent_accuracy_assesment(model, datasets.source.processed.val, p.device)
# _target_val_label_accuracy = independent_accuracy_assesment(model, datasets.target.processed.val, p.device)
# assert(_source_test_label_accuracy == source_test_label_accuracy)
# assert(_target_test_label_accuracy == target_test_label_accuracy)
# assert(_source_val_label_accuracy == source_val_label_accuracy)
# assert(_target_val_label_accuracy == target_val_label_accuracy)
experiment = {
"experiment_name": p.experiment_name,
"parameters": dict(p),
"results": {
"source_test_label_accuracy": source_test_label_accuracy,
"source_test_label_loss": source_test_label_loss,
"target_test_label_accuracy": target_test_label_accuracy,
"target_test_label_loss": target_test_label_loss,
"source_val_label_accuracy": source_val_label_accuracy,
"source_val_label_loss": source_val_label_loss,
"target_val_label_accuracy": target_val_label_accuracy,
"target_val_label_loss": target_val_label_loss,
"total_epochs_trained": total_epochs_trained,
"total_experiment_time_secs": total_experiment_time_secs,
"confusion": confusion,
"per_domain_accuracy": per_domain_accuracy,
},
"history": history,
"dataset_metrics": get_dataset_metrics(datasets, "ptn"),
}
ax = get_loss_curve(experiment)
plt.show()
get_results_table(experiment)
get_domain_accuracies(experiment)
print("Source Test Label Accuracy:", experiment["results"]["source_test_label_accuracy"], "Target Test Label Accuracy:", experiment["results"]["target_test_label_accuracy"])
print("Source Val Label Accuracy:", experiment["results"]["source_val_label_accuracy"], "Target Val Label Accuracy:", experiment["results"]["target_val_label_accuracy"])
json.dumps(experiment)
```
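The script above imports `get_average_magnitude` and `get_average_power` from `steves_utils.transforms` and uses them in the commented-out sanity checks. As a hedged illustration of what those checks expect, here is a plausible NumPy stand-in — an assumption for illustration, not the library's actual implementation:

```python
import numpy as np

# Hypothetical definitions (NOT the real steves_utils code): for a complex
# IQ sample vector x, average magnitude = mean(|x|) and average
# power = mean(|x|^2). The commented-out checks assert both are ~1.0
# after normalization.

def get_average_magnitude(x: np.ndarray) -> float:
    """Mean absolute value of a sample vector."""
    return float(np.mean(np.abs(x)))

def get_average_power(x: np.ndarray) -> float:
    """Mean squared magnitude (power) of a sample vector."""
    return float(np.mean(np.abs(x) ** 2))

# A unit-magnitude complex exponential passes both checks exactly.
n = np.arange(256)
x = np.exp(1j * 2 * np.pi * 0.1 * n)
assert np.isclose(get_average_magnitude(x), 1.0)
assert np.isclose(get_average_power(x), 1.0)
```

Under these assumed definitions, a dataset normalized to unit average power would satisfy the second assertion for every example.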
# Web_Crawling
## A crawling project to start the day
- I thought it would be great to have a service that automatically gathers the information I want at the start of each day and sends it to me as a message. Existing services included information I did not want, so I stopped looking into or paying for them, and this project became an opportunity to build one myself.

## Crawled sites
### Daum News
1. [media.daum.net](https://media.daum.net/)

### Kweather
2. [www.kweather.co.kr](http://www.kweather.co.kr/main/main.html)

### Daum Dictionary
3. [dic.daum.net/word](https://dic.daum.net/word/view.do?wordid=ekw000132285&q=project)

## GitHub
[Web-Crawling-repo](https://github.com/LeeJaeKeun14/Web_Crawling#need-install)
## Web crawling
### - Daum News
- scrapy

### - Kweather
- selenium

### - Daum Dictionary
- selenium


## Package layout

##### Web_Crawling
> Make_Module.ipynb : Jupyter notebook used to build the module files
>
> \_\_init__.py :
>
>>```python
>>__all__ = ["weather", "slack_file", "slack_msg_1", "diction", "mongodb", "make_msg"]
>>```
>
>
> **\_\_pycache__** : cache data written when the package runs
>
> diction.csv : CSV file of the crawled English words
>
> diction.py :
>
>>```python
>> import os
>> import pandas as pd
>> from selenium import webdriver
>> def open_driver(driver, name): ## navigate the driver to the URL for the given word
>> def find_dic(driver, dic): ## store each piece of the English word's info from the driver into dic
>>```
>
> eng.csv : CSV file of the English words to crawl
>
> make_msg.py :
>>```python
>> def make_msg(df, col): ## convert the crawled word info into a single str for output and return it
>>```
>
> mongodb.py :
>>```python
>> import pymongo
>>
>> client = pymongo.MongoClient("mongodb:// server : ip")
>> db = client.diction
>> collection = db.english
>>```
>
> **news** : scrapy startproject
>> items.py : news category, news title, link
>>
>> settings.py :
>>> since the Daum News robots.txt is set to Disallow,
>>> ROBOTSTXT_OBEY was changed to False
>>
>> spider.py : collects the top 5 news titles and links for each category
>>
>
> slack_file.py :
>>```python
>> import os
>> import slack
>>
>> def send_file(): ## send the crawled weather image to Slack
>>```
>
> slack_msg.py :
>>```python
>> import requests
>> import json
>>
>> def send_msg(msg): ## send the crawled str info to Slack
>>```
>
> weather.png : PNG file of the crawled weather image
>
> weather.py :
>>```python
>> from selenium import webdriver
>> import time
>> import os
>>
>> def weather(): ## fetch the weather info as an image and save it
>>```
>
>
>
## Problems encountered during the project
### Path issues in crontab
#### run.sh
```
rm -rf ~/python3/notebook/Web_Crawling_Project/Web_Crawling/news/news.csv
cd ~/python3/notebook/Web_Crawling_Project/Web_Crawling/news/
scrapy crawl News -o news.csv
```
Running this script from crontab failed with:
> /home/ubuntu/python3/notebook/Web_Crawling_Project/run.sh: 3: /home/ubuntu/python3/notebook/Web_Crawling_Project/run.sh: scrapy: not found

cron runs with a minimal PATH, so a `scrapy` executable installed in a user or virtualenv directory is not found; calling `scrapy` by its absolute path (or activating the virtual environment inside `run.sh`) resolves this.
# Takeaways
I built a project that fetches data from the internet, processes it, stores it in a database, and then automatically delivers that data directly to me as a packaged service. Seeing such a project through from start to finish gave me a real sense of completion and accomplishment, and because I get to use genuinely useful data every day, I consider it the most enjoyable project I have done so far.
## Next steps
### 1. Use the AWS Lambda service
- Use boto3 to open the server only at specific times for crawling, rather than keeping it running all day
- Set a time trigger in Lambda to call the boto3 function
- Ultimately, complete a service that automatically sends the email every morning
### 2. Word clouds or analysis models
- Further process the crawled data and develop a service that sends the refined results
```
import WC
WC.show()
```
# <img style="float: left; padding-right: 10px; width: 45px" src="https://raw.githubusercontent.com/Harvard-IACS/2018-CS109A/master/content/styles/iacs.png"> CS109B Data Science 2: Advanced Topics in Data Science
## Homework 7: Generative Models - Variational Autoencoders and GANs [100 pts]
**Harvard University**<br/>
**Spring 2021**<br/>
**Instructors**: Pavlos Protopapas, Mark Glickman and Chris Tanner<br/>
**DISCLAIMER**: No public reproduction of this homework or its solutions is allowed without the explicit consent of the authors.
**Due Date**: <font color="red">April 21 (11:59pm EST), 2021</font><br/>
<hr style="height:2pt">
---
```
#RUN THIS CELL
import requests
from IPython.core.display import HTML, display
styles = requests.get("https://raw.githubusercontent.com/Harvard-IACS/2018-CS109A/master/content/styles/cs109.css").text
HTML(styles)
```
### INSTRUCTIONS
- To submit your assignment follow the instructions given in Canvas.
- Please restart the kernel and run the entire notebook again before you submit.
- Running cells out of order is a common pitfall in Jupyter Notebooks. To make sure your code works restart the kernel and run the whole notebook again before you submit.
- We have tried to include all the libraries you may need to do the assignment in the imports cell provided below. **Please use only the libraries provided in those imports.**
- Please use .head() when viewing data. Do not submit a notebook that is **excessively long**.
- In questions that require code to answer, such as "calculate the $R^2$", do not just output the value from a cell. Write a `print()` function that clearly labels the output, includes a reference to the calculated value, and rounds it to a reasonable number of digits. **Do not hard code values in your printed output**. For example, this is an appropriate print statement:
```python
print(f"The R^2 is {R:.4f}")
```
- Your plots should be clearly labeled, including clear labels for the $x$ and $y$ axes as well as a descriptive title ("MSE plot" is NOT a descriptive title; "95% confidence interval of coefficients of polynomial degree 5" on the other hand is descriptive).
<hr style="height:2pt">
<a id="contents"></a>
## Notebook Contents
- [**Part 0 (Set Up Notebook)**](#part0)
- [**PART 1 [ 20 pts ]: Preprocess and Visualize data**](#part1)
- [Overview](#part1intro)
- [Questions](#part1questions)
- [Solutions](#part1solutions)
- [**PART 2 [ 20 pts ]: Set-up an AutoEncoder**](#part2)
- [Overview](#part2intro)
- [Questions](#part2questions)
- [Solutions](#part2solutions)
- [**PART 3 [ 20 pts ]: Set-up a Convolutional Variational Autoencoder**](#part3)
- [Overview](#part3intro)
- [Questions](#part3questions)
- [Solutions](#part3solutions)
- [**PART 4 [ 20 pts ]: Set-up a Conditional VAE**](#part4)
- [Overview](#part4intro)
- [Questions](#part4questions)
- [Solutions](#part4solutions)
- [**PART 5 [ 20 pts ]: GANs**](#part5)
- [Overview](#part5intro)
- [Questions](#part5questions)
- [Solutions](#part5solutions)
---
<a id="part0"></a>
## Overview
We are going to compare autoencoders (AEs), variational autoencoders (VAEs) and generative adversarial networks (GANs). The goal is to understand the particularities of each model and to learn how to build them.
In addition to standard VAEs, we will also study conditional VAEs. Conditional VAEs incorporate input attributes into the latent representation of an input, providing some structure in the latent space. We will analyze how conditional VAEs are capable of generating new photos that depend on specified attributes.
We are going to train our networks using [CelebA](http://mmlab.ie.cuhk.edu.hk/projects/CelebA.html), which is a large-scale face attributes dataset with more than 200K celebrity images and 40 different attribute annotations.
Run the following cell to load important libraries.
```
# DO NOT DELETE THIS CELL
# Load useful libraries
import numpy as np
import pandas as pd
import zipfile
import os
import tqdm
import pathlib
import time
from sklearn.model_selection import train_test_split
from sklearn.decomposition import PCA
from sklearn.manifold import TSNE
# TensorFlow and tf.keras
import tensorflow as tf
from tensorflow.keras import backend as K
from tensorflow.keras import layers
from tensorflow.keras import models
from tensorflow.keras import losses
from tensorflow.keras import optimizers
from tensorflow.keras import initializers
from tensorflow.keras.metrics import Accuracy
# Plotting libraries
from matplotlib import pyplot as plt
from matplotlib.colors import ListedColormap
plt.gray() #set colormap to gray
```
**Check availability of GPU**
Run this line to verify your environment lists an available GPU.
```
# DO NOT DELETE THIS CELL
tf.config.experimental.list_physical_devices('GPU')
```
---
```
# DO NOT DELETE THIS CELL
# Run this cell to define our download_celeb function
def download_celeb(
url,
filename,
filepath,
dirname,
dirpath,
chunk_size=1024,
overwrite=False,
):
"""Downloads and extracts CelebA dataset from CS109B S3 bucket"""
# Do not download if data already exists and overwrite==False
if not overwrite and os.path.isdir(os.path.join(dirpath, "2.0.1")):
print(
"Congratulations...the CelebA dataset already exists "
"locally!\nNo new downloads are required :o)\n"
)
# Download and extract CelebA if it doesn't already exist
else:
print("Downloading CelebA dataset to {}\n".format(filepath))
with requests.get(url, stream=True) as r:
chunk_size = 1024
length = int(r.headers['content-length'])
print(
"...downloading a {:.2f} GB file."
"This is going to take a while!".format(length/1e9)
)
time.sleep(0.5)
with open(filepath, 'wb') as f:
for chunk in tqdm.tqdm(
r.iter_content(chunk_size=chunk_size),
total=int(length/chunk_size),
unit="KB"
):
f.write(chunk)
print("...{} download complete :o)".format(filename))
if not os.path.isdir(dirpath):
os.makedirs(dirpath)
print(
"...extracting {}. This will take a while too :o(\n"
"".format(filename)
)
with zipfile.ZipFile(filepath, 'r') as zipobj:
zipobj.extractall(dirpath)
print(
"The CelebA dataset has been extracted to:"
"\n\n\t{}\n".format(dirpath)
)
# DO NOT DELETE THIS CELL
# RUN THIS CELL
working_dir = pathlib.Path().absolute()
# Uncomment line below to debug if images don't show
#print(working_dir)
os.chdir(working_dir)
%%time
# DO NOT DELETE THIS CELL
# Download the CelebA dataset from the CS109B S3 bucket
url = "https://cs109b-course-data.s3.amazonaws.com/CelebA/2.0.1.zip"
filename = "2.0.1.zip"
dirname = "data/celeb_a"
dirpath = os.path.join(working_dir, dirname)
filepath = os.path.join(working_dir, filename)
download_celeb(url, filename, filepath, dirname, dirpath)
# DO NOT DELETE THIS CELL
# Run this cell
# Assumes CelebA has been downloaded and extracted to `data/celeb_a/2.0.1/` under the working directory.
import tensorflow_datasets as tfds
train_celeb, val_celeb = tfds.load('celeb_a',
split=['train', 'validation'],
shuffle_files=False,
data_dir = os.path.join(working_dir, "data"),
download=False)
# DO NOT DELETE THIS CELL
# Global variables to define training/loading models.
# Modify as required. These are only suggested parameters.
train = True
epochs = 5 # number of epochs to train models
batch_size = 32
input_size = (64, 64, 3) # images will be cropped and resized to `input_size`.
```
---
<a id="part1"></a>
# PART 1. Preprocess and Visualize the data [20 pts]
[Return to contents](#contents)
<a id="part1intro"></a>
## Overview
CelebA has 202,599 face images of various celebrities, and training on the whole set requires large computational resources to fit your models. For this reason we recommend cropping the images and resizing them to reduce the computational costs. Feel free to adjust the image resolution depending on your computation capabilities. We recommend using `image_size = (64,64,3)`, but feel free to use a larger resolution, or a smaller one (but no smaller than `image_size = (32,32,3)`).
We provide the function `tf_norm_crop_resize_image` to normalize image pixels between `[0,1]`, to crop the height and width of images to `150x150` pixels, and to [resize](https://www.tensorflow.org/api_docs/python/tf/image/resize) images to the indicated size in the function call. Follow the instructions below to format your data for the different models you will need to train:
<a id="part1questions"></a>
## PART 1: Questions
<a id="q11"></a>
**[1.1:](#s11)** Create training and validation Dataset pipelines `train_ds` and `val_ds` from `train_celeb` and `val_celeb`, respectively. The Dataset pipelines you create have to return a tuple `(image, image)` which you will use to train your models with an MSE loss criteria: the first element is the input fed to the model, the second element is used to compute the loss of the model.
Make sure the Datasets follow the format: 1) In this order, normalize, crop, and resize using [map](https://www.tensorflow.org/api_docs/python/tf/data/Dataset#map), 2) [shuffle](https://www.tensorflow.org/api_docs/python/tf/data/Dataset#shuffle) the data, 3) apply [batching](https://www.tensorflow.org/api_docs/python/tf/data/Dataset#batch), 4) optionally use [prefetch](https://www.tensorflow.org/api_docs/python/tf/data/Dataset#prefetch)
<a id="q12"></a>
**[1.2:](#s12)** Create training and validation Dataset pipelines `train_cond_ds` and `val_cond_ds` from `train_celeb` and `val_celeb`, respectively. The Dataset pipelines you create have to return a tuple `((image, attributes), image)` to train your conditional VAE model. The first element of the tuple corresponds to the input of the model and consists of two tensors: the image and 2 selected attributes of your choice (for example, `Male` and `Smiling` attributes). You can choose your attributes from the ones [available](https://www.tensorflow.org/datasets/catalog/celeb_a). Make sure the attributes you use are easily identifiable in the images because you will need to alter them and expect visual changes (see Question 4.3). Convert the boolean attributes to `tf.float32` using [`tf.cast`](https://www.tensorflow.org/api_docs/python/tf/cast).
Make sure the Datasets follow the format: 1) In this order, normalize, crop, and resize using [map](https://www.tensorflow.org/api_docs/python/tf/data/Dataset#map), 2) [shuffle](https://www.tensorflow.org/api_docs/python/tf/data/Dataset#shuffle) the data, 3) apply [batching](https://www.tensorflow.org/api_docs/python/tf/data/Dataset#batch), 4) optionally use [prefetch](https://www.tensorflow.org/api_docs/python/tf/data/Dataset#prefetch)
<a id="q13"></a>
**[1.3:](#s13)** Pick 5 random images from the train dataset and plot them. Clearly label each plot with your chosen attributes and provide a written confirmation that they are correct.
```
# DO NOT DELETE THIS CELL
# Use this function to normalize, crop and resize your images.
def tf_norm_crop_resize_image(image, resize_dim):
"""Normalizes image to [0.,1.], crops to dims (150, 150, 3)
and resizes to `resize_dim`, returning an image tensor."""
image = tf.cast(image, tf.float32)/255.
image = tf.image.resize_with_crop_or_pad(image, 150, 150)
image = tf.image.resize(image, resize_dim)
image.set_shape(resize_dim + (3,))
return image
```
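As a plain-NumPy illustration of the crop step that `tf.image.resize_with_crop_or_pad` performs in the helper above (a sketch only; the TF call pads instead when a source dimension is smaller than the target, and the resize step is not shown):

```python
import numpy as np

# Center-crop sketch: take a centered out_h x out_w window out of a
# larger image array. CelebA source images are 218x178 pixels, so both
# dimensions are cropped down to 150x150 here.
def center_crop(image: np.ndarray, out_h: int, out_w: int) -> np.ndarray:
    h, w = image.shape[:2]
    top = max((h - out_h) // 2, 0)
    left = max((w - out_w) // 2, 0)
    return image[top:top + out_h, left:left + out_w]

img = np.zeros((218, 178, 3), dtype=np.float32)
cropped = center_crop(img, 150, 150)
assert cropped.shape == (150, 150, 3)
```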
<a id="part1solutions"></a>
## PART 1: Solutions
[Return to contents](#contents)
<a id="s11"></a>
<div class='exercise-r'>
**[1.1:](#q11)**
Create training and validation Dataset pipelines `train_ds` and `val_ds` from `train_celeb` and `val_celeb`, respectively. The Dataset pipelines you create have to return a tuple `(image, image)` which you will use to train your models with an MSE loss criteria: the first element is the input fed to the model, the second element is used to compute the loss of the model.
Make sure the Datasets follow the format: 1) In this order, normalize, crop, and resize using [map](https://www.tensorflow.org/api_docs/python/tf/data/Dataset#map), 2) [shuffle](https://www.tensorflow.org/api_docs/python/tf/data/Dataset#shuffle) the data, 3) apply [batching](https://www.tensorflow.org/api_docs/python/tf/data/Dataset#batch), 4) optionally use [prefetch](https://www.tensorflow.org/api_docs/python/tf/data/Dataset#prefetch)
</div>
```
# 1.1
# your code here
```
<a id="s12"></a>
<div class='exercise-r'>
**[1.2:](#q12)**
Create training and validation Dataset pipelines `train_cond_ds` and `val_cond_ds` from `train_celeb` and `val_celeb`, respectively. The Dataset pipelines you create have to return a tuple `((image, attributes), image)` to train your conditional VAE model. The first element of the tuple corresponds to the input of the model and consists of two tensors: the image and 2 selected attributes of your choice (for example, `Male` and `Smiling` attributes). You can choose your attributes from the ones [available](https://www.tensorflow.org/datasets/catalog/celeb_a). Make sure the attributes you use are easily identifiable in the images because you will need to alter them and expect visual changes (see Question 4.3). Convert the boolean attributes to `tf.float32` using [`tf.cast`](https://www.tensorflow.org/api_docs/python/tf/cast).
Make sure the Datasets follow the format: 1) In this order, normalize, crop, and resize using [map](https://www.tensorflow.org/api_docs/python/tf/data/Dataset#map), 2) [shuffle](https://www.tensorflow.org/api_docs/python/tf/data/Dataset#shuffle) the data, 3) apply [batching](https://www.tensorflow.org/api_docs/python/tf/data/Dataset#batch), 4) optionally use [prefetch](https://www.tensorflow.org/api_docs/python/tf/data/Dataset#prefetch)
</div>
```
# 1.2
# your code here
```
<a id="s13"></a>
<div class='exercise-r'>
**[1.3:](#q13)**
Pick 5 random images from the train dataset and plot them. Clearly label each plot with your chosen attributes and provide a written confirmation that they are correct.
</div>
```
# 1.3
# your code here
```
*your answer here*
---
<a id="part2"></a>
# PART 2. Set-up an AutoEncoder [20 points]
[Return to contents](#contents)
<a id="part2intro"></a>
## Overview
**Define custom convolutional layers**
For the following models, you will need to utilize custom Keras layers. Below we have provided a class skeleton which you must first complete. You should read the Keras [guidelines](https://www.tensorflow.org/guide/keras/custom_layers_and_models) on how to build custom layers. You are required to fill in the specific methods indicated below on each part.
You will then construct an autoencoder using both custom layers, and visualize the AE image reconstruction and latent spaces.
<a id="part2questions"></a>
## PART 2: Questions
<a id="q21"></a>
**[2.1:](#s21)** Set up a custom layer consisting of convolutional layers and complete the `__init__` and `call` methods of the `ConvEncoder` class. We recommend using 4 convolutional layers with 9, 18, 32, and 64 filters in each consecutive layer, kernels of size 5x5, `relu` activations, `same` padding, and strides of 2x2. The intention is to halve the spatial dimensions on each convolutional layer while augmenting the number of filters on deeper layers.
You will use this layer repeatedly when building your subsequent models.
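The halving claim above can be checked with the `same`-padding output-size rule `out = ceil(in / stride)`; a quick arithmetic sketch (assuming the recommended 64x64 input):

```python
import math

# Shape bookkeeping for the recommended encoder (a sketch, not required
# code): with `same` padding, a strided convolution yields
# ceil(n / stride) outputs along each spatial dimension.
def same_pad_out(n: int, stride: int) -> int:
    return math.ceil(n / stride)

size = 64
shapes = [size]
for _ in range(4):  # four stride-2 convolutions
    size = same_pad_out(size, 2)
    shapes.append(size)

# 64 -> 32 -> 16 -> 8 -> 4, so with 64 filters in the last layer the
# deepest feature map is (4, 4, 64).
assert shapes == [64, 32, 16, 8, 4]
```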
<a id="q22"></a>
**[2.2:](#s22)** Set up a custom layer consisting of convolutional layers and complete the `__init__` and `call` methods of the `ConvDecoder` class. We will refer to the input dimension of this layer as `latent_dim`. Make sure the output dimension of this layer is equal to the input dimension of your images, i.e., `(64,64,3)` if you followed our recommendation.
We recommend using 4 `UpSampling2D` layers, each followed by a `Conv2D` layer with 64, 32, 18, and 3 filters in each consecutive convolutional layer, kernels of size 5x5, `relu` activations, `same` padding, and strides of 1x1. Adjust activations as appropriate.
<a id="q23"></a>
**[2.3:](#s23)** Create a Keras model `AE`. Use the previously defined `ConvEncoder` and `ConvDecoder` layer classes you just completed to build your autoencoder. Between these layers, [flatten](https://www.tensorflow.org/api_docs/python/tf/keras/layers/Flatten) the input and incorporate two intermediate [Dense](https://www.tensorflow.org/api_docs/python/tf/keras/layers/Dense) layers and a [Reshape](https://www.tensorflow.org/api_docs/python/tf/keras/layers/Reshape) layer. More precisely, use the following architecture:
- `Input` image
- `ConvEncoder` layer
- `Flatten` layer
- **`Dense` layer with linear activation** and `bottleneck_dim=128` units (recommended dimension)
- **`Dense` layer with ReLU activation**
- `Reshape` layer to `latent_dim`
- `ConvDecoder` layer
<a id="q24"></a>
**[2.4:](#s24)** Why do we suggest that the first dense layer after the `ConvEncoder` layer use linear activation in the `AE` model? Is this a necessary requirement or not? Explain your answer.
<a id="q25"></a>
**[2.5:](#s25)** Train the `AE` model using MSE as the loss and an optimizer of your choice. We found 5 epochs sufficient for training, but feel free to adjust this value. Print a summary of the model.
**We recommend [saving](https://www.tensorflow.org/tutorials/keras/save_and_load) the trained model**.
<a id="q26"></a>
**[2.6:](#s26)** Visualize 5 random original and reconstructed images fed to the autoencoder from the validation data.
<a id="q27"></a>
**[2.7:](#s27)** Visualize the first 2 [PCA](https://scikit-learn.org/stable/modules/generated/sklearn.decomposition.PCA.html) components and a 2-dimensional [t-SNE](https://scikit-learn.org/stable/modules/generated/sklearn.manifold.TSNE.html) projection onto the plane of the latent representation for the validation images. Use the representation after the first dense layer where `bottleneck_dim=128` to compute the PCA and t-SNE projections. Retrieve at least `1024` images and color each input by class type (for example, `Male` and `Smiling` if these were your chosen attributes), for **each scatter plot visualization** and attribute. You need to present 4 scatter plots in total. Explain your results.
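The 2-component PCA projection asked for in 2.7 is itself a short computation; a minimal NumPy sketch via SVD (random data stands in for the `(n_images, 128)` bottleneck activations, and in the actual solution you would use scikit-learn's `PCA` as linked above):

```python
import numpy as np

# PCA-by-SVD sketch of the 2D projection in 2.7. `Z` is a stand-in for
# the bottleneck activations; here it is just random data to show the
# mechanics.
rng = np.random.default_rng(0)
Z = rng.normal(size=(1024, 128))

Zc = Z - Z.mean(axis=0)      # center each feature
U, S, Vt = np.linalg.svd(Zc, full_matrices=False)
Z2 = Zc @ Vt[:2].T           # project onto the first 2 principal axes

assert Z2.shape == (1024, 2)
```

The two columns of `Z2` are the scatter-plot coordinates; coloring them by attribute gives the requested visualization.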
<a id="part2solutions"></a>
## PART 2: Solutions
[Return to contents](#contents)
<a id="s21"></a>
<div class='exercise-r'>
**[2.1:](#q21)** Set up a custom layer consisting of convolutional layers and complete the `__init__` and `call` methods of the `ConvEncoder` class. We recommend using 4 convolutional layers with 9, 18, 32, and 64 filters in each consecutive layer, kernels of size 5x5, `relu` activations, `same` padding, and strides of 2x2. The intention is to halve the spatial dimensions on each convolutional layer while augmenting the number of filters on deeper layers.
You will use this layer repeatedly when building your subsequent models.
</div>
```
# 2.1
class ConvEncoder(layers.Layer):
"""
Convolutional Encoder Layer Class.
Converts an input into a latent representation.
"""
def __init__(self, input_shape, dropout_rate=0.0, name='encoder', **kwargs):
"""
Initializes the encoder layers and saves them as local attribute.
Input:
-input_shape: 3D-tuple with (rows, cols, channels) input image dimensions.
-dropout_rate: if dropout layers present.
Returns nothing.
"""
super(ConvEncoder, self).__init__(name=name, input_shape=input_shape, **kwargs)
## your code here
# end of your code here
def call(self, inputs, training=None):
"""
Runs the encoding inference for `inputs`.
Inputs:
-inputs: 4D-tensor with dimensions (batch_size,) + input_shape.
"""
## your code here
# end of your code here
return z
```
<a id="s22"></a>
<div class='exercise-r'>
**[2.2:](#q22)** Set up a custom layer consisting of convolutional layers and complete the `__init__` and `call` methods of the `ConvDecoder` class. We will refer to the input dimension of this layer as `latent_dim`. Make sure the output dimension of this layer is equal to the input dimension of your images, i.e., `(64,64,3)` if you followed our recommendation.
We recommend using 4 `UpSampling2D` layers, each followed by a `Conv2D` layer with 64, 32, 18, and 3 filters in each consecutive convolutional layer, kernels of size 5x5, `relu` activations, `same` padding, and strides of 1x1. Adjust activations as appropriate.
</div>
```
# 2.2
class ConvDecoder(layers.Layer):
"""
Convolutional Decoder Layer Class.
Converts z, the encoded digit vector, back into a readable digit.
"""
def __init__(self, input_shape, dropout_rate=0.0, name='decoder', **kwargs):
"""
Initializes the decoder architecture and saves it as a local attribute.
Input:
-input_shape: 3D-tuple with (rows, cols, channels) input representation.
-dropout_rate: if dropout layers present.
Returns nothing.
"""
super(ConvDecoder, self).__init__(name=name, input_shape=input_shape, **kwargs)
self.dropout_rate = dropout_rate
# your code here
# end your code here
def call(self, z, training=None):
# your code here
# end your code here
return x
```
<a id="s23"></a>
<div class='exercise-r'>
**[2.3:](#q23)** Create a Keras model `AE`. Use the previously defined `ConvEncoder` and `ConvDecoder` layer classes you just completed to build your autoencoder. Between these layers, [flatten](https://www.tensorflow.org/api_docs/python/tf/keras/layers/Flatten) the input and incorporate two intermediate [Dense](https://www.tensorflow.org/api_docs/python/tf/keras/layers/Dense) layers and a [Reshape](https://www.tensorflow.org/api_docs/python/tf/keras/layers/Reshape) layer. More precisely, use the following architecture:
- `Input` image
- `ConvEncoder` layer
- `Flatten` layer
- **`Dense` layer with linear activation** and `bottleneck_dim=128` units (recommended dimension)
- **`Dense` layer with ReLU activation**
- `Reshape` layer to `latent_dim`
- `ConvDecoder` layer
</div>
```
# 2.3
# your code here
```
<a id="s24"></a>
<div class='exercise-r'>
**[2.4:](#q24)**
Why do we suggest that the first dense layer after the `ConvEncoder` layer use linear activation in the `AE` model? Is this a necessary requirement or not? Explain your answer.
</div>
*Your answer here*
<a id="s25"></a>
<div class='exercise-r'>
**[2.5:](#q25)** Train the `AE` model using MSE as the loss and an optimizer of your choice. We found 5 epochs sufficient for training, but feel free to adjust this value. Print a summary of the model.
**We recommend [saving](https://www.tensorflow.org/tutorials/keras/save_and_load) the trained model**.
</div>
```
# 2.5
# your code here
```
<a id="s26"></a>
<div class='exercise-r'>
**[2.6:](#q26)**
Visualize 5 random original and reconstructed images fed to the autoencoder from the validation data.
</div>
```
# 2.6
# your code here
```
<a id="s27"></a>
<div class='exercise-r'>
**[2.7:](#q27)**
Visualize the first 2 [PCA](https://scikit-learn.org/stable/modules/generated/sklearn.decomposition.PCA.html) components and a 2-dimensional [t-SNE](https://scikit-learn.org/stable/modules/generated/sklearn.manifold.TSNE.html) projection onto the plane of the latent representation for the validation images. Use the representation after the first dense layer where `bottleneck_dim=128` to compute the PCA and t-SNE projections. Retrieve at least `1024` images and color each input by class type (for example, `Male` and `Smiling` if these were your chosen attributes), for **each scatter plot visualization** and attribute. You need to present 4 scatter plots in total. Explain your results.
</div>
```
# 2.7 (PCA visualization)
# your code here
```
**Explanation of PCA:**
*your answer here*
```
# 2.7 (t-SNE visualization)
# your code here
```
**Explanation of t-SNE:**
*your answer here*
---
<a id="part3"></a>
# PART 3. Set-up a Convolutional Variational Autoencoder [20 points]
[Return to contents](#contents)
<a id="part3intro"></a>
## Overview
In this exercise you will code a standard Convolutional Variational Autoencoder. You will first create a custom layer `Sampling` that takes the mean and log-variance of a Gaussian distribution as inputs, and returns a sample from that distribution. You will use this sample as a latent representation of your probabilistic encoder conditioned on the input image, and use it to reconstruct an image. You will build the complete VAE architecture and study its properties.
You will need to minimize the negative ELBO function formed by a reconstruction loss and a regularization term over the mean and variance of the probabilistic encoder. You will train two VAE models, one with no regularization, and a second with regularization.
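The sampling step described here is the reparameterization trick: draw `eps ~ N(0, I)` and shift/scale it by the predicted mean and log-variance, so gradients can flow through `z_mean` and `z_log_var`. A NumPy stand-in for the TF version (illustration only, not the required `Sampling` implementation):

```python
import numpy as np

rng = np.random.default_rng(0)

def sample_z(z_mean: np.ndarray, z_log_var: np.ndarray) -> np.ndarray:
    """Reparameterized sample: z = mu + sigma * eps, sigma = exp(log_var / 2)."""
    eps = rng.normal(size=z_mean.shape)
    return z_mean + np.exp(0.5 * z_log_var) * eps

# A batch of 32 latent vectors of dimension 128, sampled from N(0, I).
z = sample_z(np.zeros((32, 128)), np.zeros((32, 128)))
assert z.shape == (32, 128)
```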
<a id="part3questions"></a>
## PART 3: Questions
<a id="q31"></a>
**[3.1:](#s31)** Complete the `call` method of our `Sampling` keras layer class. This method takes as input the mean and log-variance vectors of a multivariate Gaussian distribution and returns a sampled tensor from this distribution.
<a id="q32"></a>
**[3.2:](#s32)** Create two Variational AutoEncoder models named `VAE1` and `VAE2`. Use the `ConvEncoder` and `ConvDecoder` layer classes you completed in Question 2 and the `Sampling` layer class from 3.1. Both VAEs should have the following architecture:
- `Input` image
- `ConvEncoder`
- `Flatten` layer
- `Dense` layer with linear activation and 128 units to predict the mean of the encoder conditional distribution $q_x(z)=N(\mu,\sigma)$
- `Dense` layer with linear activation and 128 units to predict the log-variance of the encoder conditional distribution $q_x(z)=N(\mu,\sigma)$
- `Sampling` layer you completed in Question 3.1
- `Dense` layer with ReLU activation
- `Reshape` layer: reshapes the output of the `Dense` layer into `latent_dim`
- `ConvDecoder`
Finally, `VAE1` should not use any regularization of the probabilistic encoder (from the prior).
Instead, `VAE2` should incorporate a KL loss to regularize the probabilistic encoder to normal Gaussian of zero mean and unit variance acting as prior, as explained in class.
You may use the following expression: `kl_loss = - reg * 0.5 * tf.reduce_mean(z_log_var - tf.square(z_mean) - tf.exp(z_log_var) + 1)`, where a reasonable value for `reg = 0.1` (feel free to adjust).
To include the intermediate loss in `VAE2`, you may use the function `add_loss` from keras models/layers as explained in the [documentation](https://www.tensorflow.org/guide/keras/train_and_evaluate).
**We recommend saving your trained models.**
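The KL expression quoted above can be sanity-checked numerically: it vanishes when the encoder already matches the `N(0, I)` prior and is positive otherwise. A NumPy sketch of the same formula (for checking your implementation, not part of the required model code):

```python
import numpy as np

def kl_term(z_mean, z_log_var, reg=0.1):
    """NumPy version of the regularization term from 3.2:
    -reg * 0.5 * mean(z_log_var - z_mean^2 - exp(z_log_var) + 1)."""
    return -reg * 0.5 * np.mean(
        z_log_var - np.square(z_mean) - np.exp(z_log_var) + 1
    )

# Zero when q(z|x) = N(0, I) exactly, positive for any other parameters.
assert np.isclose(kl_term(np.zeros(128), np.zeros(128)), 0.0)
assert kl_term(np.ones(128), np.zeros(128)) > 0
```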
<a id="q33"></a>
**[3.3:](#s33)** Why do we use linear activation values to encode the mean and log-variance of the probabilistic encoder? Explain your answer.
<a id="q34"></a>
**[3.4:](#s34)** Visualize 1 original image from the validation data and 5 reconstructions of that image using `VAE1` and 5 using `VAE2`. Comment on the 10 reconstructed images. Notice that you may need to tune the penalty regularization term to observe differences between `VAE1` and `VAE2` (there should be differences!).
<a id="q35"></a>
**[3.5:](#s35)** Visualize the first 2 PCA components and the 2-dimensional t-SNE decomposition of the validation data on both `VAE1` and `VAE2` obtained from the latent space (i.e. a sample drawn from the probabilistic encoder for a given input). Color the datapoints depending on the input's attributes of your choice (e.g. `Male` and `Smiling` if these were your choice). Draw 8 separate scatterplots in total (4 with PCA and 4 with t-SNE). Explain what you observe.
<a id="part3solutions"></a>
## PART 3: Solutions
[Return to contents](#contents)
<a id="s31"></a>
<div class='exercise-r'>
**[3.1:](#q31)** Complete the `call` method of our `Sampling` keras layer class. This method takes as input the mean and log-variance vectors of a multivariate Gaussian distribution and returns a sampled tensor from this distribution.
</div>
```
# 3.1
class Sampling(layers.Layer):
"""
Sampling layer in latent space.
Uses (z_mean, z_log_var) to sample z.
"""
def call(self, inputs):
"""Rturns a random sample from a Gaussian with mean and
log-variance indicated in inputs.
Inputs:
-inputs: tuple (z_mean, z_log_var)
Returns a sample z drawn from Gaussian.
"""
z_mean, z_log_var = inputs
# your code here
# end your code here
return z
```
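The computation `call` needs is the standard reparameterization trick, $z = \mu + e^{\frac{1}{2}\log\sigma^2}\,\epsilon$ with $\epsilon \sim N(0, I)$, which keeps the randomness outside the parameters so gradients can flow through $\mu$ and $\log\sigma^2$. A minimal NumPy sketch of the same computation (a Keras implementation would use `tf.shape`, `tf.random.normal`, and `tf.exp` on tensors instead):

```python
import numpy as np

def sample_z(z_mean, z_log_var, rng=None):
    """Reparameterization trick: z = mu + sigma * eps, eps ~ N(0, I).

    Sampling eps separately (rather than sampling z directly) is what
    lets gradients flow back through z_mean and z_log_var.
    """
    if rng is None:
        rng = np.random.default_rng(0)
    eps = rng.standard_normal(np.shape(z_mean))
    return z_mean + np.exp(0.5 * z_log_var) * eps
```

Note that `exp(0.5 * z_log_var)` converts the log-variance into a standard deviation, so a very negative log-variance collapses the sample onto the mean.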
<a id="s32"></a>
<div class='exercise-r'>
**[3.2:](#q32)** Create two Variational AutoEncoder models named `VAE1` and `VAE2`. Use the `ConvEncoder` and `ConvDecoder` layer classes you completed in Question 2 and the `Sampling` layer class from 3.1. Both VAEs should have the following architecture:
- `Input` image
- `ConvEncoder`
- `Flatten` layer
- `Dense` layer with linear activation and 128 units to predict the mean of the encoder conditional distribution $q_x(z)=N(\mu,\sigma)$
- `Dense` layer with linear activation and 128 units to predict the log-variance of the encoder conditional distribution $q_x(z)=N(\mu,\sigma)$
- `Sampling` layer you completed in Question 3.1
- `Dense` layer with ReLU activation
- `Reshape` layer: reshapes the output of the `Dense` layer into `latent_dim`
- `ConvDecoder`
Finally, `VAE1` should not use any regularization of the probabilistic encoder (from the prior).
`VAE2`, in contrast, should incorporate a KL loss that regularizes the probabilistic encoder toward a standard Gaussian prior of zero mean and unit variance, as explained in class.
You may use the following expression: `kl_loss = - reg * 0.5 * tf.reduce_mean(z_log_var - tf.square(z_mean) - tf.exp(z_log_var) + 1)`, where `reg = 0.1` is a reasonable value (feel free to adjust).
To include the intermediate loss in `VAE2`, you may use the function `add_loss` from keras models/layers as explained in the [documentation](https://www.tensorflow.org/guide/keras/train_and_evaluate).
**We recommend saving your trained models.**
</div>
```
# 3.2
# your code here
```
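As a sanity check on the KL expression quoted in the question, here is the same formula in NumPy (the Keras version uses `tf.reduce_mean` etc. and is registered on the model with `add_loss`). Because each per-dimension term is $\frac{1}{2}(\mu^2 + e^{\log\sigma^2} - \log\sigma^2 - 1) \ge 0$, the penalty is zero exactly when the encoder matches the $N(0, 1)$ prior and positive otherwise:

```python
import numpy as np

def kl_loss(z_mean, z_log_var, reg=0.1):
    """reg-weighted KL( N(mu, sigma^2) || N(0, I) ), averaged over latent dimensions."""
    return -reg * 0.5 * np.mean(z_log_var - np.square(z_mean) - np.exp(z_log_var) + 1)
```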
<a id="s33"></a>
<div class='exercise-r'>
**[3.3:](#q33)** Why do we use linear activation values to encode the mean and log-variance of the probabilistic encoder? Explain your answer.
</div>
*your answer here*
<a id="s34"></a>
<div class='exercise-r'>
**[3.4:](#q34)** Visualize 1 original image from the validation data and 5 reconstructions of that image using `VAE1` and 5 using `VAE2`. Comment on the 10 reconstructed images. Notice that you may need to tune the penalty regularization term to observe differences between `VAE1` and `VAE2` (there should be differences!).
</div>
```
# 3.4
# your code here
```
**Explanation:**
*your answer here*
<a id="s35"></a>
<div class='exercise-r'>
**[3.5:](#q35)** Visualize the first 2 PCA components and the 2-dimensional t-SNE decomposition of the validation data on both `VAE1` and `VAE2` obtained from the latent space (i.e. a sample drawn from the probabilistic encoder for a given input). Color the datapoints depending on the input's attributes of your choice (e.g. `Male` and `Smiling` if these were your choice). Draw 8 separate scatterplots in total (4 with PCA and 4 with t-SNE). Explain what you observe.
</div>
```
# 3.5
# your code here
```
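For the PCA half of 3.5, one way to get the first 2 components without scikit-learn is an SVD on the centered latent codes (for t-SNE you would typically reach for `sklearn.manifold.TSNE` instead). A hedged sketch, assuming `Z` holds one latent sample per validation image:

```python
import numpy as np

def pca_2d(Z):
    """Project latent codes Z (n_samples, latent_dim) onto their first 2 principal components."""
    Zc = Z - Z.mean(axis=0)                           # center each latent dimension
    U, S, Vt = np.linalg.svd(Zc, full_matrices=False) # rows of Vt are the principal axes
    return Zc @ Vt[:2].T                              # scores for the top-2 components
```

The returned columns are ordered by explained variance, so scatterplotting column 0 against column 1 gives the usual PCA view.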
**Explanation of PCA visualization:**
*your answer here*
```
# 3.5
# your code here
```
**Explanation of t-SNE decomposition:**
*your answer here*
<a id="part4"></a>
# PART 4. Set-up a Conditional VAE [20 points]
[Return to contents](#contents)
<a id="part4intro"></a>
## Overview
Conditional VAEs are similar to standard VAEs, except they allow us to also incorporate an attribute label into the latent space. When the model is trained in this form, the model learns to distinguish between the specific features associated with that label. This allows you to then "activate" labeled attributes in the latent space manually and explore the space of those representations in an explicit manner. We point you to two short tutorials on conditional VAEs ([one](https://wiseodd.github.io/techblog/2016/12/17/conditional-vae/) and [two](https://ijdykeman.github.io/ml/2016/12/21/cvae.html)). Additionally, you may be interested in reading the [original paper](http://papers.nips.cc/paper/5775-learning-structured-output-representation-using-deep-conditional-generative-models.pdf) and the [continuation paper](https://papers.nips.cc/paper/7880-learning-latent-subspaces-in-variational-autoencoders.pdf).
In this exercise you are going to build a conditional VAE, and reconstruct images by altering their attributes. For example, you could pick a set of non-smiling men and transform them by changing the label conditions in the latent space associated with 'Smiling' and/or 'Male'. You can choose whatever attributes you want, as long as the reconstructed latent space shows reasonable success when changing the attribute labels.
<a id="part4questions"></a>
## PART 4: Questions
<a id="q41"></a>
**[4.1:](#s41)** Create a conditional VAE keras model named `CVAE`. The conditional VAE should have the following architecture:
- `Input` for image
- `Input` for attributes
- `ConvEncoder` layer
- `Flatten` layer: flattens the output of the `ConvEncoder`
- [`Concatenate`](https://www.tensorflow.org/api_docs/python/tf/keras/layers/concatenate) layer: concatenates the latent representation of dimension `latent_dim[0]*latent_dim[1]*latent_dim[2]` with two attribute codes of your choice (`tf.float32` representations)
- `Dense` layer with linear activation and `bottleneck_dim` units to predict the mean of the encoder conditional distribution $q_x(z)=N(\mu,\sigma)$
- `Dense` layer with linear activation and `bottleneck_dim` units to predict the log-variance of the encoder conditional distribution $q_x(z)=N(\mu,\sigma)$
- `Sampling` layer you completed in Question 3.1
- [`Concatenate`](https://www.tensorflow.org/api_docs/python/tf/keras/layers/concatenate) layer: that combines your sample with the two attribute codes of your choice (`tf.float32` representations)
- `Dense` layer with ReLU activation
- `Reshape` layer
- `ConvDecoder`
- Output image of same size as input image
<a id="q42"></a>
**[4.2:](#s42)** Train the model using the data generator you completed in Question 1.2 (use mean squared error loss and an optimizer of your choice). Print a summary of your model.
**We recommend saving your trained models**.
<a id="q43"></a>
**[4.3:](#s43)** Select 5 photos with common attributes from the validation data and reconstruct these images after feeding them to the conditional variational autoencoder `CVAE`. Change the attributes to form the other three possible combinations and visualize all compositions. Comment on your compositions.
For example, if your choice of attributes were 'Male' and 'Smiling', you should reconstruct these images with all possible attribute combinations.
<a id="q44"></a>
**[4.4:](#s44)** Visualize the first 2 PCA components and the 2-dimensional t-SNE decomposition of the validation data of `CVAE` obtained from the latent space (i.e. a sample drawn from the probabilistic encoder for at least 1024 input images). Color the datapoints depending on the input's attributes (e.g. `Male` and `Smiling` if these were your choice). Draw 4 separate scatterplots in total. Explain what you observe.
<a id="part4solutions"></a>
## PART 4: Solutions
[Return to contents](#contents)
<a id="s41"></a>
<div class='exercise-r'>
**[4.1:](#q41)** Create a conditional VAE keras model named `CVAE`. The conditional VAE should have the following architecture:
- `Input` for image
- `Input` for attributes
- `ConvEncoder` layer
- `Flatten` layer: flattens the output of the `ConvEncoder`
- [`Concatenate`](https://www.tensorflow.org/api_docs/python/tf/keras/layers/concatenate) layer: concatenates the latent representation of dimension `latent_dim[0]*latent_dim[1]*latent_dim[2]` with two attribute codes of your choice (`tf.float32` representations)
- `Dense` layer with linear activation and `bottleneck_dim` units to predict the mean of the encoder conditional distribution $q_x(z)=N(\mu,\sigma)$
- `Dense` layer with linear activation and `bottleneck_dim` units to predict the log-variance of the encoder conditional distribution $q_x(z)=N(\mu,\sigma)$
- `Sampling` layer you completed in Question 3.1
- [`Concatenate`](https://www.tensorflow.org/api_docs/python/tf/keras/layers/concatenate) layer: that combines your sample with the two attribute codes of your choice (`tf.float32` representations)
- `Dense` layer with ReLU activation
- `Reshape` layer
- `ConvDecoder`
- Output image of same size as input image
</div>
```
# 4.1
# your code here
```
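The conditioning step in 4.1 is just a feature-wise concatenation of the flattened encoder output with the float-coded attributes. A shape-level sketch (the batch size, `4*4*64` flattened dimension, and two-attribute choice here are illustrative, not prescribed):

```python
import numpy as np

batch, flat_dim = 8, 4 * 4 * 64                  # e.g. a flattened ConvEncoder output
feats = np.random.default_rng(0).standard_normal((batch, flat_dim))
attrs = np.tile([1.0, 0.0], (batch, 1))          # e.g. Male=1, Smiling=0 as float codes
cond = np.concatenate([feats, attrs], axis=1)    # what keras.layers.Concatenate produces
```

The same two attribute codes are concatenated again after the `Sampling` layer, so both the encoder and decoder halves see the labels.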
<a id="s42"></a>
<div class='exercise-r'>
**[4.2:](#q42)** Train the model using the data generator you completed in Question 1.2 (use mean squared error loss and an optimizer of your choice). Print a summary of your model.
**We recommend saving your trained models**.
</div>
```
# 4.2
# your code here
```
<a id="s43"></a>
<div class='exercise-r'>
**[4.3:](#q43)** Select 5 photos with common attributes from the validation data and reconstruct these images after feeding them to the conditional variational autoencoder `CVAE`. Change the attributes to form the other three possible combinations and visualize all compositions. Comment on your compositions.
For example, if your choice of attributes were 'Male' and 'Smiling', you should reconstruct these images with all possible attribute combinations.
</div>
```
# 4.3
# your code here
```
**Comments on generated images:**
*your answer here*
<a id="s44"></a>
<div class='exercise-r'>
**[4.4:](#q44)** Visualize the first 2 PCA components and the 2-dimensional t-SNE decomposition of the validation data of `CVAE` obtained from the latent space (i.e. a sample drawn from the probabilistic encoder for at least 1024 input images). Color the datapoints depending on the input's attributes (e.g. `Male` and `Smiling` if these were your choice). Draw 4 separate scatterplots in total. Explain what you observe.
</div>
```
# 4.4
# your code here
```
**Explanation of PCA visualization:**
*your answer here*
```
# 4.4
# your code here
```
**Explanation of t-SNE visualization:**
*your answer here*
---
<a id="part5"></a>
# PART 5. Generative Adversarial Networks [20 points]
[Return to contents](#contents)
<a id="part5intro"></a>
## Overview
For the final exercise we are going to create a standard GAN composed of a generator network, and a discriminator network. GANs are tricky to train, so we encourage you to follow the given instructions for the deep convolutional GAN (DCGAN) when building your architecture and training your models.
However, feel free to explore and present other architectures if they produce better results. For instance, you can instead build a Wasserstein GAN (WGAN), as was illustrated in section. Just be certain to split the different components of your GAN (i.e. generator, discriminator, composition, and training) among the appropriate parts of Question 5 below.
<a id="part5questions"></a>
## PART 5: Questions
<a id="q51"></a>
**[5.1:](#s51)** Create a convolutional keras generator model. We recommend the following architecture:
- Input to the generator is a noise vector of dimension `bottleneck_dim` (you may rename it `noise_dim` if you prefer that terminology)
- `Dense` layer with `latent_dim[0]*latent_dim[1]*latent_dim[2]` units, and `LeakyReLU`
- `Reshape` to `latent_dim`
- 3 `UpSampling2D` layers each followed by a `Conv2D` layer with 128 filters, 4x4 kernels, 1x1 strides, `'same'` padding, followed by `LeakyReLU`. Adjust the `Conv2D` parameters and activation appropriately in the final layer.
Print a summary of your model.
<a id="q52"></a>
**[5.2:](#s52)** Create a convolutional discriminator model. Our recommended setup is to use 3 `Conv2D` layers with filters of size `(4,4)`, `'same'` padding, strides 2x2, and `LeakyReLU` activations. Compile the model with binary cross-entropy loss and an optimizer of your choice. Print a summary of the model.
<a id="q53"></a>
**[5.3:](#s53)** Create a DCGAN model that is a composition of the generator and the discriminator. The DCGAN model takes a Gaussian vector as input into the generator, and then the discriminator decides whether the output comes from the generator or from the true distribution. The DCGAN is composed of the trainable weights of the generator, and fixed discriminator weights. You can accomplish this behavior by fixing the discriminator training weights using `discriminator.trainable = False` before constructing the model. Once you have instantiated the DCGAN model, compile it with a binary cross-entropy loss and optimizer of your choice.
<a id="q54"></a>
**[5.4:](#s54)** Train your model (both DCGAN and discriminator) on the train images of the CelebA dataset. We recommend you display images after every training epoch to visualize performance. You should observe "sensible" images after 5 or fewer epochs, especially if you train on the full dataset. Consider training on a subset of the full dataset if it takes too long.
To train your DCGAN model, you will not be able to use the model's [`fit`](https://www.tensorflow.org/api_docs/python/tf/keras/Model#fit) function. Instead, consider using the [`train_on_batch`](https://www.tensorflow.org/api_docs/python/tf/keras/Model#train_on_batch) method, where you can manually feed inputs and training labels, and alternate between the DCGAN and the discriminator. Datasets are iterable, so you can use them directly in a for-loop to obtain mini-batches. You need to run these three steps inside the for-loop:
1. `train_on_batch` the discriminator on real images with labels equal to 1 (optionally, minus a small smoother). The smoother may help the generator train faster than the discriminator.
2. `train_on_batch` the discriminator on generated images obtained from random Gaussian input and labels equal to 0
3. `train_on_batch` the DCGAN by feeding noise inputs and labels of 1
**Show at least 8 generated images from your final trained DCGAN model for submission**. How do these images compare in quality to the faces generated via VAE? Explain.
<a id="q55"></a>
**[5.5:](#s55)** Standard GANs are composed as a generator and discriminator, as you just coded them. Could we substitute the discriminator with something else, like a KL loss with the empirical distribution? Why or why not? Explain your answer.
<a id="part5solutions"></a>
## PART 5: Solutions
[Return to contents](#contents)
<a id="s51"></a>
<div class='exercise-r'>
**[5.1:](#q51)** Create a convolutional keras generator model. We recommend the following architecture:
- Input to the generator is a noise vector of dimension `bottleneck_dim` (you may rename it `noise_dim` if you prefer that terminology)
- `Dense` layer with `latent_dim[0]*latent_dim[1]*latent_dim[2]` units, and `LeakyReLU`
- `Reshape` to `latent_dim`
- 3 `UpSampling2D` layers each followed by a `Conv2D` layer with 128 filters, 4x4 kernels, 1x1 strides, `'same'` padding, followed by `LeakyReLU`. Adjust the `Conv2D` parameters and activation appropriately in the final layer.
Print a summary of your model.
</div>
```
# 5.1
# your code here
```
<a id="s52"></a>
<div class='exercise-r'>
**[5.2:](#q52)** Create a convolutional discriminator model. Our recommended setup is to use 3 `Conv2D` layers with filters of size `(4,4)`, `'same'` padding, strides 2x2, and `LeakyReLU` activations. Compile the model with binary cross-entropy loss and an optimizer of your choice. Print a summary of the model.
</div>
```
# 5.2
# your code here
```
<a id="s53"></a>
<div class='exercise-r'>
**[5.3:](#q53)** Create a DCGAN model that is a composition of the generator and the discriminator. The DCGAN model takes a Gaussian vector as input into the generator, and then the discriminator decides whether the output comes from the generator or from the true distribution. The DCGAN is composed of the trainable weights of the generator, and fixed discriminator weights. You can accomplish this behavior by fixing the discriminator training weights using `discriminator.trainable = False` before constructing the model. Once you have instantiated the DCGAN model, compile it with a binary cross-entropy loss and optimizer of your choice.
</div>
```
# 5.3
# your code here
```
<a id="s54"></a>
<div class='exercise-r'>
**[5.4:](#q54)** Train your model (both DCGAN and discriminator) on the train images of the CelebA dataset. We recommend you display images after every training epoch to visualize performance. You should observe "sensible" images after 5 or fewer epochs, especially if you train on the full dataset. Consider training on a subset of the full dataset if it takes too long.
To train your DCGAN model, you will not be able to use the model's [`fit`](https://www.tensorflow.org/api_docs/python/tf/keras/Model#fit) function. Instead, consider using the [`train_on_batch`](https://www.tensorflow.org/api_docs/python/tf/keras/Model#train_on_batch) method, where you can manually feed inputs and training labels, and alternate between the DCGAN and the discriminator. Datasets are iterable, so you can use them directly in a for-loop to obtain mini-batches. You need to run these three steps inside the for-loop:
1. `train_on_batch` the discriminator on real images with labels equal to 1 (optionally, minus a small smoother). The smoother may help the generator train faster than the discriminator
2. `train_on_batch` the discriminator on generated images obtained from random Gaussian input and labels equal to 0
3. `train_on_batch` the DCGAN by feeding noise inputs and labels of 1
**Show at least 8 generated images from your final trained DCGAN model for submission**. How do these images compare in quality to the faces generated via VAE? Explain.
</div>
```
# 5.4
# your code here
```
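The three `train_on_batch` calls described in 5.4 can be wrapped in a single helper. This sketch assumes Keras-style models (anything exposing `train_on_batch(x, y)`); the names, `noise_dim`, and `smooth` values are illustrative, and it shows only the alternation and label choices, not a tuned training loop:

```python
import numpy as np

def gan_step(discriminator, dcgan, generator_predict, real_images,
             noise_dim=100, smooth=0.1, rng=None):
    """One DCGAN iteration: D on real (smoothed 1s), D on fake (0s),
    then G through the stacked model (labels 1, discriminator frozen)."""
    if rng is None:
        rng = np.random.default_rng(0)
    bs = len(real_images)
    # 1. discriminator on real images, labels 1 minus a small smoother
    d_real = discriminator.train_on_batch(real_images, np.full(bs, 1.0 - smooth))
    # 2. discriminator on generated images, labels 0
    fake = generator_predict(rng.standard_normal((bs, noise_dim)))
    d_fake = discriminator.train_on_batch(fake, np.zeros(bs))
    # 3. stacked DCGAN on fresh noise, labels 1 (trains the generator only)
    g = dcgan.train_on_batch(rng.standard_normal((bs, noise_dim)), np.ones(bs))
    return d_real, d_fake, g
```

Calling `gan_step(...)` once per mini-batch inside the epoch for-loop reproduces the three-step schedule from the question.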
*your answer here*
<a id="s55"></a>
<div class='exercise-r'>
**[5.5:](#q55)** Standard GANs are composed as a generator and discriminator, as you just coded them. Could we substitute the discriminator with something else, like a KL loss with the empirical distribution? Why or why not? Explain your answer.
</div>
*your answer here*
```
%load_ext autoreload
%autoreload 2
%matplotlib inline
#export
from exp.nb_07 import *
```
## Layerwise Sequential Unit Variance (LSUV)
### paper: https://arxiv.org/pdf/1511.06422.pdf
Getting the MNIST data and a CNN
[Jump_to lesson 11 video](https://course.fast.ai/videos/?lesson=11&t=235)
```
x_train,y_train,x_valid,y_valid = get_data()
x_train,x_valid = normalize_to(x_train,x_valid)
train_ds,valid_ds = Dataset(x_train, y_train),Dataset(x_valid, y_valid)
nh,bs = 50,512
c = y_train.max().item()+1
loss_func = F.cross_entropy
data = DataBunch(*get_dls(train_ds, valid_ds, bs), c)
mnist_view = view_tfm(1,28,28)
cbfs = [Recorder,
partial(AvgStatsCallback,accuracy),
CudaCallback,
partial(BatchTransformXCallback, mnist_view)]
nfs = [8,16,32,64,64]
class ConvLayer(nn.Module):
def __init__(self, ni, nf, ks=3, stride=2, sub=0., **kwargs):
super().__init__()
self.conv = nn.Conv2d(ni, nf, ks, padding=ks//2, stride=stride, bias=True)
self.relu = GeneralRelu(sub=sub, **kwargs)
def forward(self, x): return self.relu(self.conv(x))
@property
def bias(self): return -self.relu.sub
@bias.setter
def bias(self,v): self.relu.sub = -v
@property
def weight(self): return self.conv.weight
learn,run = get_learn_run(nfs, data, 0.6, ConvLayer, cbs=cbfs)
```
Now we're going to look at the paper [All You Need is a Good Init](https://arxiv.org/pdf/1511.06422.pdf), which introduces *Layer-wise Sequential Unit-Variance* (*LSUV*). We initialize our neural net with the usual technique, then we pass a batch through the model and check the outputs of the linear and convolutional layers. We can then rescale the weights according to the actual variance we observe on the activations, and subtract the mean we observe from the initial bias. That way we will have activations that stay normalized.
We repeat this process until we are satisfied with the mean/variance we observe.
Let's start by looking at a baseline:
```
run.fit(2, learn)
```
Now we recreate our model and we'll try again with LSUV. Hopefully, we'll get better results!
```
learn,run = get_learn_run(nfs, data, 0.6, ConvLayer, cbs=cbfs)
```
Helper function to get one batch of a given dataloader, with the callbacks called to preprocess it.
```
#export
def get_batch(dl, run):
run.xb,run.yb = next(iter(dl))
for cb in run.cbs: cb.set_runner(run)
run('begin_batch')
return run.xb,run.yb
xb,yb = get_batch(data.train_dl, run)
```
We only want the outputs of convolutional or linear layers. To find them, we need a recursive function. We can use `sum(list, [])` to concatenate the lists the function finds (`sum` applies the `+` operator between the elements of the list you pass it, beginning with the initial state in the second argument).
```
#export
def find_modules(m, cond):
if cond(m): return [m]
return sum([find_modules(o,cond) for o in m.children()], [])
def is_lin_layer(l):
lin_layers = (nn.Conv1d, nn.Conv2d, nn.Conv3d, nn.Linear, nn.ReLU)
return isinstance(l, lin_layers)
mods = find_modules(learn.model, lambda o: isinstance(o,ConvLayer))
mods
```
This is a helper function to grab the mean and std of the output of a hooked layer.
```
def append_stat(hook, mod, inp, outp):
d = outp.data
hook.mean,hook.std = d.mean().item(),d.std().item()
mdl = learn.model.cuda()
```
So now we can look at the mean and std of the conv layers of our model.
```
with Hooks(mods, append_stat) as hooks:
mdl(xb)
for hook in hooks: print(hook.mean,hook.std)
```
We first adjust the bias terms to make the means 0, then we adjust the standard deviations to make the stds 1 (with a threshold of 1e-3). The `mdl(xb) is not None` clause is just there to pass `xb` through `mdl` and compute all the activations so that the hooks get updated.
```
#export
def lsuv_module(m, xb):
h = Hook(m, append_stat)
while mdl(xb) is not None and abs(h.mean) > 1e-3: m.bias -= h.mean
while mdl(xb) is not None and abs(h.std-1) > 1e-3: m.weight.data /= h.std
h.remove()
return h.mean,h.std
```
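To make the mechanics concrete, here is the same bias-then-scale loop run on a plain linear layer in NumPy: the deliberately mis-scaled `W` stands in for a conv layer's weights, and the loop is a sketch of the idea rather than the hooked version above.

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.standard_normal((512, 100))          # a batch of inputs
W = rng.standard_normal((100, 100)) * 3.0    # deliberately mis-scaled weights
b = np.zeros(100)

for _ in range(10):
    out = x @ W + b
    if abs(out.mean()) < 1e-3 and abs(out.std() - 1) < 1e-3:
        break                                # activations are normalized: done
    b = b - out.mean()                       # shift the bias so the mean goes to 0
    W = W / out.std()                        # rescale the weights so the std goes to 1
```

Just as with the hooked model, a couple of passes through the batch are enough for the output statistics to settle near zero mean and unit variance.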
We execute that initialization on all the conv layers in order:
```
for m in mods: print(lsuv_module(m, xb))
```
Note that the mean doesn't stay exactly at 0, since we change the standard deviation afterwards by scaling the weight.
Training then begins on a better footing.
```
%time run.fit(2, learn)
```
LSUV is particularly useful for more complex and deeper architectures that are hard to initialize to get unit variance at the last layer.
## Export
```
!python notebook2script.py 07a_lsuv.ipynb
```
# 1 - Sequence to Sequence Learning with Neural Networks
In this series we'll be building a machine learning model to go from one sequence to another, using PyTorch and torchtext. This will be done on German to English translations, but the models can be applied to any problem that involves going from one sequence to another, such as summarization, i.e. going from a sequence to a shorter sequence in the same language.
In this first notebook, we'll start simple to understand the general concepts by implementing the model from the [Sequence to Sequence Learning with Neural Networks](https://arxiv.org/abs/1409.3215) paper.
## Introduction
The most common sequence-to-sequence (seq2seq) models are *encoder-decoder* models, which commonly use a *recurrent neural network* (RNN) to *encode* the source (input) sentence into a single vector. In this notebook, we'll refer to this single vector as a *context vector*. We can think of the context vector as being an abstract representation of the entire input sentence. This vector is then *decoded* by a second RNN which learns to output the target (output) sentence by generating it one word at a time.

The above image shows an example translation. The input/source sentence, "guten morgen", is passed through the embedding layer (yellow) and then input into the encoder (green). We also append a *start of sequence* (`<sos>`) and *end of sequence* (`<eos>`) token to the start and end of sentence, respectively. At each time-step, the input to the encoder RNN is both the embedding, $e$, of the current word, $e(x_t)$, as well as the hidden state from the previous time-step, $h_{t-1}$, and the encoder RNN outputs a new hidden state $h_t$. We can think of the hidden state as a vector representation of the sentence so far. The RNN can be represented as a function of both of $e(x_t)$ and $h_{t-1}$:
$$h_t = \text{EncoderRNN}(e(x_t), h_{t-1})$$
We're using the term RNN generally here, it could be any recurrent architecture, such as an *LSTM* (Long Short-Term Memory) or a *GRU* (Gated Recurrent Unit).
Here, we have $X = \{x_1, x_2, ..., x_T\}$, where $x_1 = \text{<sos>}, x_2 = \text{guten}$, etc. The initial hidden state, $h_0$, is usually either initialized to zeros or a learned parameter.
Once the final word, $x_T$, has been passed into the RNN via the embedding layer, we use the final hidden state, $h_T$, as the context vector, i.e. $h_T = z$. This is a vector representation of the entire source sentence.
Now we have our context vector, $z$, we can start decoding it to get the output/target sentence, "good morning". Again, we append start and end of sequence tokens to the target sentence. At each time-step, the input to the decoder RNN (blue) is the embedding, $d$, of current word, $d(y_t)$, as well as the hidden state from the previous time-step, $s_{t-1}$, where the initial decoder hidden state, $s_0$, is the context vector, $s_0 = z = h_T$, i.e. the initial decoder hidden state is the final encoder hidden state. Thus, similar to the encoder, we can represent the decoder as:
$$s_t = \text{DecoderRNN}(d(y_t), s_{t-1})$$
Although the input/source embedding layer, $e$, and the output/target embedding layer, $d$, are both shown in yellow in the diagram they are two different embedding layers with their own parameters.
In the decoder, we need to go from the hidden state to an actual word, therefore at each time-step we use $s_t$ to predict (by passing it through a `Linear` layer, shown in purple) what we think is the next word in the sequence, $\hat{y}_t$.
$$\hat{y}_t = f(s_t)$$
The words in the decoder are always generated one after another, with one per time-step. We always use `<sos>` for the first input to the decoder, $y_1$, but for subsequent inputs, $y_{t>1}$, we will sometimes use the actual, ground truth next word in the sequence, $y_t$ and sometimes use the word predicted by our decoder, $\hat{y}_{t-1}$. This is called *teacher forcing*, see a bit more info about it [here](https://machinelearningmastery.com/teacher-forcing-for-recurrent-neural-networks/).
When training/testing our model, we always know how many words are in our target sentence, so we stop generating words once we hit that many. During inference it is common to keep generating words until the model outputs an `<eos>` token or after a certain amount of words have been generated.
Once we have our predicted target sentence, $\hat{Y} = \{ \hat{y}_1, \hat{y}_2, ..., \hat{y}_T \}$, we compare it against our actual target sentence, $Y = \{ y_1, y_2, ..., y_T \}$, to calculate our loss. We then use this loss to update all of the parameters in our model.
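Teacher forcing is easiest to see in a framework-free sketch of the decoding loop (the names here are illustrative; in the actual model this logic lives inside the seq2seq forward pass, where `decoder_step` would be the embedding, RNN, and `Linear` layers combined):

```python
import random

def decode(decoder_step, trg, sos='<sos>', teacher_forcing_ratio=0.5, rnd=None):
    """Generate len(trg) predictions, feeding back either the ground-truth
    token (teacher forcing) or the model's own previous prediction."""
    if rnd is None:
        rnd = random.Random(0)
    inp, outputs = sos, []
    for t in range(len(trg)):
        pred = decoder_step(inp)                 # stands in for embedding + RNN + Linear
        outputs.append(pred)
        teacher_force = rnd.random() < teacher_forcing_ratio
        inp = trg[t] if teacher_force else pred  # next input to the decoder
    return outputs
```

With a ratio of 1 the decoder always sees the ground truth; with a ratio of 0 it must live with its own mistakes, which is closer to inference-time behavior.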
## Preparing Data
We'll be coding up the models in PyTorch and using torchtext to help us do all of the pre-processing required. We'll also be using spaCy to assist in the tokenization of the data.
```
import torch
import torch.nn as nn
import torch.optim as optim
from torchtext.datasets import Multi30k
from torchtext.data import Field, BucketIterator
import spacy
import numpy as np
import random
import math
import time
```
We'll set the random seeds for deterministic results.
```
SEED = 1234
random.seed(SEED)
np.random.seed(SEED)
torch.manual_seed(SEED)
torch.cuda.manual_seed(SEED)
torch.backends.cudnn.deterministic = True
```
Next, we'll create the tokenizers. A tokenizer is used to turn a string containing a sentence into a list of individual tokens that make up that string, e.g. "good morning!" becomes ["good", "morning", "!"]. From now on we'll talk about sentences as sequences of tokens rather than sequences of words. What's the difference? Well, "good" and "morning" are both words and tokens, but "!" is a token, not a word.
spaCy has a model for each language ("de_core_news_sm" for German and "en_core_web_sm" for English) which needs to be loaded so we can access the tokenizer of each model.
**Note**: the models must first be downloaded using the following on the command line:
```
python -m spacy download en_core_web_sm
python -m spacy download de_core_news_sm
```
We load the models as such:
```
spacy_de = spacy.load('de_core_news_sm')
spacy_en = spacy.load('en_core_web_sm')
```
Next, we create the tokenizer functions. These can be passed to torchtext and will take in the sentence as a string and return the sentence as a list of tokens.
In the paper we are implementing, they find it beneficial to reverse the order of the input which they believe "introduces many short term dependencies in the data that make the optimization problem much easier". We copy this by reversing the German sentence after it has been transformed into a list of tokens.
```
def tokenize_de(text):
"""
Tokenizes German text from a string into a list of strings (tokens) and reverses it
"""
return [tok.text for tok in spacy_de.tokenizer(text)][::-1]
def tokenize_en(text):
"""
Tokenizes English text from a string into a list of strings (tokens)
"""
return [tok.text for tok in spacy_en.tokenizer(text)]
```
torchtext's `Field`s handle how data should be processed. All of the possible arguments are detailed [here](https://github.com/pytorch/text/blob/master/torchtext/data/field.py#L61).
We set the `tokenize` argument to the correct tokenization function for each, with German being the `SRC` (source) field and English being the `TRG` (target) field. The field also appends the "start of sequence" and "end of sequence" tokens via the `init_token` and `eos_token` arguments, and converts all words to lowercase.
```
SRC = Field(tokenize = tokenize_de,
init_token = '<sos>',
eos_token = '<eos>',
lower = True)
TRG = Field(tokenize = tokenize_en,
init_token = '<sos>',
eos_token = '<eos>',
lower = True)
```
Next, we download and load the train, validation and test data.
The dataset we'll be using is the [Multi30k dataset](https://github.com/multi30k/dataset). This is a dataset with ~30,000 parallel English, German and French sentences, each with ~12 words per sentence.
`exts` specifies which languages to use as the source and target (source goes first) and `fields` specifies which field to use for the source and target.
```
train_data, valid_data, test_data = Multi30k.splits(exts = ('.de', '.en'),
fields = (SRC, TRG))
```
We can double check that we've loaded the right number of examples:
```
print(f"Number of training examples: {len(train_data.examples)}")
print(f"Number of validation examples: {len(valid_data.examples)}")
print(f"Number of testing examples: {len(test_data.examples)}")
```
We can also print out an example, making sure the source sentence is reversed:
```
print(vars(train_data.examples[0]))
```
The period is at the beginning of the German (src) sentence, so it looks like the sentence has been correctly reversed.
Next, we'll build the *vocabulary* for the source and target languages. The vocabulary is used to associate each unique token with an index (an integer). The vocabularies of the source and target languages are distinct.
Using the `min_freq` argument, we only include tokens that appear at least 2 times in our vocabulary. Tokens that appear only once are converted into an `<unk>` (unknown) token.
It is important to note that our vocabulary should only be built from the training set and not the validation/test set. This prevents "information leakage" into our model, giving us artificially inflated validation/test scores.
```
SRC.build_vocab(train_data, min_freq = 2)
TRG.build_vocab(train_data, min_freq = 2)
print(f"Unique tokens in source (de) vocabulary: {len(SRC.vocab)}")
print(f"Unique tokens in target (en) vocabulary: {len(TRG.vocab)}")
```
The final step of preparing the data is to create the iterators. These can be iterated on to return a batch of data which will have a `src` attribute (the PyTorch tensors containing a batch of numericalized source sentences) and a `trg` attribute (the PyTorch tensors containing a batch of numericalized target sentences). Numericalized is just a fancy way of saying they have been converted from a sequence of readable tokens to a sequence of corresponding indexes, using the vocabulary.
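As a small illustration of numericalization, using a hypothetical toy vocabulary (not the actual one built by `SRC.build_vocab`), each token is simply looked up in a string-to-index mapping:

```python
# Hypothetical toy vocabulary; the real one comes from SRC.build_vocab
stoi = {'<unk>': 0, '<pad>': 1, '<sos>': 2, '<eos>': 3, 'ein': 4, 'mann': 5, '.': 6}

tokens = ['<sos>', 'ein', 'mann', '.', '<eos>']
# unknown tokens fall back to the <unk> index
ids = [stoi.get(tok, stoi['<unk>']) for tok in tokens]
print(ids)  # [2, 4, 5, 6, 3]
```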
We also need to define a `torch.device`. This is used to tell torchText to put the tensors on the GPU or not. We use the `torch.cuda.is_available()` function, which will return `True` if a GPU is detected on our computer. We pass this `device` to the iterator.
When we get a batch of examples using an iterator we need to make sure that all of the source sentences are padded to the same length, the same with the target sentences. Luckily, torchText iterators handle this for us!
We use a `BucketIterator` instead of the standard `Iterator` as it creates batches in such a way that it minimizes the amount of padding in both the source and target sentences.
```
device = torch.device('cuda' if torch.cuda.is_available() else 'cpu')
BATCH_SIZE = 128
train_iterator, valid_iterator, test_iterator = BucketIterator.splits(
(train_data, valid_data, test_data),
batch_size = BATCH_SIZE,
device = device)
```
## Building the Seq2Seq Model
We'll be building our model in three parts. The encoder, the decoder and a seq2seq model that encapsulates the encoder and decoder and will provide a way to interface with each.
### Encoder
First, the encoder, a 2 layer LSTM. The paper we are implementing uses a 4-layer LSTM, but in the interest of training time we cut this down to 2-layers. The concept of multi-layer RNNs is easy to expand from 2 to 4 layers.
For a multi-layer RNN, the input sentence, $X$, after being embedded goes into the first (bottom) layer of the RNN and hidden states, $H=\{h_1, h_2, ..., h_T\}$, output by this layer are used as inputs to the RNN in the layer above. Thus, representing each layer with a superscript, the hidden states in the first layer are given by:
$$h_t^1 = \text{EncoderRNN}^1(e(x_t), h_{t-1}^1)$$
The hidden states in the second layer are given by:
$$h_t^2 = \text{EncoderRNN}^2(h_t^1, h_{t-1}^2)$$
Using a multi-layer RNN also means we'll need an initial hidden state as input per layer, $h_0^l$, and we will output a context vector per layer, $z^l$.
Without going into too much detail about LSTMs (see [this](https://colah.github.io/posts/2015-08-Understanding-LSTMs/) blog post to learn more about them), all we need to know is that they're a type of RNN which instead of just taking in a hidden state and returning a new hidden state per time-step, also take in and return a *cell state*, $c_t$, per time-step.
$$\begin{align*}
h_t &= \text{RNN}(e(x_t), h_{t-1})\\
(h_t, c_t) &= \text{LSTM}(e(x_t), h_{t-1}, c_{t-1})
\end{align*}$$
We can just think of $c_t$ as another type of hidden state. Similar to $h_0^l$, $c_0^l$ will be initialized to a tensor of all zeros. Also, our context vector will now be both the final hidden state and the final cell state, i.e. $z^l = (h_T^l, c_T^l)$.
Extending our multi-layer equations to LSTMs, we get:
$$\begin{align*}
(h_t^1, c_t^1) &= \text{EncoderLSTM}^1(e(x_t), (h_{t-1}^1, c_{t-1}^1))\\
(h_t^2, c_t^2) &= \text{EncoderLSTM}^2(h_t^1, (h_{t-1}^2, c_{t-1}^2))
\end{align*}$$
Note how only our hidden state from the first layer is passed as input to the second layer, and not the cell state.
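To make the returned tensors concrete, here is a quick shape check with toy dimensions (the sizes here are arbitrary, not the ones used later in this notebook):

```python
import torch
import torch.nn as nn

rnn = nn.LSTM(input_size=8, hidden_size=16, num_layers=2)
x = torch.randn(5, 3, 8)  # [src len, batch size, emb dim]

outputs, (hidden, cell) = rnn(x)

assert outputs.shape == (5, 3, 16)  # top-layer hidden state at every time-step
assert hidden.shape == (2, 3, 16)   # final hidden state, one per layer
assert cell.shape == (2, 3, 16)     # final cell state, one per layer
```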
So our encoder looks something like this:

We create this in code by making an `Encoder` module, which requires we inherit from `torch.nn.Module` and use the `super().__init__()` as some boilerplate code. The encoder takes the following arguments:
- `input_dim` is the size/dimensionality of the one-hot vectors that will be input to the encoder. This is equal to the input (source) vocabulary size.
- `emb_dim` is the dimensionality of the embedding layer. This layer converts the one-hot vectors into dense vectors with `emb_dim` dimensions.
- `hid_dim` is the dimensionality of the hidden and cell states.
- `n_layers` is the number of layers in the RNN.
- `dropout` is the amount of dropout to use. This is a regularization parameter to prevent overfitting. Check out [this](https://www.coursera.org/lecture/deep-neural-network/understanding-dropout-YaGbR) for more details about dropout.
We aren't going to discuss the embedding layer in detail during these tutorials. All we need to know is that there is a step before the words - technically, the indexes of the words - are passed into the RNN, where the words are transformed into vectors. To read more about word embeddings, check these articles: [1](https://monkeylearn.com/blog/word-embeddings-transform-text-numbers/), [2](http://p.migdal.pl/2017/01/06/king-man-woman-queen-why.html), [3](http://mccormickml.com/2016/04/19/word2vec-tutorial-the-skip-gram-model/), [4](http://mccormickml.com/2017/01/11/word2vec-tutorial-part-2-negative-sampling/).
The embedding layer is created using `nn.Embedding`, the LSTM with `nn.LSTM` and a dropout layer with `nn.Dropout`. Check the PyTorch [documentation](https://pytorch.org/docs/stable/nn.html) for more about these.
One thing to note is that the `dropout` argument to the LSTM is how much dropout to apply between the layers of a multi-layer RNN, i.e. between the hidden states output from layer $l$ and those same hidden states being used for the input of layer $l+1$.
In the `forward` method, we pass in the source sentence, $X$, which is converted into dense vectors using the `embedding` layer, and then dropout is applied. These embeddings are then passed into the RNN. As we pass a whole sequence to the RNN, it will automatically do the recurrent calculation of the hidden states over the whole sequence for us! Notice that we do not pass an initial hidden or cell state to the RNN. This is because, as noted in the [documentation](https://pytorch.org/docs/stable/nn.html#torch.nn.LSTM), if no hidden/cell state is passed to the RNN, it will automatically create an initial hidden/cell state as a tensor of all zeros.
The RNN returns: `outputs` (the top-layer hidden state for each time-step), `hidden` (the final hidden state for each layer, $h_T$, stacked on top of each other) and `cell` (the final cell state for each layer, $c_T$, stacked on top of each other).
As we only need the final hidden and cell states (to make our context vector), `forward` only returns `hidden` and `cell`.
The sizes of each of the tensors are left as comments in the code. In this implementation `n_directions` will always be 1, however note that bidirectional RNNs (covered in tutorial 3) will have `n_directions` as 2.
```
class Encoder(nn.Module):
def __init__(self, input_dim, emb_dim, hid_dim, n_layers, dropout):
super().__init__()
self.hid_dim = hid_dim
self.n_layers = n_layers
self.embedding = nn.Embedding(input_dim, emb_dim)
self.rnn = nn.LSTM(emb_dim, hid_dim, n_layers, dropout = dropout)
self.dropout = nn.Dropout(dropout)
def forward(self, src):
#src = [src len, batch size]
embedded = self.dropout(self.embedding(src))
#embedded = [src len, batch size, emb dim]
outputs, (hidden, cell) = self.rnn(embedded)
#outputs = [src len, batch size, hid dim * n directions]
#hidden = [n layers * n directions, batch size, hid dim]
#cell = [n layers * n directions, batch size, hid dim]
#outputs are always from the top hidden layer
return hidden, cell
```
### Decoder
Next, we'll build our decoder, which will also be a 2-layer (4 in the paper) LSTM.

The `Decoder` class does a single step of decoding, i.e. it outputs a single token per time-step. The first layer receives a hidden and cell state from the previous time-step, $(s_{t-1}^1, c_{t-1}^1)$, and feeds them through the LSTM with the current embedded token, $y_t$, to produce a new hidden and cell state, $(s_t^1, c_t^1)$. The subsequent layers use the hidden state from the layer below, $s_t^{l-1}$, and the previous hidden and cell states from their own layer, $(s_{t-1}^l, c_{t-1}^l)$. This gives equations very similar to those in the encoder.
$$\begin{align*}
(s_t^1, c_t^1) = \text{DecoderLSTM}^1(d(y_t), (s_{t-1}^1, c_{t-1}^1))\\
(s_t^2, c_t^2) = \text{DecoderLSTM}^2(s_t^1, (s_{t-1}^2, c_{t-1}^2))
\end{align*}$$
Remember that the initial hidden and cell states to our decoder are our context vectors, which are the final hidden and cell states of our encoder from the same layer, i.e. $(s_0^l,c_0^l)=z^l=(h_T^l,c_T^l)$.
We then pass the hidden state from the top layer of the RNN, $s_t^L$, through a linear layer, $f$, to make a prediction of what the next token in the target (output) sequence should be, $\hat{y}_{t+1}$.
$$\hat{y}_{t+1} = f(s_t^L)$$
The arguments and initialization are similar to the `Encoder` class, except we now have an `output_dim` which is the size of the vocabulary for the output/target. There is also the addition of the `Linear` layer, used to make the predictions from the top layer hidden state.
Within the `forward` method, we accept a batch of input tokens, previous hidden states and previous cell states. As we are only decoding one token at a time, the input tokens will always have a sequence length of 1. We `unsqueeze` the input tokens to add a sentence length dimension of 1. Then, similar to the encoder, we pass through an embedding layer and apply dropout. This batch of embedded tokens is then passed into the RNN with the previous hidden and cell states. This produces an `output` (hidden state from the top layer of the RNN), a new `hidden` state (one for each layer, stacked on top of each other) and a new `cell` state (also one per layer, stacked on top of each other). We then pass the `output` (after getting rid of the sentence length dimension) through the linear layer to receive our `prediction`. We then return the `prediction`, the new `hidden` state and the new `cell` state.
**Note**: as we always have a sequence length of 1, we could use `nn.LSTMCell`, instead of `nn.LSTM`, as it is designed to handle a batch of inputs that aren't necessarily in a sequence. `nn.LSTMCell` is just a single cell and `nn.LSTM` is a wrapper around potentially multiple cells. Using the `nn.LSTMCell` in this case would mean we don't have to `unsqueeze` to add a fake sequence length dimension, but we would need one `nn.LSTMCell` per layer in the decoder and to ensure each `nn.LSTMCell` receives the correct initial hidden state from the encoder. All of this makes the code less concise - hence the decision to stick with the regular `nn.LSTM`.
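For the curious, here is a minimal sketch of what one decoding step built from per-layer `nn.LSTMCell`s might look like. The dimensions are toy values and this is not a drop-in replacement for the `Decoder` class below:

```python
import torch
import torch.nn as nn

emb_dim, hid_dim, n_layers, batch_size = 8, 16, 2, 3

# one LSTMCell per layer; layer 0 consumes the embedding, later layers consume hidden states
cells = nn.ModuleList([nn.LSTMCell(emb_dim if l == 0 else hid_dim, hid_dim)
                       for l in range(n_layers)])

x = torch.randn(batch_size, emb_dim)  # one embedded token per example, no sequence dimension
hs = [torch.zeros(batch_size, hid_dim) for _ in range(n_layers)]
cs = [torch.zeros(batch_size, hid_dim) for _ in range(n_layers)]

inp = x
for l, cell in enumerate(cells):
    hs[l], cs[l] = cell(inp, (hs[l], cs[l]))
    inp = hs[l]  # the hidden state feeds the layer above

# inp is now the top-layer hidden state: [batch size, hid dim]
```

Note how no `unsqueeze` is needed, but the per-layer state bookkeeping is now on us.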
```
class Decoder(nn.Module):
def __init__(self, output_dim, emb_dim, hid_dim, n_layers, dropout):
super().__init__()
self.output_dim = output_dim
self.hid_dim = hid_dim
self.n_layers = n_layers
self.embedding = nn.Embedding(output_dim, emb_dim)
self.rnn = nn.LSTM(emb_dim, hid_dim, n_layers, dropout = dropout)
self.fc_out = nn.Linear(hid_dim, output_dim)
self.dropout = nn.Dropout(dropout)
def forward(self, input, hidden, cell):
#input = [batch size]
#hidden = [n layers * n directions, batch size, hid dim]
#cell = [n layers * n directions, batch size, hid dim]
#n directions in the decoder will both always be 1, therefore:
#hidden = [n layers, batch size, hid dim]
#context = [n layers, batch size, hid dim]
input = input.unsqueeze(0)
#input = [1, batch size]
embedded = self.dropout(self.embedding(input))
#embedded = [1, batch size, emb dim]
output, (hidden, cell) = self.rnn(embedded, (hidden, cell))
#output = [seq len, batch size, hid dim * n directions]
#hidden = [n layers * n directions, batch size, hid dim]
#cell = [n layers * n directions, batch size, hid dim]
#seq len and n directions will always be 1 in the decoder, therefore:
#output = [1, batch size, hid dim]
#hidden = [n layers, batch size, hid dim]
#cell = [n layers, batch size, hid dim]
prediction = self.fc_out(output.squeeze(0))
#prediction = [batch size, output dim]
return prediction, hidden, cell
```
### Seq2Seq
For the final part of the implementation, we'll implement the seq2seq model. This will handle:
- receiving the input/source sentence
- using the encoder to produce the context vectors
- using the decoder to produce the predicted output/target sentence
Our full model will look like this:

The `Seq2Seq` model takes in an `Encoder`, `Decoder`, and a `device` (used to place tensors on the GPU, if it exists).
For this implementation, we have to ensure that the number of layers and the hidden (and cell) dimensions are equal in the `Encoder` and `Decoder`. This is not always the case; we do not necessarily need the same number of layers or the same hidden dimension sizes in a sequence-to-sequence model. However, if we did something like having a different number of layers, we would need to make decisions about how this is handled. For example, if our encoder has 2 layers and our decoder only has 1, how is this handled? Do we average the two context vectors output by the encoder? Do we pass both through a linear layer? Do we only use the context vector from the highest layer? Etc.
Our `forward` method takes the source sentence, target sentence and a teacher-forcing ratio. The teacher forcing ratio is used when training our model. When decoding, at each time-step we will predict what the next token in the target sequence will be from the previous tokens decoded, $\hat{y}_{t+1}=f(s_t^L)$. With probability equal to the teacher forcing ratio (`teacher_forcing_ratio`) we will use the actual ground-truth next token in the sequence as the input to the decoder during the next time-step. However, with probability `1 - teacher_forcing_ratio`, we will use the token that the model predicted as the next input to the model, even if it doesn't match the actual next token in the sequence.
The first thing we do in the `forward` method is to create an `outputs` tensor that will store all of our predictions, $\hat{Y}$.
We then feed the input/source sentence, `src`, into the encoder and receive our final hidden and cell states.
The first input to the decoder is the start of sequence (`<sos>`) token. As our `trg` tensor already has the `<sos>` token prepended (all the way back when we defined the `init_token` in our `TRG` field) we get our $y_1$ by slicing into it. We know how long our target sentences should be (`trg_len`), so we loop that many times. The last token input into the decoder is the one **before** the `<eos>` token - the `<eos>` token is never input into the decoder.
During each iteration of the loop, we:
- pass the input, previous hidden and previous cell states ($y_t, s_{t-1}, c_{t-1}$) into the decoder
- receive a prediction, next hidden state and next cell state ($\hat{y}_{t+1}, s_{t}, c_{t}$) from the decoder
- place our prediction, $\hat{y}_{t+1}$/`output` in our tensor of predictions, $\hat{Y}$/`outputs`
- decide if we are going to "teacher force" or not
- if we do, the next `input` is the ground-truth next token in the sequence, $y_{t+1}$/`trg[t]`
- if we don't, the next `input` is the predicted next token in the sequence, $\hat{y}_{t+1}$/`top1`, which we get by doing an `argmax` over the output tensor
Once we've made all of our predictions, we return our tensor full of predictions, $\hat{Y}$/`outputs`.
**Note**: our decoder loop starts at 1, not 0. This means the 0th element of our `outputs` tensor remains all zeros. So our `trg` and `outputs` look something like:
$$\begin{align*}
\text{trg} = [<sos>, &y_1, y_2, y_3, <eos>]\\
\text{outputs} = [0, &\hat{y}_1, \hat{y}_2, \hat{y}_3, <eos>]
\end{align*}$$
Later on when we calculate the loss, we cut off the first element of each tensor to get:
$$\begin{align*}
\text{trg} = [&y_1, y_2, y_3, <eos>]\\
\text{outputs} = [&\hat{y}_1, \hat{y}_2, \hat{y}_3, <eos>]
\end{align*}$$
```
class Seq2Seq(nn.Module):
def __init__(self, encoder, decoder, device):
super().__init__()
self.encoder = encoder
self.decoder = decoder
self.device = device
assert encoder.hid_dim == decoder.hid_dim, \
"Hidden dimensions of encoder and decoder must be equal!"
assert encoder.n_layers == decoder.n_layers, \
"Encoder and decoder must have equal number of layers!"
def forward(self, src, trg, teacher_forcing_ratio = 0.5):
#src = [src len, batch size]
#trg = [trg len, batch size]
#teacher_forcing_ratio is probability to use teacher forcing
#e.g. if teacher_forcing_ratio is 0.75 we use ground-truth inputs 75% of the time
batch_size = trg.shape[1]
trg_len = trg.shape[0]
trg_vocab_size = self.decoder.output_dim
#tensor to store decoder outputs
outputs = torch.zeros(trg_len, batch_size, trg_vocab_size).to(self.device)
#last hidden state of the encoder is used as the initial hidden state of the decoder
hidden, cell = self.encoder(src)
#first input to the decoder is the <sos> tokens
input = trg[0,:]
for t in range(1, trg_len):
#insert input token embedding, previous hidden and previous cell states
#receive output tensor (predictions) and new hidden and cell states
output, hidden, cell = self.decoder(input, hidden, cell)
#place predictions in a tensor holding predictions for each token
outputs[t] = output
#decide if we are going to use teacher forcing or not
teacher_force = random.random() < teacher_forcing_ratio
#get the highest predicted token from our predictions
top1 = output.argmax(1)
#if teacher forcing, use actual next token as next input
#if not, use predicted token
input = trg[t] if teacher_force else top1
return outputs
```
# Training the Seq2Seq Model
Now we have our model implemented, we can begin training it.
First, we'll initialize our model. As mentioned before, the input and output dimensions are defined by the size of the vocabulary. The embedding dimensions and dropout for the encoder and decoder can be different, but the number of layers and the size of the hidden/cell states must be the same.
We then define the encoder, decoder and then our Seq2Seq model, which we place on the `device`.
```
INPUT_DIM = len(SRC.vocab)
OUTPUT_DIM = len(TRG.vocab)
ENC_EMB_DIM = 256
DEC_EMB_DIM = 256
HID_DIM = 512
N_LAYERS = 2
ENC_DROPOUT = 0.5
DEC_DROPOUT = 0.5
enc = Encoder(INPUT_DIM, ENC_EMB_DIM, HID_DIM, N_LAYERS, ENC_DROPOUT)
dec = Decoder(OUTPUT_DIM, DEC_EMB_DIM, HID_DIM, N_LAYERS, DEC_DROPOUT)
model = Seq2Seq(enc, dec, device).to(device)
```
Next up is initializing the weights of our model. In the paper they state they initialize all weights from a uniform distribution between -0.08 and +0.08, i.e. $\mathcal{U}(-0.08, 0.08)$.
We initialize weights in PyTorch by creating a function which we `apply` to our model. When using `apply`, the `init_weights` function will be called on every module and sub-module within our model. For each module we loop through all of the parameters and sample them from a uniform distribution with `nn.init.uniform_`.
```
def init_weights(m):
for name, param in m.named_parameters():
nn.init.uniform_(param.data, -0.08, 0.08)
model.apply(init_weights)
```
We also define a function that will calculate the number of trainable parameters in the model.
```
def count_parameters(model):
return sum(p.numel() for p in model.parameters() if p.requires_grad)
print(f'The model has {count_parameters(model):,} trainable parameters')
```
We define our optimizer, which we use to update our parameters in the training loop. Check out [this](http://ruder.io/optimizing-gradient-descent/) post for information about different optimizers. Here, we'll use Adam.
```
optimizer = optim.Adam(model.parameters())
```
Next, we define our loss function. The `CrossEntropyLoss` function calculates both the log softmax as well as the negative log-likelihood of our predictions.
Our loss function calculates the average loss per token, however by passing the index of the `<pad>` token as the `ignore_index` argument we ignore the loss whenever the target token is a padding token.
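To see `ignore_index` in action, here is a small self-contained check (toy logits, not our model's outputs) showing that the masked loss equals the loss averaged over only the non-padding positions:

```python
import torch
import torch.nn as nn

torch.manual_seed(0)
logits = torch.randn(4, 10)           # 4 target positions, vocabulary of 10
targets = torch.tensor([3, 1, 1, 7])  # suppose index 1 is the <pad> token

masked = nn.CrossEntropyLoss(ignore_index=1)(logits, targets)
# averaging manually over the two non-pad positions gives the same value
manual = nn.CrossEntropyLoss()(logits[[0, 3]], targets[[0, 3]])

assert torch.isclose(masked, manual)
```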
```
TRG_PAD_IDX = TRG.vocab.stoi[TRG.pad_token]
criterion = nn.CrossEntropyLoss(ignore_index = TRG_PAD_IDX)
```
Next, we'll define our training loop.
First, we'll set the model into "training mode" with `model.train()`. This will turn on dropout (and batch normalization, which we aren't using) and then iterate through our data iterator.
As stated before, our decoder loop starts at 1, not 0. This means the 0th element of our `outputs` tensor remains all zeros. So our `trg` and `outputs` look something like:
$$\begin{align*}
\text{trg} = [<sos>, &y_1, y_2, y_3, <eos>]\\
\text{outputs} = [0, &\hat{y}_1, \hat{y}_2, \hat{y}_3, <eos>]
\end{align*}$$
Here, when we calculate the loss, we cut off the first element of each tensor to get:
$$\begin{align*}
\text{trg} = [&y_1, y_2, y_3, <eos>]\\
\text{outputs} = [&\hat{y}_1, \hat{y}_2, \hat{y}_3, <eos>]
\end{align*}$$
At each iteration:
- get the source and target sentences from the batch, $X$ and $Y$
- zero the gradients calculated from the last batch
- feed the source and target into the model to get the output, $\hat{Y}$
- as the loss function only works on 2d inputs with 1d targets we need to flatten each of them with `.view`
- we slice off the first column of the output and target tensors as mentioned above
- calculate the gradients with `loss.backward()`
- clip the gradients to prevent them from exploding (a common issue in RNNs)
- update the parameters of our model by doing an optimizer step
- sum the loss value to a running total
Finally, we return the loss that is averaged over all batches.
```
def train(model, iterator, optimizer, criterion, clip):
model.train()
epoch_loss = 0
for i, batch in enumerate(iterator):
src = batch.src
trg = batch.trg
optimizer.zero_grad()
output = model(src, trg)
#trg = [trg len, batch size]
#output = [trg len, batch size, output dim]
output_dim = output.shape[-1]
output = output[1:].view(-1, output_dim)
trg = trg[1:].view(-1)
#trg = [(trg len - 1) * batch size]
#output = [(trg len - 1) * batch size, output dim]
loss = criterion(output, trg)
loss.backward()
torch.nn.utils.clip_grad_norm_(model.parameters(), clip)
optimizer.step()
epoch_loss += loss.item()
return epoch_loss / len(iterator)
```
Our evaluation loop is similar to our training loop, however as we aren't updating any parameters we don't need to pass an optimizer or a clip value.
We must remember to set the model to evaluation mode with `model.eval()`. This will turn off dropout (and batch normalization, if used).
We use the `with torch.no_grad()` block to ensure no gradients are calculated within the block. This reduces memory consumption and speeds things up.
The iteration loop is similar (without the parameter updates), however we must ensure we turn teacher forcing off for evaluation. This will cause the model to only use its own predictions to make further predictions within a sentence, which mirrors how it would be used in deployment.
```
def evaluate(model, iterator, criterion):
model.eval()
epoch_loss = 0
with torch.no_grad():
for i, batch in enumerate(iterator):
src = batch.src
trg = batch.trg
output = model(src, trg, 0) #turn off teacher forcing
#trg = [trg len, batch size]
#output = [trg len, batch size, output dim]
output_dim = output.shape[-1]
output = output[1:].view(-1, output_dim)
trg = trg[1:].view(-1)
#trg = [(trg len - 1) * batch size]
#output = [(trg len - 1) * batch size, output dim]
loss = criterion(output, trg)
epoch_loss += loss.item()
return epoch_loss / len(iterator)
```
Next, we'll create a function that we'll use to tell us how long an epoch takes.
```
def epoch_time(start_time, end_time):
elapsed_time = end_time - start_time
elapsed_mins = int(elapsed_time / 60)
elapsed_secs = int(elapsed_time - (elapsed_mins * 60))
return elapsed_mins, elapsed_secs
```
We can finally start training our model!
At each epoch, we'll be checking if our model has achieved the best validation loss so far. If it has, we'll update our best validation loss and save the parameters of our model (called `state_dict` in PyTorch). Then, when we come to test our model, we'll use the saved parameters used to achieve the best validation loss.
We'll be printing out both the loss and the perplexity at each epoch. It is easier to see a change in perplexity than a change in loss as the numbers are much bigger.
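As a quick illustration with made-up loss values, a modest drop in loss shows up as a much larger, easier-to-read drop in perplexity:

```python
import math

# hypothetical losses from two consecutive epochs
loss_before, loss_after = 4.5, 4.3

ppl_before = math.exp(loss_before)  # ~90.0
ppl_after = math.exp(loss_after)    # ~73.7
print(f'{ppl_before:.1f} -> {ppl_after:.1f}')
```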
```
N_EPOCHS = 10
CLIP = 1
best_valid_loss = float('inf')
for epoch in range(N_EPOCHS):
start_time = time.time()
train_loss = train(model, train_iterator, optimizer, criterion, CLIP)
valid_loss = evaluate(model, valid_iterator, criterion)
end_time = time.time()
epoch_mins, epoch_secs = epoch_time(start_time, end_time)
if valid_loss < best_valid_loss:
best_valid_loss = valid_loss
torch.save(model.state_dict(), 'tut1-model.pt')
print(f'Epoch: {epoch+1:02} | Time: {epoch_mins}m {epoch_secs}s')
print(f'\tTrain Loss: {train_loss:.3f} | Train PPL: {math.exp(train_loss):7.3f}')
print(f'\t Val. Loss: {valid_loss:.3f} | Val. PPL: {math.exp(valid_loss):7.3f}')
```
We'll load the parameters (`state_dict`) that gave our model the best validation loss and run the model on the test set.
```
model.load_state_dict(torch.load('tut1-model.pt'))
test_loss = evaluate(model, test_iterator, criterion)
print(f'| Test Loss: {test_loss:.3f} | Test PPL: {math.exp(test_loss):7.3f} |')
```
In the following notebook we'll implement a model that achieves improved test perplexity, but only uses a single layer in the encoder and the decoder.
```
# Install dependencies first (run these in a terminal, or prefix with '!' in a notebook):
# conda install pandas numpy matplotlib
# pip install plotly
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
import seaborn as sns
import plotly.express as px
from scipy import stats
import warnings
%matplotlib inline
warnings.filterwarnings("ignore")
from sklearn.model_selection import train_test_split, cross_val_score, GridSearchCV
from sklearn.preprocessing import StandardScaler, LabelEncoder
from sklearn.metrics import mean_squared_error, mean_absolute_error, r2_score, confusion_matrix, classification_report, accuracy_score
from sklearn.linear_model import ElasticNet, LogisticRegression
from sklearn.tree import DecisionTreeRegressor
from sklearn.ensemble import BaggingRegressor, AdaBoostRegressor, RandomForestClassifier
from sklearn.linear_model import LinearRegression
from sklearn.feature_selection import RFE
from scipy.stats import chi2_contingency
from sklearn.neighbors import KNeighborsClassifier
from sklearn.svm import SVC
import pickle
df = pd.read_csv("heart.csv")
df
```
1. Load in the data. The target column should be considered as whether a patient will develop heart disease or not.
```
X_df = df.drop("target", axis=1)
X_df.shape
y_df = df["target"]
y_df.shape
```
2. Explore the data. Notice all columns are numerical. Therefore separate the continuous from the discrete features.
```
df.info()
df.describe()
df.nunique()
df.shape
numerical_continuous = []
for column in df.columns:
if df[column].dtypes != "object":
if df[column].nunique() >= 10:
numerical_continuous.append(column)
numerical_continuous
numerical_discreet = []
for column in df.columns:
if df[column].dtypes != "object":
if df[column].nunique() < 10:
numerical_discreet.append(column)
numerical_discreet
```
3. Identify any presence of outliers in the continuous features and resolve them using the IQR method.
```
for column in numerical_continuous:
    df[column].plot(kind = "box")
    plt.title(column)
    plt.show()
def remove_outlier(df, column):
    q1 = df[column].quantile(0.25)
    q3 = df[column].quantile(0.75)
    iqr = q3 - q1
    fence_low = q1 - 1.5 * iqr
    fence_high = q3 + 1.5 * iqr
    # keep only the rows inside the IQR fences for this column
    return df.loc[(df[column] > fence_low) & (df[column] < fence_high)]
for column in numerical_continuous:
    df = remove_outlier(df, column)
```
4. Bin the continuous column values, apart from the column ‘oldpeak’.
```
le = LabelEncoder()
for column in numerical_continuous[:-1]:
df[column] = pd.qcut(df[column], q = [0, 0.25, 0.50, 0.75, 1])
df[column] = le.fit_transform(df[column])
df
```
5. Separate the features from the labels and use the most appropriate feature selection technique(s).
```
from sklearn.feature_selection import SelectKBest
from sklearn.feature_selection import chi2
feature_sel_df = df.drop(["target"], axis = 1)
# chi2 requires non-negative feature values, which holds after binning and label encoding
selector = SelectKBest(score_func = chi2, k = 3)
selected_df = selector.fit_transform(feature_sel_df, df["target"])
selected_df
```
6. Slice the data and scale the features.
```
scaler = StandardScaler()
scaled_df = df.copy()
scaled_df[numerical_continuous] = scaler.fit_transform(df[numerical_continuous])
print("mean:", scaled_df[numerical_continuous].mean())
print("standard deviation:", scaled_df[numerical_continuous].std())
```
7. Identify if the data is balanced. If not, sample the data using the most appropriate method, keeping the size of the data in mind.
```
# Check the class balance of the target column; if one class heavily
# outweighed the other we would resample (over/under-sampling) or use
# class weights before training
df["target"].value_counts(normalize = True)
```
8. Using at least 4 classification methods, identify the best machine learning model using their training and testing accuracy scores.
```
X = df.drop("target", axis = 1)
y = df["target"]
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size = 0.33, random_state = 42)
log_reg = LogisticRegression(random_state = 0)
svm_clf = SVC(random_state = 0)
knn_clf = KNeighborsClassifier()
rf_clf = RandomForestClassifier(random_state = 0)
models = {'LogisticRegression': log_reg, 'SVC': svm_clf, 'KNeighborsClassifier': knn_clf, 'RandomForestClassifier': rf_clf}
def model_training_testing(models):
for model_name, model in models.items():
model.fit(X_train, y_train)
y_predict_train = model.predict(X_train)
y_predict_test = model.predict(X_test)
print(f'{model_name} Training Accuracy:', accuracy_score(y_train, y_predict_train))
print(f'{model_name} Testing Accuracy:', accuracy_score(y_test, y_predict_test))
print('\n')
model_training_testing(models)
```
9. Hyper parameter tune the best model using grid search to identify the best performing model.
```
params = {'n_estimators': np.arange(10, 100, 10), 'random_state': [0], 'n_jobs': [1, -1]}
grid_search = GridSearchCV(RandomForestClassifier(), params, n_jobs = -1, cv = 5)
grid_search.fit(X_train, y_train)
grid_search.best_estimator_
```
10. Redefine the model instance based on the grid search results, train it and evaluate it using:
a. A classification report.
b. A visual representation and well labelled confusion matrix.
c. AUC score. (Explain the score in a markdown cell.)
d. ROC curve.
```
# Redefine the model instance using the best estimator found by the grid search
rf_clf_tuned = grid_search.best_estimator_
def model_evaluation(model, X, y, model_name):
y_predict = model.predict(X)
print(f'Model: {model_name} \n \n Classification Report: {classification_report(y, y_predict)}')
cnf_matrix = confusion_matrix(y, y_predict)
class_names = [0, 1]
tick_marks = np.arange(len(class_names))
plt.figure(figsize = (9, 7))
sns.heatmap(pd.DataFrame(cnf_matrix), annot = True, cmap = "YlGnBu", fmt = 'g')
plt.title(f'{model_name} Confusion Matrix', y = 1.1, fontsize = 22)
plt.ylabel('Actual Label', fontsize = 15)
plt.xlabel('Predicted Label', fontsize = 15)
model_evaluation(rf_clf_tuned, X_test, y_test, model_name = 'Random Forest Classifier Tuned')
from sklearn.metrics import roc_auc_score, roc_curve
y_pred_prob = rf_clf_tuned.predict_proba(X_test)[:, 1]
print(f'Area Under the Curve Score: {roc_auc_score(y_test, y_pred_prob)}')
fpr, tpr, thresholds = roc_curve(y_test, y_pred_prob)
df_roc = pd.DataFrame([fpr, tpr]).T
df_roc.columns = ['False Positive Rate', 'True Positive Rate']
import plotly.express as px
fig = px.line(df_roc, x = 'False Positive Rate', y = 'True Positive Rate')
fig.update_layout(title = dict(text = "ROC Curve", y = 0.95, x = 0.5,
xanchor = 'center', yanchor = 'top', font = dict(size = 20)))
```
11. Based on the results on the ROC curve, which threshold would be ideal given the nature of the data? (Explain in a markdown cell.)
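One common heuristic for choosing a threshold is Youden's J statistic, which picks the point maximizing TPR - FPR. A minimal sketch, using hypothetical ROC points in place of the notebook's `roc_curve` output:

```python
import numpy as np

# Hypothetical ROC points; in the notebook these would come from
# sklearn.metrics.roc_curve(y_test, y_pred_prob).
fpr = np.array([0.0, 0.1, 0.3, 0.6, 1.0])
tpr = np.array([0.0, 0.6, 0.8, 0.9, 1.0])
thresholds = np.array([1.9, 0.8, 0.5, 0.3, 0.1])

# Youden's J statistic: maximize TPR - FPR.
j_scores = tpr - fpr
best_threshold = thresholds[int(np.argmax(j_scores))]
print(best_threshold)  # 0.8 for these illustrative points
```

Whether this is "ideal" still depends on the relative cost of false positives versus false negatives for the data at hand.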
12. Save the model as ‘classification_model’.
```
import pickle
pickle.dump(rf_clf_tuned, open("classification_model.pkl", "wb"))
```
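Reloading the saved model later mirrors the dump call. A minimal round-trip sketch, with an in-memory buffer standing in for the file and a plain dict standing in for the trained model:

```python
import io
import pickle

# Round-trip sketch: any picklable object stands in for the trained model.
model_stub = {"name": "classification_model", "n_estimators": 50}
buf = io.BytesIO()
pickle.dump(model_stub, buf)
buf.seek(0)
restored = pickle.load(buf)
print(restored == model_stub)  # True
```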
# Cleaning up the academy awards dataset and creating a SQLite table
```
import pandas as pd
academy_awards = pd.read_csv("academy_awards.csv", encoding = "ISO-8859-1")
academy_awards.head()
for column in academy_awards.columns:
print("No. of unique values in '{0}':".format(column), academy_awards[column].nunique(), "\n")
won_not_yes_no = academy_awards[(academy_awards['Won?'] != 'YES') & (academy_awards['Won?'] != 'NO')]
print(won_not_yes_no)
for i in range(5,11):
print(academy_awards.iloc[:,i].value_counts())
academy_awards = academy_awards.iloc[:,:5]
academy_awards.head()
for index in won_not_yes_no.index:
academy_awards.loc[index,'Won?'] = 'YES'
print(academy_awards['Won?'].value_counts())
academy_awards['Year'].value_counts()
import re
split_year = academy_awards['Year'].map(lambda x: (re.search("[/]", x)) is not None)
print(academy_awards[split_year]['Year'].unique())
academy_awards[academy_awards['Year'].map(lambda x: (re.search("1934", x)) is not None)].head()
split_year_dict = {
'1932/33 (6th)': '1933 (6th)',
'1931/32 (5th)': '1932 (5th)',
'1930/31 (4th)': '1931 (4th)',
'1929/30 (3rd)': '1930 (3rd)',
'1928/29 (2nd)': '1929 (2nd)',
'1927/28 (1st)': '1928 (1st)'
}
for key, value in split_year_dict.items():
academy_awards.loc[academy_awards['Year'] == key, 'Year'] = value
print(academy_awards[split_year].loc[:,'Year'].unique())
academy_awards['Category'].value_counts()
academy_awards['Nominee'].value_counts()
academy_awards['Additional Info'].value_counts()
academy_awards["Year"] = academy_awards["Year"].str[0:4].astype("int64")
academy_awards["Year"]
later_than_2000 = academy_awards[academy_awards["Year"] > 2000]
later_than_2000['Year'].value_counts()
award_categories = ["Actor -- Leading Role","Actor -- Supporting Role","Actress -- Leading Role",\
"Actress -- Supporting Role"]
nominations = later_than_2000[later_than_2000["Category"].isin(award_categories)].copy()
nominations["Category"].value_counts()
replace_dict = { "YES": 1, "NO": 0 }
nominations["Won?"] = nominations["Won?"].map(replace_dict)
nominations["Won?"].value_counts()
nominations["Won"] = nominations["Won?"]
final_nominations = nominations.drop("Won?", axis=1)
final_nominations.head()
additional_info_1 = final_nominations["Additional Info"].str.rstrip("'}")
additional_info_2 = additional_info_1.str.split(" {'")
movie_names = additional_info_2.str[0]
characters = additional_info_2.str[1]
final_nominations["Movie"] = movie_names
final_nominations["Character"] = characters
final_nominations.head()
final_nominations = final_nominations.drop("Additional Info", axis=1)
final_nominations.head()
import sqlite3
conn = sqlite3.connect("nominations.db")
final_nominations.to_sql("nominations", conn, index = False)
def query(query_str):
result = conn.execute(query_str).fetchall()
return result
query("PRAGMA table_info(nominations);")
query("SELECT * FROM nominations LIMIT 10;")
conn.close()
```
## Next Steps
Explore the rest of our original dataset academy_awards.csv and brainstorm how to fix the rest of the dataset:
* The awards categories in older ceremonies were different than the ones we have today. What relevant information should we keep from older ceremonies?
* What are all the different formatting styles that the Additional Info column contains? Can we use tools like regular expressions to capture these patterns and clean them up?
* The nominations for the Art Direction category have lengthy values for Additional Info. What information is useful and how do we extract it?
* Many values in Additional Info don't contain the character name the actor or actress played. Should we toss out character name altogether as we expand our data? What tradeoffs do we make by doing so?
* What's the best way to handle awards ceremonies that included movies from 2 years?
E.g. see 1927/28 (1st) in the Year column.
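The `Movie {'Character'}` pattern handled above with `rstrip`/`split` can also be captured in a single regular expression. A sketch on a hypothetical value in that format:

```python
import re

# One regex capturing both fields of the "Movie {'Character'}" pattern.
# The sample string is hypothetical but follows the dataset's format.
info = "Biutiful {'Uxbal'}"
match = re.match(r"^(?P<movie>.*?) \{'(?P<character>.*)'\}$", info)
print(match.group("movie"))      # Biutiful
print(match.group("character"))  # Uxbal
```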
```
import pandas as pd
import numpy as np
# Tools
from collections import Counter
import pickle
# Preprocessing & Selections
from sklearn.preprocessing import LabelEncoder, OneHotEncoder
from sklearn.preprocessing import MinMaxScaler
from sklearn.feature_selection import SelectKBest, chi2, f_classif
from sklearn.model_selection import train_test_split
# Sampling
from imblearn.over_sampling import SMOTE
from imblearn.under_sampling import NearMiss
from imblearn.under_sampling import RandomUnderSampler
# Load dataframe
df = pd.read_pickle('../data/02_df_pre_model_2018.pkl')
# # Convert to Dask dataframe
# df = dd.from_pandas(df_pd, npartitions=16)
df.head()
# Train and test splitting
# Columns to exclude
exclude_cols = [
'target', # Target variable
'case_id',
'opened', # Feature Eng
'closed', # Feature Eng
'status',
'status_notes', # Needs NLP
'request_details', # Needs NLP
'address', # Needs NLP
# 'street',
'point',
# New items
'responsible_agency',
'category', # Need to choose 'category' or 'request_type' NOT BOTH
# 'request_type', # Needs NLP
'opened_year',
# 'opened_month_sin',
# 'opened_month_cos',
# 'opened_week_sin',
# 'opened_week_cos',
# 'opened_day_sin',
# 'opened_day_cos',
# 'opened_hour_sin',
# 'opened_hour_cos',
'police_district',
'supervisor_district',
# 'latitude',
'longitude',
]
# Predictor variables
X = df.drop(columns=exclude_cols)
# Get dummies for categorical variables
X = pd.get_dummies(X, drop_first=True)
# Target variable
y = df['target']
# Split train and test
X_train, X_test, y_train, y_test = train_test_split(X, y,
test_size=0.2,
random_state=2020,
stratify=y, # Stratify to keep same class ratios
shuffle=True # Shuffle data since it's ordered chronologically
)
X_train.head()
scaler = MinMaxScaler()
scaler.fit(X_train)
X_train_scaled = scaler.transform(X_train)
X_test_scaled = scaler.transform(X_test)
#Medium
# scaler = StandardScaler()
# scaler.fit(X_train)
# X_train = scaler.transform(X_train)
# X_test = scaler.transform(X_test)
# StackOverflow
# scaler = MinMaxScaler()
# X_train_scaled = scaler.fit_transform(X_train)
# model = SVC()
# model.fit(X_train_scaled, y_train)
# X_test_scaled = scaler.transform(X_test)
# y_pred = model.predict(X_test_scaled)
# # Pickle for later use
# with open('../data/03_X.pkl', 'wb') as f:
# pickle.dump(X, f)
# f.close()
# X_train.to_pickle('../data/X_train.pkl')
# X_test.to_pickle('../data/X_test.pkl')
# y_train.to_pickle('../data/y_train.pkl')
# y_test.to_pickle('../data/y_test.pkl')
```
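The scaler above is fit on the training split only and then applied to both splits, which keeps test-set statistics out of the fit. A numpy sketch of that min-max pattern:

```python
import numpy as np

# Fit min-max parameters on the training split only, then transform both
# splits, mirroring the MinMaxScaler usage above.
X_train = np.array([[1.0], [3.0], [5.0]])
X_test = np.array([[4.0], [7.0]])

lo, hi = X_train.min(axis=0), X_train.max(axis=0)
scale = lambda X: (X - lo) / (hi - lo)
print(scale(X_train).ravel())  # 0.0, 0.5, 1.0
print(scale(X_test).ravel())   # 0.75, 1.5 -- test values may fall outside [0, 1]
```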
# Feature Selection
```
def select_features(X_train, y_train, X_test):
'''Fit SelectKBest on the training data and return the fitted selector'''
fs = SelectKBest(score_func=chi2, k='all')
fs.fit(X_train, y_train)
# X_train_fs = fs.transform(X_train)
# X_test_fs = fs.transform(X_test)
# return X_train_fs, X_test_fs, fs
return fs
# Feature selection
# X_train_fs, X_test_fs, fs = select_features(X_train, y_train, X_test)
fs = select_features(X_train_scaled, y_train, X_test_scaled)
# # Feature scores
# features_df = pd.DataFrame(data=[X_train.columns, fs.scores_.astype(int)]).transpose()
# features_df.rename(columns={0: 'Feature', 1: 'ANOVA F-Value'}, inplace=True)
# features_df.sort_values(by='ANOVA F-Value', ascending=False, inplace=True)
# features_df.reset_index(drop=True, inplace=True)
# features_df
# Feature scores
features_df = pd.DataFrame(data=[X_train.columns, fs.scores_.astype(int)]).transpose()
features_df.rename(columns={0: 'Feature', 1: 'Chi2'}, inplace=True)
features_df.sort_values(by='Chi2', ascending=False, inplace=True)
features_df.reset_index(drop=True, inplace=True)
features_df
# Select features above threshold
threshold = 50
# best_features_df = features_df[(features_df['ANOVA F-Value'] > threshold)]
best_features_df = features_df[(features_df['Chi2'] > threshold)]
best_features_df
# best_features_df.to_pickle('../data/best_features_df.pkl')
# best_features_df = pd.read_pickle('../data/best_features_df.pkl')
# # Filter X_train & X_test with selected features
# X_train = X_train.filter(items=best_features_df['Feature'])
# X_test = X_test.filter(items=best_features_df['Feature'])
# # Clean column names
# X_train.columns = X_train.columns.str.strip().str.lower().str.replace(
# ' ', '_').str.replace('(', '').str.replace(')', '')
# X_test.columns = X_test.columns.str.strip().str.lower().str.replace(
# ' ', '_').str.replace('(', '').str.replace(')', '')
# Filter X_train & X_test with selected features
X_train_scaled = pd.DataFrame(X_train_scaled, columns=X_train.columns).filter(items=best_features_df['Feature'])
X_test_scaled = pd.DataFrame(X_test_scaled, columns=X_train.columns).filter(items=best_features_df['Feature'])
# Clean column names
X_train_scaled.columns = X_train_scaled.columns.str.strip().str.lower().str.replace(
' ', '_').str.replace('(', '').str.replace(')', '')
X_test_scaled.columns = X_test_scaled.columns.str.strip().str.lower().str.replace(
' ', '_').str.replace('(', '').str.replace(')', '')
print('df\t', df.shape)
print('X_train\t', X_train_scaled.shape)
print('X_test\t', X_test_scaled.shape)
print('y_train\t', y_train.shape)
print('y_test\t', y_test.shape)
```
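A side note on the choice above: `chi2` requires non-negative feature values, which is one reason MinMaxScaler precedes SelectKBest here. A toy sketch:

```python
import numpy as np
from sklearn.feature_selection import SelectKBest, chi2

# Toy illustration: chi2 scores each feature against the target and
# requires non-negative inputs (hence the earlier MinMaxScaler).
X = np.array([[0.0, 1.0], [0.2, 0.9], [0.9, 0.1], [1.0, 0.0]])
y = np.array([0, 0, 1, 1])
fs = SelectKBest(score_func=chi2, k="all").fit(X, y)
print(fs.scores_)  # one non-negative score per feature
```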
# Class Balancing
```
# Target variable
target_count = df['target'].value_counts()
# Print class balance
print(f'Class 0: {target_count[0]}')
print(f'Class 1: {target_count[1]}')
print(f'Proportion: {round(target_count[0] / target_count[1], 2)} : 1')
print(f'Percentage of Majority Class: {round(target_count[0] / sum(target_count) * 100, 1)}%')
```
## Oversampling
```
# # Define the oversampling method – SMOTE
# smote = SMOTE(random_state=2020)
# X_train_smote, y_train_smote = smote.fit_resample(X_train, y_train)
# # Summarize the new class distribution
# Counter(y_train_smote)
```
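For reference, SMOTE (commented out above) synthesizes new minority samples by interpolating between a minority point and one of its minority-class nearest neighbors. A minimal numpy sketch of that interpolation step:

```python
import numpy as np

rng = np.random.default_rng(0)

# SMOTE-style interpolation: new = a + gap * (b - a), with gap ~ U[0, 1).
# a and b stand in for a minority sample and one of its minority neighbors.
a = np.array([1.0, 2.0])
b = np.array([3.0, 6.0])
gap = rng.random()
synthetic = a + gap * (b - a)
print(synthetic)  # a point on the segment between a and b
```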
## Undersampling
```
# Define the undersampling method – RandomUnderSampler
rndm_under = RandomUnderSampler(random_state=2020)
# Transform the dataset
# X_train_under, y_train_under = rndm_under.fit_resample(X_train, y_train)
X_train_under, y_train_under = rndm_under.fit_resample(X_train_scaled, y_train)
# New class distribution
Counter(y_train_under)
# Pickle dataframes
df.to_pickle('../data/df.pkl')
X_train_under.to_pickle('../data/03_X_train_under.pkl')
X_test_scaled.to_pickle('../data/03_X_test.pkl')
y_train_under.to_pickle('../data/03_y_train_under.pkl')
y_test.to_pickle('../data/03_y_test.pkl')
# # Transform to Dask dataframes
# X_train_under = dd.from_pandas(X_train_under, npartitions=16)
# X_test = dd.from_pandas(X_test, npartitions=16)
# y_train_under = dd.from_pandas(y_train_under, npartitions=16)
# y_test = dd.from_pandas(y_test, npartitions=16)
```
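Under the hood, RandomUnderSampler simply drops majority-class rows at random until the classes match. A minimal numpy sketch of that idea:

```python
import numpy as np

rng = np.random.default_rng(2020)

# Minimal sketch of random undersampling: keep a random subset of each
# class sized to the smallest class.
y = np.array([0] * 8 + [1] * 2)
X = np.arange(len(y)).reshape(-1, 1)

minority_size = np.bincount(y).min()
keep = np.concatenate([
    rng.choice(np.where(y == cls)[0], size=minority_size, replace=False)
    for cls in np.unique(y)
])
X_under, y_under = X[keep], y[keep]
print(np.bincount(y_under))  # [2 2]
```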
# Appendix
```
# Dask
# cat /proc/cpuinfo
# from dask.distributed import Client, progress
# from sklearn.externals.joblib import parallel_backend
# client = Client(processes=False)
# # client = Client(processes=False, n_workers=4, threads_per_worker=8)
# client
# # client.close()
# # Define the undersampling method – NearMiss
# # Selects the closest examples from the majority class for each minority class.
# undersample = NearMiss(version=3, n_neighbors_ver3=3)
# # Transform the dataset
# X_train_under, y_train_under = undersample.fit_resample(X_train, y_train)
# # Summarize the new class distribution
# Counter(y_train_under)
```
# mlforecast
> Scalable machine learning based time series forecasting.
**mlforecast** is a framework to perform time series forecasting using machine learning models, with the option to scale to massive amounts of data using remote clusters.
[](https://github.com/Nixtla/mlforecast/actions/workflows/ci.yaml)
[](https://github.com/Nixtla/mlforecast/actions/workflows/lint.yaml)
[](https://pypi.org/project/mlforecast/)
[](https://pypi.org/project/mlforecast/)
[](https://anaconda.org/conda-forge/mlforecast)
[](https://codecov.io/gh/Nixtla/mlforecast)
[](https://github.com/Nixtla/mlforecast/blob/main/LICENSE)
## Install
### PyPI
`pip install mlforecast`
#### Optional dependencies
If you want more functionality you can instead use `pip install mlforecast[extra1,extra2,...]`. The current extra dependencies are:
* **aws**: adds the functionality to use S3 as the storage in the CLI.
* **cli**: includes the validations necessary to use the CLI.
* **distributed**: installs [dask](https://dask.org/) to perform distributed training. Note that you'll also need to install either [LightGBM](https://github.com/microsoft/LightGBM/tree/master/python-package) or [XGBoost](https://xgboost.readthedocs.io/en/latest/install.html#python).
For example, if you want to perform distributed training through the CLI using S3 as your storage you'll need all three extras, which you can get using: `pip install mlforecast[aws,cli,distributed]`.
### conda-forge
`conda install -c conda-forge mlforecast`
Note that this installation comes with the required dependencies for the local interface. If you want to:
* Use S3 as storage: `conda install -c conda-forge s3path`
* Perform distributed training: `conda install -c conda-forge dask` and either [LightGBM](https://github.com/microsoft/LightGBM/tree/master/python-package) or [XGBoost](https://xgboost.readthedocs.io/en/latest/install.html#python).
## How to use
The following provides a very basic overview, for a more detailed description see the [documentation](https://nixtla.github.io/mlforecast/).
### Programmatic API
```
#hide
import os
import shutil
from pathlib import Path
from IPython.display import display, Markdown
os.chdir('..')
def display_df(df):
display(Markdown(df.to_markdown()))
```
Store your time series in a pandas dataframe with an index named **unique_id** that identifies each time series, a column **ds** that contains the datestamps, and a column **y** with the values.
```
from mlforecast.utils import generate_daily_series
series = generate_daily_series(20)
display_df(series.head())
```
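`generate_daily_series` is just a convenience; the same layout can be built by hand, as in this sketch (the column and index names are the ones mlforecast expects, the values are toy data):

```python
import pandas as pd

# Two toy series in the expected layout: index named unique_id,
# datestamp column ds, target column y.
series = pd.DataFrame(
    {
        "ds": pd.date_range("2021-01-01", periods=4, freq="D").tolist() * 2,
        "y": [1.0, 2.0, 3.0, 4.0, 10.0, 20.0, 30.0, 40.0],
    },
    index=pd.Index(["id_0"] * 4 + ["id_1"] * 4, name="unique_id"),
)
print(series.index.name)  # unique_id
```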
Then create a `TimeSeries` object with the features that you want to use. These include lags, transformations on the lags and date features. The lag transformations are defined as [numba](http://numba.pydata.org/) *jitted* functions that transform an array, if they have additional arguments you supply a tuple (`transform_func`, `arg1`, `arg2`, ...).
```
from mlforecast.core import TimeSeries
from window_ops.expanding import expanding_mean
from window_ops.rolling import rolling_mean
ts = TimeSeries(
lags=[7, 14],
lag_transforms={
1: [expanding_mean],
7: [(rolling_mean, 7), (rolling_mean, 14)]
},
date_features=['dayofweek', 'month']
)
ts
```
Next define a model. If you want to use the local interface this can be any regressor that follows the scikit-learn API. For distributed training there are `LGBMForecast` and `XGBForecast`.
```
from sklearn.ensemble import RandomForestRegressor
model = RandomForestRegressor(random_state=0)
```
Now instantiate your forecast object with the model and the time series. There are two types of forecasters, `Forecast` which is local and `DistributedForecast` which performs the whole process in a distributed way.
```
from mlforecast.forecast import Forecast
fcst = Forecast(model, ts)
```
To compute the features and train the model using them call `.fit` on your `Forecast` object.
```
fcst.fit(series)
```
To get the forecasts for the next 14 days call `.predict(14)` on the forecaster. This will update the target with each prediction and recompute the features to get the next one.
```
predictions = fcst.predict(14)
display_df(predictions.head())
```
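The recursive scheme described above, where each prediction is appended to the history so the next step's lag features can be recomputed, can be sketched in plain Python. Everything here is an illustrative stand-in, not mlforecast's internals:

```python
# Recursive multi-step forecasting sketch: predictions feed back into the
# series so lag features stay defined. `MeanOfLags` is a toy model standing
# in for any fitted regressor.
history = [10.0, 12.0, 11.0, 13.0]

def make_features(series, lags=(1, 2)):
    # Hypothetical feature builder: the last `lag` values of the series.
    return [series[-lag] for lag in lags]

class MeanOfLags:
    def predict(self, feats):
        return sum(feats) / len(feats)

model = MeanOfLags()
forecasts = []
for _ in range(3):
    y_hat = model.predict(make_features(history))
    forecasts.append(y_hat)
    history.append(y_hat)  # the prediction becomes the next step's lag 1
print(forecasts)  # [12.0, 12.5, 12.25]
```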
### CLI
If you want to compute quick baselines, avoid some boilerplate, or just prefer CLIs, you can use the `mlforecast` binary with a configuration file like the following:
```
!cat sample_configs/local.yaml
```
The configuration is validated using `FlowConfig`.
This configuration will use the data in `data.prefix/data.input` to train and write the results to `data.prefix/data.output` both with `data.format`.
```
data_path = Path('data')
data_path.mkdir()
series.to_parquet(data_path/'train')
!mlforecast sample_configs/local.yaml
list((data_path/'outputs').iterdir())
#hide
shutil.rmtree(data_path)
```
```
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
import ipywidgets as widgets
from IPython.display import HTML
from datetime import datetime
# General
import os
# Drawing
import cartopy
import matplotlib.pyplot as plt
import cartopy.crs as ccrs
from cartopy.io import shapereader
from matplotlib.cm import get_cmap
import matplotlib.cm as cm
import matplotlib.colors as colors
from mpl_toolkits.axes_grid1 import make_axes_locatable
from math import floor
from matplotlib import patheffects
import matplotlib
if os.name == 'nt':
matplotlib.rc('font', family='Arial')
else: # might need tweaking, must support black triangle for N arrow
matplotlib.rc('font', family='DejaVu Sans')
from datetime import date
plt.ioff()
from IPython.display import display, Javascript
Javascript('document.title="{}"'.format("Coronavirus Enforcement"))
DATA_URL = 'https://www.scotland.police.uk/spa-media/ewloducq/coronavirus-enforcement-information-to-30-june-2021.xlsx'
def datesFromData(url):
raw_data = pd.read_excel(url, sheet_name=1)
earlyDate = (min(raw_data["Date"]).strftime("%d %B %Y"))
lateDate = (max(raw_data["Date"]).strftime("%d %B %Y"))
return earlyDate, lateDate
today = date.today()
date_formatted = today.strftime("%d %B %Y")
earliestDate, latestDate = datesFromData(DATA_URL)
EXPLANATION = """\
<div class="app-sidebar">
<p><em>Compare the prevalence of different intervention results - geospatially.</em></p>
<p>As a result of the 2020 introduction of the: <a href="https://www.legislation.gov.uk/ssi/2020/103/contents/made">The Health Protection (Coronavirus) (Restrictions) (Scotland) Regulations 2020</a>
and <a href="https://www.legislation.gov.uk/ukpga/2020/7/contents/enacted">Coronavirus Act 2020</a>,
Police Scotland were mandated to develop a ‘Coronavirus Interventions’ (CVI) recording system.</p>
<p>Police Scotland gather data in reference to the public co-operation levels with the new legislation.
However, <b>it should be noted</b>, the system relies on Police officers manually updating the system - with the specific co-operation level they <i>"experienced"</i> when they encounter a contravention of the legislation.</p>
<p>As such, the CVI data is indicative only and actual figures may be higher. CVI data is published <a href="https://www.scotland.police.uk/about-us/covid-19-police-scotland-response/enforcement-and-response-data/">weekly</a>
and broken down by date, Police Scotland division, subdivision and the following five categories of CVI:
<ul>
<li>Total number of people dispersed when informed</li>
<li>Total number of people dispersed but only when instructed</li>
<li>Total number of people removed from place or premise</li>
<li>Total number of people issued a fixed penalty notice (FPN)</li>
<li>Total number of people arrested</li>
</ul></p>
<p> The map can display CVI data from """ + earliestDate + """ to """ + latestDate + """, for each of the above categories,
in terms of: total numbers, numbers per 100,000 people, <a href="https://github.com/groegercesg/CovidEnforcementScotland#officer-numbers">numbers per 100 officers*</a> and average daily arrests within a Police Scotland division.</p>
</div>
"""
CREATED = """ \
<em>Created by: <a href="https://callumgroeger.com">Callum Groeger</a> | """ + date_formatted + """ </em>
<br>
"""
PROJECTION = """ \
<em>Projection: British National Grid (BNG) | License: MIT </em>
<br>
"""
DATA = """ \
<em>Data: Coronavirus Interventions (<a href="https://www.scotland.police.uk/about-us/covid-19-police-scotland-response/enforcement-and-response-data/">Police Scotland</a>),
Population Estimates 2019 (<a href="https://www.nrscotland.gov.uk/statistics-and-data/statistics/statistics-by-theme/population/population-estimates/mid-year-population-estimates/mid-2019">National Records of Scotland</a>),
Police Divisions (<a href="https://spatialdata.gov.scot/geonetwork/srv/eng/catalog.search;jsessionid=61F713CF39B3EE2F440F48E9C31BA806#/metadata/4364af71-167a-4236-b5a0-bd4109913231">Scottish Government</a>),
Police Staffing Q1 2021 (<a href="https://www.scotland.police.uk/about-us/police-scotland/police-scotland-officer-numbers/">Police Scotland</a>)
</em>
"""
GIF_ADDRESS = 'gif.gif'
HTML("""\
<style>
.app-title {
font-size: 2.5em;
}
.app-subtitle {
font-size: 1.5em;
}
.app-subtitle a {
color: #106ba3;
}
.app-subtitle a:hover {
text-decoration: underline;
}
.app-sidebar p {
margin-bottom: 1em;
line-height: 1.7;
}
.app-sidebar a {
color: #106ba3;
}
.app-sidebar a:hover {
text-decoration: underline;
}
</style>
""")
class App:
def __init__(self, df):
self._df = df
self._dfBASE = df.copy(deep=True)
# Get dropdown options, cut out the first five - as this is just Divisions
available_indicators = list(self._df)
del available_indicators[0:4]
# Loading GIF
with open(GIF_ADDRESS, 'rb') as f:
img = f.read()
# create loading bar widget, ready to display when running long function
self.loading_bar = widgets.Image(value=img)
self.loading_bar.layout.object_fit = 'contain'
self._dropdown1 = self._create_indicator_dropdown(available_indicators, 0)
self._dropdown2 = self._create_indicator_dropdown([("Total", 0), ("Per 100,000", 1), ("Per 100 officers", 2), ("Daily Average", 3)], 0)
self._plot_container = widgets.Output()
self._date_slider, date_slider_box = self._create_date_slider(
df, 'Date'
)
self._app_container = widgets.VBox([
widgets.HBox([
self._dropdown1,
self._dropdown2
]),
self._plot_container,
date_slider_box
], layout=widgets.Layout(align_items='center', flex='3 0 auto'))
# flex: https://minrk-ipywidgets.readthedocs.io/en/latest/examples/Widget%20Styling.html#Properties-of-the-items
self.container = widgets.VBox([
widgets.HTML(
(
'<h1 class="app-title">Police Scotland Coronavirus Interventions 2020-1</h1>'
'<h2 class="app-subtitle"><a href="https://github.com/groegercesg/CovidEnforcementScotland">Link to Github</a></h2>'
),
layout=widgets.Layout(margin='0 0 2em 0')
# margin: https://minrk-ipywidgets.readthedocs.io/en/latest/examples/Widget%20Styling.html#Shorthand-CSS-properties
),
widgets.HBox([
self._app_container,
widgets.HTML(EXPLANATION, layout=widgets.Layout(margin='0 0 0 2em')) # 0
], layout=widgets.Layout(margin='0 0 2em 0')),
# layout options for center: align_items='center', align_content='center'
widgets.HTML(
(
'<hr>'
)),
widgets.HBox([
widgets.HTML(CREATED),
widgets.HTML(PROJECTION),
widgets.HTML(DATA)
], layout=widgets.Layout(display='flex', flex_flow='column', align_items='center', width='100%'))
], layout=widgets.Layout(flex='1 1 auto', margin='0 auto 0 auto', max_width='1024px'))
self._update_app()
def _create_date_slider(self, df, column_name):
dates = df[column_name]
options = [(date.strftime(' %d %b %Y '), date) for date in dates]
index = (0, len(options)-1)
date_slider_label = widgets.Label('Date range: ')
date_slider = widgets.SelectionRangeSlider(
options=options,
index=index,
orientation='horizontal',
continuous_update=False,
layout=widgets.Layout(width='500px')
)
date_slider.observe(self._on_change, names=['value'])
date_slider_box = widgets.HBox([date_slider_label, date_slider],
layout=widgets.Layout(flex='1 1 auto', width='auto'))
# We need to manually set the description of our SelectionRangeSlider
# We can do this physically with Inspect Element
# .widget-inline-hbox .widget-readout {
# text-align: center;
# max-width: 200px;
# Discussion at: https://github.com/jupyter-widgets/ipywidgets/issues/2318
return date_slider, date_slider_box
def groupByDailyAverage(self, df, days):
df['Daily Average Asked / Informed'] = df.apply (lambda row: row['Asked / Informed']/days if days > 0 else 0, axis=1)
df['Daily Average Warned / Instructed'] = df.apply (lambda row: row['Warned / Instructed']/days if days > 0 else 0, axis=1)
df['Daily Average Removed from Place or Premises'] = df.apply (lambda row: row['Removed from Place or Premises']/days if days > 0 else 0, axis=1)
df['Daily Average FPN'] = df.apply (lambda row: row['FPN']/days if days > 0 else 0, axis=1)
df['Daily Average Arrested'] = df.apply (lambda row: row['Arrested']/days if days > 0 else 0, axis=1)
return df
def groupByDivision(self, df):
division_grouped = df.groupby('Division Letter', as_index=False
).agg(
{"Asked / Informed": "sum",
"Warned / Instructed": "sum",
"Removed from Place or Premises": "sum",
"FPN": "sum",
"Arrested": "sum",
})
return division_grouped
def groupByOfficerNumber(self, df):
# Process data of police numbers
# Data from: https://www.scotland.police.uk/about-us/police-scotland/police-scotland-officer-numbers/
officer_dict = {'A': 1115,
'C': 626,
'D': 919,
'E': 1099,
'G': 2434,
'J': 902,
'K': 613,
'L': 553,
'N': 661,
'P': 759,
'Q': 1388,
'U': 818,
'V': 382
}
div_officer_data = pd.DataFrame(officer_dict.items(), columns=['Division Letter', 'Officer Numbers'])
# Merge Data
dfMerge = pd.merge(df, div_officer_data, on='Division Letter')
dfMerge['Asked / Informed per 100 officers'] = dfMerge.apply (lambda row: row['Asked / Informed']/(row['Officer Numbers'] / 100) if row['Officer Numbers'] > 0 else 0, axis=1)
dfMerge['Warned / Instructed per 100 officers'] = dfMerge.apply (lambda row: row['Warned / Instructed']/(row['Officer Numbers'] / 100) if row['Officer Numbers'] > 0 else 0, axis=1)
dfMerge['Removed from Place or Premises per 100 officers'] = dfMerge.apply (lambda row: row['Removed from Place or Premises']/(row['Officer Numbers'] / 100) if row['Officer Numbers'] > 0 else 0, axis=1)
dfMerge['FPN per 100 officers'] = dfMerge.apply (lambda row: row['FPN']/(row['Officer Numbers'] / 100) if row['Officer Numbers'] > 0 else 0, axis=1)
dfMerge['Arrested per 100 officers'] = dfMerge.apply (lambda row: row['Arrested']/(row['Officer Numbers'] / 100) if row['Officer Numbers'] > 0 else 0, axis=1)
return dfMerge
def groupByPopulation(self, df):
# Process Population Data
# Data from: https://www.nrscotland.gov.uk/statistics-and-data/statistics/statistics-by-theme/population/population-estimates/mid-year-population-estimates/mid-2019
raw_pop_data = pd.read_csv(os.path.join(os.getcwd(), 'datasets', 'Population', 'mid-year-pop-est-19-data_Table 2.csv'))
# Keep only the specific columns
raw_pop_data = raw_pop_data[['Unnamed: 1','Unnamed: 2']]
# Rename them inplace
raw_pop_data.rename(columns={'Unnamed: 1': 'Council areas', 'Unnamed: 2': 'Population'}, inplace=True)
# Drop upper rows that are bad
raw_pop_data = raw_pop_data.drop(raw_pop_data.index[[0,1,2,3,4]]).reset_index(drop=True)
# Drop from certain row, minus 1 for the row above position
raw_pop_data = raw_pop_data[:(raw_pop_data[raw_pop_data['Council areas'] == 'NHS Board areas'].index[0] - 1)]
# Strip out all the commas in Objects of the Population column
raw_pop_data["Population"].replace(',','', regex=True, inplace=True)
# Convert string to int
raw_pop_data["Population"] = raw_pop_data["Population"].astype(str).astype(int)
# Group Pop Data
# We group the council areas into our police divisions
# First, set our index
raw_pop_data.set_index('Council areas')
# Create our division dictionary
div_dict = {'A': ["Moray", "Aberdeenshire", "Aberdeen City"],
'C': ["Stirling", "Clackmannanshire", "Falkirk"],
'D': ["Angus", "Dundee City", "Perth and Kinross"],
'E': ["City of Edinburgh"],
'G': ["East Renfrewshire", "Glasgow City", "East Dunbartonshire"],
'J': ["Scottish Borders", "East Lothian", "Midlothian", "West Lothian"],
'K': ["Inverclyde", "Renfrewshire"],
'L': ["Argyll and Bute", "West Dunbartonshire"],
'N': ["Na h-Eileanan Siar", "Orkney Islands", "Highland", "Shetland Islands"],
'P': ["Fife"],
'Q': ["South Lanarkshire", "North Lanarkshire"],
'U': ["South Ayrshire", "East Ayrshire", "North Ayrshire"],
'V': ["Dumfries and Galloway"]
}
div_pop = {}
def divisionPopulation(row):
incomingRow = row.tolist()
for div, councils in div_dict.items():
for council in councils:
if (council == incomingRow[0]):
if div in div_pop:
div_pop[div] += incomingRow[1]
else:
div_pop[div] = incomingRow[1]
raw_pop_data.apply(lambda row: divisionPopulation(row), axis=1)
div_pop_data = pd.DataFrame(div_pop.items(), columns=['Division Letter', 'Population'])
# Merge Data
dfMerge = pd.merge(df, div_pop_data, on='Division Letter')
dfMerge['Asked / Informed per 100k'] = dfMerge.apply (lambda row: row['Asked / Informed']/(row['Population'] / 100000) if row['Population'] > 0 else 0, axis=1)
dfMerge['Warned / Instructed per 100k'] = dfMerge.apply (lambda row: row['Warned / Instructed']/(row['Population'] / 100000) if row['Population'] > 0 else 0, axis=1)
dfMerge['Removed from Place or Premises per 100k'] = dfMerge.apply (lambda row: row['Removed from Place or Premises']/(row['Population'] / 100000) if row['Population'] > 0 else 0, axis=1)
dfMerge['FPN per 100k'] = dfMerge.apply (lambda row: row['FPN']/(row['Population'] / 100000) if row['Population'] > 0 else 0, axis=1)
dfMerge['Arrested per 100k'] = dfMerge.apply (lambda row: row['Arrested']/(row['Population'] / 100000) if row['Population'] > 0 else 0, axis=1)
return dfMerge
# The class method, we use this to gather the data then pre-process it
@classmethod
def from_url(cls, url):
raw_data = pd.read_excel(url, sheet_name=1)
raw_data.drop(['Unnamed: 9', 'Unnamed: 10', 'Unnamed: 11', 'Unnamed: 12', 'Unnamed: 13', 'Unnamed: 14', 'Unnamed: 15', 'Unnamed: 16', 'Unnamed: 17'], axis=1, inplace=True)
# Taking account of NaNs
# Explanation:
# The xlsx-to-pandas conversion reads the subdivision letter "NA" (Division "N",
# Area Command "Inverness") as NaN. The line below restores the SD letter of
# Inverness area commands to "NA".
raw_data.loc[raw_data["Area Commands"] == "Inverness", "SD Letter"] = raw_data["SD Letter"].fillna("NA")
if (raw_data.isnull().sum().sum() != 0):
raise ValueError("We have NaNs in our dataframe")
return cls(raw_data)
def _create_indicator_dropdown(self, indicators, initial_index):
# Handling for the two different types of Dropdown options storage
if isinstance(indicators[initial_index], tuple):
valuePos = initial_index
elif isinstance(indicators[initial_index], str):
valuePos = indicators[initial_index]
else:
raise ValueError("Unknown dropdown input type")
dropdown = widgets.Dropdown(options=indicators, value=valuePos)
dropdown.observe(self._on_change, names=['value'])
return dropdown
def utm_from_lon(self, lon):
"""
utm_from_lon - UTM zone for a longitude
Not right for some polar regions (Norway, Svalbard, Antarctica)
:param float lon: longitude
:return: UTM zone number
:rtype: int
"""
return floor( ( lon + 180 ) / 6) + 1
def scale_bar(self, ax, proj, length, location=(0.5, 0.05), linewidth=3,
units='km', m_per_unit=1000):
"""
http://stackoverflow.com/a/35705477/1072212
ax is the axes to draw the scalebar on.
proj is the projection the axes are in
location is center of the scalebar in axis coordinates ie. 0.5 is the middle of the plot
length is the length of the scalebar in km.
linewidth is the thickness of the scalebar.
units is the name of the unit
m_per_unit is the number of meters in a unit
"""
# find lat/lon center to find best UTM zone
x0, x1, y0, y1 = ax.get_extent(proj.as_geodetic())
# Projection in metres
utm = ccrs.UTM(self.utm_from_lon((x0+x1)/2))
# Get the extent of the plotted area in coordinates in metres
x0, x1, y0, y1 = ax.get_extent(utm)
# Turn the specified scalebar location into coordinates in metres
sbcx, sbcy = x0 + (x1 - x0) * location[0], y0 + (y1 - y0) * location[1]
# Generate the x coordinate for the ends of the scalebar
bar_xs = [sbcx - length * m_per_unit/2, sbcx + length * m_per_unit/2]
# buffer for scalebar
buffer = [patheffects.withStroke(linewidth=5, foreground="w")]
# Plot the scalebar with buffer
ax.plot(bar_xs, [sbcy, sbcy], transform=utm, color='k',
linewidth=linewidth, path_effects=buffer)
# buffer for text
buffer = [patheffects.withStroke(linewidth=3, foreground="w")]
# Plot the scalebar label
t0 = ax.text(sbcx, sbcy, str(length) + ' ' + units, transform=utm,
horizontalalignment='center', verticalalignment='bottom',
path_effects=buffer, zorder=2)
left = x0+(x1-x0)*0.05
# Plot the N arrow
t1 = ax.text(left, sbcy, u'\u25B2\nN', transform=utm,
horizontalalignment='center', verticalalignment='bottom',
path_effects=buffer, zorder=2)
# Plot the scalebar without buffer, in case covered by text buffer
ax.plot(bar_xs, [sbcy, sbcy], transform=utm, color='k',
linewidth=linewidth, zorder=3)
def _create_plot(self, indicator, scaling):
fig = plt.figure(figsize=(6,8), dpi=100)
projectionPARAM = ccrs.TransverseMercator(central_longitude=-2.0, central_latitude=49.0, false_easting=400000.0, false_northing=-100000.0, scale_factor=0.9996012717, approx=False)
ax = fig.add_subplot(1, 1, 1, projection=projectionPARAM)
ax.set_extent([-8, 0, 54.5, 61]) # Ideal coordinate map range for plotting Scotland
# Process the input from the second dropdown
if scaling == 0:
indicator = indicator
elif scaling == 1:
indicator = indicator + " per 100k"
elif scaling == 2:
indicator = indicator + " per 100 officers"
elif scaling == 3:
indicator = "Daily Average " + indicator
else:
raise ValueError("Bizarre dropdown option achieved, investigation needed!")
police_dict = (self._df[['Division Letter', indicator]].set_index('Division Letter').T.to_dict('records'))[0]
# Downloaded from: https://spatialdata.gov.scot/geonetwork/srv/eng/catalog.search;jsessionid=61F713CF39B3EE2F440F48E9C31BA806#/metadata/4364af71-167a-4236-b5a0-bd4109913231
area_file = os.path.join(os.getcwd(), 'datasets', 'ScottishPoliceDivisions', 'SG_ScottishPoliceDivisions_2019.shp')
police_divisions = shapereader.Reader(area_file)
norm = colors.Normalize(vmin=0., vmax=max(police_dict.values()))
cmap = get_cmap('PuBu')
for record in police_divisions.records():
code = record.attributes['AdminCode']
police_entry = police_dict.get(code, -1)
if police_entry == -1:
police_color = "Silver"
else:
police_color = cmap(police_entry/max(police_dict.values()))
ax.add_geometries(
[record.geometry],
#facecolor=numpy.random.rand(3,),
facecolor=police_color,
linewidth=0,
crs=projectionPARAM,
)
# following https://matplotlib.org/2.0.2/mpl_toolkits/axes_grid/users/overview.html#colorbar-whose-height-or-width-in-sync-with-the-master-axes
# we need to set axes_class=plt.Axes, else it attempts to create
# a GeoAxes as colorbar
divider = make_axes_locatable(ax)
ax_cb = divider.new_horizontal(size="5%", pad=0.1, axes_class=plt.Axes)
fig.add_axes(ax_cb)
sm = plt.cm.ScalarMappable(norm=norm, cmap=cmap)
cb = plt.colorbar(sm, cax=ax_cb)
cb.set_label(indicator)
#self.scale_bar(ax, projectionPARAM, 100, location=(0.85, 0.05)) # 100 km scale bar
plt.plot()
def _on_change(self, _):
self._update_app()
def trimToDateRange(self, df, date_range):
# We want to trim the data so that its range is in line with the date range
# First we replace _df with our base df, so we can then correctly apply the range
self._df = self._dfBASE.copy(deep=True)
# Then we cut it to only within our date range
df = self._df[self._df['Date'].between(*date_range)]
return df
def _process_data(self, date_range):
numberOfDays = (date_range[1] - date_range[0]).days
self._df = self.trimToDateRange(self._df, date_range)
self._df = self.groupByDivision(self._df)
self._df = self.groupByPopulation(self._df)
self._df = self.groupByOfficerNumber(self._df)
self._df = self.groupByDailyAverage(self._df, numberOfDays)
def _update_app(self):
# Pull in widget attributes for passing to plot function
indicator = self._dropdown1.value
scaling = self._dropdown2.value
date_range = self._date_slider.value
# Process data
self._process_data(date_range)
self._plot_container.clear_output()
# Could pass wait=True above to reduce flicker.
with self._plot_container:
#self.loading_bar.layout.visibility = 'visible'
self.loading_bar.layout.display = 'block'
display(self.loading_bar)
self._create_plot(indicator, scaling)
plt.show()
#self.loading_bar.layout.visibility = 'hidden'
self.loading_bar.layout.display = 'none'
app = App.from_url(DATA_URL)
app.container
```
# Intro to neural networks: Regression
This notebook is based on the SEG Geophysical Tutorial from August 2018 by Graham Ganssle: https://github.com/seg/tutorials-2018.
The idea is to introduce the basic components of an artificial neural network and implement a simple version of one using NumPy.
We'll use a regression task — predicting a DT log from other logs.
```
import numpy as np
import matplotlib.pyplot as plt
from tqdm import tqdm # gives progress bar on iterable
# Demonstrate tqdm
for n in tqdm(range(5_000_000)):
pass
```
## Activation functions
A neural network is nothing but a nonlinear system of equations like $\mathbf{y} = \sigma(\mathbf{W}\mathbf{x} + \mathbf{b})$.
There are multiple functions $\sigma$ used to introduce the non-linear component. One of the earliest, the *sigmoid* (aka *logistic*) function, is given by:
$$ \sigma(z) = \frac{1}{1 + \operatorname{e}^{-z}} $$
Its derivative is:
$$ \frac{\mathrm{d} \sigma(z)}{\mathrm{d} z} = \sigma(z) (1 - \sigma(z)) $$
We need the derivative for the _backpropagation_ process that enables neural networks to learn efficiently. Backpropagation adjusts the parameters of the neural network by injecting an error signal backwards through the network's layers, from the last to the first.
We can implement the logistic function like this in Python:
```
def logistic(z, derivative=False):
if not derivative:
return 1 / (1 + np.exp(-z))
else:
return z * (1 - z) # In the implementation, 'z' will actually be sigma(z).
```
The function transforms, or 'squeezes', numbers into the range [0, 1] and looks like this:
```
# function is in cyan and derivative is in red
from utils import plot_activation
plot_activation(logistic)
```
In practice, while this function is sometimes useful for handling probabilities, there are some problems with it.
- The maximum value of the derivative is 0.25, which tends to reduce the learning rate, especially in deeper layers.
- Large inputs result in 'saturation' and a gradient of 0 (the 'vanishing gradient' problem), which will halt learning.
- The exponentials are expensive to compute.
The $\operatorname{tanh}$ function solves some of these issues — for example, it has a maximum gradient of 1.
```
def tanh(z, derivative=False):
"""
Compute a tanh transformation for a given input.
"""
if not derivative:
return (np.exp(z) - np.exp(-z)) / (np.exp(z) + np.exp(-z))
else:
return 1 - z**2 # In the implementation, we'll get tanh(z) coming at us.
plot_activation(tanh)
```
But it still suffers from the saturation issue, and the expense of computation.
Both of these issues are solved by the ReLU, or rectified linear unit, function.
<div style="background: #e0ffe0; border: solid 2px #d0f0d0; border-radius:3px; padding: 1em; color: darkgreen">
<h3>EXERCISE</h3>
The **rectified linear unit** (ReLU) and its derivative are given by:
$$ f(z) = \begin{cases}
z & \text{if } z > 0, \\
0 & \text{otherwise}.
\end{cases} $$
$$ \frac{\mathrm{d}f(z)}{\mathrm{d}z} = \begin{cases}
1 & \text{if } z > 0, \\
0 & \text{otherwise}.
\end{cases} $$
The main problem with the ReLU is that, depending on how the weights are initialized, some units in the network might 'die' as they get into negative activations and never fire. Accordingly, a common variant of the ReLU is the 'parametric' ReLU, which has $f(z) = \alpha z$ when $z \leq 0$ (the corresponding derivative is then just $\alpha$). The parameter $\alpha$ can be tuned like other hyperparameters. A typical value is 0.01.
The parametric ReLU is closely related to the 'leaky' ReLU, but that term implies that the value of $\alpha$ is fixed rather than treated as a hyperparameter and tuned.
Can you implement a ReLU? (Or, if you prefer, a parametric ReLU?)
</div>
```
# Note that if you use `if z > 0` in your code, then
# the plot_activation function won't work, because it
# defines z as an array to make its plots. In general,
# it's a good idea to write functions that work for
# both scalars and arrays, where possible.
# YOUR CODE HERE
def relu(z, derivative=False):
"""
compute RELU
"""
if not derivative:
return z * (z > 0)
else:
return 1 * (z > 0)
assert (relu(-1), relu(0), relu(1)) == (0, 0, 1)
# A solution for both the plain and parametric ReLU
def prelu(z, derivative=False, alpha=0.1):
"""A parametric ReLU."""
if not derivative:
return np.maximum(alpha * z, z) # alpha must be < 1
else:
return alpha * (z <= 0) + (z > 0)
def relu(z, derivative=False):
"""
Compute a ReLU transformation for a given input.
"""
return prelu(z, derivative=derivative, alpha=0)
plot_activation(relu)
```
<div style="background: #e0ffe0; border: solid 2px #d0f0d0; border-radius:3px; padding: 1em; color: darkgreen">
<h3>Stretch exercise</h3>
Some people prefer the exponential linear unit, because it has a smooth derivative. Can you implement it?
$$ f(z) = \begin{cases} z & \text{if } z > 0 \\ \alpha(e^z-1) & \text{otherwise} \end{cases} $$
The derivative is given by:
$$ \frac{\mathrm{d} f}{\mathrm{d} z} = \begin{cases} 1 & \text{if } z > 0 \\ \alpha e^z & \text{otherwise} \end{cases} $$
Again, $\alpha$ is a hyperparameter.
</div>
```
# YOUR CODE HERE
def prelu(z, derivative=False, alpha=0.1):
"""
A parametric RELU
"""
if not derivative:
return np.maximum(alpha * z, z) # alpha must be less than one
else:
return alpha * (z <= 0) + (z > 0)
```
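The solution cell above gives the parametric ReLU again rather than the ELU the exercise asks for. A hedged sketch of the ELU itself might look like this, following the same convention as the other activations in this notebook, where `derivative=True` receives the activation $a = f(z)$ rather than $z$:

```python
import numpy as np

def elu(z, derivative=False, alpha=1.0):
    """An exponential linear unit (ELU)."""
    z = np.asarray(z, dtype=float)
    if not derivative:
        return np.where(z > 0, z, alpha * (np.exp(z) - 1))
    else:
        # Here 'z' is actually the activation a = elu(z). For z <= 0,
        # a = alpha*(e^z - 1), so the derivative alpha*e^z equals a + alpha.
        return np.where(z > 0, 1.0, z + alpha)
```

As elsewhere in this notebook, `alpha` is a hyperparameter.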
Check the [Intro_to_neural_network_regression.ipynb](../master/Intro_to_neural_network_regression.ipynb) master notebook for a solution to this problem.
There are still other rectifiers — e.g. the GELU and SiLU — read about them [on Wikipedia](https://en.wikipedia.org/wiki/Rectifier_(neural_networks)). Why not try implementing some of them?
## Loss
We're going to need a way to tell when we're doing well. The **loss function** is some measure of error. We'll use the mean squared error, where the error is the difference between a known value of the target and our estimate of the target.
We're going to need a function for that too:
```
def loss(y, y_hat):
"""
Compute half the mean squared error. The factor of 0.5 cancels the factor
of 2 produced by differentiating the squared term, so it's common to see
it in the loss function.
"""
return 0.5 * np.mean((np.array(y_hat) - np.array(y))**2)
```
## Defining a network
A typical neural network consists of three or more *layers*: an input layer, one or more _hidden_ layers, and an output layer.
Let's implement a network with one hidden layer. The layers are as follows:
$$ \text{Input layer:}\ \ \mathbf{x}^{(i)} $$
$$ \text{Hidden layer:}\ \ \mathbf{a}_1^{(i)} = \sigma ( \mathbf{W}_1 \mathbf{x}^{(i)} + \mathbf{b}_1) $$
$$ \text{Output layer:}\ \ \hat{\mathbf{y}}^{(i)} = \mathbf{W}_2 \mathbf{a}_1^{(i)} + \mathbf{b}_2 $$
where $\mathbf{x}^{(i)}$ is the $i$-th sample of the input data $\mathbf{X}$. $\mathbf{W}_1, \mathbf{b}_1, \mathbf{W}_2, \mathbf{b}_2$ are the weight matrices and bias vectors for layers 1 and 2 respectively, and $\sigma$ is our nonlinear function. Applying the nonlinearity to $\mathbf{W}_1 \mathbf{x}^{(i)} + \mathbf{b}_1$ in layer 1 results in the _activation_ $\mathbf{a}_1$. The output layer yields $\hat{\mathbf{y}}^{(i)}$, the $i$-th estimate of the desired output. We're not going to apply the nonlinearity to the output, but people often do. The weights are randomly initialized and the biases start at zero; during training they will be iteratively updated to encourage the network to converge on an optimal approximation to the expected output.
Note that these are vector operations. NumPy deals with this easily because the library understands proper matrix operations. For example, matrix multiplication is done through the `@` operator.
A forward pass of the data through the network looks like this:
```
def forward(xi, W1, b1, W2, b2, activation):
z1 = W1 @ xi + b1
a1 = activation(z1)
z2 = W2 @ a1 + b2 # n.b. z2 is y_hat
return z2, a1
```
Below is a picture of a neural network similar to the one we're building:

## How does a neural net learn?
The short version is that we show the system a bunch of corresponding input/output pairs we want it to learn, and we show it these pairs thousands of times. Every time we do so, we move the **W**'s and **b**'s in whatever direction makes the outputs of the network more similar to the known output we're trying to teach it.
For each training example:
For each layer:
- Calculate the error.
- Calculate weight gradient.
- Update weights.
- Calculate the bias gradient.
- Update biases.
What's all this about gradients?
In order to learn, the network will have to find the parameters (weights and biases) that result in the smallest loss. We'll use gradient descent for this.
<img src="../images/gradient_descent.png" width="800px" />
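To make the idea concrete, here is a minimal, self-contained sketch of gradient descent on a one-parameter problem (a toy example, not from the original tutorial): minimizing $E(w) = (w - 3)^2$, whose gradient is $\mathrm{d}E/\mathrm{d}w = 2(w - 3)$.

```python
# Toy gradient descent: minimize E(w) = (w - 3)**2.
w = 0.0                 # starting guess
learning_rate = 0.1
for step in range(100):
    grad = 2 * (w - 3)  # dE/dw at the current w
    w -= learning_rate * grad
print(w)                # close to the minimum at w = 3
```

Each step moves $w$ against the gradient; the learning rate controls the step size, just as it will in the full network.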
This is straightforward for the output layer. That's why we needed the derivative in the activation functions, and we need to know the derivative for the `loss()` function.
The error on the output layer for a given instance (data record) looks like this:
$$ E = \frac{1}{2} \left[ \hat{y}^{(i)} - y^{(i)} \right]^2 $$
where
$$ \hat{y}^{(i)} = \mathbf{w}_2 \mathbf{a}_1^{(i)} + b_2 $$
The derivative (gradient, or slope) of this function, with respect to the weight **w**<sub>2</sub>, is:
$$ \frac{\mathrm{d}E}{\mathrm{d}\mathbf{w_2}} = \frac{\mathrm{d}E}{\mathrm{d}\hat{y}}\frac{\mathrm{d}\hat{y}}{\mathrm{d}\mathbf{w_2}} = (\hat{y} - y) \ \mathbf{a}_1$$
To calculate the gradient at the hidden layer, we need to compute the gradient of the error with respect to the weights and biases of the hidden layer.
Let's implement this as a Python function:
```
def backward(xi, yi,
a1, z2,
params,
learning_rate,
activation
):
err_output = z2 - yi # Derivative of loss function
grad_W2 = err_output * a1
params['W2'] -= learning_rate * grad_W2
grad_b2 = err_output
params['b2'] -= learning_rate * grad_b2
derivative = activation(a1, derivative=True)
err_hidden = err_output * derivative * params['W2']
grad_W1 = err_hidden[:, None] @ xi[None, :]
params['W1'] -= learning_rate * grad_W1
grad_b1 = err_hidden
params['b1'] -= learning_rate * grad_b1
return params
```
The trick with the `None` indexing is the same as reshaping the array. We have to do this to produce a 2D array for the `W1` gradients.
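To illustrate with a small demo (not part of the original notebook): indexing with `None` adds a new axis, turning a 1D vector into a column or a row, and the `@` product of a column and a row is then a 2D outer product, exactly the shape we need for the `W1` gradient.

```python
import numpy as np

err_hidden = np.array([1.0, 2.0])   # shape (2,), like the hidden-layer error
xi = np.array([10.0, 20.0, 30.0])   # shape (3,), like one input sample

# (2, 1) @ (1, 3) -> (2, 3), matching W1's shape (units, features)
grad_W1 = err_hidden[:, None] @ xi[None, :]
print(grad_W1.shape)  # (2, 3)
```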
To demonstrate this backpropagation workflow, and thus that our system can learn, let's try to get the above neural network to learn the relationship between a DT log and some other logs. We're going to need some data.
## Get some data
```
import welly
w = welly.Well.from_las('../data/R-90.las', index='original')
data = w.data_as_matrix(keys=['GR', 'NPHISS', 'RHOB', 'DT'], start=1000, stop=3500, step=0.2)
data[:10]
from sklearn.preprocessing import StandardScaler
X_val = data[6500:6750, :3].reshape(-1, 3)
X_train = data[6750:7750, :3].reshape(-1, 3)
scaler = StandardScaler().fit(X_train)
X_train = scaler.transform(X_train)
X_val = scaler.transform(X_val)
X_train.shape, X_val.shape
fig, (ax0, ax1) = plt.subplots(ncols=2, figsize=(15, 5))
ax0.plot(X_train)
ax1.plot(X_val)
import seaborn as sns
sns.displot(X_train)
```
In many situations, we do not need to scale the target variable. But when using gradient descent for optimization — essentially in all neural nets — we might need to worry about it.
Very large errors may lead to exploding gradients in training and/or result in floating point overflows — especially if you're using GPUs, which use single-precision floats.
```
y_val_ = data[6500:6750, -1] # Keep the unscaled data.
y_train_ = data[6750:7750, -1]
target_scaler = StandardScaler().fit(y_train_.reshape(-1, 1))
y_train = target_scaler.transform(y_train_.reshape(-1, 1))
y_val = target_scaler.transform(y_val_.reshape(-1, 1))
X_train.shape, y_train.shape
fig, (ax0, ax1) = plt.subplots(ncols=2, figsize=(15, 5))
ax0.plot(y_train)
ax1.plot(y_val)
```
## Initialize network parameters
Now we can initialize the weights and biases for our network. A common approach is to initialize the weights with small random numbers (with NumPy's `randn()` function) and the biases with zeros.
<div style="background: #e0ffe0; border: solid 2px #d0f0d0; border-radius:3px; padding: 1em; color: darkgreen">
<h3>EXERCISE</h3>
Finish the `initialize_params()` function:
</div>
```
def initialize_params(features, units, seed=42):
np.random.seed(seed)
params = {
"W1": np.random.randn(units, features),
"b1": np.zeros(shape=units),
# YOUR CODE HERE
# Initialize W2 (shape is just `units`) and b2 (shape is `1`)
"W2": np.random.randn(units),
"b2": np.zeros(shape=1)
# ===============
}
return params
features = X_train.shape[-1]
units = 5 # Units in hidden layer.
params = initialize_params(features, units, seed=33)
params
```
Now we have a network! It just doesn't know anything.
## Prediction
To apply this (untrained) network to some data, we're going to need a `predict` function. Using a trained network to make predictions on new data is called **inference**.
<div style="background: #e0ffe0; border: solid 2px #d0f0d0; border-radius:3px; padding: 1em; color: darkgreen">
<h3>EXERCISE</h3>
Finish the `predict()` function.
</div>
```
def predict(X, forward, params, activation):
"""
Make a prediction for a given 2D input ``X``,
using function ``forward``.
"""
y_hats = []
for xi in X:
# YOUR CODE HERE
# You need to call `forward` to set a value for `y_hat`.
y_hat, _ = forward(xi, **params, activation=activation)
# ==============
y_hats.append(y_hat.item()) # gets floating point number out
return np.array(y_hats)
```
Let's make a prediction for our untrained network — it should be essentially random:
```
y_pred = predict(X_train, forward, params, activation=relu)
plt.plot(y_train[:200])
plt.plot(y_pred[:200])
```
## Training
During training, we expose the network to the input/output pairs one at a time. These pairs are called `xi` and `yi` respectively in the code. According to our diagram above, the input goes into the green slots and we adjust the orange neurons to make the red slot output from the network a tiny bit closer to the true DT result.
We do this many times. Every time we do, we calculate the mean squared error between the network's prediction and the ground-truth output. After many iterations, or *epochs*, we draw a plot which shows the total error, or loss, at each step. If the network is learning anything, we expect the loss to decrease, as the predictions are getting closer to the ground truth.
```
# Hyperparameters.
num_epochs = 30
learning_rate = 0.001
activation = relu
# Intitialize.
data = list(zip(X_train, y_train, y_train_))  # y_train_ is the unscaled data; it helps us compute a more meaningful error.
params = initialize_params(features, units)
loss_history = []
for i in tqdm(range(num_epochs)):
# Shuffle and prepare.
np.random.shuffle(data)
y_, y_hat = [], []
for xi, yi, y_raw in data:
# Optionally do a pass for validation (omitted here).
# Forward pass.
z2, a1 = forward(xi, **params, activation=activation)
# Back propagation.
params = backward(xi, yi,
a1, z2.item(),
params,
learning_rate,
activation=activation
)
# Capture actual prediction at correct scale.
y_.append(y_raw)
y_hat.append(target_scaler.inverse_transform(z2))
# Compute training loss for this epoch.
loss_history.append(loss(y_, y_hat))
```
The parameters of the model are now no longer random.
```
params
```
They do look kind of random though. It's usually hard to 'see' what neural networks have learned. Let's look at the W1 weights only:
```
W1 = params['W1']
plt.figure(figsize=(5, 3))
_ = plt.imshow(W1.T, aspect='auto', vmin=0.1)
```
If the network learned anything **useful** then the loss should have decreased during training. The loss is our measure of whatever it is we care about.
```
fig, ax = plt.subplots(figsize=(10,3))
ax.semilogy(loss_history, label='Training loss')
ax.set_title('Mean squared error vs epoch number', fontsize=16)
ax.tick_params(axis='both', which='major', labelsize=14)
ax.grid()
plt.tight_layout()
plt.show()
y_pred = predict(X_val, forward, params, activation)
```
The loss decreased dramatically over the course of relatively few epochs, so presumably the network has learned something. To test this theory, let's plot the outputs after training (orange) and compare them to the expected result (blue):
```
plt.figure(figsize=(15, 3))
plt.plot(y_val)
plt.plot(y_pred)
plt.grid(c='k', alpha=0.2)
```
## Compare using RMS error
It's fine for the network to learn using MSE, but it's easier for humans to understand RMS error, because it has the same units as the target.
<div style="background: #e0ffe0; border: solid 2px #d0f0d0; border-radius:3px; padding: 1em; color: darkgreen">
<h3>EXERCISE</h3>
Implement an equation for the RMS error.
$$ E_\mathrm{RMS} = \sqrt{ \frac{1}{N} \sum_{i=1}^{N} \left( \hat{y}^{(i)} - y^{(i)} \right)^2 } $$
</div>
```
def rmse(y_true, y_pred):
mse = np.sum((y_pred - y_true)**2) / y_true.size
rmse = np.sqrt(mse)
return rmse
rmse(y_val_, target_scaler.inverse_transform(y_pred))
plt.figure(figsize=(15, 3))
plt.plot(y_val_)
plt.plot(target_scaler.inverse_transform(y_pred))
plt.grid(c='k', alpha=0.2)
```
<div style="background: #e0ffe0; border: solid 2px #d0f0d0; border-radius:3px; padding: 1em; color: darkgreen">
<h3>Exercise: how does this network look in `scikit-learn`?</h3>
Replicate this neural network with `sklearn.neural_network.MLPRegressor`.
You will have to read the documentation carefully. In particular, pay attention to `solver`, `activation`, `max_iter`, and `batch_size`.
Get started with this:
</div>
```
from sklearn.neural_network import MLPRegressor
mlp = MLPRegressor(hidden_layer_sizes=(5,),
tol=1e-12, # Turn off early stopping.
momentum=0, # Turn off momentum
activation='relu',
solver='sgd',
learning_rate_init=0.001,
max_iter=30,
random_state=33,
alpha=0,
batch_size=1
# YOUR CODE HERE
)
mlp.fit(X_train, y_train)
y_pred_skl = mlp.predict(X_val)
plt.figure(figsize=(15, 3))
plt.plot(y_val_)
plt.plot(target_scaler.inverse_transform(y_pred))
plt.plot(target_scaler.inverse_transform(y_pred_skl))
plt.grid(c='k', alpha=0.2)
print("Scratch NN")
print(rmse(y_val_, target_scaler.inverse_transform(y_pred)))
print()
print("Sklearn NN")
print(rmse(y_val_, target_scaler.inverse_transform(y_pred_skl)))
```
<div style="background: #e0ffe0; border: solid 2px #d0f0d0; border-radius:3px; padding: 1em; color: darkgreen">
<h3>EXERCISE</h3>
Can you change the hyperparameters to get a better result?
</div>
```
# Copy the solution from the last example here.
# Then change some of the parameters and see how it affects the result.
mlp = MLPRegressor(hidden_layer_sizes=(10,),
tol=1e-12, # Turn off early stopping.
momentum=0, # Turn off momentum
activation='relu',
solver='adam',
learning_rate_init=0.001,
max_iter=500,
random_state=33,
alpha=0,
batch_size=1 # Increasing this really speeds things up, and is fine to do: batches are a more efficient way of feeding the model data than sample by sample.
# YOUR CODE HERE
)
mlp.fit(X_train, y_train)
y_pred_skl = mlp.predict(X_val)
print("Scratch NN")
print(rmse(y_val_, target_scaler.inverse_transform(y_pred)))
print()
print("Sklearn NN")
print(rmse(y_val_, target_scaler.inverse_transform(y_pred_skl)))
```
## Compare with PyTorch
```
import torch
from torch import nn
device = torch.device("cuda:0" if torch.cuda.is_available() else "cpu")
X_train_pt = torch.tensor(X_train, dtype=torch.float32).to(device)
y_train_pt = torch.tensor(y_train.reshape(-1, 1), dtype=torch.float32).to(device)
traindata = torch.utils.data.TensorDataset(X_train_pt, y_train_pt)
trainloader = torch.utils.data.DataLoader(traindata)
# A tensor is pretty much the PyTorch equivalent of an ndarray. The difference is that tensors 'remember where they came from': they track the operations that produced them, which enables autograd.
```
There's a high-level approach:
```
net = nn.Sequential(
nn.Linear(3, 5),
nn.ELU(),
nn.Linear(5, 1),
).to(device)
```
And a low-level approach that gives you fine-tuned control:
```
class Net(torch.nn.Module):
def __init__(self):
super().__init__()
self.hidden = nn.Linear(3, 5) # aka "Fully-connected"
self.output = nn.Linear(5, 1)
# Optional.
nn.init.xavier_uniform_(self.hidden.weight)
nn.init.zeros_(self.hidden.bias)
nn.init.xavier_uniform_(self.output.weight)
nn.init.zeros_(self.output.bias)
def forward(self, x):
z1 = self.hidden(x)
a1 = torch.nn.functional.elu(z1)
z2 = self.output(a1)
return z2
net = Net().to(device)
```
Training the network:
```
lr = 0.005
weight_decay = 0.0 # L2 regularization
optimizer = torch.optim.SGD(net.parameters(), lr=lr, weight_decay=weight_decay)
criterion = nn.MSELoss()
net.train()
epochs = 100
for epoch in range(epochs):
epoch_loss = 0.0
for xi, yi in trainloader:
optimizer.zero_grad()
y_hat = net(xi) # Forward pass
loss = criterion(y_hat, yi) # get the loss
loss.backward() # backprop
optimizer.step() # step the optimizer (apply the parameter update)
epoch_loss += loss.item() # capture loss
print(f"# {epoch+1} Loss {epoch_loss}")
print('Finished training')
```
### Evaluate the model
```
X_val_pt = torch.tensor(X_val, dtype=torch.float).to(device)
y_val_pt = torch.tensor(y_val.reshape(-1, 1), dtype=torch.float).to(device)
valdata = torch.utils.data.TensorDataset(X_val_pt, y_val_pt)
valloader = torch.utils.data.DataLoader(valdata)
net.eval()
with torch.no_grad():
y_pred_torch = [float(net(xi)) for xi, yi in valloader]
plt.figure(figsize=(15, 3))
plt.plot(y_val_)
plt.plot(target_scaler.inverse_transform(y_pred))
plt.plot(target_scaler.inverse_transform(y_pred_skl))
plt.plot(target_scaler.inverse_transform(y_pred_torch))
plt.grid(c='k', alpha=0.2)
print("Scratch NN")
print(rmse(y_val_, target_scaler.inverse_transform(y_pred)))
print()
print("Sklearn NN")
print(rmse(y_val_, target_scaler.inverse_transform(y_pred_skl)))
print()
print("PyTorch NN")
print(rmse(y_val_, target_scaler.inverse_transform(y_pred_torch)))
```
### Saving a PyTorch model
It is possible to save the whole model with `torch.save(model, PATH)`, but this is not recommended because it depends on the exact structure of the project (files, directories, etc.). Instead, the PyTorch docs recommend keeping the model class in your code and saving only the model's parameters (its `state_dict`).
We can save the model's parameters to disk:
```
fname = "dt_model.pth"
torch.save(net.state_dict(), fname)
```
...and read them into a new model:
```
saved_net = Net()
saved_net.load_state_dict(torch.load(fname))
saved_net.eval()
with torch.no_grad():
y_pred_torch_ = [float(saved_net(xi)) for xi, yi in valloader]
# Check it's the same as before.
np.all(y_pred_torch == y_pred_torch_)
```
## Compare with linear regression
<div style="background: #e0ffe0; border: solid 2px #d0f0d0; border-radius:3px; padding: 1em; color: darkgreen">
<h3>EXERCISE</h3>
Make a prediction using `sklearn.linear_model.Ridge`. How does it compare to the neural networks?
</div>
```
from sklearn.linear_model import Ridge
# YOUR CODE HERE
est = Ridge()
est.fit(X_train, y_train_)
# End with...
y_pred_linreg = est.predict(X_val)
plt.figure(figsize=(15, 5))
plt.plot(y_val_)
plt.plot(target_scaler.inverse_transform(y_pred))
plt.plot(target_scaler.inverse_transform(y_pred_skl))
plt.plot(target_scaler.inverse_transform(y_pred_torch))
plt.plot(y_pred_linreg)
plt.grid(c='k', alpha=0.2)
print("Scratch NN")
print(rmse(y_val_, target_scaler.inverse_transform(y_pred)))
print()
print("Sklearn NN")
print(rmse(y_val_, target_scaler.inverse_transform(y_pred_skl)))
print()
print("PyTorch NN")
print(rmse(y_val_, target_scaler.inverse_transform(y_pred_torch)))
print()
print("Linear regression")
print(rmse(y_val_, y_pred_linreg))
```
---
<div style="background: #e0ffe0; border: solid 2px #d0f0d0; border-radius:3px; padding: 1em; color: darkgreen">
<h3>Optional exercises</h3>
Try to do these exercises on the NumPy implementation. But if that proves too difficult, use the `sklearn` implementation.
- Try changing the model parameters, for example using fewer units in the hidden layer. Does this help?
- Add another layer to the model. Does this help?
- Try using other activation functions than the logistic function we're currently using.
- Implement batches, RMSprop, and momentum.
<h3>Stretch</h3>
If you've taken the Mastery class, or know about object oriented programming, write a Python `class` to hold the NumPy implementation. Copy the `keras`/`sklearn` interface as closely as possible. Related: [this awesome video from Joel Grus](https://www.youtube.com/watch?v=o64FV-ez6Gw).
</div>
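As a hedged starting point for the momentum part of the exercise (the function name and the $\beta = 0.9$ value here are illustrative, not from the tutorial), the classical momentum update keeps a running 'velocity' of past gradients and steps the parameter along it instead of along the raw gradient:

```python
def momentum_update(param, grad, velocity, learning_rate=0.001, beta=0.9):
    """One classical-momentum step: blend the new gradient into a running
    velocity, then move the parameter along the velocity."""
    velocity = beta * velocity - learning_rate * grad
    return param + velocity, velocity

# One-parameter demo on E(w) = (w - 3)**2:
w, v = 0.0, 0.0
for _ in range(200):
    grad = 2 * (w - 3)
    w, v = momentum_update(w, grad, v, learning_rate=0.05)
print(w)  # approaches the minimum at w = 3
```

An update like this would slot into the training loop in place of the plain `params[...] -= learning_rate * grad` steps, with one velocity array per parameter.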
## Other types of neural networks

---
© 2021 Agile Scientific and Graham Ganssle — Content is CC-BY-SA
# Baseball Analysis
```
# Dependencies and Setup
import matplotlib.pyplot as plt
import pandas as pd
import scipy.stats as st
import numpy as np
# Study data files
player_path = "./player.csv"
batting_path = "./batting.csv"
pitching_path = "./pitching.csv"
fielding_path = "./fielding.csv"
# Read the baseball data and the study results
player_data = pd.read_csv(player_path)
player_data.head()
# Clean player data
player_clean = player_data[["player_id", "birth_country", "birth_state",
"birth_city", "name_given", "weight", "height",
"bats", "throws", "debut", "final_game"]]
player_clean.head()
```
## Bar Graph of Player Birth States
```
# Generate a bar graph of players born in each state (excluding non-US-born athletes).
player_us = player_data[player_data["birth_country"] == "USA"]
player_us_year = player_us[player_us["birth_year"] >= 1950]
player_us_year
# Filter the DataFrame down only to those columns to chart
player_us_state = player_us_year[["birth_state","player_id"]]
player_us_state
# Groupby State
player_state = player_us_state.groupby("birth_state").count()
player_state
# Create a list indicating where to write x labels and set figure size to adjust for space
player_state.plot(kind="bar", figsize=(20,3))
# Set a Title and labels
plt.title("Baseball Players Born per State")
plt.xlabel("State")
plt.ylabel("Amount of Baseball Players")
plt.tight_layout()
plt.savefig("./birth_state.png")
plt.show()
```
## Should the NL adopt the DH rule?
```
batting_data = pd.read_csv(batting_path)
batting_data.head()
# DH rule was adopted by the AL league in 1973.
batting_data = batting_data[batting_data["year"] >= 1973]
batting_data
# Find the batting average and add a new column
batting_data["ba"] = ""
ba = batting_data["h"]/batting_data["ab"]
batting_data["ba"] = ba
batting_data
# Remove NaNs (reassign, since dropna() returns a new DataFrame)
batting_data = batting_data.dropna()
# Get the mean batting average per year for the AL
batting_al = batting_data[batting_data["league_id"] == "AL"]
batting_al
# Group by year
batting_al = batting_al.groupby("year").mean()["ba"]
batting_al
# Get the mean batting average per year for the NL
batting_nl = batting_data[batting_data["league_id"] == "NL"]
batting_nl
# Group by year
batting_nl = batting_nl.groupby("year").mean()["ba"]
batting_nl
# Plot as a line graph
x_axis = np.arange(1973,2016,1)
# print(x_axis)
al_ba, = plt.plot(x_axis, batting_al, color="red", label="AL")
nl_ba, = plt.plot(x_axis, batting_nl, color="blue", label="NL")
plt.title("Batting Average Comparison for NL and AL from 1973-2015")
plt.xlabel("Years")
plt.ylabel("Batting average")
plt.legend(handles=[al_ba, nl_ba], loc="best")
plt.savefig("./al_vs_nl.png")
plt.show()
```
### Observation: The overall batting average for the National League is lower than the American League's. The American League uses designated hitters in place of their pitchers batting. This could show the impact of having a hitting-focused player in the line-up who replaces the pitcher, who tends to be the weaker batter.
## Pitching
## Has ERA improved over the years?
```
pitching_data = pd.read_csv(pitching_path)
pitching_data.head()
# Show only more recent pitching starting at 1970 and games played more than 5
pitching_clean = pitching_data[pitching_data["year"] >= 1970]
pitching_games = pitching_clean[pitching_clean["g"] > 5]
pitching_games
# Group pitching records by year and average era
pitching_era = pitching_games.groupby("year").mean()["era"]
pitching_era
# Plot as line
xaxis = np.arange(1970, 2016, 1)
plt.plot(xaxis, pitching_era)
plt.title("Pitching ERA from 1970-2015")
plt.xlabel("Years")
plt.ylabel("Earned Run Average")
plt.savefig("./pitching_era.png")
```
## What caused the increase in ERA and batting average?
```
# Change in player size over the years
import warnings
warnings.filterwarnings('ignore')
player_data
player_data["final"] = pd.to_datetime(player_data['final_game'], format='%Y-%m-%d').dt.year
player_data = player_data.dropna(subset=["final", "weight", "height"])  # assign back; dropna is not in-place
# Create scatter plot
x_values = player_data["final"]
y_values = player_data["weight"]
plt.scatter(x_values,y_values)
plt.xlabel("Year")
plt.ylabel("Weight of Player (lbs)")
plt.title("Baseball Player Weight Over the Years")
plt.savefig("./player_weight.png")
plt.show()
# Create scatter plot
x_values = player_data["final"]
y_values2 = player_data["height"]
plt.scatter(x_values,y_values2)
plt.xlabel("Year")
plt.ylabel("Height of Player (inches)")
plt.title("Baseball Player Height Over the Years")
plt.savefig("./player_height.png")
plt.show()
```
### Observation: Pitching ERA spiked in the late 90's. Baseball began to focus on strength training in the 80's, and this could show a benefit of that training for hitters, as pitchers gave up more hits. The 90's became known as the "Steroid Era," when players used performance-enhancing drugs to improve their power. There appears to be a trend toward increased height and weight in baseball players over the years, but this could be a result of a general increase in population size.
## What position has the most fielding errors?
```
# Import the fielding data
fielding_data = pd.read_csv(fielding_path)
fielding_data
# Only show data from 1970 and remove position DH(hitter only)
fielding_data = fielding_data[fielding_data["year"] >= 1970]
fielding_data = fielding_data[fielding_data["pos"] != "DH"]
fielding_data
# Combine by position and find the most errors
err_data = fielding_data[fielding_data["g"] != 0]
err_data = err_data.groupby("pos").sum()["e"]
err_data.sort_values()
# Plot the data as a bar chart
err_data.plot(kind = "bar", color = "blue", alpha = 0.8, align ="center")
# Add labels
plt.title("Total Errors by Position from 1970-2015")
plt.xlabel("Position")
plt.ylabel("Total Errors")
plt.savefig("./position_errors.png")
```
### Observation: Shortstops and 3rd base have the most errors, and the individual outfield positions (RF, LF, CF) have the least. This is expected, as SS and 3rd base have more fielding attempts than other positions.
## Do players with more years played have better fielding percentage?
```
import warnings
warnings.filterwarnings('ignore')
# Find the total years played
player_clean["started"] = pd.to_datetime(player_clean['debut'], format='%Y-%m-%d').dt.year
player_clean["final"] = pd.to_datetime(player_clean['final_game'], format='%Y-%m-%d').dt.year
years_played = player_clean["final"] - player_clean["started"]
player_clean["years_played"] = years_played
player_clean
# Merge the data
new_field = pd.merge(fielding_data, player_clean, on="player_id")
new_field
# Find the fielding percentage (FP = (putouts + assists)/(putouts + assists + errors))
new_field["FP"] = (new_field["po"] + new_field["a"])/(new_field["po"] + new_field["a"] + new_field["e"])
new_field
# Remove players that had less than 5 games played
new_field = new_field[new_field["g"]> 5]
new_field = new_field.groupby(["pos", "years_played"]).mean()[["FP"]]
new_field = new_field.reset_index(level=['pos', 'years_played'])
positions = ['1B', '2B', '3B', 'C', 'CF', 'LF', 'P', 'RF', 'SS']
new_field
# Create a scatter plot
fig, axes = plt.subplots(nrows=3, ncols=3, sharex = True, sharey = True, figsize=(9, 9))
fig.text(0.5, 0.04, 'Years Played', ha='center')
fig.text(0.04, 0.5, 'Fielding Percentage', va='center', rotation='vertical')
axes = axes.ravel()
for i in range(9):
data = new_field[new_field["pos"] == positions[i]]
axes[i].scatter(data["years_played"], data["FP"])
axes[i].set_xlim(0,30)
axes[i].set_ylim(0.75, 1.03)
axes[i].set_title("Position: " + positions[i])
plt.savefig("./position_errs.png")
```
### Observation: At this level of competition, the average fielding percentage does not vary much for amount of time spent playing professional baseball. Fielding percentages remain fairly consistent across all positions.
# Introduction to the Harmonic Oscillator
*Note:* Much of this is adapted/copied from https://flothesof.github.io/harmonic-oscillator-three-methods-solution.html
This week we are going to begin studying molecular dynamics, which uses classical mechanics to study molecular systems. Our "hydrogen atom" in this section will be the 1D harmonic oscillator.

The harmonic oscillator is a system that, when displaced from its equilibrium position, experiences a restoring force F proportional to the displacement x:
$$F=-kx$$
The potential energy of this system is
$$V = {1 \over 2}k{x^2}$$
These are sometimes rewritten as
$$ F=- \omega_0^2 m x, \text{ } V(x) = {1 \over 2} \omega_0^2 m {x^2}$$
Where $\omega_0 = \sqrt {{k \over m}} $
In classical mechanics, our goal is to determine the equations of motion, $x(t),y(t)$, that describe our system.
In this notebook we will use sympy to solve a second-order ordinary differential equation.
## 1. Solving differential equations with sympy
Solving differential equations can be tough, and there is not always a set plan for how to proceed. Luckily for us, the harmonic oscillator is the classic second-order differential equation.
Consider the following second order differential equation
$$ay(t)''+by(t)'=c$$
where $y(t)'' = {{{d^2}y} \over {dt^2}}$, and $y(t)' = {{{d}y} \over {dt}}$
We can rewrite this with all terms collected on one side
$$ay(t)''+by(t)'-c=0$$
The goal here is to find $y(t)$, similar to our classical mechanics problems. Let's use sympy to solve this equation
### Second order ordinary differential equation
First we import the sympy library
```
import sympy as sym
```
Next we initialize pretty printing
```
sym.init_printing()
```
Next we will set our symbols
```
t,a,b,c=sym.symbols("t,a,b,c")
```
Now for something new. We can define functions using `sym.Function("f")`
```
y=sym.Function("y")
y(t)
```
Now, if I want to define a first or second derivative, I can use `sym.diff`
```
sym.diff(y(t),(t,1)),sym.diff(y(t),(t,2))
```
My differential equation can be written as follows
```
dfeq=a*sym.diff(y(t),(t,2))+b*sym.diff(y(t),(t,1))-c
dfeq
sol = sym.dsolve(dfeq)
sol
```
The two constants $C_1$ and $C_2$ can be determined by setting boundary conditions.
First, we can set the condition $y(t=0)=y_0$
The next initial condition we will set is $y'(t=0)=v_0$
To set up the equality we want to solve, we use `sym.Eq`. This function sets up an equality between the left-hand side and right-hand side of an equation
```
# sym.Eq example
alpha,beta=sym.symbols("alpha,beta")
sym.Eq(alpha+2,beta)
```
Back to the actual problem
```
x0,v0=sym.symbols("x_0,v_0")
ics=[sym.Eq(sol.args[1].subs(t, 0), x0),
sym.Eq(sol.args[1].diff(t).subs(t, 0), v0)]
ics
```
We can use this result to first solve for $C_2$ and then solve for $C_1$.
Or we can use sympy to solve this for us.
```
solved_ics=sym.solve(ics)
solved_ics
```
Substitute the result back into $y(t)$
```
full_sol = sol.subs(solved_ics[0])
full_sol
```
We can plot this result too. Assume that $a,b,c=1$.
We will use two sample problems:
* case 1 : initial position and initial velocity are both zero (the motion is driven by the constant term $c$)
* case 2 : initial position is zero and initial velocity is nonzero
```
# Print plots
%matplotlib inline
```
#### Initial velocity set to zero
```
case1 = sym.simplify(full_sol.subs({x0:0, v0:0, a:1, b:1, c:1}))  # x0 is the symbol defined above; y0 was never defined
case1
sym.plot(case1.rhs)
sym.plot(case1.rhs,(t,-2,2))
```
#### Initial velocity set to one
```
case2 = sym.simplify(full_sol.subs({x0:0, v0:1, a:1, b:1, c:1}))
case2
sym.plot(case2.rhs,(t,-2,2))
```
## Calculate the phase space
As we will see in lecture, the state of a classical system is defined as a point in phase space, a hyperspace defined by ${{\bf{r}}^N},{{\bf{p}}^N}$. We will convert our sympy expression into a numerical function so that we can plot the path of $y(t)$ in phase space $y,y'$.
```
case1
# Import numpy library
import numpy as np
# Make numerical functions out of symbolic expressions
yfunc=sym.lambdify(t,case1.rhs,'numpy')
vfunc=sym.lambdify(t,case1.rhs.diff(t),'numpy')
# Make list of numbers
tlst=np.linspace(-2,2,100)
# Import pyplot
import matplotlib
import matplotlib.pyplot as plt
# Make plot
plt.plot(yfunc(tlst),vfunc(tlst))
plt.xlabel('$y$')
plt.ylabel("$y'$")
plt.show()
```
### Exercise 1.1
Change the initial starting conditions and see how that changes the plots. Make three different plots with different starting conditions
```
# Change starting velocity to 5
case3 = sym.simplify(full_sol.subs({x0:0, v0:5, a:1, b:1, c:1}))
tlst=np.linspace(-2,2,100)
sym.plot(case3.rhs,(t,-2,2))
# Change starting position to 3
case4 = sym.simplify(full_sol.subs({x0:3, v0:0, a:1, b:1, c:1}))
tlst=np.linspace(-2,2,100)
sym.plot(case4.rhs,(t,-2,2))
# Change starting velocity to 0.5
case5 = sym.simplify(full_sol.subs({x0:0, v0:0.5, a:1, b:1, c:1}))
tlst=np.linspace(-2,2,100)
sym.plot(case5.rhs,(t,-2,2))
#
```
## 2. Harmonic oscillator
Applying the harmonic oscillator force to Newton's second law leads to the following second order differential equation
$$ F = m a $$
$$ F= - \omega_0^2 m x $$
$$ a = - \omega_0^2 x $$
$$ x(t)'' = - \omega_0^2 x $$
The final expression can be rearranged into a second-order homogeneous differential equation and can be solved using the methods we used above
Your goal is to determine and plot the equations of motion of a 1D harmonic oscillator
### Exercise 2.1
1. Use the methodology above to determine the equations of motion $x(t), v(t)$ for a harmonic oscillator
1. Solve for any constants by using the following initial conditions: $x(0)=x_0, v(0)=v_0$
1. Show expressions for and plot the equations of motion for the following cases:
1. $x(0)=0, v(0)=0$
1. $x(0)=0, v(0)>0$
1. $x(0)>0, v(0)=0$
1. $x(0)<0, v(0)=0$
1. Plot the phase space diagram for the harmonic oscillator
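For reference, the closed-form answer these steps should reproduce is the standard harmonic oscillator solution with the initial conditions above (stated here so the sympy result can be checked against it):
$$x(t) = x_0 \cos(\omega_0 t) + {v_0 \over \omega_0}\sin(\omega_0 t), \qquad v(t) = x(t)' = -x_0\,\omega_0\sin(\omega_0 t) + v_0\cos(\omega_0 t)$$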
```
# Equations of motion x(t). a= dv/dt
t,a,b,c,x,k,z, w0=sym.symbols("t,a,b,c,x,k,z,w0")
x=sym.Function("x")
x(t)
sym.diff(x(t),(t,1)),sym.diff(x(t),(t,2))
dfeq=a*sym.diff(x(t),(t,2))+b*sym.diff(x(t),(t,1))-c
dfeq
sol = sym.dsolve(dfeq)
sol
# sym.Eq example. limitation on C1 and C2
alpha,beta=sym.symbols("alpha,beta")
sym.Eq(alpha+2,beta)
#x= x0, z'= v
x0,v0=sym.symbols("x_0,v_0")
ics=[sym.Eq(sol.args[1].subs(t, 0), x0),
sym.Eq(sol.args[1].diff(t).subs(t, 0), v0)]
ics
full_sol = sol.subs(solved_ics[0])
full_sol
dfeq=sym.diff(x(t),(t,2))+w0**2*x(t)
dfeq
sol = sym.dsolve(dfeq)
sol
alpha,beta=sym.symbols("alpha,beta")
sym.Eq(alpha+2,beta)
x0,v0=sym.symbols("x_0,v_0")
ics=[sym.Eq(sol.args[1].subs(t, 0), x0),
sym.Eq(sol.args[1].diff(t).subs(t, 0), v0)]
ics
solved_ics=sym.solve(ics)
solved_ics
full_sol= sol.subs(solved_ics[0])
full_sol
#1. 𝑥(0)=0,𝑣(0)=0
casea = sym.simplify(full_sol.subs({x0:0, v0:0, w0:1}))
tlst=np.linspace(-2,2,100)
sym.plot(casea.rhs,(t,-2,2))
#2. 𝑥(0)=0,𝑣(0)>0
caseb = sym.simplify(full_sol.subs({x0:0, v0:2, w0:1}))
tlst=np.linspace(-2,2,100)
sym.plot(caseb.rhs,(t,-2,2))
#3. 𝑥(0)>0,𝑣(0)=0
casec = sym.simplify(full_sol.subs({x0:2, v0:0, w0:1}))
tlst=np.linspace(-2,2,100)
sym.plot(casec.rhs,(t,-2,2))
#4. 𝑥(0)<0,𝑣(0)=0
cased = sym.simplify(full_sol.subs({x0:-2, v0:0, w0:1}))
tlst=np.linspace(-2,2,100)
sym.plot(cased.rhs,(t,-2,2))
# Phase Plot
import numpy as np
# Make numerical functions out of symbolic expressions
xfunc=sym.lambdify(t,caseb.rhs,'numpy')
vfunc=sym.lambdify(t,caseb.rhs.diff(t),'numpy')
# Make list of numbers
tlst=np.linspace(-10,10,100)
# Import pyplot
import matplotlib
import matplotlib.pyplot as plt
# Make plot
plt.plot(xfunc(tlst),vfunc(tlst))
plt.xlabel('$x$')
plt.ylabel("$x'$")
plt.show()
```
```
#utils I made to look at this data
import switchy.util as ut
import pandas as pd
import numpy as np
import scipy
import sys
import os
import time
import random
import copy
import math
%matplotlib inline
from matplotlib import pyplot as plt
import matplotlib as mpl
import scanpy as sc
import seaborn as sns
params = {
'font.size': 12,
'axes.titlesize': 12,
'axes.labelsize': 12,
'legend.fontsize': 12,
'xtick.labelsize': 12,
'ytick.labelsize': 12,
'font.family': "Helvetica",
'pdf.fonttype': 42,
'ps.fonttype': 42,
'figure.dpi': 300
}
mpl.rcParams.update(params)
sns.set_style("ticks")
savefig_args = {"dpi": 300, "bbox_inches": "tight", "pad_inches": 0, "transparent": True}
mpl.rc('savefig', dpi=300)
output_dir = "outs"
output_suffix = ""
output_formats = [".png", ".pdf"]
def save_figure(fig, name, output_dir=output_dir, output_suffix=output_suffix, output_formats=output_formats, savefig_args=savefig_args):
for output_format in output_formats:
fig.savefig(output_dir + "/" + name + output_suffix + output_format, **savefig_args)
return None
pd.set_option('display.max_rows', 500)
pd.set_option('display.max_columns', 500)
pd.set_option('display.width', 1000)
data_dir = "../../../SharedData/"
#import auto reload
%load_ext autoreload
%autoreload 2
# MUNGE THE sequence id to get a Cell column and a Donor Column
changeodb = pd.read_csv('../../../ImmTrinity/ShazamTrinity.tab', index_col = None, sep = '\t')
_x = changeodb.SEQUENCE_ID.str.split('_')[0][1:-7]
cell_list = []
for SeqID in changeodb.SEQUENCE_ID:
_x = SeqID.split('_')[1:-7]
cell_list.append('_'.join(_x))
changeodb['CELL'] = cell_list
changeodb['Donor'] = changeodb.SEQUENCE_ID.str.split(' ', expand = True)[1]
df_contig_aggr = changeodb.groupby(["Donor", "CELL", "LOCUS"]).size().unstack(fill_value=0)
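# The groupby(...).size().unstack(fill_value=0) idiom above builds a per-cell
# contig count table: one row per cell, one column per locus. A minimal,
# standalone sketch on toy (hypothetical) data:
toy = pd.DataFrame({
    "CELL": ["cell1", "cell1", "cell2", "cell2", "cell2"],
    "LOCUS": ["IGH", "IGK", "IGH", "IGH", "IGL"],
})
toy_counts = toy.groupby(["CELL", "LOCUS"]).size().unstack(fill_value=0)
toy_counts  # cell1: IGH=1, IGK=1, IGL=0; cell2: IGH=2, IGK=0, IGL=1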
# Examine number of high-quality contigs assembled per cell (joint distribution of IGH, IGK/L)
# full range
x = df_contig_aggr["IGH"]
y = df_contig_aggr["IGL"] + df_contig_aggr["IGK"]
print(max(x), max(y))
xbins = np.array(range(0,max(x)+2))-0.5
ybins = np.array(range(0,max(y)+2))-0.5
fig, ax = plt.subplots(1, 1, figsize=(5,4))
counts, xedges, yedges, im = ax.hist2d(x, y, bins=(xbins, ybins),
cmap="magma",
norm=mpl.colors.LogNorm(1, 1e5))
ax.set_xlabel("Productive heavy chain contigs")
ax.set_ylabel("Productive light chain contigs (K + L)")
plt.colorbar(im, ax=ax, label="Cells")
ax.set_ylim(top=8)
# show counts
dx = xedges[2]-xedges[1]
dy = yedges[2]-yedges[1]
for i in range(xedges.size-1):
for j in range(yedges.size-1):
xb = xedges[i] + 0.5*dx
yb = yedges[j] + 0.5*dy
ax.text(xb, yb, str(int(counts[i,j])), fontsize=4, ha="center", va="center", color="w")
# show count of 1H+1L in black
xb = xedges[1] + 0.5*dx
yb = yedges[1] + 0.5*dy
ax.text(xb, yb, str(int(counts[1,1])), fontsize=4, ha="center", va="center", color="k")
# Filter for cells having exactly 1H+1L
df_contig_aggr_filtered = df_contig_aggr
df = df_contig_aggr_filtered.loc[(df_contig_aggr_filtered["IGH"] == 1) &
(((df_contig_aggr_filtered["IGL"] == 1) & (df_contig_aggr_filtered["IGK"] == 0)) |
((df_contig_aggr_filtered["IGL"] == 0) & (df_contig_aggr_filtered["IGK"] == 1)))]
df.head()
# Filter original Changeodb to include only singlets
df_all_contig_annotations_valid = changeodb.set_index(["Donor", "CELL"]).loc[df.index]
print(df_all_contig_annotations_valid.shape)
# Filter contigs for only IGH, IGL, or IGK
df_all_contig_annotations_valid = df_all_contig_annotations_valid.loc[df_all_contig_annotations_valid["LOCUS"].isin(["IGH", "IGL", "IGK"])]
print (df_all_contig_annotations_valid.shape)
df_all_contig_annotations_valid.head()
# Filter contigs for only productive
df_all_contig_annotations_valid = df_all_contig_annotations_valid.loc[df_all_contig_annotations_valid["FUNCTIONAL"] == True]
print (df_all_contig_annotations_valid.shape)
df_all_contig_annotations_valid.head()
## Add Isotype column by using splice junctions
df = df_all_contig_annotations_valid
ab_tx, switch_tx = ut.loadSJoutIGH(data_dir + 'CombinedSJouts_chr14_IGH.fthr')
isotype_calls = ut.callIsotypeBySJout(ab_tx, plot=True)
_df = isotype_calls[['ISOTYPE_by_splice', 'cell']]
df = pd.merge(df, _df, left_on='CELL', right_on='cell')
df['ISOTYPE'] = df['ISOTYPE_by_splice']
df.to_csv(data_dir + 'ShazamQCed.tab', sep = '\t')
df = df[df.LOCUS == 'IGH']
df.to_csv(data_dir + 'ShazamQCedIGH.tab', sep = '\t')
```
## One-Way Analysis of Variance (ANOVA)
To compare the means of two independent samples of interval or ratio data (assuming the samples are from normally distributed populations having equal variance) we can do a t-test. But what if you have more than two groups that you want to compare? You could do multiple t-tests, one for each pairing of groups. But this approach would increase the likelihood of a type-1 error, that is, of rejecting the null hypothesis when you should not have done so (a false positive). The practice of doing repeated t-tests between multiple variables in the search for p-values less than 0.05 is sometimes called p-hacking or data dredging. A better approach is to do an analysis of variance (ANOVA) test. Think of ANOVA as testing all groups simultaneously and looking for statistical evidence that at least one of the groups is different from the others. We will focus on one-way ANOVA, where there is only one factor that differs between groups. If ANOVA reveals that at least one of the groups is different from the others, a follow-up test, or post-hoc test, is necessary to uncover which group or groups are different from the rest. A popular post-hoc test demonstrated here is Tukey's range test.
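To put a number on that inflated type-1 error rate: with four groups there are six pairwise comparisons, and treating them (as a simplifying assumption) as independent tests at α = 0.05, the chance of at least one false positive is already about 26%. A quick sketch:

```python
from math import comb

alpha = 0.05
k = 4           # number of groups, as in the red dye study below
m = comb(k, 2)  # number of distinct pairwise t-tests
# Family-wise error rate: P(at least one false positive) over m independent tests
fwer = 1 - (1 - alpha) ** m
print(m, round(fwer, 3))  # 6 0.265
```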
After this notebook you will know:
* how to conduct one-way ANOVA (analysis of variance) between multiple groups of interval or ratio data.
* how to do a Tukey's range test.
### About the Data
The dataset we will work with is from an experiment testing the connection between red dye no. 40 and the occurrence of cancer in mice. There are three treatment groups receiving different size doses and a control. Here are some more details about the data.
Name: reddye40.csv
Title: Red Dye 40 and Cancer in Mice
Source: Journal Natl. Cancer Inst., Vol. 66, p 197-212
Description: S.W. Laagakos and F. Mosteller of Harvard University fed mice different doses of red dye number 40 and recorded the time of death in weeks. Results for female mice, dosage and time of death are shown in the data:
* X1 = time of death for control group
* X2 = time of death for group with low dosage
* X3 = time of death for group with medium dosage
* X4 = time of death for group with high dosage
The following cell will import the red dye 40 cancer data using pandas. The data is formatted as a CSV (comma-separated values) file. Such a file could be opened in Excel, but here we simply load the file into a pandas data structure called a dataframe and print out the first couple of rows of the dataframe.
```
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
import scipy.stats as stats # some useful stuff
url = "https://raw.githubusercontent.com/prof-groff/evns462/master/data/reddye40.csv"
reddye = pd.read_csv(url)
reddye
```
### One-Way ANOVA Hypothesis Testing
ANOVA allows us to test the null hypothesis that there is no difference between the means of different groups in a study. For the red dye 40 data, the null hypothesis would be that there is no difference in the mean time of death in weeks between mice receiving no dose (control), a low dose, a medium dose, or a high dose of red dye 40.
* H<sub>0</sub>: x1_bar = x2_bar = x3_bar = x4_bar at α = 0.05
* H<sub>A</sub>: The means are not all equal. That is, at least one of the means is different from the rest.
The test statistic for ANOVA is called the F-statistic and is defined as the ratio of mean squared error between groups divided by the mean squared error within groups.
F = MSB/MSE
where
MSB = SUM(nj(xj_bar - x_bar)^2) / (k-1)
* The sum is taken over all k groups, nj is the number of data values in group j, xj_bar is the mean of group j, x_bar is the grand mean, which is the mean of all data values in all groups. The degrees of freedom between groups is k-1
MSE = SUM(SUM(x-xj_bar)^2) / (N-k)
* The inner sum is taken over all data values in group j and the other sum is taken over all k groups. The degrees of freedom within groups is N-k, where N is the total number of data values in all groups.
The F-critical value for the stated significance level can be found in F-tables like [these](http://www.socr.ucla.edu/applets.dir/f_table.html) or using a calculator like [this one](https://www.danielsoper.com/statcalc/calculator.aspx?id=4). There is a different F-table for each significance level. The columns are for different between-group degrees of freedom (k-1) and the rows are for different within-group degrees of freedom (N-k). For the red dye data dfB = k-1 = 3 and dfW = N-k = 34, giving an F-critical value of approximately 2.88.
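Rather than reading an F-table, the critical value can also be computed directly from the F distribution in `scipy.stats` (a sketch; with N = 38 values in k = 4 groups, dfB = 3 and dfW = 34):

```python
import scipy.stats as stats

k, N = 4, 38
dfB, dfW = k - 1, N - k
# Percent point function (inverse CDF) of the F distribution at 1 - alpha
f_crit = stats.f.ppf(1 - 0.05, dfn=dfB, dfd=dfW)
print(round(f_crit, 2))  # approximately 2.88
```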
Below the F-statistic is calculated using the formulas above and again using a built in python function that is much easier to use.
**NOTE: ANOVA assumes that the data in each group is normally distributed and the various groups have uniform variance. In practice, the ANOVA test works well if the data is decently normal and the largest group variance is no more than about 3 times the smallest group variance. (More arbitrary rules?)**
```
# FIRST LET'S PULL OUT THE FOUR GROUPS. Notice that the number of mice in each sample is different.
groups = ['X1 control', 'X2 low dose', 'X3 medium dose', 'X4 high dose']
x1 = reddye[reddye[groups[0]]>0][groups[0]]
x2 = reddye[reddye[groups[1]]>0][groups[1]]
x3 = reddye[reddye[groups[2]]>0][groups[2]]
x4 = reddye[reddye[groups[3]]>0][groups[3]]
# NOW LET'S FIND THE SIZE OF EACH GROUP ...
n1 = len(x1)
n2 = len(x2)
n3 = len(x3)
n4 = len(x4)
N = n1+n2+n3+n4 # 38 data values in all groups
# AND CALCULATE dfB and dfW
k = 4 # 4 groups
dfB = k-1
dfW = N-k
# NOW CALCULATE THE GRAND MEAN ...
x_bar = (n1*x1.mean() + n2*x2.mean() + n3*x3.mean() + n4*x4.mean())/N
print(x_bar)
# THE SUM OF SQUARES BETWEEN GROUPS ...
SSB = n1*(x1.mean()-x_bar)**2 + n2*(x2.mean()-x_bar)**2 + n3*(x3.mean()-x_bar)**2 + n4*(x4.mean()-x_bar)**2
print(SSB)
# AND THE SUM OF SQUARES WITHIN GROUPS ...
SSW = sum(((x1 - x1.mean()))**2) + sum(((x2 - x2.mean()))**2) + sum(((x3 - x3.mean()))**2) + sum(((x4 - x4.mean()))**2)
print(SSW)
# NOW CALCULATE THE F-STATISTIC
F = (SSB/dfB)/(SSW/dfW)
print("F = ", F)
# Now, let's do this the easy way using a stats function.
F, p = stats.f_oneway(x1, x2, x3, x4)
print("F = ", F, " p = ", p)
```
### Repeat the Analysis with "Flattened" Data
Perhaps you noticed that the dataframe used here is a bit strange in that not all of the columns have the same number of elements. This is because the different columns represent different treatment groups with different sample sizes. A better way to organize the data may be to make the treatment group a dimension of the data set. Many of you will have data sets structured like this. Let's repeat the analysis with the data reformatted in this way.
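This wide-to-long reshaping can also be done in code with `pandas.melt`, without needing a second CSV; a minimal sketch on made-up values:

```python
import pandas as pd

# Wide format: one column per treatment group (made-up values)
wide = pd.DataFrame({"X1 control": [70, 77], "X2 low dose": [49, 60]})

# Long ("flattened") format: one row per observation
long_df = wide.melt(var_name="treatment group", value_name="weeks till death")
print(long_df)  # 4 rows: X1 control (70, 77), then X2 low dose (49, 60)
```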
```
# import the flattened data and view it
url = "https://raw.githubusercontent.com/prof-groff/evns-462/master/data/reddye40_flat.csv"
reddye2 = pd.read_csv(url)
reddye2
# group by treatment group
groups = reddye2.groupby('treatment group')
x1 = groups.get_group('X1 control')
x2 = groups.get_group('X2 low dose')
x3 = groups.get_group('X3 medium dose')
x4 = groups.get_group('X4 high dose')
# each of these groups are now different data frames with two columns
# we only want the "weeks till death" column though
x1 = x1['weeks till death']
x2 = x2['weeks till death']
x3 = x3['weeks till death']
x4 = x4['weeks till death']
# now do ANOVA, observe the same result as before
F, p = stats.f_oneway(x1,x2,x3,x4)
print("F = ", F, " p = ", p)
```
### Interpreting the Result
Since the F-statistic is greater than F-critical, we reject the null and accept the alternative hypothesis: the means of the groups are not all the same. But this doesn't tell us which mean or means are different. To determine this we could proceed to do independent-sample t-tests or explore the data some other way. Let's do a test called Tukey's range test.
```
# LET'S JUST LOOK AT SOME SUMMARY STATISTICS FOR EACH GROUP
groups.describe()
# Now let's do the Tukey test
from statsmodels.stats.multicomp import pairwise_tukeyhsd
tukey = pairwise_tukeyhsd(endog=reddye2['weeks till death'], groups=reddye2['treatment group'], alpha=0.05)
tukey.summary() # See test summary
# and plot the group confidence intervals
tukey.plot_simultaneous()
plt.show()
```
### Interpreting the Results
The results of the Tukey test show that the only statistically significant difference is group X1 (the control) compared to group X4 (high dose).
```
import pandas as pd
from sklearn.metrics import mean_squared_error
from scipy.optimize import curve_fit
from scipy.optimize import fsolve
import matplotlib.pyplot as plt
import numpy as np
from datetime import datetime, timedelta
def logistic_model(x, a, b, c):
return c / (1 + np.exp(-(x - b) / a))
def exponential_model(x, a, b, c):
return a * np.exp( b * (x - c))
# Data
df_original = pd.read_csv("https://covid.ourworldindata.org/data/total_cases.csv")
# arguments
country = "Spain"
first_day = datetime.strptime('2020-01-01', '%Y-%m-%d')
p0_log = [5, 20, 40000]
p0_exp = [0.5, 0.5, 0.5]
```
We select the country given in `country` and create a `days` column indicating how many days have elapsed since January 1. Then `x` and `y` are created as lists of the `days` column and the country's case counts, respectively.
```
df = df_original
df = df[['date', country]]
df = df[True != df[country].isna()]
df = df.assign(days = df['date'].map(lambda x : (datetime.strptime(x, '%Y-%m-%d') - first_day).days))
x = list(df.iloc[:, 2])
y = list(df.iloc[:, 1])
x
y
```
The `logistic_model` function is then used with `curve_fit`, but I don't really understand what it does.
I gather that in the `p0` parameter, the contents of `p0_log` correspond to:
$$
\frac{40000}{1 + e^{-(x - 20)/5}}
$$
but I don't quite understand why those parameters.
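What `curve_fit` does is nonlinear least squares: it adjusts a, b, c until the model best matches the data, and `p0` is just the optimizer's starting guess (roughly: steepness ≈ 5 days, midpoint ≈ day 20, plateau ≈ 40000 cases), not part of the model itself. A minimal sketch on synthetic data showing that a rough `p0` is enough:

```python
import numpy as np
from scipy.optimize import curve_fit

def logistic_model(x, a, b, c):
    return c / (1 + np.exp(-(x - b) / a))

# Synthetic "case counts" generated with known parameters a=5, b=20, c=40000
x = np.arange(0, 60)
y = logistic_model(x, 5, 20, 40000)

# Even a rough starting guess lets the optimizer recover the true values
fit, _ = curve_fit(logistic_model, x, y, p0=[2, 10, 10000], maxfev=5000)
print(np.round(fit))  # ~[5, 20, 40000]
```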
```
fit = curve_fit(logistic_model, xdata=x, ydata=y, p0=p0_log, maxfev=2000)
a, b, c = fit[0]
errors = np.sqrt(np.diag(fit[1]))
a
b
c
```
Then, with the `fsolve` function, I don't really know what it does, because it is told to solve `logistic_model` with `b` as its main argument, so that it becomes
$$
\frac{17189.69}{1 + e^{-(75.60 - 75.60)/2.37}} - 17189.69 = \frac{17189.69}{1 + e^{0}} - 17189.69
$$
but I don't understand why it has to be solved through `b`.
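In other words, `fsolve` searches for a root of f(z) = `logistic_model(z, a, b, c) - int(c)`, i.e. the day z on which the curve first reaches the truncated plateau value; `b` is only the *starting guess* for that root search (the inflection day is a sensible place to start), not what is being solved for. A small sketch using the fitted values quoted above:

```python
import numpy as np
from scipy.optimize import fsolve

def logistic_model(x, a, b, c):
    return c / (1 + np.exp(-(x - b) / a))

# Fitted parameters quoted in the text above
a, b, c = 2.37, 75.60, 17189.69

# Root of logistic_model(z) - int(c), starting the search at z = b
sol = fsolve(lambda z: logistic_model(z, a, b, c) - int(c), b)
print(int(sol[0]))  # day (counted from 1 January) when the curve reaches int(c)
```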
```
sol = int(fsolve(lambda z : logistic_model(z, a, b, c) - int(c), b))
last_day = datetime.strftime(first_day + timedelta(days=sol), '%Y-%m-%d')
```
In the end, `sol` determines the number of days for the prediction. I suppose the key is in fact that `b` corresponds to days, but I'm not sure.
```
print("Last day of infections : ", last_day , " (approximately)")
exp_fit = curve_fit(exponential_model, x, y, p0=p0_exp)
pred_x = list(range(max(x), sol))
fig = plt.figure(figsize = (10, 10))
plt.scatter(df.iloc[:, 2], df.iloc[:, 1], label='Actual data')
plt.plot(x+pred_x, [logistic_model(i,fit[0][0],fit[0][1],fit[0][2]) for i in x+pred_x], label="Logistic curve", alpha=0.7, color="green")
plt.plot(x+pred_x, [exponential_model(i,exp_fit[0][0],exp_fit[0][1],exp_fit[0][2]) for i in x+pred_x], label="Exponential curve",alpha=0.6, color = "red")
plt.legend()
plt.xlabel("Days from 1 January 2020")
plt.ylabel("Number of infected people")
plt.ylim((min(y)*0.9,c*1.1))
plt.show()
```
```
%load_ext autoreload
%autoreload 2
```
# Forecast like observations
Use observation files to produce new files that fit the shape of a forecast file.
That makes them easier to use for ML purposes.
At the core of this task is the forecast_like_observations provided by the organizers.
This notebook loads the appropriate forecasts and calls this function to generate corresponding obs from our own set of obs files.
The obs files were modified to make them more consistent with respect to NaNs; see *land-mask-investigate.ipynb*.
```
import climetlab as cml
import climetlab_s2s_ai_challenge
import dask
import dask.array as da
import dask.distributed
import dask_jobqueue
import pathlib
import xarray as xr
from crims2s.util import fix_dataset_dims
DATA_PATH = '***BASEDIR***'
data_path = pathlib.Path(DATA_PATH)
```
## Boot dask cluster
```
cluster = dask_jobqueue.SLURMCluster(env_extra=['source ***HOME***.bash_profile','conda activate s2s'])
cluster.scale(jobs=4)
client = dask.distributed.Client(cluster)
client
```
## Temperature
```
forecast_dir = data_path / 'training-input'
forecast_files = [f for f in forecast_dir.iterdir() if 'ecmwf' in f.stem and 't2m' in f.stem]
forecast_files[:10]
forecast = xr.open_mfdataset(forecast_files, preprocess=fix_dataset_dims)
obs = xr.open_dataset(data_path / 'obs_t2m_interp_remask.nc')
forecast_shaped_t2m = climetlab_s2s_ai_challenge.extra.forecast_like_observations(forecast, obs)
forecast_shaped_t2m
sample = forecast_shaped_t2m.isel(forecast_dayofyear=0, forecast_year=10, lead_time=40)
sample.valid_time.item()
(sample == obs.sel(time=sample.valid_time)).t2m.plot()
```
Seems legit!
```
forecast_shaped_t2m.isel(forecast_year=0).to_netcdf(data_path / 'processed' / 'training-output-reference' / f'obs_t2m_forecast_shape_2000.nc')
forecast_shaped_t2m.isel(forecast_year=[0])
forecast_files[:10]
for f in forecast_files:
print(f)
forecast = fix_dataset_dims(xr.open_dataset(f))
forecast_shaped_t2m = climetlab_s2s_ai_challenge.extra.forecast_like_observations(forecast, obs)
day_of_year = forecast_shaped_t2m.forecast_time.dt.dayofyear[0].item()
forecast_shaped_t2m = forecast_shaped_t2m.expand_dims('forecast_dayofyear').assign_coords(forecast_dayofyear=[day_of_year])
forecast_shaped_t2m.to_netcdf(data_path / 'processed' / 'training-output-reference' / f'obs_t2m_forecast_shape_{day_of_year:03}.nc')
for y in forecast_shaped_t2m.forecast_year:
print(y.item())
for y in forecast_shaped_t2m.forecast_year:
print(y.item())
forecast_shaped_t2m.sel(forecast_year=[y]).to_netcdf(data_path / 'processed' / 'training-output-reference' / f'obs_t2m_forecast_shape_{y.item()}.nc')
forecast_shaped_t2m.to_netcdf(data_path / 'obs_t2m_forecast_shape.nc')
forecast_shaped_t2m.to_netcdf('***BASEDIR***obs_t2m_forecast_shape.nc')
del obs
del forecast
del forecast_shaped_t2m
```
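The `expand_dims`/`assign_coords` pattern used in the loop above tags each file's data with a one-element `forecast_dayofyear` dimension. A minimal standalone sketch of that idiom, on a made-up 2×2 field:

```python
import numpy as np
import xarray as xr

# Hypothetical 2x2 field with no day-of-year information yet
ds = xr.Dataset({"t2m": (("lat", "lon"), np.zeros((2, 2)))})

# Add a length-1 leading dimension and label it with the day of year
ds = ds.expand_dims("forecast_dayofyear").assign_coords(forecast_dayofyear=[2])
print(ds.t2m.dims)  # ('forecast_dayofyear', 'lat', 'lon')
```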
## Precipitation
```
forecast_dir = data_path / 'training-input'
forecast_files = [f for f in forecast_dir.iterdir() if 'ecmwf' in f.stem and 'tp' in f.stem]
forecast_files[:10]
obs = xr.open_dataset(data_path / 'obs_pr_interp_remask.nc')
for f in forecast_files:
forecast = fix_dataset_dims(xr.open_dataset(f))
forecast_shaped_tp = climetlab_s2s_ai_challenge.extra.forecast_like_observations(forecast, obs)
day_of_year = forecast_shaped_tp.forecast_time.dt.dayofyear[0].item()
forecast_shaped_tp = forecast_shaped_tp.expand_dims('forecast_dayofyear').assign_coords(forecast_dayofyear=[day_of_year])
forecast_shaped_tp.to_netcdf(data_path / 'processed' / 'training-output-reference' / f'obs_tp_forecast_shape_{day_of_year:03}.nc')
forecast_shaped_tp.forecast_time.dt.day[0].item()
day_of_year = 289
forecast_shaped_tp.to_netcdf(data_path / 'processed' / 'training-output-reference' / f'obs_tp_forecast_shape_{day_of_year:03}.nc')
forecast_shaped_tp
sample = forecast.isel(forecast_year=10, lead_time=10)
sample
obs
forecast_shaped_tp
sample = forecast_shaped_tp.isel(forecast_year=10, lead_time=15)
sample
obs_of_sample = obs.sel(time=slice(sample.forecast_time, sample.forecast_time + sample.lead_time)).isel(time=slice(None, -1))
obs_of_sample
(obs_of_sample.sum(dim='time').pr == sample.tp).plot()
```
Seems legit! Don't forget to exclude the last day when computing the cumulative sum.
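A minimal numpy sketch (hypothetical daily values, not the real dataset) of the check above: the observation window must drop its last day before summing so the total matches the forecast accumulation.

```python
import numpy as np

# Hypothetical daily precipitation over a 5-day lead window.
daily = np.array([1.0, 2.0, 3.0, 4.0, 5.0])

# The accumulated value at the end of the window covers all days except the
# last one of the inclusive slice, mirroring .isel(time=slice(None, -1)) above.
window = daily[:-1]
total = window.sum()
```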
```
import pandas as pd
import pandera as pa
valores_ausentes = ['**','###!','####','****','*****','NULL']
df = pd.read_csv("data.csv", sep=";", parse_dates=['ocorrencia_dia'], dayfirst=True, na_values=valores_ausentes)
df.head(10)
schema = pa.DataFrameSchema(
columns = {
"codigo_ocorrencia": pa.Column(pa.Int),
"codigo_ocorrencia2": pa.Column(pa.Int),
"ocorrencia_classificacao": pa.Column(pa.String),
"ocorrencia_cidade": pa.Column(pa.String),
"ocorrencia_uf": pa.Column(pa.String, pa.Check.str_length(2,2), nullable=True),
"ocorrencia_aerodromo": pa.Column(pa.String, nullable=True),
"ocorrencia_dia": pa.Column(pa.DateTime),
"ocorrencia_hora": pa.Column(pa.String, pa.Check.str_matches(r'^([0-1]?[0-9]|[2][0-3]):([0-5][0-9])(:[0-5][0-9])?$'), nullable=True),
"total_recomendacoes": pa.Column(pa.Int)
}
)
schema.validate(df)
df.dtypes
df.loc[1]
df.iloc[1]
df.iloc[-1]
df.tail()
df.iloc[10:15]
df.loc[10:15]
df.loc[:,'ocorrencia_uf']
df['ocorrencia_uf']
df.isna().sum()
df.isnull().sum()
filtro = df.ocorrencia_uf.isnull()
df.loc[filtro]
filtro = df.ocorrencia_aerodromo.isnull()
df.loc[filtro]
filtro = df.ocorrencia_hora.isnull()
df.loc[filtro]
df.count()
# occurrences with more than 10 recommendations
filtro = df.total_recomendacoes > 10
df.loc[filtro]
# occurrences with more than 10 recommendations
filtro = df.total_recomendacoes > 10
df.loc[filtro, ['ocorrencia_cidade', 'total_recomendacoes']]
# occurrences where classification == INCIDENTE GRAVE
filtro = df.ocorrencia_classificacao == 'INCIDENTE GRAVE'
df.loc[filtro]
# occurrences where classification == INCIDENTE GRAVE and state == SP
filtro1 = df.ocorrencia_classificacao == 'INCIDENTE GRAVE'
filtro2 = df.ocorrencia_uf == 'SP'
df.loc[filtro1 & filtro2]
# occurrences where classification == INCIDENTE GRAVE or state == SP
filtro1 = df.ocorrencia_classificacao == 'INCIDENTE GRAVE'
filtro2 = df.ocorrencia_uf == 'SP'
df.loc[filtro1 | filtro2]
# occurrences where (classification == INCIDENTE GRAVE or classification == INCIDENTE) and state == SP
filtro1 = df.ocorrencia_classificacao.isin(['INCIDENTE GRAVE', 'INCIDENTE'])
filtro2 = df.ocorrencia_uf == 'SP'
df.loc[filtro1 & filtro2]
# occurrences whose city starts with the letter C
filtro = df.ocorrencia_cidade.str[0] == 'C'
df.loc[filtro]
# occurrences whose city ends with the letter A
filtro = df.ocorrencia_cidade.str[-1] == 'A'
df.loc[filtro]
# occurrences whose city ends with the characters MA
filtro = df.ocorrencia_cidade.str[-2:] == 'MA'
df.loc[filtro]
# occurrences whose city contains (anywhere in the value) the characters MA or AL
filtro = df.ocorrencia_cidade.str.contains('MA|AL')
df.loc[filtro]
# occurrences from the year 2015
filtro = df.ocorrencia_dia.dt.year == 2015
df.loc[filtro]
df.dtypes
# occurrences from 2015, month 12, days between 3 and 8
filtro_ano = df.ocorrencia_dia.dt.year == 2015
filtro_mes = df.ocorrencia_dia.dt.month == 12
filtro_dia_inicio = df.ocorrencia_dia.dt.day > 2
filtro_dia_fim = df.ocorrencia_dia.dt.day < 9
df.loc[filtro_ano & filtro_mes & filtro_dia_inicio & filtro_dia_fim]
df['ocorrencia_dia_hora'] = pd.to_datetime(df.ocorrencia_dia.astype(str) + ' ' + df.ocorrencia_hora)
df.head()
df.dtypes
# occurrences from 2015, month 12, days between 3 and 8
filtro_ano = df.ocorrencia_dia_hora.dt.year == 2015
filtro_mes = df.ocorrencia_dia_hora.dt.month == 12
filtro_dia_inicio = df.ocorrencia_dia_hora.dt.day > 2
filtro_dia_fim = df.ocorrencia_dia_hora.dt.day < 9
df.loc[filtro_ano & filtro_mes & filtro_dia_inicio & filtro_dia_fim]
filtro1 = df.ocorrencia_dia_hora >= '2015-12-03 11:00:00'
filtro2 = df.ocorrencia_dia_hora <= '2015-12-08 14:30:00'
df.loc[filtro1 & filtro2]
# occurrences from 2015, month 03
filtro1 = df.ocorrencia_dia.dt.year == 2015
filtro2 = df.ocorrencia_dia.dt.month == 3
df201503 = df.loc[filtro1 & filtro2]
df201503
df201503.count()
df201503.groupby(['ocorrencia_classificacao']).codigo_ocorrencia.count()
df201503.groupby(['ocorrencia_classificacao']).ocorrencia_aerodromo.count()
df201503.groupby(['ocorrencia_classificacao']).size()
df201503.groupby(['ocorrencia_classificacao']).size().sort_values()
df201503.groupby(['ocorrencia_classificacao']).size().sort_values(ascending=False)
filtro1 = df.ocorrencia_dia.dt.year == 2010
filtro2 = df.ocorrencia_uf.isin(['SP','MG','ES','RJ'])
dfsudeste2010 = df.loc[filtro1 & filtro2]
dfsudeste2010
dfsudeste2010.groupby(['ocorrencia_classificacao']).size()
dfsudeste2010.count()
dfsudeste2010.groupby(['ocorrencia_uf', 'ocorrencia_classificacao']).size()
dfsudeste2010.groupby(['ocorrencia_cidade']).size().sort_values(ascending=False)
filtro1 = dfsudeste2010.ocorrencia_cidade == 'RIO DE JANEIRO'
filtro2 = dfsudeste2010.total_recomendacoes > 0
dfsudeste2010.loc[filtro1 & filtro2]
filtro = dfsudeste2010.ocorrencia_cidade == 'RIO DE JANEIRO'
dfsudeste2010.loc[filtro].total_recomendacoes.sum()
dfsudeste2010.groupby(['ocorrencia_aerodromo'], dropna=False).total_recomendacoes.sum()
dfsudeste2010.groupby(['ocorrencia_cidade']).total_recomendacoes.sum()
filtro = dfsudeste2010.total_recomendacoes > 0
dfsudeste2010.loc[filtro].groupby(['ocorrencia_cidade']).total_recomendacoes.sum().sort_values()
dfsudeste2010.loc[filtro].groupby(['ocorrencia_cidade', dfsudeste2010.ocorrencia_dia.dt.month]).total_recomendacoes.sum()
filtro1 = dfsudeste2010.total_recomendacoes > 0
filtro2 = dfsudeste2010.ocorrencia_cidade == 'SÃO PAULO'
dfsudeste2010.loc[filtro1 & filtro2]
```
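A quick illustration (with tiny hypothetical data) of why `.size()` and `.count()` can disagree in the groupby calls above: `.size()` counts rows per group, while `.count()` counts only non-null values.

```python
import pandas as pd

# Two rows in group 'a', but one of them has a missing value in column 'v'.
df = pd.DataFrame({'g': ['a', 'a', 'b'], 'v': [1.0, None, 3.0]})

sizes = df.groupby('g').size()      # rows per group
counts = df.groupby('g').v.count()  # non-null values of 'v' per group
```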
Author: Xi Ming.
## Build a Multilayer Perceptron from Scratch with PyTorch.
PyTorch's automatic differentiation mechanism can help quickly implement multilayer perceptrons.
### Import Packages.
```
import torch
import torchvision
import torch.nn as nn
from torchvision import datasets,transforms
from torch.utils.data import DataLoader
import numpy as np
print('pytorch version:',torch.__version__,'\ntorchvision version: ',torchvision.__version__,'\nnumpy version:' ,np.__version__)
```
### Settings
```
# model runs on GPU or CPU
device = torch.device("cuda:0" if torch.cuda.is_available() else "cpu")
# Hyperparameters
learning_rate = 1e-2
momentum = 0.9
num_epochs = 10
batch_size = 128
# Architecture
num_features = 784
num_hidden_1 = 400
num_hidden_2 = 200
num_classes = 10
```
### Dataset: MNIST
```
train_dataset = datasets.MNIST(root='data',
train=True,
transform=transforms.Compose([
transforms.ToTensor(),
transforms.Normalize((0.1307,), (0.3081,))]),
download=True)
test_dataset = datasets.MNIST(root='data',
train=False,
transform=transforms.Compose([
transforms.ToTensor(),
transforms.Normalize((0.1307,), (0.3081,))]))
train_loader = DataLoader(dataset=train_dataset,
batch_size=batch_size, shuffle=True)
test_loader = DataLoader(dataset=test_dataset,
batch_size=batch_size, shuffle=False)
# Checking the dataset
for images, labels in train_loader:
print('Image batch dimensions:', images.shape)
print('Image label dimensions:', labels.shape)
break
```
### Define model
```
class MultilayerPerceptron(nn.Module):
def __init__(self, num_features, num_classes):
super(MultilayerPerceptron, self).__init__()
self.model = nn.Sequential(
nn.Linear(num_features, num_hidden_1),
nn.Sigmoid(),
nn.Linear(num_hidden_1, num_hidden_2),
nn.Sigmoid(),
nn.Linear(num_hidden_2, num_classes)
)
def forward(self, x):
x = self.model(x)
return x
```
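A minimal standalone sketch (same 784→400→200→10 layout as the class above) confirming that the forward pass maps a flattened batch to class logits:

```python
import torch
import torch.nn as nn

# Mirror of the architecture above: two sigmoid hidden layers, linear output.
mlp = nn.Sequential(
    nn.Linear(784, 400), nn.Sigmoid(),
    nn.Linear(400, 200), nn.Sigmoid(),
    nn.Linear(200, 10),
)
x = torch.randn(8, 784)  # a fake batch of 8 flattened 28x28 images
logits = mlp(x)
```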
### Init model, define optimizer and loss function
```
model = MultilayerPerceptron(num_features=num_features,
num_classes=num_classes)
model = model.to(device)
optimizer = torch.optim.SGD(model.parameters(), lr=learning_rate, momentum=momentum)
criterion = nn.CrossEntropyLoss()
```
### Training model
```
train_loss_list = []
test_acc_list = []
for epoch in range(num_epochs):
model.train()
for batch_idx, (data, target) in enumerate(train_loader):
data, target = data.to(device), target.to(device)
data = data.view(-1, 28*28)
# forward
logits = model(data)
loss = criterion(logits, target)
# backprop
optimizer.zero_grad()
loss.backward()
optimizer.step()
if batch_idx % 100 == 0:
print('Train Epoch: {} [{}/{} ({:.0f}%)]\tLoss: {:.6f}'.format(
epoch, batch_idx * len(data), len(train_loader.dataset),
100. * batch_idx / len(train_loader), loss.data.item()))
train_loss_list.append(loss.data.item())
test_loss = 0
correct = 0
model.eval()
with torch.no_grad():
# test
total_correct = 0
total_num = 0
for data, target in test_loader:
data, target = data.to(device), target.to(device)
data = data.view(-1, 28*28)
logits = model(data)
test_loss += criterion(logits, target).item()
pred = logits.data.max(1)[1]
correct += pred.eq(target.data).sum().item()
test_loss /= len(test_loader.dataset)
test_acc = 100. * correct / len(test_loader.dataset)
print('\nTest set: Average loss: {:.4f}, Accuracy: {}/{} ({:.0f}%)\n'.format(
test_loss, correct, len(test_loader.dataset), test_acc))
test_acc_list.append(test_acc)
```
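The accuracy bookkeeping in the test loop boils down to an argmax over logits compared against the targets; a tiny self-contained sketch with made-up logits:

```python
import torch

logits = torch.tensor([[0.1, 2.0],   # predicted class 1
                       [3.0, 0.5],   # predicted class 0
                       [0.2, 0.3]])  # predicted class 1
target = torch.tensor([1, 0, 0])

pred = logits.max(1)[1]              # index of the max logit per row
correct = pred.eq(target).sum().item()
```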
### Plot Training index curve
```
import matplotlib
import matplotlib.pyplot as plt
x = np.arange(0, num_epochs)
plt.title("Training index curve")
plt.plot(x, train_loss_list, label='train loss')
plt.xlabel('epochs')
plt.ylabel('train loss')
plt.show()
plt.title("Training index curve")
plt.plot(x, test_acc_list, label='test accuracy')
plt.xlabel('epochs')
plt.ylabel('test acc')
plt.show()
```
### Visual Inspection
```
for features, targets in test_loader:
break
fig, ax = plt.subplots(1, 4)
# use the freshly fetched test batch, not the leftover `data` from the eval loop
features = features.to('cpu')
for i in range(4):
ax[i].imshow(features[i].view(28, 28), cmap=matplotlib.cm.binary)
plt.show()
features = features.to(device)
predictions = model.forward(features[:4].view(-1, 28*28))
predictions = torch.argmax(predictions, dim=1)
print('Predicted labels', predictions)
```
```
import os
os.environ['CUDA_VISIBLE_DEVICES'] = '0'
os.system('rm -rf tacotron2-female-alignment')
os.system('mkdir tacotron2-female-alignment')
import tensorflow as tf
import numpy as np
from glob import glob
import tensorflow as tf
import malaya_speech
import malaya_speech.train
from malaya_speech.train.model import tacotron2_nvidia as tacotron2
import malaya_speech.config
import numpy as np
import json
import malaya_speech.train as train
def norm_mean_std(x, mean, std):
zero_idxs = np.where(x == 0.0)[0]
x = (x - mean) / std
x[zero_idxs] = 0.0
return x
def average_by_duration(x, durs):
mel_len = durs.sum()
durs_cum = np.cumsum(np.pad(durs, (1, 0)))
x_char = np.zeros((durs.shape[0],), dtype=np.float32)
for idx, start, end in zip(range(mel_len), durs_cum[:-1], durs_cum[1:]):
values = x[start:end][np.where(x[start:end] != 0.0)[0]]
x_char[idx] = np.mean(values) if len(values) > 0 else 0.0
return x_char.astype(np.float32)
f0_stat = np.load('../speech-bahasa/female-stats/stats_f0.npy')
energy_stat = np.load('../speech-bahasa/female-stats/stats_energy.npy')
with open('mels-female.json') as fopen:
files = json.load(fopen)
reduction_factor = 1
maxlen = 904
minlen = 32
pad_to = 8
data_min = 1e-2
_pad = 'pad'
_start = 'start'
_eos = 'eos'
_punctuation = "!'(),.:;? "
_special = '-'
_letters = 'ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz'
MALAYA_SPEECH_SYMBOLS = (
[_pad, _start, _eos] + list(_special) + list(_punctuation) + list(_letters)
)
def generate(files):
for f in files:
f = f.decode()
mel = np.load(f)
mel_length = len(mel)
if mel_length > maxlen or mel_length < minlen:
continue
stop_token_target = np.zeros([len(mel)], dtype = np.float32)
text_ids = np.load(f.replace('mels', 'text_ids'), allow_pickle = True)[
0
]
text_input = np.array(
[
MALAYA_SPEECH_SYMBOLS.index(c)
for c in text_ids
if c in MALAYA_SPEECH_SYMBOLS
]
)
num_pad = pad_to - ((len(text_input) + 2) % pad_to)
text_input = np.pad(
text_input, ((1, 1)), 'constant', constant_values = ((1, 2))
)
text_input = np.pad(
text_input, ((0, num_pad)), 'constant', constant_values = 0
)
num_pad = pad_to - ((len(mel) + 1) % pad_to) + 1
pad_value_mel = np.log(data_min)
mel = np.pad(
mel,
((0, num_pad), (0, 0)),
'constant',
constant_values = pad_value_mel,
)
stop_token_target = np.pad(
stop_token_target, ((0, num_pad)), 'constant', constant_values = 1
)
len_mel = [len(mel)]
len_text_ids = [len(text_input)]
f0 = np.load(f.replace('mels', 'f0s'))
num_pad = pad_to - ((len(f0) + 1) % pad_to) + 1
f0 = np.pad(
f0,
((0, num_pad)),
'constant',
)
f0 = norm_mean_std(f0, f0_stat[0], f0_stat[1])
len_f0 = [len(f0)]
energy = np.load(f.replace('mels', 'energies'))
num_pad = pad_to - ((len(energy) + 1) % pad_to) + 1
energy = np.pad(
energy,
((0, num_pad)),
'constant',
)
energy = norm_mean_std(energy, energy_stat[0], energy_stat[1])
len_energy = [len(energy)]
yield {
'mel': mel,
'text_ids': text_input,
'len_mel': len_mel,
'len_text_ids': len_text_ids,
'stop_token_target': stop_token_target,
'f0': f0,
'len_f0': len_f0,
'energy': energy,
'len_energy': len_energy,
'f': [f]
}
def parse(example):
mel_len = example['len_mel'][0]
input_len = example['len_text_ids'][0]
g = tacotron2.generate_guided_attention(mel_len, input_len, reduction_factor = reduction_factor)
example['g'] = g
return example
def get_dataset(files, batch_size = 32, shuffle_size = 32, thread_count = 24):
def get():
dataset = tf.data.Dataset.from_generator(
generate,
{
'mel': tf.float32,
'text_ids': tf.int32,
'len_mel': tf.int32,
'len_text_ids': tf.int32,
'stop_token_target': tf.float32,
'f0': tf.float32,
'len_f0': tf.int32,
'energy': tf.float32,
'len_energy': tf.int32,
'f': tf.string
},
output_shapes = {
'mel': tf.TensorShape([None, 80]),
'text_ids': tf.TensorShape([None]),
'len_mel': tf.TensorShape([1]),
'len_text_ids': tf.TensorShape([1]),
'stop_token_target': tf.TensorShape([None]),
'f0': tf.TensorShape([None]),
'len_f0': tf.TensorShape([1]),
'energy': tf.TensorShape([None]),
'len_energy': tf.TensorShape([1]),
'f': tf.TensorShape([1]),
},
args = (files,),
)
dataset = dataset.map(parse, num_parallel_calls = thread_count)
dataset = dataset.padded_batch(
shuffle_size,
padded_shapes = {
'mel': tf.TensorShape([None, 80]),
'text_ids': tf.TensorShape([None]),
'len_mel': tf.TensorShape([1]),
'len_text_ids': tf.TensorShape([1]),
'g': tf.TensorShape([None, None]),
'stop_token_target': tf.TensorShape([None]),
'f0': tf.TensorShape([None]),
'len_f0': tf.TensorShape([1]),
'energy': tf.TensorShape([None]),
'len_energy': tf.TensorShape([1]),
'f': tf.TensorShape([1]),
},
padding_values = {
'mel': tf.constant(0, dtype = tf.float32),
'text_ids': tf.constant(0, dtype = tf.int32),
'len_mel': tf.constant(0, dtype = tf.int32),
'len_text_ids': tf.constant(0, dtype = tf.int32),
'g': tf.constant(-1.0, dtype = tf.float32),
'stop_token_target': tf.constant(0, dtype = tf.float32),
'f0': tf.constant(0, dtype = tf.float32),
'len_f0': tf.constant(0, dtype = tf.int32),
'energy': tf.constant(0, dtype = tf.float32),
'len_energy': tf.constant(0, dtype = tf.int32),
'f': tf.constant('', dtype = tf.string),
},
)
return dataset
return get
features = get_dataset(files['train'])()
features = features.make_one_shot_iterator().get_next()
input_ids = features['text_ids']
input_lengths = features['len_text_ids'][:, 0]
speaker_ids = tf.constant([0], dtype = tf.int32)
mel_outputs = features['mel']
mel_lengths = features['len_mel'][:, 0]
guided = features['g']
stop_token_target = features['stop_token_target']
batch_size = tf.shape(guided)[0]
model = tacotron2.Model(
[input_ids, input_lengths],
[mel_outputs, mel_lengths],
len(MALAYA_SPEECH_SYMBOLS),
)
r = model.decoder_logits['outputs']
decoder_output, post_mel_outputs, alignment_histories, _, _, _ = r
stop_token_predictions = model.decoder_logits['stop_token_prediction']
sess = tf.InteractiveSession()
sess.run(tf.global_variables_initializer())
saver = tf.train.Saver()
saver.restore(sess, 'tacotron2-female/model.ckpt-54000')
import matplotlib.pyplot as plt
def decode(x):
return ''.join([MALAYA_SPEECH_SYMBOLS[i] for i in x])
def get_duration_from_alignment(alignment):
D = np.array([0 for _ in range(np.shape(alignment)[0])])
for i in range(np.shape(alignment)[1]):
max_index = list(alignment[:, i]).index(alignment[:, i].max())
D[max_index] = D[max_index] + 1
return D
count = 0
while True:
try:
o = sess.run([decoder_output, post_mel_outputs, stop_token_predictions, alignment_histories, features])
f = o[-1]
for i in range(len(f['f'])):
file = f['f'][i,0].decode().split('/')[-1]
file = f'tacotron2-female-alignment/{file}'
len_mel = f['len_mel'][i, 0]
len_text_ids = f['len_text_ids'][i, 0]
d = get_duration_from_alignment(o[3][i, :len_text_ids, :len_mel])
assert d.sum() == len_mel
np.save(file, d)
print('done', count)
count += 1
except tf.errors.OutOfRangeError: # raised once the one-shot iterator is exhausted
break
# import pickle
# with open('dataset-mel.pkl', 'wb') as fopen:
# pickle.dump([o[-1], d], fopen)
# import pickle
# with open('a.pkl', 'wb') as fopen:
# pickle.dump([np.reshape(o[0][0], [-1, 80]), np.reshape(o[1][0], [-1, 80]), o[-1]['mel'][0]], fopen)
```
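The duration extraction above can be sketched standalone: each decoder frame credits the input token that has the highest attention weight, so the durations always sum to the mel length. The toy alignment matrix below is made up, not real model output.

```python
import numpy as np

def durations_from_alignment(alignment):
    # alignment: (num_tokens, num_frames) attention weights
    D = np.zeros(alignment.shape[0], dtype=np.int64)
    for i in range(alignment.shape[1]):
        D[alignment[:, i].argmax()] += 1  # credit the strongest token
    return D

align = np.array([[0.9, 0.2, 0.1],
                  [0.1, 0.8, 0.9]])
d = durations_from_alignment(align)
```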
```
import numpy as np
import cvxpy as cp
import networkx as nx
import matplotlib.pyplot as plt
# Problem data
reservations = np.array([110, 118, 103, 161, 140])
flight_capacities = np.array([100, 100, 100, 150, 150])
cost_per_hour = 50
cost_external_company = 75
# Build transportation graph
G = nx.DiGraph()
# Add nodes
G.add_node(0, supply=reservations[0], label="10am")
G.add_node(1, supply=reservations[1], label="12pm")
G.add_node(2, supply=reservations[2], label="2pm")
G.add_node(3, supply=reservations[3], label="4pm")
G.add_node(4, supply=reservations[4], label="6pm")
G.add_node(5, supply=0, label="9pm")
G.add_node(6, supply=-np.sum(reservations), label="NY")
# Edges
M = 1000
# From 10am
G.add_edge(0, 1, cost=2 * cost_per_hour, capacity=M)
G.add_edge(0, 2, cost=4 * cost_per_hour, capacity=M)
G.add_edge(0, 3, cost=6 * cost_per_hour, capacity=M)
G.add_edge(0, 4, cost=8 * cost_per_hour, capacity=M)
G.add_edge(0, 5, cost=11 * cost_per_hour + cost_external_company, capacity=M)
G.add_edge(0, 6, cost=0, capacity=flight_capacities[0])
# From 12pm
G.add_edge(1, 2, cost=2 * cost_per_hour, capacity=M)
G.add_edge(1, 3, cost=4 * cost_per_hour, capacity=M)
G.add_edge(1, 4, cost=6 * cost_per_hour, capacity=M)
G.add_edge(1, 5, cost=9 * cost_per_hour + cost_external_company, capacity=M)
G.add_edge(1, 6, cost=0, capacity=flight_capacities[1])
# From 2pm
G.add_edge(2, 3, cost=2 * cost_per_hour, capacity=M)
G.add_edge(2, 4, cost=4 * cost_per_hour, capacity=M)
G.add_edge(2, 5, cost=7 * cost_per_hour + cost_external_company, capacity=M)
G.add_edge(2, 6, cost=0, capacity=flight_capacities[2])
# From 4pm
G.add_edge(3, 4, cost=2 * cost_per_hour, capacity=M)
G.add_edge(3, 5, cost=5 * cost_per_hour + cost_external_company, capacity=M)
G.add_edge(3, 6, cost=0, capacity=flight_capacities[3])
# From 6pm
G.add_edge(4, 5, cost=3 * cost_per_hour + cost_external_company, capacity=M)
G.add_edge(4, 6, cost=0, capacity=flight_capacities[4])
# From 9pm
G.add_edge(5, 6, cost=0, capacity=M)
# Note minus sign for convention
# In our formulation:
# -> 1 means arc exits node
# -> -1 means arc enters node
A = -nx.incidence_matrix(G, oriented=True)
print("A =\n", A.todense())
# Get weights, capacities, and supply vectors
c = np.array([G[u][v]['cost'] for u,v in G.edges])
u = np.array([G[u][v]['capacity'] for u,v in G.edges])
b = np.array([G.nodes[u]['supply'] for u in G.nodes])
# Solve airline problem
# Note: you need to install GLPK. It is part of CVXOPT.
# Just run:
# pip install cvxopt
#
# GLPK runs the simplex method, which, as you know, returns exactly integral
# solutions at vertices. Other solvers such as ECOS use interior-point methods
# and they return slightly imprecise solutions that are not exactly integral.
x = cp.Variable(len(G.edges))
objective = cp.Minimize(c @ x)
constraints = [A @ x == b, 0 <= x, x <= u]
problem = cp.Problem(objective, constraints)
problem.solve(solver=cp.GLPK)
print("Optimal cost = $", problem.objective.value)
# Show solution
# Note: if some bounds/capacities were not integral, the solution need not be integral
print("x = ", x.value)
fig, ax = plt.subplots(1, 1, figsize=(15, 10))
cmap = plt.cm.Blues
# Positions in 2d plot
layout = {0: np.array([0.0, 0.0]),
1: np.array([1.0, 0.5]),
2: np.array([2.0, 1.0]),
3: np.array([3.0, 0.5]),
4: np.array([4.0, 0.0]),
5: np.array([1.6, -0.3]),
6: np.array([2.0, -2.0]),
}
nx.draw_networkx_nodes(G, layout, node_color='w', edgecolors='k', node_size=2000)
nx.draw_networkx_edges(G, layout, edge_cmap=cmap, edge_color=x.value,
width=2, arrowsize=30, min_target_margin=20)
labels = {u: G.nodes[u]['label'] for u in G.nodes}
nx.draw_networkx_labels(G,layout,labels,font_size=14)
# Print colormap
sm = plt.cm.ScalarMappable(cmap=cmap,
norm=plt.Normalize(vmin=0, vmax=200)
)
cbar = plt.colorbar(sm)
plt.show()
```
```
import tempfile
import urllib.request
train_file = "datasets/thermostat/sample-training-data.csv"
test_file = "datasets/thermostat/test-data.csv"
import pandas as pd
COLUMNS = ["month", "day", "hour", "min", "pirstatus",
"isDay", "extTemp", "extHumidity", "loungeTemp", "loungeHumidity",
"state", "temperature", "label"]
df_train = pd.read_csv(train_file, names=COLUMNS, skipinitialspace=True)
df_test = pd.read_csv(test_file, names=COLUMNS, skipinitialspace=True)
CATEGORICAL_COLUMNS = []
CONTINUOUS_COLUMNS = ["month","day", "hour", "min", "pirstatus",
"isDay", "extTemp", "extHumidity", "loungeTemp", "loungeHumidity"
]
LABEL_COLUMN="label"
df_train[LABEL_COLUMN] = df_train["state"]
df_test[LABEL_COLUMN] = df_test["state"]
print(df_test)
import tensorflow as tf
def input_fn(df):
# Creates a dictionary mapping from each continuous feature column name (k) to
# the values of that column stored in a constant Tensor.
continuous_cols = {k: tf.constant(df[k].values)
for k in CONTINUOUS_COLUMNS}
# Creates a dictionary mapping from each categorical feature column name (k)
# to the values of that column stored in a tf.SparseTensor.
categorical_cols = {k: tf.SparseTensor(
indices=[[i, 0] for i in range(df[k].size)],
values=df[k].values.astype(str),
dense_shape=[df[k].size, 1])
for k in CATEGORICAL_COLUMNS}
# Merges the two dictionaries into one.
feature_cols = dict()
feature_cols.update(continuous_cols.copy())
feature_cols.update(categorical_cols.copy())
#feature_cols = dict(continuous_cols.items() + categorical_cols.items())
# Converts the label column into a constant Tensor.
label = tf.constant(df[LABEL_COLUMN].values)
# Returns the feature columns and the label.
return feature_cols, label
def train_input_fn():
return input_fn(df_train)
def eval_input_fn():
return input_fn(df_test)
month = tf.contrib.layers.real_valued_column("month")
day = tf.contrib.layers.real_valued_column("day")
hour = tf.contrib.layers.real_valued_column("hour")
minute = tf.contrib.layers.real_valued_column("min")
pirstatus = tf.contrib.layers.real_valued_column("pirstatus")
isDay = tf.contrib.layers.real_valued_column("isDay")
extTemp = tf.contrib.layers.real_valued_column("extTemp")
extHumidity = tf.contrib.layers.real_valued_column("extHumidity")
loungeTemp = tf.contrib.layers.real_valued_column("loungeTemp")
loungeHumidity = tf.contrib.layers.real_valued_column("loungeHumidity")
model_dir = tempfile.mkdtemp()
m = tf.contrib.learn.LinearClassifier(feature_columns=[
month, day, hour, minute, pirstatus, isDay,
extTemp, extHumidity, loungeTemp, loungeHumidity],
optimizer=tf.train.FtrlOptimizer(
learning_rate=0.1,
l1_regularization_strength=1.0,
l2_regularization_strength=1.0),
model_dir=model_dir)
m.fit(input_fn=train_input_fn, steps=500)
results = m.evaluate(input_fn=eval_input_fn, steps=1)
print("printin results")
for key in sorted(results):
print("%s: %s" % (key, results[key]))
def predict_input_fn():
test_data = {
"month":[12],
"day":[12],
"hour":[22],
"min":[0],
"pirstatus":[0],
"isDay":[1],
"extTemp":[35],
"extHumidity":[20],
"loungeTemp":[12],
"loungeHumidity":[30],
}
continuous_cols = {k: tf.constant(test_data[k])
for k in test_data}
return continuous_cols
predictions = list(m.predict(input_fn=predict_input_fn, as_iterable=True))
print('Predictions: {}'.format(str(predictions)))
```
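The core of `input_fn` above is just a column-name → values mapping plus a label vector; a tiny framework-free sketch with made-up rows (hypothetical data, not the thermostat CSV):

```python
import pandas as pd

# Two hypothetical rows shaped like the sensor data above.
df = pd.DataFrame({'hour': [22, 8], 'extTemp': [35, 10], 'label': [1, 0]})
CONTINUOUS = ['hour', 'extTemp']

# Dict of feature-column name -> values, plus the label column.
feature_cols = {k: df[k].values for k in CONTINUOUS}
label = df['label'].values
```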
```
%matplotlib inline
from datetime import date
import pandas as pd
import urllib.request
import xmltodict
from ipywidgets import HTML
from ipyleaflet import *
import configparser
config = configparser.ConfigParser()
config.read("config.cfg")
import math
axis = None # Semi-major axis of the ellipsoid.
flattening = None # Flattening of the ellipsoid.
central_meridian = None # Central meridian for the projection.
lat_of_origin = None # Latitude of origin.
scale = None # Scale on central meridian.
false_northing = None # Offset for origin (northing).
false_easting = None # Offset for origin (easting).
# Parameters for RT90 and SWEREF99TM.
# Note: Parameters for RT90 are chosen to eliminate the
# differences between the Bessel and GRS80 ellipsoids.
# Bessel variants should only be used if lat/long are given as
# RT90 lat/long based on the Bessel ellipsoid (from old maps).
# Parameter: projection (string). Must match if-statement.
def swedish_params(projection) :
global axis
global flattening
global central_meridian
global scale
global false_northing
global false_easting
global lat_of_origin
# RT90 parameters, GRS 80 ellipsoid.
if (projection == "rt90_7.5_gon_v") :
grs80_params()
central_meridian = 11.0 + 18.375/60.0
scale = 1.000006000000
false_northing = -667.282
false_easting = 1500025.141
elif (projection == "rt90_5.0_gon_v") :
grs80_params()
central_meridian = 13.0 + 33.376/60.0
scale = 1.000005800000
false_northing = -667.130
false_easting = 1500044.695
elif (projection == "rt90_2.5_gon_v") :
grs80_params()
central_meridian = 15.0 + 48.0/60.0 + 22.624306/3600.0
scale = 1.00000561024
false_northing = -667.711
false_easting = 1500064.274
elif (projection == "rt90_0.0_gon_v") :
grs80_params()
central_meridian = 18.0 + 3.378/60.0
scale = 1.000005400000
false_northing = -668.844
false_easting = 1500083.521
elif (projection == "rt90_2.5_gon_o") :
grs80_params()
central_meridian = 20.0 + 18.379/60.0
scale = 1.000005200000
false_northing = -670.706
false_easting = 1500102.765
elif (projection == "rt90_5.0_gon_o") :
grs80_params()
central_meridian = 22.0 + 33.380/60.0
scale = 1.000004900000
false_northing = -672.557
false_easting = 1500121.846
# RT90 parameters, Bessel 1841 ellipsoid.
elif (projection == "bessel_rt90_7.5_gon_v") :
bessel_params()
central_meridian = 11.0 + 18.0/60.0 + 29.8/3600.0
elif (projection == "bessel_rt90_5.0_gon_v") :
bessel_params()
central_meridian = 13.0 + 33.0/60.0 + 29.8/3600.0
elif (projection == "bessel_rt90_2.5_gon_v") :
bessel_params()
central_meridian = 15.0 + 48.0/60.0 + 29.8/3600.0
elif (projection == "bessel_rt90_0.0_gon_v") :
bessel_params()
central_meridian = 18.0 + 3.0/60.0 + 29.8/3600.0
elif (projection == "bessel_rt90_2.5_gon_o") :
bessel_params()
central_meridian = 20.0 + 18.0/60.0 + 29.8/3600.0
elif (projection == "bessel_rt90_5.0_gon_o") :
bessel_params()
central_meridian = 22.0 + 33.0/60.0 + 29.8/3600.0
# SWEREF99TM and SWEREF99ddmm parameters.
elif (projection == "sweref_99_tm") :
sweref99_params()
central_meridian = 15.00
lat_of_origin = 0.0
scale = 0.9996
false_northing = 0.0
false_easting = 500000.0
elif (projection == "sweref_99_1200") :
sweref99_params()
central_meridian = 12.00
elif (projection == "sweref_99_1330") :
sweref99_params()
central_meridian = 13.50
elif (projection == "sweref_99_1500") :
sweref99_params()
central_meridian = 15.00
elif (projection == "sweref_99_1630") :
sweref99_params()
central_meridian = 16.50
elif (projection == "sweref_99_1800") :
sweref99_params()
central_meridian = 18.00
elif (projection == "sweref_99_1415") :
sweref99_params()
central_meridian = 14.25
elif (projection == "sweref_99_1545") :
sweref99_params()
central_meridian = 15.75
elif (projection == "sweref_99_1715") :
sweref99_params()
central_meridian = 17.25
elif (projection == "sweref_99_1845") :
sweref99_params()
central_meridian = 18.75
elif (projection == "sweref_99_2015") :
sweref99_params()
central_meridian = 20.25
elif (projection == "sweref_99_2145") :
sweref99_params()
central_meridian = 21.75
elif (projection == "sweref_99_2315") :
sweref99_params()
central_meridian = 23.25
# Test-case:
# Lat: 66 0'0", lon: 24 0'0".
# X:1135809.413803 Y:555304.016555.
elif (projection == "test_case") :
axis = 6378137.0
flattening = 1.0 / 298.257222101
central_meridian = 13.0 + 35.0/60.0 + 7.692000/3600.0
lat_of_origin = 0.0
scale = 1.000002540000
false_northing = -6226307.8640
false_easting = 84182.8790
# Not a valid projection.
else :
central_meridian = None
# Sets of default parameters.
def grs80_params() :
global axis
global flattening
global central_meridian
global lat_of_origin
axis = 6378137.0 # GRS 80.
flattening = 1.0 / 298.257222101 # GRS 80.
central_meridian = None
lat_of_origin = 0.0
def bessel_params() :
global axis
global flattening
global central_meridian
global lat_of_origin
global scale
global false_northing
global false_easting
axis = 6377397.155 # Bessel 1841.
flattening = 1.0 / 299.1528128 # Bessel 1841.
central_meridian = None
lat_of_origin = 0.0
scale = 1.0
false_northing = 0.0
false_easting = 1500000.0
def sweref99_params() :
global axis
global flattening
global central_meridian
global lat_of_origin
global scale
global false_northing
global false_easting
axis = 6378137.0 # GRS 80.
flattening = 1.0 / 298.257222101 # GRS 80.
central_meridian = None
lat_of_origin = 0.0
scale = 1.0
false_northing = 0.0
false_easting = 150000.0
# Conversion from geodetic coordinates to grid coordinates.
def geodetic_to_grid(latitude, longitude) :
x_y = [0] * 2
if (central_meridian == None) :
return x_y
# Prepare ellipsoid-based stuff.
e2 = flattening * (2.0 - flattening)
n = flattening / (2.0 - flattening)
a_roof = axis / (1.0 + n) * (1.0 + n*n/4.0 + n*n*n*n/64.0)
A = e2
B = (5.0*e2*e2 - e2*e2*e2) / 6.0
C = (104.0*e2*e2*e2 - 45.0*e2*e2*e2*e2) / 120.0
D = (1237.0*e2*e2*e2*e2) / 1260.0
beta1 = n/2.0 - 2.0*n*n/3.0 + 5.0*n*n*n/16.0 + 41.0*n*n*n*n/180.0
beta2 = 13.0*n*n/48.0 - 3.0*n*n*n/5.0 + 557.0*n*n*n*n/1440.0
beta3 = 61.0*n*n*n/240.0 - 103.0*n*n*n*n/140.0
beta4 = 49561.0*n*n*n*n/161280.0
# Convert.
deg_to_rad = math.pi / 180.0
phi = latitude * deg_to_rad
lambda_ = longitude * deg_to_rad
lambda_zero = central_meridian * deg_to_rad
phi_star = phi - math.sin(phi) * math.cos(phi) * (A + B*math.pow(math.sin(phi), 2) + C*math.pow(math.sin(phi), 4) + D*math.pow(math.sin(phi), 6))
delta_lambda = lambda_ - lambda_zero
xi_prim = math.atan(math.tan(phi_star) / math.cos(delta_lambda))
eta_prim = math_atanh(math.cos(phi_star) * math.sin(delta_lambda))
x = scale * a_roof * (xi_prim +beta1 * math.sin(2.0*xi_prim) * math_cosh(2.0*eta_prim) +beta2 * math.sin(4.0*xi_prim) * math_cosh(4.0*eta_prim) +beta3 * math.sin(6.0*xi_prim) * math_cosh(6.0*eta_prim) +beta4 * math.sin(8.0*xi_prim) * math_cosh(8.0*eta_prim)) + false_northing
y = scale * a_roof * (eta_prim +beta1 * math.cos(2.0*xi_prim) * math_sinh(2.0*eta_prim) +beta2 * math.cos(4.0*xi_prim) * math_sinh(4.0*eta_prim) +beta3 * math.cos(6.0*xi_prim) * math_sinh(6.0*eta_prim) +beta4 * math.cos(8.0*xi_prim) * math_sinh(8.0*eta_prim)) + false_easting
x_y[0] = round(x * 1000.0) / 1000.0 # the math module has no round(); use the builtin
x_y[1] = round(y * 1000.0) / 1000.0
# x_y[0] = x
# x_y[1] = y
return x_y
# Conversion from grid coordinates to geodetic coordinates.
def grid_to_geodetic(x, y) :
lat_lon = [0] * 2
if (central_meridian == None) :
return lat_lon
# Prepare ellipsoid-based stuff.
e2 = flattening * (2.0 - flattening)
n = flattening / (2.0 - flattening)
a_roof = axis / (1.0 + n) * (1.0 + n*n/4.0 + n*n*n*n/64.0)
delta1 = n/2.0 - 2.0*n*n/3.0 + 37.0*n*n*n/96.0 - n*n*n*n/360.0
delta2 = n*n/48.0 + n*n*n/15.0 - 437.0*n*n*n*n/1440.0
delta3 = 17.0*n*n*n/480.0 - 37*n*n*n*n/840.0
delta4 = 4397.0*n*n*n*n/161280.0
Astar = e2 + e2*e2 + e2*e2*e2 + e2*e2*e2*e2
Bstar = -(7.0*e2*e2 + 17.0*e2*e2*e2 + 30.0*e2*e2*e2*e2) / 6.0
Cstar = (224.0*e2*e2*e2 + 889.0*e2*e2*e2*e2) / 120.0
Dstar = -(4279.0*e2*e2*e2*e2) / 1260.0
# Convert.
deg_to_rad = math.pi / 180
lambda_zero = central_meridian * deg_to_rad
xi = (x - false_northing) / (scale * a_roof)
eta = (y - false_easting) / (scale * a_roof)
xi_prim = xi - delta1*math.sin(2.0*xi) * math_cosh(2.0*eta) - delta2*math.sin(4.0*xi) * math_cosh(4.0*eta) - delta3*math.sin(6.0*xi) * math_cosh(6.0*eta) - delta4*math.sin(8.0*xi) * math_cosh(8.0*eta)
eta_prim = eta - delta1*math.cos(2.0*xi) * math_sinh(2.0*eta) - delta2*math.cos(4.0*xi) * math_sinh(4.0*eta) - delta3*math.cos(6.0*xi) * math_sinh(6.0*eta) - delta4*math.cos(8.0*xi) * math_sinh(8.0*eta)
phi_star = math.asin(math.sin(xi_prim) / math_cosh(eta_prim))
delta_lambda = math.atan(math_sinh(eta_prim) / math.cos(xi_prim))
lon_radian = lambda_zero + delta_lambda
lat_radian = phi_star + math.sin(phi_star) * math.cos(phi_star) * (Astar + Bstar*math.pow(math.sin(phi_star), 2) + Cstar*math.pow(math.sin(phi_star), 4) + Dstar*math.pow(math.sin(phi_star), 6))
lat_lon[0] = lat_radian * 180.0 / math.pi
lat_lon[1] = lon_radian * 180.0 / math.pi
return lat_lon
# Hyperbolic helpers (equivalent to Python's math.sinh, math.cosh, math.atanh).
def math_sinh(value) :
return 0.5 * (math.exp(value) - math.exp(-value))
def math_cosh(value) :
return 0.5 * (math.exp(value) + math.exp(-value))
def math_atanh(value) :
return 0.5 * math.log((1.0 + value) / (1.0 - value))
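# Sanity check (illustrative): the exponential/log formulas above agree with
# Python's builtin math.sinh, math.cosh and math.atanh to within floating-point
# error, so the helpers exist only to mirror the original (non-Python) source.
for _v in (0.1, 0.5, 0.9):
    assert abs(0.5 * (math.exp(_v) - math.exp(-_v)) - math.sinh(_v)) < 1e-12
    assert abs(0.5 * (math.exp(_v) + math.exp(-_v)) - math.cosh(_v)) < 1e-12
    assert abs(0.5 * math.log((1.0 + _v) / (1.0 - _v)) - math.atanh(_v)) < 1e-12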
from IPython.core.display import HTML
css = open('style-table.css').read() + open('style-notebook.css').read()
HTML('<style>{}</style>'.format(css))
key = config["tokens"]["vaginfo"]
urlstr = "http://opendata.linkoping.se/ws_opendata/main.asmx/VagarbeteAlla?CustomKey=" + key
data = {}
with urllib.request.urlopen(urlstr) as url:
data = xmltodict.parse(url.read().decode())
def timeconv(item):
start = item['STARTTID'].replace("MAJ","MAY")
start = start.replace('OKT', 'OCT')
slut = item['SLUTTID'].replace("MAJ","MAY")
slut = slut.replace('OKT', 'OCT')
item['STARTTID'] = start
item['SLUTTID'] = slut
return item
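from datetime import datetime
# Why timeconv() is needed (illustrative, standalone): the feed uses Swedish
# month abbreviations, and e.g. "MAJ"/"OKT" only parse as dates after being
# mapped to their English counterparts.
_demo = "15 OKT 2020".replace("MAJ", "MAY").replace("OKT", "OCT")
assert datetime.strptime(_demo, "%d %b %Y").month == 10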
data2 = [timeconv(item) for item in data['ResponseVagarbete']['ListaVagarbeten']['Vagarbete'] ]
df = pd.DataFrame.from_dict(data2)
df2 = df.sort_values('IDNR')
def make_clickable(val):
return '<a target="_blank" href="{}">{}</a>'.format(val,val)
m = Map(center=(58.41, 15.6), zoom=13, basemap=basemaps.OpenStreetMap.Mapnik)
m
df2['SLUTTID'] = pd.to_datetime(df2['SLUTTID'])
df2['STARTTID'] = pd.to_datetime(df2['STARTTID'])
df3 = df2.loc[df2['SLUTTID'] > date.today(),:]
df3
#def get_location(swerefx, swerefy):
# return [58.41, 15.6]
swedish_params("sweref_99_1500")
def create_marker(datapoint):
loc = grid_to_geodetic(float(datapoint.Y_SWEREF991500), float(datapoint.X_SWEREF991500))
roadcondition = ""
if datapoint.FRAMKOMLIGHET_BIL:
roadcondition = "Avstängd" if float(datapoint.FRAMKOMLIGHET_BIL.replace(',','.'))<0.1 else "Begränsad"
htmltext = """
<div>{}
<ul class='list-group'>
<li class='list-group-item'>Loc: {}</li>
<li class='list-group-item'>Start: {}, Slut: {}</li>
<li class='list-group-item'>Framkomlighet: {}</li>
</ul></div>""".format(
datapoint.BESKRIVNING,
datapoint.PLATS,
datapoint.STARTTID,
datapoint.SLUTTID,
roadcondition)
html_widget = HTML(
value=htmltext,
placeholder='',
description=''
)
return Marker(location=loc, popup=html_widget)
for item in range(0,len(df3)):
mark = create_marker(df3.iloc[item,:])
m += mark
m
```
# Fuzzing APIs
So far, we have always generated _system input_, i.e. data that the program as a whole obtains via its input channels. However, we can also generate inputs that go directly into individual functions, gaining flexibility and speed in the process. In this chapter, we explore the use of grammars to synthesize code for function calls, which allows you to generate _program code that very efficiently invokes functions directly._
```
from bookutils import YouTubeVideo
YouTubeVideo('U842dC2R3V0')
```
**Prerequisites**
* You have to know how grammar fuzzing works, e.g. from the [chapter on grammars](Grammars.ipynb).
* We make use of _generator functions_, as discussed in the [chapter on fuzzing with generators](GeneratorGrammarFuzzer.ipynb).
* We make use of probabilities, as discussed in the [chapter on fuzzing with probabilities](ProbabilisticGrammarFuzzer.ipynb).
## Synopsis
<!-- Automatically generated. Do not edit. -->
To [use the code provided in this chapter](Importing.ipynb), write
```python
>>> from fuzzingbook.APIFuzzer import <identifier>
```
and then make use of the following features.
This chapter provides *grammar constructors* that are useful for generating _function calls_.
The grammars are [probabilistic](ProbabilisticGrammarFuzzer.ipynb) and make use of [generators](GeneratorGrammarFuzzer.ipynb), so use `ProbabilisticGeneratorGrammarFuzzer` as a producer.
```python
>>> from GeneratorGrammarFuzzer import ProbabilisticGeneratorGrammarFuzzer
```
`INT_GRAMMAR`, `FLOAT_GRAMMAR`, `ASCII_STRING_GRAMMAR` produce integers, floats, and strings, respectively:
```python
>>> fuzzer = ProbabilisticGeneratorGrammarFuzzer(INT_GRAMMAR)
>>> [fuzzer.fuzz() for i in range(10)]
['-51', '9', '0', '0', '0', '0', '32', '0', '0', '0']
>>> fuzzer = ProbabilisticGeneratorGrammarFuzzer(FLOAT_GRAMMAR)
>>> [fuzzer.fuzz() for i in range(10)]
['0e0',
'-9.43e34',
'-7.3282e0',
'-9.5e-9',
'0',
'-30.840386e-5',
'3',
'-4.1e0',
'-9.7',
'413']
>>> fuzzer = ProbabilisticGeneratorGrammarFuzzer(ASCII_STRING_GRAMMAR)
>>> [fuzzer.fuzz() for i in range(10)]
['"#vYV*t@I%KNTT[q~}&-v+[zAzj[X-z|RzC$(g$Br]1tC\':5<F-"',
'""',
'"^S/"',
'"y)QDs_9"',
'")dY~?WYqMh,bwn3\\"A!02Pk`gx"',
'"01n|(dd$-d.sx\\"83\\"h/]qx)d9LPNdrk$}$4t3zhC.%3VY@AZZ0wCs2 N"',
'"D\\6\\xgw#TQ}$\'3"',
'"LaM{"',
'"\\"ux\'1H!=%;2T$.=l"',
'"=vkiV~w.Ypt,?JwcEr}Moc>!5<U+DdYAup\\"N 0V?h3x~jFN3"']
```
`int_grammar_with_range(start, end)` produces an integer grammar with values `N` such that `start <= N <= end`:
```python
>>> int_grammar = int_grammar_with_range(100, 200)
>>> fuzzer = ProbabilisticGeneratorGrammarFuzzer(int_grammar)
>>> [fuzzer.fuzz() for i in range(10)]
['154', '149', '185', '117', '182', '154', '131', '194', '147', '192']
```
`float_grammar_with_range(start, end)` produces a floating-number grammar with values `N` such that `start <= N <= end`.
```python
>>> float_grammar = float_grammar_with_range(100, 200)
>>> fuzzer = ProbabilisticGeneratorGrammarFuzzer(float_grammar)
>>> [fuzzer.fuzz() for i in range(10)]
['121.8092479227325',
'187.18037169119634',
'127.9576486784452',
'125.47768739781723',
'151.8091820472274',
'117.864410860742',
'187.50918008379483',
'119.29335112884749',
'149.2637029583114',
'126.61818995939146']
```
All such values can be immediately used for testing function calls:
```python
>>> from math import sqrt
>>> fuzzer = ProbabilisticGeneratorGrammarFuzzer(int_grammar)
>>> call = "sqrt(" + fuzzer.fuzz() + ")"
>>> call
'sqrt(143)'
>>> eval(call)
11.958260743101398
```
These grammars can also be composed to form more complex grammars. `list_grammar(object_grammar)` returns a grammar that produces lists of objects as defined by `object_grammar`.
```python
>>> int_list_grammar = list_grammar(int_grammar)
>>> fuzzer = ProbabilisticGeneratorGrammarFuzzer(int_list_grammar)
>>> [fuzzer.fuzz() for i in range(5)]
['[118, 111, 188, 137, 129]',
'[170, 172]',
'[171, 161, 117, 191, 175, 183, 164]',
'[189]',
'[129, 110, 178]']
>>> some_list = eval(fuzzer.fuzz())
>>> some_list
[172, 120, 106, 192, 124, 191, 161, 100, 117]
>>> len(some_list)
9
```
In a similar vein, we can construct arbitrary further data types for testing individual functions programmatically.
## Fuzzing a Function
Let us start with our first problem: How do we fuzz a given function? For an interpreted language like Python, this is straightforward. All we need to do is generate _calls_ to the function(s) we want to test. This is something we can easily do with a grammar.
As an example, consider the `urlparse()` function from the Python library. `urlparse()` takes a URL and decomposes it into its individual components.
```
import bookutils
from urllib.parse import urlparse
urlparse('https://www.fuzzingbook.com/html/APIFuzzer.html')
```
You see how the individual elements of the URL – the _scheme_ (`"https"`), the _network location_ (`"www.fuzzingbook.com"`), or the path (`"/html/APIFuzzer.html"`) – are all properly identified. Other elements (like `params`, `query`, or `fragment`) are empty, because they were not part of our input.
To test `urlparse()`, we'd want to feed it a large set of different URLs. We can obtain these from the URL grammar we had defined in the ["Grammars"](Grammars.ipynb) chapter.
```
from Grammars import URL_GRAMMAR, is_valid_grammar, START_SYMBOL
from Grammars import opts, extend_grammar, Grammar
from GrammarFuzzer import GrammarFuzzer
url_fuzzer = GrammarFuzzer(URL_GRAMMAR)
for i in range(10):
url = url_fuzzer.fuzz()
print(urlparse(url))
```
This way, we can easily test any Python function – by setting up a scaffold that runs it. How would we proceed, though, if we wanted to have a test that can be re-run again and again, without having to generate new calls every time?
## Synthesizing Code
The "scaffolding" method, as sketched above, has an important downside: It couples test generation and test execution into a single unit, making it impossible to run the two at different times, or in different languages. To decouple them, we take another approach: rather than generating an input and immediately feeding it into a function, we instead _synthesize code_ that invokes the function with that input.
For instance, if we generate the string
```
call = "urlparse('http://www.example.com/')"
```
we can execute this string as a whole (and thus run the test) at any time:
```
eval(call)
```
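Because each call is just a string, it can also be written to disk and replayed at any later time, fully decoupling generation from execution. A minimal sketch (the file name `replay_tests.py` is arbitrary):

```python
# Persist generated calls as a small standalone test script.
calls = ["urlparse('http://www.example.com/')"]  # e.g. collected from a fuzzer

with open("replay_tests.py", "w") as f:
    f.write("from urllib.parse import urlparse\n")
    for call in calls:
        f.write(call + "\n")
```

Running `python replay_tests.py` later re-executes exactly the same calls, without invoking the fuzzer again.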
To systematically generate such calls, we can again use a grammar:
```
URLPARSE_GRAMMAR: Grammar = {
"<call>":
['urlparse("<url>")']
}
# Import definitions from URL_GRAMMAR
URLPARSE_GRAMMAR.update(URL_GRAMMAR)
URLPARSE_GRAMMAR["<start>"] = ["<call>"]
assert is_valid_grammar(URLPARSE_GRAMMAR)
```
This grammar creates calls in the form `urlparse(<url>)`, where `<url>` comes from the "imported" URL grammar. The idea is to create many of these calls and to feed them into the Python interpreter.
```
URLPARSE_GRAMMAR
```
We can now use this grammar for fuzzing and synthesizing calls to `urlparse()`:
```
urlparse_fuzzer = GrammarFuzzer(URLPARSE_GRAMMAR)
urlparse_fuzzer.fuzz()
```
Just as above, we can immediately execute these calls. To better see what is happening, we define a small helper function:
```
# Call function_name(arg[0], arg[1], ...) as a string
def do_call(call_string):
print(call_string)
result = eval(call_string)
print("\t= " + repr(result))
return result
call = urlparse_fuzzer.fuzz()
do_call(call)
```
If `urlparse()` were a C function, for instance, we could embed its call into some (also generated) C function:
```
URLPARSE_C_GRAMMAR: Grammar = {
"<cfile>": ["<cheader><cfunction>"],
"<cheader>": ['#include "urlparse.h"\n\n'],
"<cfunction>": ["void test() {\n<calls>}\n"],
"<calls>": ["<call>", "<calls><call>"],
"<call>": [' urlparse("<url>");\n']
}
URLPARSE_C_GRAMMAR.update(URL_GRAMMAR)
URLPARSE_C_GRAMMAR["<start>"] = ["<cfile>"]
assert is_valid_grammar(URLPARSE_C_GRAMMAR)
urlparse_fuzzer = GrammarFuzzer(URLPARSE_C_GRAMMAR)
print(urlparse_fuzzer.fuzz())
```
## Synthesizing Oracles
In our `urlparse()` example, both the Python as well as the C variant only check for _generic_ errors in `urlparse()`; that is, they only detect fatal errors and exceptions. For a full test, we need to set up a specific *oracle* as well that checks whether the result is valid.
Our plan is to check whether specific parts of the URL reappear in the result – that is, if the scheme is `http:`, then the `ParseResult` returned should also contain a `http:` scheme. As discussed in the [chapter on fuzzing with generators](GeneratorGrammarFuzzer.ipynb), equalities of strings such as `http:` across two symbols cannot be expressed in a context-free grammar. We can, however, use a _generator function_ (also introduced in the [chapter on fuzzing with generators](GeneratorGrammarFuzzer.ipynb)) to automatically enforce such equalities.
Here is an example. Invoking `geturl()` on a `urlparse()` result should return the URL as originally passed to `urlparse()`.
```
from GeneratorGrammarFuzzer import GeneratorGrammarFuzzer, ProbabilisticGeneratorGrammarFuzzer
URLPARSE_ORACLE_GRAMMAR: Grammar = extend_grammar(URLPARSE_GRAMMAR,
{
"<call>": [("assert urlparse('<url>').geturl() == '<url>'",
opts(post=lambda url_1, url_2: [None, url_1]))]
})
urlparse_oracle_fuzzer = GeneratorGrammarFuzzer(URLPARSE_ORACLE_GRAMMAR)
test = urlparse_oracle_fuzzer.fuzz()
print(test)
exec(test)
```
In a similar way, we can also check individual components of the result:
```
URLPARSE_ORACLE_GRAMMAR: Grammar = extend_grammar(URLPARSE_GRAMMAR,
{
"<call>": [("result = urlparse('<scheme>://<host><path>?<params>')\n"
# + "print(result)\n"
+ "assert result.scheme == '<scheme>'\n"
+ "assert result.netloc == '<host>'\n"
+ "assert result.path == '<path>'\n"
+ "assert result.query == '<params>'",
opts(post=lambda scheme_1, authority_1, path_1, params_1,
scheme_2, authority_2, path_2, params_2:
[None, None, None, None,
scheme_1, authority_1, path_1, params_1]))]
})
# Get rid of unused symbols
del URLPARSE_ORACLE_GRAMMAR["<url>"]
del URLPARSE_ORACLE_GRAMMAR["<query>"]
del URLPARSE_ORACLE_GRAMMAR["<authority>"]
del URLPARSE_ORACLE_GRAMMAR["<userinfo>"]
del URLPARSE_ORACLE_GRAMMAR["<port>"]
urlparse_oracle_fuzzer = GeneratorGrammarFuzzer(URLPARSE_ORACLE_GRAMMAR)
test = urlparse_oracle_fuzzer.fuzz()
print(test)
exec(test)
```
The use of generator functions may feel a bit cumbersome. Indeed, if we stick exclusively to Python, we could also create a _unit test_ that directly invokes the fuzzer to generate individual parts:
```
def fuzzed_url_element(symbol):
return GrammarFuzzer(URLPARSE_GRAMMAR, start_symbol=symbol).fuzz()
scheme = fuzzed_url_element("<scheme>")
authority = fuzzed_url_element("<authority>")
path = fuzzed_url_element("<path>")
query = fuzzed_url_element("<params>")
url = "%s://%s%s?%s" % (scheme, authority, path, query)
result = urlparse(url)
# print(result)
assert result.geturl() == url
assert result.scheme == scheme
assert result.path == path
assert result.query == query
```
Using such a unit test makes it easier to express oracles. However, we lose the ability to systematically cover individual URL elements and alternatives as with [`GrammarCoverageFuzzer`](GrammarCoverageFuzzer.ipynb) as well as the ability to guide generation towards specific elements as with [`ProbabilisticGrammarFuzzer`](ProbabilisticGrammarFuzzer.ipynb). Furthermore, a grammar allows us to generate tests for arbitrary programming languages and APIs.
## Synthesizing Data
For `urlparse()`, we have used a very specific grammar for creating a very specific argument. Many functions take basic data types as (some) arguments, though; we therefore define grammars that generate precisely those arguments. Even better, we can define functions that _generate_ grammars tailored towards our specific needs, returning values in a particular range, for instance.
### Integers
We introduce a simple grammar to produce integers.
```
from Grammars import convert_ebnf_grammar, crange
from ProbabilisticGrammarFuzzer import ProbabilisticGrammarFuzzer
INT_EBNF_GRAMMAR: Grammar = {
"<start>": ["<int>"],
"<int>": ["<_int>"],
"<_int>": ["(-)?<leaddigit><digit>*", "0"],
"<leaddigit>": crange('1', '9'),
"<digit>": crange('0', '9')
}
assert is_valid_grammar(INT_EBNF_GRAMMAR)
INT_GRAMMAR = convert_ebnf_grammar(INT_EBNF_GRAMMAR)
INT_GRAMMAR
int_fuzzer = GrammarFuzzer(INT_GRAMMAR)
print([int_fuzzer.fuzz() for i in range(10)])
```
If we need integers in a specific range, we can add a generator function that does just that:
```
from Grammars import set_opts
import random
def int_grammar_with_range(start, end):
int_grammar = extend_grammar(INT_GRAMMAR)
set_opts(int_grammar, "<int>", "<_int>",
opts(pre=lambda: random.randint(start, end)))
return int_grammar
int_fuzzer = GeneratorGrammarFuzzer(int_grammar_with_range(900, 1000))
[int_fuzzer.fuzz() for i in range(10)]
```
### Floats
The grammar for floating-point values closely resembles the integer grammar.
```
FLOAT_EBNF_GRAMMAR: Grammar = {
"<start>": ["<float>"],
"<float>": [("<_float>", opts(prob=0.9)), "inf", "NaN"],
"<_float>": ["<int>(.<digit>+)?<exp>?"],
"<exp>": ["e<int>"]
}
FLOAT_EBNF_GRAMMAR.update(INT_EBNF_GRAMMAR)
FLOAT_EBNF_GRAMMAR["<start>"] = ["<float>"]
assert is_valid_grammar(FLOAT_EBNF_GRAMMAR)
FLOAT_GRAMMAR = convert_ebnf_grammar(FLOAT_EBNF_GRAMMAR)
FLOAT_GRAMMAR
float_fuzzer = ProbabilisticGrammarFuzzer(FLOAT_GRAMMAR)
print([float_fuzzer.fuzz() for i in range(10)])
def float_grammar_with_range(start, end):
float_grammar = extend_grammar(FLOAT_GRAMMAR)
set_opts(float_grammar, "<float>", "<_float>", opts(
pre=lambda: start + random.random() * (end - start)))
return float_grammar
float_fuzzer = ProbabilisticGeneratorGrammarFuzzer(
float_grammar_with_range(900.0, 900.9))
[float_fuzzer.fuzz() for i in range(10)]
```
### Strings
Finally, we introduce a grammar for producing strings.
```
ASCII_STRING_EBNF_GRAMMAR: Grammar = {
"<start>": ["<ascii-string>"],
"<ascii-string>": ['"<ascii-chars>"'],
"<ascii-chars>": [
("", opts(prob=0.05)),
"<ascii-chars><ascii-char>"
],
"<ascii-char>": crange(" ", "!") + [r'\"'] + crange("#", "~")
}
assert is_valid_grammar(ASCII_STRING_EBNF_GRAMMAR)
ASCII_STRING_GRAMMAR = convert_ebnf_grammar(ASCII_STRING_EBNF_GRAMMAR)
string_fuzzer = ProbabilisticGrammarFuzzer(ASCII_STRING_GRAMMAR)
print([string_fuzzer.fuzz() for i in range(10)])
```
## Synthesizing Composite Data
From basic data, as discussed above, we can also produce _composite data_ in data structures such as sets or lists. We illustrate such generation on lists.
### Lists
```
LIST_EBNF_GRAMMAR: Grammar = {
"<start>": ["<list>"],
"<list>": [
("[]", opts(prob=0.05)),
"[<list-objects>]"
],
"<list-objects>": [
("<list-object>", opts(prob=0.2)),
"<list-object>, <list-objects>"
],
"<list-object>": ["0"],
}
assert is_valid_grammar(LIST_EBNF_GRAMMAR)
LIST_GRAMMAR = convert_ebnf_grammar(LIST_EBNF_GRAMMAR)
```
Our list generator takes a grammar that produces objects; it then instantiates a list grammar with the objects from these grammars.
```
def list_grammar(object_grammar, list_object_symbol=None):
obj_list_grammar = extend_grammar(LIST_GRAMMAR)
if list_object_symbol is None:
# Default: Use the first expansion of <start> as list symbol
list_object_symbol = object_grammar[START_SYMBOL][0]
obj_list_grammar.update(object_grammar)
obj_list_grammar[START_SYMBOL] = ["<list>"]
obj_list_grammar["<list-object>"] = [list_object_symbol]
assert is_valid_grammar(obj_list_grammar)
return obj_list_grammar
int_list_fuzzer = ProbabilisticGrammarFuzzer(list_grammar(INT_GRAMMAR))
[int_list_fuzzer.fuzz() for i in range(10)]
string_list_fuzzer = ProbabilisticGrammarFuzzer(
list_grammar(ASCII_STRING_GRAMMAR))
[string_list_fuzzer.fuzz() for i in range(10)]
float_list_fuzzer = ProbabilisticGeneratorGrammarFuzzer(list_grammar(
float_grammar_with_range(900.0, 900.9)))
[float_list_fuzzer.fuzz() for i in range(10)]
```
Generators for dictionaries, sets, etc. can be defined in a similar fashion. By plugging together grammar generators, we can produce data structures with arbitrary elements.
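For illustration, here is a self-contained sketch of a set grammar in the same style as `LIST_EBNF_GRAMMAR` above. The grammar is the point; the tiny `expand()` helper merely stands in for the chapter's grammar fuzzers (it is *not* part of the fuzzingbook API) so that the example runs on its own:

```python
import random
import re

SET_GRAMMAR = {
    "<start>": ["<set>"],
    "<set>": ["set()", "{<set-objects>}"],
    "<set-objects>": ["<set-object>", "<set-object>, <set-objects>"],
    "<set-object>": ["0"],
}

def expand(symbol="<start>", depth=0):
    alternatives = SET_GRAMMAR[symbol]
    # Beyond a depth limit, take the shortest alternative to force termination.
    chosen = min(alternatives, key=len) if depth > 8 else random.choice(alternatives)
    # Recursively expand every nonterminal in the chosen alternative.
    return re.sub(r"<[^<>]+>", lambda m: expand(m.group(0), depth + 1), chosen)

print(expand())  # e.g. '{0, 0}' or 'set()'
```

As with `list_grammar()`, replacing the `<set-object>` expansion with the start symbol of another grammar would yield sets of integers, floats, or strings.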
## Synopsis
This chapter provides *grammar constructors* that are useful for generating _function calls_.
The grammars are [probabilistic](ProbabilisticGrammarFuzzer.ipynb) and make use of [generators](GeneratorGrammarFuzzer.ipynb), so use `ProbabilisticGeneratorGrammarFuzzer` as a producer.
```
from GeneratorGrammarFuzzer import ProbabilisticGeneratorGrammarFuzzer
```
`INT_GRAMMAR`, `FLOAT_GRAMMAR`, `ASCII_STRING_GRAMMAR` produce integers, floats, and strings, respectively:
```
fuzzer = ProbabilisticGeneratorGrammarFuzzer(INT_GRAMMAR)
[fuzzer.fuzz() for i in range(10)]
fuzzer = ProbabilisticGeneratorGrammarFuzzer(FLOAT_GRAMMAR)
[fuzzer.fuzz() for i in range(10)]
fuzzer = ProbabilisticGeneratorGrammarFuzzer(ASCII_STRING_GRAMMAR)
[fuzzer.fuzz() for i in range(10)]
```
`int_grammar_with_range(start, end)` produces an integer grammar with values `N` such that `start <= N <= end`:
```
int_grammar = int_grammar_with_range(100, 200)
fuzzer = ProbabilisticGeneratorGrammarFuzzer(int_grammar)
[fuzzer.fuzz() for i in range(10)]
```
`float_grammar_with_range(start, end)` produces a floating-number grammar with values `N` such that `start <= N <= end`.
```
float_grammar = float_grammar_with_range(100, 200)
fuzzer = ProbabilisticGeneratorGrammarFuzzer(float_grammar)
[fuzzer.fuzz() for i in range(10)]
```
All such values can be immediately used for testing function calls:
```
from math import sqrt
fuzzer = ProbabilisticGeneratorGrammarFuzzer(int_grammar)
call = "sqrt(" + fuzzer.fuzz() + ")"
call
eval(call)
```
These grammars can also be composed to form more complex grammars. `list_grammar(object_grammar)` returns a grammar that produces lists of objects as defined by `object_grammar`.
```
int_list_grammar = list_grammar(int_grammar)
fuzzer = ProbabilisticGeneratorGrammarFuzzer(int_list_grammar)
[fuzzer.fuzz() for i in range(5)]
some_list = eval(fuzzer.fuzz())
some_list
len(some_list)
```
In a similar vein, we can construct arbitrary further data types for testing individual functions programmatically.
## Lessons Learned
* To fuzz individual functions, one can easily set up grammars that produce function calls.
* Fuzzing at the API level can be much faster than fuzzing at the system level, but brings the risk of false alarms by violating implicit preconditions.
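The false-alarm risk in the second point can be seen with `sqrt()`: its implicit precondition is a non-negative argument, so a fuzzer drawing from an unconstrained integer grammar will report "failures" that no real caller would ever trigger:

```python
from math import sqrt

try:
    sqrt(-1)  # violates the implicit precondition: argument must be >= 0
    precondition_violated = False
except ValueError:
    precondition_violated = True

print(precondition_violated)  # True: an alarm for the fuzzer, not a bug in sqrt()
```

Constraining the argument grammar (e.g. with `int_grammar_with_range(0, 100)`) is one way to encode such preconditions into the generated calls.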
## Next Steps
This chapter was all about manually writing tests and controlling which data gets generated. [In the next chapter](Carver.ipynb), we will introduce a much higher level of automation:
* _Carving_ automatically records function calls and arguments from program executions.
* We can turn these into _grammars_, allowing us to test these functions with various combinations of recorded values.
With these techniques, we automatically obtain grammars that already invoke functions in application contexts, making our work of specifying them much easier.
## Background
The idea of using generator functions to generate input structures was first explored in QuickCheck \cite{Claessen2000}. A very nice implementation for Python is the [hypothesis package](https://hypothesis.readthedocs.io/en/latest/), which allows you to write and combine data-structure generators for testing APIs.
## Exercises
The exercises for this chapter combine the above techniques with fuzzing techniques introduced earlier.
### Exercise 1: Deep Arguments
In the example generating oracles for `urlparse()`, important elements such as `authority` or `port` are not checked. Enrich `URLPARSE_ORACLE_GRAMMAR` with post-expansion functions that store the generated elements in a symbol table, such that they can be accessed when generating the assertions.
**Solution.** Left to the reader.
### Exercise 2: Covering Argument Combinations
In the chapter on [configuration testing](ConfigurationFuzzer.ipynb), we also discussed _combinatorial testing_ – that is, systematic coverage of _sets_ of configuration elements. Implement a scheme that, by changing the grammar, allows all _pairs_ of argument values to be covered.
**Solution.** Left to the reader.
### Exercise 3: Mutating Arguments
To widen the range of arguments to be used during testing, apply the _mutation schemes_ introduced in [mutation fuzzing](MutationFuzzer.ipynb) – for instance, flip individual bytes or delete characters from strings. Apply this either during grammar inference or as a separate step when invoking functions.
**Solution.** Left to the reader.
```
#Copyright 2020 Vraj Shah, Arun Kumar
#
#Licensed under the Apache License, Version 2.0 (the "License");
#you may not use this file except in compliance with the License.
#You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
#Unless required by applicable law or agreed to in writing, software
#distributed under the License is distributed on an "AS IS" BASIS,
#WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
#See the License for the specific language governing permissions and
#limitations under the License.
import pandas as pd
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.model_selection import KFold
from sklearn import metrics
import joblib
import numpy as np
np.random.seed(512)
xtrain = pd.read_csv('../data/ml/data_train.csv')
xtest = pd.read_csv('../data/ml/data_test.csv')
xtrain = xtrain.sample(frac=1,random_state=100).reset_index(drop=True)
print(len(xtrain))
y_train = xtrain.loc[:,['y_act']]
y_test = xtest.loc[:,['y_act']]
dict_label = {
'numeric': 0,
'categorical': 1,
'datetime': 2,
'sentence': 3,
'url': 4,
'embedded-number': 5,
'list': 6,
'not-generalizable': 7,
'context-specific': 8
}
y_train['y_act'] = [dict_label[i] for i in y_train['y_act']]
y_test['y_act'] = [dict_label[i] for i in y_test['y_act']]
y_train
useStats = 1
useAttributeName = 1
useSample1 = 0
useSample2 = 0
## Using descriptive stats and attribute name
def ProcessStats(data,y):
data1 = data[['total_vals', 'num_nans', '%_nans', 'num_of_dist_val', '%_dist_val', 'mean', 'std_dev', 'min_val', 'max_val','has_delimiters', 'has_url', 'has_email', 'has_date', 'mean_word_count',
'std_dev_word_count', 'mean_stopword_total', 'stdev_stopword_total',
'mean_char_count', 'stdev_char_count', 'mean_whitespace_count',
'stdev_whitespace_count', 'mean_delim_count', 'stdev_delim_count',
'is_list', 'is_long_sentence']]
data1 = data1.reset_index(drop=True)
data1 = data1.fillna(0)
y.y_act = y.y_act.astype(float)
return data1
vectorizerName = CountVectorizer(ngram_range=(2, 2), analyzer='char')
vectorizerSample = CountVectorizer(ngram_range=(2, 2), analyzer='char')
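# Illustration (standalone, made-up input): with analyzer='char' and
# ngram_range=(2, 2), the vectorizers above tokenize each value into
# overlapping character bigrams, e.g. "age" -> {"ag", "ge"}.
_demo_vec = CountVectorizer(ngram_range=(2, 2), analyzer='char')
_demo_vec.fit(["age"])
assert sorted(_demo_vec.vocabulary_) == ['ag', 'ge']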
def FeatureExtraction(data,data1,flag):
arr = data['Attribute_name'].values
arr = [str(x) for x in arr]
arr1 = data['sample_1'].values
arr1 = [str(x) for x in arr1]
arr2 = data['sample_2'].values
arr2 = [str(x) for x in arr2]
arr3 = data['sample_3'].values
arr3 = [str(x) for x in arr3]
print(len(arr1),len(arr2))
if flag:
X = vectorizerName.fit_transform(arr)
X1 = vectorizerSample.fit_transform(arr1)
X2 = vectorizerSample.transform(arr2)
else:
X = vectorizerName.transform(arr)
X1 = vectorizerSample.transform(arr1)
X2 = vectorizerSample.transform(arr2)
# print(f"> Length of vectorized feature_names: {len(vectorizer.get_feature_names())}")
attr_df = pd.DataFrame(X.toarray())
sample1_df = pd.DataFrame(X1.toarray())
sample2_df = pd.DataFrame(X2.toarray())
print(len(data1),len(attr_df),len(sample1_df),len(sample2_df))
# Combine the feature blocks; sample n-gram features are appended only when
# enabled via the useSample1/useSample2 flags above.
data2 = pd.concat([data1, attr_df], axis=1, sort=False)
if useSample1: data2 = pd.concat([data2, sample1_df], axis=1, sort=False)
if useSample2: data2 = pd.concat([data2, sample2_df], axis=1, sort=False)
print(len(data2))
return data2
xtrain1 = ProcessStats(xtrain,y_train)
xtest1 = ProcessStats(xtest,y_test)
X_train = FeatureExtraction(xtrain,xtrain1,1)
X_test = FeatureExtraction(xtest,xtest1,0)
X_train_new = X_train.reset_index(drop=True)
y_train_new = y_train.reset_index(drop=True)
X_train_new = X_train_new.values
y_train_new = y_train_new.values
k = 5
kf = KFold(n_splits=k,random_state = 100,shuffle=True)
avg_train_acc,avg_test_acc = 0,0
n_estimators_grid = [5,25,50,75,100,500]
max_depth_grid = [5,10,25,50,100,250]
# n_estimators_grid = [25,50,75,100]
# max_depth_grid = [50,100]
avgsc_lst,avgsc_train_lst,avgsc_hld_lst = [],[],[]
avgsc,avgsc_train,avgsc_hld = 0,0,0
best_param_count = {'n_estimator': {}, 'max_depth': {}}
i=0
for train_index, test_index in kf.split(X_train_new):
# if i==1: break
i=i+1
X_train_cur, X_test_cur = X_train_new[train_index], X_train_new[test_index]
y_train_cur, y_test_cur = y_train_new[train_index], y_train_new[test_index]
X_train_train, X_val,y_train_train,y_val = train_test_split(X_train_cur,y_train_cur, test_size=0.25,random_state=100)
bestPerformingModel = RandomForestClassifier(n_estimators=10,max_depth=5,random_state=100)
bestscore = 0
print('='*10)
for ne in n_estimators_grid:
for md in max_depth_grid:
clf = RandomForestClassifier(n_estimators=ne,max_depth=md,random_state=100)
clf.fit(X_train_train, y_train_train.ravel())
sc = clf.score(X_val, y_val)
print(f"[n_estimator: {ne}, max_depth: {md}, accuracy: {sc}]")
if bestscore < sc:
bestne = ne
bestmd = md
bestscore = sc
bestPerformingModel = clf
if str(bestne) in best_param_count['n_estimator']:
best_param_count['n_estimator'][str(bestne)] += 1
else:
best_param_count['n_estimator'][str(bestne)] = 1
if str(bestmd) in best_param_count['max_depth']:
best_param_count['max_depth'][str(bestmd)] += 1
else:
best_param_count['max_depth'][str(bestmd)] = 1
bscr_train = bestPerformingModel.score(X_train_cur, y_train_cur)
bscr = bestPerformingModel.score(X_test_cur, y_test_cur)
bscr_hld = bestPerformingModel.score(X_test, y_test)
avgsc_train_lst.append(bscr_train)
avgsc_lst.append(bscr)
avgsc_hld_lst.append(bscr_hld)
avgsc_train = avgsc_train + bscr_train
avgsc = avgsc + bscr
avgsc_hld = avgsc_hld + bscr_hld
print()
print(f"> Best n_estimator: {bestne} || Best max_depth: {bestmd}")
print(f"> Best training score: {bscr_train}")
print(f"> Best test score: {bscr}")
print(f"> Best held score: {bscr_hld}")
print('='*10)
print(avgsc_train_lst)
print(avgsc_lst)
print(avgsc_hld_lst)
print(avgsc_train/k)
print(avgsc/k)
print(avgsc_hld/k)
y_pred = bestPerformingModel.predict(X_test)
bscr_hld = bestPerformingModel.score(X_test, y_test)
print(bscr_hld)
bestPerformingModel.score(X_test, y_test)
joblib.dump(bestPerformingModel, 'rf.joblib')
joblib.dump(vectorizerName, 'vectorizerName.joblib')
joblib.dump(vectorizerSample, 'vectorizerSample.joblib')
```
```
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
```
This colab notebook demonstrates how to read and visualize the data in the Didi dataset: Digital Ink Diagram data.
More information about this data is available at
* https://github.com/google-research/google-research/tree/master/didi_dataset
* [The Didi dataset: Digital Ink Diagram data](https://arxiv.org/abs/2002.09303). P. Gervais, T. Deselaers, E. Aksan, O. Hilliges, 2020.
The colab demonstrates how to:
1. display the data along with the prompt images.
1. convert the data to a sharded `TFRecord` file of `TFExample`s.
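Each line of the `ndjson` files is one JSON record. As a reference, here is a minimal, hypothetical record with just the fields accessed in this notebook (`label_id`, `writing_guide`, and `drawing`, where each stroke stores parallel coordinate arrays); all the values are made up for illustration:

```python
import json

# Hypothetical single record, shaped like one line of the ndjson data.
line = json.dumps({
    "label_id": "example-0001",
    "writing_guide": {"width": 400, "height": 300},
    "drawing": [
        [[10, 20, 30],    # x coordinates of the first stroke
         [15, 25, 35],    # y coordinates
         [0, 16, 32]],    # per-point timestamps
    ],
})

ink = json.loads(line)
first_stroke = ink["drawing"][0]
print(len(first_stroke[0]), len(first_stroke[1]))  # 3 3
```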
```
from __future__ import division
import collections
import contextlib
import io
import json
import os
import random
import statistics
from googleapiclient.discovery import build
from google.colab import auth
from google.colab import files
from googleapiclient.http import MediaIoBaseDownload
from apiclient import errors
%tensorflow_version 2.x
import tensorflow as tf
import numpy as np
from matplotlib import pylab
from IPython.display import Image, display
# Setup and settings.
# Settings
JSON_FILES=["diagrams_wo_text_20200131.ndjson", "diagrams_20200131.ndjson"]
PROJECT_ID = "digital-ink-diagram-data"
BUCKET_NAME = "digital_ink_diagram_data"
LOCAL_DATA_DIR = "/tmp"
NUM_TFRECORD_SHARDS = 1
auth.authenticate_user()
# Creating the service client.
gcs_service = build("storage", "v1")
# Download the data
def download_file_from_gcs(filename):
directory_name = os.path.join(LOCAL_DATA_DIR, os.path.dirname(filename))
if not os.path.exists(directory_name):
os.mkdir(directory_name)
with open(os.path.join(LOCAL_DATA_DIR, filename), "wb") as f:
request = gcs_service.objects().get_media(bucket=BUCKET_NAME, object=filename)
media = MediaIoBaseDownload(f, request)
done = False
while not done:
status, done = media.next_chunk()
if not done:
print("Downloading '%s': %-3.0f%%" % (filename, status.progress() * 100))
def get_label_file(filetype, labelid):
file_id = os.path.join(filetype, "%s.%s" % (labelid, filetype))
fname = os.path.join(LOCAL_DATA_DIR, file_id)
if os.path.exists(fname):
return fname
download_file_from_gcs(file_id)
return fname
for json_file in JSON_FILES:
download_file_from_gcs(json_file)
# Displays prompt images with drawing overlaid.
def PrepareDrawing():
pylab.clf()
pylab.axes().set_aspect("equal")
pylab.gca().yaxis.set_visible(False)
pylab.gca().xaxis.set_visible(False)
def display_image(ink):
im = pylab.imread(os.path.join(LOCAL_DATA_DIR, "png", ink["label_id"] + ".png"))
# Compute scaling of the image.
guide_width = ink["writing_guide"]["width"]
guide_height = ink["writing_guide"]["height"]
im_height, im_width, _ = im.shape
scale=min(guide_width / im_width, guide_height / im_height)
offset_x = (guide_width - scale * im_width) / 2
offset_y = (guide_height - scale * im_height) / 2
pylab.imshow(im, origin="upper",
extent=(offset_x, offset_x + scale * im_width,
offset_y + scale * im_height, offset_y),
aspect="equal")
def display_strokes(ink):
for s in ink["drawing"]:
        pylab.plot(s[0], s[1], color="red")
def display_ink(ink):
# Fetch the corresponding PNG image.
get_label_file("png", ink["label_id"])
# Draw image, overlay strokes.
PrepareDrawing()
display_image(ink)
display_strokes(ink)
pylab.show()
for json_file in JSON_FILES:
count = 0
with open(os.path.join(LOCAL_DATA_DIR, json_file)) as f:
for line in f:
ink = json.loads(line)
display_ink(ink)
count += 1
if count == 10:
break
# This cell converts the files to TFRecord files of tf.Example protos.
# This cell takes a long time to run.
def get_label_file_contents(type, labelid):
get_label_file(type, labelid)
with open(os.path.join(LOCAL_DATA_DIR, type, "%s.%s" %(labelid, type))) as f:
return f.read()
def ink_to_tfexample(ink, dot=None):
"""Takes a LabeledInk and outputs a TF.Example with stroke information.
Args:
ink: A JSON array containing the drawing information.
    dot: (Optional) textual content of the GraphViz dot file that was used to
generate the prompt image.
Returns:
    A TensorFlow Example proto with the drawing data.
"""
features = {}
features["key"] = tf.train.Feature(
bytes_list=tf.train.BytesList(value=[ink["key"].encode("utf-8")]))
features["label_id"] = tf.train.Feature(
bytes_list=tf.train.BytesList(value=[ink["label_id"].encode("utf-8")]))
if dot:
features["label_dot"] = tf.train.Feature(
bytes_list=tf.train.BytesList(value=[dot.encode("utf-8")]))
max_len = np.array([len(stroke[0]) for stroke in ink["drawing"]]).max()
strokes = []
stroke_lengths = []
for stroke in ink["drawing"]:
stroke_len = len(stroke[0])
padded_stroke_with_pen = np.zeros([1, max_len, 4], dtype=np.float32)
padded_stroke_with_pen[0, 0:stroke_len, 0] = stroke[0]
padded_stroke_with_pen[0, 0:stroke_len, 1] = stroke[1]
padded_stroke_with_pen[0, 0:stroke_len, 2] = stroke[2]
padded_stroke_with_pen[0, stroke_len - 1, 3] = 1
strokes.append(padded_stroke_with_pen)
stroke_lengths.append(stroke_len)
all_strokes = np.concatenate(strokes, axis=0).astype(float) # (num_strokes, max_len, 4)
all_stroke_lengths = np.array(stroke_lengths).astype(int)
features["ink"] = tf.train.Feature(
float_list=tf.train.FloatList(value=all_strokes.flatten()))
features["stroke_length"] = tf.train.Feature(
int64_list=tf.train.Int64List(value=all_stroke_lengths))
features["shape"] = tf.train.Feature(
int64_list=tf.train.Int64List(value=all_strokes.shape))
features["num_strokes"] = tf.train.Feature(
int64_list=tf.train.Int64List(value=[len(ink["drawing"])]))
example = tf.train.Example(features=tf.train.Features(feature=features))
return example
@contextlib.contextmanager
def create_tfrecord_writers(output_file, num_output_shards):
writers = collections.defaultdict(list)
for split in ["train", "valid", "test"]:
for i in range(num_output_shards):
writers[split].append(
tf.io.TFRecordWriter("%s-%s-%05i-of-%05i" %
(output_file, split, i, num_output_shards)))
try:
yield writers
finally:
for split in ["train", "valid", "test"]:
for w in writers[split]:
w.close()
def pick_output_shard(num_shards):
return random.randint(0, num_shards - 1)
def size_normalization(drawing):
def get_bounding_box(drawing):
minx = 99999
miny = 99999
maxx = 0
maxy = 0
for s in drawing:
minx = min(minx, min(s[0]))
maxx = max(maxx, max(s[0]))
miny = min(miny, min(s[1]))
maxy = max(maxy, max(s[1]))
return (minx, miny, maxx, maxy)
bb = get_bounding_box(drawing)
width, height = bb[2] - bb[0], bb[3] - bb[1]
offset_x, offset_y = bb[0], bb[1]
if height < 1e-6:
height = 1
size_normalized_drawing = [[[(x - offset_x) / height for x in stroke[0]],
[(y - offset_y) / height for y in stroke[1]],
[t for t in stroke[2]]]
for stroke in drawing]
return size_normalized_drawing
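# --- Illustrative aside (not part of the original pipeline): the core of
# size_normalization is a shift to the bounding-box origin followed by a
# divide by the bounding-box height, sketched here on one synthetic stroke.
_xs, _ys = [0.0, 4.0], [1.0, 3.0]
_h = max(_ys) - min(_ys)  # 2.0; the real code above guards against _h ~ 0
_nx = [(x - min(_xs)) / _h for x in _xs]  # -> [0.0, 2.0]
_ny = [(y - min(_ys)) / _h for y in _ys]  # -> [0.0, 1.0]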
def resample_ink(drawing, timestep):
def resample_stroke(stroke, timestep):
def interpolate(t, t_prev, t_next, v0, v1):
d0 = abs(t-t_prev)
d1 = abs(t-t_next)
dist_sum = d0 + d1
d0 /= dist_sum
d1 /= dist_sum
return d1 * v0 + d0 * v1
x,y,t = stroke
if len(t) < 3:
return stroke
r_x, r_y, r_t = [x[0]], [y[0]], [t[0]]
final_time = t[-1]
stroke_time = final_time - t[0]
necessary_steps = int(stroke_time / timestep)
i = 1
current_time = t[i]
while current_time < final_time:
current_time += timestep
while i < len(t) - 1 and current_time > t[i]:
i += 1
r_x.append(interpolate(current_time, t[i-1], t[i], x[i-1], x[i]))
r_y.append(interpolate(current_time, t[i-1], t[i], y[i-1], y[i]))
r_t.append(interpolate(current_time, t[i-1], t[i], t[i-1], t[i]))
return [r_x, r_y, r_t]
resampled = [resample_stroke(s, timestep) for s in drawing]
return resampled
for json_file in JSON_FILES:
counts = collections.defaultdict(int)
with create_tfrecord_writers(os.path.join(LOCAL_DATA_DIR, json_file + ".tfrecord"), NUM_TFRECORD_SHARDS) as writers:
with open(os.path.join(LOCAL_DATA_DIR, json_file)) as f:
for line in f:
ink = json.loads(line)
dot = get_label_file_contents("dot", ink["label_id"])
ink["drawing"] = size_normalization(ink["drawing"])
ink["drawing"] = resample_ink(ink["drawing"], 20)
example = ink_to_tfexample(ink, dot)
counts[ink["split"]] += 1
writers[ink["split"]][pick_output_shard(NUM_TFRECORD_SHARDS)].write(example.SerializeToString())
print ("Finished writing: %s train: %i valid: %i test: %i" %(json_file, counts["train"], counts["valid"], counts["test"]))
# Download the TFRecord files to local machine (or use the filemanager on the left).
for json_file in JSON_FILES:
for split in ["train", "valid", "test"]:
for i in range(NUM_TFRECORD_SHARDS):
filename = os.path.join(LOCAL_DATA_DIR, json_file + ".tfrecord-%s-%05i-of-%05i" % (split, i, NUM_TFRECORD_SHARDS))
print(filename)
files.download(filename)
stats = {}
# Compute some dataset statistics
def count_points_strokes(ink):
return sum([len(stroke[0]) for stroke in ink]), len(ink)
# Collect data to compute statistics
for json_file in JSON_FILES:
stats[json_file] = collections.defaultdict(list)
with open(os.path.join(LOCAL_DATA_DIR, json_file)) as f:
for line in f:
ink = json.loads(line)
points, strokes = count_points_strokes(ink["drawing"])
stats[json_file]["points"].append(points)
stats[json_file]["strokes"].append(strokes)
stats[json_file]["labels"].append(ink["label_id"])
print (json_file)
for i in ["points", "strokes"]:
print (i, min(stats[json_file][i]), max(stats[json_file][i]), statistics.median(stats[json_file][i]))
for i in ["labels"]:
labels, counts = np.unique(stats[json_file][i], return_counts=True)
print (i, len(labels), min(counts), max(counts), statistics.median(counts))
print()
```
| github_jupyter |
# Poincare Map
This example shows how to calculate a simple Poincare Map with REBOUND. A Poincare Map (sometimes called a Poincare Section) can be helpful for understanding dynamical systems.
```
import rebound
import numpy as np
```
We first create the initial conditions for our map. The most interesting Poincare maps exist near resonances, so we need a system near a resonance. The easiest way to get planets into resonance is migration, so that's what we'll do. Initially, we set up a simulation in which the planets are placed just outside the 2:1 mean motion resonance.
```
sim = rebound.Simulation()
sim.add(m=1.)
sim.add(m=1e-3,a=1,e=0.001)
sim.add(m=0.,a=1.65)
sim.move_to_com()
```
We then define a simple migration force that will act on the outer planet. We implement it in Python. This is relatively slow, but we only need to migrate the planet for a short time.
```
def migrationForce(reb_sim):
tau = 40000.
ps[2].ax -= ps[2].vx/tau
ps[2].ay -= ps[2].vy/tau
ps[2].az -= ps[2].vz/tau
```
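A quick back-of-the-envelope argument (not from the REBOUND documentation) for why this damping causes inward migration: for a near-circular orbit with $v^2 = GM/a$, the specific orbital energy is $E = -GM/2a = -v^2/2$. The drag removes energy at the rate $\dot E = \vec v \cdot \dot{\vec v} = -v^2/\tau = 2E/\tau$, so $E(t) = E_0\,e^{2t/\tau}$ becomes more negative and $a(t) = a_0\,e^{-2t/\tau}$ shrinks: the planet migrates inward on the timescale $\tau/2$.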
Next, we link the additional migration forces to our REBOUND simulation and get the pointer to the particle array.
```
sim.additional_forces = migrationForce
ps = sim.particles
```
Then, we just integrate the system for 3000 time units, about 500 years in units where $G=1$.
```
sim.integrate(3000.)
```
Then we save the simulation to a binary file. We'll be reusing it a lot later to create the initial conditions and it is faster to load it from file than to migrate the planets into resonance each time.
```
sim.save("resonant_system.bin")
```
To create the Poincare Map, we first define which hypersurface we want to look at. Here, we choose the pericenter of the outer planet.
```
def hyper(sim):
dp = sim.particles[2]-sim.particles[0]
return dp.x*dp.vx + dp.y*dp.vy
```
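Note that $\vec r \cdot \vec v = x v_x + y v_y$ (with position and velocity taken relative to the star) is proportional to the radial velocity, so it vanishes at both pericenter and apocenter. The two are distinguished later by checking whether the planet's distance is smaller than its semi-major axis (`o[1].d < o[1].a`) before a point is recorded.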
We will also need a helper function that ensures our resonant angle is in the range $[-\pi:\pi]$.
```
def mod2pi(x):
if x>np.pi:
return mod2pi(x-2.*np.pi)
if x<-np.pi:
return mod2pi(x+2.*np.pi)
return x
```
The following function generates the Poincare Map for one set of initial conditions.
We first load the resonant system from the binary file we created earlier.
We then randomly perturb the velocity of one of the particles. If we perturb the velocity enough, the planets will no longer be in resonance.
We also initialize shadow particles to calculate the MEGNO, a fast chaos indicator.
```
def runone(args):
i = args # integer numbering the run
N_points_max = 2000 # maximum number of point in our Poincare Section
N_points = 0
poincare_map = np.zeros((N_points_max,2))
# setting up simulation from binary file
sim = rebound.Simulation.from_file("resonant_system.bin")
vx = 0.97+0.06*(float(i)/float(Nsim))
sim.particles[2].vx *= vx
sim.t = 0. # reset time to 0
# Integrate simulation in small intervals
# After each interval check if we crossed the
# hypersurface. If so, bisect until we hit the
# hypersurface exactly up to a precision
# of dt_epsilon
dt = 0.13
dt_epsilon = 0.001
sign = hyper(sim)
while sim.t<15000. and N_points < N_points_max:
oldt = sim.t
olddt = sim.dt
sim.integrate(oldt+dt)
nsign = hyper(sim)
if sign*nsign < 0.:
# Hyper surface crossed.
leftt = oldt
rightt = sim.t
sim.dt = -olddt
while (rightt-leftt > dt_epsilon):
# Bisection.
midt = (leftt+rightt)/2.
sim.integrate(midt)
msign = hyper(sim)
if msign*sign > 0.:
leftt = midt
sim.dt = 0.3*olddt
else:
rightt = midt
sim.dt = -0.3*olddt
# Hyper surface found up to precision of dt_epsilon.
# Calculate orbital elements
o = sim.calculate_orbits()
# Check if we cross hypersurface in one direction or the other.
if o[1].d<o[1].a:
# Calculate resonant angle phi and its time derivative
tp = np.pi*2.
phi = mod2pi(o[0].l-2.*o[1].l+o[1].omega+o[1].Omega)
phid = (tp/o[0].P-2.*tp/o[1].P)/(tp/o[0].P)
# Store value for map
poincare_map[N_points] = [phi,phid]
N_points += 1
sim.dt = olddt
sim.integrate(oldt+dt)
sign = nsign
# Rerun to calculate Megno
sim = rebound.Simulation.from_file("resonant_system.bin")
vx = 0.97+0.06*(float(i)/float(Nsim))
sim.particles[2].vx *= vx
sim.t = 0. # reset time to 0
sim.init_megno() # adds variational particles and initialized MEGNO
sim.integrate(15000.)
return (poincare_map, sim.calculate_megno(),vx)
```
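The crossing search inside `runone` — step forward until the hypersurface function changes sign, then bisect the bracketing time interval — can be sketched in isolation (a standalone toy that does not use REBOUND; names are made up):

```python
import math

def find_crossing(f, t, dt, t_max, eps=1e-3):
    """Return a time where f changes sign, found by stepping then bisecting."""
    s = f(t)
    while t < t_max:
        t += dt
        if s * f(t) < 0:          # sign change: the crossing is bracketed
            lo, hi = t - dt, t
            while hi - lo > eps:  # bisect down to the requested precision
                mid = 0.5 * (lo + hi)
                if f(mid) * s > 0:
                    lo = mid
                else:
                    hi = mid
            return 0.5 * (lo + hi)
        s = f(t)
    return None

root = find_crossing(math.cos, 0.0, 0.13, 10.0)  # first zero of cos, near pi/2
```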
For this example we'll run 10 initial conditions. Some of them will be in resonance, some others won't be. We run them in parallel using the InterruptiblePool that comes with REBOUND.
```
Nsim = 10
pool = rebound.InterruptiblePool()
res = pool.map(runone,range(Nsim))
```
Now we can finally plot the Poincare Map. We color the points by the MEGNO value of the particular simulation. A value close to 2 corresponds to quasi-periodic orbits, while a large value indicates chaotic motion.
```
%matplotlib inline
import matplotlib.pyplot as plt
fig = plt.figure(figsize=(14,8))
ax = plt.subplot(111)
ax.set_xlabel("$\phi$"); ax.set_ylabel("$\dot{\phi}$")
ax.set_xlim([-np.pi,np.pi]); ax.set_ylim([-0.06,0.1])
cm = plt.cm.get_cmap('brg')
for m, megno, vx in res:
c = np.empty(len(m[:,0])); c.fill(megno)
p = ax.scatter(m[:,0],m[:,1],marker=".",c=c, vmin=1.4, vmax=3, s=25,edgecolor='none', cmap=cm)
cb = plt.colorbar(p, ax=ax)
cb.set_label("MEGNO $<Y>$")
```
The red orbits are periodic or quasi-periodic; the green orbits are chaotic.
| github_jupyter |
```
#import libraries
import rasterio as rs
import numpy as np
import matplotlib.pyplot as plt
from PIL import Image
import math
from osgeo import gdal
from rasterio.plot import show
import os
print('*********** Libraries were imported successfully **********')
print('working directory: '+ str(os.getcwd()))
#load classification image
print('**************** Loading classification file *************')
gdal.UseExceptions()
img_clas = rs.open('20200928_sent_ökoneu_mask_etrs89.img')
print('**************** Image imported successfully **************')
## Print image data
print('**********************************************************')
print('*********************** Image data ***********************')
print('Number of bands: ' + str(img_clas.count))
print('Coordinate Reference System: ' + str(img_clas.crs))
print('Image width: ' + str(img_clas.width))
print('Image height: ' + str(img_clas.height))
print('Number of Pixels: ' + str(int(img_clas.height)*int(img_clas.width)))
print('**********************************************************')
## create groups using mask values from ERDAS classification mask
#grassland = [13,15,18,19,21,22,25,27,28,32,33] for 2015
grassland = [12,13,15,16,17,20,23,25,28,29,31,33]
#tree_canopy = [2,4,7,9,11,12,16]
tree_canopy = [3,5,6,7,8,9]
tree_list = list()
grass_list = list()
## get bands
print('************** extracting classification data ************')
clas_values = img_clas.read(1)
seeker_column = 0
while seeker_column < img_clas.width:
    seeker_row = 0
    while seeker_row < img_clas.height:
arr = clas_values[seeker_row]
pos = (seeker_row,seeker_column)
if arr[seeker_column] in grassland:
grass_list.append(pos)
if arr[seeker_column] in tree_canopy:
tree_list.append(pos)
seeker_row = seeker_row+1
seeker_column = seeker_column+1
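# --- Illustrative aside (not part of the original script): the per-pixel
# class lookup above can be vectorized with numpy.isin, shown here on a
# small stand-in array with the class lists from this notebook.
import numpy as np
_demo = np.array([[12, 3], [99, 20]])
_grass = np.isin(_demo, [12, 13, 15, 16, 17, 20, 23, 25, 28, 29, 31, 33])
_trees = np.isin(_demo, [3, 5, 6, 7, 8, 9])
_grass_positions = list(zip(*np.nonzero(_grass)))  # (row, col) pairs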
print('************ classification successfully loaded **********')
print('Grassland/agriculture values...................'+str(len(grass_list)))
print('Tree Canopy values.............................'+str(len(tree_list)))
print('***********************************************************')
print(grass_list[1])
#print((clas_values[200]))
#print(type(clas_values[1]))
#x = clas_values[200] #x = classvalues [row]
#print (x[1]) #value x[column]
#for elementa in (clas_values [200]):
#if elementa in grassland:
#print("V")
#else:
#print("F")
show(img_clas)
##Change directory to file folder
##os.chdir('D:\TEMP\20200324_Sentinel2A')
## open image
gdal.UseExceptions()
img = rs.open ('20200102_Ökoneu_etrs89_ndvi.img')
print('**************** Image imported successfully **************')
## Print image data
print('**********************************************************')
print('*********************** Image data ***********************')
print('Number of bands: ' + str(img.count))
print('Coordinate Reference System: ' + str(img.crs))
print('Image width:`' + str(img.width))
print('Image height:`' + str(img.height))
print('Number of Pixels:`' + str(int(img.height)*int(img.width)))
print('**********************************************************')
show(img)
## get bands
Index_Values = img.read(1)
print(len(Index_Values))
## stats
from scipy import stats
#stats.describe(Index_Values)  # activate only if needed
print('**********************************************************')
print('****************** Analysing values... *******************')
print('**********************************************************')
## create classification counters and indexing lists
very_healthy = 0 # values between [0.7-1]
very_healthy_dic = list()
healthy = 0 # values between [0.55-0.7]
healthy_dic = list()
lightstress = 0 # values between [0.45-0.55]
light_dic = list()
moderatestress = 0 # values between [0.35-0.45]
moderate_dic = list()
heavystress = 0 # values between [0.25-0.35]
heavy_dic = list()
no_veg = 0 # values between [<0.25]
no_veg_dic = list()
# create numpy-array for masking for report
output_format = ".png"
t=(img.height, img.width,3)
mask=np.zeros(t,dtype=np.uint8)
#Define Masking Colours
colors= [(0,0,0),(255,0,0),(255,128,0),(255,255,0),(127,255,0),(50,205,50)]
# Classify Pixels
NDVI_tree=list()
NDVI_veg=list()
NDVI_grass=list()
NDVI_neg=list()
NDVI_accum = list()
counter_total= 0
counter_neg= 0
seeker_column = 0
while seeker_column < img.width:
    seeker_row = 0
    while seeker_row < img.height:
        value = Index_Values[seeker_row, seeker_column]
        if value <= 0.25:
            mask[seeker_row, seeker_column] = colors[0]
            no_veg = no_veg + 1
        elif value <= 0.35:
            mask[seeker_row, seeker_column] = colors[1]
            heavystress = heavystress + 1
        elif value <= 0.45:
            mask[seeker_row, seeker_column] = colors[2]
            moderatestress = moderatestress + 1
        elif value <= 0.55:
            mask[seeker_row, seeker_column] = colors[3]
            lightstress = lightstress + 1
        elif value <= 0.7:
            mask[seeker_row, seeker_column] = colors[4]
            healthy = healthy + 1
        else:
            mask[seeker_row, seeker_column] = colors[5]
            very_healthy = very_healthy + 1
        if value >= 0:
            NDVI_accum.append(value)
        NDVI_neg.append(value)
        seeker_row = seeker_row + 1
    seeker_column = seeker_column + 1
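# --- Illustrative aside (not part of the original script): the threshold
# cascade above is equivalent to a single np.digitize call (right=True
# matches the <= comparisons), shown on a tiny stand-in array.
import numpy as np
_demo_ndvi = np.array([[0.1, 0.3], [0.5, 0.8]])
_bins = [0.25, 0.35, 0.45, 0.55, 0.7]  # same class edges as above
_classes = np.digitize(_demo_ndvi, _bins, right=True)  # 0 = no_veg ... 5 = very_healthy
_class_counts = np.bincount(_classes.ravel(), minlength=6)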
for elements in tree_list:
x_pos = elements[0]
y_pos = elements[1]
value = float(Index_Values[x_pos, y_pos])
if value >= 0:
NDVI_tree.append(value)
NDVI_veg.append(value)
for elemento in grass_list:
x_pos = elemento[0]
y_pos = elemento[1]
value = float(Index_Values[x_pos, y_pos])
if value >= 0:
NDVI_grass.append(value)
NDVI_veg.append(value)
# Calculation of vegetation and non-vegetation area
veg_area = 10*10/10000*(int(very_healthy)+int(healthy)+int(lightstress)+int(moderatestress)+int(heavystress))
no_veg_area = int(no_veg)*10*10/10000
NDVI_treemean = np.nanmean(NDVI_tree)
NDVI_grassmean = np.nanmean(NDVI_grass)
NDVI_mean = np.nanmean(NDVI_accum)
NDVI_scene = np.nanmean(NDVI_neg)
NDVI_vegmean = np.nanmean(NDVI_veg)
print('******************** Analysis completed *******************')
print('**********************************************************')
print('****************Scene analysis results *******************')
print('Scene NDVI [0.7, 1]...................... ' + str(very_healthy) + " pixels")
print('Scene NDVI [0.55, 0.7] .................. ' + str(healthy) + " pixels")
print('Scene NDVI [0.45-0.55]................... ' + str(lightstress) + " pixels")
print('Scene NDVI [0.35-0.45]................... ' + str(moderatestress) + " pixels")
print('Scene NDVI [0.25-0.35]................... ' + str(heavystress) + " pixels")
print('Scene NDVI [<0.25]....................... ' + str(no_veg) + " pixels")
print('**********************************************************')
print('Mean NDVI (ignore negative values)....... ' + str(NDVI_mean))
print('Scene NDVI (incl. negative values)....... ' + str(NDVI_scene))
print('**********************************************************')
print('Total area ............................. ' + str(float(no_veg_area)+float(veg_area)) + " hectares")
print('**********************************************************')
print(' ')
# vegetation analysis
print('**********************************************************')
print('********** Starting Vegetation Analysis ******************')
print('**********************************************************')
grass_area = int(len(grass_list))*10*10/10000
tree_area = int(len(tree_list))*10*10/10000
veg_area2 = grass_area + tree_area
# Values for NDVI tree canopy
counter_1= 0
counter_2= 0
counter_3= 0
counter_4= 0
counter_5= 0
counter_6= 0
for elements in NDVI_tree:
    if elements <= 0.25:
        counter_1 = counter_1 + 1
    elif elements <= 0.35:
        counter_2 = counter_2 + 1
    elif elements <= 0.45:
        counter_3 = counter_3 + 1
    elif elements <= 0.55:
        counter_4 = counter_4 + 1
    elif elements <= 0.7:
        counter_5 = counter_5 + 1
    else:
        counter_6 = counter_6 + 1
print('********** Tree canopy NDVI Results ****************')
print('Tree canopy NDVI [0.7, 1]...................... ' + str(counter_6) + " pixels")
print('Tree canopy NDVI [0.55, 0.7] .................. ' + str(counter_5) + " pixels")
print('Tree canopy NDVI [0.45-0.55]................... ' + str(counter_4) + " pixels")
print('Tree canopy NDVI [0.35-0.45]................... ' + str(counter_3) + " pixels")
print('Tree canopy NDVI [0.25-0.35]................... ' + str(counter_2) + " pixels")
print('Tree canopy NDVI [<0.25]....................... ' + str(counter_1) + " pixels")
print('**********************************************************')
print('Tree canopy area .............................. ' + str(tree_area) + " hectares")
print('**********************************************************')
print(' ')
# Values for NDVI grassland
counter_1= 0
counter_2= 0
counter_3= 0
counter_4= 0
counter_5= 0
counter_6= 0
for elements in NDVI_grass:
    if elements <= 0.25:
        counter_1 = counter_1 + 1
    elif elements <= 0.35:
        counter_2 = counter_2 + 1
    elif elements <= 0.45:
        counter_3 = counter_3 + 1
    elif elements <= 0.55:
        counter_4 = counter_4 + 1
    elif elements <= 0.7:
        counter_5 = counter_5 + 1
    else:
        counter_6 = counter_6 + 1
print('************** Grassland NDVI results ***************')
print('**********************************************************')
print('Grassland NDVI [0.7, 1]...................... ' + str(counter_6) + " pixels")
print('Grassland NDVI [0.55, 0.7] .................. ' + str(counter_5) + " pixels")
print('Grassland NDVI [0.45-0.55]................... ' + str(counter_4) + " pixels")
print('Grassland NDVI [0.35-0.45]................... ' + str(counter_3) + " pixels")
print('Grassland NDVI [0.25-0.35]................... ' + str(counter_2) + " pixels")
print('Grassland NDVI [<0.25]....................... ' + str(counter_1) + " pixels")
print('**********************************************************')
print('Grassland area .............................. ' + str(grass_area) + " hectares")
print('**********************************************************')
print(' ')
print('********** Vegetation Analysis Results *******************')
print('**********************************************************')
print('Mean Grassland NDVI............................' + str(NDVI_grassmean))
print('Mean Tree Canopy NDVI .........................' + str(NDVI_treemean))
print('Mean Vegetation NDVI......................' + str(NDVI_vegmean))
print('**********************************************************')
print('Total Analysed vegetation area ........... ' + str(veg_area2) + " hectares")
# Plot mask
mask_plot = Image.fromarray(mask, 'RGB')
#mask_plot.save('20201219_Ökoneu_NDVI_mask.png')
plt.imshow(mask_plot)
print(len(NDVI_grass))
print(len(NDVI_tree))
print(len(grass_list))
print(len(tree_list))
```
| github_jupyter |
```
import os
import sys
from tqdm import trange
from tqdm import tqdm
from skimage.util import montage
import matplotlib.pyplot as plt
import numpy as np
import torch
import torch.nn as nn
import torch.optim as optim
import torch.utils.data as data
import torchvision.transforms as transforms
import medmnist
from medmnist.dataset import PathMNIST, ChestMNIST, DermaMNIST, OCTMNIST, PneumoniaMNIST, RetinaMNIST, BreastMNIST, OrganMNISTAxial, OrganMNISTCoronal, OrganMNISTSagittal
from medmnist.evaluator import getAUC, getACC
from medmnist.info import INFO
print("Version:", medmnist.__version__)
data_flag = 'pathmnist'
# data_flag = 'breastmnist'
download = True
# input_root = 'tmp_data/'
input_root = 'input/'
NUM_EPOCHS = 10
BATCH_SIZE = 128
lr = 0.001
flag_to_class = {
"pathmnist": PathMNIST,
"chestmnist": ChestMNIST,
"dermamnist": DermaMNIST,
"octmnist": OCTMNIST,
"pneumoniamnist": PneumoniaMNIST,
"retinamnist": RetinaMNIST,
"breastmnist": BreastMNIST,
"organmnist_axial": OrganMNISTAxial,
"organmnist_coronal": OrganMNISTCoronal,
"organmnist_sagittal": OrganMNISTSagittal,
}
DataClass = flag_to_class[data_flag]
info = INFO[data_flag]
task = info['task']
n_channels = info['n_channels']
n_classes = len(info['label'])
```
First, we read the MedMNIST data, preprocess it, and encapsulate it in DataLoader form.
```
# preprocessing
data_transform = transforms.Compose([
transforms.RandomHorizontalFlip(p=0.5),
transforms.RandomRotation(15),
transforms.ColorJitter(brightness=1, contrast=1, hue=0.5, saturation=0.5),
transforms.ToTensor(),
transforms.Normalize(mean=[.5], std=[.5])
])
# data_transform = transforms.Compose([
# transforms.ToTensor(),
# transforms.Normalize(mean=[.5], std=[.5])
# ])
# load the data
train_dataset = DataClass(root=input_root, split='train', transform=data_transform, download=download)
test_dataset = DataClass(root=input_root, split='test', transform=data_transform, download=download)
nonorm_dataset = DataClass(root=input_root, split='train', transform=transforms.ToTensor(), download=download)
# encapsulate data into dataloader form
train_loader = data.DataLoader(dataset=train_dataset, batch_size=BATCH_SIZE, shuffle=True)
test_loader = data.DataLoader(dataset=test_dataset, batch_size=BATCH_SIZE, shuffle=True)
print(train_dataset)
print("===================")
print(test_dataset)
print("===================")
print(nonorm_dataset)
# visualization
img, target = nonorm_dataset[8]
if n_channels == 1:
img = img.reshape(28, 28)
plt.imshow(img, cmap='gray')
else:
img = img.permute(1, 2, 0)
plt.imshow(img)
print(target)
def show_images(imgs, num_rows, num_cols, scale=2):
figsize = (num_cols * scale, num_rows * scale)
_, axes = plt.subplots(num_rows, num_cols, figsize=figsize)
for i in range(num_rows):
for j in range(num_cols):
axes[i][j].imshow(imgs[i * num_cols + j],cmap='gray')
axes[i][j].axes.get_xaxis().set_visible(False)
axes[i][j].axes.get_yaxis().set_visible(False)
return axes
def apply(img,aug,num_rows=2,num_cols=4,scale=1.5):
Y=[aug(img) for _ in range(num_rows*num_cols)]
show_images(Y,num_rows,num_cols,scale)
print(img.shape)
img = img.permute(2, 0, 1)
image = img.cpu().clone()
image = image.squeeze(0)  # squeeze out the extra dimension
print(image.shape)
unloader = transforms.ToPILImage()
# image = transforms.ToPILImage(image)  # automatically converts to 0-255
# image = unloader(image)
image = transforms.ToPILImage()(image)
print(image)
plt.imshow(image)
apply(image,transforms.ColorJitter(brightness=1, contrast=1, hue=0.5, saturation=0.5))
augs = transforms.Compose(
[transforms.RandomHorizontalFlip(p=0.5),
# transforms.RandomCrop(28),
# transforms.ColorJitter(brightness=0.5, contrast=1, hue=0.5, saturation=0.5),
transforms.RandomRotation(15)])
apply(image,augs)
# montage
def process(n_channels, length=20):
scale = length * length
image = np.zeros((scale, 28, 28, 3)) if n_channels == 3 else np.zeros((scale, 28, 28))
index = [i for i in range(scale)]
np.random.shuffle(index)
for idx in range(scale):
img, _ = nonorm_dataset[idx]
if n_channels == 3:
img = img.permute(1, 2, 0).numpy()
else:
img = img.reshape(28, 28).numpy()
image[index[idx]] = img
if n_channels == 1:
image = image.reshape(scale, 28, 28)
arr_out = montage(image)
plt.imshow(arr_out, cmap='gray')
else:
image = image.reshape(scale, 28, 28, 3)
        arr_out = montage(image, multichannel=True)
plt.imshow(arr_out)
process(n_channels=n_channels, length=20)
```
Then, we define a simple model for illustration, along with the objective function and the optimizer we will use for classification.
```
# define a simple CNN model
class Net(nn.Module):
def __init__(self, in_channels, num_classes):
super(Net, self).__init__()
self.layer1 = nn.Sequential(
nn.Conv2d(in_channels, 16, kernel_size=3),
nn.BatchNorm2d(16),
nn.ReLU())
self.layer2 = nn.Sequential(
nn.Conv2d(16, 16, kernel_size=3),
nn.BatchNorm2d(16),
nn.ReLU(),
nn.MaxPool2d(kernel_size=2, stride=2))
self.layer3 = nn.Sequential(
nn.Conv2d(16, 64, kernel_size=3),
nn.BatchNorm2d(64),
nn.ReLU())
self.layer4 = nn.Sequential(
nn.Conv2d(64, 64, kernel_size=3),
nn.BatchNorm2d(64),
nn.ReLU())
self.layer5 = nn.Sequential(
nn.Conv2d(64, 64, kernel_size=3, padding=1),
nn.BatchNorm2d(64),
nn.ReLU(),
nn.MaxPool2d(kernel_size=2, stride=2))
self.fc = nn.Sequential(
nn.Linear(64 * 4 * 4, 128),
nn.ReLU(),
nn.Linear(128, 128),
nn.ReLU(),
nn.Linear(128, num_classes))
def forward(self, x):
x = self.layer1(x)
x = self.layer2(x)
x = self.layer3(x)
x = self.layer4(x)
x = self.layer5(x)
x = x.view(x.size(0), -1)
x = self.fc(x)
return x
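# Sanity-check sketch (not in the original notebook): trace the spatial size
# through the layers above to confirm the 64 * 4 * 4 input of self.fc.
# Each tuple is (kernel, padding, followed_by_2x2_pool).
_size = 28
for _k, _p, _pool in [(3, 0, False), (3, 0, True), (3, 0, False), (3, 0, False), (3, 1, True)]:
    _size = _size - _k + 1 + 2 * _p
    if _pool:
        _size //= 2
assert _size == 4  # so the flattened feature vector has 64 * 4 * 4 entries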
model = Net(in_channels=n_channels, num_classes=n_classes)
# define loss function and optimizer
if task == "multi-label, binary-class":
criterion = nn.BCEWithLogitsLoss()
else:
criterion = nn.CrossEntropyLoss()
optimizer = optim.SGD(model.parameters(), lr=lr, momentum=0.9)
```
Next, we can start to train and evaluate!
```
# train
for epoch in range(NUM_EPOCHS):
train_correct = 0
train_total = 0
test_correct = 0
test_total = 0
model.train()
for inputs, targets in tqdm(train_loader):
# forward + backward + optimize
optimizer.zero_grad()
outputs = model(inputs)
if task == 'multi-label, binary-class':
targets = targets.to(torch.float32)
loss = criterion(outputs, targets)
else:
targets = targets.squeeze().long()
loss = criterion(outputs, targets)
loss.backward()
optimizer.step()
# evaluation
def test(split):
model.eval()
y_true = torch.tensor([])
y_score = torch.tensor([])
data_loader = train_loader if split == 'train' else test_loader
with torch.no_grad():
for inputs, targets in data_loader:
outputs = model(inputs)
if task == 'multi-label, binary-class':
targets = targets.to(torch.float32)
m = nn.Sigmoid()
outputs = m(outputs)
else:
targets = targets.squeeze().long()
m = nn.Softmax(dim=1)
outputs = m(outputs)
targets = targets.float().resize_(len(targets), 1)
y_true = torch.cat((y_true, targets), 0)
y_score = torch.cat((y_score, outputs), 0)
y_true = y_true.numpy()
y_score = y_score.detach().numpy()
auc = getAUC(y_true, y_score, task)
acc = getACC(y_true, y_score, task)
print('%s acc: %.3f auc:%.3f' % (split, acc, auc))
print('==> Evaluating ...')
test('train')
test('test')
```
| github_jupyter |
Perform SVM with a PCA preprocessing step on the Breast Cancer Dataset and the Iris Dataset.
With Breast Cancer Dataset
```
from sklearn import datasets
breast_cancer = datasets.load_breast_cancer()
breast_data = breast_cancer.data
breast_labels = breast_cancer.target
print(breast_data.shape)
print(breast_labels.shape)
import numpy as np
labels = np.reshape(breast_labels,(569,1))
final_breast_data = np.concatenate([breast_data,labels],axis=1)
final_breast_data.shape
import pandas as pd
breast_dataset = pd.DataFrame(final_breast_data)
features = breast_cancer.feature_names
features
final_breast_data[0:5]
from sklearn.model_selection import train_test_split
X_train, X_test, y_train, y_test = train_test_split(breast_data,
breast_labels, random_state=46)
print(X_train.shape, X_test.shape)
```
Preprocessing: Principal Component Analysis
-------------------------------------------
We can use PCA to reduce these features to a manageable size, while maintaining most of the information
in the dataset.
```
from sklearn import decomposition
pca = decomposition.PCA(n_components=20, whiten=True)
pca.fit(X_train)
```
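As a quick sanity check (not part of the original notebook), we can ask how much of the variance the 20 retained components actually explain; on the unscaled breast-cancer features, the first few components dominate:

```python
from sklearn import datasets, decomposition

# Illustrative check: fraction of the total variance kept by 20 components.
X = datasets.load_breast_cancer().data
pca_check = decomposition.PCA(n_components=20, whiten=True).fit(X)
retained = pca_check.explained_variance_ratio_.sum()
print(f"variance retained: {retained:.6f}")
```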
The principal components measure deviations about the mean along orthogonal axes.
```
print(pca.components_.shape)
```
With this projection computed, we can now project our original training
and test data onto the PCA basis:
```
X_train_pca = pca.transform(X_train)
X_test_pca = pca.transform(X_test)
print(X_train_pca.shape)
print(X_test_pca.shape)
```
Doing the Learning: Support Vector Machines
-------------------------------------------
Now we'll perform support-vector-machine classification on this reduced
dataset:
```
from sklearn import svm
clf = svm.SVC(C=5., gamma=0.001)
clf.fit(X_train_pca, y_train)
from sklearn import metrics
y_pred = clf.predict(X_test_pca)
print(metrics.classification_report(y_test, y_pred))
```
Another interesting metric is the *confusion matrix*, which indicates
how often any two items are mixed-up. The confusion matrix of a perfect
classifier would only have nonzero entries on the diagonal, with zeros
on the off-diagonal:
```
print(metrics.confusion_matrix(y_test, y_pred))
```
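Raw counts can be hard to compare when the classes have different sizes; row-normalizing the confusion matrix puts per-class recall on the diagonal. A small sketch with toy labels (independent of the classifier above):

```python
import numpy as np
from sklearn import metrics

# Toy labels to illustrate row-normalizing a confusion matrix
# (rows = true classes, columns = predicted classes).
y_true = [0, 0, 0, 1, 1, 1, 1, 1]
y_pred = [0, 0, 1, 1, 1, 1, 1, 0]

cm = metrics.confusion_matrix(y_true, y_pred)
# Divide each row by its sum so rows sum to 1; the diagonal
# then holds the recall of each class.
cm_normalized = cm / cm.sum(axis=1, keepdims=True)
print(cm_normalized)
```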
# With Iris Dataset
```
iris = datasets.load_iris()
iris_data = iris.data
iris_labels = iris.target
print(iris_data.shape)
print(iris_labels.shape)
features = iris.feature_names
features
from sklearn.model_selection import train_test_split
X_train, X_test, y_train, y_test = train_test_split(iris_data,
iris_labels, random_state=46)
print(X_train.shape, X_test.shape)
```
Preprocessing: Principal Component Analysis
-------------------------------------------
We can use PCA to reduce these features to a manageable size, while maintaining most of the information in the dataset.
```
from sklearn import decomposition
pca = decomposition.PCA(n_components=2, whiten=True)
pca.fit(X_train)
print(pca.components_.shape)
X_train_pca = pca.transform(X_train)
X_test_pca = pca.transform(X_test)
print(X_train_pca.shape)
print(X_test_pca.shape)
from sklearn import svm
clf = svm.SVC(C=5., gamma=0.001)
clf.fit(X_train_pca, y_train)
from sklearn import metrics
y_pred = clf.predict(X_test_pca)
print(metrics.classification_report(y_test, y_pred))
print(metrics.confusion_matrix(y_test, y_pred))
```
---
```
import gaussianfft
import matplotlib.pyplot as plt
import numpy as np
from scipy.spatial.distance import cdist
from gaussianfft.util import EmpiricalVariogram
%matplotlib inline
plt.rcParams['figure.figsize'] = [15,7]
def filter_deltas(m, d):
# Filter nans
deltas_nan = np.array(d)
nan_cols = np.any(np.isnan(deltas_nan), axis=0)
deltas_nan = deltas_nan[:, np.invert(nan_cols)]
mid_nan = m[np.invert(nan_cols)]
return mid_nan, deltas_nan, nan_cols
def plot_deltas(fig, ax, m, d):
mid_nan, deltas_nan, _ = filter_deltas(m, d)
mid_nan /= np.max(mid_nan)
# Plot
cf = ax.contourf(mid_nan, range_length_ratio, deltas_nan, 30, vmin=-0.15, vmax=0.15, cmap='bwr')
return cf
# Setup
nx, ny, nz = 100, 1, 1
dx, dy, dz = 20, 20, 20
px, py, pz = nx, ny, nz
dr = 0.5 * dx
nmax = 10000
strategy = 'origo'
range_length_ratio = np.linspace(0.1, 2, 10)
# Derived constants
Lx, Ly, Lz = nx * dx, ny * dy, nz * dz
def simulate(vtype):
# Simulation
deltas = [[], []]
true_variogram = []
estimated_variograms = [[], []]
for r in range_length_ratio:
v = gaussianfft.variogram(vtype, Lx * r, Ly * r, Lz * r)
ev = EmpiricalVariogram(v, nx, dx, ny, dy, nz, dz, px, py, pz)
true_variogram.append(ev.true_variogram(dr))
refs = ev.pick_reference_points(strategy)
for dd, ee in zip(deltas, estimated_variograms):
mid, estimated_variogram, _, _, convrg = ev.estimate_variogram(nmax, dr, refs, analyze_convergence=10)
ee.append(estimated_variogram)
dd.append(convrg.deltas[-1])
# TODO: analyze convergence
return mid, deltas, true_variogram, estimated_variograms
```
# Gaussian
```
variogram_type = 'gaussian'
mid, deltas, tcorr, ecorr = simulate(variogram_type)
# Plot comparison
fig, axes = plt.subplots(nrows=1, ncols=2)
c = plot_deltas(fig, axes[0], mid, deltas[0])
c = plot_deltas(fig, axes[1], mid, deltas[1])
axes[0].set_ylabel('range/length ratio')
axes[0].set_xlabel('correlation range')
axes[1].set_xlabel('correlation range')
fig.colorbar(c, ax=axes.ravel().tolist())
# Inspect variogram estimation
idelta = 0
ratio = 0.75
fmid, fdelta, nancols = filter_deltas(mid, deltas[idelta])
ir = np.argmin(np.abs(range_length_ratio - ratio))
evario = np.array(ecorr[idelta][ir])[np.invert(nancols)]
tvario = np.array(tcorr[ir])
plt.plot(fmid, evario)
plt.plot(tvario[0], tvario[1])
# plt.plot(fmid, fdelta[ir, :])
# Show a realization
v = gaussianfft.variogram(variogram_type, ratio * Lx)
f = gaussianfft.advanced.simulate(v, nx, dx, padx=px)
plt.plot(f)
```
# Spherical
```
variogram_type = 'spherical'
mid, deltas, tcorr, ecorr = simulate(variogram_type)
# Plot comparison
fig, axes = plt.subplots(nrows=1, ncols=2)
c = plot_deltas(fig, axes[0], mid, deltas[0])
c = plot_deltas(fig, axes[1], mid, deltas[1])
axes[0].set_ylabel('range/length ratio')
axes[0].set_xlabel('correlation range')
axes[1].set_xlabel('correlation range')
fig.colorbar(c, ax=axes.ravel().tolist())
```
# Exponential
```
variogram_type = 'exponential'
mid, deltas, tcorr, ecorr = simulate(variogram_type)
# Plot comparison
fig, axes = plt.subplots(nrows=1, ncols=2)
c = plot_deltas(fig, axes[0], mid, deltas[0])
c = plot_deltas(fig, axes[1], mid, deltas[1])
axes[0].set_ylabel('range/length ratio')
axes[0].set_xlabel('correlation range')
axes[1].set_xlabel('correlation range')
fig.colorbar(c, ax=axes.ravel().tolist())
```
# Matern52
```
variogram_type = 'matern52'
mid, deltas, tcorr, ecorr = simulate(variogram_type)
# Plot comparison
fig, axes = plt.subplots(nrows=1, ncols=2)
c = plot_deltas(fig, axes[0], mid, deltas[0])
c = plot_deltas(fig, axes[1], mid, deltas[1])
axes[0].set_ylabel('range/length ratio')
axes[0].set_xlabel('correlation range')
axes[1].set_xlabel('correlation range')
fig.colorbar(c, ax=axes.ravel().tolist())
```
# General Exponential (1.5)
```
variogram_type = 'general_exponential'
mid, deltas, tcorr, ecorr = simulate(variogram_type)
# Plot comparison
fig, axes = plt.subplots(nrows=1, ncols=2)
c = plot_deltas(fig, axes[0], mid, deltas[0])
c = plot_deltas(fig, axes[1], mid, deltas[1])
axes[0].set_ylabel('range/length ratio')
axes[0].set_xlabel('correlation range')
axes[1].set_xlabel('correlation range')
fig.colorbar(c, ax=axes.ravel().tolist())
```
---
# Creating a class
```
class Student: # created a class "Student"
name = "Tom"
grade = "A"
age = 15
def display(self):
print(self.name,self.grade,self.age)
# There will be no output here, because we are not invoking (calling) the "display" function to print
```
## Creating an object
```
class Student:
name = "Tom"
grade = "A"
age = 15
def display(self):
print(self.name,self.grade,self.age)
s1 = Student() # created an object "s1" of class "Student"
s1.display() # displaying the details through the "display" function
```
## Creating a constructor
> If we pass parameters to the constructor (inside `__init__`), it is called a "parameterized constructor".
> If we do not pass parameters to the constructor (inside `__init__`), it is called a "non-parameterized constructor".
```
# This is a parameterized constructor
class Student:
    def __init__(self,name,study,occupation): # initializing all the parameters we need, i.e., name, study, occupation, in the constructor
self.name = name
self.study = study
self.occupation = occupation
def output(self):
print(self.name + " completed " + self.study + " and working as a " + self.occupation)
s1 = Student('Tom', 'Btech' ,'software engineer') # creating two objects and giving the
s2 = Student('Jerry', "MBBS", 'doctor') # input as the order mentioned in the " __init__ " function
s1.output()
s2.output()
# This is a non-parameterized constructor
class Student:
def __init__(self):
        print("This is a non-parameterized constructor")
s1 = Student()
```
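The two styles can also be combined: default parameter values make the same `__init__` usable both with and without arguments. A small sketch (the default values here are arbitrary):

```python
class Student:
    # Default values make the parameters optional, so the same
    # constructor serves both calling styles.
    def __init__(self, name="Unknown", grade="N/A"):
        self.name = name
        self.grade = grade

s1 = Student("Tom", "A")   # parameterized call
s2 = Student()             # non-parameterized call
print(s1.name, s1.grade)   # Tom A
print(s2.name, s2.grade)   # Unknown N/A
```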
## Python in-built class functions
```
class Student:
def __init__(self,name,grade,age):
self.name = name
self.grade = grade
self.age = age
s1 = Student("Tom","A",15)
print(getattr(s1,'name')) # we get the value of the particular attribute
print(getattr(s1,"age")) # Here,we are asking for attributes "name","age" and the value of those attributes are "Tom",15 respectively
setattr(s1,"age",20) # setting the attribute (changing)
print("Age of the tom is changed using 'setattr' ")
print(getattr(s1,"age"))
print("Checking whether the particular attribute is there or not")
print(hasattr(s1,"name")) # Returns "True" if the attribute is intialized on our class
print(hasattr(s1,"school")) # or else gives "False"
```
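Python's built-in class functions also include `delattr`, which removes an attribute from an object, complementing `getattr`, `setattr`, and `hasattr`:

```python
class Student:
    def __init__(self, name, age):
        self.name = name
        self.age = age

s1 = Student("Tom", 15)
print(hasattr(s1, "age"))   # True: the attribute exists
delattr(s1, "age")          # remove the "age" attribute from s1
print(hasattr(s1, "age"))   # False: it is gone now
```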
## Built-in class attributes
```
class Student:
    '''This is the doc string, where we describe the idea of this program'''
def __init__(self,name,grade,age):
self.name = name
self.grade = grade
self.age = age
s1 = Student("Tom","A",15)
print(Student.__doc__) # printing the doc string
print(s1.__dict__) # printing the attributes in a dictionary data type way
```
# Inheritance
```
class Parent:
print("This is the parent class")
def dog(self):
print("Dog barks")
class Child(Parent): # Inheriting the "parent" class using "child" class
def lion(self):
print("Lion roars")
c1 = Child() # "c1" is the object of "Child" class
c1.lion()
c1.dog() # because of inheritance, the "dog" method from the "Parent" class is also available to the Child object, so its print statement runs
```
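When the child class defines a method with the same name as one in the parent class, the child's version wins; `super()` lets the child still call the parent's implementation. A minimal sketch:

```python
class Parent:
    def sound(self):
        return "Dog barks"

class Child(Parent):
    def sound(self):
        # Extend rather than replace the parent's behaviour.
        return super().sound() + " and Lion roars"

c1 = Child()
print(c1.sound())  # Dog barks and Lion roars
```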
## Multi-level inheritance
```
class Parent:
print("This is the parent class")
def dog(self):
print("Dog barks")
class Child(Parent): # Inheriting the "parent" class using "child" class
def lion(self):
print("Lion roars")
class Grandchild(Child): # Inheriting the "Child" class
    def pigeon(self):
        print("Pigeon coos")
c1 = Grandchild() # "c1" is the object of "Grandchild" class
c1.lion() # inherited from the "Child" class
c1.dog() # because of inheritance, the "dog" method from the "Parent" class is also available through "Child"
c1.pigeon() # defined directly on "Grandchild"
```
# Multiple inheritance
```
class Calculator1:
def sum(self,a,b):
return a + b
class Calculator2:
def mul(self,a,b):
return a * b
class Derived(Calculator1,Calculator2): # Multiple inheritance, since it is having multiple (in this case 2) class arguments.
def div(self,a,b):
return a / b
d = Derived()
print(d.sum(20,30))
print(d.mul(20,30))
print(d.div(20,30))
```
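With multiple parent classes, Python decides where to look for an attribute using the Method Resolution Order (MRO): the class itself first, then its bases from left to right. It can be inspected directly:

```python
class Calculator1:
    def sum(self, a, b):
        return a + b

class Calculator2:
    def mul(self, a, b):
        return a * b

class Derived(Calculator1, Calculator2):
    pass

# Lookup order: Derived first, then its bases left to right, then object.
print([cls.__name__ for cls in Derived.__mro__])
```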
# Polymorphism
```
class Teacher:
def intro(self):
print("I am a teacher")
def experience(self):
print("3 to 4 years")
class Lecturer:
def intro(self):
print("I am a lecturer")
def experience(self):
print("5 to 6 years")
class Professor:
def intro(self):
print("I am a professor")
def experience(self):
print("8 to 10 years")
# Common Interface for all persons
def category(person):
person.intro() # only intros are printed
    # if we call person.experience() instead of person.intro(), we get only the experience; calling both prints both statements.
# instantiate objects
t = Teacher()
l = Lecturer()
p = Professor()
# passing the object
category(t)
category(l)
category(p)
```
# Encapsulation
```
class Computer:
def __init__(self):
        self.__maxprice = 900 # __maxprice is private data because its name starts with two underscores " __ "
def sell(self):
print("Selling Price: {}".format(self.__maxprice))
def setMaxPrice(self, price): # This method is used to set the private data
self.__maxprice = price
c = Computer() # c is an object of "Computer" class
c.sell()
# change the price
c.__maxprice = 1000 # Here we try to modify "__maxprice" directly to 1000, but the private data is not modified
c.sell()
# using setter function
c.setMaxPrice(1000) # In order to change the private data, we have to take help of the method "setMaxPrice" and then now the data is modified
c.sell() # Invoking (calling) the "sell" method (function)
```
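The reason the direct assignment `c.__maxprice = 1000` above does not change the private data is name mangling: inside the class, `__maxprice` is stored under the name `_Computer__maxprice`, so the assignment from outside just creates a new, unrelated attribute. A small sketch:

```python
class Computer:
    def __init__(self):
        self.__maxprice = 900  # stored as _Computer__maxprice

c = Computer()
c.__maxprice = 1000           # creates a NEW attribute, not the private one
print(c.__maxprice)           # 1000 (the new, unrelated attribute)
print(c._Computer__maxprice)  # 900 (the mangled private attribute)
```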
## Data abstraction
```
from abc import ABC, abstractmethod
class Company(ABC): # this is the abstract class; "ABC" (Abstract Base Class) is imported from the "abc" module
    # the "@" syntax is called a decorator; only with the @abstractmethod decorator does the method become abstract
    @abstractmethod
    def developer(self):
        pass
class Jr_developer(Company):
def developer(self):
print("I am a jr.developer and develops small applications")
class Sr_developer(Company):
def developer(self):
print("I am a sr.developer and develops large applications")
j = Jr_developer()
s = Sr_developer()
j.developer()
s.developer()
```
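One practical consequence of marking `developer` as abstract is that the abstract base class itself can no longer be instantiated; only subclasses that implement the method can. A small sketch:

```python
from abc import ABC, abstractmethod

class Company(ABC):
    @abstractmethod
    def developer(self):
        pass

# Instantiating the abstract class raises a TypeError.
try:
    Company()
    instantiated = True
except TypeError as e:
    instantiated = False
    print("TypeError:", e)
```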
---
# Linear Discriminant Analysis (LDA)
## Importing the libraries
```
import numpy as np
import matplotlib.pyplot as plt
import pandas as pd
```
## Importing the dataset
```
dataset = pd.read_csv('Wine.csv')
X = dataset.iloc[:, :-1].values
y = dataset.iloc[:, -1].values
```
## Splitting the dataset into the Training set and Test set
```
from sklearn.model_selection import train_test_split
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size = 0.2, random_state = 0)
```
## Feature Scaling
```
from sklearn.preprocessing import StandardScaler
sc = StandardScaler()
X_train = sc.fit_transform(X_train)
X_test = sc.transform(X_test)
```
## Applying LDA
```
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis as LDA
lda = LDA(n_components = 2)
X_train = lda.fit_transform(X_train, y_train)
X_test = lda.transform(X_test)
```
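The Wine data has three classes, so LDA can produce at most `n_classes - 1 = 2` discriminant axes. The scikit-learn built-in copy of the same dataset can be used to check how much between-class variance each axis carries (a sketch, independent of the `Wine.csv` file used above):

```python
import numpy as np
from sklearn.datasets import load_wine
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis as LDA

X, y = load_wine(return_X_y=True)

# With 3 classes, n_components can be at most n_classes - 1 = 2.
lda = LDA(n_components=2)
X_lda = lda.fit_transform(X, y)

print(X_lda.shape)                     # (178, 2)
print(lda.explained_variance_ratio_)   # sums to 1 across the 2 axes
```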
## Training the Logistic Regression model on the Training set
```
from sklearn.linear_model import LogisticRegression
classifier = LogisticRegression(random_state = 0)
classifier.fit(X_train, y_train)
```
## Making the Confusion Matrix
```
from sklearn.metrics import confusion_matrix, accuracy_score
y_pred = classifier.predict(X_test)
cm = confusion_matrix(y_test, y_pred)
print(cm)
accuracy_score(y_test, y_pred)
```
## Visualising the Training set results
```
from matplotlib.colors import ListedColormap
X_set, y_set = X_train, y_train
X1, X2 = np.meshgrid(np.arange(start = X_set[:, 0].min() - 1, stop = X_set[:, 0].max() + 1, step = 0.01),
np.arange(start = X_set[:, 1].min() - 1, stop = X_set[:, 1].max() + 1, step = 0.01))
plt.contourf(X1, X2, classifier.predict(np.array([X1.ravel(), X2.ravel()]).T).reshape(X1.shape),
alpha = 0.75, cmap = ListedColormap(('red', 'green', 'blue')))
plt.xlim(X1.min(), X1.max())
plt.ylim(X2.min(), X2.max())
for i, j in enumerate(np.unique(y_set)):
plt.scatter(X_set[y_set == j, 0], X_set[y_set == j, 1],
c = ListedColormap(('red', 'green', 'blue'))(i), label = j)
plt.title('Logistic Regression (Training set)')
plt.xlabel('LD1')
plt.ylabel('LD2')
plt.legend()
plt.show()
```
## Visualising the Test set results
```
from matplotlib.colors import ListedColormap
X_set, y_set = X_test, y_test
X1, X2 = np.meshgrid(np.arange(start = X_set[:, 0].min() - 1, stop = X_set[:, 0].max() + 1, step = 0.01),
np.arange(start = X_set[:, 1].min() - 1, stop = X_set[:, 1].max() + 1, step = 0.01))
plt.contourf(X1, X2, classifier.predict(np.array([X1.ravel(), X2.ravel()]).T).reshape(X1.shape),
alpha = 0.75, cmap = ListedColormap(('red', 'green', 'blue')))
plt.xlim(X1.min(), X1.max())
plt.ylim(X2.min(), X2.max())
for i, j in enumerate(np.unique(y_set)):
plt.scatter(X_set[y_set == j, 0], X_set[y_set == j, 1],
c = ListedColormap(('red', 'green', 'blue'))(i), label = j)
plt.title('Logistic Regression (Test set)')
plt.xlabel('LD1')
plt.ylabel('LD2')
plt.legend()
plt.show()
```
---
Copyright (c) Microsoft Corporation. All rights reserved.
Licensed under the MIT License.
# Inference PyTorch Bert Model with ONNX Runtime on CPU
In this tutorial, you'll learn how to load a Bert model from PyTorch, convert it to ONNX, and run high-performance inference on it with ONNX Runtime. In the following sections, we use a Bert model trained on the Stanford Question Answering Dataset (SQuAD) as an example. The Bert SQuAD model is used in question answering scenarios, where the answer to every question is a segment of text, or span, from the corresponding reading passage, or the question might be unanswerable.
This notebook is for CPU inference. For GPU inference, please look at another notebook [Inference PyTorch Bert Model with ONNX Runtime on GPU](PyTorch_Bert-Squad_OnnxRuntime_GPU.ipynb).
## 0. Prerequisites ##
If you have Jupyter Notebook, you may run this notebook directly. We will use pip to install or upgrade [PyTorch](https://pytorch.org/), [OnnxRuntime](https://microsoft.github.io/onnxruntime/) and other required packages.
Otherwise, you can set up a new environment. First, install [Anaconda](https://www.anaconda.com/distribution/). Then open an Anaconda prompt window and run the following commands:
```console
conda create -n cpu_env python=3.6
conda activate cpu_env
conda install jupyter
jupyter notebook
```
The last command will launch Jupyter Notebook and we can open this notebook in browser to continue.
```
# Install or upgrade PyTorch 1.5.0 and OnnxRuntime 1.3.0 for CPU-only.
import sys
!{sys.executable} -m pip install --upgrade torch==1.5.0+cpu torchvision==0.6.0+cpu -f https://download.pytorch.org/whl/torch_stable.html
!{sys.executable} -m pip install --upgrade onnxruntime==1.3.0
!{sys.executable} -m pip install --upgrade onnxruntime-tools
# Install other packages used in this notebook.
!{sys.executable} -m pip install transformers==2.11.0
!{sys.executable} -m pip install wget netron
```
## 1. Load Pretrained Bert model ##
We begin by downloading the SQuAD data file and storing it in the specified location.
```
import os
cache_dir = "./squad"
if not os.path.exists(cache_dir):
os.makedirs(cache_dir)
predict_file_url = "https://rajpurkar.github.io/SQuAD-explorer/dataset/dev-v1.1.json"
predict_file = os.path.join(cache_dir, "dev-v1.1.json")
if not os.path.exists(predict_file):
import wget
print("Start downloading predict file.")
wget.download(predict_file_url, predict_file)
print("Predict file downloaded.")
```
Specify some model configuration variables and constants.
```
# For fine tuned large model, the model name is "bert-large-uncased-whole-word-masking-finetuned-squad". Here we use bert-base for demo.
model_name_or_path = "bert-base-cased"
max_seq_length = 128
doc_stride = 128
max_query_length = 64
# Enable overwrite to export onnx model and download latest script each time when running this notebook.
enable_overwrite = True
# Total samples to inference. It shall be large enough to get stable latency measurement.
total_samples = 100
```
Load the model from the pretrained checkpoint. This step could take a few minutes.
```
# The following code is adapted from HuggingFace transformers
# https://github.com/huggingface/transformers/blob/master/examples/run_squad.py
from transformers import (BertConfig, BertForQuestionAnswering, BertTokenizer)
# Load pretrained model and tokenizer
config_class, model_class, tokenizer_class = (BertConfig, BertForQuestionAnswering, BertTokenizer)
config = config_class.from_pretrained(model_name_or_path, cache_dir=cache_dir)
tokenizer = tokenizer_class.from_pretrained(model_name_or_path, do_lower_case=True, cache_dir=cache_dir)
model = model_class.from_pretrained(model_name_or_path,
from_tf=False,
config=config,
cache_dir=cache_dir)
# load some examples
from transformers.data.processors.squad import SquadV1Processor
processor = SquadV1Processor()
examples = processor.get_dev_examples(None, filename=predict_file)
from transformers import squad_convert_examples_to_features
features, dataset = squad_convert_examples_to_features(
examples=examples[:total_samples], # convert just enough examples for this notebook
tokenizer=tokenizer,
max_seq_length=max_seq_length,
doc_stride=doc_stride,
max_query_length=max_query_length,
is_training=False,
return_dataset='pt'
)
```
## 2. Export the loaded model ##
Once the model is loaded, we can export the loaded PyTorch model to ONNX.
```
output_dir = "./onnx"
if not os.path.exists(output_dir):
os.makedirs(output_dir)
export_model_path = os.path.join(output_dir, 'bert-base-cased-squad.onnx')
import torch
device = torch.device("cpu")
# Get the first example data to run the model and export it to ONNX
data = dataset[0]
inputs = {
'input_ids': data[0].to(device).reshape(1, max_seq_length),
'attention_mask': data[1].to(device).reshape(1, max_seq_length),
'token_type_ids': data[2].to(device).reshape(1, max_seq_length)
}
# Set model to inference mode, which is required before exporting the model because some operators behave differently in
# inference and training mode.
model.eval()
model.to(device)
if enable_overwrite or not os.path.exists(export_model_path):
with torch.no_grad():
symbolic_names = {0: 'batch_size', 1: 'max_seq_len'}
torch.onnx.export(model, # model being run
args=tuple(inputs.values()), # model input (or a tuple for multiple inputs)
f=export_model_path, # where to save the model (can be a file or file-like object)
opset_version=11, # the ONNX version to export the model to
do_constant_folding=True, # whether to execute constant folding for optimization
input_names=['input_ids', # the model's input names
'input_mask',
'segment_ids'],
output_names=['start', 'end'], # the model's output names
dynamic_axes={'input_ids': symbolic_names, # variable length axes
'input_mask' : symbolic_names,
'segment_ids' : symbolic_names,
'start' : symbolic_names,
'end' : symbolic_names})
print("Model exported at ", export_model_path)
```
## 3. PyTorch Inference ##
Use PyTorch to evaluate an example input for comparison purposes.
```
import time
# Measure the latency. Measuring inside Jupyter Notebook is not accurate; it is recommended to use a standalone python script.
latency = []
with torch.no_grad():
for i in range(total_samples):
data = dataset[i]
inputs = {
'input_ids': data[0].to(device).reshape(1, max_seq_length),
'attention_mask': data[1].to(device).reshape(1, max_seq_length),
'token_type_ids': data[2].to(device).reshape(1, max_seq_length)
}
start = time.time()
outputs = model(**inputs)
latency.append(time.time() - start)
print("PyTorch {} Inference time = {} ms".format(device.type, format(sum(latency) * 1000 / len(latency), '.2f')))
```
## 4. Inference ONNX Model with ONNX Runtime ##
### OpenMP Environment Variable
OpenMP environment variables are very important for CPU inference of the Bert model. They have a large performance impact, so you might need to set them carefully according to the [Performance Test Tool](#Performance-Test-Tool) results later in this notebook.
Setting environment variables must be done before importing onnxruntime; otherwise, they might not take effect.
```
import psutil
# You may change the settings in this cell according to Performance Test Tool result.
os.environ["OMP_NUM_THREADS"] = str(psutil.cpu_count(logical=True))
os.environ["OMP_WAIT_POLICY"] = 'ACTIVE'
```
Now we are ready to run inference on the model with ONNX Runtime. Here we can see that OnnxRuntime has better performance than PyTorch.
It is better to use a standalone python script like the [Performance Test Tool](#Performance-Test-Tool) to get accurate performance results.
```
import onnxruntime
import numpy
# Print warning if user uses onnxruntime-gpu instead of onnxruntime package.
if 'CUDAExecutionProvider' in onnxruntime.get_available_providers():
print("warning: onnxruntime-gpu is not built with OpenMP. You might try onnxruntime package to test CPU inference.")
sess_options = onnxruntime.SessionOptions()
# Optional: store the optimized graph and view it using Netron to verify that model is fully optimized.
# Note that this will increase session creation time, so it is for debugging only.
sess_options.optimized_model_filepath = os.path.join(output_dir, "optimized_model_cpu.onnx")
# intra_op_num_threads is needed for OnnxRuntime 1.2.0.
# For OnnxRuntime 1.3.0 or later, this does not have effect unless you are using onnxruntime-gpu package.
sess_options.intra_op_num_threads=1
# Specify providers when you use onnxruntime-gpu for CPU inference.
session = onnxruntime.InferenceSession(export_model_path, sess_options, providers=['CPUExecutionProvider'])
latency = []
for i in range(total_samples):
data = dataset[i]
# TODO: use IO Binding (see https://github.com/microsoft/onnxruntime/pull/4206) to improve performance.
ort_inputs = {
'input_ids': data[0].cpu().reshape(1, max_seq_length).numpy(),
'input_mask': data[1].cpu().reshape(1, max_seq_length).numpy(),
'segment_ids': data[2].cpu().reshape(1, max_seq_length).numpy()
}
start = time.time()
ort_outputs = session.run(None, ort_inputs)
latency.append(time.time() - start)
print("OnnxRuntime cpu Inference time = {} ms".format(format(sum(latency) * 1000 / len(latency), '.2f')))
print("***** Verifying correctness *****")
for i in range(2):
print('PyTorch and ONNX Runtime output {} are close:'.format(i), numpy.allclose(ort_outputs[i], outputs[i].cpu(), rtol=1e-05, atol=1e-04))
```
## 5. Offline Optimization Script and Test Tools
It is recommended to try [OnnxRuntime Transformer Model Optimization Tool](https://github.com/microsoft/onnxruntime/tree/master/onnxruntime/python/tools/transformers) on the exported ONNX models. It could help verify whether the model can be fully optimized, and get performance test results.
#### Transformer Optimizer
Although OnnxRuntime can optimize a Bert model exported by PyTorch, sometimes the model cannot be fully optimized, for several reasons:
* A new subgraph pattern is generated by new version of export tool, and the pattern is not covered by older version of OnnxRuntime.
* The exported model uses dynamic axis and this makes it harder for shape inference of the graph. That blocks some optimization to be applied.
* Some optimizations are better done offline, like changing the input tensor type from int64 to int32 to avoid extra Cast nodes, or converting the model to float16 to achieve better performance on V100 or T4 GPUs.
We have python script **optimizer.py**, which is more flexible in graph pattern matching and model conversion (like float32 to float16). You can also use it to verify whether a Bert model is fully optimized.
In this example, we can see that it applies optimizations that OnnxRuntime does not: SkipLayerNormalization and bias fusion, which OnnxRuntime does not fuse because of the shape-inference limitation mentioned above.
It will also tell you whether the model is fully optimized or not. If not, you might need to change the script to fuse some new subgraph pattern.
Example Usage:
```
from onnxruntime_tools import optimizer
optimized_model = optimizer.optimize_model(export_model_path, model_type='bert', num_heads=12, hidden_size=768)
optimized_model.save_model_to_file(optimized_model_path)
```
You can also use optimizer_cli like the following:
```
optimized_model_path = './onnx/bert-base-cased-squad_opt_cpu.onnx'
!{sys.executable} -m onnxruntime_tools.optimizer_cli --input $export_model_path --output $optimized_model_path --model_type bert --num_heads 12 --hidden_size 768
```
#### Optimized Graph
When you open the optimized model in Netron to visualize it, the graph looks like the following:
<img src='images/optimized_bert_gpu.png'>
For CPU, the optimized graph is slightly different: FastGelu is replaced by BiasGelu.
```
import netron
# Change it to False to skip viewing the optimized model in browser.
enable_netron = True
if enable_netron:
# If you encounter error "access a socket in a way forbidden by its access permissions", install Netron as standalone application instead.
netron.start(optimized_model_path)
```
#### Model Results Comparison Tool
If your BERT model has three inputs, a script compare_bert_results.py can be used to do a quick verification. The tool will generate some fake input data, and compare results from both the original and optimized models. If outputs are all close, it is safe to use the optimized model.
Example of verifying models:
```
!{sys.executable} -m onnxruntime_tools.transformers.compare_bert_results --baseline_model $export_model_path --optimized_model $optimized_model_path --batch_size 1 --sequence_length 128 --samples 100
```
#### Performance Test Tool
This tool measures performance of BERT model inference using OnnxRuntime Python API.
The following command creates 100 samples of batch size 1 and sequence length 128 to run inference, then calculates performance numbers like average latency and throughput. You can increase the number of samples (1000 is recommended) to get a more stable result.
```
!{sys.executable} -m onnxruntime_tools.transformers.bert_perf_test --model $optimized_model_path --batch_size 1 --sequence_length 128 --samples 100 --test_times 1 --intra_op_num_threads 1 --inclusive --all
```
Let's load the summary file and take a look.
```
import os
import glob
import pandas
latest_result_file = max(glob.glob("./onnx/perf_results_*.txt"), key=os.path.getmtime)
result_data = pandas.read_table(latest_result_file, converters={'OMP_NUM_THREADS': str, 'OMP_WAIT_POLICY':str})
print(latest_result_file)
# Remove some columns that have same values for all rows.
columns_to_remove = ['model', 'graph_optimization_level', 'batch_size', 'sequence_length', 'test_cases', 'test_times', 'use_gpu', 'warmup']
# Hide some latency percentile columns to fit screen width.
columns_to_remove.extend(['Latency_P50', 'Latency_P95'])
result_data.drop(columns_to_remove, axis=1, inplace=True)
result_data
```
## 6. Additional Info
Note that running in Jupyter Notebook has a slight impact on performance results, since Jupyter Notebook itself uses system resources like CPU and memory. It is recommended to close Jupyter Notebook and other applications, then run the performance test tool in a console to get more accurate performance numbers.
We have a [benchmark script](https://github.com/microsoft/onnxruntime/blob/master/onnxruntime/python/tools/transformers/run_benchmark.sh). It is recommended to use it to compare the inference speed of OnnxRuntime with PyTorch.
The [OnnxRuntime C API](https://github.com/microsoft/onnxruntime/blob/master/docs/C_API.md) can get slightly better performance than the Python API. If you use the C API for inference, you can use OnnxRuntime_Perf_Test.exe built from source to measure performance instead.
Here is the machine configuration that generated the above results. The machine has GPU but not used in CPU inference.
You might get slower or faster result based on your hardware.
```
!{sys.executable} -m onnxruntime_tools.transformers.machine_info --silent
```
---
```
args = {
'model_name':'2019_04_11_DRIVE',
'FP':'float16',
'optimizer': 'Adam', #SGD, RMSprop, Adadelta, Adagrad, Adam, Adamax, Nadam
'dataset':['101/training_data.csv', '102/training_data.csv', '103/training_data.csv', '104/training_data.csv', '105/training_data.csv', '106/training_data.csv', '107/training_data.csv'],
'batch_size':1024, #512 with float32, 1024 with float16
'split_point': 0.8, #80% Training , 20% Validation
'rnd_seed': 1234,
'epochs_number':250,
'image_width': 320,
'image_height': 90,
'resume': False
}
# to read files with the csv extension
import pandas as pd
# for matrix operations
import numpy as np
# opencv
import cv2
# plotting library
import matplotlib.pylab as plt
# histogram colors
from matplotlib import colors
# to display matplotlib plots inside the Jupyter notebook
%matplotlib inline
# for random number generation
import random
# Read the training_data.csv log files we recorded for training
for i, ds in enumerate(args['dataset']):
try:
tmp = pd.read_csv(ds)
except:
pass
else:
tmp['FileName'] = tmp['FileName'].apply(lambda x: ds.split('/')[0] + '/' + x)
if i == 0:
df = tmp
else:
        df = pd.concat([df, tmp], ignore_index=True)  # DataFrame.append was removed in pandas 2.0
# Column headers in the log file
df.columns
# Summary statistics (min, max, mean, etc.) for the Angle column
df.Angle.describe()
# Total number of records
len(df)
# The first 10 records
df[0:10]
df[-10:]
# Look at one of the recorded images
image = cv2.imread(df.FileName[30])
plt.figure(figsize=(15,5))
plt.imshow(image)
# We don't need to use the whole image;
# after some trial and error we inspect the cropped region
# and will feed the version we find suitable to the network.
# Here we keep only rows 160 to 340 of the image
tmp = image[160:340,:,:]
plt.figure(figsize=(15,5))
plt.imshow(tmp)
print(type(image[0,0,0]))
# To train the network, the inputs are images and the outputs are angles;
# we copy them into lists.
# The camera cannot record properly right at startup, so we drop the first and last records.
images = list(df.FileName[1:-1])
labels = list(df.Angle[1:-1])
len(labels)
# Plot a histogram to check the distribution of angles in the data.
# An uneven distribution leads to uneven training as well
#plt.hist(labels, bins=14)
#plt.show()
N, bins, patches = plt.hist(labels,14, range=[-0.35,0.35], facecolor='blue', align='mid')
fracs = N / N.max()
norm = colors.Normalize(fracs.min(), fracs.max())
# Now, we'll loop through our objects and set the color of each accordingly
for thisfrac, thispatch in zip(fracs, patches):
color = plt.cm.RdBu(norm(thisfrac))
thispatch.set_facecolor(color)
#plt.axis([-0.4, 0.4, 0, 750])
plt.show()
tmp = []
start_value = -0.35
bin_width = 0.05  # width of each angle bin
for i in range(14):
subset = df.loc[(df['Angle'] > start_value) & (df['Angle'] <= start_value + bin_width)]
tmp.append(len(subset))
start_value = start_value + bin_width
print(tmp[0:7]) # Negative angle values (right turns)
print(tmp[7:14]) # Positive angle values (left turns)
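# The manual binning above can be cross-checked with np.histogram, which
# returns the same 14 per-bin counts in one call. A small self-contained
# illustration on synthetic angles (not wired into the pipeline):
import numpy as np  # already imported above; repeated so this snippet stands alone
sample_angles = np.array([-0.30, -0.12, 0.02, 0.02, 0.12, 0.33])
sample_counts, sample_edges = np.histogram(sample_angles, bins=14, range=(-0.35, 0.35))
# sample_counts.sum() equals the number of in-range samples (6 here)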
# To partially even out the angle distribution in the dataset,
# records from under-represented bins are appended to the lists again,
# pushing the distribution closer to uniform.
def augment():
nitem = len(images)
for i in range(nitem):
if labels[i] >= 0.05 and labels[i] <= 0.1:
addValue = int(tmp[7]/tmp[8])  # int() needed: range() rejects floats
for j in range(addValue-5):
images.append(images[i])
labels.append(labels[i])
if labels[i] > 0.1 and labels[i] <= 0.15:
addValue = int(tmp[7]/tmp[9])
for j in range(addValue-3):
images.append(images[i])
labels.append(labels[i])
if labels[i] > 0.15 and labels[i] <= 0.2:
addValue = int(tmp[7]/tmp[10])
for j in range(addValue-5):
images.append(images[i])
labels.append(labels[i])
if labels[i] > 0.2 and labels[i] <= 0.25:
addValue = int(tmp[7]/tmp[11])
for j in range(addValue-10):
images.append(images[i])
labels.append(labels[i])
if labels[i] > 0.25 and labels[i] <= 0.3:
addValue = int(tmp[7]/tmp[12])
for j in range(addValue-20):
images.append(images[i])
labels.append(labels[i])
if labels[i] > 0.3 and labels[i] <= 0.35:
addValue = int(tmp[7]/tmp[13])
for j in range(addValue-5):
images.append(images[i])
labels.append(labels[i])
# Negative values
if labels[i] < 0.0 and labels[i] > -0.05:
addValue = int(tmp[7]/tmp[6])
for j in range(addValue-5):
images.append(images[i])
labels.append(labels[i])
if labels[i] <= -0.05 and labels[i] > -0.1:
addValue = int(tmp[7]/tmp[5])
for j in range(addValue-3):
images.append(images[i])
labels.append(labels[i])
if labels[i] <= -0.1 and labels[i] > -0.15:
addValue = int(tmp[7]/tmp[4])
for j in range(addValue-3):
images.append(images[i])
labels.append(labels[i])
if labels[i] <= -0.15 and labels[i] > -0.2:
addValue = int(tmp[7]/tmp[3])
for j in range(addValue-3):
images.append(images[i])
labels.append(labels[i])
if labels[i] <= -0.2 and labels[i] > -0.25:
addValue = int(tmp[7]/tmp[2])
for j in range(addValue-4):
images.append(images[i])
labels.append(labels[i])
if labels[i] <= -0.25 and labels[i] > -0.3:
addValue = int(tmp[7]/tmp[1])
for j in range(addValue-10):
images.append(images[i])
labels.append(labels[i])
if labels[i] <= -0.3 and labels[i] > -0.35:
addValue = int(tmp[7]/tmp[0])
for j in range(addValue-3):
images.append(images[i])
labels.append(labels[i])
augment()
# Compared with the first histogram, the distribution is now reasonably balanced.
# Not the most rigorous solution, but a practical, workable one.
N, bins, patches = plt.hist(labels,14, range=[-0.35,0.35], facecolor='blue', align='mid')
fracs = N / N.max()
norm = colors.Normalize(fracs.min(), fracs.max())
# Now, we'll loop through our objects and set the color of each accordingly
for thisfrac, thispatch in zip(fracs, patches):
color = plt.cm.RdBu(norm(thisfrac))
thispatch.set_facecolor(color)
#plt.axis([-0.4, 0.4, 0, 750])
plt.show()
len(labels)
# Dataset settings:
# batch size,
# how much of the data is used for training vs. validation
# (80% training, 20% validation)
bsize = args['batch_size']
dlen = len(labels)
splitpoint = int(args['split_point']*dlen)
reindex = list(range(len(labels)))
# Shuffle the training data
random.seed(args['rnd_seed'])
random.shuffle(reindex)
# Function that applies a random brightness change to an image
# Augmentation function (taken from github)
def augment_brightness(image):
image1 = cv2.cvtColor(image,cv2.COLOR_BGR2HSV)
image1 = np.array(image1, dtype = np.float64)
random_bright = .5+np.random.uniform()
image1[:,:,2] = image1[:,:,2]*random_bright
image1[:,:,2][image1[:,:,2]>255] = 255
image1 = np.array(image1, dtype = np.uint8)
image1 = cv2.cvtColor(image1,cv2.COLOR_HSV2RGB)
return image1
# Image translation (added later)
def random_translate(image,range_x, range_y):
"""
Randomly shift the image vertically and horizontally (translation).
"""
trans_x = range_x * (np.random.rand() - 0.5)
trans_y = range_y * (np.random.rand() - 0.5)
trans_m = np.float32([[1, 0, trans_x], [0, 1, trans_y]])
height, width = image.shape[:2]
image = cv2.warpAffine(image, trans_m, (width, height))
return image
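# Sketch of what the 2x3 affine matrix above encodes: it maps a
# homogeneous point [x, y, 1] to [x + trans_x, y + trans_y].
# (Illustrative values only, not used by the pipeline.)
import numpy as np  # already imported above; repeated so this snippet stands alone
trans_m_example = np.float32([[1, 0, 5], [0, 1, -3]])
corner = np.array([10.0, 20.0, 1.0])
shifted = trans_m_example @ corner  # [15., 17.]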
def random_shadow(image, width, height):
"""
Generates and adds a random shadow
"""
IMAGE_WIDTH = width
IMAGE_HEIGHT = height
# (x1, y1) and (x2, y2) forms a line
# xm, ym gives all the locations of the image
x1, y1 = IMAGE_WIDTH * np.random.rand(), 0
x2, y2 = IMAGE_WIDTH * np.random.rand(), IMAGE_HEIGHT
xm, ym = np.mgrid[0:IMAGE_HEIGHT, 0:IMAGE_WIDTH]
# mathematically speaking, we want to set 1 below the line and zero otherwise
# Our coordinate is up side down. So, the above the line:
# (ym-y1)/(xm-x1) > (y2-y1)/(x2-x1)
# as x2 == x1 causes zero-division problem, we'll write it in the below form:
# (ym-y1)*(x2-x1) - (y2-y1)*(xm-x1) > 0
mask = np.zeros_like(image[:, :, 1])
mask[(ym - y1) * (x2 - x1) - (y2 - y1) * (xm - x1) > 0] = 1
# choose which side should have shadow and adjust saturation
cond = mask == np.random.randint(2)
s_ratio = np.random.uniform(low=0.2, high=0.5)  # low == high == 0.5 made this constant; widened so shadow strength varies
# adjust Saturation in HLS(Hue, Light, Saturation)
hls = cv2.cvtColor(image, cv2.COLOR_RGB2HLS)
hls[:, :, 1][cond] = hls[:, :, 1][cond] * s_ratio
#plt.imshow(random_shadow(image))
return cv2.cvtColor(hls, cv2.COLOR_HLS2RGB)
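# Quick numeric check of the line-side inequality used in random_shadow:
# for the vertical line x1 = x2 = 2 with y1 = 0, y2 = 1, the test
# (ym - y1)*(x2 - x1) - (y2 - y1)*(xm - x1) > 0 reduces to xm < 2.
for xm_ex in (1.0, 3.0):
    lhs = (5.0 - 0) * (2 - 2) - (1 - 0) * (xm_ex - 2)
    assert (lhs > 0) == (xm_ex < 2)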
# Reads the image with the given file name,
# randomly applies the brightness-change and translation functions,
# and returns the image matrix.
def get_matrix(fname):
img = cv2.imread(fname)
h, w = img.shape[:2]
#img = img[160:340,:,:] # crop only
if w != 640 or h != 360:
img = cv2.resize(img, (640,360))
img = cv2.resize(img[160:340,:,:], (args['image_width'],args['image_height'])) #crop then resize
if random.randint(0,2) == 1:
img = augment_brightness(img)
if random.randint(0,2) == 1:
img = random_translate(img,100,0)
#if random.randint(0,2) == 1:
# img = random_shadow(img,320,90)
return img
# The whole dataset cannot be held in memory at once.
# We may also want to apply transformations (augmentation) on the fly.
# With a Python generator the data is read and prepared on demand
# and fed to the system for training or validation.
# The functions below do exactly that.
# Generate data for training
def generate_data():
i = 0
while True:
x = []
y = []
for j in range(i,i+bsize):
ix = reindex[j]
img = get_matrix(images[ix])
lbl = np.array([labels[ix]])
flip = random.randint(0,1)
if flip == 1:
img = cv2.flip(img,1)
lbl = lbl*-1.0
x.append(img)
y.append(lbl)
x = np.array(x)
y = np.array(y)
#print("#------ Sending TRAINING batch ------#")
yield (x,y)
i +=bsize
if i+bsize > splitpoint:
i = 0
# Generate data for validation
def generate_data_val():
i = splitpoint
while True:
x = []
y = []
for j in range(i,i+bsize):
ix = reindex[j]
x.append(get_matrix(images[ix]))
y.append(np.array([labels[ix]]))
x = np.array(x)
y = np.array(y)
#print("#------ Sending VALIDATION batch ------#")
yield (x,y)
i +=bsize
if i+bsize > dlen:
i = splitpoint
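# Toy illustration of the batch-cycling pattern used by both generators above:
# yield consecutive slices forever, wrapping when a full batch no longer fits.
# (Stand-alone sketch; names are illustrative, not part of the training code.)
def toy_gen(data, b):
    i = 0
    while True:
        yield data[i:i + b]
        i += b
        if i + b > len(data):
            i = 0
g_demo = toy_gen(list(range(10)), 4)
# next(g_demo) -> [0, 1, 2, 3], then [4, 5, 6, 7], then wraps to [0, 1, 2, 3]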
# Keras imports
from keras.models import Sequential, load_model
from keras.layers import Dense, Dropout, Flatten, Lambda
from keras.layers import Conv2D, MaxPooling2D, Cropping2D, Reshape
from keras.callbacks import ModelCheckpoint
from keras.optimizers import SGD, RMSprop, Adadelta, Adagrad, Adam, Adamax, Nadam
from keras.regularizers import l2
from keras import backend as K
import tensorflow as tf
#Destroy the current TF graph and create a new one
K.clear_session()
#import keras
#print(keras.__version__)
# make sure soft-placement is off
# allow_soft_placement: an op will be placed on CPU if not possible on GPU
# allow_growth: attempts to allocate only as much GPU memory based on runtime allocations
# per_process_gpu_memory_fraction: set the fraction of the overall memory that each GPU should be allocated
tf_config = tf.ConfigProto(allow_soft_placement=False)
tf_config.gpu_options.allow_growth = True
#tf_config.gpu_options.per_process_gpu_memory_fraction = 0.5
s = tf.Session(config=tf_config)
K.set_session(s)
# enable 16-bit training
K.set_floatx(args['FP'])
if args['FP'] == "float16":
K.set_epsilon(1e-4)
K.floatx()
# Model based on NVIDIA's End to End Learning for Self-Driving Cars model
#input shape, original paper uses 60*200, openzeka uses 80*320
shape=(args['image_height'],args['image_width'],3)
# Define a sequential Keras model
model = Sequential()
# Cropping
# This layer would crop the incoming image to the region we want:
# Cropping2D(cropping=((top_crop, bottom_crop), (left_crop, right_crop)))
# The commented line below would crop
# 150 pixels from the top, 20 from the bottom,
# 0 from the left and 640 from the right.
#model.add(Cropping2D(cropping=((150,20),(0,640)), input_shape=shape))
# Normalization
# values in 0-255 are rescaled to the range -1 to 1
model.add(Lambda(lambda x: (2*x / 255.0) - 1.0, input_shape=shape))
# lambda + cast
#model.add(Lambda(lambda x: tf.cast((2*x / 255.0) - 1.0 ,dtype=tf.float16)))
# cast to float16
#model.add(Lambda(lambda x: tf.cast(x,dtype=tf.float16)))
# Doesn't work, requires a numpy array as input
#model.add(Lambda(lambda x: K.cast_to_floatx(x)))
# Convolution layer: 24 filters of size (5, 5), sliding 2 pixels at a time
model.add(Conv2D(24, (5, 5), activation="relu", strides=(2, 2), kernel_regularizer=l2(0.001)))
model.add(Conv2D(36, (5, 5), activation="relu", strides=(2, 2), kernel_regularizer=l2(0.001)))
model.add(Conv2D(48, (5, 5), activation="relu", strides=(2, 2), kernel_regularizer=l2(0.001)))
model.add(Conv2D(64, (3, 3), activation="relu", kernel_regularizer=l2(0.001)))
model.add(Conv2D(64, (3, 3), activation="relu", kernel_regularizer=l2(0.001)))
# Dropout before Flatten to randomly drop connections (added later)
model.add(Dropout(0.5))
# Flatten the feature maps into a vector here
model.add(Flatten())
# Fully connected (neural network) part
model.add(Dense(100))
model.add(Dense(50))
model.add(Dense(10))
# Network output: the steering angle
model.add(Dense(1))
model.compile(loss='mse', optimizer=args['optimizer'], metrics=['mse', 'msle', 'mae', 'mape', 'cosine'])
# Structure of the network we defined
model.summary()
for layer in model.layers:
print(layer.output)
# Multiply the angles by 3 to stretch the roughly -0.3..0.3 range toward -1..1
print(type(labels[1]))
labels = 3*np.array(labels).astype(args['FP'])
print(type(labels[1]))
model_name_full = args['model_name'] + '_' + args['FP'] + '_' + args['optimizer']
# Callback that saves the model whenever validation loss reaches a new minimum during training
model_checkpoint = ModelCheckpoint('models/' + model_name_full + '_weights_{epoch:03d}_{val_loss:.2f}.h5', monitor='val_loss', save_best_only=True, period=10)
if args['resume']:
model = load_model("")
# Training
hs = model.fit_generator(generate_data(),steps_per_epoch=int(splitpoint/ bsize),
validation_data=generate_data_val(),
validation_steps=int((dlen-splitpoint)/bsize), epochs=args['epochs_number'], callbacks=[model_checkpoint])
# Plot how the training progressed
# Train and validation loss charts
print(hs.history.keys())
#for val in df.columns[1:]:
# plt.title(val)
# plt.plot(df.epoch,df[val])
# plt.show()
for val in hs.history.keys():
if "val_" not in val:
plt.title(val)
plt.plot(hs.history[val])
plt.plot(hs.history['val_' + val])
plt.xlabel('epoch')
plt.legend(['training set', 'validation set'], loc='upper right')
plt.savefig('models/' + model_name_full + '_' + val + '.png')
plt.show()
try:
plt.plot(hs.history['lr'])
plt.title('Learning rate')
plt.xlabel('epoch')
plt.show()
except:
pass
# Save the trained model:
# the architecture as JSON,
# the parameter values in an .h5 file
import json
# Save model weights and json.
model.save_weights('models/' + model_name_full + '_model.h5')
model_json = model.to_json()
with open('models/' + model_name_full + '_model.json', 'w') as outfile:
json.dump(model_json, outfile)
# Pick random images and compare the model's prediction with the recorded angle.
# If the results look good, the model is usable;
# if they look bad, go back to the training stage.
# Compare actual and predicted steering
for i in range(100):
ix = random.randint(0,len(df)-1)
out = model.predict(get_matrix(df.FileName[ix]).reshape(1,args['image_height'],args['image_width'],3))
print(df.Angle[ix], ' - > ', out[0][0]/3)
```
## Import libraries
```
from google.colab import drive
from pathlib import Path
from matplotlib import pyplot as plt
import pandas as pd
import numpy as np
import time
import os
import csv
import concurrent.futures
```
## Utility functions
### Create annot and load descriptors
```
def create_annot(path):
image_list = list(Path(path).glob('*/*.jpg'))
# the identity name is in the path (the name of the parent directory)
names_list = [i.parent.name for i in image_list] # get the identity of each image
# keep info in a pandas DataFrame
annot = pd.DataFrame({'identity': names_list, 'image_path': image_list})
return annot
def concatenate_annots(list_of_paths):
concat_annot = pd.DataFrame()
with concurrent.futures.ThreadPoolExecutor() as executor:
annots = [executor.submit(create_annot, path) for path in list_of_paths]
for annot in annots:
new_annot = annot.result()
concat_annot = concat_annot.append(new_annot, ignore_index = True)
return concat_annot
def load_descriptors(path):
with open(path, 'rb') as file:
return np.load(file)
def concatenate_descriptors(list_of_paths):
concat_descriptors = None
with concurrent.futures.ThreadPoolExecutor() as executor:
descriptors = [executor.submit(load_descriptors, path) for path in list_of_paths]
for descriptor in descriptors:
new_descriptor = descriptor.result()
if concat_descriptors is None:
concat_descriptors = new_descriptor
else:
concat_descriptors = np.concatenate([concat_descriptors, new_descriptor])
return concat_descriptors
```
### Create pivots
```
def generate_pivots(descriptors, n, strategy="rnd"):
if strategy == "kMED":
import sklearn_extra.cluster  # requires the scikit-learn-extra package
kmedoids = sklearn_extra.cluster.KMedoids(n_clusters=n).fit(descriptors)
return kmedoids.cluster_centers_
if strategy != "rnd":
print(strategy, "was not implemented. Random pivots were returned")
pivots_id = np.random.choice(np.arange(len(descriptors)), size=n, replace=False)  # sample without replacement so pivots are distinct
return descriptors[pivots_id]
def generate_list_of_pivots(descriptors, t, n, strategy="rnd"):
list_of_pivots = []
with concurrent.futures.ThreadPoolExecutor() as executor:
pivots = [executor.submit(generate_pivots, descriptors, n, strategy) for i in range(t)]
for pivot in concurrent.futures.as_completed(pivots):
new_pivot = pivot.result()
list_of_pivots.append(new_pivot)
return list_of_pivots
```
### Save test results
```
def save_results(dir, file_name, results):
with open(os.path.join(dir, file_name +".csv"), 'w') as f:
writer = csv.writer(f)
# write the header
writer.writerow(["CLASS", "AP", "QUERY TIME"])
# write the data
for r in results:
writer.writerow(r)
```
## Test Performance
```
drive.mount('/content/drive', force_remount=True)
```
### Create annot and load descriptors for the database
```
db_annot = concatenate_annots(['/content/drive/MyDrive/CV_Birds/train', '/content/drive/MyDrive/CV_Birds/mirflickr25k'])
db_annot
db_descriptors = concatenate_descriptors(['/content/drive/MyDrive/CV_Birds/features/training/ResNet152v2/OneDense512_Dropout_fine_tuning.npy','/content/drive/MyDrive/CV_Birds/features/distractor/ResNet152v2/OneDense512_Dropout_fine_tuning.npy'])
db_descriptors.shape
```
### Create annot and load descriptors for the test set
```
query_annot = create_annot('/content/drive/MyDrive/CV_Birds/test')
query_annot
query_descriptors = load_descriptors('/content/drive/MyDrive/CV_Birds/features/test/ResNet152v2/OneDense512_Dropout_fine_tuning.npy')
query_descriptors.shape
```
To run our tests we select only the first image of each species within the test set. Please note that within the test set we have 5 images per species.
```
queries_indexes = [x for x in range(325*5) if x%5 == 0]
```
### Create PP-Index
```
!rm /content/drive/MyDrive/CV_Birds/performance/fine_tuning/index/FT_pert-forest_512_20rnd_cosine/*
!rm -r /content/drive/MyDrive/CV_Birds/performance/fine_tuning/index/FT_pert-forest_512_20rnd_cosine/pert_forest_structure/*
def get_descriptor_from_id(id_object):
return db_descriptors[id_object]
%cd "/content/drive/MyDrive/CV_Birds/Notebooks/PP-Index"
%run PPIndex.ipynb
pivots = generate_list_of_pivots(db_descriptors, t=3, n=20, strategy="rnd")
rnd_pp_forest = PrefixForest(pivots, length=3, distance_metric='cosine', base_directory="/content", forest_file='pert_forest_structure')
rnd_pp_forest.insert_objects_into_forest(range(len(db_descriptors)))
rnd_pp_forest.save()
```
### Compute mAP
```
birds_db = db_annot.loc[db_annot['identity'] != 'mirflickr']
counts = birds_db.groupby('identity').count()
print("Minimum number of images per species:", int(counts.min()))
print("Maximum number of images per species:", int(counts.max()))
print("Average number of images:", float(counts.sum()/325))
```
Since we have at most 249 images per species, we use $n=250$.
```
n = 250
```
The formula for Average Precision is the following:
> $AP@n=\frac{1}{GTP}\sum_{k=1}^{n}P@k×rel@k$
where $GTP$ is the total number of ground-truth positives, $n$ is the number of retrieved images we consider, $P@k$ is the precision at rank $k$, and $rel@k$ is a relevance function.
The relevance function is an indicator that equals 1 if the document at rank $k$ is relevant and 0 otherwise.
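As a small worked example (toy numbers, not our dataset): for a ranking with relevance $[1, 0, 1]$ and $GTP=2$, we get $P@1=1$ and $P@3=2/3$, so $AP = (1 + 2/3)/2 = 5/6 \approx 0.833$. A minimal sketch (the function name is illustrative, not part of the notebook's code):

```python
# AP@n for a toy ranking: relevance flags in rank order, GTP ground-truth positives.
def average_precision(rels, gtp):
    hits, total = 0, 0.0
    for k, rel in enumerate(rels, start=1):
        if rel:                 # only relevant ranks contribute P@k
            hits += 1
            total += hits / k
    return total / gtp

print(average_precision([1, 0, 1], gtp=2))  # 0.8333...
```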
```
def compute_ap(query_index, retrieved_ids):
query_identity = query_annot['identity'][query_index]
print(query_index//5, query_identity)
GTP = len(db_annot.loc[db_annot['identity'] == query_identity])
relevant = 0
precision_summation = 0
for k, id in enumerate(retrieved_ids):
if db_annot['identity'][id] == query_identity: # relevant result
relevant = relevant + 1
precision_at_k = relevant/(k+1)
precision_summation = precision_summation + precision_at_k
return (query_identity, precision_summation/GTP)
```
For each query, $Q$, we can calculate a corresponding $AP$. Then, the $mAP$ is simply the mean of all the queries that were made.
> $mAP = \frac{1}{N}\sum_{i=1}^{N}AP_i$
In our case, $N=325$ (one query per species)
```
def rnd_pivots_queries(query_index, n):
start_time = time.time()
ids, distances = rnd_pp_forest.find_nearest_neighbors(query_descriptors[query_index], n, perturbations=3)
end_time = time.time()
ids = ids.tolist()
return compute_ap(query_index, ids) + (end_time - start_time,)
aps = []
for query_index in queries_indexes:
aps.append(rnd_pivots_queries(query_index, n))
aps
ap_at_n = np.array([ap[1] for ap in aps])
query_time = np.array([ap[2] for ap in aps])
mAP_at_n = np.mean(ap_at_n, axis=0)
avg_query_time = np.mean(query_time, axis=0)
print("mAP:", mAP_at_n)
print("avg. query time: ", avg_query_time)
save_results('/content/drive/MyDrive/CV_Birds/performance/fine_tuning/index/FT_pert-forest_512_20rnd_cosine', 'FT_pert-forest_512_20rnd_cosine_results', aps)
!mv /content/tree* /content/drive/MyDrive/CV_Birds/performance/fine_tuning/index/FT_pert-forest_512_20rnd_cosine/pert_forest_structure/
!mv /content/pert* /content/drive/MyDrive/CV_Birds/performance/fine_tuning/index/FT_pert-forest_512_20rnd_cosine/pert_forest_structure/
```

[](https://colab.research.google.com/github/JohnSnowLabs/spark-nlp-workshop/blob/master/tutorials/streamlit_notebooks/healthcare/NER_TUMOR.ipynb)
# **Detect tumor characteristics**
To run this yourself, you will need to upload your license keys to the notebook. Otherwise, you can look at the example outputs at the bottom of the notebook. To upload license keys, open the file explorer on the left side of the screen and upload `workshop_license_keys.json` to the folder that opens.
## 1. Colab Setup
Import license keys
```
import os
import json
with open('/content/workshop_license_keys.json', 'r') as f:
license_keys = json.load(f)
license_keys.keys()
secret = license_keys['secret']
os.environ['SPARK_NLP_LICENSE'] = license_keys['SPARK_NLP_LICENSE']
os.environ['JSL_OCR_LICENSE'] = license_keys['JSL_OCR_LICENSE']
os.environ['AWS_ACCESS_KEY_ID'] = license_keys['AWS_ACCESS_KEY_ID']
os.environ['AWS_SECRET_ACCESS_KEY'] = license_keys['AWS_SECRET_ACCESS_KEY']
```
Install dependencies
```
# Install Java
! apt-get install -y openjdk-8-jdk-headless -qq > /dev/null
! java -version
# Install pyspark and SparkNLP
! pip install --ignore-installed -q pyspark==2.4.4
! python -m pip install --upgrade spark-nlp-jsl==2.5.2 --extra-index-url https://pypi.johnsnowlabs.com/$secret
! pip install --ignore-installed -q spark-nlp==2.5.2
```
Import dependencies into Python and start the Spark session
```
os.environ['JAVA_HOME'] = "/usr/lib/jvm/java-8-openjdk-amd64"
os.environ['PATH'] = os.environ['JAVA_HOME'] + "/bin:" + os.environ['PATH']
import sparknlp
import pandas as pd
from pyspark.ml import Pipeline
from pyspark.sql import SparkSession
from sparknlp.annotator import *
from sparknlp_jsl.annotator import *
from sparknlp.base import *
import sparknlp_jsl
import pyspark.sql.functions as F
builder = SparkSession.builder \
.appName('Spark NLP Licensed') \
.master('local[*]') \
.config('spark.driver.memory', '16G') \
.config('spark.serializer', 'org.apache.spark.serializer.KryoSerializer') \
.config('spark.kryoserializer.buffer.max', '2000M') \
.config('spark.jars.packages', 'com.johnsnowlabs.nlp:spark-nlp_2.11:2.5.2') \
.config('spark.jars', f'https://pypi.johnsnowlabs.com/{secret}/spark-nlp-jsl-2.5.2.jar')
spark = builder.getOrCreate()
```
## 2. Select the NER model and construct the pipeline
Select the NER model - Tumor model: **ner_bionlp**
For more details: https://github.com/JohnSnowLabs/spark-nlp-models#pretrained-models---spark-nlp-for-healthcare
```
# You can change this to the model you want to use and re-run cells below.
# Neoplasm models: ner_bionlp
# All these models use the same clinical embeddings.
MODEL_NAME = "ner_bionlp"
```
Create the pipeline
```
document_assembler = DocumentAssembler() \
.setInputCol('text')\
.setOutputCol('document')
sentence_detector = SentenceDetector() \
.setInputCols(['document'])\
.setOutputCol('sentence')
tokenizer = Tokenizer()\
.setInputCols(['sentence']) \
.setOutputCol('token')
word_embeddings = WordEmbeddingsModel.pretrained('embeddings_clinical', 'en', 'clinical/models') \
.setInputCols(['sentence', 'token']) \
.setOutputCol('embeddings')
clinical_ner = NerDLModel.pretrained(MODEL_NAME, 'en', 'clinical/models') \
.setInputCols(['sentence', 'token', 'embeddings']) \
.setOutputCol('ner')
ner_converter = NerConverter()\
.setInputCols(['sentence', 'token', 'ner']) \
.setOutputCol('ner_chunk')
nlp_pipeline = Pipeline(stages=[
document_assembler,
sentence_detector,
tokenizer,
word_embeddings,
clinical_ner,
ner_converter])
empty_df = spark.createDataFrame([['']]).toDF("text")
pipeline_model = nlp_pipeline.fit(empty_df)
light_pipeline = LightPipeline(pipeline_model)
```
## 3. Create example inputs
```
# Enter examples as strings in this array
input_list = [
"""Under loupe magnification, the lesion was excised with 2 mm margins, oriented with sutures and submitted for frozen section pathology. The report was "basal cell carcinoma with all margins free of tumor." Hemostasis was controlled with the Bovie. Excised lesion diameter was 1.2 cm. The defect was closed by elevating a left laterally based rotation flap utilizing the glabellar skin. The flap was elevated with a scalpel and Bovie, rotated into the defect without tension, ***** to the defect with scissors and inset in layer with interrupted 5-0 Vicryl for the dermis and running 5-0 Prolene for the skin. Donor site was closed in V-Y fashion with similar suture technique."""
]
```
## 4. Use the pipeline to create outputs
```
df = spark.createDataFrame(pd.DataFrame({"text": input_list}))
result = pipeline_model.transform(df)
```
## 5. Visualize results
Visualize outputs as data frame
```
exploded = F.explode(F.arrays_zip('ner_chunk.result', 'ner_chunk.metadata'))
select_expression_0 = F.expr("cols['0']").alias("chunk")
select_expression_1 = F.expr("cols['1']['entity']").alias("ner_label")
result.select(exploded.alias("cols")) \
.select(select_expression_0, select_expression_1).show(truncate=False)
result = result.toPandas()
```
Functions to display outputs as HTML
```
from IPython.display import HTML, display
import random
def get_color():
r = lambda: random.randint(128,255)
return "#%02x%02x%02x" % (r(), r(), r())
def annotation_to_html(full_annotation):
ner_chunks = full_annotation[0]['ner_chunk']
text = full_annotation[0]['document'][0].result
label_color = {}
for chunk in ner_chunks:
label_color[chunk.metadata['entity']] = get_color()
html_output = "<div>"
pos = 0
for n in ner_chunks:
if pos < n.begin and pos < len(text):
html_output += f"<span class=\"others\">{text[pos:n.begin]}</span>"
pos = n.end + 1
html_output += f"<span class=\"entity-wrapper\" style=\"color: black; background-color: {label_color[n.metadata['entity']]}\"> <span class=\"entity-name\">{n.result}</span> <span class=\"entity-type\">[{n.metadata['entity']}]</span></span>"
if pos < len(text):
html_output += f"<span class=\"others\">{text[pos:]}</span>"
html_output += "</div>"
display(HTML(html_output))
```
Display example outputs as HTML
```
for example in input_list:
annotation_to_html(light_pipeline.fullAnnotate(example))
```
```
!pip install chart_studio
import plotly.graph_objects as go
import plotly.offline as offline_py
from wordcloud import WordCloud
import matplotlib.pyplot as plt
import plotly.figure_factory as ff
import numpy as np
%matplotlib inline
import pandas as pd
df = pd.read_csv("https://raw.githubusercontent.com/DSEI21000-S21/project-product-price-prediction/main/data/random_samples/stratified_sampling_data_by_price_whigh_sz50000_1619218354.csv")
# size of dataset
print('The size of the dataset is: {} \n'.format(df.shape))
# different data types in the dataset
print('The types of the dataset: {}'.format(df.dtypes))
df.head()
df.price.describe()
# most popular categories -- Women, electronics and men
x = df['c1'].value_counts().index.values.astype('str')[:15]
y = df['c1'].value_counts().values[:15]
pct = [("%.2f" % (v * 100)) + "%" for v in y / len(df)]
trace1 = go.Bar(x=x, y=y, text=pct)
layout = dict(title= 'Number of Items by Main Category',
yaxis = dict(title='Count'),
xaxis = dict(title='Category'))
fig=dict(data=[trace1], layout=layout)
offline_py.iplot(fig)
x = df['brand_name'].value_counts().index.values.astype('str')[:15]
y = df['brand_name'].value_counts().values[:15]
pct = [("%.2f" % (v * 100)) + "%" for v in y / len(df)]
colorscale = [[0, '#FAEE1C'], [0.33, '#F3558E'], [0.66, '#9C1DE7'], [1, '#581B98']]
# most popular brands -- Nike & PINK
trace1 = go.Bar(x=x, y=y, text=pct, marker=dict(color = y, colorscale=colorscale, showscale=True))
layout = dict(title= 'Number of Items by brand name',
yaxis = dict(title='Count'),
xaxis = dict(title='Brand'))
fig=dict(data=[trace1], layout=layout)
offline_py.iplot(fig)
dataframe = df[df.brand_name == 'Nike'][:100]
datawomen = dataframe.loc[:, ['price', 'shipping']]
datawomen["index"] = np.arange(1,len(datawomen)+1)
fig = ff.create_scatterplotmatrix(datawomen, diag='box', index='index',colormap='Portland',
colormap_type='cat',
height=700, width=700)
offline_py.iplot(fig)
# visualize which words have the highest frequencies within the top category
description = df.item_description[df.c1 == 'women']
plt.subplots(figsize = (8,8))
wordcloud = WordCloud (
background_color = 'white',
width = 512,
height = 384
).generate(' '.join(description))
plt.imshow(wordcloud)  # show the image
plt.axis('off')  # turn off the x and y axes
plt.title('Top Words -- Women')
plt.show()
description = df.item_description[df.c1 == 'electronics']
plt.subplots(figsize = (8,8))
wordcloud = WordCloud (
background_color = 'white',
width = 512,
height = 384
).generate(' '.join(description))
plt.imshow(wordcloud)  # show the image
plt.axis('off')  # turn off the x and y axes
plt.title('Top Words -- Electronics')
plt.show()
description = df.item_description[df.c1 == 'men']
plt.subplots(figsize = (8,8))
wordcloud = WordCloud (
background_color = 'white',
width = 512,
height = 384
).generate(' '.join(description))
plt.imshow(wordcloud)  # show the image
plt.axis('off')  # turn off the x and y axes
plt.title('Top Words -- Men')
plt.show()
```
```
# Load the text-classification dataset (20 Newsgroups)
from sklearn.datasets import fetch_20newsgroups
import random
newsgroups_train = fetch_20newsgroups(subset='train')
newsgroups_test = fetch_20newsgroups(subset='test')
X_train = newsgroups_train.data
X_test = newsgroups_test.data
y_train = newsgroups_train.target
y_test = newsgroups_test.target
print("sample several datas: ")
print("X_train: ", X_train[0: 2])
print("Y_train:", y_train[0: 2])
# Extract TF-IDF text features
from sklearn.feature_extraction.text import TfidfVectorizer
import numpy as np
def TFIDF(X_train, X_test, MAX_NB_WORDS=75000):
vectorizer_x = TfidfVectorizer(max_features=MAX_NB_WORDS)
X_train = vectorizer_x.fit_transform(X_train).toarray()
X_test = vectorizer_x.transform(X_test).toarray()
print("tf-idf with", str(np.array(X_train).shape[1]),"features")
return X_train, X_test
X_train, X_test = TFIDF(X_train, X_test)
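# Toy illustration of the tf-idf idea (sklearn-style smoothed idf),
# independent of the vectorizer above; the arrays here are made up:
import numpy as np  # already imported above; repeated so this snippet stands alone
toy_counts = np.array([[2.0, 0.0], [1.0, 1.0]])       # docs x terms: raw term counts
toy_docfreq = (toy_counts > 0).sum(axis=0)            # in how many docs each term appears
toy_idf = np.log((1 + toy_counts.shape[0]) / (1 + toy_docfreq)) + 1
toy_tfidf = toy_counts * toy_idf                      # a term present in every doc gets idf == 1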
# Reduce the text-feature dimensionality with PCA
from sklearn.decomposition import PCA
pca = PCA(n_components=2000)
X_train_new = pca.fit_transform(X_train)
X_test_new = pca.transform(X_test)
print("train with old features: ", np.array(X_train).shape)
print("train with new features:", np.array(X_train_new).shape)
print("test with old features: ", np.array(X_test).shape)
print("test with new features:", np.array(X_test_new).shape)
# Reduce dimensionality with LDA (Linear Discriminant Analysis)
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
LDA = LinearDiscriminantAnalysis(n_components=15)
LDA.fit(X_train, y_train)  # fit returns the estimator, not transformed data
X_train_new = LDA.transform(X_train)
X_test_new = LDA.transform(X_test)
print("train with old features: ", np.array(X_train).shape)
print("train with new features:", np.array(X_train_new).shape)
print("test with old features: ", np.array(X_test).shape)
print("test with new features:", np.array(X_test_new).shape)
# Reduce dimensionality with NMF (non-negative matrix factorization)
from sklearn.decomposition import NMF
NMF_ = NMF(n_components=2000)
NMF_.fit(X_train)  # fit returns the estimator, not transformed data
X_train_new = NMF_.transform(X_train)
X_test_new = NMF_.transform(X_test)
print("train with old features: ", np.array(X_train).shape)
print("train with new features:", np.array(X_train_new).shape)
print("test with old features: ", np.array(X_test).shape)
print("test with new features:", np.array(X_test_new).shape)
# Reduce dimensionality with random projection
from sklearn import random_projection
RandomProjection = random_projection.GaussianRandomProjection(n_components=2000)
X_train_new = RandomProjection.fit_transform(X_train)
X_test_new = RandomProjection.transform(X_test)
print("train with old features: ", np.array(X_train).shape)
print("train with new features:", np.array(X_train_new).shape)
print("test with old features: ", np.array(X_test).shape)
print("test with new features:", np.array(X_test_new).shape)
# about T-SNE
import numpy as np
from sklearn.manifold import TSNE
X = np.array([[0, 0, 0], [0, 1, 1], [1, 0, 1], [1, 1, 1]])
X_embedded = TSNE(n_components=2).fit_transform(X)
print(X_embedded.shape)
# Rocchio classification
from sklearn.neighbors import NearestCentroid  # the nearest_centroid submodule path is deprecated
from sklearn.pipeline import Pipeline
from sklearn import metrics
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.feature_extraction.text import TfidfTransformer
from sklearn.datasets import fetch_20newsgroups
newsgroups_train = fetch_20newsgroups(subset='train')
newsgroups_test = fetch_20newsgroups(subset='test')
X_train = newsgroups_train.data
X_test = newsgroups_test.data
y_train = newsgroups_train.target
y_test = newsgroups_test.target
text_clf = Pipeline([('vect', CountVectorizer()),
('tfidf', TfidfTransformer()),
('clf', NearestCentroid()),
])
text_clf.fit(X_train, y_train)
predicted = text_clf.predict(X_test)
print(metrics.classification_report(y_test, predicted))
# boosting classification
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.pipeline import Pipeline
from sklearn import metrics
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.feature_extraction.text import TfidfTransformer
from sklearn.datasets import fetch_20newsgroups
newsgroups_train = fetch_20newsgroups(subset='train')
newsgroups_test = fetch_20newsgroups(subset='test')
X_train = newsgroups_train.data
X_test = newsgroups_test.data
y_train = newsgroups_train.target
y_test = newsgroups_test.target
text_clf = Pipeline([('vect', CountVectorizer()),
('tfidf', TfidfTransformer()),
('clf', GradientBoostingClassifier(n_estimators=100)),
])
text_clf.fit(X_train, y_train)
predicted = text_clf.predict(X_test)
print(metrics.classification_report(y_test, predicted))
# bagging classifier
from sklearn.ensemble import BaggingClassifier
from sklearn.neighbors import KNeighborsClassifier
from sklearn.pipeline import Pipeline
from sklearn import metrics
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.feature_extraction.text import TfidfTransformer
from sklearn.datasets import fetch_20newsgroups
newsgroups_train = fetch_20newsgroups(subset='train')
newsgroups_test = fetch_20newsgroups(subset='test')
X_train = newsgroups_train.data
X_test = newsgroups_test.data
y_train = newsgroups_train.target
y_test = newsgroups_test.target
text_clf = Pipeline([('vect', CountVectorizer()),
('tfidf', TfidfTransformer()),
('clf', BaggingClassifier(KNeighborsClassifier())),
])
text_clf.fit(X_train, y_train)
predicted = text_clf.predict(X_test)
print(metrics.classification_report(y_test, predicted))
# Naive Bayes Classifier
from sklearn.naive_bayes import MultinomialNB
from sklearn.pipeline import Pipeline
from sklearn import metrics
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.feature_extraction.text import TfidfTransformer
from sklearn.datasets import fetch_20newsgroups
newsgroups_train = fetch_20newsgroups(subset='train')
newsgroups_test = fetch_20newsgroups(subset='test')
X_train = newsgroups_train.data
X_test = newsgroups_test.data
y_train = newsgroups_train.target
y_test = newsgroups_test.target
text_clf = Pipeline([('vect', CountVectorizer()),
('tfidf', TfidfTransformer()),
('clf', MultinomialNB()),
])
text_clf.fit(X_train, y_train)
predicted = text_clf.predict(X_test)
print(metrics.classification_report(y_test, predicted))
# K-nearest Neighbor
from sklearn.neighbors import KNeighborsClassifier
from sklearn.pipeline import Pipeline
from sklearn import metrics
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.feature_extraction.text import TfidfTransformer
from sklearn.datasets import fetch_20newsgroups
newsgroups_train = fetch_20newsgroups(subset='train')
newsgroups_test = fetch_20newsgroups(subset='test')
X_train = newsgroups_train.data
X_test = newsgroups_test.data
y_train = newsgroups_train.target
y_test = newsgroups_test.target
text_clf = Pipeline([('vect', CountVectorizer()),
('tfidf', TfidfTransformer()),
('clf', KNeighborsClassifier()),
])
text_clf.fit(X_train, y_train)
predicted = text_clf.predict(X_test)
print(metrics.classification_report(y_test, predicted))
# Support Vector Machine (SVM)
from sklearn.svm import LinearSVC
from sklearn.pipeline import Pipeline
from sklearn import metrics
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.feature_extraction.text import TfidfTransformer
from sklearn.datasets import fetch_20newsgroups
newsgroups_train = fetch_20newsgroups(subset='train')
newsgroups_test = fetch_20newsgroups(subset='test')
X_train = newsgroups_train.data
X_test = newsgroups_test.data
y_train = newsgroups_train.target
y_test = newsgroups_test.target
text_clf = Pipeline([('vect', CountVectorizer()),
('tfidf', TfidfTransformer()),
('clf', LinearSVC()),
])
text_clf.fit(X_train, y_train)
predicted = text_clf.predict(X_test)
print(metrics.classification_report(y_test, predicted))
# Decision Tree
from sklearn import tree
from sklearn.pipeline import Pipeline
from sklearn import metrics
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.feature_extraction.text import TfidfTransformer
from sklearn.datasets import fetch_20newsgroups
newsgroups_train = fetch_20newsgroups(subset='train')
newsgroups_test = fetch_20newsgroups(subset='test')
X_train = newsgroups_train.data
X_test = newsgroups_test.data
y_train = newsgroups_train.target
y_test = newsgroups_test.target
text_clf = Pipeline([('vect', CountVectorizer()),
('tfidf', TfidfTransformer()),
('clf', tree.DecisionTreeClassifier()),
])
text_clf.fit(X_train, y_train)
predicted = text_clf.predict(X_test)
print(metrics.classification_report(y_test, predicted))
# Random Forest
from sklearn.ensemble import RandomForestClassifier
from sklearn.pipeline import Pipeline
from sklearn import metrics
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.feature_extraction.text import TfidfTransformer
from sklearn.datasets import fetch_20newsgroups
newsgroups_train = fetch_20newsgroups(subset='train')
newsgroups_test = fetch_20newsgroups(subset='test')
X_train = newsgroups_train.data
X_test = newsgroups_test.data
y_train = newsgroups_train.target
y_test = newsgroups_test.target
text_clf = Pipeline([('vect', CountVectorizer()),
('tfidf', TfidfTransformer()),
('clf', RandomForestClassifier(n_estimators=100)),
])
text_clf.fit(X_train, y_train)
predicted = text_clf.predict(X_test)
print(metrics.classification_report(y_test, predicted))
```
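The pipelines above differ only in their final estimator, so a compact way to compare several of them is to loop over estimators. This sketch uses a tiny made-up corpus (the strings and labels below are invented purely for illustration) so it runs instantly; swap in the 20 Newsgroups data as in the cells above.

```python
from sklearn.pipeline import Pipeline
from sklearn.feature_extraction.text import CountVectorizer, TfidfTransformer
from sklearn.naive_bayes import MultinomialNB
from sklearn.neighbors import KNeighborsClassifier
from sklearn.svm import LinearSVC

# Toy corpus standing in for fetch_20newsgroups (labels: 1 = spam-like, 0 = not)
X_train = ["free money now", "cheap pills online", "meeting at noon",
           "lunch with the team", "win a prize today", "project status update"]
y_train = [1, 1, 0, 0, 1, 0]
X_test = ["win free pills", "team meeting update"]

for name, clf in [("NB", MultinomialNB()),
                  ("kNN", KNeighborsClassifier(n_neighbors=3)),
                  ("SVM", LinearSVC())]:
    pipe = Pipeline([("vect", CountVectorizer()),
                     ("tfidf", TfidfTransformer()),
                     ("clf", clf)])
    pipe.fit(X_train, y_train)
    print(name, pipe.predict(X_test))
```

The same loop works unchanged with the real train/test splits defined in the cells above.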
# MDP from multidimensional HJB
see [pdf](https://github.com/songqsh/foo1/blob/master/doc/191206HJB.pdf) for its math derivation
see source code at
- [py](hjb_mdp_v05_3.py) for tabular approach and
- [py](hjb_mdp_nn_v05.py) for deep learning approach
```
import numpy as np
import time
#import ipdb
import itertools
def deep_iter(*shape):
iters = (range(i) for i in shape)
return itertools.product(*iters)
class Pde:
def __init__(
self,
dim=1,
lam=0.0,
drift = lambda s,a: a,
run_cost = lambda s,a: len(s) + np.sum(s**2)*2.+ np.sum(a**2)/2.0,
term_cost = lambda s: -np.sum(s**2),
limit_s = 1.0, #l-infinity limit for state
limit_a = 2.0, #l-infinity limit for action
verbose=True
):
self.dim = dim
self.lam = lam
self.drift = drift
self.run_cost = run_cost
self.term_cost = term_cost
self.limit_s = limit_s
self.limit_a = limit_a
if verbose:
print(str(dim) + '-dim HJB')
#domain is a unit hyper cube
def is_interior(self, s):
return all((0 < s) & (s < 1))  # elementwise check, valid for numpy arrays
#cfd2mdp
def mdp(self, n_mesh_s = 8, n_mesh_a = 16, method='cfd'):
out = {}
####domain of mdp
h_s = self.limit_s/n_mesh_s #mesh size in state
h_a = self.limit_a/n_mesh_a #mesh size in action
v_shape = tuple([n_mesh_s + 1]*self.dim)
a_shape = tuple([n_mesh_a + 1]*self.dim)
def is_interior(*ix_s):
return all([0<x<n_mesh_s for x in ix_s])
out.update({
'v_shape': v_shape,
'a_shape': a_shape,
'is_interior': is_interior
})
####domain
# convert index(tuple) to state
def i2s(*ix):
return np.array([x * h_s for x in ix])
out['i2s'] = i2s
#convert index to action
def i2a(*ix):
return np.array([x * h_a for x in ix])
#out['i2a'] = i2a
########running and terminal costs and discount rate
def run_cost(ix_s,ix_a):
return self.run_cost(i2s(*ix_s), i2a(*ix_a))*h_s**2/self.dim
def term_cost(ix_s):
return self.term_cost(i2s(*ix_s))
rate = self.dim/(self.dim+self.lam*(h_s**2))
out.update({
'run_cost': run_cost,
'term_cost': term_cost,
'rate': rate
})
#########
#####transition
#return:
# a list of nbd indices
# a list of prob
def step(ix_s, ix_a):
ix_next_s_up = (np.array(ix_s)+np.eye(self.dim)).astype(int).tolist()
ix_next_s_dn = (np.array(ix_s)-np.eye(self.dim)).astype(int).tolist()
ix_next_s = [tuple(ix) for ix in ix_next_s_up+ix_next_s_dn]
pr=[]
if method == 'cfd':
b = self.drift(i2s(*ix_s), i2a(*ix_a))
pr_up = ((1+2.*h_s*b)/self.dim/2.0).tolist()
pr_dn = ((1-2.*h_s*b)/self.dim/2.0).tolist()
pr = pr_up+pr_dn
return ix_next_s, pr
out.update({'step': step})
return out
def value_iter(v_shape, a_shape, i2s, is_interior,
run_cost, term_cost, rate, step):
dim = len(v_shape)
v0 = np.zeros(v_shape)
# boundary value
for ix_s in deep_iter(*v_shape):
if not is_interior(*ix_s):
v0[ix_s]=term_cost(ix_s)
v1 = v0.copy()
for iter_n in range(100):
for ix_s0 in deep_iter(*v_shape):
if is_interior(*ix_s0):
q1 = []
for ix_a in deep_iter(*a_shape):
rhs = run_cost(ix_s0, ix_a)
ix_s1, pr = step(ix_s0, ix_a);
for k in range(2*dim):
rhs += v0[ix_s1[k]]*pr[k]
q1 += [rhs,]
v1[ix_s0] = rate*min(q1);
if np.max(np.abs(v0 - v1)) < 1e-3:
v0 = v1.copy()
break
v0 = v1.copy();
#iter_n += 1
return iter_n, v0
p = Pde(dim=2); m = p.mdp(n_mesh_s=16)
start_time = time.time()
n, v = value_iter(**m)
end_time = time.time()
print('>>>time elapsed is: ' + str(end_time - start_time))
def true_soln(s):
return -np.sum(s**2)
err = []
for ix_s in deep_iter(*m['v_shape']):
err0 = np.abs(v[ix_s] - true_soln(m['i2s'](*ix_s)))
err += [err0, ]
print('>>> sup norm error is: ' + str(max(err)))
print('>>> number of iterations is: ' + str(n))
```
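The `deep_iter` helper used throughout is just `itertools.product` over ranges. As a standalone illustration of what it yields:

```python
import itertools

def deep_iter(*shape):
    # Yield every index tuple of an array with the given shape,
    # in row-major order (same helper as in the notebook above).
    return itertools.product(*(range(i) for i in shape))

print(list(deep_iter(2, 2)))  # -> [(0, 0), (0, 1), (1, 0), (1, 1)]
```

This is why `for ix_s in deep_iter(*v_shape)` visits every grid point of the value array exactly once.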
# Code Reuse
Let’s put what we learned about code reuse all together.
<br><br>
First, let’s look back at **inheritance**. Run the following cell that defines a generic `Animal` class.
```
class Animal:
name = ""
category = ""
def __init__(self, name):
self.name = name
def set_category(self, category):
self.category = category
```
What we have is not enough to do much -- yet. That’s where you come in.
<br><br>
In the next cell, define a `Turtle` class that inherits from the `Animal` class. Then go ahead and set its category. For instance, a turtle is generally considered a reptile. Although modern cladistics call this categorization into question, for purposes of this exercise we will say turtles are reptiles!
```
class Turtle(Animal):
category = "reptile"
```
Run the following cell to check whether you correctly defined your `Turtle` class and set its category to reptile.
```
print(Turtle.category)
```
Was the output of the above cell reptile? If not, go back and edit your `Turtle` class making sure that it inherits from the `Animal` class and its category is properly set to reptile. Be sure to re-run that cell once you've finished your edits. Did you get it? If so, great!
Next, let’s practice **composition** a little bit. This one will require a second type of `Animal` that is in the same category as the first. For example, since you already created a `Turtle` class, go ahead and create a `Snake` class. Don’t forget that it also inherits from the `Animal` class and that its category should be set to reptile.
```
class Snake(Animal):
category = "reptile"
```
Now, let’s say we have a large variety of `Animal`s (such as turtles and snakes) in a Zoo. Below we have the `Zoo` class. We’re going to use it to organize our various `Animal`s. Remember, inheritance says a Turtle is an `Animal`, but a `Zoo` is not an `Animal` and an `Animal` is not a `Zoo` -- though they are related to one another.
Fill in the blanks of the `Zoo` class below so that you can use **zoo.add_animal( )** to add instances of the `Animal` subclasses you created above. Once you’ve added them all, you should be able to use **zoo.total_of_category( )** to tell you exactly how many individual `Animal` types the `Zoo` has for each category! Be sure to run the cell once you've finished your edits.
```
class Zoo:
def __init__(self):
self.current_animals = {}
def add_animal(self, animal):
self.current_animals[animal.name] = animal.category
def total_of_category(self, category):
result = 0
for animal in self.current_animals.values():
if animal == category:
result += 1
return result
zoo = Zoo()
```
Run the following cell to check whether you properly filled in the blanks of your `Zoo` class.
```
turtle = Turtle("Turtle") #create an instance of the Turtle class
snake = Snake("Snake") #create an instance of the Snake class
zoo.add_animal(turtle)
zoo.add_animal(snake)
print(zoo.total_of_category("reptile")) #how many zoo animal types in the reptile category
```
Was the output of the above cell 2? If not, go back and edit the `Zoo` class making sure to fill in the blanks with the appropriate attributes. Be sure to re-run that cell once you've finished your edits.
<br>
Did you get it? If so, perfect! You have successfully defined your `Turtle` and `Snake` subclasses as well as your `Zoo` class. You are all done with this notebook. Great work!
```
!nvidia-smi
import sys
if 'google.colab' in sys.modules:
!pip install -Uqq fastcore onnx onnxruntime sentencepiece seqeval rouge-score
!pip install -Uqq --no-deps fastai ohmeow-blurr
!pip install -Uqq transformers datasets wandb
from fastai.text.all import *
from fastai.callback.wandb import *
from transformers import *
from datasets import load_dataset, concatenate_datasets
from blurr.data.all import *
from blurr.modeling.all import *
```
## Data preprocessing
```
ds_name = 'snli'
train_ds = load_dataset(ds_name, split='train')
valid_ds = load_dataset(ds_name, split='validation')
len(train_ds), len(valid_ds)
train_ds.column_names
train_ds[2]
from collections import Counter
Counter(train_ds['label'])
train_ds = train_ds.filter(lambda sample: sample['label'] in [0,1,2])  # drop examples without a gold label (-1)
valid_ds = valid_ds.filter(lambda sample: sample['label'] in [0,1,2])
```
## Setup
```
model_name = 'distilbert-base-uncased'
# data
max_len = 512
bs = 32
val_bs = bs*2
# training
lr = 2e-5
```
## Tracking
```
import wandb
WANDB_NAME = f'{ds_name}-{model_name}-alum'
GROUP = f'{ds_name}-{model_name}-alum-{lr:.0e}'
NOTES = f'Simple finetuning {model_name} with RAdam lr={lr:.0e}'
CONFIG = {}
TAGS =[model_name,ds_name,'radam','alum']
wandb.init(reinit=True, project="vat", entity="fastai_community",
name=WANDB_NAME, group=GROUP, notes=NOTES, tags=TAGS, config=CONFIG);
```
## Training
```
def _to_device(e, device):
if hasattr(e, 'to'): return e.to(device)
    elif isinstance(e, dict):
        # move tensor-like values to the device; pass other values through unchanged
        return {k: (v.to(device) if hasattr(v, 'to') else v) for k, v in e.items()}
@patch
def one_batch(self:Learner, i, b):
self.iter = i
b_on_device = tuple(_to_device(e, self.dls.device) for e in b) if self.dls.device is not None else b
self._split(b_on_device)
self._with_events(self._do_one_batch, 'batch', CancelBatchException)
hf_arch, hf_config, hf_tokenizer, hf_model = BLURR_MODEL_HELPER.get_hf_objects(model_name, model_cls=AutoModelForSequenceClassification, tokenizer_cls=AutoTokenizer,
config_kwargs={'num_labels':3}, tokenizer_kwargs={'max_len':512})
def get_x(sample):
return sample['premise'], sample['hypothesis']
ds = concatenate_datasets([train_ds, valid_ds])
train_idx = list(range(len(train_ds)))
valid_idx = list(range(len(train_ds), len(train_ds)+len(valid_ds)))
# use number of chars as proxy to number of tokens for simplicity
lens = ds.map(lambda s: {'len': len(s['premise'])+len(s['hypothesis'])}, remove_columns=ds.column_names, num_proc=4)
train_lens = lens.select(train_idx)['len']
valid_lens = lens.select(valid_idx)['len']
blocks = (HF_TextBlock(hf_arch, hf_config, hf_tokenizer, hf_model),
CategoryBlock(vocab={0:'entailment', 1:'neutral', 2:'contradiction'}))
dblock = DataBlock(blocks=blocks,
get_x = get_x,
get_y=ItemGetter('label'),
splitter=IndexSplitter(list(range(len(train_ds), len(train_ds)+len(valid_ds)))))
# dblock.summary(train_ds)
%%time
dls = dblock.dataloaders(ds, bs=bs, val_bs=val_bs, dl_kwargs=[{'res':train_lens}, {'val_res':valid_lens}], num_workers=4)
# b = dls.one_batch()
model = HF_BaseModelWrapper(hf_model)
learn = Learner(dls,
model,
opt_func=RAdam,
metrics=[accuracy],
cbs=[HF_BaseModelCallback],
splitter=hf_splitter).to_fp16()
# learn.blurr_summary()
```
### ALUM finetuning
```
# !pip install git+git://github.com/aikindergarten/vat.git --no-deps -q
from vat.core import ALUMCallback
learn.add_cb(ALUMCallback(learn.model.hf_model.base_model.embeddings, start_epoch=2, alpha=0.5));
learn.fit_one_cycle(5, lr, cbs=WandbCallback(log_preds=False, log_model=False))
learn.validate()
test_ds = load_dataset('snli', split='test')
test_ds[0]
test_ds = test_ds.filter(lambda s: s['label'] in [0,1,2])
test_dl = dls.test_dl(test_ds, with_labels=True)
learn.validate(dl=test_dl)
wandb.finish()
```
## Validation on adversarial data
```
adv_ds = load_dataset('anli', split='test_r1')
adv_ds[0]
test_dl = dls.test_dl(adv_ds, with_labels=True)
learn.validate(dl=test_dl)
```
```
import pandas as pd
import numpy as np
from pathlib import Path
dir_path = Path().resolve().parent / 'demand_patterns'
low_patterns = "demand_patterns_train_low.csv"
fullrange_patterns = "demand_patterns_train_full_range.csv"
combined_pattern = 'demand_patterns_train_combined.csv'
comb = pd.read_csv(dir_path / combined_pattern)
comb
# RUN THIS TO CREATE THE MIXED DEMAND PATTERNS FOR TRAINING
low_demand = pd.read_csv(dir_path / low_patterns)
fullrange_demand = pd.read_csv(dir_path / fullrange_patterns)
new = pd.concat([low_demand, fullrange_demand], axis=1, ignore_index=True)
new
output_file = dir_path / 'demand_patterns_train_combined.csv'
new.to_csv(output_file, index=False)
new = pd.read_csv(output_file)
new
import pandas as pd
import numpy as np
from pathlib import Path
dir_path = Path().resolve().parent / 'demand_patterns'
test_low_patterns = 'demand_patterns_test_low.csv'
test_full_range_patterns = 'demand_patterns_test_full_range.csv'
test_high_patterns = "demand_patterns_test.csv"
test_middle_patterns = "demand_patterns_test_middle.csv"
df_low = pd.read_csv(dir_path / test_low_patterns)
df_low
sum_of_columns = [df_low.loc[:, index].sum() for index in df_low.columns.values]
max_column = np.argmax(sum_of_columns)
min_column = np.argmin(sum_of_columns)
print("min: " + str(min_column) + ' --> ' + str(df_low[str(min_column)].sum()))
print("max: " + str(max_column) + ' --> ' + str(df_low[str(max_column)].sum()))
df_low['4'].sum()
df_full_range = pd.read_csv(dir_path / test_full_range_patterns)
df_full_range
sum_of_columns = [df_full_range.loc[:, index].sum() for index in df_full_range.columns.values]
max_column = np.argmax(sum_of_columns)
min_column = np.argmin(sum_of_columns)
print("min: " + str(min_column) + ' --> ' + str(df_full_range[str(min_column)].sum()))
print("max: " + str(max_column) + ' --> ' + str(df_full_range[str(max_column)].sum()))
df_high = pd.read_csv(dir_path / test_high_patterns)
df_high
sum_of_columns = [df_high.loc[:, index].sum() for index in df_high.columns.values]
max_column = np.argmax(sum_of_columns)
min_column = np.argmin(sum_of_columns)
print("min: " + str(min_column) + ' --> ' + str(df_high[str(min_column)].sum()))
print("max: " + str(max_column) + ' --> ' + str(df_high[str(max_column)].sum()))
df_middle = pd.read_csv(dir_path / test_middle_patterns)
df_middle
sum_of_columns = [df_middle.loc[:, index].sum() for index in df_middle.columns.values]
max_column = np.argmax(sum_of_columns)
min_column = np.argmin(sum_of_columns)
print("min: " + str(min_column) + ' --> ' + str(df_middle[str(min_column)].sum()))
print("max: " + str(max_column) + ' --> ' + str(df_middle[str(max_column)].sum()))
# Creation of the test dataframe (we take the lowest demand pattern, a central one, and the highest one)
df_new = pd.DataFrame(df_low['4'].values)
df_new.insert(1, '1', df_middle['54'])
df_new.insert(2, '2', df_full_range['132'])
df_new.insert(3, '3', df_high['6'])
df_new
output_file = dir_path / 'demand_patterns_test_mixed.csv'
df_new.to_csv(output_file, index=False)
df = pd.read_csv(output_file)
df
import matplotlib.pyplot as plt
fig, ax = plt.subplots(figsize=(15,7))
ax.plot(df['0'].values)
ax.plot(df['1'].values)
ax.plot(df['2'].values)
ax.plot(df['3'].values)
ax.set_title("Test demand patterns trend")
ax.legend(('Low', 'Medium', 'Semi-high', 'High'))
```
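The min/max-column computation is repeated four times above; a small helper (an illustrative refactor, not part of the original notebook) captures it once:

```python
import pandas as pd

def column_extremes(df):
    # Sum each column and return the labels of the smallest and largest totals.
    sums = df.sum(axis=0)
    return sums.idxmin(), sums.idxmax()

# Tiny made-up frame mimicking the demand-pattern layout
df = pd.DataFrame({"0": [1, 2], "1": [10, 20], "2": [0, 1]})
print(column_extremes(df))  # -> ('2', '1')
```

Each `df_low`/`df_full_range`/`df_high`/`df_middle` block above could then become a single `column_extremes(df)` call.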
# Practical 7. Assignment 3.
Due date is March 7, before the class. You can work with a partner
Partner: Mohamed Salama, utorid: salamam5
## Problem 1. Your first MD simulation.
Read through section 6 and example 6.1-6.2 of the lecture. Run 3 simulations of fully extended polyglycine `data/polyGLY.pdb` for 1 nanosecond in vacuum (no water) with $T_1=100 K$, $T_2=300 K$, and $T_3=500 K$ and visually compare how extended the final structure is at each temperature. Write down your observations.
```
from simtk.openmm.app import *
from simtk.openmm import *
from simtk.unit import *
import MDAnalysis as md
import nglview as ng
from sys import stdout
pdb0_file = 'data/polyGLY.pdb'
file0 = open(pdb0_file, 'r')
for line in file0:
print(line)
u = md.Universe(pdb0_file)
ng.show_mdanalysis(u, gui=True)
def simulate(temp, fname):
'''Run a simulation of polyglycine for 1 nanosecond in vacuum (no water)
at the given temperature, and save the trajectory to fname.
'''
### 1.loading initial coordinates
pdb = PDBFile(pdb0_file)
### 2.choosing a forcefield parameters
ff = ForceField('amber10.xml')
system = ff.createSystem(pdb.topology, nonbondedMethod=CutoffNonPeriodic)
### 3. Choose parameters of the experiment: temperature, pressure, box size, solvation, boundary conditions, etc
temperature = temp*kelvin
frictionCoeff = 1/picosecond
time_step = 0.002*picoseconds
total_steps = 1*nanosecond / time_step
### 4. Choose an algorithm (integrator)
integrator = LangevinIntegrator(temperature, frictionCoeff, time_step)
### 5. Run simulation, saving coordinates time to time:
### 5a. Create a simulation object
simulation = Simulation(pdb.topology, system, integrator)
simulation.context.setPositions(pdb.positions)
### 5b. Minimize energy
simulation.minimizeEnergy()
### 5c. Save coordinates to dcd file and energies to standard output console:
simulation.reporters.append(DCDReporter(fname, 1000))
simulation.reporters.append(StateDataReporter(stdout, 5000, step=True, potentialEnergy=True,\
temperature=True, progress=True, totalSteps = total_steps))
### 5d. Run!
simulation.step(total_steps)
simulate(500, 'data/polyALA_traj_500K.dcd')
### 6. Visualization
sys = md.Universe(pdb0_file, 'data/polyALA_traj_100K.dcd')
ng.show_mdanalysis(sys, gui=True)
### 6. Visualization
sys = md.Universe(pdb0_file, 'data/polyALA_traj_300K.dcd')
ng.show_mdanalysis(sys, gui=True)
### 6. Visualization
sys = md.Universe(pdb0_file, 'data/polyALA_traj_500K.dcd')
ng.show_mdanalysis(sys, gui=True)
```
# At the lowest temperature (100 K) the protein was much more extended after 1 ns compared to the other two temperatures (there was very little folding at 100 K). At 300 K and 500 K the protein folded much more, and appeared to be about equally compact after 1 ns for both temperatures
## Problem 2. MD simulation analysis.
Perform a quantitative analysis of how extended/collapsed the proteins are in the trajectories obtained from Problem 1. Use, for example, end-to-end distance and/or the function `radius_of_gyration()` from the `MDAnalysis` module, which returns the [radius of gyration](https://en.wikipedia.org/wiki/Radius_of_gyration) of the protein. Present your findings and explain your observations from the physical perspective.
**Hint**. Think about the entropical and energetical contributions to the collapse and how temperature plays role in these processes.
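As a reference point for the analysis, the radius of gyration that `radius_of_gyration()` returns is the mass-weighted RMS distance of atoms from the center of mass, $R_g = \sqrt{\sum_i m_i \lVert r_i - r_{com} \rVert^2 / \sum_i m_i}$. A minimal NumPy sketch of that formula (equal masses assumed, for illustration only; the analysis below uses MDAnalysis's built-in):

```python
import numpy as np

def radius_of_gyration(positions, masses=None):
    # positions: (N, 3) array of coordinates; masses default to equal weights
    positions = np.asarray(positions, dtype=float)
    if masses is None:
        masses = np.ones(len(positions))
    masses = np.asarray(masses, dtype=float)
    com = np.average(positions, axis=0, weights=masses)
    sq = np.sum((positions - com) ** 2, axis=1)
    return np.sqrt(np.average(sq, weights=masses))

# Two equal-mass points a distance 2 apart: each sits 1 away from the COM,
# so the radius of gyration is 1.
print(radius_of_gyration([[0, 0, 0], [2, 0, 0]]))  # -> 1.0
```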
```
import numpy as np
import matplotlib.pyplot as plt
def end2end(sys):
### analysis of end-to-end distance
## choose terminal atoms
N_terminus = sys.select_atoms('resid 1 and name N')
C_terminus = sys.select_atoms('resid 25 and name C')
    ## go through the whole trajectory and compute the distance between them for every frame
dist = []
for frame in sys.trajectory:
dist.append(np.linalg.norm(N_terminus.positions - C_terminus.positions))
## the result is in the dist array
dist = np.array(dist)
return dist
```
## Plotting end to end distance for each temperature
```
sys1 = md.Universe(pdb0_file, 'data/polyALA_traj_100K.dcd')
sys2 = md.Universe(pdb0_file, 'data/polyALA_traj_300K.dcd')
sys3 = md.Universe(pdb0_file, 'data/polyALA_traj_500K.dcd')
plt.figure(figsize=(15,5))
plt.subplot(121)
plt.plot( end2end(sys1), '-k' )
plt.xlabel('timesteps')
plt.ylabel('end-to-end distance, A')
plt.title("Speed of Folding at 100 K")
plt.show()
plt.figure(figsize=(15,5))
plt.subplot(121)
plt.plot( end2end(sys2), '-k' )
plt.xlabel('timesteps')
plt.ylabel('end-to-end distance, A')
plt.title("Speed of Folding at 300 K")
plt.show()
plt.figure(figsize=(15,5))
plt.subplot(121)
plt.plot( end2end(sys3), '-k' )
plt.xlabel('timesteps')
plt.ylabel('end-to-end distance, A')
plt.title("Speed of Folding at 500 K")
plt.show()
print("Final end to end distance at 100 K:")
print(end2end(sys1)[-1])
print("Final end to end distance at 300 K:")
print(end2end(sys2)[-1])
print("Final end to end distance at 500 K:")
print(end2end(sys3)[-1])
from MDAnalysis.analysis import hbonds ## module for analysis of hydrogen bonds
## compute information about hbonds and write it in the 'hb.timeseries'
def plot(num):
## go through the 'hb.timeseries' file and calculate number of bonds for each time frame (it's the length of array frame)
hb_number = []
hb = hbonds.hbond_analysis.HydrogenBondAnalysis(num)
hb.run()
for frame in hb.timeseries:
hb_number.append(len(frame))
## the result is in the number array
hb_number = np.array(hb_number)
plt.figure(figsize=(15,5))
plt.plot(hb_number, 'g-')
plt.ylabel('# of hydrogen bonds')
plt.xlabel('timesteps')
plot(sys1)
plt.title("Forming of Hydrogen Bonds at 100 K")
plt.show()
plot(sys2)
plt.title("Forming of Hydrogen Bonds at 300 K")
plt.show()
plot(sys3)
plt.title("Forming of Hydrogen Bonds at 500 K")
plt.show()
# Radii of Gyration
print("Radius of gyration after 1 ns, at 100 K:")
print(sys1.atoms.radius_of_gyration())
print("\nRadius of gyration after 1 ns, at 300 K:")
print(sys2.atoms.radius_of_gyration())
print("\nRadius of gyration after 1 ns, at 500 K:")
print(sys3.atoms.radius_of_gyration())
```
As shown by the first set of plots, the speed of folding (considering end to end distances) increases at higher temperatures, and the final radius of gyration is also inversely proportional to the temperature, suggesting that the protein folds faster, and reaches a more compact state at higher temperatures.
At 100K there may be too little kinetic energy for the protein to fold. At 300 K, the protein can perhaps move more due to the higher kinetic energy, but at 500 K, the kinetic energy is maybe so high that the hydrogen bonds break more freely (which is why the final number of hydrogen bonds seems to be higher at 300 K than at 500 K).
Perhaps the fewer number of hydrogen bonds at 500 K allows the protein to be more flexible and thus reach its most compact state. (ie. at this temperature, the molecule has the most kinetic energy, so it can move around more and try more configurations until the most energetically favourable configuration is reached - it will not get stuck in any local minima that perhaps the 300 K simulation was stuck at because it has enough energy to break out of those configurations).
<h3>Cleaning Bad Data</h3>

- Strip white space
- Replace bad data
- Fill missing data
- Drop bad data
- Drop duplicates
```
import pandas as pd
data = pd.read_csv('artwork_data.csv',low_memory=False)
data.head(2)
#finding rows whose title ends with whitespace
data.loc[data['title'].str.contains('\s$',regex=True )]
#str.strip removes whitespace at both ends of the string
data['title'].str.strip()
#Need to make changes in the dataframe
data['title']=data['title'].str.strip()
data.head()
#Now we can re-run the filter and check whether any titles still have trailing whitespace
data.loc[data['title'].str.contains('\s$', regex=True)]
#We can also use lstrip and rstrip
data['title'].str.rstrip()
data['title'].str.lstrip()
#we can also use transform method instead of string methods
data['title'].transform(lambda x: x.strip())
```
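The same strip-and-verify round trip can be seen on a tiny made-up frame (illustrative, not the artwork dataset):

```python
import pandas as pd

df = pd.DataFrame({"title": ["Mona Lisa ", "Starry Night", " Scream\t"]})
# Rows whose title ends in whitespace match the regex \s$
dirty = df.loc[df["title"].str.contains(r"\s$", regex=True)]
print(len(dirty))  # -> 2
df["title"] = df["title"].str.strip()
# After stripping, the same filter matches nothing
print(df.loc[df["title"].str.contains(r"\s$", regex=True)].empty)  # -> True
```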
<h4>Replace Bad Data with NaN</h4>
```
import numpy as np
import pandas as pd
data = pd.read_csv('artwork_data.csv', low_memory=False)
data.head(2)
pd.isna(data.loc[:, 'dateText'])
#Without loc method
pd.isna(data['dateText'])
data.replace({'dateText':{'date not known':np.nan}})
data.replace({'dateText':{'date not known':np.nan}}, inplace=True)
data = pd.read_csv('artwork_data.csv', low_memory = False)
data.head()
#Instead of replace we can also use the loc method in some circumstances
data.loc[data['dateText'] == 'date not known',['dateText']] = np.nan
data.head(3)
data.loc[data['year'].notnull() & data['year'].astype(str).str.contains('[^0-9]')]
data.loc[data['year'].notnull() & data['year'].astype(str).str.contains('[^0-9]'),['year']] = np.nan
data.iloc[67968:67969]
```
<h4>Filling Missing Data with a Value</h4>
```
data = pd.read_csv('artwork_data.csv', low_memory=False)
data.head(3)
data.fillna(0)
#fillna replaces every NaN value with 0
#Let's specify each column and its fill value in dictionary format
data.fillna(value = {'depth' : 0, 'inscription' : 0})
#Lets make changes in original dataframe
data.fillna(value = {'depth' : 0,}, inplace = True)
data.head(3)
```
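To see exactly what the dictionary form of `fillna` touches, here is a small synthetic example (not the artwork dataset):

```python
import numpy as np
import pandas as pd

# fillna with a dict fills NaNs only in the listed columns
df = pd.DataFrame({"depth": [np.nan, 4.0], "inscription": [np.nan, "signed"]})
filled = df.fillna(value={"depth": 0})
print(filled["depth"].tolist())               # -> [0.0, 4.0]
print(pd.isna(filled.loc[0, "inscription"]))  # -> True (untouched)
```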
<h4>Dropping Data</h4>
```
data = pd.read_csv('artwork_data.csv', low_memory = False)
data.head(2)
data.shape
data.dropna()
data.dropna().shape
data.dropna(how = 'all')#Drop a row only if ALL of its values are NaN
data.dropna(how = 'all')
data.dropna(how = 'all').shape
#We can set thresh to drop rows from the dataframe
#With thresh=15, rows that have fewer than 15 non-NaN values are dropped
data.dropna(thresh=15).shape
#we can also set specific columns data to drop
data.dropna(subset=['year','acquisitionYear']).shape
# We can also set how = any or all
data.dropna(subset = ['year','acquisitionYear'], how = 'all').shape
# Drops a row only if both the year and acquisitionYear columns are NaN.
data.dropna(subset = ['year','acquisitionYear'], inplace=True)
data.shape
```
<h4>Identifying and Dropping Duplicate Data</h4>
```
data = pd.read_csv('artwork_sample.csv')
data.head(3)
# For some machine learning models it is okay to have duplicates, but other models do not tolerate them
data.drop_duplicates()
# drop_duplicates drops a row only when all of its values duplicate another row
# Instead we can pass subset of columns
data.drop_duplicates(subset = ['artist'])
# We can also keep first or last duplicate values by passing keep argument
#Keep first rows
data.drop_duplicates(subset = ['artist'], keep = 'first')
#Keep last rows
data.drop_duplicates(subset = ['artist'], keep = 'last')
#If we don't want to keep any of the duplicate rows, pass keep = False
#Keep no copies of duplicated rows
data.drop_duplicates(subset = ['artist'], keep = False)
# drop_duplicates doesn't modify the original dataframe unless we pass inplace = True
#Keep first rows
data.drop_duplicates(subset = ['artist'], keep = 'first', inplace = True)
data
#lets read large one
data = pd.read_csv('artwork_data.csv', low_memory = False)
data.head(2)
# Lets check which rows are duplicated
data.duplicated()
data.loc[data.duplicated()]
#None of the rows are fully duplicated
#Let's add some parameters; duplicated takes the same subset parameter
data.duplicated(subset = ['artist','title'], keep = False)
data.loc[data.duplicated(subset = ['artist','title'], keep = False)]
data.loc[data.duplicated(subset = ['artist','title'], keep = False)].shape
#Lets take some more example
data.loc[data['title'].str.contains('The Circle of the Lustful')]
#same title has 2 different year in term of acquisition probably reprint of the album
```
# Interfaces
In Nipype, interfaces are Python modules that let you use various external packages (e.g. FSL, SPM or FreeSurfer), even if they are written in a language other than Python. Such an interface knows what options an external program has and how to execute it.
## Interfaces vs. Workflows
Interfaces are the building blocks that solve well-defined tasks. We solve more complex tasks by combining interfaces with workflows:
<table style="width: 100%; font-size: 14px;">
<thead>
<tr>
<th style="text-align:left">Interfaces</th>
<th style="text-align:left">Workflows</th>
</tr>
</thead>
<tbody>
<tr>
<td style="text-align:left">Wrap *unitary* tasks</td>
<td style="text-align:left">Wrap *meta*-tasks
<li style="text-align:left">implemented with nipype interfaces wrapped inside ``Node`` objects</li>
<li style="text-align:left">subworkflows can also be added to a workflow without any wrapping</li>
</td>
</tr>
<tr>
<td style="text-align:left">Keep track of the inputs and outputs, and check their expected types</td>
<td style="text-align:left">Do not have inputs/outputs, but expose them from the interfaces wrapped inside</td>
</tr>
<tr>
<td style="text-align:left">Do not cache results (unless you use [interface caching](advanced_interfaces_caching.ipynb))</td>
<td style="text-align:left">Cache results</td>
</tr>
<tr>
<td style="text-align:left">Run by a nipype plugin</td>
<td style="text-align:left">Run by a nipype plugin</td>
</tr>
</tbody>
</table>
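The table above can be made concrete with a toy illustration (this is NOT the real Nipype API, just plain Python standing in for the concepts): each "interface" is a callable unit with one well-defined job, and a "workflow" wires their inputs and outputs together.

```python
# Toy "interfaces": each takes an input file name and returns its output file name
def skullstrip(in_file):
    return in_file.replace('.nii.gz', '_brain.nii.gz')

def smooth(in_file):
    return in_file.replace('.nii.gz', '_smooth.nii.gz')

# Toy "workflow": each step consumes the previous step's output
def run_workflow(in_file, steps):
    for step in steps:
        in_file = step(in_file)
    return in_file

print(run_workflow('T1w.nii.gz', [skullstrip, smooth]))
# -> T1w_brain_smooth.nii.gz
```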
To illustrate why interfaces are so useful, let's have a look at the brain extraction algorithm [BET](http://fsl.fmrib.ox.ac.uk/fsl/fslwiki/BET) from FSL. Once in its original framework and once in the Nipype framework.
## BET in the original framework
Let's take a look at one of the T1 images we have in our dataset on which we want to run BET.
```
from nilearn.plotting import plot_anat
%matplotlib inline
import matplotlib.pyplot as plt
plot_anat('/data/ds000114/sub-01/ses-test/anat/sub-01_ses-test_T1w.nii.gz', title='original',
display_mode='ortho', dim=-1, draw_cross=False, annotate=False);
```
In its simplest form, you can run BET by just specifying the input image and tell it what to name the output image:
bet <input> <output>
```
%%bash
FILENAME=/data/ds000114/sub-01/ses-test/anat/sub-01_ses-test_T1w
bet ${FILENAME}.nii.gz /output/sub-01_ses-test_T1w_bet.nii.gz
```
Let's take a look at the results:
```
plot_anat('/output/sub-01_ses-test_T1w_bet.nii.gz', title='after bet',
          display_mode='ortho', dim=-1, draw_cross=False, annotate=False);
```
Perfect! Exactly what we want. Hmm... what else could we want from BET? Well, it's actually a fairly complicated program. As is the case for all FSL binaries, just call it with no arguments to see all its options.
```
%%bash
bet
```
We see that BET can also return a binary brain mask as a result of the skull-strip, which can be useful for masking our GLM analyses (among other things). Let's run it again including that option and see the result.
```
%%bash
FILENAME=/data/ds000114/sub-01/ses-test/anat/sub-01_ses-test_T1w
bet ${FILENAME}.nii.gz /output/sub-01_ses-test_T1w_bet.nii.gz -m
```

```
plot_anat('/output/sub-01_ses-test_T1w_bet_mask.nii.gz', title='mask',
          display_mode='ortho', dim=-1, draw_cross=False, annotate=False);
```
## BET in the Nipype framework
So how can we run BET in the Nipype framework?
First things first, we need to import the ``BET`` class from Nipype's ``interfaces`` module:
```
from nipype.interfaces.fsl import BET
```
Now that we have the BET function accessible, we just have to specify the input and output file. And finally we have to run the command. So exactly like in the original framework.
```
skullstrip = BET()
skullstrip.inputs.in_file = "/data/ds000114/sub-01/ses-test/anat/sub-01_ses-test_T1w.nii.gz"
skullstrip.inputs.out_file = "/output/T1w_nipype_bet.nii.gz"
res = skullstrip.run()
```
If we now look at the results from Nipype, we see that it is exactly the same as before.
```
plot_anat('/output/T1w_nipype_bet.nii.gz', title='after skullstrip',
          display_mode='ortho', dim=-1, draw_cross=False, annotate=False);
```
This is not surprising, because Nipype used exactly the same bash code that we were using in the original framework example above. To verify this, we can call the ``cmdline`` function of the constructed BET instance.
```
print(skullstrip.cmdline)
```
Another way to set the inputs on an interface object is to use them as keyword arguments when you construct the interface instance. Let's write the Nipype code from above in this way, but let's also add the option to create a brain mask.
```
skullstrip = BET(in_file="/data/ds000114/sub-01/ses-test/anat/sub-01_ses-test_T1w.nii.gz",
out_file="/output/T1w_nipype_bet.nii.gz",
mask=True)
res = skullstrip.run()
```
Now if we plot this, we see again that this worked exactly as before. No surprise there.
```
plot_anat('/output/T1w_nipype_bet_mask.nii.gz', title='after skullstrip',
display_mode='ortho', dim=-1, draw_cross=False, annotate=False);
```
## Help Function
But how did we know what the names of the input parameters are? In the original framework we were able to just run ``BET``, without any additional parameters to get an information page. In the Nipype framework we can achieve the same thing by using the ``help()`` function on an interface class. For the BET example, this is:
```
BET.help()
```
As you can see, we get three different pieces of information. ***First***, a general explanation of the class.
Wraps command **bet**
Use FSL BET command for skull stripping.
For complete details, see the `BET Documentation.
<http://www.fmrib.ox.ac.uk/fsl/bet2/index.html>`_
Examples
--------
>>> from nipype.interfaces import fsl
>>> from nipype.testing import example_data
>>> btr = fsl.BET()
>>> btr.inputs.in_file = example_data('structural.nii')
>>> btr.inputs.frac = 0.7
>>> res = btr.run() # doctest: +SKIP
***Second***, a list of all possible input parameters.
Inputs:
[Mandatory]
in_file: (an existing file name)
input file to skull strip
flag: %s, position: 0
[Optional]
args: (a string)
Additional parameters to the command
flag: %s
center: (a list of at most 3 items which are an integer (int or
long))
center of gravity in voxels
flag: -c %s
environ: (a dictionary with keys which are a value of type 'str' and
with values which are a value of type 'str', nipype default value:
{})
Environment variables
frac: (a float)
fractional intensity threshold
flag: -f %.2f
functional: (a boolean)
apply to 4D fMRI data
flag: -F
mutually_exclusive: functional, reduce_bias, robust, padding,
remove_eyes, surfaces, t2_guided
ignore_exception: (a boolean, nipype default value: False)
Print an error message instead of throwing an exception in case the
interface fails to run
mask: (a boolean)
create binary mask image
flag: -m
mesh: (a boolean)
generate a vtk mesh brain surface
flag: -e
no_output: (a boolean)
Don't generate segmented output
flag: -n
out_file: (a file name)
name of output skull stripped image
flag: %s, position: 1
outline: (a boolean)
create surface outline image
flag: -o
output_type: ('NIFTI_PAIR' or 'NIFTI_PAIR_GZ' or 'NIFTI_GZ' or
'NIFTI')
FSL output type
padding: (a boolean)
improve BET if FOV is very small in Z (by temporarily padding end
slices)
flag: -Z
mutually_exclusive: functional, reduce_bias, robust, padding,
remove_eyes, surfaces, t2_guided
radius: (an integer (int or long))
head radius
flag: -r %d
reduce_bias: (a boolean)
bias field and neck cleanup
flag: -B
mutually_exclusive: functional, reduce_bias, robust, padding,
remove_eyes, surfaces, t2_guided
remove_eyes: (a boolean)
eye & optic nerve cleanup (can be useful in SIENA)
flag: -S
mutually_exclusive: functional, reduce_bias, robust, padding,
remove_eyes, surfaces, t2_guided
robust: (a boolean)
robust brain centre estimation (iterates BET several times)
flag: -R
mutually_exclusive: functional, reduce_bias, robust, padding,
remove_eyes, surfaces, t2_guided
skull: (a boolean)
create skull image
flag: -s
surfaces: (a boolean)
run bet2 and then betsurf to get additional skull and scalp surfaces
(includes registrations)
flag: -A
mutually_exclusive: functional, reduce_bias, robust, padding,
remove_eyes, surfaces, t2_guided
t2_guided: (a file name)
as with creating surfaces, when also feeding in non-brain-extracted
T2 (includes registrations)
flag: -A2 %s
mutually_exclusive: functional, reduce_bias, robust, padding,
remove_eyes, surfaces, t2_guided
terminal_output: ('stream' or 'allatonce' or 'file' or 'none')
Control terminal output: `stream` - displays to terminal immediately
(default), `allatonce` - waits till command is finished to display
output, `file` - writes output to file, `none` - output is ignored
threshold: (a boolean)
apply thresholding to segmented brain image and mask
flag: -t
vertical_gradient: (a float)
vertical gradient in fractional intensity threshold (-1, 1)
flag: -g %.2f
And ***third***, a list of all possible output parameters.
Outputs:
inskull_mask_file: (a file name)
path/name of inskull mask (if generated)
inskull_mesh_file: (a file name)
path/name of inskull mesh outline (if generated)
mask_file: (a file name)
path/name of binary brain mask (if generated)
meshfile: (a file name)
path/name of vtk mesh file (if generated)
out_file: (a file name)
path/name of skullstripped file (if generated)
outline_file: (a file name)
path/name of outline file (if generated)
outskin_mask_file: (a file name)
path/name of outskin mask (if generated)
outskin_mesh_file: (a file name)
path/name of outskin mesh outline (if generated)
outskull_mask_file: (a file name)
path/name of outskull mask (if generated)
outskull_mesh_file: (a file name)
path/name of outskull mesh outline (if generated)
skull_mask_file: (a file name)
path/name of skull mask (if generated)
So here we see that Nipype also has output parameters. This is very practical. Because instead of typing the full path name to the mask volume, we can also more directly use the ``mask_file`` parameter.
```
print(res.outputs.mask_file)
```
## Interface errors
To execute any interface class we use the ``run`` method on that object. For FSL, Freesurfer, and other programs, this will just make a system call with the command line we saw above. For MATLAB-based programs like SPM, it will actually generate a ``.m`` file and run a MATLAB process to execute it. All of that is handled in the background.
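As a rough illustration of what such a system call involves, here is a hypothetical sketch (not Nipype's actual implementation) of how an interface might assemble a command line from typed inputs and the `flag: %s`-style templates shown in the help output above:

```python
# Hypothetical helper: build a command line from input values and flag templates
def build_cmdline(tool, inputs, flags):
    parts = [tool]
    for name, value in inputs.items():
        template = flags[name]
        # boolean-style flags like '-m' carry no placeholder;
        # others format the input value into the template
        parts.append(template % value if '%' in template else template)
    return ' '.join(parts)

cmd = build_cmdline('bet',
                    {'in_file': 'T1w.nii.gz', 'frac': 0.5, 'mask': True},
                    {'in_file': '%s', 'frac': '-f %.2f', 'mask': '-m'})
print(cmd)  # bet T1w.nii.gz -f 0.50 -m
```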
But what happens if we didn't specify all necessary inputs? For instance, you need to give BET a file to work on. If you try and run it without setting the input ``in_file``, you'll get a Python exception before anything actually gets executed:
```
skullstrip2 = BET()
try:
skullstrip2.run()
except(ValueError) as err:
print("ValueError:", err)
else:
raise
```
Nipype also knows some things about what sort of values should get passed to the inputs, and will raise (hopefully) informative exceptions when they are violated -- before anything gets processed. For example, BET just lets you say "create a mask," it doesn't let you name it. You may forget this, and try to give it a name. In this case, Nipype will raise a ``TraitError`` telling you what you did wrong:
```
try:
skullstrip.inputs.mask = "mask_file.nii"
except(Exception) as err:
if "TraitError" in str(err.__class__):
print("TraitError:", err)
else:
raise
else:
raise
```
Additionally, Nipype knows that, for inputs corresponding to files you are going to process, they should exist in your file system. If you pass a string that doesn't correspond to an existing file, it will error and let you know:
```
try:
skullstrip.inputs.in_file = "/data/oops_a_typo.nii"
except(Exception) as err:
if "TraitError" in str(err.__class__):
print("TraitError:", err)
else:
raise
else:
raise
```
It turns out that for default output files, you don't even need to specify a name. Nipype will know what files are going to be created and will generate a name for you:
```
skullstrip = BET(in_file="/data/ds000114/sub-01/ses-test/anat/sub-01_ses-test_T1w.nii.gz")
print(skullstrip.cmdline)
```
Note that it is going to write the output file to the local directory.
What if you just ran this interface and wanted to know what it called the file that was produced? As you might have noticed before, calling the ``run`` method returned an object called ``InterfaceResult`` that we saved under the variable ``res``. Let's inspect that object:
```
res = skullstrip.run()
print(res.outputs)
```
We see that four possible files can be generated by BET. Here we ran it in the most simple way possible, so it just generated an ``out_file``, which is the skull-stripped image. Let's see what happens when we generate a mask. By the way, you can also set inputs at runtime by including them as arguments to the ``run`` method:
```
res2 = skullstrip.run(mask=True)
print(res2.outputs)
```
Nipype knows that if you ask for a mask, BET is going to generate it in a particular way and makes that information available to you.
## Why this is amazing!
**A major motivating objective for Nipype is to streamline the integration of different analysis packages, so that you can use the algorithms you feel are best suited to your particular problem.**
Say that you want to use BET, as SPM does not offer a way to create an explicit mask from functional data, but that otherwise you want your processing to occur in SPM. Although possible to do this in a MATLAB script, it might not be all that clean, particularly if you want your skullstrip to happen in the middle of your workflow (for instance, after realignment). Nipype provides a unified representation of interfaces across analysis packages.
For more on this, check out the [Interfaces](basic_interfaces.ipynb) and the [Workflow](basic_workflow.ipynb) tutorial.
### Exercise 1
Import `IsotropicSmooth` from `nipype.interfaces.fsl` and find the `FSL` command that is being run. What are the mandatory inputs for this interface?
```
# write your solution here
from nipype.interfaces.fsl import IsotropicSmooth
# all this information can be found when we run `help` method.
# note that you can either provide `in_file` and `fwhm` or `in_file` and `sigma`
IsotropicSmooth.help()
```
### Exercise 2
Run the `IsotropicSmooth` for `/data/ds000114/sub-01/ses-test/anat/sub-01_ses-test_T1w.nii.gz` file with a smoothing kernel 4mm:
```
# write your solution here
smoothing = IsotropicSmooth()
smoothing.inputs.in_file = "/data/ds000114/sub-01/ses-test/anat/sub-01_ses-test_T1w.nii.gz"
smoothing.inputs.fwhm = 4
smoothing.inputs.out_file = "/output/T1w_nipype_smooth.nii.gz"
smoothing.run()
```
### Exercise 3
Plot the output of your interface.
```
# write your solution here
# we will be using plot_anat from nilearn package
from nilearn.plotting import plot_anat
%matplotlib inline
plot_anat('/output/T1w_nipype_smooth.nii.gz', title='after smoothing',
display_mode='ortho', dim=-1, draw_cross=False, annotate=False);
```
<a href="http://cocl.us/pytorch_link_top">
<img src="https://s3-api.us-geo.objectstorage.softlayer.net/cf-courses-data/CognitiveClass/DL0110EN/notebook_images%20/Pytochtop.png" width="750" alt="IBM Product " />
</a>
<img src="https://s3-api.us-geo.objectstorage.softlayer.net/cf-courses-data/CognitiveClass/DL0110EN/notebook_images%20/cc-logo-square.png" width="200" alt="cognitiveclass.ai logo" />
<h1>Linear Regression Multiple Outputs</h1>
<h2>Table of Contents</h2>
<p>In this lab, you will create a model the PyTorch way. This will help you build more complicated models.</p>
<ul>
<li><a href="#Makeup_Data">Make Some Data</a></li>
<li><a href="#Model_Cost">Create the Model and Cost Function the PyTorch way</a></li>
<li><a href="#BGD">Train the Model: Batch Gradient Descent</a></li>
</ul>
<p>Estimated Time Needed: <strong>20 min</strong></p>
<hr>
<h2>Preparation</h2>
We'll need the following libraries:
```
# Import the libraries we need for this lab
from torch import nn,optim
import torch
import numpy as np
import matplotlib.pyplot as plt
from mpl_toolkits.mplot3d import Axes3D
from torch.utils.data import Dataset, DataLoader
```
Set the random seed:
```
# Set the random seed.
torch.manual_seed(1)
```
Use this function for plotting:
```
# The function for plotting 2D
def Plot_2D_Plane(model, dataset, n=0):
    w1 = model.state_dict()['linear.weight'].numpy()[0][0]
    w2 = model.state_dict()['linear.weight'].numpy()[0][1]
    b = model.state_dict()['linear.bias'].numpy()

    # Data
    x1 = dataset.x[:, 0].view(-1, 1).numpy()
    x2 = dataset.x[:, 1].view(-1, 1).numpy()
    y = dataset.y.numpy()

    # Make plane
    X, Y = np.meshgrid(np.arange(x1.min(), x1.max(), 0.05), np.arange(x2.min(), x2.max(), 0.05))
    yhat = w1 * X + w2 * Y + b

    # Plotting
    fig = plt.figure()
    ax = fig.gca(projection='3d')
    ax.plot(x1[:, 0], x2[:, 0], y[:, 0], 'ro', label='y')  # Scatter plot
    ax.plot_surface(X, Y, yhat)  # Plane plot
    ax.set_xlabel('x1 ')
    ax.set_ylabel('x2 ')
    ax.set_zlabel('y')
    plt.title('estimated plane iteration:' + str(n))
    ax.legend()
    plt.show()
```
<!--Empty Space for separating topics-->
<h2 id="Makeup_Data">Make Some Data</h2>
Create a dataset class with two-dimensional features:
```
# Create a 2D dataset
class Data2D(Dataset):

    # Constructor
    def __init__(self):
        self.x = torch.zeros(20, 2)
        self.x[:, 0] = torch.arange(-1, 1, 0.1)
        self.x[:, 1] = torch.arange(-1, 1, 0.1)
        self.w = torch.tensor([[1.0], [1.0]])
        self.b = 1
        self.f = torch.mm(self.x, self.w) + self.b
        self.y = self.f + 0.1 * torch.randn((self.x.shape[0], 1))
        self.len = self.x.shape[0]

    # Getter
    def __getitem__(self, index):
        return self.x[index], self.y[index]

    # Get Length
    def __len__(self):
        return self.len
```
Create a dataset object:
```
# Create the dataset object
data_set = Data2D()
```
<h2 id="Model_Cost">Create the Model, Optimizer, and Total Loss Function (Cost)</h2>
Create a customized linear regression module:
```
# Create a customized linear regression module
class linear_regression(nn.Module):

    # Constructor
    def __init__(self, input_size, output_size):
        super(linear_regression, self).__init__()
        self.linear = nn.Linear(input_size, output_size)

    # Prediction
    def forward(self, x):
        yhat = self.linear(x)
        return yhat
```
Create a model. Use two features: make the input size 2 and the output size 1:
```
# Create the linear regression model and print the parameters
model = linear_regression(2,1)
print("The parameters: ", list(model.parameters()))
```
Create an optimizer object. Set the learning rate to 0.1. <b>Don't forget to enter the model parameters in the constructor.</b>
<img src="https://s3-api.us-geo.objectstorage.softlayer.net/cf-courses-data/CognitiveClass/DL0110EN/notebook_images%20/chapter2/2.6.2paramater_hate.png" width = "100" alt="How the optimizer works" />
```
# Create the optimizer
optimizer = optim.SGD(model.parameters(), lr=0.1)
```
Create the criterion function that calculates the total loss or cost:
```
# Create the cost function
criterion = nn.MSELoss()
```
Create a data loader object. Set the batch_size equal to 2:
```
# Create the data loader
train_loader = DataLoader(dataset=data_set, batch_size=2)
```
<!--Empty Space for separating topics-->
<h2 id="BGD">Train the Model via Mini-Batch Gradient Descent</h2>
Run 100 epochs of Mini-Batch Gradient Descent and store the total loss or cost for every iteration. Remember that this is an approximation of the true total loss or cost:
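To make the loop's mechanics explicit, here is a toy plain-numpy sketch of the same mini-batch procedure (an illustration under made-up data, not the PyTorch code used in this lab): slice the data into batches, compute the batch MSE gradient, and step.

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(20, 2))
y = X @ np.array([1.0, 1.0]) + 1.0       # noiseless targets on a plane

w, b, lr = np.zeros(2), 0.0, 0.1
for epoch in range(20):
    for start in range(0, len(X), 2):    # batch_size = 2, like the DataLoader
        xb, yb = X[start:start + 2], y[start:start + 2]
        err = xb @ w + b - yb            # batch prediction error
        w -= lr * 2 * xb.T @ err / len(xb)   # gradient of the batch MSE w.r.t. w
        b -= lr * 2 * err.mean()             # gradient w.r.t. b
```

Each batch loss is only an approximation of the total loss, which is why the stored cost curve is noisy rather than monotone.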
```
# Train the model
LOSS = []
print("Before Training: ")
Plot_2D_Plane(model, data_set)
epochs = 100

def train_model(epochs):
    for epoch in range(epochs):
        for x, y in train_loader:
            yhat = model(x)
            loss = criterion(yhat, y)
            LOSS.append(loss.item())
            optimizer.zero_grad()
            loss.backward()
            optimizer.step()

train_model(epochs)
print("After Training: ")
Plot_2D_Plane(model, data_set, epochs)

# Plot out the Loss and iteration diagram
plt.plot(LOSS)
plt.xlabel("Iterations ")
plt.ylabel("Cost/total loss ")
```
<h3>Practice</h3>
Create a new <code>model1</code>. Train the model with a batch size 30 and learning rate 0.1, store the loss or total cost in a list <code>LOSS1</code>, and plot the results.
```
# Practice: create model1. Train the model with batch size 30 and learning rate 0.1,
# store the loss in a list LOSS1, and plot the results.
data_set = Data2D()
model1 = linear_regression(2, 1)
trainloader = DataLoader(dataset=data_set, batch_size=30)
optimizer1 = optim.SGD(model1.parameters(), lr=0.1)
LOSS1 = []
for epoch in range(epochs):
    for x, y in trainloader:
        yhat = model1(x)
        loss = criterion(yhat, y)
        LOSS1.append(loss.item())
        optimizer1.zero_grad()
        loss.backward()
        optimizer1.step()
print("After Training: ")
Plot_2D_Plane(model1, data_set, epochs)
```
Double-click <b>here</b> for the solution.
<!-- Your answer is below:
train_loader = DataLoader(dataset = data_set, batch_size = 30)
model1 = linear_regression(2, 1)
optimizer = optim.SGD(model1.parameters(), lr = 0.1)
LOSS1 = []
epochs = 100
def train_model(epochs):
    for epoch in range(epochs):
        for x,y in train_loader:
            yhat = model1(x)
            loss = criterion(yhat,y)
            LOSS1.append(loss.item())
            optimizer.zero_grad()
            loss.backward()
            optimizer.step()
train_model(epochs)
Plot_2D_Plane(model1 , data_set)
plt.plot(LOSS1)
plt.xlabel("iterations ")
plt.ylabel("Cost/total loss ")
-->
Use the following validation data to calculate the total loss or cost for both models:
```
torch.manual_seed(2)
validation_data = Data2D()
Y = validation_data.y
X = validation_data.x
print("For model:")
totalloss=criterion(model(X),Y)
print(totalloss)
print("For model1:")
totalloss=criterion(model1(X),Y)
print(totalloss)
```
Double-click <b>here</b> for the solution.
<!-- Your answer is below:
print("total loss or cost for model: ",criterion(model(X),Y))
print("total loss or cost for model: ",criterion(model1(X),Y))
-->
<!--Empty Space for separating topics-->
<a href="http://cocl.us/pytorch_link_bottom">
<img src="https://s3-api.us-geo.objectstorage.softlayer.net/cf-courses-data/CognitiveClass/DL0110EN/notebook_images%20/notebook_bottom%20.png" width="750" alt="PyTorch Bottom" />
</a>
<h2>About the Authors:</h2>
<a href="https://www.linkedin.com/in/joseph-s-50398b136/">Joseph Santarcangelo</a> has a PhD in Electrical Engineering, his research focused on using machine learning, signal processing, and computer vision to determine how videos impact human cognition. Joseph has been working for IBM since he completed his PhD.
Other contributors: <a href="https://www.linkedin.com/in/michelleccarey/">Michelle Carey</a>, <a href="www.linkedin.com/in/jiahui-mavis-zhou-a4537814a">Mavis Zhou</a>
<hr>
Copyright © 2018 <a href="cognitiveclass.ai?utm_source=bducopyrightlink&utm_medium=dswb&utm_campaign=bdu">cognitiveclass.ai</a>. This notebook and its source code are released under the terms of the <a href="https://bigdatauniversity.com/mit-license/">MIT License</a>.
# Configuring pandas
```
# import numpy and pandas
import numpy as np
import pandas as pd
# used for dates
import datetime
from datetime import datetime, date
# Set some pandas options controlling output format
pd.set_option('display.notebook_repr_html', False)
pd.set_option('display.max_columns', 8)
pd.set_option('display.max_rows', 10)
pd.set_option('display.width', 90)
# bring in matplotlib for graphics
import matplotlib.pyplot as plt
%matplotlib inline
# view the first five lines of data/msft.csv
!head -n 5 data/msft.csv # mac or Linux
# type data/msft.csv # on windows, but shows the entire file
```
# Reading a CSV into a DataFrame
```
# read in msft.csv into a DataFrame
msft = pd.read_csv("data/msft.csv")
msft[:5]
```
# Specifying the index column when reading a CSV file
```
# use column 0 as the index
msft = pd.read_csv("data/msft.csv", index_col=0)
msft[:5]
```
# Data type inference and specification
```
# examine the types of the columns in this DataFrame
msft.dtypes
# specify that the Volume column should be a float64
msft = pd.read_csv("data/msft.csv",
dtype = { 'Volume' : np.float64})
msft.dtypes
```
# Specifying column names
```
# specify a new set of names for the columns
# all lower case, remove space in Adj Close
# also, header=0 skips the header row
df = pd.read_csv("data/msft.csv",
header=0,
names=['date', 'open', 'high', 'low',
'close', 'volume'])
df[:5]
```
# Specifying specific columns to load
```
# read in data only in the Date and Close columns
# and index by the Date column
df2 = pd.read_csv("data/msft.csv",
usecols=['Date', 'Close'],
index_col=['Date'])
df2[:5]
```
# Saving a DataFrame to a CSV
```
# save df2 to a new csv file
# also specify naming the index as date
df2.to_csv("data/msft_modified.csv", index_label='date')
# view the start of the file just saved
!head -n 5 data/msft_modified.csv
#type data/msft_modified.csv # windows
```
# General field-delimited data
```
# use read_table with sep=',' to read a CSV
df = pd.read_table("data/msft.csv", sep=',')
df[:5]
# save as pipe delimited
df.to_csv("data/msft_piped.txt", sep='|')
# check that it worked
!head -n 5 data/msft_piped.txt # osx or Linux
# type data/psft_piped.txt # on windows
```
# Handling variants of formats in field-delimited data
```
# messy file
!head -n 6 data/msft2.csv # osx or Linux
# type data/msft2.csv # windows
# read, but skip rows 0, 2 and 3
df = pd.read_csv("data/msft2.csv", skiprows=[0, 2, 3])
df[:5]
# another messy file, with the mess at the end
!cat data/msft_with_footer.csv # osx or Linux
# type data/msft_with_footer.csv # windows
# skip only two lines at the end
df = pd.read_csv("data/msft_with_footer.csv",
skipfooter=2,
engine = 'python')
df
# only process the first three rows
pd.read_csv("data/msft.csv", nrows=3)
# skip 100 lines, then only process the next five
pd.read_csv("data/msft.csv", skiprows=100, nrows=5,
header=0,
names=['date', 'open', 'high', 'low',
'close', 'vol'])
```
# Reading and writing data in Excel format
```
# read excel file
# only reads first sheet (msft in this case)
df = pd.read_excel("data/stocks.xlsx")
df[:5]
# read from the aapl worksheet
aapl = pd.read_excel("data/stocks.xlsx", sheetname='aapl')
aapl[:5]
# save to an .XLS file, in worksheet 'Sheet1'
df.to_excel("data/stocks2.xls")
# write making the worksheet name MSFT
df.to_excel("data/stocks_msft.xls", sheet_name='MSFT')
# write multiple sheets
# requires use of the ExcelWriter class
from pandas import ExcelWriter
with ExcelWriter("data/all_stocks.xls") as writer:
    aapl.to_excel(writer, sheet_name='AAPL')
    df.to_excel(writer, sheet_name='MSFT')
# write to xlsx
df.to_excel("data/msft2.xlsx")
```
# Reading and writing JSON files
```
# write the excel data to a JSON file
df[:5].to_json("data/stocks.json")
!cat data/stocks.json # osx or Linux
#type data/stocks.json # windows
# read data in from JSON
df_from_json = pd.read_json("data/stocks.json")
df_from_json[:5]
# the URL to read
url = "http://www.fdic.gov/bank/individual/failed/banklist.html"
# read it
banks = pd.read_html(url)
# examine a subset of the first table read
banks[0][0:5].iloc[:,0:2]
# read the stock data
df = pd.read_excel("data/stocks.xlsx")
# write the first two rows to HTML
df.head(2).to_html("data/stocks.html")
# check the first 10 lines of the output
!head -n 10 data/stocks.html # mac or Linux
# type data/stocks.html # windows, but prints the entire file
```
# Reading and writing HDF5 format files
```
# seed for replication
np.random.seed(123456)
# create a DataFrame of dates and random numbers in three columns
df = pd.DataFrame(np.random.randn(8, 3),
index=pd.date_range('1/1/2000', periods=8),
columns=['A', 'B', 'C'])
# create HDF5 store
store = pd.HDFStore('data/store.h5')
store['df'] = df # persisting happened here
store
# read in data from HDF5
store = pd.HDFStore("data/store.h5")
df = store['df']
df[:5]
# this changes the DataFrame, but did not persist
df.iloc[0].A = 1
# to persist the change, assign the DataFrame to the
# HDF5 store object
store['df'] = df
# it is now persisted
# the following loads the store and
# shows the first two rows, demonstrating
# the the persisting was done
pd.HDFStore("data/store.h5")['df'][:5] # it's now in there
```
# Accessing data on the web and in the cloud
```
# read csv directly from Google Finance via a URL
msft_hist = pd.read_csv(
"http://www.google.com/finance/historical?" +
"q=NASDAQ:MSFT&startdate=Apr+01%2C+2017&" +
"enddate=Apr+30%2C+2017&output=csv")
msft_hist[:5]
```
# Reading and writing from/to SQL databases
```
# reference SQLite
import sqlite3
# read in the stock data from CSV
msft = pd.read_csv("data/msft.csv")
msft["Symbol"]="MSFT"
aapl = pd.read_csv("data/aapl.csv")
aapl["Symbol"]="AAPL"
# create connection
connection = sqlite3.connect("data/stocks.sqlite")
# .to_sql() will create SQL to store the DataFrame
# in the specified table. if_exists specifies
# what to do if the table already exists
msft.to_sql("STOCK_DATA", connection, if_exists="replace")
aapl.to_sql("STOCK_DATA", connection, if_exists="append")
# commit the SQL and close the connection
connection.commit()
connection.close()
# connect to the database file
connection = sqlite3.connect("data/stocks.sqlite")
# query all records in STOCK_DATA
# returns a DataFrame
# index_col specifies which column to make the DataFrame index
stocks = pd.io.sql.read_sql("SELECT * FROM STOCK_DATA;",
connection, index_col='index')
# close the connection
connection.close()
# report the head of the data retrieved
stocks[:5]
# open the connection
connection = sqlite3.connect("data/stocks.sqlite")
# construct the query string
query = "SELECT * FROM STOCK_DATA WHERE " + \
"Volume>29200100 AND Symbol='MSFT';"
# execute and close connection
items = pd.io.sql.read_sql(query, connection, index_col='index')
connection.close()
# report the query result
items
```
# Reading stock data from Google Finance
```
# import data reader package
import pandas_datareader as pdr
# read from google and display the head of the data
start = datetime(2017, 4, 1)
end = datetime(2017, 4, 30)
goog = pdr.data.DataReader("MSFT", 'google', start, end)
goog[:5]
```
# Retrieving options data from Google Finance
```
# read options for MSFT
options = pdr.data.Options('MSFT', 'google')
options.expiry_dates
data = options.get_options_data(expiry=options.expiry_dates[0])
data.iloc[:5,:3]
# get all puts at strike price of $30 (first three columns only)
data.loc[(30, slice(None), 'put'), :].iloc[0:5, 0:3]
# put options at strike of $30, between 2018-01-19 and 2018-01-30
data.loc[(30, slice('20180119','20180130'), 'put'), :] \
.iloc[:, 0:3]
```
# Reading economic data from the Federal Reserve Bank of St. Louis
```
# read GDP data from FRED
gdp = pdr.data.FredReader("GDP",
date(2012, 1, 1),
date(2014, 1, 27))
gdp.read()[:5]
# Get Compensation of employees: Wages and salaries
pdr.data.FredReader("A576RC1A027NBEA",
date(1929, 1, 1),
date(2013, 1, 1)).read()[:5]
```
# Accessing Kenneth French data
```
# read from Kenneth French fama global factors data set
factors = pdr.data.FamaFrenchReader("Global_Factors").read()
factors[0][:5]
```
# Reading from the World Bank
```
# get all indicators
from pandas_datareader import wb
all_indicators = pdr.wb.get_indicators()
all_indicators.iloc[:5,:2]
# search for life expectancy indicators
le_indicators = pdr.wb.search("life expectancy")
# report first five rows, first two columns
le_indicators.iloc[:5,:2]
# get countries and show the 3 digit code and name
countries = pdr.wb.get_countries()
# show a subset of the country data
countries.loc[0:5,['name', 'capitalCity', 'iso2c']]
# get life expectancy at birth for all countries from 1980 to 2014
le_data_all = pdr.wb.download(indicator="SP.DYN.LE00.IN",
start='1980',
end='2014')
le_data_all
# only US, CAN, and MEX are returned by default
le_data_all.index.levels[0]
# retrieve life expectancy at birth for all countries
# from 1980 to 2014
le_data_all = wb.download(indicator="SP.DYN.LE00.IN",
country = countries['iso2c'],
start='1980',
end='2012')
le_data_all
#le_data_all.pivot(index='country', columns='year')
le_data = le_data_all.reset_index().pivot(index='country',
columns='year')
# examine pivoted data
le_data.iloc[:5,0:3]
# ask which country has the lowest
# life expectancy in each year
country_with_least_expectancy = le_data.idxmin(axis=0)
country_with_least_expectancy[:5]
# and what is the minimum life expectancy for each year
expectancy_for_least_country = le_data.min(axis=0)
expectancy_for_least_country[:5]
# this merges the two frames together and gives us
# year, country and expectancy where the minimum exists
least = pd.DataFrame(
data = {'Country': country_with_least_expectancy.values,
'Expectancy': expectancy_for_least_country.values},
index = country_with_least_expectancy.index.levels[1])
least[:5]
```
# Wind Statistics
### Introduction:
The data have been modified to contain some missing values, identified by NaN.
Using pandas should make this exercise
easier, in particular for the bonus question.
You should be able to perform all of these operations without using
a for loop or other looping construct.
1. The data in 'wind.data' has the following format:
```
"""
Yr Mo Dy RPT VAL ROS KIL SHA BIR DUB CLA MUL CLO BEL MAL
61 1 1 15.04 14.96 13.17 9.29 NaN 9.87 13.67 10.25 10.83 12.58 18.50 15.04
61 1 2 14.71 NaN 10.83 6.50 12.62 7.67 11.50 10.04 9.79 9.67 17.54 13.83
61 1 3 18.50 16.88 12.33 10.13 11.17 6.17 11.25 NaN 8.50 7.67 12.75 12.71
"""
```
The first three columns are year, month and day. The
remaining 12 columns are average windspeeds in knots at 12
locations in Ireland on that day.
More information about the dataset can be found [here](wind.desc).
### Step 1. Import the necessary libraries
### Step 2. Import the dataset from this [address](https://github.com/guipsamora/pandas_exercises/blob/master/06_Stats/Wind_Stats/wind.data)
### Step 3. Assign it to a variable called data and replace the first 3 columns by a proper datetime index.
### Step 4. Year 2061? Do we really have data from this year? Create a function to fix it and apply it.
### Step 5. Set the right dates as the index. Pay attention at the data type, it should be datetime64[ns].
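A minimal sketch of steps 3-5, assuming the whitespace-separated layout shown above (the two sample rows and column names below are illustrative; real code would read `wind.data` directly):

```python
from io import StringIO

import pandas as pd

# two sample rows in the format shown above, truncated to two locations
raw = StringIO(
    "61 1 1 15.04 14.96\n"
    "61 2 1 14.71 13.50\n"
)
data = pd.read_csv(raw, sep=r"\s+", header=None,
                   names=["Yr", "Mo", "Dy", "RPT", "VAL"])

# the raw two-digit year 61 means 1961, not 2061, so add 1900 explicitly
dates = pd.to_datetime(dict(year=data.Yr + 1900, month=data.Mo, day=data.Dy))
data = data.drop(columns=["Yr", "Mo", "Dy"]).set_index(dates)
```

The resulting index has dtype `datetime64[ns]`, as step 5 requires.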
### Step 6. Compute how many values are missing for each location over the entire record.
#### They should be ignored in all calculations below.
### Step 7. Compute how many non-missing values there are in total.
### Step 8. Calculate the mean windspeed over all the locations and all the times.
#### A single number for the entire dataset.
### Step 9. Create a DataFrame called loc_stats and calculate the min, max and mean windspeeds and standard deviations of the windspeeds at each location over all the days
#### A different set of numbers for each location.
### Step 10. Create a DataFrame called day_stats and calculate the min, max and mean windspeed and standard deviations of the windspeeds across all the locations at each day.
#### A different set of numbers for each day.
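Steps 9 and 10 differ only in the axis of aggregation; a toy sketch with synthetic data (the column names are illustrative):

```python
import numpy as np
import pandas as pd

toy = pd.DataFrame({"RPT": [15.04, 14.71, np.nan],
                    "VAL": [14.96, np.nan, 16.88]})

# step 9: one row of statistics per location (column); NaNs are skipped
loc_stats = toy.agg(["min", "max", "mean", "std"]).T

# step 10: one row of statistics per day (row)
day_stats = toy.agg(["min", "max", "mean", "std"], axis=1)
```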
### Step 11. Find the average windspeed in January for each location.
#### Treat January 1961 and January 1962 both as January.
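One way to pool every January across years is to filter on the month component of the datetime index; a sketch on synthetic data:

```python
import numpy as np
import pandas as pd

idx = pd.date_range("1961-01-01", "1962-12-31", freq="D")
toy = pd.DataFrame({"RPT": np.ones(len(idx))}, index=idx)

# select every January row regardless of year, then average per location
january_means = toy[toy.index.month == 1].mean()
```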
### Step 12. Downsample the record to a yearly frequency for each location.
### Step 13. Downsample the record to a monthly frequency for each location.
### Step 14. Downsample the record to a weekly frequency for each location.
### Step 15. Calculate the min, max and mean windspeeds and standard deviations of the windspeeds across all locations for each week (assume that the first week starts on January 2 1961) for the first 52 weeks.
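Steps 12-15 are all resampling operations; a sketch on synthetic data, assuming weeks end on Sunday so the first week starts Monday, January 2, 1961 (the yearly and monthly downsamples of steps 12-13 follow the same pattern with a different rule):

```python
import numpy as np
import pandas as pd

# 52 full weeks of synthetic daily data starting Monday 1961-01-02
idx = pd.date_range("1961-01-02", periods=52 * 7, freq="D")
toy = pd.DataFrame({"RPT": np.arange(len(idx), dtype=float)}, index=idx)

# step 15: weekly min/max/mean/std; 'W' bins by weeks ending on Sunday
weekly = toy.resample("W").agg(["min", "max", "mean", "std"])
first_52 = weekly.iloc[:52]
```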
```
from google.colab import drive
drive.mount('/content/drive')
!pip install keras-self-attention
!pip install emoji
!pip install ekphrasis
!pip install transformers==4.2.1
import numpy as np
import pandas as pd
import string
from nltk.corpus import stopwords
import re
import os
from collections import Counter
from ekphrasis.classes.preprocessor import TextPreProcessor
from ekphrasis.classes.tokenizer import SocialTokenizer
from ekphrasis.dicts.emoticons import emoticons
text_processor = TextPreProcessor(
# terms that will be normalized
normalize=['url', 'email', 'percent', 'money', 'phone', 'user',
'time', 'date', 'number'],
# terms that will be annotated
annotate={"hashtag", "allcaps", "elongated", "repeated",
'emphasis', 'censored'},
fix_html=True, # fix HTML tokens
# corpus from which the word statistics are going to be used
# for word segmentation
segmenter="twitter",
# corpus from which the word statistics are going to be used
# for spell correction
corrector="twitter",
unpack_hashtags=True, # perform word segmentation on hashtags
unpack_contractions=True, # Unpack contractions (can't -> can not)
spell_correct_elong=True, # spell correction for elongated words
# select a tokenizer. You can use SocialTokenizer, or pass your own
# the tokenizer, should take as input a string and return a list of tokens
tokenizer=SocialTokenizer(lowercase=True).tokenize,
# list of dictionaries, for replacing tokens extracted from the text,
# with other expressions. You can pass more than one dictionary.
dicts=[emoticons]
)
def print_text(texts,i,j):
for u in range(i,j):
print(texts[u])
print()
df_1 = pd.read_csv('/content/drive/My Drive/Semeval 2017/twitter-2016train-A.txt', delimiter='\t', encoding='utf-8', header=None)
# print(df_1.head(5)) #last N rows
# print(len(df_1))
df_2 = pd.read_csv('/content/drive/My Drive/Semeval 2017/twitter-2016test-A.txt', delimiter='\t', encoding='utf-8', header=None)
# print(df_2.head(5)) #last N rows
# print(len(df_2))
df_3 = pd.read_csv('/content/drive/My Drive/Semeval 2017/twitter-2016devtest-A.txt', delimiter='\t', encoding='utf-8', header=None)
# print(df_3.head(5)) #last N rows
# print(len(df_3))
df_4 = pd.read_csv('/content/drive/My Drive/Semeval 2017/twitter-2016dev-A.txt', delimiter='\t', encoding='utf-8', header=None)
# print(df_4.head(5)) #last N rows
# print(len(df_4))
df_5 = pd.read_csv('/content/drive/My Drive/Semeval 2017/twitter-2015train-A.txt', delimiter='\t', encoding='utf-8', header=None)
# print(df_5.head(5)) #last N rows
# print(len(df_5))
df_6 = pd.read_csv('/content/drive/My Drive/Semeval 2017/twitter-2015test-A.txt', delimiter='\t', encoding='utf-8', header=None)
# print(df_6.head(5)) #last N rows
# print(len(df_6))
df_7 = pd.read_csv('/content/drive/My Drive/Semeval 2017/twitter-2014test-A.txt', delimiter='\t', encoding='utf-8', header=None)
# print(df_7.head(5)) #last N rows
# print(len(df_7))
df_8 = pd.read_csv('/content/drive/My Drive/Semeval 2017/twitter-2014sarcasm-A.txt', delimiter='\t', encoding='utf-8', header=None)
# print(df_8.head(5)) #last N rows
# print(len(df_8))
df_9 = pd.read_csv('/content/drive/My Drive/Semeval 2017/twitter-2013train-A.txt', delimiter='\t', encoding='utf-8', header=None)
# print(df_9.head(5)) #last N rows
# print(len(df_9))
df_10 = pd.read_csv('/content/drive/My Drive/Semeval 2017/twitter-2013test-A.txt', delimiter='\t', encoding='utf-8', header=None)
# print(df_10.head(5)) #last N rows
# print(len(df_10))
df_11 = pd.read_csv('/content/drive/My Drive/Semeval 2017/twitter-2013dev-A.txt', delimiter='\t', encoding='utf-8', header=None)
# print(df_11.head(5)) #last N rows
# print(len(df_11))
```
<h2>Balancing the data</h2>
```
# concatenate all the yearly splits into one DataFrame
df = pd.concat([df_1, df_2, df_3, df_4, df_5, df_6,
df_7, df_8, df_9, df_10, df_11], ignore_index=True)
print(df.head(5))
print(len(df))
# Testing for null values
# lol = np.asarray(df_[1].isnull())
# for i in range(0,len(lol)):
# if lol[i]:
# print(i)
print(len(df))
text_array = df[2]
labels = df[1]
print("Length of training data: ",len(text_array))
print_text(text_array,0,10)
df_val = pd.read_csv('/content/drive/My Drive/Semeval 2017/Test/SemEval2017-task4-test.subtask-A.english.txt', delimiter='\n', encoding='utf-8', header=None)
print(df_val.tail(5)) #last N rows
print(len(df_val))
lol = []
test_set = np.asarray(df_val[0])
for i in range(0,len(df_val)):
temp = np.asarray(test_set[i].split("\t"))
temp = temp.reshape((3))
lol.append(temp)
df_val = pd.DataFrame(lol)
df_val.head(5)
text_array_val = df_val[2]
labels_val = df_val[1]
print("Length of validation data: ",len(text_array_val))
print_text(text_array_val,0,10)
print(Counter(labels))
print(Counter(labels_val))
#removing website names
def remove_website(text):
return " ".join([word if re.search(r"https?://\S+|www\.\S+|\.(com|co|net)$", word, flags=re.IGNORECASE) is None else "" for word in text.split(" ")])
# Training set
text_array = text_array.apply(lambda text: remove_website(text))
print_text(text_array,0,10)
print("**************************************************************************")
# Validation set
text_array_val = text_array_val.apply(lambda text: remove_website(text))
print_text(text_array_val,0,10)
# Functions for chat word conversion
f = open("/content/drive/My Drive/Semeval 2017/slang.txt", "r")
chat_words_str = f.read()
chat_words_map_dict = {}
chat_words_list = []
for line in chat_words_str.split("\n"):
if line != "":
cw = line.split("=")[0]
cw_expanded = line.split("=")[1]
chat_words_list.append(cw)
chat_words_map_dict[cw] = cw_expanded
chat_words_list = set(chat_words_list)
def chat_words_conversion(text):
new_text = []
for w in text.split():
if w.upper() in chat_words_list:
new_text.append(chat_words_map_dict[w.upper()])
else:
new_text.append(w)
return " ".join(new_text)
# Chat word conversion
# Training set
text_array = text_array.apply(lambda text: chat_words_conversion(text))
print_text(text_array,0,10)
print("********************************************************************************")
# Validation set
text_array_val = text_array_val.apply(lambda text: chat_words_conversion(text))
print_text(text_array_val,0,10)
os.chdir("/content/drive/My Drive/Semeval 2017")
#Function for emoticon conversion
from emoticons import EMOTICONS
def convert_emoticons(text):
for emot in EMOTICONS:
text = re.sub(u'('+emot+')', " ".join(EMOTICONS[emot].replace(",","").split()), text)
return text
#testing the emoticon function
text = "Hello :-) :-)"
text = convert_emoticons(text)
print(text + "\n")
# Emoticon conversion
# Training set
text_array = text_array.apply(lambda text: convert_emoticons(text))
print_text(text_array,0,10)
print("**********************************************************************************")
# Validation set
text_array_val = text_array_val.apply(lambda text: convert_emoticons(text))
print_text(text_array_val,0,10)
os.chdir("/content")
# FUnction for removal of emoji
import emoji
def convert_emojis(text):
text = emoji.demojize(text, delimiters=(" ", " "))
text = re.sub("_|-"," ",text)
return text
# Training set
text_array = text_array.apply(lambda text: convert_emojis(text))
print_text(text_array,0,10)
print("**************************************************************************")
# Validation set
text_array_val = text_array_val.apply(lambda text: convert_emojis(text))
print_text(text_array_val,0,10)
# Ekphrasis pipe for text pre-processing
def ekphrasis_pipe(sentence):
cleaned_sentence = " ".join(text_processor.pre_process_doc(sentence))
return cleaned_sentence
# Training set
text_array = text_array.apply(lambda text: ekphrasis_pipe(text))
print("Training set completed.......")
#Validation set
text_array_val = text_array_val.apply(lambda text: ekphrasis_pipe(text))
print("Test set completed.......")
print_text(text_array,0,10)
print("************************************************************************")
print_text(text_array_val,0,10)
# Removing unnecessary punctuations
PUNCT_TO_REMOVE = "\"$%&'()+,-./;=[\]^_`{|}~"
def remove_punctuation(text):
return text.translate(str.maketrans('', '', PUNCT_TO_REMOVE))
# Training set
text_array = text_array.apply(lambda text: remove_punctuation(text))
print_text(text_array,0,10)
print("********************************************************************")
# Validation set
text_array_val = text_array_val.apply(lambda text: remove_punctuation(text))
print_text(text_array_val,0,10)
# Finding length of longest array
maxLen = len(max(text_array,key = lambda text: len(text.split(" "))).split(" "))
print(maxLen)
u = lambda text: len(text.split(" "))
sentence_lengths = []
for x in text_array:
sentence_lengths.append(u(x))
print(sorted(sentence_lengths)[-800:])
print(len(sentence_lengths))
# Count of each label in dataset
from collections import Counter
# Printing training set counts for analysis
print("Elements: ",set(labels))
print("Length: ",len(labels))
print(Counter(labels))
print("**************************************************************************")
# Printing validation set counts for analysis
print("Elements: ",set(labels_val))
print("Length: ",len(labels_val))
print(Counter(labels_val))
Y = []
Y_val = []
# Training set
for i in range(0,len(labels)):
if(labels[i] == 'neutral'):
Y.append(0)
if(labels[i] == 'positive'):
Y.append(1)
if(labels[i] == 'negative'):
Y.append(2)
# Validation set
for i in range(0,len(labels_val)):
if(labels_val[i] == 'neutral'):
Y_val.append(0)
if(labels_val[i] == 'positive'):
Y_val.append(1)
if(labels_val[i] == 'negative'):
Y_val.append(2)
print(len(Y),len(Y_val))
print(Counter(Y))
print(Counter(Y_val))
# Testing the conversion into integers
for i in range(310,320):
print(text_array_val[i])
print(labels_val[i],Y_val[i])
# Verifying train set
X = np.asarray(list(text_array))
Y = np.asarray(list(Y))
labels = np.asarray(list(labels))
print(type(X))
print(type(Y))
print(type(labels))
print(np.shape(X),np.shape(Y),np.shape(labels))
# Verifying validation set
X_val = np.asarray(list(text_array_val))
Y_val = np.asarray(list(Y_val))
labels_val = np.asarray(list(labels_val))
print(type(X_val))
print(type(Y_val))
print(type(labels_val))
print(np.shape(X_val),np.shape(Y_val),np.shape(labels_val))
index = 824
print(X[index])
print(labels[index])
print(Y[index])
print(type(X))
print(type(Y))
print(np.shape(X),np.shape(Y),np.shape(labels))
print(np.shape(X_val),np.shape(Y_val),np.shape(labels_val))
# Converting to one hot vectors
def convert_to_one_hot(Y, C):
Y = np.eye(C)[Y.reshape(-1)] # np.eye(C)[i] selects row i of the identity matrix, i.e. the one-hot vector for class i
return Y
Y_oh_train = convert_to_one_hot(np.array(Y), C = 3)
Y_oh_val = convert_to_one_hot(np.array(Y_val), C = 3)
print(np.shape(Y_oh_train))
index = 310
print(labels[index], Y[index], "is converted into one hot", Y_oh_train[index])
```
<h2>Tensorflow Model</h2>
```
import tensorflow as tf
import os
import numpy as np
import pandas as pd
import string
from nltk.corpus import stopwords
import re
import os
from collections import Counter
from transformers import RobertaTokenizerFast, TFRobertaModel, TFBertModel, BertTokenizerFast, ElectraTokenizerFast, TFElectraModel, AlbertTokenizerFast, TFAlbertModel, XLNetTokenizerFast, TFXLNetModel, MPNetTokenizerFast, TFMPNetModel
from tensorflow import keras
from tensorflow.keras import layers
from tensorflow.keras import backend as K
from tensorflow.keras.callbacks import ModelCheckpoint
from sklearn.metrics import classification_report
from sklearn.metrics import f1_score
from keras_self_attention import SeqSelfAttention
print(tf.__version__)
resolver = tf.distribute.cluster_resolver.TPUClusterResolver(tpu='grpc://' + os.environ['COLAB_TPU_ADDR'])
tf.config.experimental_connect_to_cluster(resolver)
tf.tpu.experimental.initialize_tpu_system(resolver)
print("All devices: ", tf.config.list_logical_devices('TPU'))
tokenizer = MPNetTokenizerFast.from_pretrained("microsoft/mpnet-base")
X = list(X)
X_val = list(X_val)
train_encodings = tokenizer(X, max_length=80, truncation=True, padding="max_length", return_tensors='tf')
val_encodings = tokenizer(X_val, max_length=80, truncation=True, padding="max_length", return_tensors='tf')
print(np.shape(train_encodings["input_ids"]))
print(np.shape(val_encodings["input_ids"]))
print(train_encodings["input_ids"][0])
print("***************************************************************************")
print(val_encodings["input_ids"][0])
# This is the best model
def Offense_classifier(input_shape):
"""
Function creating the Emojify-v2 model's graph.
Arguments:
input_shape -- shape of the input, usually (max_len,)
word_to_vec_map -- dictionary mapping every word in a vocabulary into its 50-dimensional vector representation
word_to_index -- dictionary mapping from words to their indices in the vocabulary (400,001 words)
Returns:
model -- a model instance in Keras
"""
model = TFMPNetModel.from_pretrained('microsoft/mpnet-base')
layer = model.layers[0]
# Define sentence_indices as the input of the graph, it should be of shape input_shape and dtype 'int32' (as it contains indices).
inputs = keras.Input(shape=input_shape, dtype='int32')
input_masks = keras.Input(shape=input_shape, dtype='int32')
embeddings = layer([inputs, input_masks])[0][:,0,:]
# embeddings = keras.layers.GaussianNoise(0.2)(embeddings)
# embeddings = keras.layers.Dropout(0.3)(embeddings)
# Propagate the embeddings through an LSTM layer with 128-dimensional hidden state
# Be careful, the returned output should be a batch of sequences.
# lstm_one = keras.layers.Bidirectional(keras.layers.LSTM(150, return_sequences=True, recurrent_dropout=0.25, dropout=0.2))
# X = lstm_one(embeddings)
# X = keras.layers.Dropout(0.2)(X)
# lstm_two = keras.layers.Bidirectional(keras.layers.LSTM(150, return_sequences=True, recurrent_dropout=0.25, dropout=0.2))
# X = lstm_two(X)
# X = keras.layers.Dropout(0.2)(X)
# # *************Attention*******************
# X = SeqSelfAttention(attention_activation='elu')(X)
# # ****************Attention*******************
# post_activation_GRU_cell = keras.layers.GRU(64, return_sequences = False, recurrent_dropout=0.25, dropout=0.2)
# X = post_activation_GRU_cell(X)
X = keras.layers.Dense(32,activation='elu',kernel_regularizer=keras.regularizers.l2(0.0001))(embeddings)
X = keras.layers.BatchNormalization(momentum=0.99, epsilon=0.001, center=True, scale=True)(X)
X = keras.layers.Dense(3,activation='tanh',kernel_regularizer=keras.regularizers.l2(0.0001))(X)
# Add a softmax activation
X = keras.layers.Activation('softmax')(X)
# Create Model instance which converts sentence_indices into X.
model = keras.Model(inputs=[inputs,input_masks], outputs=[X])
return model
model = Offense_classifier((80,))
model.summary()
strategy = tf.distribute.TPUStrategy(resolver)
class EvaluationMetric(keras.callbacks.Callback):
def __init__(self, trial_encodings, trial_masks, Y_val):
super(EvaluationMetric, self).__init__()
self.trial_encodings = trial_encodings
self.trial_masks = trial_masks
self.Y_val = Y_val
def on_epoch_begin(self, epoch, logs={}):
print("\nTraining...")
def on_epoch_end(self, epoch, logs={}):
print("\nEvaluating...")
trial_prediction = self.model.predict([self.trial_encodings,self.trial_masks])
pred = []
for i in range(0,len(self.Y_val)):
num = np.argmax(trial_prediction[i])
pred.append(num)
print(classification_report(self.Y_val, pred, digits=3))
evaluation_metric = EvaluationMetric(val_encodings["input_ids"], val_encodings["attention_mask"], Y_val)
with strategy.scope():
model = Offense_classifier((80,))
optimizer = keras.optimizers.Adam(learning_rate=1e-5)
# the model ends in a softmax, so the loss receives probabilities, not logits
loss_fun = [
tf.keras.losses.CategoricalCrossentropy(from_logits=False)
]
metric = ['acc']
model.compile(optimizer=optimizer, loss=loss_fun, metrics=metric)
model.summary()
checkpoint = ModelCheckpoint(filepath='/content/neutro-mpnet.{epoch:03d}.h5',
verbose = 0,
save_weights_only=True)
c = Counter(Y)
print(c)
print(c.keys())
neutral = c[0]
pos = c[1]
neg = c[2]
total = pos+neg+neutral
print(neutral,pos,neg,total)
# Weight each class by maxi / (maxi + count): the largest class gets
# weight 0.5 and rarer classes get proportionally larger weights.
maxi = max(pos,neg,neutral)
weight_for_0 = (maxi / (maxi+neutral))
weight_for_1 = (maxi / (maxi+pos))
weight_for_2 = (maxi / (maxi+neg))
class_weight_ = {0: weight_for_0, 1: weight_for_1, 2: weight_for_2}
print('Weight for class 0: {:.2f}'.format(weight_for_0))
print('Weight for class 1: {:.2f}'.format(weight_for_1))
print('Weight for class 2: {:.2f}'.format(weight_for_2))
history = model.fit(
x = [train_encodings["input_ids"], train_encodings["attention_mask"]],
y = Y_oh_train,
validation_data = ([val_encodings["input_ids"],val_encodings["attention_mask"]],Y_oh_val),
callbacks = [evaluation_metric, checkpoint],
batch_size = 32,
shuffle=True,
epochs=6,
class_weight = class_weight_
)
# plot_model(model, to_file="model.png", show_shapes=True, show_layer_names=False)
model.load_weights("/content/drive/MyDrive/semeval 17 transformer weights/neutro-mpnet.004.h5")
# model.save_weights("/content/drive/MyDrive/semeval 17 transformer weights/neutro-mpnet.004.h5")
answer = model.predict([val_encodings["input_ids"],val_encodings["attention_mask"]])
print(X_val[0])
print(Y_oh_val[0])
print(labels_val[0])
print("******************************************")
print(len(answer), len(Y_val))
Counter(Y_val)
# convert softmax probabilities into predicted class indices
pred = []
for i in range(0,len(X_val)):
num = np.argmax(answer[i])
pred.append(num)
Counter(pred)
Counter(Y_val)
con_mat = tf.math.confusion_matrix(labels=Y_val, predictions=pred, dtype=tf.dtypes.int32)
print(con_mat)
import seaborn as sns
import matplotlib.pyplot as plt
figure = plt.figure(figsize=(8, 8))
sns.heatmap(con_mat, annot=True,cmap=plt.cm.Spectral,fmt='d',xticklabels=["Neutral","Positive","Negative"], yticklabels=["Neutral","Positive","Negative"])
plt.tight_layout()
plt.ylabel('True label')
plt.xlabel('Predicted label')
plt.show()
from sklearn.metrics import f1_score
f1_score(Y_val, pred, average='macro')
from sklearn.metrics import recall_score
recall_score(Y_val, pred, average='macro')
from sklearn.metrics import classification_report
target_names = ['Neutral', 'Positive', 'Negative']
print(classification_report(Y_val, pred, digits=3, target_names=target_names))
from sklearn.metrics import accuracy_score
accuracy_score(Y_val, pred, normalize=True)
```
<h3>Clustering</h3>
```
!pip install plotly==4.5.4
import plotly
import plotly.graph_objs as go
import plotly.express as px
flag = []
count = 0
positive = []
negative = []
neutral = []
for i in range(0,len(pred)):
count = count + 1
neutral.append(answer[i][0])
positive.append(answer[i][1])
negative.append(answer[i][2])
print(count)
pred_colour = []
for i in range(0,len(pred)):
if pred[i] == 0:
pred_colour.append("Neutral")
if pred[i] == 1:
pred_colour.append("Positive")
if pred[i] == 2:
pred_colour.append("Negative")
test_df = pd.DataFrame({'positive':positive, 'negative':negative, 'neutral':neutral, 'Prediction':pred_colour})
fig = px.scatter_3d(test_df, x='positive', y='negative', z='neutral', color='Prediction')
fig.update_traces(
marker={
'size': 0.7,
'opacity': 1,
'colorscale' : 'viridis',
}
)
fig.update_layout(legend= {'itemsizing': 'constant'})
fig.update_layout(width = 700)
fig.update_layout(margin=dict(l=0, r=0, b=0, t=0))
from sklearn.preprocessing import normalize
from sklearn.cluster import KMeans
from sklearn.metrics.pairwise import cosine_similarity
from scipy.spatial.distance import cosine
```
<h5>SVNS</h5>
<h3>Middle Layer</h3>
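The SVNS scores computed below treat cosine distance to a cluster centre as a degree of membership: `scipy.spatial.distance.cosine` returns a value in [0, 2], so `1 - distance/2` maps perfect alignment to 1 and perfect opposition to 0. A toy sketch (the centre and sample vectors are illustrative):

```python
import numpy as np
from scipy.spatial.distance import cosine

center = np.array([1.0, 0.0])
aligned = np.array([2.0, 0.0])   # same direction as the centre
opposed = np.array([-1.0, 0.0])  # opposite direction

# cosine() returns 1 - cos(theta), a distance in [0, 2]
score_aligned = 1 - cosine(aligned, center) / 2
score_opposed = 1 - cosine(opposed, center) / 2
```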
```
model.layers[-3]
with strategy.scope():
cl_model = keras.Model(model.input, model.layers[-3].output)
cl_32 = cl_model.predict([val_encodings["input_ids"],val_encodings["attention_mask"]])
kmeans = KMeans(n_clusters=3, random_state=4).fit(cl_32)
y_kmeans_batchnorm = kmeans.predict(cl_32)
for i in range(0,len(y_kmeans_batchnorm)):
if(y_kmeans_batchnorm[i] == 0):
y_kmeans_batchnorm[i] = 1
elif(y_kmeans_batchnorm[i] == 1):
y_kmeans_batchnorm[i] = 2
else:
y_kmeans_batchnorm[i] = 0
centers_batchnorm = kmeans.cluster_centers_
con_mat = tf.math.confusion_matrix(labels=Y_val, predictions=y_kmeans_batchnorm)
print(con_mat)
from sklearn.metrics import classification_report
target_names = ['Neutral', 'Positive', 'Negative']
print(classification_report(Y_val, y_kmeans_batchnorm, digits=3, target_names=target_names))
import seaborn as sns
import matplotlib.pyplot as plt
figure = plt.figure(figsize=(8, 8))
sns.heatmap(con_mat, annot=True,cmap=plt.cm.Spectral,fmt='d',xticklabels=["Neutral","Positive","Negative"], yticklabels=["Neutral","Positive","Negative"])
plt.tight_layout()
plt.ylabel('True label')
plt.xlabel('Predicted label')
plt.show()
svns_neu_bn = []
for i in range(0,len(Y_val)):
neu = cosine(cl_32[i], centers_batchnorm[2])/2
svns_neu_bn.append(1-neu)
print(len(svns_neu_bn))
svns_pos_bn = []
for i in range(0,len(Y_val)):
pos = cosine(cl_32[i], centers_batchnorm[0])/2
svns_pos_bn.append(1-pos)
print(len(svns_pos_bn))
svns_neg_bn = []
for i in range(0,len(Y_val)):
neg = cosine(cl_32[i], centers_batchnorm[1])/2
svns_neg_bn.append(1-neg)
print(len(svns_neg_bn))
pred_colour = []
for i in range(0,len(pred)):
if y_kmeans_batchnorm[i] == 0:
pred_colour.append("Neutral")
if y_kmeans_batchnorm[i] == 1:
pred_colour.append("Positive")
if y_kmeans_batchnorm[i] == 2:
pred_colour.append("Negative")
test_df = pd.DataFrame({'SVNS Positive':svns_pos_bn, 'SVNS Negative':svns_neg_bn, 'SVNS Neutral':svns_neu_bn, 'Labels:':pred_colour})
fig = px.scatter_3d(test_df, x='SVNS Positive', y='SVNS Negative', z='SVNS Neutral', color='Labels:')
fig.update_traces(
marker={
'size': 1,
'opacity': 1,
'colorscale' : 'viridis',
}
)
fig.update_layout(legend= {'itemsizing': 'constant'})
fig.update_layout(width = 850, height = 750)
fig.update_layout(margin=dict(l=0, r=0, b=0, t=0))
```
<h3>GRU</h3>
```
model.layers[-5]
with strategy.scope():
cl_model = keras.Model(model.input, (model.layers[-5].output))
cl_32 = cl_model.predict([val_encodings["input_ids"],val_encodings["attention_mask"]])
kmeans = KMeans(n_clusters=3, random_state=4).fit(cl_32)
y_kmeans_gru = kmeans.predict(cl_32)
for i in range(0,len(y_kmeans_gru)):
if(y_kmeans_gru[i] == 0):
y_kmeans_gru[i] = 1
elif(y_kmeans_gru[i] == 1):
y_kmeans_gru[i] = 2
else:
y_kmeans_gru[i] = 0
centers_gru = kmeans.cluster_centers_
con_mat = tf.math.confusion_matrix(labels=Y_val, predictions=y_kmeans_gru)
print(con_mat)
import seaborn as sns
import matplotlib.pyplot as plt
figure = plt.figure(figsize=(8, 8))
sns.set(font_scale=1.5)
sns.heatmap(con_mat, annot=True,cmap=plt.cm.Spectral,fmt='d',xticklabels=["Neutral","Positive","Negative"], yticklabels=["Neutral","Positive","Negative"])
plt.tight_layout()
plt.ylabel('True label')
plt.xlabel('Predicted label')
plt.show()
from sklearn.metrics import classification_report
target_names = ['Neutral', 'Positive', 'Negative']
print(classification_report(Y_val, y_kmeans_gru, digits=3, target_names=target_names))
svns_neu_gru = []
for i in range(0,len(Y_val)):
neu = cosine(cl_32[i], centers_gru[2])/2
svns_neu_gru.append(1-neu)
print(len(svns_neu_gru))
svns_pos_gru = []
for i in range(0,len(Y_val)):
pos = cosine(cl_32[i], centers_gru[0])/2
svns_pos_gru.append(1-pos)
print(len(svns_pos_gru))
svns_neg_gru = []
for i in range(0,len(Y_val)):
neg = cosine(cl_32[i], centers_gru[1])/2
svns_neg_gru.append(1-neg)
print(len(svns_neg_gru))
pred_colour = []
for i in range(0,len(pred)):
if y_kmeans_gru[i] == 0:
pred_colour.append("Neutral")
if y_kmeans_gru[i] == 1:
pred_colour.append("Positive")
if y_kmeans_gru[i] == 2:
pred_colour.append("Negative")
test_df = pd.DataFrame({'SVNS Positive':svns_pos_gru, 'SVNS Negative':svns_neg_gru, 'SVNS Neutral':svns_neu_gru, 'Labels:':pred_colour})
fig = px.scatter_3d(test_df, x='SVNS Positive', y='SVNS Negative', z='SVNS Neutral', color='Labels:')
fig.update_traces(
marker={
'size': 1,
'opacity': 1,
'colorscale' : 'viridis',
}
)
fig.update_layout(legend= {'itemsizing': 'constant'})
fig.update_layout(width = 850, height = 750)
fig.update_layout(margin=dict(l=0, r=0, b=0, t=0))
```