# Load the dataset
```
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
import re
import seaborn as sns
%matplotlib inline
import warnings
warnings.filterwarnings("ignore", category=DeprecationWarning)
anime = pd.read_csv("anime.csv")
anime.head()
anime.isnull().sum()
```
# Data preprocessing
## Episodes
Many anime have an unknown number of episodes even when they have similar ratings. On top of that, many very popular anime such as Naruto Shippuden and Attack on Titan Season 2 were still airing when the data was collected, so their episode counts are recorded as "Unknown". For some of my favorite anime I filled in the episode counts manually; for the rest I had to make some educated guesses. The changes I made are:

- Anime in the Hentai category generally have 1 episode in my experience, so I filled the unknown values with 1.
- Anime of type "OVA" (Original Video Animation) are generally one or two episodes long (though the popular ones often have 2 or 3 episodes), but I decided to fill the unknown episode counts with 1 as well.
- Anime of type "Movie" are counted as 1 episode, as per the dataset overview.
- For all other anime with an unknown number of episodes, I filled the missing values with the median, which is 2.
```
anime[anime['episodes']=='Unknown'].head(3)
anime.loc[(anime["genre"]=="Hentai") & (anime["episodes"]=="Unknown"),"episodes"] = "1"
anime.loc[(anime["type"]=="OVA") & (anime["episodes"]=="Unknown"),"episodes"] = "1"
anime.loc[(anime["type"] == "Movie") & (anime["episodes"] == "Unknown"), "episodes"] = "1"  # select the "episodes" column, otherwise the whole row is overwritten with "1"
known_animes = {"Naruto Shippuuden":500, "One Piece":784,"Detective Conan":854, "Dragon Ball Super":86,
"Crayon Shin chan":942, "Yu Gi Oh Arc V":148,"Shingeki no Kyojin Season 2":25,
"Boku no Hero Academia 2nd Season":25,"Little Witch Academia TV":25}
for k, v in known_animes.items():
    anime.loc[anime["name"] == k, "episodes"] = v
anime["episodes"] = pd.to_numeric(anime["episodes"].replace("Unknown", np.nan))  # cast to numeric so the median below works
anime["episodes"].fillna(anime["episodes"].median(), inplace=True)
```
### Type
```
pd.get_dummies(anime[["type"]]).head()
```
### Rating, Members and Genre
For the members feature, I just converted the strings to floats. Episode counts, members, and rating are not categorical variables, and their ranges differ widely: rating runs from 0 to 10 in the dataset, while the episode count can exceed 800 for long-running popular anime such as One Piece and Naruto. So I ended up using sklearn.preprocessing.MinMaxScaler, which scales every feature to the range 0-1. Many anime have unknown ratings; these were filled with the median rating.
```
anime["rating"] = anime["rating"].astype(float)
anime["rating"].fillna(anime["rating"].median(),inplace = True)
anime["members"] = anime["members"].astype(float)
# Scaling
anime_features = pd.concat([anime["genre"].str.get_dummies(sep=","),
pd.get_dummies(anime[["type"]]),
anime[["rating"]],anime[["members"]],anime["episodes"]],axis=1)
anime["name"] = anime["name"].map(lambda name:re.sub('[^A-Za-z0-9]+', " ", name))
anime_features.head()
anime_features.columns
from sklearn.preprocessing import MinMaxScaler
min_max_scaler = MinMaxScaler()
anime_features = min_max_scaler.fit_transform(anime_features)
np.round(anime_features,2)
```
# Fit Nearest Neighbor To Data
```
from sklearn.neighbors import NearestNeighbors
nbrs = NearestNeighbors(n_neighbors=6, algorithm='ball_tree').fit(anime_features)
distances, indices = nbrs.kneighbors(anime_features)
```
# Query examples and helper functions
Many anime names are not documented consistently: they are often given in Japanese rather than English, and the spelling varies. For that reason I also created a helper function, get_id_from_partial_name, to look up anime ids from part of a name.
```
def get_index_from_name(name):
    return anime[anime["name"] == name].index.tolist()[0]

all_anime_names = list(anime.name.values)

def get_id_from_partial_name(partial):
    for name in all_anime_names:
        if partial in name:
            print(name, all_anime_names.index(name))

def print_similar_animes(query=None, id=None):
    """print_similar_animes can search for similar anime both by id and by name."""
    if id:
        for iid in indices[id][1:]:
            print(anime.iloc[iid]["name"])  # .ix is deprecated; .iloc indexes by position
    if query:
        found_id = get_index_from_name(query)
        for iid in indices[found_id][1:]:
            print(anime.iloc[iid]["name"])
```
# Query Examples
```
print_similar_animes(query="Naruto")
print_similar_animes("Noragami")
print_similar_animes("Mushishi")
print_similar_animes("Gintama")
print_similar_animes("Fairy Tail")
get_id_from_partial_name("Naruto")
print_similar_animes(id=719)
print_similar_animes("Kimi no Na wa ")
```
# LAB 3b: BigQuery ML Model Linear Feature Engineering/Transform.
**Learning Objectives**
1. Create and evaluate linear model with BigQuery's ML.FEATURE_CROSS
1. Create and evaluate linear model with BigQuery's ML.FEATURE_CROSS and ML.BUCKETIZE
1. Create and evaluate linear model with ML.TRANSFORM
## Introduction
In this notebook, we will create multiple linear models to predict the weight of a baby before it is born, applying increasing levels of feature engineering with BigQuery ML. If you need a refresher, you can go back and look at how we made a baseline model in the previous notebook [BQML Baseline Model](../solutions/3a_bqml_baseline_babyweight.ipynb).
We will create and evaluate a linear model using BigQuery's ML.FEATURE_CROSS, create and evaluate a linear model using BigQuery's ML.FEATURE_CROSS and ML.BUCKETIZE, and create and evaluate a linear model using BigQuery's ML.TRANSFORM.
## Load necessary libraries
Check that the Google BigQuery library is installed and if not, install it.
```
!sudo chown -R jupyter:jupyter /home/jupyter/training-data-analyst
%%bash
sudo pip freeze | grep google-cloud-bigquery==1.6.1 || \
sudo pip install google-cloud-bigquery==1.6.1
```
## Verify tables exist
Run the following cells to verify that we previously created the dataset and data tables. If not, go back to lab [1b_prepare_data_babyweight](../solutions/1b_prepare_data_babyweight.ipynb) to create them.
```
%%bigquery
-- LIMIT 0 is a free query; this allows us to check that the table exists.
SELECT * FROM babyweight.babyweight_data_train
LIMIT 0
%%bigquery
-- LIMIT 0 is a free query; this allows us to check that the table exists.
SELECT * FROM babyweight.babyweight_data_eval
LIMIT 0
```
## Model 1: Apply the ML.FEATURE_CROSS clause to categorical features
BigQuery ML now has ML.FEATURE_CROSS, a pre-processing clause that performs a feature cross. Its syntax is ML.FEATURE_CROSS(STRUCT(features), degree), where features is a comma-separated list of categorical columns and degree is the highest degree of all combinations.
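As a rough mental model (plain Python, not BigQuery syntax), a feature cross turns each row's combination of category values into a single synthetic category; the column values below are made-up examples:

```python
def feature_cross(*columns):
    # Concatenate each row's category values into one synthetic category,
    # loosely mirroring what ML.FEATURE_CROSS(STRUCT(...)) produces per row.
    return ["_".join(values) for values in zip(*columns)]

is_male = ["true", "false", "true"]
plurality = ["Single(1)", "Twins(2)", "Single(1)"]
crossed = feature_cross(is_male, plurality)
# Each distinct combination ("true_Single(1)", "false_Twins(2)", ...) becomes
# its own category that the linear model can weight independently.
```

The exact string format BigQuery emits for crossed values differs; this only illustrates why crossing lets a linear model learn an interaction between two categorical features.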
#### Create model with feature cross.
```
%%bigquery
CREATE OR REPLACE MODEL
babyweight.model_1
OPTIONS (
MODEL_TYPE="LINEAR_REG",
INPUT_LABEL_COLS=["weight_pounds"],
L2_REG=0.1,
DATA_SPLIT_METHOD="NO_SPLIT") AS
SELECT
weight_pounds,
is_male,
mother_age,
plurality,
gestation_weeks,
ML.FEATURE_CROSS(
STRUCT(
is_male,
plurality)
) AS gender_plurality_cross
FROM
babyweight.babyweight_data_train
```
#### Create two SQL statements to evaluate the model.
```
%%bigquery
SELECT
*
FROM
ML.EVALUATE(MODEL babyweight.model_1,
(
SELECT
weight_pounds,
is_male,
mother_age,
plurality,
gestation_weeks,
ML.FEATURE_CROSS(
STRUCT(
is_male,
plurality)
) AS gender_plurality_cross
FROM
babyweight.babyweight_data_eval
))
%%bigquery
SELECT
SQRT(mean_squared_error) AS rmse
FROM
ML.EVALUATE(MODEL babyweight.model_1,
(
SELECT
weight_pounds,
is_male,
mother_age,
plurality,
gestation_weeks,
ML.FEATURE_CROSS(
STRUCT(
is_male,
plurality)
) AS gender_plurality_cross
FROM
babyweight.babyweight_data_eval
))
```
## Model 2: Apply the BUCKETIZE Function
ML.BUCKETIZE is a pre-processing function that creates buckets (i.e., bins): it converts a continuous numerical feature into a string feature whose values are the bucket names. Its syntax is ML.BUCKETIZE(feature, split_points), where split_points is an array of numerical points that determine the bucket boundaries.
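A hedged Python sketch of the same idea, using NumPy's `digitize` (the bucket names here are invented and do not match BigQuery's actual naming):

```python
import numpy as np

def bucketize(values, split_points):
    # Map each numeric value to a string bucket name. Values below the first
    # split point land in bin_0; values at or beyond the last split point land
    # in the final bucket.
    return [f"bin_{i}" for i in np.digitize(values, split_points)]

mother_age = [14, 15, 30, 46]
buckets = bucketize(mother_age, split_points=list(range(15, 46)))  # splits at 15..45
```

With one-year-wide splits from 15 to 45 (as in the model below), each age becomes its own category, which the feature cross can then combine with `is_male`.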
```
%%bigquery
CREATE OR REPLACE MODEL
babyweight.model_2
OPTIONS (
MODEL_TYPE="LINEAR_REG",
INPUT_LABEL_COLS=["weight_pounds"],
L2_REG=0.1,
DATA_SPLIT_METHOD="NO_SPLIT") AS
SELECT
weight_pounds,
is_male,
mother_age,
plurality,
gestation_weeks,
ML.FEATURE_CROSS(
STRUCT(
is_male,
ML.BUCKETIZE(
mother_age,
GENERATE_ARRAY(15, 45, 1)
) AS bucketed_mothers_age,
plurality,
ML.BUCKETIZE(
gestation_weeks,
GENERATE_ARRAY(17, 47, 1)
) AS bucketed_gestation_weeks
)
) AS crossed
FROM
babyweight.babyweight_data_train
```
Let's now retrieve the training statistics and evaluate the model.
```
%%bigquery
SELECT * FROM ML.TRAINING_INFO(MODEL babyweight.model_2)
```
We now evaluate our model on our eval dataset:
```
%%bigquery
SELECT
*
FROM
ML.EVALUATE(MODEL babyweight.model_2,
(
SELECT
weight_pounds,
is_male,
mother_age,
plurality,
gestation_weeks,
ML.FEATURE_CROSS(
STRUCT(
is_male,
ML.BUCKETIZE(
mother_age,
GENERATE_ARRAY(15, 45, 1)
) AS bucketed_mothers_age,
plurality,
ML.BUCKETIZE(
gestation_weeks,
GENERATE_ARRAY(17, 47, 1)
) AS bucketed_gestation_weeks
)
) AS crossed
FROM
babyweight.babyweight_data_eval))
```
Let's select the `mean_squared_error` from the evaluation table we just computed and take its square root to obtain the RMSE.
```
%%bigquery
SELECT
SQRT(mean_squared_error) AS rmse
FROM
ML.EVALUATE(MODEL babyweight.model_2,
(
SELECT
weight_pounds,
is_male,
mother_age,
plurality,
gestation_weeks,
ML.FEATURE_CROSS(
STRUCT(
is_male,
ML.BUCKETIZE(
mother_age,
GENERATE_ARRAY(15, 45, 1)
) AS bucketed_mothers_age,
plurality,
ML.BUCKETIZE(
gestation_weeks,
GENERATE_ARRAY(17, 47, 1)
) AS bucketed_gestation_weeks
)
) AS crossed
FROM
babyweight.babyweight_data_eval))
```
## Model 3: Apply the TRANSFORM clause
Before we perform our prediction, we should encapsulate the entire feature set in a TRANSFORM clause. This way we can have the same transformations applied for training and prediction without modifying the queries.
Let's apply the TRANSFORM clause to the model_3 and run the query.
```
%%bigquery
CREATE OR REPLACE MODEL
babyweight.model_3
TRANSFORM(
weight_pounds,
is_male,
mother_age,
plurality,
gestation_weeks,
ML.FEATURE_CROSS(
STRUCT(
is_male,
ML.BUCKETIZE(
mother_age,
GENERATE_ARRAY(15, 45, 1)
) AS bucketed_mothers_age,
plurality,
ML.BUCKETIZE(
gestation_weeks,
GENERATE_ARRAY(17, 47, 1)
) AS bucketed_gestation_weeks
)
) AS crossed
)
OPTIONS (
MODEL_TYPE="LINEAR_REG",
INPUT_LABEL_COLS=["weight_pounds"],
L2_REG=0.1,
DATA_SPLIT_METHOD="NO_SPLIT") AS
SELECT
*
FROM
babyweight.babyweight_data_train
```
Let's retrieve the training statistics:
```
%%bigquery
SELECT * FROM ML.TRAINING_INFO(MODEL babyweight.model_3)
```
We now evaluate our model on our eval dataset:
```
%%bigquery
SELECT
*
FROM
ML.EVALUATE(MODEL babyweight.model_3,
(
SELECT
*
FROM
babyweight.babyweight_data_eval
))
```
Let's select the `mean_squared_error` from the evaluation table we just computed and take its square root to obtain the RMSE.
```
%%bigquery
SELECT
SQRT(mean_squared_error) AS rmse
FROM
ML.EVALUATE(MODEL babyweight.model_3,
(
SELECT
*
FROM
babyweight.babyweight_data_eval
))
```
## Lab Summary:
In this lab, we created and evaluated a linear model using BigQuery's ML.FEATURE_CROSS, created and evaluated a linear model using BigQuery's ML.FEATURE_CROSS and ML.BUCKETIZE, and created and evaluated a linear model using BigQuery's ML.TRANSFORM.
Copyright 2019 Google Inc. Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at http://www.apache.org/licenses/LICENSE-2.0 Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License
```
import tweepy
import pandas as pd
import numpy as np
import csv
from IPython.display import display
import matplotlib.pyplot as plt
import seaborn as sns
%matplotlib inline
# Credentials removed; substitute your own Twitter API keys here.
consumer_key = 'YOUR_CONSUMER_KEY'
consumer_secret = 'YOUR_CONSUMER_SECRET'
access_token = 'YOUR_ACCESS_TOKEN'
access_token_secret = 'YOUR_ACCESS_TOKEN_SECRET'
def twitter_setup():
    """
    Utility function to set up the Twitter API
    with the access keys provided above.
    """
    # Authentication and access using keys:
    auth = tweepy.OAuthHandler(consumer_key, consumer_secret)
    auth.set_access_token(access_token, access_token_secret)
    # Return API with authentication:
    api = tweepy.API(auth)
    return api
extractor = twitter_setup()
tweets = extractor.user_timeline(screen_name="modi", count=200)
print("Number of tweets extracted: {}.\n".format(len(tweets)))
print("10 most recent tweets:\n")
for tweet in tweets[:10]:
    print(tweet.text)
    print()
# We create a pandas dataframe as follows:
data = pd.DataFrame(data=[tweet.text for tweet in tweets], columns=['Tweets']) #we are taking the text of tweet in tweets
display(data.head(10))
print(dir(tweets[0]))
print(tweets[0].id)
print(tweets[0].created_at)
print(tweets[0].source)
print(tweets[0].favorite_count)
print(tweets[0].retweet_count)
print(tweets[0].geo)
print(tweets[0].coordinates)
print(tweets[0].entities)
print(tweets[0].text)
data['len'] = np.array([len(tweet.text) for tweet in tweets])
data['ID'] = np.array([tweet.id for tweet in tweets])
data['Date'] = np.array([tweet.created_at for tweet in tweets])
data['Source'] = np.array([tweet.source for tweet in tweets])
data['Likes'] = np.array([tweet.favorite_count for tweet in tweets])
data['RTs'] = np.array([tweet.retweet_count for tweet in tweets])
display(data.head(10))
fav_max = np.max(data['Likes'])
rt_max = np.max(data['RTs'])
fav = data[data.Likes == fav_max].index[0]
rt = data[data.RTs == rt_max].index[0]
# Max FAVs:
print("The tweet with more likes is: \n{}".format(data['Tweets'][fav]))
print("Number of likes: {}".format(fav_max))
print("{} characters.\n".format(data['len'][fav]))
# Max RTs:
print("The tweet with more retweets is: \n{}".format(data['Tweets'][rt]))
print("Number of retweets: {}".format(rt_max))
print("{} characters.\n".format(data['len'][rt]))
# Persist the dataframe to CSV; pandas writes the file directly,
# so no manual csv.writer is needed.
data.to_csv('modi.csv')
tlen = pd.Series(data=data['len'].values, index=data['Date'])
tfav = pd.Series(data=data['Likes'].values, index=data['Date'])
tret = pd.Series(data=data['RTs'].values, index=data['Date'])
# Lengths over time:
tlen.plot(figsize=(10,4), color='r');
tfav.plot(figsize=(10,4),color='b',label="Likes", legend=True)
# obtain all possible sources:
sources = []
for source in data['Source']:
    if source not in sources:
        sources.append(source)
# We print sources list:
print("Creation of content sources:")
for source in sources:
    print("* {}".format(source))
# create a numpy vector with the count for each source:
percent = np.zeros(len(sources))
for source in data['Source']:
    for index in range(len(sources)):
        if source == sources[index]:
            percent[index] += 1
            break  # each tweet matches exactly one source
percent /= len(data['Source'])  # normalize by the number of tweets, not by 100
# Pie chart:
pie_chart = pd.Series(percent, index=sources, name='Sources')
pie_chart.plot.pie(fontsize=11, autopct='%.2f', figsize=(6, 6));
from textblob import TextBlob
import re
def clean_tweet(tweet):
    '''
    Utility function to clean the text in a tweet by removing
    links and special characters using regex.
    '''
    return ' '.join(re.sub("(@[A-Za-z0-9]+)|([^0-9A-Za-z \t])|(\w+:\/\/\S+)", " ", tweet).split())

def analize_sentiment(tweet):
    '''
    Utility function to classify the polarity of a tweet
    using textblob.
    '''
    analysis = TextBlob(clean_tweet(tweet))
    if analysis.sentiment.polarity > 0:
        return 1
    elif analysis.sentiment.polarity == 0:
        return 0
    else:
        return -1
# classify each tweet, then print the sentiment percentages:
data['SA'] = np.array([analize_sentiment(tweet) for tweet in data['Tweets']])
pos = (data['SA'] > 0).sum(); neu = (data['SA'] == 0).sum(); neg = (data['SA'] < 0).sum()
print("Percentage of positive tweets: {}%".format(pos*100/len(data)))
print("Percentage of neutral tweets: {}%".format(neu*100/len(data)))
print("Percentage of negative tweets: {}%".format(neg*100/len(data)))
```
# Setting Up the Notebook:
This notebook requires `Pillow` which is a fork of `PIL`. To install this module run the cell below:
```
import sys
prefix = '\"%s\"' % sys.prefix
!conda install --yes --prefix {prefix} pillow
```
# About Item Thumbnails
A thumbnail image is created by default when you add the item to the site. It appears in galleries, search results, contents, and the item page. You can create and load a different image if the default image does not convey the information you want.
In ArcGIS Online, you can drag an image or browse to a file. For best results, add an image that is 600 pixels wide by 400 pixels high or larger with an aspect ratio of 1.5:1 in a web file format such as PNG, JPEG, or GIF. Pan and zoom to what you want to appear in your thumbnail. Depending on the size and resolution of your image file and how far you zoom in to customize the thumbnail, the image may be resampled and scaled when it's saved. If you add an image in GIF or JPEG format, it will be converted to PNG when it's saved.
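As a sketch of that guidance, Pillow (which this notebook already requires) can produce a 600x400 PNG thumbnail; the file paths below are illustrative placeholders, not names from this notebook:

```python
from PIL import Image, ImageOps

def make_thumbnail(src_path, dst_path, size=(600, 400)):
    # Crop-and-resize to the recommended 600x400 (1.5:1) size and save as PNG,
    # matching the conversion behavior described above. ImageOps.fit crops to
    # the target aspect ratio instead of stretching the image.
    with Image.open(src_path) as img:
        ImageOps.fit(img, size).save(dst_path, format="PNG")
```

Using `ImageOps.fit` rather than a plain `resize` avoids distorting sources whose aspect ratio is not already 1.5:1.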
## Finding Missing and Invalid Images
This notebook shows how a user can find images under the 600x400 pixel size for a given user.
```
import os
import io
import base64
import shutil
from getpass import getpass
import pandas as pd
from PIL import Image
from IPython.display import HTML
import ipywidgets as widgets
from arcgis.gis import GIS, Item, User
pd.set_option('display.max_colwidth', None)  # -1 is deprecated; None means no truncation
```
### Connect to the GIS
Use the login credentials to your site to populate the interactive application below.
```
username = getpass()
password = getpass()
gis = GIS(username=username, password=password, verify_cert=False)
def get_images(username, gis, min_width=600, min_height=400, show_image=True):
    """
    Finds all images for a given user under a specific image size in pixels.

    ================  ===========================================
    **Inputs**        **Description**
    ----------------  -------------------------------------------
    username          Required string. The name of the user whose
                      items to examine.
    ----------------  -------------------------------------------
    gis               Required GIS. The connection to Portal/AGOL.
    ----------------  -------------------------------------------
    min_height        Optional integer. The height in pixels of
                      the image. Any image below this height will
                      be returned in the dataframe.
    ----------------  -------------------------------------------
    min_width         Optional integer. The width of the image.
                      Anything below this width will be returned.
    ----------------  -------------------------------------------
    show_image        Optional boolean. If True, the results will
                      be returned as an HTML table. Else, they
                      will be returned as a pandas DataFrame.
    ================  ===========================================

    returns: string or pd.DataFrame
    """
    results = []
    show_image_columns = ['title', 'username', 'folder', 'item_id', 'item_thumbnail', 'width', 'height']
    no_show_image_columns = ['title', 'username', 'folder', 'item_id', 'width', 'height']
    user = gis.users.get(username)
    username = user.username
    folders = [fld['title'] for fld in user.folders] + [None]
    for folder in folders:
        items = user.items(folder=folder, max_items=1000)
        for item in items:
            thumbnail = item.thumbnail
            if show_image:
                if thumbnail:
                    image_bytes = item.get_thumbnail()
                    img = Image.open(io.BytesIO(image_bytes))
                    b64_item = base64.b64encode(image_bytes)
                    b64thmb = "data:image/png;base64," + str(b64_item, "utf-8") + "' width='200' height='133"
                    item_thumbnail = """<img src='""" + str(b64thmb) + """' class="itemThumbnail">"""
                    results.append([item.title, username, folder, item.id, item_thumbnail] + list(img.size))
                    img.close()
                    del img
                else:
                    results.append([item.title, username, folder, item.id, "", -999, -999])
            else:
                if thumbnail:
                    image_bytes = item.get_thumbnail()
                    img = Image.open(io.BytesIO(image_bytes))
                    results.append([item.title, username, folder, item.id] + list(img.size))
                    img.close()
                    del img
                else:
                    results.append([item.title, username, folder, item.id, None, None])
    if show_image:
        df = pd.DataFrame(results, columns=show_image_columns)
        q = (df.width <= float(min_width)) | (df.height < float(min_height)) | (df.height.isnull()) | (df.width.isnull())
        df = df[q].copy().reset_index(drop=True)
        return HTML(df.to_html(escape=False))
    else:
        df = pd.DataFrame(results, columns=no_show_image_columns)
        q = (df.width <= float(min_width)) | (df.height < float(min_height)) | (df.height.isnull()) | (df.width.isnull())
        df = df[q].copy().reset_index(drop=True)
        return df
```
# Usage
To get a DataFrame that can be queried, set `show_image` to `False`. This will return an object that can be further used for analysis. The dataframe reports back the width/height of the image. If an image is missing, the value will be **NaN** for width/height.
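For instance, with a hypothetical result frame shaped like the one `get_images(..., show_image=False)` returns (the titles below are made up), the NaN rows can be separated from the merely undersized ones:

```python
import numpy as np
import pandas as pd

# Hypothetical rows; NaN width/height marks items with no thumbnail at all.
df = pd.DataFrame({
    "title": ["roads", "parcels", "hydrants"],
    "width": [600.0, 200.0, np.nan],
    "height": [400.0, 133.0, np.nan],
})
missing = df[df["width"].isnull()]    # items with no thumbnail uploaded
undersized = df[df["width"] < 600]    # below the recommended width (NaN compares False)
```

Note that NaN comparisons are always False, so a plain `< 600` filter silently drops the missing-thumbnail rows; `isnull()` is needed to catch them.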
### Example: Retrieve All Thumbnails Under 400x300
```
username = "geodev0"
df = get_images(username, gis=gis,
min_height=300, min_width=400,
show_image=False)
df
```
## Item Thumbnails Back as HTML Report
Sometimes just creating a table to see what is there is good enough. By setting `show_image` to `True`, the method allows for a quick visualization approach to the thumbnail problem.
### Example: Find all Images Under 600x400 Pixels:
```
df = get_images(username, gis=gis,
show_image=True)
df
```
```
import pandas as pd
from google.colab import drive
drive.mount('/content/gdrive')
from keras.preprocessing.text import Tokenizer
import numpy as np
from keras.preprocessing.sequence import pad_sequences
import keras.utils as np_utils
from keras.models import Sequential
from keras.layers import Activation, Flatten, Embedding, LSTM, Dense, Dropout, TimeDistributed, Bidirectional
#from keras.layers import CuDNNLSTM
import tensorflow
tensorflow.random.set_seed(2)
from numpy.random import seed
seed(1)
from keras.callbacks import EarlyStopping, ModelCheckpoint
data = pd.read_csv("/content/gdrive/My Drive/Showerthoughts_200k.csv", header = None, encoding = "latin1", skiprows= 10000, nrows=2000)
data.columns = ['title']
data.head(-10)
import string
"""def clean_tweet(data):
data = data.replace('--', ' ')
tokens = data.split()
table = str.maketrans('','', string.punctuation)
tokens = [w.translate(table) for w in tokens]
tokens = [word for word in tokens if word.isalpha()]
tokens = [word.lower() for word in tokens]
return tokens
corpus = [clean_tweet(x) for x in data['title']] """
def clean_text(txt):
    txt = "".join(v for v in txt if v not in string.punctuation).lower()
    txt = txt.encode("utf8").decode("ascii", 'ignore')
    return txt
corpus = [clean_text(x) for x in data['title']]
corpus[:5]
len(corpus)
tokenizer = Tokenizer()
def get_sequence_of_tokens(corpus):
    # tokenization
    tokenizer.fit_on_texts(corpus)
    total_words = len(tokenizer.word_index) + 1
    # convert data to a sequence of tokens
    input_sequences = []
    for line in corpus:
        token_list = tokenizer.texts_to_sequences([line])[0]
        for i in range(1, len(token_list)):
            n_gram_sequence = token_list[:i+1]
            input_sequences.append(n_gram_sequence)
    return input_sequences, total_words
inp_sequences, total_words = get_sequence_of_tokens(corpus)
inp_sequences[:10]
def generate_padded_sequences(input_sequences):
    max_sequence_len = max([len(x) for x in input_sequences])
    input_sequences = np.array(pad_sequences(input_sequences, maxlen=max_sequence_len, padding='pre'))
    predictors, label = input_sequences[:, :-1], input_sequences[:, -1]
    label = np_utils.to_categorical(label, num_classes=total_words)
    return predictors, label, max_sequence_len
predictors, label, max_sequence_len = generate_padded_sequences(inp_sequences)
def create_model(max_sequence_len, total_words):
    input_len = max_sequence_len - 1
    model = Sequential()
    # Add input embedding layer
    model.add(Embedding(total_words, 10, input_length=input_len))
    # Add hidden LSTM layer
    model.add(LSTM(100))
    model.add(Dropout(0.1))
    # Add output layer
    model.add(Dense(total_words, activation='softmax'))
    model.compile(loss='categorical_crossentropy', optimizer='adam')
    return model
model = create_model(max_sequence_len, total_words)
model.summary()
history = model.fit(predictors, label, epochs=30, verbose=2)
def generate_text(seed_text, next_words, model, max_sequence_len):
    for _ in range(next_words):
        token_list = tokenizer.texts_to_sequences([seed_text])[0]
        token_list = pad_sequences([token_list], maxlen=max_sequence_len-1, padding='pre')
        # predict_classes was removed in recent Keras; take the argmax of the probabilities
        predicted = np.argmax(model.predict(token_list, verbose=0), axis=-1)[0]
        output_word = ""
        for word, index in tokenizer.word_index.items():
            if index == predicted:
                output_word = word
                break
        seed_text += " " + output_word
    return seed_text.title()
print (generate_text("Batman", 19, model, max_sequence_len))
model_save_name = 'showerthoughts.h5'
path = F"/content/gdrive/My Drive/WisdomAI/{model_save_name}"
model.save(path)  # this is a Keras model, so use model.save rather than torch.save
```
```
import sympy as sym
from sympy.polys import subresultants_qq_zz
sym.init_printing()
```
Resultant
----------
If $p$ and $q$ are two polynomials over a commutative ring with identity which can be factored into linear factors,
$$p(x)= a_0 (x - r_1) (x- r_2) \dots (x - r_m) $$
$$q(x)=b_0 (x - s_1)(x - s_2) \dots (x - s_n)$$
then the resultant $R(p,q)$ of $p$ and $q$ is defined as:
$$R(p,q)=a^n_{0}b^m_{0}\prod_{i=1}^{m}\prod_{j=1}^{n}(r_i - s_j)$$
Since the resultant is a symmetric function of the roots of the polynomials $p$ and $q$, it can be expressed as a polynomial in the coefficients of $p$ and $q$.
From the definition, it is clear that the resultant will equal zero if and only if $p$ and $q$ have at least one common root. Thus, the resultant becomes very useful in identifying whether common roots exist.
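This criterion can be checked directly with `sympy.resultant`, using the same polynomials examined later in this notebook:

```python
import sympy as sym

x = sym.symbols("x")
f = x**2 - 5*x + 6   # roots 2 and 3
g = x**2 - 3*x + 2   # roots 1 and 2 -> shares the root x = 2 with f
z = x**2 - 7*x + 12  # roots 3 and 4
h = x**2 - x         # roots 0 and 1 -> no root in common with z
shared = sym.resultant(f, g, x)     # 0, since f and g share a root
disjoint = sym.resultant(z, h, x)   # nonzero, since z and h share none
```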
Sylvester's Resultant
---------------------
It can be proven that the determinant of the Sylvester matrix is equal to the resultant. Assume the two polynomials:
- $$p(x) = a_0 x^m + a_1 x^{m-1}+\dots+a_{m-1}x+a_m$$
- $$q(x) = b_0 x^n + b_1 x^{n-1}+\dots+b_{n-1}x+b_n$$
Then the Sylvester matrix is the $(m+n)\times(m+n)$ matrix:
$$
\left|
\begin{array}{cccccc}
a_{0} & a_{1} & a_{2} & \ldots & a_{m} & 0 & \ldots &0 \\
0 & a_{0} & a_{1} & \ldots &a_{m-1} & a_{m} & \ldots &0 \\
\vdots & \ddots & \ddots& \ddots& \ddots& \ddots& \ddots&\vdots \\
0 & 0 & \ddots & \ddots& \ddots& \ddots& \ddots&a_{m}\\
b_{0} & b_{1} & b_{2} & \ldots & b_{n} & 0 & \ldots & 0 \\
0 & b_{0} & b_{1} & \ldots & b_{n-1} & b_{n} & \ldots & 0\\
\ddots &\ddots & \ddots& \ddots& \ddots& \ddots& \ddots&\ddots \\
0 & 0 & \ldots& \ldots& \ldots& \ldots& \ldots& b_{n}\\
\end{array}
\right| = \Delta $$
Thus $\Delta$ is equal to $R(p, q)$.
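We can sanity-check this identity with sympy before moving on; the two polynomials here are arbitrary examples, not ones used elsewhere in this notebook:

```python
import sympy as sym
from sympy.polys import subresultants_qq_zz

x = sym.symbols("x")
p = x**3 + 2*x + 1
q = x**2 - x + 4
det = subresultants_qq_zz.sylvester(p, q, x).det()  # determinant of the Sylvester matrix
res = sym.resultant(p, q, x)                        # the resultant R(p, q)
# The two quantities agree, as the identity above states.
```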
Example: Existence of common roots
------------------------------------------
Two examples are considered here. Note that if the system has a common root, we expect the resultant/determinant to equal zero.
```
x = sym.symbols('x')
```
**A common root exists.**
```
f = x ** 2 - 5 * x + 6
g = x ** 2 - 3 * x + 2
f, g
subresultants_qq_zz.sylvester(f, g, x)
subresultants_qq_zz.sylvester(f, g, x).det()
```
**A common root does not exist.**
```
z = x ** 2 - 7 * x + 12
h = x ** 2 - x
z, h
matrix = subresultants_qq_zz.sylvester(z, h, x)
matrix
matrix.det()
```
Example: Two variables, eliminator
----------
When we have a system in two variables, we solve for one variable and treat the second as a coefficient. This lets us find the roots of the equations, which is why the resultant is often referred to as the eliminator.
```
y = sym.symbols('y')
f = x ** 2 + x * y + 2 * x + y -1
g = x ** 2 + 3 * x - y ** 2 + 2 * y - 1
f, g
matrix = subresultants_qq_zz.sylvester(f, g, y)
matrix
matrix.det().factor()
```
Three roots for x $\in \{-3, 0, 1\}$.
For $x=-3$, $y=1$.
```
f.subs({x:-3}).factor(), g.subs({x:-3}).factor()
f.subs({x:-3, y:1}), g.subs({x:-3, y:1})
```
For $x=0$, $y=1$.
```
f.subs({x:0}).factor(), g.subs({x:0}).factor()
f.subs({x:0, y:1}), g.subs({x:0, y:1})
```
For $x=1$, $y=-1$ is the common root.
```
f.subs({x:1}).factor(), g.subs({x:1}).factor()
f.subs({x:1, y:-1}), g.subs({x:1, y:-1})
f.subs({x:1, y:3}), g.subs({x:1, y:3})
```
Example: Generic Case
-----------------
```
a = sym.IndexedBase("a")
b = sym.IndexedBase("b")
f = a[1] * x + a[0]
g = b[2] * x ** 2 + b[1] * x + b[0]
matrix = subresultants_qq_zz.sylvester(f, g, x)
matrix.det()
```
```
import numpy
import pylab
import sys
if './StarSoup' not in sys.path:
    sys.path.append('./StarSoup')
import StarSoup
import matplotlib
from matplotlib.ticker import FormatStrFormatter
```
# Prepare recipes
Fiducial
```
recipes = {}
recipes['fid'] = StarSoup.fiducial()
```
Lower common envelope $\alpha \lambda$
```
recipes['al01'] = StarSoup.fiducial()
recipes['al01']['binary evolution'] = lambda m1,mh,m2,a,e: StarSoup.CommonEnvelope.evolve_common_envelope(m1,mh,m2,a,e,al=0.1)
```
Natal kick with neutron star momentum
```
recipes['nkm'] = StarSoup.fiducial()
recipes['nkm']['binary evolution'] = StarSoup.EvolutionSequence.EvolutionSequence(
[StarSoup.CommonEnvelope.evolve_common_envelope,
StarSoup.NatalKicks.NatalKicks(StarSoup.NatalKicks.MomentumKick(1e-6*260).calc).evolve]
).evolve
```
Natal kick with neutron star velocity
```
recipes['nkv'] = StarSoup.fiducial()
recipes['nkv']['binary evolution'] = StarSoup.EvolutionSequence.EvolutionSequence(
[StarSoup.CommonEnvelope.evolve_common_envelope,
StarSoup.NatalKicks.NatalKicks(StarSoup.NatalKicks.VelocityKick(1e-6*260).calc).evolve]
).evolve
```
# Generate populations
```
populations = {key:StarSoup.cook(recipes[key], int(1e7)) for key in recipes}
```
# Predictions for gaia
```
star_number_scale = 5e7/len(populations['fid']['primary mass'])
gaia_masks = {key: StarSoup.whittle_gaia(populations[key]) for key in populations}
{key:star_number_scale*numpy.sum(populations[key]['statistical weight'][gaia_masks[key]])
for key in gaia_masks}
```
# Predictions for hipparcos
```
hip_masks = {key: StarSoup.whittle_hipparcos(populations[key]) for key in populations}
{key:star_number_scale*numpy.sum(populations[key]['statistical weight'][hip_masks[key]])
for key in hip_masks}
```
# Figures
## Black hole masses
```
%matplotlib inline
font = {'size':22}
matplotlib.rc('font',**font)
for key in populations:
    pylab.hist(populations[key]['black hole mass'][gaia_masks[key]],
               weights=populations[key]['statistical weight'][gaia_masks[key]]*star_number_scale,
               histtype='step',
               linewidth=4,
               label=key)
pylab.xlabel(r'Black hole mass [$M_{\odot}$]')
pylab.ylabel('Number')
pylab.yscale('log')
pylab.legend(bbox_to_anchor=(1,1))
#pylab.tight_layout()
pass
```
## Companion masses
```
%matplotlib inline
font = {'size':22}
matplotlib.rc('font',**font)
for key in populations:
    ls = '-'
    if key == 'al01':
        ls = '--'
    pylab.hist(populations[key]['terminal companion mass'][gaia_masks[key]],
               weights=populations[key]['statistical weight'][gaia_masks[key]]*star_number_scale,
               histtype='step',
               ls=ls,
               linewidth=4,
               label=key,
               bins=numpy.logspace(0,2,20))
pylab.xlabel(r'Companion mass [$M_{\odot}$]')
pylab.ylabel('Number')
pylab.xscale('log')
pylab.yscale('log')
pylab.ylim((1,3e1))
pylab.legend(bbox_to_anchor=(1,1))
#pylab.tight_layout()
pass
```
## Period
```
%matplotlib inline
font = {'size':22}
matplotlib.rc('font',**font)
for key in populations:
    pylab.hist(populations[key]['terminal period'][gaia_masks[key]],
               weights=populations[key]['statistical weight'][gaia_masks[key]]*star_number_scale,
               histtype='step',
               linewidth=4,
               label=key,
               bins=numpy.logspace(-1,0.7,10))
pylab.xlabel(r'Period [years]')
pylab.ylabel('Number')
pylab.xscale('log')
#pylab.yscale('log')
#pylab.xticks([0.2,0.5,1,2,4],
# ['0.2','0.5','1.0','2.0','4.0'])
pylab.legend(bbox_to_anchor=(1,1))
#pylab.tight_layout()
pass
```
## Distance from earth
```
%matplotlib inline
font = {'size':22}
matplotlib.rc('font',**font)
for key in populations:
    pylab.hist(populations[key]['distance from earth'][gaia_masks[key]]/1e3,
               weights=populations[key]['statistical weight'][gaia_masks[key]]*star_number_scale,
               histtype='step',
               linewidth=4,
               label=key,
               bins=numpy.logspace(-1.2,1.2,20))
pylab.xlabel(r'Distance from earth [kpc]')
pylab.ylabel('Number')
pylab.xscale('log')
pylab.yscale('log')
pylab.legend(bbox_to_anchor=(1.1,1))
pylab.ylim((1,1e2))
pylab.xlim((0.3,1e1))
#pylab.tight_layout()
pass
```
## Apparent magnitude
```
%matplotlib inline
font = {'size':22}
matplotlib.rc('font',**font)
for key in populations:
pylab.hist(populations[key]['apparent magnitude'][gaia_masks[key]],
weights=populations[key]['statistical weight'][gaia_masks[key]]*star_number_scale,
histtype='step',
linewidth=4,
label=key)
pylab.xlabel(r'Apparent magnitude')
pylab.ylabel('Number')
pylab.yscale('log')
pylab.ylim((1,1e2))
pylab.xlim((8,20))
pylab.legend(bbox_to_anchor=(1,1))
#pylab.tight_layout()
pass
```
## Opening angles
```
%matplotlib inline
font = {'size':22}
matplotlib.rc('font',**font)
for key in populations:
sma_list = populations[key]['terminal semi major axis'][gaia_masks[key]]
dfe_list = populations[key]['distance from earth'][gaia_masks[key]]
    opening_angles = sma_list/dfe_list/5e-9 # angle converted to mas (assuming 1 mas ~ 5e-9 rad)
pylab.hist(opening_angles,
weights=populations[key]['statistical weight'][gaia_masks[key]]*star_number_scale,
histtype='step',
linewidth=4,
label=key,
bins=numpy.logspace(-1.2,2,20))
pylab.xlabel(r'$\theta$ [mas]')
pylab.ylabel('Number')
pylab.xscale('log')
pylab.yscale('log')
pylab.legend(bbox_to_anchor=(0,0,1.5,1))
pylab.xlim((1e-1,1e1))
pylab.ylim((1,1e2))
#pylab.tight_layout()
pass
```
## Radial velocities
```
%matplotlib inline
font = {'size':22}
matplotlib.rc('font',**font)
for key in populations:
period_list = populations[key]['terminal period'][gaia_masks[key]]
sma_list = populations[key]['terminal semi major axis'][gaia_masks[key]]
v_list = sma_list/period_list*1e6
pylab.hist(v_list,
weights=populations[key]['statistical weight'][gaia_masks[key]]*star_number_scale,
histtype='step',
linewidth=4,
label=key)
pylab.xlabel(r'Radial velocities [km/s]')
pylab.ylabel('Number')
pylab.yscale('log')
#pylab.xscale('log')
pylab.ylim((1,1e2))
pylab.xlim((5,40))
pylab.legend(bbox_to_anchor=(1.23,0.8))
#pylab.xticks([5,10,20,50],
# ['5','10','20','50'])
#pylab.tight_layout()
pass
```
```
import time
import os
import json
import pickle
import tensorflow as tf
from tensorflow.keras import utils
from tensorflow.keras import backend as K
from tensorflow.keras.callbacks import EarlyStopping, ModelCheckpoint, TensorBoard, ReduceLROnPlateau, LearningRateScheduler
from tensorflow.keras.models import Model, load_model, Sequential
from tensorflow.keras.layers import *
from tensorflow.keras.preprocessing import image
from tensorflow.keras.applications import ResNet50, VGG19
from tensorflow.keras.applications.imagenet_utils import preprocess_input
from matplotlib.pyplot import imread, imshow
import h5py
import numpy as np
import matplotlib.pyplot as plt
import random
from datetime import datetime
import nibabel as nib
import re
from collections import Counter
%matplotlib inline
%load_ext autoreload
%autoreload 1
SEED=1
random.seed(SEED)
np.random.seed(SEED)
tf.random.set_seed(SEED)  # tf.set_random_seed was renamed to tf.random.set_seed in TF 2.x
K.clear_session()
# Get fmri ROI data
import h5py
roi_list = []
ax_length = 200
for roi in ['RHPPA']:
tr_list = []
for TR in ['TR3']:
with h5py.File('/home/ubuntu/boldroi/ROIs/CSI1/h5/CSI1_ROIs_%s.h5' % TR, 'r') as f:
print(list(f.keys()))
x_roi = list(f[roi])
x_r = np.stack(x_roi, axis=0)
tr_list.append(x_r)
#mean_x = np.mean(x_r, axis=0)
#std_x = np.std(x_r, axis=0, ddof=1)
#xt = (x_r - mean_x) / std_x
#tr_list.append(xt)
#xt_pad = np.pad(xt, ((0, 0), (0, ax_length-xt.shape[1])), mode='constant', constant_values=0)
#tr_list.append(xt_pad)
x_stack = np.dstack(tr_list)
xsa = np.swapaxes(x_stack,1,2)
roi_list.append(xsa)
x_all = np.concatenate(roi_list)
x_all =np.squeeze(x_all)
print(x_all.shape)
# load stimuli list
with open('/home/ubuntu/boldroi/ROIs/stim_lists/CSI01_stim_lists.txt', 'r') as f:
stimuli_list = f.read().splitlines()
classes = {'ImageNet': 0, 'COCO': 1, 'Scene': 2}
stimulusDirPath = '/home/ubuntu/fmriNet/fmriNet/images/BOLD5000_Stimuli/Scene_Stimuli/Presented_Stimuli'
stimuli_file_list = []
label_list = []
for imageFileName in stimuli_list:
for (currDir, _, fileList) in os.walk(stimulusDirPath):
currBaseDir = os.path.basename(currDir)
for filename in fileList:
if filename in imageFileName:
stimuli_file_list.append(os.path.join(currDir, filename))
label_list.append(classes.get(currBaseDir))
break
labels = np.asarray(label_list)
print(labels.shape)
labels_all = utils.to_categorical(labels)
# preprocess stimuli images
pstimuli = []
for imgFile in stimuli_file_list:
img = image.load_img(imgFile, target_size=(224, 224))
x = image.img_to_array(img)
x = np.expand_dims(x, axis=0)
pstimuli.append(preprocess_input(x))
# Get image features from resnet
base_model = ResNet50(weights='imagenet')
#base_model.summary()
b_model = Model(inputs = base_model.input, outputs = base_model.get_layer('avg_pool').output)
b_model.summary()
# Use pretrained model to get stimuli activations
image_activations = []
num_samples = len(pstimuli)
# precompute image activations with the above pretrained model
for index, x in enumerate(pstimuli):
print("Computing image: %s out of %s" % (index, num_samples))
image_activations.append(b_model.predict(x))
#print(len(image_activations))
#print(image_activations[0].shape)
activations_all = np.concatenate(image_activations)
print(activations_all.shape)
!mkdir -p 'imageActivation'
with open('imageActivation/renet_avg_pool_activations.npy', 'wb') as f:
pickle.dump(activations_all, f)
del activations_all
# load stimuli activations
with open('imageActivation/renet_avg_pool_activations.npy', 'rb') as f:
y_all = pickle.load(f)
# Split data into train/test set
#from sklearn.utils import shuffle
num_samples = x_all.shape[0]
num_voxels = 100 # maximum number of voxels to use for training
div = int(num_samples * 0.9)
print("Division index: %s" % div)
#x_shuffle, y_shuffle = shuffle(x_all, y_all, random_state=0)
x_train = x_all[0:div, :]
y_train = y_all[0:div, :]
x_test = x_all[div:, :]
y_test = y_all[div:, :]
print("x train shape: %s" % str(x_train.shape))
print("y train shape: %s" % str(y_train.shape))
print("x test shape: %s" % str(x_test.shape))
print("y test shape: %s" % str(y_test.shape))
# borrowed from bdpy https://github.com/KamitaniLab/bdpy
from numpy.matlib import repmat
def corrcoef(x, y, var='row'):
"""
Returns correlation coefficients between `x` and `y`
Parameters
----------
x, y : array_like
Matrix or vector
var : str, 'row' or 'col'
Specifying whether rows (default) or columns represent variables
Returns
-------
r
Correlation coefficients
"""
# Convert vectors to arrays
if x.ndim == 1:
x = np.array([x])
if y.ndim == 1:
y = np.array([y])
# Normalize x and y to row-var format
if var == 'row':
# 'rowvar=1' in np.corrcoef
# Vertical vector --> horizontal
if x.shape[1] == 1:
x = x.T
if y.shape[1] == 1:
y = y.T
elif var == 'col':
# 'rowvar=0' in np.corrcoef
# Horizontal vector --> vertical
if x.shape[0] == 1:
x = x.T
if y.shape[0] == 1:
y = y.T
# Convert to rowvar=1
x = x.T
y = y.T
else:
raise ValueError('Unknown var parameter specified')
# Match size of x and y
if x.shape[0] == 1 and y.shape[0] != 1:
x = repmat(x, y.shape[0], 1)
elif x.shape[0] != 1 and y.shape[0] == 1:
y = repmat(y, x.shape[0], 1)
# Check size of normalized x and y
if x.shape != y.shape:
        raise TypeError('Input matrix sizes do not match')
# Get num variables
nvar = x.shape[0]
# Get correlation
rmat = np.corrcoef(x, y, rowvar=1)
r = np.diag(rmat[:nvar, nvar:])
return r
# borrowed from bdpy https://github.com/KamitaniLab/bdpy
def select_top(data, value, num, axis=0, verbose=True):
"""
Select top `num` features of `value` from `data`
Parameters
----------
data : array
Data matrix
value : array_like
Vector of values
num : int
Number of selected features
Returns
-------
selected_data : array
Selected data matrix
selected_index : array
Index of selected data
"""
    if verbose:
        print('Selecting top %d features' % num)
num_elem = data.shape[axis]
sorted_index = np.argsort(value)[::-1]
    rank = np.zeros(num_elem, dtype=int)  # np.int was removed from NumPy
rank[sorted_index] = np.array(range(0, num_elem))
selected_index_bool = rank < num
if axis == 0:
selected_data = data[selected_index_bool, :]
        selected_index = np.array(range(0, num_elem), dtype=int)[selected_index_bool]
elif axis == 1:
selected_data = data[:, selected_index_bool]
        selected_index = np.array(range(0, num_elem), dtype=int)[selected_index_bool]
else:
raise ValueError('Invalid axis')
    if verbose:
        print('Done')
return selected_data, selected_index
# borrowed from bdpy https://github.com/KamitaniLab/bdpy
def add_bias(x, axis=0):
"""
Add bias terms to x
Parameters
----------
x : array_like
Data matrix
axis : 0 or 1, optional
Axis in which bias terms are added (default: 0)
Returns
-------
y : array_like
Data matrix with bias terms
"""
if axis == 0:
vlen = x.shape[1]
y = np.concatenate((x, np.array([np.ones(vlen)])), axis=0)
elif axis == 1:
vlen = x.shape[0]
y = np.concatenate((x, np.array([np.ones(vlen)]).T), axis=1)
else:
raise ValueError('axis should be either 0 or 1')
return y
from sklearn.linear_model import LinearRegression
# borrowed from https://github.com/KamitaniLab/GenericObjectDecoding
numFeatures = y_train.shape[1]
# Normalize brain data (x)
norm_mean_x = np.mean(x_train, axis=0)
norm_scale_x = np.std(x_train, axis=0, ddof=1)
x_train = (x_train - norm_mean_x) / norm_scale_x
x_test = (x_test - norm_mean_x) / norm_scale_x
y_true_list = []
y_pred_list = []
for i in range(numFeatures):
print('Unit %03d' % (i + 1))
start_time = time.time()
# Get unit features
y_train_unit = y_train[:, i]
y_test_unit = y_test[:, i]
# Normalize image features for training (y_train_unit)
norm_mean_y = np.mean(y_train_unit, axis=0)
std_y = np.std(y_train_unit, axis=0, ddof=1)
norm_scale_y = 1 if std_y == 0 else std_y
y_train_unit = (y_train_unit - norm_mean_y) / norm_scale_y
# Voxel selection
corr = corrcoef(y_train_unit, x_train, var='col')
x_train_unit, voxel_index = select_top(x_train, np.abs(corr), num_voxels, axis=1, verbose=False)
x_test_unit = x_test[:, voxel_index]
# Add bias terms
x_train_unit = add_bias(x_train_unit, axis=1)
x_test_unit = add_bias(x_test_unit, axis=1)
# Setup regression
    # For a quick demo, use linear regression
model = LinearRegression()
#model = SparseLinearRegression(n_iter=n_iter, prune_mode=1)
# Training and test
    try:
        model.fit(x_train_unit, y_train_unit)  # Training
        y_pred = model.predict(x_test_unit)  # Test
    except Exception:
        # If the regression fails, return a zero-filled array as predicted features
        y_pred = np.zeros(y_test_unit.shape)
# Denormalize predicted features
y_pred = y_pred * norm_scale_y + norm_mean_y
y_true_list.append(y_test_unit)
y_pred_list.append(y_pred)
print('Time: %.3f sec' % (time.time() - start_time))
# Create numpy arrays for return values
y_predicted = np.vstack(y_pred_list).T
y_true = np.vstack(y_true_list).T
print(y_predicted.shape)
print(y_true.shape)
#delta = y_true[20] - y_predicted[20]
#plt.plot(delta)
print(y_true[20])
print(y_predicted[20])
# Train a classifier on y_true > cat
# see how well it does on y_pred > cat ..use y_pred as test
try:
    del classifier_model  # discard a model left over from a previous run
except NameError:
    pass
X_input = Input(shape=(2048,))
#xr = Flatten()(x)
#X = BatchNormalization()(X_input)
#xr = Dense(1024, activation='relu')(xr)
#xr = Dropout(0.2)(xr)
#xr = Dense(1024, activation='softmax')(xr)
#xr = GlobalAveragePooling2D()(x)
X = Dense(1024, activation='relu')(X_input)
X = Dropout(0.4)(X)
#X = BatchNormalization()(X)
X = Dense(16, activation='relu')(X)
X = Dropout(0.2)(X)
#X = BatchNormalization()(X)
#predictions = Dense(17, activation='softmax')(xr)
predictions = Dense(3, activation='softmax')(X)
classifier_model = Model(inputs=X_input, outputs=predictions)
classifier_model.summary()
#features_train = y_true
#labels_train = labels_all[div:]
features_train = y_train
labels_train = labels_all[:div]
features_test = y_test
labels_test = labels_all[div:]
EPOCHS=50
callbacks = [TensorBoard(log_dir=f'./log/{datetime.now():%Y%m%d-%H%M%S}')]  # unique log directory per run
classifier_model.compile(optimizer='rmsprop', loss='categorical_crossentropy', metrics=['accuracy'])
classifier_model.fit(x=features_train, y=labels_train, shuffle=True, epochs=EPOCHS, callbacks=callbacks, validation_data=(features_test, labels_test))
```
<p align="center">
<img src="https://github.com/GeostatsGuy/GeostatsPy/blob/master/TCG_color_logo.png?raw=true" width="220" height="240" />
</p>
## Uncertainty Model Checking Demonstration
### Michael Pyrcz, Associate Professor, University of Texas at Austin
##### [Twitter](https://twitter.com/geostatsguy) | [GitHub](https://github.com/GeostatsGuy) | [Website](http://michaelpyrcz.com) | [GoogleScholar](https://scholar.google.com/citations?user=QVZ20eQAAAAJ&hl=en&oi=ao) | [Book](https://www.amazon.com/Geostatistical-Reservoir-Modeling-Michael-Pyrcz/dp/0199731446) | [YouTube](https://www.youtube.com/channel/UCLqEr-xV-ceHdXXXrTId5ig) | [LinkedIn](https://www.linkedin.com/in/michael-pyrcz-61a648a1)
### The Interactive Workflow
Here's a simple workflow for checking the uncertainty model from simple kriging estimates and the estimation variance
* we assume a Gaussian local uncertainty model
#### Spatial Estimation
Consider the case of making an estimate at some unsampled location, $z(\bf{u}_0)$, where $z$ is the property of interest (e.g. porosity) and $\bf{u}_0$ is a location vector describing the unsampled location.
How would you do this given data, $z(\bf{u}_1)$, $z(\bf{u}_2)$, and $z(\bf{u}_3)$?
It would be natural to use a set of linear weights to formulate the estimator given the available data.
\begin{equation}
z^{*}(\bf{u}) = \sum^{n}_{\alpha = 1} \lambda_{\alpha} z(\bf{u}_{\alpha})
\end{equation}
We could add an unbiasedness constraint to impose that the sum of the weights equals one. What we will do is assign the remainder of the weight (one minus the sum of weights) to the global average; therefore, if we have no informative data we will estimate with the global average of the property of interest.
\begin{equation}
z^{*}(\bf{u}) = \sum^{n}_{\alpha = 1} \lambda_{\alpha} z(\bf{u}_{\alpha}) + \left(1-\sum^{n}_{\alpha = 1} \lambda_{\alpha} \right) \overline{z}
\end{equation}
We will make a stationarity assumption, so let's assume that we are working with residuals, $y$.
\begin{equation}
y^{*}(\bf{u}) = z^{*}(\bf{u}) - \overline{z}(\bf{u})
\end{equation}
If we substitute this form into our estimator, the estimator simplifies, since the mean of the residual is zero,
\begin{equation}
y^{*}(\bf{u}) = \sum^{n}_{\alpha = 1} \lambda_{\alpha} y(\bf{u}_{\alpha})
\end{equation}
while satisfying the unbiasedness constraint.
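The estimator above can be sketched in a few lines of NumPy. The helper `sk_estimate` and its example values are hypothetical; in practice the weights come from solving the kriging system, as implemented later in this workflow.

```python
import numpy as np

def sk_estimate(weights, data_values, global_mean):
    # weighted sum of the data, plus the remainder of the weight
    # (one minus the sum of weights) assigned to the global mean
    weights = np.asarray(weights, dtype=float)
    data_values = np.asarray(data_values, dtype=float)
    return weights @ data_values + (1.0 - weights.sum()) * global_mean

# with all weights at zero (no informative data) the estimate
# collapses to the global mean
est = sk_estimate([0.0, 0.0, 0.0], [0.10, 0.14, 0.08], 0.12)
print(est)  # 0.12
```

Note that with zero weights the estimator returns the global mean, exactly the behavior described above.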
#### Kriging
Now the next question is what weights should we use?
We could use equal weighting, $\lambda = \frac{1}{n}$, and the estimator would be the average of the local data applied for the spatial estimate. This would not be very informative.
We could assign weights considering the spatial context of the data and the estimate:
* **spatial continuity** as quantified by the variogram (and covariance function)
* **redundancy** the degree of spatial continuity between all of the available data with themselves
* **closeness** the degree of spatial continuity between the available data and the estimation location
The kriging approach accomplishes this, calculating the best linear unbiased weights for the local data to estimate at the unknown location. The derivation of the kriging system and the resulting linear set of equations is available in the lecture notes. Furthermore kriging provides a measure of the accuracy of the estimate! This is the kriging estimation variance (sometimes just called the kriging variance).
\begin{equation}
\sigma^{2}_{E}(\bf{u}) = C(0) - \sum^{n}_{\alpha = 1} \lambda_{\alpha} C(\bf{u}_0 - \bf{u}_{\alpha})
\end{equation}
What is 'best' about this estimate? Kriging estimates are best in that they minimize the above estimation variance.
#### Properties of Kriging
Here are some important properties of kriging:
* **Exact interpolator** - kriging estimates with the data values at the data locations
* **Kriging variance** can be calculated before getting the sample information, as the kriging estimation variance is not dependent on the values of the data nor the kriging estimate, i.e. the kriging estimator is homoscedastic.
* **Spatial context** - in addition to the statements on spatial continuity, closeness and redundancy above, kriging accounts for the configuration of the data and the structural continuity of the variable being estimated.
* **Scale** - kriging may be generalized to account for the support volume of the data and estimate. We will cover this later.
* **Multivariate** - kriging may be generalized to account for multiple secondary data in the spatial estimate with the cokriging system. We will cover this later.
* **Smoothing effect** of kriging can be forecast. We will use this to build stochastic simulations later.
#### Spatial Continuity
**Spatial Continuity** is the correlation between values over distance.
* No spatial continuity – no correlation between values over distance, random values at each location in space regardless of separation distance.
* Homogeneous phenomena have perfect spatial continuity; since all values are the same (or very similar), they are correlated.
We need a statistic to quantify spatial continuity! A convenient method is the Semivariogram.
#### The Semivariogram
Function of difference over distance.
* The expected (average) squared difference between values separated by a lag distance vector (distance and direction), $h$:
\begin{equation}
\gamma(\bf{h}) = \frac{1}{2 N(\bf{h})} \sum^{N(\bf{h})}_{\alpha=1} (z(\bf{u}_\alpha) - z(\bf{u}_\alpha + \bf{h}))^2
\end{equation}
where $z(\bf{u}_\alpha)$ and $z(\bf{u}_\alpha + \bf{h})$ are the spatial sample values at tail and head locations of the lag vector respectively.
* Calculated over a suite of lag distances to obtain a continuous function.
* the $\frac{1}{2}$ term converts a variogram into a semivariogram, but in practice the term variogram is used instead of semivariogram.
* We prefer the semivariogram because it relates directly to the covariance function, $C_x(\bf{h})$ and univariate variance, $\sigma^2_x$:
\begin{equation}
C_x(\bf{h}) = \sigma^2_x - \gamma(\bf{h})
\end{equation}
Note the correlogram is related to the covariance function as:
\begin{equation}
\rho_x(\bf{h}) = \frac{C_x(\bf{h})}{\sigma^2_x}
\end{equation}
The correlogram provides the correlation of the $\bf{h}$ scatter plot as a function of lag offset $\bf{h}$.
\begin{equation}
-1.0 \le \rho_x(\bf{h}) \le 1.0
\end{equation}
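As a minimal sketch of the definitions above, the experimental semivariogram for regularly spaced 1D samples can be computed directly (the function and the synthetic data here are illustrative, not part of the workflow). For spatially uncorrelated noise, $\gamma(\bf{h})$ sits at the variance for all lags, so $C_x(\bf{h}) = \sigma^2_x - \gamma(\bf{h})$ is near zero everywhere.

```python
import numpy as np

def semivariogram_1d(z, xlag, nlag):
    # gamma(h) = 1/(2 N(h)) * sum over pairs of (z(u) - z(u+h))^2,
    # for lags h = xlag, 2*xlag, ..., nlag*xlag on regularly spaced samples
    lags = np.arange(1, nlag + 1) * xlag
    gam = np.array([0.5 * np.mean((z[:-i] - z[i:]) ** 2)
                    for i in range(1, nlag + 1)])
    return lags, gam

rng = np.random.default_rng(seed=73073)
z = rng.standard_normal(10_000)      # spatially uncorrelated samples
lags, gam = semivariogram_1d(z, xlag=1.0, nlag=5)
cov = np.var(z) - gam                # C(h) = sigma^2 - gamma(h)
# gam is ~1 (the variance) and cov is ~0 at every lag for pure noise
```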
#### Accuracy Plots
The accuracy plot was developed by Deutsch (1996) and described in Pyrcz and Deutsch (2014).
* a method for checking uncertainty models
* based on calculating the percentiles of withheld testing data in the estimated local uncertainty distributions, $Z(\bf{u}_{\alpha})$, described by CDFs, $F_Z(z,\bf{u}_{\alpha})$, at all testing locations $\alpha = 1,\ldots,n_{test}$.
The accuracy plot is the proportion of data within symmetric probability intervals vs. the probability interval, $p$.
* for example, 20% of withheld testing data should fall between the P40 and P60 probability interval
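The accuracy plot calculation can be sketched with synthetic data (the means, standard deviations, and sample counts below are illustrative, assuming Gaussian local uncertainty as in this workflow). When the withheld truth values are actually drawn from their own uncertainty models, the fraction inside each symmetric probability interval tracks the 45 degree line.

```python
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(seed=73073)
mean, std = 0.12, 0.03                     # local Gaussian uncertainty model
truth = rng.normal(mean, std, size=2_000)  # synthetic withheld "truth" values

# percentile of each truth value within its local uncertainty distribution
percentiles = norm.cdf(truth, loc=mean, scale=std)

# fraction of truth values inside each symmetric p-interval around P50
p_intervals = np.linspace(0.0, 1.0, 21)
fraction_in = np.array([np.mean((percentiles > 0.5 - 0.5 * p) &
                                (percentiles < 0.5 + 0.5 * p))
                        for p in p_intervals])
# for a well-calibrated model, fraction_in is close to p_intervals
```

For example, the p = 0.2 interval (P40 to P60) should contain close to 20% of the withheld truth values.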
#### Objective
In the PGE 383: Stochastic Subsurface Modeling class I want to provide hands-on experience with building subsurface modeling workflows. Python provides an excellent vehicle to accomplish this. I have coded a package called GeostatsPy with GSLIB: Geostatistical Library (Deutsch and Journel, 1998) functionality that provides basic building blocks for building subsurface modeling workflows.
The objective is to remove the hurdles of subsurface modeling workflow construction by providing building blocks and sufficient examples. This is not a coding class per se, but we need the ability to 'script' workflows working with numerical methods.
#### Getting Started
Here's the steps to get setup in Python with the GeostatsPy package:
1. Install Anaconda 3 on your machine (https://www.anaconda.com/download/).
2. From Anaconda Navigator (within Anaconda3 group), go to the environment tab, click on base (root) green arrow and open a terminal.
3. In the terminal type: pip install geostatspy.
4. Open Jupyter and in the top block get started by copy and pasting the code block below from this Jupyter Notebook to start using the geostatspy functionality.
You will need to copy the data file to your working directory. It is available here:
* Tabular data - sample_data.csv at https://git.io/fh4gm.
There are examples below with these functions. You can go here to see a list of the available functions, https://git.io/fh4eX, along with other example workflows and source code.
#### Load the required libraries
The following code loads the required libraries.
```
import geostatspy.GSLIB as GSLIB # GSLIB utilities, visualization and wrapper
import geostatspy.geostats as geostats # GSLIB methods convert to Python
```
We will also need some standard packages. These should have been installed with Anaconda 3.
```
%matplotlib inline
import os # to set current working directory
from sklearn.model_selection import train_test_split # train and test data split by random selection of a proportion
from scipy.stats import norm # Gaussian distribution assumed for local uncertainty
import sys # suppress output to screen for interactive variogram modeling
import io # text stream used to trap screen output
import numpy as np # arrays and matrix math
import pandas as pd # DataFrames
import matplotlib.pyplot as plt # plotting
from matplotlib.pyplot import cm # color maps
from matplotlib.patches import Ellipse # plot an ellipse
import math # sqrt operator
from ipywidgets import interactive # widgets and interactivity
from ipywidgets import widgets
from ipywidgets import Layout
from ipywidgets import Label
from ipywidgets import VBox, HBox
```
If you get a package import error, you may have to first install some of these packages. This can usually be accomplished by opening up a command window on Windows and then typing 'python -m pip install [package-name]'. More assistance is available with the respective package docs.
#### Simple, Simple Kriging Function
Let's write a fast Python function to take data points and unknown location and provide the:
* **simple kriging estimate**
* **simple kriging variance / estimation variance**
* **simple kriging weights**
This provides a fast method for small datasets, with fewer parameters (no search parameters) and the ability to see the simple kriging weights.
```
def simple_simple_krige(df,xcol,ycol,vcol,dfl,xlcol,ylcol,vario,skmean):
# load the variogram
nst = vario['nst']; pmx = 9999.9
cc = np.zeros(nst); aa = np.zeros(nst); it = np.zeros(nst)
ang = np.zeros(nst); anis = np.zeros(nst)
nug = vario['nug']; sill = nug
cc[0] = vario['cc1']; sill = sill + cc[0]
it[0] = vario['it1']; ang[0] = vario['azi1'];
aa[0] = vario['hmaj1']; anis[0] = vario['hmin1']/vario['hmaj1'];
if nst == 2:
cc[1] = vario['cc2']; sill = sill + cc[1]
it[1] = vario['it2']; ang[1] = vario['azi2'];
aa[1] = vario['hmaj2']; anis[1] = vario['hmin2']/vario['hmaj2'];
# set up the required matrices
rotmat, maxcov = geostats.setup_rotmat(nug,nst,it,cc,ang,pmx)
ndata = len(df); a = np.zeros([ndata,ndata]); r = np.zeros(ndata); s = np.zeros(ndata); rr = np.zeros(ndata)
nest = len(dfl)
est = np.zeros(nest); var = np.full(nest,sill); weights = np.zeros([nest,ndata])
# Make and solve the kriging matrix, calculate the kriging estimate and variance
for iest in range(0,nest):
for idata in range(0,ndata):
for jdata in range(0,ndata):
a[idata,jdata] = geostats.cova2(df[xcol].values[idata],df[ycol].values[idata],df[xcol].values[jdata],df[ycol].values[jdata],
nst,nug,pmx,cc,aa,it,ang,anis,rotmat,maxcov)
r[idata] = geostats.cova2(df[xcol].values[idata],df[ycol].values[idata],dfl[xlcol].values[iest],dfl[ylcol].values[iest],
nst,nug,pmx,cc,aa,it,ang,anis,rotmat,maxcov)
rr[idata] = r[idata]
s = geostats.ksol_numpy(ndata,a,r)
sumw = 0.0
for idata in range(0,ndata):
sumw = sumw + s[idata]
weights[iest,idata] = s[idata]
est[iest] = est[iest] + s[idata]*df[vcol].values[idata]
var[iest] = var[iest] - s[idata]*rr[idata]
est[iest] = est[iest] + (1.0-sumw)*skmean
return est,var,weights
```
#### Set Global Parameters
These impact the look and results of this demonstration.
```
seed = 73073 # random number seed for train and test split and added error term
cmap = plt.cm.plasma # color map
vmin = 0.0; vmax = 0.20 # feature min and max for plotting
error_std = 0.0 # error standard deviation
bins = 20 # number of bins for the accuracy plots
```
#### Set the working directory
I always like to do this so I don't lose files and to simplify subsequent read and writes (avoid including the full address each time).
```
os.chdir("c:/PGE383") # set the working directory
```
#### Load and Visualize the Spatial Data
Here's the code to load our comma delimited data file in to a Pandas' DataFrame object and to visualize it.
```
df = pd.read_csv("sample_data_biased.csv") # read a .csv file in as a DataFrame
df['Porosity'] = df['Porosity']+norm.rvs(0.0,error_std,random_state = seed,size=len(df))
plt.subplot(111)
im = plt.scatter(df['X'],df['Y'],c=df['Porosity'],marker='o',cmap=cmap,vmin=vmin,vmax=vmax,alpha=0.8,linewidths=0.8,
                 edgecolors="black",label="train") # verts=None removed; no longer a scatter parameter in matplotlib 3.x
plt.title("Subset of the Data for the Demonstration")
plt.xlim([0,1000]); plt.ylim([0,1000])
plt.xlabel('X(m)'); plt.ylabel('Y(m)'); plt.legend()
cbar = plt.colorbar(im, orientation="vertical", ticks=np.linspace(vmin, vmax, 10),format='%.2f')
cbar.set_label('Porosity (fraction)', rotation=270, labelpad=20)
plt.subplots_adjust(left=0.0, bottom=0.0, right=1.0, top=1.2, wspace=0.2, hspace=0.2)
plt.show()
```
#### Interactive Uncertainty Checking with Simple Kriging
The following code includes:
* dashboard with variogram model, number of data and the proportion of testing data
* plots of variogram model, train and test data locations, accuracy plot and training data with testing data percentiles
```
import warnings; warnings.simplefilter('ignore')
# build the dashboard
style = {'description_width': 'initial'}
l = widgets.Text(value=' Simple Kriging, Michael Pyrcz, Associate Professor, The University of Texas at Austin',layout=Layout(width='950px', height='30px'))
nug = widgets.FloatSlider(min = 0, max = 1.0, value = 0.0, step = 0.1, description = 'nug',orientation='vertical',continuous_update=False,
layout=Layout(width='50px', height='200px'))
nug.style.handle_color = 'gray'
it1 = widgets.Dropdown(options=['Spherical', 'Exponential', 'Gaussian'],value='Spherical',continuous_update=False,
description='Type1:',disabled=False,layout=Layout(width='180px', height='30px'), style=style)
azi = widgets.FloatSlider(min=0, max = 360, value = 0, step = 22.5, description = 'azi',continuous_update=False,
orientation='vertical',layout=Layout(width='80px', height='200px'))
azi.style.handle_color = 'gray'
hmaj1 = widgets.FloatSlider(min=0.01, max = 10000.0, value = 100.0, step = 25.0, description = 'hmaj1',continuous_update=False,
orientation='vertical',layout=Layout(width='80px', height='200px'))
hmaj1.style.handle_color = 'gray'
hmin1 = widgets.FloatSlider(min = 0.01, max = 10000.0, value = 100.0, step = 25.0, description = 'hmin1',continuous_update=False,
orientation='vertical',layout=Layout(width='80px', height='200px'))
hmin1.style.handle_color = 'gray'
ptest = widgets.FloatSlider(min = 0.01, max = 0.9, value = 0.2, step = 0.1, description = 'prop test',continuous_update=False,
orientation='vertical',layout=Layout(width='80px', height='200px'))
ptest.style.handle_color = 'gray'
ndata = widgets.IntSlider(min = 1, max = len(df), value = 100, step = 10, description = 'number data',continuous_update=False,
orientation='vertical',layout=Layout(width='80px', height='200px'))
ndata.style.handle_color = 'gray'
uikvar = widgets.HBox([nug,it1,azi,hmaj1,hmin1,ptest,ndata],)
uipars = widgets.HBox([uikvar],)
uik = widgets.VBox([l,uipars],)
# convenience function to convert variogram model type to an integer
def convert_type(it):
if it == 'Spherical':
return 1
elif it == 'Exponential':
return 2
else:
return 3
# calculate the kriging-based uncertainty distributions and match truth values to percentiles and product plots
def f_make_krige(nug,it1,azi,hmaj1,hmin1,ptest,ndata):
text_trap = io.StringIO()
sys.stdout = text_trap
it1 = convert_type(it1)
train, test = train_test_split(df.iloc[len(df)-ndata:,[0,1,3,]], test_size=ptest, random_state=73073)
nst = 1; xlag = 10; nlag = int(hmaj1/xlag); c1 = 1.0-nug
vario = GSLIB.make_variogram(nug,nst,it1,c1,azi,hmaj1,hmin1) # make model object
    index_maj,h_maj,gam_maj,cov_maj,ro_maj = geostats.vmodel(nlag,xlag,azi,vario) # project the model in the major azimuth
index_min,h_min,gam_min,cov_min,ro_min = geostats.vmodel(nlag,xlag,azi+90.0,vario) # project the model in the minor azimuth
skmean = np.average(train['Porosity']) # calculate the input mean and sill for simple kriging
sill = np.var(train['Porosity'])
    sk_est, sk_var, sk_weights = simple_simple_krige(train,'X','Y','Porosity',test,'X','Y',vario,skmean=skmean) # data, estimation locations
sk_std = np.sqrt(sk_var*sill) # standardize estimation variance by the sill and convert to std. dev.
percentiles = norm.cdf(test['Porosity'],sk_est,sk_std) # calculate the percentiles of truth in the uncertainty models
test["Percentile"] = percentiles
xlag = 10.0; nlag = int(hmaj1/xlag) # lags for variogram plotting
plt.subplot(221) # plot variograms
plt.plot([0,hmaj1*1.5],[1.0,1.0],color = 'black')
plt.plot(h_maj,gam_maj,color = 'black',label = 'Major ' + str(azi))
plt.plot(h_min,gam_min,color = 'black',label = 'Minor ' + str(azi+90.0))
deltas = [22.5, 45, 67.5];
ndelta = len(deltas); hd = np.zeros(ndelta); gamd = np.zeros(ndelta);
color=iter(cm.plasma(np.linspace(0,1,ndelta)))
for delta in deltas:
index,hd,gamd,cov,ro = geostats.vmodel(nlag,xlag,azi+delta,vario);
c=next(color)
plt.plot(hd,gamd,color = c,label = 'Azimuth ' + str(azi+delta))
plt.xlabel(r'Lag Distance $\bf(h)$, (m)')
plt.ylabel(r'$\gamma \bf(h)$')
plt.title('Interpolated NSCORE Porosity Variogram Models')
plt.xlim([0,hmaj1*1.5])
plt.ylim([0,1.4])
plt.legend(loc='upper left')
plt.subplot(222) # plot the train and test data
    im = plt.scatter(train['X'],train['Y'],c=train['Porosity'],marker='o',cmap=cmap,vmin=vmin,vmax=vmax,alpha=0.8,linewidths=0.8,
                     edgecolors="black",label="train")
    plt.scatter(test['X'],test['Y'],c=test['Porosity'],marker='^',cmap=cmap,vmin=vmin,vmax=vmax,alpha=0.8,linewidths=0.8,
                edgecolors="black",label="test")
plt.title("Training and Testing Data")
plt.xlim([0,1000]); plt.ylim([0,1000])
plt.xlabel('X(m)'); plt.ylabel('Y(m)'); plt.legend()
cbar = plt.colorbar(im, orientation="vertical", ticks=np.linspace(vmin, vmax, 10),format='%.2f')
cbar.set_label('Porosity (fraction)', rotation=270, labelpad=20)
fraction_in = np.zeros(bins) # calculate and plot the accuracy plot
p_intervals = np.linspace(0.0,1.0,bins)
for i,p in enumerate(p_intervals):
test_result = (test['Percentile'] > 0.5-0.5*p) & (test['Percentile'] < 0.5+0.5*p)
fraction_in[i] = test_result.sum()/len(test)
plt.subplot(223)
plt.scatter(p_intervals,fraction_in,c='black',marker='o',alpha=0.8)
plt.plot([0.0,1.0],[0.0,1.0],c='black')
plt.xlim([0.0,1.0]); plt.ylim([0,1.0])
plt.title('Uncertainty Model at Unknown Location')
plt.xlabel('Probability Interval'); plt.ylabel('Fraction In the Interval')
plt.subplot(224) # plot the testing percentiles with the training data
    plt.scatter(train['X'],train['Y'],s=20,c='black',marker='o',cmap=cmap,vmin=vmin,vmax=vmax,alpha=0.8,linewidths=0.8,
                edgecolors="black",label="train")
    im = plt.scatter(test['X'],test['Y'],s=80.0,c=test['Percentile'],marker='^',cmap=cmap,vmin=0.0,vmax=1.0,alpha=0.8,linewidths=0.8,
                     edgecolors="black",label="test")
plt.title("Cross Validation Percentiles")
plt.xlim([0,1000]); plt.ylim([0,1000])
plt.xlabel('X(m)'); plt.ylabel('Y(m)'); plt.legend()
cbar = plt.colorbar(im, orientation="vertical", ticks=np.linspace(0.0, 1.0, 10),format='%.2f')
cbar.set_label('Porosity (fraction)', rotation=270, labelpad=20)
plt.subplots_adjust(left=0.0, bottom=0.0, right=2.2, top=2.2, wspace=0.3, hspace=0.3)
plt.show()
# connect the function to make the samples and plot to the widgets
interactive_plot = widgets.interactive_output(f_make_krige, {'nug':nug, 'it1':it1, 'azi':azi, 'hmaj1':hmaj1, 'hmin1':hmin1, 'ptest':ptest, 'ndata':ndata})
interactive_plot.clear_output(wait = True) # reduce flickering by delaying plot updating
```
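The accuracy-plot loop in the code above can be sketched in isolation. Below is a minimal, self-contained version of the same idea, using synthetic percentiles in place of the kriging cross-validation results (it assumes, as in the workflow above, that each test location's true value has been converted to a percentile of its local uncertainty distribution):

```python
import numpy as np

def accuracy_fractions(percentiles, bins=20):
    """For each symmetric probability interval of width p centred on 0.5,
    compute the fraction of test percentiles falling inside it."""
    p_intervals = np.linspace(0.0, 1.0, bins)
    fraction_in = np.zeros(bins)
    for i, p in enumerate(p_intervals):
        inside = (percentiles > 0.5 - 0.5 * p) & (percentiles < 0.5 + 0.5 * p)
        fraction_in[i] = inside.sum() / len(percentiles)
    return p_intervals, fraction_in

# With a well-calibrated uncertainty model, the percentiles are uniform on
# [0, 1] and the accuracy plot hugs the 45-degree line.
rng = np.random.RandomState(73073)
p, f = accuracy_fractions(rng.uniform(0.0, 1.0, 10000))
```

Points plotting above the 45-degree line indicate an overly wide (pessimistic) uncertainty model; points below indicate an overly narrow (optimistic) one.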
### Interactive Uncertainty Checking Kriging Demonstration
* select the variogram model for simple kriging and observe the impact on the uncertainty model
#### Michael Pyrcz, Associate Professor, University of Texas at Austin
##### [Twitter](https://twitter.com/geostatsguy) | [GitHub](https://github.com/GeostatsGuy) | [Website](http://michaelpyrcz.com) | [GoogleScholar](https://scholar.google.com/citations?user=QVZ20eQAAAAJ&hl=en&oi=ao) | [Book](https://www.amazon.com/Geostatistical-Reservoir-Modeling-Michael-Pyrcz/dp/0199731446) | [YouTube](https://www.youtube.com/channel/UCLqEr-xV-ceHdXXXrTId5ig) | [LinkedIn](https://www.linkedin.com/in/michael-pyrcz-61a648a1) | [GeostatsPy](https://github.com/GeostatsGuy/GeostatsPy)
### The Inputs
Select the variogram model and the proportion of data withheld for testing.
* **nug**: nugget effect
* **c1**: contribution of the sill
* **hmaj1 / hmin1**: range in the major and minor direction
* **(x1, y1),...(x3,y3)**: spatial data locations
* **test proportion**: proportion of data withheld for testing
```
display(uik, interactive_plot) # display the interactive plot
```
#### Comments
This was an interactive demonstration of simple kriging for spatial data analytics. Much more could be done, I have other demonstrations on the basics of working with DataFrames, ndarrays, univariate statistics, plotting data, declustering, data transformations and many other workflows available at https://github.com/GeostatsGuy/PythonNumericalDemos and https://github.com/GeostatsGuy/GeostatsPy.
#### The Author:
### Michael Pyrcz, Associate Professor, University of Texas at Austin
*Novel Data Analytics, Geostatistics and Machine Learning Subsurface Solutions*
With over 17 years of experience in subsurface consulting, research and development, Michael has returned to academia driven by his passion for teaching and enthusiasm for enhancing engineers' and geoscientists' impact in subsurface resource development.
For more about Michael check out these links:
#### [Twitter](https://twitter.com/geostatsguy) | [GitHub](https://github.com/GeostatsGuy) | [Website](http://michaelpyrcz.com) | [GoogleScholar](https://scholar.google.com/citations?user=QVZ20eQAAAAJ&hl=en&oi=ao) | [Book](https://www.amazon.com/Geostatistical-Reservoir-Modeling-Michael-Pyrcz/dp/0199731446) | [YouTube](https://www.youtube.com/channel/UCLqEr-xV-ceHdXXXrTId5ig) | [LinkedIn](https://www.linkedin.com/in/michael-pyrcz-61a648a1)
#### Want to Work Together?
I hope this content is helpful to those that want to learn more about subsurface modeling, data analytics and machine learning. Students and working professionals are welcome to participate.
* Want to invite me to visit your company for training, mentoring, project review, workflow design and / or consulting? I'd be happy to drop by and work with you!
* Interested in partnering, supporting my graduate student research or my Subsurface Data Analytics and Machine Learning consortium (co-PIs including Profs. Foster, Torres-Verdin and van Oort)? My research combines data analytics, stochastic modeling and machine learning theory with practice to develop novel methods and workflows to add value. We are solving challenging subsurface problems!
* I can be reached at mpyrcz@austin.utexas.edu.
I'm always happy to discuss,
*Michael*
Michael Pyrcz, Ph.D., P.Eng. Associate Professor The Hildebrand Department of Petroleum and Geosystems Engineering, Bureau of Economic Geology, The Jackson School of Geosciences, The University of Texas at Austin
```
from scipy.stats import norm  # import needed for this standalone check
norm.cdf(0,0,1)
```
## Introduction to NumPy
1. An array object of arbitrary homogeneous items
2. Fast mathematical operations over arrays
3. Linear Algebra, Fourier Transforms, Random Number Generation
```
import numpy as np
np.__version__
```
## Where to get help
- http://docs.scipy.org
- Forums: mailing list, http://stackoverflow.com
## Where do I learn more?
- <a href="http://mentat.za.net/numpy/intro/intro.html">NumPy introductory tutorial</a>
- <a href="http://scipy-lectures.github.com">SciPy Lectures</a>
## Familiarize yourself with the notebook environment
- Tab completion
- Docstring inspection
- Magic commands: %timeit, %run
- Two modes: navigate & edit (use the keyboard!)
## NumPy vs pure Python—a speed comparison
```
x = np.random.random(1024)
%timeit [t**2 for t in x]
%timeit x**2
```
<img src="array_vs_list.png" width="60%"/>
From: Python Data Science Handbook by Jake VanderPlas (https://github.com/jakevdp/PythonDataScienceHandbook), licensed CC-BY-NC-ND
## The structure of a NumPy array
<img src="ndarray_struct.png"/>
```
x = np.array([[1, 4], [2, 8]], dtype=np.uint8)
x
x.shape, x.dtype, x.strides, x.size, x.ctypes.data
def memory_at(arr):
import ctypes
return list(ctypes.string_at(arr.ctypes.data, arr.size * arr.itemsize))
memory_at(x)
y = x.T
y.shape, y.dtype, y.strides, y.size, y.ctypes.data
memory_at(y)
```
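As the `memory_at` output above suggests, the transpose shares the original buffer: only the shape and strides change, and no data is copied. A quick check (variable names here are just illustrative):

```python
import numpy as np

x = np.array([[1, 4], [2, 8]], dtype=np.uint8)
y = x.T

# Same memory, reversed stride pattern: the transpose is a view.
shares = np.shares_memory(x, y)

# Writing through the view is visible in the original array.
y[0, 1] = 99   # this element is x[1, 0]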
## Constructing arrays
```
np.zeros((3,3))
np.ones((2, 2))
np.array([[1, 2], [-1, 5]])
np.zeros_like(x)
np.diag([1, 2, 3])
np.eye(3)
rng = np.random.RandomState(42)
rng.random_sample((3, 3))
x = rng.random_sample((2,2,3,2,2))
x.shape
```
## Shape
```
x = np.arange(12)
x
x.reshape((3, 4))
```
## Indexing
```
x = np.array([[1, 2, 3], [3, 2, 1]])
x
x[0, 1]
x[1]
x[:, 1:3]
```
### Fancy indexing—indexing with arrays
```
%matplotlib inline
import matplotlib.pyplot as plt
x = np.arange(100).reshape((10, 10))
plt.imshow(x);
mask = (x < 50)
mask[:5, :5]
mask.shape
x[mask]
x[mask] = 0
plt.imshow(x);
```
### Views
```
x = np.arange(10)
y = x[0:3]
print(x, y)
y.fill(8)
print(x, y)
```
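If the aliasing shown above is not what you want, take an explicit copy: slicing gives a view into the same buffer, while `.copy()` allocates independent memory.

```python
import numpy as np

x = np.arange(10)
view = x[0:3]         # shares memory with x
snap = x[0:3].copy()  # independent snapshot

x[0] = 42             # visible through the view, not the copy
```

After the assignment, `view[0]` reflects the new value 42 while `snap[0]` still holds the original 0.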
## Data types
```
x = np.array([1,2,3])
x.dtype
x = np.array([1.5, 2, 3])
x.dtype
x = np.array([1, 2, 3], dtype=float)
x.dtype
```
## Broadcasting
### 1D
<img src="broadcast_scalar.svg" width="50%"/>
### 2D
<img src="broadcast_2D.png"/>
### 3D (showing sum of 3 arrays)
<img src="broadcast_3D.png"/>
```
x, y = np.ogrid[:5:0.5, :5:0.5]
print(x.shape)
print(y.shape)
plt.imshow(x**2 + y**2);
```
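The `ogrid` example above works because of NumPy's broadcasting rules: shapes are compared from the trailing dimension backwards, and a dimension of size 1 is stretched to match its partner. A minimal sketch of a (3, 1) array broadcasting against a (4,) array:

```python
import numpy as np

col = np.arange(3).reshape(3, 1)  # shape (3, 1)
row = np.arange(4)                # shape (4,), treated as (1, 4)

table = col * 10 + row            # broadcasts to shape (3, 4)
```

Each element ends up as `10*i + j`, so the result is a small multiplication-style table built without any explicit loop.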
## Expressions and universal functions
```
x = np.linspace(0, 2 * np.pi, 1000)
y = np.sin(x) ** 3
plt.plot(x, y);
θ = np.deg2rad(1)
cos = np.cos
sin = np.sin
R = np.array([[cos(θ), -sin(θ)],
[sin(θ), cos(θ)]])
v = np.random.random((100, 2))
print(R.shape, v.shape)
print(R.shape, v.T.shape)
v_ = (R @ v.T).T
plt.plot(v[:, 0], v[:, 1], 'r.')
plt.plot(v_[:, 0], v_[:, 1], 'b.');
v = np.random.random((100, 2))
plt.plot(v[:, 0], v[:, 1], 'r.')
v_ = (R @ v.T).T
for i in range(100):
v_ = (R @ v_.T).T
plt.plot(v_[:, 0], v_[:, 1], 'b.', markersize=3, alpha=0.1)
```
## Input/output
```
!cat hand.txt
hand = np.loadtxt('hand.txt')
hand[:5]
plt.plot(hand[:, 0], hand[:, 1]);
# Use the NumPy binary format--do not pickle!
# np.save and np.savez
```
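The comment above can be made concrete with a minimal save/load round-trip (written to a temporary directory so the sketch is safe to run anywhere):

```python
import os
import tempfile
import numpy as np

data = np.linspace(0, 1, 5)

with tempfile.TemporaryDirectory() as tmp:
    path = os.path.join(tmp, 'data.npy')
    np.save(path, data)       # .npy preserves dtype and shape exactly
    loaded = np.load(path)

round_trip_ok = np.array_equal(data, loaded) and loaded.dtype == data.dtype
```

Unlike pickles, the `.npy`/`.npz` formats are stable across Python versions and safe to exchange.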
## Reductions
```
a = np.arange(12).reshape((3, 4))
a.mean(axis=0)
a.sum()
x = np.array([1 + 1j, 2 + 2j])
x.real
y = np.array([-0.1, -0.05, 0.35, 0.5, 0.9, 1.1])
y.clip(0, 0.5)
```
## Exercises
Try the three exercises at http://www.scipy-lectures.org/intro/numpy/exercises.html#array-manipulations
## Structured arrays
```
!cat rainfall.txt
dt = np.dtype([('station', 'S4'), ('year', int), ('level', (float, 12))])
x = np.zeros((3,), dtype=dt)
x
r = np.loadtxt('rainfall.txt', dtype=dt)
r['station']
mask = (r['station'] == b'AAEF')
r[mask]
r[mask]['level']
```
If you're heading in this direction, you may want to involve Pandas:
```
import pandas as pd
df = pd.read_csv('rainfall.txt', header=None, sep=' ',
names=('station', 'year',
'jan', 'feb', 'mar', 'apr', 'may', 'jun',
'jul', 'aug', 'sep', 'oct', 'nov', 'dec'))
df
df['station']
aaef_data = df[df['station'] == 'AAEF']
aaef_data
aaef_data.loc[:, 'jan':'dec']
```
If you look at the DataFrame values, what do you see? A NumPy array (here with `object` dtype, since the columns mix strings and numbers)!
```
aaef_data.values
```
Pandas makes some things a lot easier, but its API and underlying model are
much more complex than NumPy's, so YMMV.
```
import numpy as np
```
# `site-analysis` draft documentation
`site_analysis` is a Python module for analysing molecular dynamics trajectories to identify the movement of mobile atoms through sequences of “sites”.
The central objects in `site_analysis` are the set of mobile atoms, represented by `Atom` objects, and the set of sites, represented by `Site` objects. `Site` objects are subclassed according to how site occupations are defined; you can work with `PolyhedralSite`, `SphericalSite`, and `VoronoiSite` types.
## example trajectory analysis
For this example we want to analyse a simulation trajectory using polyhedral sites. An example structure to analyse is defined in the file `POSCAR`, which we read in as a pymatgen `Structure`:
```
from pymatgen.io.vasp import Poscar
structure = Poscar.from_file('POSCAR').structure
print(structure.composition)
```
Our mobile atoms are Na. These are tracked using their index in the structures (`site_analysis` assumes that atom order is preserved throughout a simulation trajectory).
```
from site_analysis.atom import atoms_from_species_string
atoms = atoms_from_species_string(structure, 'Na')
atoms[0:3]
```
We also need to define our sites. In this example we want **polyhedral** sites, where each site is defined as the polyhedral volume enclosed by a set of vertex atoms. We therefore need to find the atom indices of the vertex atoms for each of our sites.
```
from pymatgen.io.vasp import Poscar, Xdatcar
import numpy as np
import operator
from site_analysis.polyhedral_site import PolyhedralSite
from site_analysis.atom import atoms_from_species_string
from site_analysis.trajectory import Trajectory
from site_analysis.tools import get_vertex_indices
```
To do this we load a `POSCAR` file where every octahedral site is occupied by a Na atom.
```
all_na_structure = Poscar.from_file('na_sn_all_na_ext.POSCAR.vasp').structure
vertex_species = 'S'
centre_species = 'Na'
all_na_structure.composition
```
We then use the `get_vertex_indices()` function to find the six closest S to each Na (within a cutoff of 4.3 Å).
This returns a nested list, where each sublist contains the S indices for a single polyhedron. In this case we have 136 Na atoms, but only 88 in our real simulation trajectory, so we subtract 48 to align the vertex indices.
```
# find atom indices (within species) for all polyhedra vertex atoms
vertex_indices = np.array(get_vertex_indices(all_na_structure, centre_species=centre_species,
vertex_species=vertex_species, cutoff=4.3))
print(vertex_indices[:4])
# Our real structures contain 88 Na atoms, so we correct these vertex indices
vertex_indices = vertex_indices - 48
print(vertex_indices[:4])
```
We can now use these vertex indices to define our `PolyhedralSite` objects.
```
sites = PolyhedralSite.sites_from_vertex_indices(vertex_indices)
sites[:3]
```
We now create a `Trajectory` object:
```
trajectory = Trajectory(sites=sites,
atoms=atoms)
trajectory
```
The `Trajectory` object provides the main interface for working with the `PolyhedralSite` and `Atom` objects.
To analyse the site occupation for a particular `pymatgen` `Structure`:
```
trajectory.analyse_structure(structure)
```
The list of sites occupied by each atom can now be accessed using `trajectory.atom_sites`
```
np.array(trajectory.atom_sites)
```
e.g. atom index 0 is occupying site index 9 (both counting from zero)
```
s = trajectory.sites[9]
print(s.contains_atom(trajectory.atoms[0]))
print(s.contains_atom(trajectory.atoms[1]))
s = trajectory.site_by_index(9)
print(s.contains_atom(trajectory.atom_by_index(0)))
print(s.contains_atom(trajectory.atom_by_index(1)))
```
The list of atoms occupying each site can be accessed using `trajectory.site_occupations`.
The occupations of each site are stored as a list of lists, as each site can have zero, one, or multiple atoms occupying it.
```
trajectory.site_occupations
```
A *trajectory* consists of a series of site occupations over multiple timesteps. A single timestep can be processed using the `trajectory.append_timestep()` method:
```
trajectory.append_timestep(structure)
```
There are two ways to think about a site-projected trajectory:
1. From an atom-centric perspective. Each atom visits a series of sites, and occupies one site each timestep.
2. From a site-centric perspective. Each site is visited by a series of atoms, and has zero, one, or more atoms occupying it at each timestep.
These two trajectory types can be accessed using the `Atom.trajectory` and `Site.trajectory` attributes:
```
print(atoms[3].trajectory)
print(sites[3].trajectory)
trajectory.timesteps
np.array(trajectory.atoms_trajectory)
trajectory.sites_trajectory
```
Example of processing a simulation trajectory using the `XDATCAR` file:
(using `trajectory.reset()` to reset the trajectory data.)
```
trajectory.reset()
xdatcar = Xdatcar('XDATCAR')
for timestep, s in enumerate(xdatcar.structures):
trajectory.append_timestep(s, t=timestep)
```
Checking which sites Na(3) has visited:
```
print(trajectory.atom_by_index(3).trajectory) # convert to a numpy array to then use numpy array slicing to extract a single atom trajectory.
```
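The in-code comment above suggests converting the atoms trajectory to a NumPy array so that ordinary slicing can pull out a single atom's site sequence. A minimal sketch with mock data (the nested list below merely stands in for the real `trajectory.atoms_trajectory` attribute):

```python
import numpy as np

# rows = timesteps, columns = atoms; values = occupied site index (mock data)
atoms_trajectory = [[14, 20, 31],
                    [14, 20, 31],
                    [72, 20, 33]]

at = np.array(atoms_trajectory)
atom_0_sites = at[:, 0]   # site sequence visited by atom index 0
```

Here atom 0 occupies site 14 for two timesteps before moving to site 72, which is exactly the kind of transition discussed below.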
Na(3) starts in site 14, and moves to site 72 at timestep 5.
The same information can be seen by querying the site occupation data for sites 14 and 72:
(note: `trajectory.st` can be used as shorthand for `trajectory.sites_trajectory`)
```
print(trajectory.site_by_index(14).trajectory)
print(trajectory.site_by_index(72).trajectory)
```
# Fun with numpy.random
Exploring the numpy.random library as an assignment for Programming for Data Analysis, GMIT 2019
Lecturer: Dr Brian McGinley
>Author: **Andrzej Kocielski**
>Github: [andkoc001](https://github.com/andkoc001/)
>Email: G00376291@gmit.ie, and.koc001@gmail.com
Created: 11-10-2019
This Notebook should be read in conjunction with the corresponding README.md file at the assignment repository at GitHub: <https://github.com/andkoc001/fun-with-numpy-random/>.
___
## Introduction
NumPy (Numerical Python) is a library of external methods to Python, dedicated to numerical computing. One of its capabilities is pseudo-random number generator - a random sampling package - `numpy.random`. The package can be divided into four sections (as per [NumPy documentation](https://docs.scipy.org/doc/numpy-1.16.0/reference/routines.random.html)):
1. Simple random data,
2. Permutations,
3. Distributions,
4. Random generator.
Below we will take a closer look at each of these sections.
Note: In this Notebook terms 'function', 'method', 'routine' and 'subroutine' are used interchangeably.
### Setting up the scene
Importing numpy.random library and version check.
```
import numpy as np # NumPy package
import matplotlib.pyplot as plt # plotting engine
# below command will allow for the plots being displayed inside the Notebook, rather than in a separate screen.
%matplotlib inline
np.version.version # NumPy version check
```
A built-in help is available, accessible through the following commands:
`dir()` prints out the available functionality of the passed object
`help()` shows the doc-string of the passed method
```
# dir(np.random) # commented out for clarity
# help(np.random.randint) # commented out for clarity
```
___
## NumPy and numpy.random
### What is NumPy?
The Python programming language is acclaimed in scientific communities of many specialisations for its capacity for handling large amounts of data. One of the reasons is that Python is open source, so its native functionality can be extended with external libraries developed for specific purposes. NumPy is an example of such an extension.
NumPy is a package dedicated to numerical computing. It is particularly acclaimed for handling n-dimensional arrays.
Here is an excerpt from Wikipedia's entry on NumPy:
> NumPy (...) is a library for the Python programming language, adding support for large, multi-dimensional arrays and matrices, along with a large collection of high-level mathematical functions to operate on these arrays. - Wikipedia
### Overall purpose of random sampling routine in NumPy
NumPy comes with a large number of built-in functionalities, referred to in the library documentation as routines. Random sampling (numpy.random) is an example of such a routine (function).
The general purpose of the package is to generate n-dimensional arrays of pseudo-random numbers. As per the entry on the Geeks for Geeks [website](https://www.geeksforgeeks.org/random-sampling-in-numpy-random_sample-function/):
> numpy.random.random_sample() is one of the function for doing random sampling in numpy. It returns an array of specified shape and fills it with random floats in the half-open interval [0.0, 1.0). - Geeks for Geeks
The numpy.random package has a number of typical applications. The most popular include:
- random number generation,
- choice from a list,
- creation of dummy data (e.g. for demonstration of some other functionalities),
- simulation of statistical distribution,
- hypothesis testing (assessment of how reasonable the observed data is, assuming a hypothesis is true),
- shuffling existing data (permutation - random reordering of entries in an array),
- simulation of uncertainty,
- simulation of noise.
Intriguingly, pure Python has a built-in `random.random` function. Compared to `numpy.random`, however, there are some differences, according to a [reply](https://stackoverflow.com/a/7030595) on Stack Overflow:
> The `numpy.random` library contains a few extra probability distributions commonly used in scientific research, as well as a couple of convenience functions for generating arrays of random data. The `random.random` library is a little more lightweight, and should be fine if you're not doing scientific research or other kinds of work in statistics. - Stack Overflow
### Random sampling
It is important to mention here that the routine returns numbers that are not random in the true meaning of the word. The numbers are generated deterministically from an internal state (seeded, for example, from the system's current time), and the same sequence can be repeated when the routine is called under the same conditions. For that reason, the generated numbers are referred to as **pseudo random**. They are not suitable for cryptography or other security purposes. This concept will be discussed further in the notebook, in the section concerning the _seed_.
The technique used by the numpy.random package is the popular Mersenne Twister pseudo-random number generator (PRNG) ([Wikipedia](https://en.wikipedia.org/wiki/Mersenne_Twister)).
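The repeatability mentioned above can be demonstrated directly with `np.random.seed`; re-seeding the generator replays exactly the same sequence (a quick sketch, the seed value 42 being arbitrary):

```python
import numpy as np

np.random.seed(42)
first = np.random.random(3)

np.random.seed(42)            # same seed, same conditions...
second = np.random.random(3)  # ...same "random" numbers
```

This determinism is very useful for reproducible notebooks and tests, and it is precisely why Mersenne Twister output must never be used for secrets.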
___
## Simple random data
The NumPy library comes with a large number of built-in functionalities, referred to in the library documentation as routines. Random sampling (`numpy.random`) is an example of such a subroutine.
**Simple random data** is a collection of methods used for two categories of application:
1) generating pseudo-random numbers from a range,
2) randomly selecting objects from a list.
In the first category, there are several methods producing different outputs. For instance, `np.random.random()` generates float numbers from the half-open range [0,1), whereas `np.random.randint()` generates integer numbers from a given range.
The second category offers the functionality of randomly picking objects from an existing list.
Below we will see example use of a few methods from the Simple random data.
### Function random.random
This method returns random float number(s) from the _uniform distribution_ on [0,1), i.e. from 0 (inclusive) to 1 (exclusive).
```
# get a random float number from *uniform distributtion* on [0,1), i.e. from 0 (inclusive) to 1 (exclusive)
np.random.random()
# get 5 random numbers from [0,1) and print out their average
sum = 0
for i in range(5):
x = np.random.random()
sum = sum + x
print(i+1,":",x, end=", ")
print("\nMean:",sum/5)
# get a n-dimensional array (ndarray) of random numbers on [0,1); when no value is parsed, it returns a simple float number
np.random.random((2,3)) # double brackets, because this method takes a single value only - in this case a tuple
```
### Function random.randn
This method generates an n-dimensional array of numbers drawn from the _standard normal distribution_.
```
np.random.randn(2, 4)
```
The probability of a random number occurring far from the centreline decreases rapidly, but never becomes exactly zero.
It may be convenient to compare the results generated by the `random` and `randn` subroutines on a single plot. In the plot below, the results are spread randomly around a horizontal baseline using both the uniform (`random`) and normal (`randn`) distributions. Note that the uniform distribution is offset by 0.5 to centre it on the baseline. The uniform values are limited to ±0.5 from the baseline, with equal probability of occurring anywhere in that range. The normal values are not limited, but their probability of occurrence decreases the farther they are from the baseline.
```
# Plotting random distribution vs normal distribution.
x = np.arange(0.0, 101, 1) # set range of x values for plotting
# horizontal lines
y = x/x # constant horizontal line against x - will be used as a baseline for showing random noise
plt.plot(x, y, 'k--', label='baseline', alpha=0.75) # baseline for uniform distribution noise
# random data points
noise_uniform = np.random.random(size=(len(x)))-0.5 # random (uniform) noise on [0,1), offset by 0.5 to centre it about the baseline at y=1
noise_normal = np.random.normal(0.0, 0.5, len(x)) # normal distribution of noise
# Uniform distribution
plt.plot(x, y + noise_uniform, 'mx', label='uniform') # magenta x-es denote the uniform noise value for each sample
# Normal distribution
plt.plot(x, y + noise_normal, 'cx', label='normal', alpha=0.75) # cyan x-es denote normal noise value for each sample
plt.legend()
```
### Function random.randint
This method generates integer number(s) in a given range. The provided range is again half-open (left-sided inclusive, right-sided exclusive). The size parameter tells NumPy how many observations are to be generated, and can be organised in a multi-dimensional array.
```
np.random.randint(1,11, size=5) # 5 random integers in range between 1 and 10 inclusive
```
### Function random.choice
The second category of _simple random data_ subroutines picks items from a pre-defined set of objects.
The `random.choice` method returns objects, which do not necessarily have to be numbers, from an existing list or array. It is possible for the same object to be selected more than once.
```
list_1 = ["dog", "cat", "snake", "rat", "crow"] # predefinition of list of objects
np.random.choice(list_1[:2], size=[4,1]) # random selection of objects from the list and indices 0 or 1, arranged into 4x1 array; some results may not appear or may appear more than once
```
It is also possible to assign a probability for each option:
```
np.random.choice(list_1, p=[0.1, 0.1, 0.1, 0.1, 0.6], size=6)
```
### Function random.bytes
Returns a string of random characters as bytes literals (prefixed with the `b` notation). Some of the characters are encoded with the `\x` notation, meaning the hexadecimal address of the character in the ASCII table. Here is an extract from the Python [documentation](https://docs.python.org/3/reference/lexical_analysis.html#string-and-bytes-literals):
> Bytes literals are always prefixed with 'b' or 'B'; they produce an instance of the bytes type instead of the str type. They may only contain ASCII characters; bytes with a numeric value of 128 or greater must be expressed with escapes.
This method may be used for a password generator (although it is still a pseudo random value).
The parameter passed to the method is the length of the returned string. For example:
```
np.random.bytes(1) # generate a random 1-long string of byte literals
```
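As a sketch of the token idea (remembering the caveat above: Mersenne Twister output is predictable, so for real passwords Python's `secrets` module is the right tool), the raw bytes can be rendered as hexadecimal characters:

```python
import numpy as np

np.random.seed(0)         # only to make this demo reproducible
raw = np.random.bytes(8)  # 8 pseudo-random bytes
token = raw.hex()         # 16 hexadecimal characters
```

Each byte becomes two hex digits, so `np.random.bytes(n)` yields a `2*n`-character token.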
___
## Permutations
This group of NumPy methods allows you to randomly reorder the objects in a set or a sub-set (range). It consists of two subroutines: `shuffle` and `permutation`.
### Function random.shuffle
`np.random.shuffle` method randomly reorders the items of the entire set _in place_, that is, the original order is overwritten with the new sequence.
```
numbers = [1,2,3,4,5] # avoid naming the variable `list`, which would shadow the built-in type
print(numbers) # in original order
np.random.shuffle(numbers)
numbers # in new order, overwriting the original
```
### Function random.permutation
`np.random.permutation` method returns a new array (a copy of the original) with the objects from a list randomly ordered. The output contains all the objects from the original array, each appearing precisely as many times as in the original set.
```
list_2 = [1,2,3,4,5] # list of objects
np.random.permutation(list_2) # the original list remains intact
```
It is worth noting that `np.random.permutation` is built on top of the `np.random.shuffle` subroutine, which is used in its source code. The extra copy-returning behaviour applies when an array is passed; when an integer `n` is passed, it returns a shuffled `np.arange(n)`.
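Both behaviours described above can be checked in a short sketch: the integer form returns a shuffled range, and the array form leaves the original object untouched:

```python
import numpy as np

perm = np.random.permutation(5)          # shuffled copy of np.arange(5)

original = [1, 2, 3, 4, 5]
reordered = np.random.permutation(original)  # original list is not modified
```

Whatever the random order, both outputs are exact rearrangements of their inputs, and `original` keeps its initial ordering.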
___
## Selected distributions
NumPy comes with a selection of built-in probability distributions, which are used to sample random data following specific statistical patterns.
In NumPy v1.17.2, there are thirty-five different distributions available. Below we will discuss five of them, namely: uniform, standard normal, binomial, exponential, and Weibull.
### 1. Uniform distribution
`numpy.random.uniform` function generates random floating-point number(s). Each value returned by this method is equally probable. The generated numbers come from a half-open range whose ends are defined when the function is called. If the range is not defined, it defaults to [0,1), in which case the subroutine behaves the same as `np.random.random`.
```
np.random.uniform(size=3)
np.mean(np.random.uniform(0.9,1, size=10) * 10)
```
This distribution can be interpreted graphically as a 2-dimensional plane divided into equal areas. For instance, for 1000 random numbers generated (x-axis) in the range [1,1000) (y-axis), each sub-range on the y-axis, e.g. 1-200, 201-400, etc., would receive roughly the same number of hits, each with equal probability.
In the plot below, the expected number of dots in each grid cell is equal, which becomes even clearer for larger numbers of samples.
```
plt.figure(figsize=(5,5)) # size of the plot (width, height)
plt.plot(np.random.uniform(1,1000, size=1000), 'b.', alpha=0.5)
plt.grid()
```
The uniform distribution tends to fill histogram bins uniformly, which becomes clearer as the number of samples increases. In other words, the more samples, the more equally filled each bin of the histogram is.
```
plt.figure(figsize=(14,4))
plt.subplot(1, 3, 1)
plt.hist(np.random.uniform(0,100, size=10**2)) # 100 samples
plt.subplot(1, 3, 2)
plt.hist(np.random.uniform(0,100, size=10**4)) # 10000 samples
plt.subplot(1, 3, 3)
plt.hist(np.random.uniform(0,100, size=10**6)) # 1000000 samples
plt.show() # calling show() also suppresses the text output of the last plotting command - for clarity
```
### 2. Standard normal distribution
This distribution is a special case of the normal distribution. `numpy.random.standard_normal` has its own subroutine in NumPy. It draws from the standard normal (Gaussian) distribution with mean=0 and standard deviation=1 (another NumPy routine, `np.random.normal`, allows these parameters to be changed).
The normal distribution shows a strong tendency to take values close to the centre of the distribution, with the frequency decreasing sharply as the deviation increases. There are a great many examples where this distribution closely fits observations in nature, e.g. the height of people: most have a height close to the mean, with fewer and fewer people much shorter or taller. Extreme values are possible, but of low probability.
The more samples, the more "ideal" a shape the distribution takes.
```
std_normal = np.random.standard_normal(size=1000) # standard normal distribution generation for n=1000 samples
plt.hlines(0,0,1000, colors='r') # baseline
plt.grid()
plt.plot(std_normal, 'b.', alpha=.5)
# for reference, actual distribution parameters from the generated set
print("Actual minimum: ", np.min(std_normal))
print("Actual maximum: ", np.max(std_normal))
print("Actual mean: ", np.mean(std_normal))
print("Actual standard deviation: ", np.std(std_normal))
plt.hist(std_normal, bins=40)
plt.show() # calling show() also suppresses the text output of the last plotting command - for clarity
```
For comparison, in the plot below the same histogram is overlaid with the normal probability density function (pdf). For the histogram to fit, it must be re-scaled so that the y-axis corresponds to probability density rather than a count of occurrences. This re-scaling is called normalisation: the total area of the histogram becomes 1.
```
# Adopted from https://docs.scipy.org/doc/scipy-0.14.0/reference/generated/scipy.stats.norm.html
# Importing additional function `norm` from `scipy.stats` library
from scipy.stats import norm
fig, ax = plt.subplots(1, 1) # a single set of axes on one figure
# defining limits for the normal distribution
x = np.linspace(norm.ppf(0.001), norm.ppf(0.999), 100) # generates 100 data points between the 0.001 and 0.999 quantiles, covering 99.8% of the probability mass
# normal distribution (probability density function pdf)
ax.plot(x, norm.pdf(x), 'b', label='normal pdf')
#And compare the histogram:
ax.hist(std_normal, bins=40, density=True, alpha=0.25, label='histogram') # parameter 'density' toggled on makes the area of the histogram be normalised to 1
ax.legend()
plt.show()
```
### 3. Binomial distribution
`np.random.binomial` samples from the binomial distribution, where each trial has two possible outcomes, often represented as a _success_ or a _failure_. This distribution gives the number of successful trials out of `n` total trials, each with a predefined probability of success `p`.
For a large number of repetitions (represented in NumPy by the _size_ parameter) and equal probability of winning (p=0.5), the result resembles a normal distribution.
```
np.random.binomial(1,0.5, size=10) # for 10 attempts, what is the result of the test, 1 - success, 0 - failure
a = np.random.binomial(10,0.5, size=10000) # how many successes in n=10 trials of tossing a 'fair' coin (p=0.5), test repeated 10000 times
print("Actual mean: ", np.mean(a))
print("Actual standard deviation: ", np.std(a))
plt.hist(a, bins=21) # density of successes in 10-trial binomial tests repeated 10000 times
plt.show()
```
### 4. Exponential distribution
`numpy.random.exponential` draws samples from the exponential distribution.
This type of distribution typically describes situations where an object can change its state from one to another with a constant probability per unit of time. In other words, if independent events occur at a constant average rate, the exponential distribution describes the time intervals between successive events. The distribution has many real-world applications, for instance modelling the time between successive customers entering a shop.
The NumPy function takes two parameters: _scale_, which relates to the time period (by default 1.0), and _size_, the number of samples to draw (by default a single value is returned).
The exponential distribution is a special case of the gamma distribution, and is related to the Poisson distribution (which describes counts of low-probability events, e.g. the number of accidents per person).
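The relationship with the Poisson distribution can be verified numerically: if the gaps between events are exponential with scale 1 (i.e. one event per unit of time on average), then the number of events falling into each unit-length interval follows a Poisson distribution with mean 1. A quick illustrative sketch:

```python
import numpy as np

gaps = np.random.exponential(scale=1.0, size=100000)   # exponential inter-event times
arrivals = np.cumsum(gaps)                             # timestamps of the events
# count events in consecutive unit-length intervals
counts = np.histogram(arrivals, bins=np.arange(0, int(arrivals[-1]) + 1))[0]
print(np.mean(counts))   # approximately 1 - the Poisson mean for this rate
print(np.var(counts))    # approximately 1 too - mean equals variance for Poisson
```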
Below is an example of the distribution and the graphical interpretation.
```
plt.figure(figsize=(12,6)) # size of the plot (width, height)
expon = np.random.exponential(1, [1000,2]) # the first parameter (scale) relates to time period, the second one (size) - to number of experiments (2 sets of 1000 experiments)
plt.hist(expon, bins=40, alpha=0.8, rwidth=0.6) # the values in bins are grouped by set of experiments, each set represented by a different colour
plt.show()
```
Similarly to the normal distribution in section 2 above, in the plot below the same histogram is overlaid with the exponential probability density function (pdf) for comparison. Again, for the histogram to fit, it must be re-scaled so that the y-axis corresponds to the probability of occurrence rather than the count of occurrences.
```
# adapted from https://stackoverflow.com/a/47324702
# exponential pdf
def dist_func(x, a):
return(a*np.exp(-a*x))
fig = plt.figure(figsize=(12, 6)) # size of the plot (width, height)
ax = fig.add_subplot(1, 1, 1 )
# histogram takes data from the value generated in the above Notebook cell
ax.hist(expon, bins=40, alpha=0.5, rwidth=0.6, density=True) # the same format as above
# ideal exponential pdf curve, for comparison
max_e = np.max(expon) # the largest of the generated values
ax.plot(np.linspace(0, max_e, 100), dist_func(np.linspace(0, max_e, 100), 1), 'k', alpha=0.9) #
plt.show()
```
### 5. Weibull distribution
`numpy.random.weibull` draws samples from the Weibull distribution. This probability distribution has found numerous applications in engineering. For instance, it is often used in equipment reliability studies, e.g. to evaluate the mean time between failures (MTBF) - maintenance and reliability related concepts were discussed in the [paper](http://system.logistics-and-transport.eu/index.php/main/article/view/509) I published.
The function takes two parameters: shape and size.
The _shape_ parameter is a positive float.
Its value dictates 'the slope of the regressed line in the probability plot' (ReliaWiki), corresponding to the conditions being simulated. For shape > 1 the probability density resembles a skewed normal distribution; for shape < 1 it resembles an exponential distribution.
For example, below is a plot showing the _shape_ values 0.5, 1.0, 1.5 and 3.0.
```
# adapted from NumPy documentation, https://docs.scipy.org/doc/numpy-1.16.0/reference/generated/numpy.random.weibull.html#numpy.random.weibull
x = np.arange(0, 2, 0.01) # range of x-axis (start, stop, step)
def weib_pdf(x, a): # a - shape of the weibull distribution, scale fixed to 1; adapted from https://numpy.org/doc/1.17/reference/random/generated/numpy.random.Generator.weibull.html
    return (a / 1) * (x / 1)**(a - 1) * np.exp(-(x / 1)**a) # scale=1
plt.plot(x, weib_pdf(x, 0.5), label='shape = 0.5')
plt.plot(x, weib_pdf(x, 1.0), label='shape = 1.0')
plt.plot(x, weib_pdf(x, 1.5), label='shape = 1.5')
plt.plot(x, weib_pdf(x, 3.0), label='shape = 3.0')
plt.xlim(0, 2)
plt.ylim(0, 2)
plt.legend()
plt.show()
```
The _size_ parameter determines the number of samples drawn. The plot below illustrates the Weibull density for two sets of experiments, of 500 observations each, distinguished in the plots by colour.
```
weib = np.random.weibull(2, [500,2]) # the first parameter (shape), the second one (size) - to number of experiments (2 sets of 500 experiments)
plt.figure(figsize=(9,4)) # size of the plot (width, height)
plt.subplot(1, 2, 1)
plt.hist(weib, bins=14, alpha=0.75) # results of the two sets of experiments are represented by different colours
plt.subplot(1, 2, 2)
plt.plot(weib, '.', alpha=0.75)
plt.hlines((np.mean(weib)),0,500, colors='k') # draws a horizontal line at mean-value for comparison purpose
plt.show()
```
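In the reliability context mentioned at the start of this section, the sample mean of Weibull-distributed failure times is an estimate of the MTBF. For shape $k$ and scale 1, the theoretical mean is $\Gamma(1 + 1/k)$, so the estimate can be checked against theory. A quick sketch (the shape value 2 is just an example):

```python
import numpy as np
from math import gamma

shape = 2.0
failure_times = np.random.weibull(shape, size=100000)   # scale = 1
mtbf_estimate = np.mean(failure_times)                  # sample mean as MTBF estimate
mtbf_theory = gamma(1 + 1/shape)                        # theoretical mean Gamma(1 + 1/k) for scale 1
print(mtbf_estimate, mtbf_theory)
```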
___
## Seed
#### The concept of Pseudo Randomness
Randomness is an interesting question in computer science in general, and poses some technical challenges (source: [Quora](https://www.quora.com/Why-is-it-so-hard-to-generate-actually-random-numbers)).
In practice, uncertainty is simulated by the application of algorithms. The resulting numbers are obtained in a deterministic way: for k distinct inputs there can be only k possible outputs, and the finite amount of computer memory limits the calculations. The generated results are not truly random, and so are called pseudo-random. The technique of acquiring numbers via such algorithms is referred to as a _Pseudo Random Number Generator_ (PRNG). Of course, much effort is put into ensuring the generated results appear satisfactorily random. The algorithms are complex and, for most typical applications (excluding security and cryptography, not discussed here), the simulated randomness is sufficient.
Python (and NumPy) uses the Mersenne Twister algorithm as its PRNG. Below is a short excerpt on the algorithm (source: [Stack Overflow](https://stackoverflow.com/a/7030595)).
> [Python and NumPy] both use the Mersenne twister sequence to generate their random numbers, and they're both completely deterministic - that is, if you know a few key bits of information, it's possible to predict with absolute certainty what number will come next. For this reason, neither numpy.random nor random.random is suitable for any serious cryptographic uses. But because the sequence is so very very long, both are fine for generating random numbers in cases where you aren't worried about people trying to reverse-engineer your data. This is also the reason for the necessity to seed the random value - if you start in the same place each time, you'll always get the same sequence of random numbers!
#### Seed
The above quote gives an insight into the concept of _seed_. It is, essentially, a numeric input from which the algorithm produces the pseudo-random output. The seed defines the initial conditions for the generator; if the initial conditions are the same, the result will also be the same. The PRNG naturally modifies the internal state derived from the seed on a continuous basis. Randomness, often referred to as entropy, is gathered from inputs from the outside - an element of chaos in the natural world. There are various sources of such inputs from the hardware, like mouse movements or the noise generated by a fan ([Wikipedia](https://en.wikipedia.org/wiki/Entropy_(computing))).
Often, the seed is taken from the system clock. The system clock in Unix-based systems stores time as the number of seconds elapsed since 1st January 1970. As the passing time is monitored with high precision (CPU dependent), the pseudo-random number generated at each moment may appear different and random.
#### System clock
The current time in seconds elapsed since 1st January 1970 can be obtained with the `time()` method from the Python `time` library.
```
import time
time.time()
np.random.random()
```
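For illustration, the clock value can be passed explicitly as a seed. `np.random.seed` accepts an integer, so the float returned by `time.time()` is truncated here (this is only a sketch of the idea - NumPy seeds itself from the operating system, not literally via this call):

```python
import time
import numpy as np

np.random.seed(int(time.time()))  # seed derived from the system clock
value = np.random.random()        # different on each run (each second)
print(value)
```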
#### Seed reuse
Furthermore, as per another [Stack Overflow response](https://stackoverflow.com/a/32172816), it is possible to retrieve the state of the generator with the `numpy.random.get_state` method, and hence reproduce the same results.
The user can also pass a _seed_ value to the random function to control the generated data, for example in order to produce identical output.
```
# set seed number
np.random.seed(1) # seed = 1
# generate pseudo-random numbers
for i in range(4):
print(np.random.random(), end=" ")
print("\n") # new line
# use the same seed value again to get the same output
np.random.seed(1)
# generate pseudo-random numbers
for i in range(4):
print(np.random.random(), end=" ")
```
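The state-snapshot approach from the Stack Overflow response above can be sketched as follows: capture the generator state with `get_state`, draw some numbers, then restore it with `set_state` to reproduce exactly the same sequence:

```python
import numpy as np

state = np.random.get_state()      # snapshot of the Mersenne Twister internal state
first = np.random.random(3)
np.random.set_state(state)         # rewind the generator to the snapshot
second = np.random.random(3)
print(np.allclose(first, second))  # True - the two sequences are identical
```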
___
## References
### General Python
- [GMIT _Programming for Data Analysis_ module materials](https://learnonline.gmit.ie/course/view.php?id=1127)
- [Python 3 official tutorial](https://docs.python.org/3/tutorial/)
### General NumPy
- [NumPy official documentation](https://docs.scipy.org/)
- [NumPy repository on GitHub](https://github.com/numpy/numpy)
### Random sampling in NumPy
- [NumPy v1.17 - Random sampling](https://docs.scipy.org/doc/numpy/reference/random/index.html)
- [NumPy v1.16 - Random sampling routines](https://numpy.org/doc/1.16/reference/routines.random.html)
- [GeeksforGeeks - Random sampling in numpy](https://www.geeksforgeeks.org/random-sampling-in-numpy-random_sample-function/)
- [Medium - Incredibly Fast Random Sampling in Python](https://medium.com/ibm-watson/incredibly-fast-random-sampling-in-python-baf154bd836a)
- [Tutorialspoint - Generate pseudo-random numbers in Python](https://www.tutorialspoint.com/generate-pseudo-random-numbers-in-python)
- [Quora - Why is it so hard to generate actually random numbers?](https://www.quora.com/Why-is-it-so-hard-to-generate-actually-random-numbers)
- [Wikipedia - Random numbers generation](https://en.wikipedia.org/wiki/Random_number_generation)
- [Stack Overflow - Differences between numpy.random and random.random in Python](https://stackoverflow.com/a/7030595)
- [NumPy documentation - Mersenne Twister](https://docs.scipy.org/doc/numpy/reference/random/bit_generators/mt19937.html)
- [Sharp Sight - NumPy Random Seed Explained](https://www.sharpsightlabs.com/blog/numpy-random-seed/)
- [Machine Learning Mastery - How to generate random numbers in Python](https://machinelearningmastery.com/how-to-generate-random-numbers-in-python/)
### Statistics
- [DataCamp - Statistical thinking in Python course, part 2](https://www.datacamp.com/courses/statistical-thinking-in-python-part-2)
- [Quora - What is the difference between an exponential, gamma and poisson distribution?](https://www.quora.com/What-is-the-difference-between-an-exponential-gamma-and-poisson-distribution)
- [ReliaWiki - The Weibull distribution](http://reliawiki.org/index.php/The_Weibull_Distribution)
### Others
- [Software Carpentry - Version Control with Git](http://swcarpentry.github.io/git-novice/)
- [Mastering Markdown](https://guides.github.com/features/mastering-markdown/)
- [Modern Machine Learning Algorithms: Strengths and Weaknesses](https://elitedatascience.com/machine-learning-algorithms)
___
Andrzej Kocielski
| github_jupyter |
```
import pandas as pd
import os
import shutil
from PIL import Image
import cv2
iou = 0.4
depth = 30
df = pd.read_csv("/home/rinat/Downloads/videos/sample/sample.csv")
df
def add_cluster_members(index, df, cluster_number, depth, iou):
row = df.loc[[index]]
for i in range(depth):
candidate_rows = df[(df['frame_number'] == row.iloc[0]['frame_number'] + i + 1) &
(df['class'] == row.iloc[0]['class'])]
if not candidate_rows.empty:
row_iou = calculate_highest_iou(row, candidate_rows)
if row_iou and row_iou['iou'] >= iou:
df.loc[row_iou['row'].index[0], 'cluster'] = cluster_number
add_cluster_members(row_iou['row'].index[0], df, cluster_number, depth, iou)
break
def add_cluster_members_new(index, df, cluster_number, depth, iou, max_claster):
stop = False
recursive_depth = 1
while not stop:
#clear_output(wait=True)
print('MAX CLASTER: ' + str(max_claster) + ' cluster_number: ' + str(cluster_number) + ' RECURSIVE DEPTH: ' + str(recursive_depth))
print('INDEX: ' + str(index))
row = df.loc[[index]]
df_depth = df[index:index+depth]
base_frame_number = row.iloc[0]['frame_number']
base_class = row.iloc[0]['class']
print('FRAME NUMBER: ' + str(base_frame_number), 'BASE CLASS: ' + str(base_class))
for i in range(depth):
print('DEPTH: ' + str(i))
#print('FRAME NUMBER: ' + str(base_frame_number + i + 1))
candidate_rows = df_depth[(df_depth['frame_number'] == base_frame_number + i + 1) &
(df_depth['class'] == row.iloc[0]['class'])]
if not candidate_rows.empty:
row_iou = calculate_highest_iou(row, candidate_rows)
if row_iou and row_iou['iou'] >= iou:
df.loc[row_iou['row'].index[0], 'cluster'] = cluster_number
index = row_iou['row'].index[0]
recursive_depth +=1
stop = False
#print('BREAK!')
break
stop = True
def calculate_highest_iou(row, candidate_rows):
row_iou = {}
iou_candidate = 0
for i in range(candidate_rows.shape[0]):
candidate = candidate_rows.iloc[[i]]
if iou_candidate < calculate_iou(row, candidate):
iou_candidate = calculate_iou(row, candidate)
row_iou['row'] = candidate
row_iou['iou'] = iou_candidate
return row_iou
def calculate_iou(row, candidate):
boxA = string_to_list(row.iloc[0]['box'])
boxB = string_to_list(candidate.iloc[0]['box'])
#Following code was copied from https://www.pyimagesearch.com/2016/11/07/intersection-over-union-iou-for-object-detection/
# determine the (x, y)-coordinates of the intersection rectangle
xA = max(boxA[0], boxB[0])
yA = max(boxA[1], boxB[1])
xB = min(boxA[2], boxB[2])
yB = min(boxA[3], boxB[3])
# compute the area of intersection rectangle
interArea = max(0, xB - xA + 1) * max(0, yB - yA + 1)
# compute the area of both the prediction and ground-truth
# rectangles
boxAArea = (boxA[2] - boxA[0] + 1) * (boxA[3] - boxA[1] + 1)
boxBArea = (boxB[2] - boxB[0] + 1) * (boxB[3] - boxB[1] + 1)
# compute the intersection over union by taking the intersection
# area and dividing it by the sum of prediction + ground-truth
# areas - the interesection area
iou = interArea / float(boxAArea + boxBArea - interArea)
# return the intersection over union value
return iou
def string_to_list(box):
box = box[1:][:-1].split()
return [float(i) for i in box]
df_clusters = df.copy()
df_clusters['cluster'] = -1
max_claster = df_clusters.shape[0]
for i in range(df_clusters.shape[0]):
if df_clusters.loc[i, 'cluster'] == -1:
df_clusters.loc[i, 'cluster'] = i
add_cluster_members(i, df_clusters, i, depth, iou)
#add_cluster_members_new(i, df_clusters, i, depth, iou, max_claster)
df_clusters
df_clusters.to_csv("/home/rinat/Downloads/videos/sample/sample_clusters.csv")
df_filtered = df_clusters[df_clusters['cluster'] != -1].copy()
df_filtered
df_filtered = df_filtered[(df_filtered['class'] == 'person')]
#|
#(df_filtered['class'] == 'car') |
#(df_filtered['class'] == 'truck') |
#(df_filtered['class'] == 'bus')]
df_filtered_by_frequency = df_filtered.copy()
df_filtered_by_frequency.groupby('cluster').size()
df_filtered_by_frequency['cluster_size'] = df_filtered_by_frequency.groupby('cluster')['cluster'].transform('size')
df_filtered_by_frequency = df_filtered_by_frequency[(df_filtered_by_frequency['cluster_size'] >= 6)]
df_normal = df_filtered_by_frequency.copy()
df_normal['file_path'] = df_normal['file_path'].str.replace('../wonderfull_life/wl_bw', '/home/rinat/wl/wl_bw', regex=False)  # literal replacement, '..' would be wildcards in regex mode
df_normal.groupby('cluster').size()
def save_df_files(df, folder, file_path = 'file_path', subfolder = None):
for index, row in df.iterrows():
file_name = row[file_path].split('/')[-1]
directory = folder
if subfolder:
directory = directory + '/' + str(row[subfolder])
if not os.path.exists(directory):
os.makedirs(directory)
shutil.copyfile(row[file_path], directory + '/' + file_name)
# Sort by cluster and frame number before using
def cluster_video(df, folder, file_path = 'file_path', video_col = 'cluster'):
cluster = -1
video = None
fourcc = cv2.VideoWriter_fourcc(*'MP4V')
for index, row in df.iterrows():
img = cv2.imread(row[file_path])
if row[video_col] != cluster:
if video is not None:
video.release()
height, width, layers = img.shape
print('-----Enter cluster-------')
print(img[0][0])
if not os.path.exists(folder):
os.makedirs(folder)
video_path = folder + '/' + str(row[video_col]) + '.mp4'
print(video_path)
video = cv2.VideoWriter(video_path, fourcc, 10, (width, height))
print(video)
cluster = row[video_col]
print('------------')
print(img[0][0])
video.write(img)
video.release()
# Sort by cluster and frame number before using
def make_video(df, folder, file_path = 'file_path', video_name = 'analyzed'):
    img = cv2.imread(df.iloc[0][file_path])
    height, width, layers = img.shape
    if not os.path.exists(folder):
        os.makedirs(folder)
    video_path = folder + '/' + str(video_name) + '.mp4'
    print(video_path)
    fourcc = cv2.VideoWriter_fourcc(*'MP4V')  # codec must be defined before creating the writer
    video = cv2.VideoWriter(video_path, fourcc, 10, (width, height))
    for index, row in df.iterrows():
        img = cv2.imread(row[file_path])
        video.write(img)
    video.release()
def crop_df_files(df, folder, crop, source_prefix = '', file_path = 'file_path', subfolder = None):
for index, row in df.iterrows():
file_name = row[file_path].split('/')[-1]
directory = folder
if subfolder:
directory = directory + '/' + str(row[subfolder])
if not os.path.exists(directory):
os.makedirs(directory)
im = Image.open(source_prefix + row[file_path])
top, left, bottom, right = string_to_list(row[crop])
crop_im = im.crop((left, top, right, bottom))
crop_im.save( directory + '/' + file_name, "JPEG")
crop_df_files(df_filtered_by_frequency, '/home/rinat/yolo_demo/filtered_cars_croped', source_prefix = '/home/rinat/keras-yolo3/', crop = 'box', subfolder = 'cluster')
!ls
save_df_files(df_normal, '../data/wl_analysis/filtered_benches_clustered', file_path = 'file_path', subfolder = 'cluster')
cluster_video(df_normal, '../data/wl_analysis/benches_video', file_path = 'file_path', video_col = 'cluster')
image = Image.open('/home/rinat/wl/wl_bw/7368truck_train_bus_car_person_person_person.jpg')
print(image)
im = cv2.imread('/home/rinat/wl/wl_bw/7368truck_train_bus_car_person_person_person.jpg')
im
cluster_video(df_normal, '/home/rinat/Downloads/videos/sample/', file_path = 'file_path')
```
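A quick sanity check of the IoU computation used above (the function body restates the logic of `calculate_iou`; the box format `[x1, y1, x2, y2]` is assumed):

```python
# Standalone IoU check - logic copied from the notebook's calculate_iou.
def iou(boxA, boxB):
    # coordinates of the intersection rectangle
    xA = max(boxA[0], boxB[0])
    yA = max(boxA[1], boxB[1])
    xB = min(boxA[2], boxB[2])
    yB = min(boxA[3], boxB[3])
    interArea = max(0, xB - xA + 1) * max(0, yB - yA + 1)
    boxAArea = (boxA[2] - boxA[0] + 1) * (boxA[3] - boxA[1] + 1)
    boxBArea = (boxB[2] - boxB[0] + 1) * (boxB[3] - boxB[1] + 1)
    return interArea / float(boxAArea + boxBArea - interArea)

print(iou([0, 0, 9, 9], [0, 0, 9, 9]))      # identical boxes -> 1.0
print(iou([0, 0, 9, 9], [20, 20, 29, 29]))  # disjoint boxes -> 0.0
```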
| github_jupyter |
# Deutsch-Jozsa Algorithm
In this section, we first introduce the Deutsch-Jozsa problem, and classical and quantum algorithms to solve it. We then implement the quantum algorithm using Qiskit, and run it on a simulator and device.
## Contents
1. [Introduction](#introduction)
1.1 [Deutsch-Jozsa Problem](#djproblem)
   1.2 [The Classical Solution](#classical-solution)
1.3 [The Quantum Solution](#quantum-solution)
1.4 [Why Does This Work?](#why-does-this-work)
2. [Worked Example](#example)
3. [Creating Quantum Oracles](#creating-quantum-oracles)
4. [Qiskit Implementation](#implementation)
4.1 [Constant Oracle](#const_oracle)
4.2 [Balanced Oracle](#balanced_oracle)
4.3 [The Full Algorithm](#full_alg)
4.4 [Generalised Circuit](#general_circs)
5. [Running on Real Devices](#device)
6. [Problems](#problems)
7. [References](#references)
## 1. Introduction <a id='introduction'></a>
The Deutsch-Jozsa algorithm, first introduced in Reference [1], was the first example of a quantum algorithm that performs better than the best classical algorithm. It showed that there can be advantages to using a quantum computer as a computational tool for a specific problem.
### 1.1 Deutsch-Jozsa Problem <a id='djproblem'> </a>
We are given a hidden Boolean function $f$, which takes as input a string of bits, and returns either $0$ or $1$, that is:
$$
f(\{x_0,x_1,x_2,...\}) \rightarrow 0 \textrm{ or } 1 \textrm{ , where } x_n \textrm{ is } 0 \textrm{ or } 1$$
The property of the given Boolean function is that it is guaranteed to either be balanced or constant. A constant function returns all $0$'s or all $1$'s for any input, while a balanced function returns $0$'s for exactly half of all inputs and $1$'s for the other half. Our task is to determine whether the given function is balanced or constant.
Note that the Deutsch-Jozsa problem is an $n$-bit extension of the single bit Deutsch problem.
### 1.2 The Classical Solution <a id='classical-solution'> </a>
Classically, in the best case, two queries to the oracle can determine if the hidden Boolean function, $f(x)$, is balanced:
e.g. if we get both $f(0,0,0,...)\rightarrow 0$ and $f(1,0,0,...) \rightarrow 1$, then we know the function is balanced as we have obtained the two different outputs.
In the worst case, if we continue to see the same output for each input we try, we will have to check exactly half of all possible inputs plus one in order to be certain that $f(x)$ is constant. Since the total number of possible inputs is $2^n$, this implies that we need $2^{n-1}+1$ trial inputs to be certain that $f(x)$ is constant in the worst case. For example, for a $4$-bit string, if we checked $8$ out of the $16$ possible combinations, getting all $0$'s, it is still possible that the $9^\textrm{th}$ input returns a $1$ and $f(x)$ is balanced. Probabilistically, this is a very unlikely event. In fact, if we get the same result continually in succession, we can express the probability that the function is constant as a function of $k$ inputs as:
$$ P_\textrm{constant}(k) = 1 - \frac{1}{2^{k-1}} \qquad \textrm{for } k \leq 2^{n-1}$$
Realistically, we could opt to truncate our classical algorithm early, say if we were over x% confident. But if we want to be 100% confident, we would need to check $2^{n-1}+1$ inputs.
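Evaluating the formula above shows how quickly this confidence grows with $k$ (a small illustrative snippet):

```python
# probability that f is constant after k identical query outputs, for k <= 2^(n-1)
def p_constant(k):
    return 1 - 1 / 2**(k - 1)

for k in [2, 4, 8, 16]:
    print(k, p_constant(k))   # 0.5, 0.875, ~0.992, ~0.99997
```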
### 1.3 Quantum Solution <a id='quantum-solution'> </a>
Using a quantum computer, we can solve this problem with 100% confidence after only one call to the function $f(x)$, provided we have the function $f$ implemented as a quantum oracle, which maps the state $\vert x\rangle \vert y\rangle $ to $ \vert x\rangle \vert y \oplus f(x)\rangle$, where $\oplus$ is addition modulo $2$. Below is the generic circuit for the Deutsch-Jozsa algorithm.

Now, let's go through the steps of the algorithm:
<ol>
<li>
Prepare two quantum registers. The first is an $n$-qubit register initialised to $|0\rangle$, and the second is a one-qubit register initialised to $|1\rangle$:
$$\vert \psi_0 \rangle = \vert0\rangle^{\otimes n} \vert 1\rangle$$
</li>
<li>
Apply a Hadamard gate to each qubit:
$$\vert \psi_1 \rangle = \frac{1}{\sqrt{2^{n+1}}}\sum_{x=0}^{2^n-1} \vert x\rangle \left(|0\rangle - |1 \rangle \right)$$
</li>
<li>
Apply the quantum oracle $\vert x\rangle \vert y\rangle$ to $\vert x\rangle \vert y \oplus f(x)\rangle$:
$$
\begin{aligned}
\lvert \psi_2 \rangle
& = \frac{1}{\sqrt{2^{n+1}}}\sum_{x=0}^{2^n-1} \vert x\rangle (\vert f(x)\rangle - \vert 1 \oplus f(x)\rangle) \\
& = \frac{1}{\sqrt{2^{n+1}}}\sum_{x=0}^{2^n-1}(-1)^{f(x)}|x\rangle ( |0\rangle - |1\rangle )
\end{aligned}
$$
since for each $x,f(x)$ is either $0$ or $1$.
</li>
<li>
At this point the second single qubit register may be ignored. Apply a Hadamard gate to each qubit in the first register:
$$
\begin{aligned}
\lvert \psi_3 \rangle
& = \frac{1}{2^n}\sum_{x=0}^{2^n-1}(-1)^{f(x)}
\left[ \sum_{y=0}^{2^n-1}(-1)^{x \cdot y}
\vert y \rangle \right] \\
& = \frac{1}{2^n}\sum_{y=0}^{2^n-1}
\left[ \sum_{x=0}^{2^n-1}(-1)^{f(x)}(-1)^{x \cdot y} \right]
\vert y \rangle
\end{aligned}
$$
where $x \cdot y = x_0y_0 \oplus x_1y_1 \oplus \ldots \oplus x_{n-1}y_{n-1}$ is the sum of the bitwise product.
</li>
<li>
Measure the first register. Notice that the probability of measuring $\vert 0 \rangle ^{\otimes n} = \lvert \frac{1}{2^n}\sum_{x=0}^{2^n-1}(-1)^{f(x)} \rvert^2$, which evaluates to $1$ if $f(x)$ is constant and $0$ if $f(x)$ is balanced.
</li>
</ol>
### 1.4 Why Does This Work? <a id='why-does-this-work'> </a>
- **Constant Oracle**
When the oracle is *constant*, it has no effect (up to a global phase) on the input qubits, and the quantum states before and after querying the oracle are the same. Since the H-gate is its own inverse, in Step 4 we reverse Step 2 to obtain the initial quantum state of $|00\dots 0\rangle$ in the first register.
$$
H^{\otimes n}\begin{bmatrix} 1 \\ 0 \\ 0 \\ \vdots \\ 0 \end{bmatrix}
=
\tfrac{1}{\sqrt{2^n}}\begin{bmatrix} 1 \\ 1 \\ 1 \\ \vdots \\ 1 \end{bmatrix}
\quad \xrightarrow{\text{after } U_f} \quad
H^{\otimes n}\tfrac{1}{\sqrt{2^n}}\begin{bmatrix} 1 \\ 1 \\ 1 \\ \vdots \\ 1 \end{bmatrix}
=
\begin{bmatrix} 1 \\ 0 \\ 0 \\ \vdots \\ 0 \end{bmatrix}
$$
- **Balanced Oracle**
After step 2, our input register is an equal superposition of all the states in the computational basis. When the oracle is *balanced*, phase kickback adds a negative phase to exactly half these states:
$$
U_f \tfrac{1}{\sqrt{2^n}}\begin{bmatrix} 1 \\ 1 \\ 1 \\ \vdots \\ 1 \end{bmatrix}
=
\tfrac{1}{\sqrt{2^n}}\begin{bmatrix} -1 \\ 1 \\ -1 \\ \vdots \\ 1 \end{bmatrix}
$$
The quantum state after querying the oracle is orthogonal to the quantum state before querying the oracle. Thus, in Step 4, when applying the H-gates, we must end up with a quantum state that is orthogonal to $|00\dots 0\rangle$. This means we should never measure the all-zero state.
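The two cases can be illustrated numerically with plain NumPy (a small sketch for $n = 3$; the sign pattern chosen for the balanced case corresponds to $f(x) = x_0 \oplus x_1 \oplus x_2$):

```python
import numpy as np

H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)
H3 = np.kron(np.kron(H, H), H)                    # H tensored with itself 3 times

s = np.ones(8) / np.sqrt(8)                       # equal superposition after step 2
constant = H3 @ s                                 # a constant oracle leaves s unchanged
print(np.round(constant, 8))                      # all amplitude on |000>

signs = np.array([1, -1, -1, 1, -1, 1, 1, -1])    # balanced phase pattern (-1)^f(x)
balanced = H3 @ (signs * s)
print(np.round(abs(balanced[0]), 8))              # amplitude on |000> is 0
```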
## 2. Worked Example <a id='example'></a>
Let's go through a specific example for a two bit balanced function:
<ol>
<li> The first register of two qubits is initialized to $|00\rangle$ and the second register qubit to $|1\rangle$
(Note that we are using subscripts 1, 2, and 3 to index the qubits. A subscript of "12" indicates the state of the register containing qubits 1 and 2)
$$\lvert \psi_0 \rangle = \lvert 0 0 \rangle_{12} \otimes \lvert 1 \rangle_{3} $$
</li>
<li> Apply Hadamard on all qubits
$$\lvert \psi_1 \rangle = \frac{1}{2} \left( \lvert 0 0 \rangle + \lvert 0 1 \rangle + \lvert 1 0 \rangle + \lvert 1 1 \rangle \right)_{12} \otimes \frac{1}{\sqrt{2}} \left( \lvert 0 \rangle - \lvert 1 \rangle \right)_{3} $$
</li>
<li> The oracle function can be implemented as $\text{Q}_f = CX_{13}CX_{23}$,
$$
\begin{align*}
\lvert \psi_2 \rangle = \frac{1}{2\sqrt{2}} \left[ \lvert 0 0 \rangle_{12} \otimes \left( \lvert 0 \oplus 0 \oplus 0 \rangle - \lvert 1 \oplus 0 \oplus 0 \rangle \right)_{3} \\
+ \lvert 0 1 \rangle_{12} \otimes \left( \lvert 0 \oplus 0 \oplus 1 \rangle - \lvert 1 \oplus 0 \oplus 1 \rangle \right)_{3} \\
+ \lvert 1 0 \rangle_{12} \otimes \left( \lvert 0 \oplus 1 \oplus 0 \rangle - \lvert 1 \oplus 1 \oplus 0 \rangle \right)_{3} \\
+ \lvert 1 1 \rangle_{12} \otimes \left( \lvert 0 \oplus 1 \oplus 1 \rangle - \lvert 1 \oplus 1 \oplus 1 \rangle \right)_{3} \right]
\end{align*}
$$
</li>
<li>Simplifying this, we get the following:
$$
\begin{aligned}
\lvert \psi_2 \rangle & = \frac{1}{2\sqrt{2}} \left[ \lvert 0 0 \rangle_{12} \otimes \left( \lvert 0 \rangle - \lvert 1 \rangle \right)_{3} - \lvert 0 1 \rangle_{12} \otimes \left( \lvert 0 \rangle - \lvert 1 \rangle \right)_{3} - \lvert 1 0 \rangle_{12} \otimes \left( \lvert 0 \rangle - \lvert 1 \rangle \right)_{3} + \lvert 1 1 \rangle_{12} \otimes \left( \lvert 0 \rangle - \lvert 1 \rangle \right)_{3} \right] \\
& = \frac{1}{2} \left( \lvert 0 0 \rangle - \lvert 0 1 \rangle - \lvert 1 0 \rangle + \lvert 1 1 \rangle \right)_{12} \otimes \frac{1}{\sqrt{2}} \left( \lvert 0 \rangle - \lvert 1 \rangle \right)_{3} \\
& = \frac{1}{\sqrt{2}} \left( \lvert 0 \rangle - \lvert 1 \rangle \right)_{1} \otimes \frac{1}{\sqrt{2}} \left( \lvert 0 \rangle - \lvert 1 \rangle \right)_{2} \otimes \frac{1}{\sqrt{2}} \left( \lvert 0 \rangle - \lvert 1 \rangle \right)_{3}
\end{aligned}
$$
</li>
<li> Apply Hadamard on the first register
$$ \lvert \psi_3\rangle = \lvert 1 \rangle_{1} \otimes \lvert 1 \rangle_{2} \otimes \left( \lvert 0 \rangle - \lvert 1 \rangle \right)_{3} $$
</li>
<li> Measuring the first two qubits will give the non-zero result $11$, indicating a balanced function.
</li>
</ol>
You can try out similar examples using the widget below. Press the buttons to add H-gates and oracles, re-run the cell and/or set `case="constant"` to try out different oracles.
```
from qiskit_textbook.widgets import dj_widget
dj_widget(size="small", case="balanced")
```
## 3. Creating Quantum Oracles <a id='creating-quantum-oracles'> </a>
Let's see some different ways we can create a quantum oracle.
For a constant function, it is simple:
$\qquad$ 1. if f(x) = 0, then apply the $I$ gate to the qubit in register 2.
$\qquad$ 2. if f(x) = 1, then apply the $X$ gate to the qubit in register 2.
For a balanced function, there are many different circuits we can create. One of the ways we can guarantee our circuit is balanced is by performing a CNOT for each qubit in register 1, with the qubit in register 2 as the target. For example:

In the image above, the top three qubits form the input register, and the bottom qubit is the output register. We can see which input states give which output in the table below:
| Input states that output 0 | Input States that output 1 |
|:--------------------------:|:--------------------------:|
| 000 | 001 |
| 011 | 100 |
| 101 | 010 |
| 110 | 111 |
We can change the results while keeping them balanced by wrapping selected controls in X-gates. For example, see the circuit and its results table below:

| Input states that output 0 | Input states that output 1 |
|:--------------------------:|:--------------------------:|
| 001 | 000 |
| 010 | 011 |
| 100 | 101 |
| 111 | 110 |
## 4. Qiskit Implementation <a id='implementation'></a>
We now implement the Deutsch-Jozsa algorithm for the example of a three-bit function, with both constant and balanced oracles. First let's do our imports:
```
# initialization
import numpy as np
# importing Qiskit
from qiskit import IBMQ, BasicAer
from qiskit.providers.ibmq import least_busy
from qiskit import QuantumCircuit, execute
# import basic plot tools
from qiskit.visualization import plot_histogram
```
Next, we set the size of the input register for our oracle:
```
# set the length of the n-bit input string.
n = 3
```
### 4.1 Constant Oracle <a id='const_oracle'></a>
Let's start by creating a constant oracle. In this case the input has no effect on the output, so we just randomly set the output qubit to be 0 or 1:
```
# set the length of the n-bit input string.
n = 3
const_oracle = QuantumCircuit(n+1)
output = np.random.randint(2)
if output == 1:
const_oracle.x(n)
const_oracle.draw()
```
### 4.2 Balanced Oracle <a id='balanced_oracle'></a>
```
balanced_oracle = QuantumCircuit(n+1)
```
Next, we create a balanced oracle. As we saw in section 3 above, we can create a balanced oracle by performing CNOTs with each input qubit as a control and the output bit as the target. We can vary the input states that give 0 or 1 by wrapping some of the controls in X-gates. Let's first choose a binary string of length `n` that dictates which controls to wrap:
```
b_str = "101"
```
Now we have this string, we can use it as a key to place our X-gates. For each qubit in our circuit, we place an X-gate if the corresponding digit in `b_str` is `1`, or do nothing if the digit is `0`.
```
balanced_oracle = QuantumCircuit(n+1)
b_str = "101"
# Place X-gates
for qubit in range(len(b_str)):
if b_str[qubit] == '1':
balanced_oracle.x(qubit)
balanced_oracle.draw()
```
Next, we do our controlled-NOT gates, using each input qubit as a control, and the output qubit as a target:
```
balanced_oracle = QuantumCircuit(n+1)
b_str = "101"
# Place X-gates
for qubit in range(len(b_str)):
if b_str[qubit] == '1':
balanced_oracle.x(qubit)
# Use barrier as divider
balanced_oracle.barrier()
# Controlled-NOT gates
for qubit in range(n):
balanced_oracle.cx(qubit, n)
balanced_oracle.barrier()
balanced_oracle.draw()
```
Finally, we repeat the code from two cells up to finish wrapping the controls in X-gates:
```
balanced_oracle = QuantumCircuit(n+1)
b_str = "101"
# Place X-gates
for qubit in range(len(b_str)):
if b_str[qubit] == '1':
balanced_oracle.x(qubit)
# Use barrier as divider
balanced_oracle.barrier()
# Controlled-NOT gates
for qubit in range(n):
balanced_oracle.cx(qubit, n)
balanced_oracle.barrier()
# Place X-gates
for qubit in range(len(b_str)):
if b_str[qubit] == '1':
balanced_oracle.x(qubit)
# Show oracle
balanced_oracle.draw()
```
We have just created a balanced oracle! All that's left to do is see if the Deutsch-Jozsa algorithm can solve it.
### 4.3 The Full Algorithm <a id='full_alg'></a>
Let's now put everything together. This first step in the algorithm is to initialise the input qubits in the state $|{+}\rangle$ and the output qubit in the state $|{-}\rangle$:
```
dj_circuit = QuantumCircuit(n+1, n)
# Apply H-gates
for qubit in range(n):
dj_circuit.h(qubit)
# Put qubit in state |->
dj_circuit.x(n)
dj_circuit.h(n)
dj_circuit.draw()
```
Next, let's apply the oracle. Here we apply the `balanced_oracle` we created above:
```
dj_circuit = QuantumCircuit(n+1, n)
# Apply H-gates
for qubit in range(n):
dj_circuit.h(qubit)
# Put qubit in state |->
dj_circuit.x(n)
dj_circuit.h(n)
# Add oracle
dj_circuit += balanced_oracle
dj_circuit.draw()
```
Finally, we perform H-gates on the $n$-input qubits, and measure our input register:
```
dj_circuit = QuantumCircuit(n+1, n)
# Apply H-gates
for qubit in range(n):
dj_circuit.h(qubit)
# Put qubit in state |->
dj_circuit.x(n)
dj_circuit.h(n)
# Add oracle
dj_circuit += balanced_oracle
# Repeat H-gates
for qubit in range(n):
dj_circuit.h(qubit)
dj_circuit.barrier()
# Measure
for i in range(n):
dj_circuit.measure(i, i)
# Display circuit
dj_circuit.draw()
```
Let's see the output:
```
# use local simulator
backend = BasicAer.get_backend('qasm_simulator')
shots = 1024
results = execute(dj_circuit, backend=backend, shots=shots).result()
answer = results.get_counts()
plot_histogram(answer)
```
We can see from the results above that we have a 0% chance of measuring `000`. This correctly predicts the function is balanced.
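For contrast, a classical decision procedure needs up to $2^{n-1}+1$ queries of $f$ in the worst case, whereas the circuit above needs a single one. A minimal sketch of the classical check, with `f` supplied as an ordinary Python function:

```python
def classify_classical(f, n):
    # Query f on inputs until two outputs differ (balanced), or until
    # 2**(n-1) + 1 queries all agree (then f must be constant).
    first = f(0)
    for x in range(1, 2 ** (n - 1) + 1):
        if f(x) != first:
            return "balanced"
    return "constant"

# A balanced example: the parity of the input bits
parity = lambda x: bin(x).count("1") % 2
print(classify_classical(parity, 3))       # balanced
print(classify_classical(lambda x: 1, 3))  # constant
```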
### 4.4 Generalised Circuits <a id='general_circs'></a>
Below, we provide a generalised function that creates Deutsch-Jozsa oracles and turns them into quantum gates. It takes `case` (either `'balanced'` or `'constant'`) and `n`, the size of the input register:
```
def dj_oracle(case, n):
# We need to make a QuantumCircuit object to return
# This circuit has n+1 qubits: the size of the input,
# plus one output qubit
oracle_qc = QuantumCircuit(n+1)
# First, let's deal with the case in which oracle is balanced
if case == "balanced":
# First generate a random number that tells us which CNOTs to
# wrap in X-gates:
b = np.random.randint(1,2**n)
# Next, format 'b' as a binary string of length 'n', padded with zeros:
b_str = format(b, '0'+str(n)+'b')
# Next, we place the first X-gates. Each digit in our binary string
# corresponds to a qubit, if the digit is 0, we do nothing, if it's 1
# we apply an X-gate to that qubit:
for qubit in range(len(b_str)):
if b_str[qubit] == '1':
oracle_qc.x(qubit)
# Do the controlled-NOT gates for each qubit, using the output qubit
# as the target:
for qubit in range(n):
oracle_qc.cx(qubit, n)
# Next, place the final X-gates
for qubit in range(len(b_str)):
if b_str[qubit] == '1':
oracle_qc.x(qubit)
# Case in which oracle is constant
if case == "constant":
# First decide what the fixed output of the oracle will be
# (either always 0 or always 1)
output = np.random.randint(2)
if output == 1:
oracle_qc.x(n)
oracle_gate = oracle_qc.to_gate()
oracle_gate.name = "Oracle" # To show when we display the circuit
return oracle_gate
```
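The `format(b, '0' + str(n) + 'b')` call above is just zero-padded binary formatting, which can be checked on its own:

```python
n = 4
for b in (1, 5, 11):
    # '0' + str(n) + 'b' means: binary, zero-padded to width n
    b_str = format(b, '0' + str(n) + 'b')
    print(b, '->', b_str)
# 1 -> 0001, 5 -> 0101, 11 -> 1011
```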
Let's also create a function that takes this oracle gate and performs the Deutsch-Jozsa algorithm on it:
```
def dj_algorithm(oracle, n):
dj_circuit = QuantumCircuit(n+1, n)
# Set up the output qubit:
dj_circuit.x(n)
dj_circuit.h(n)
# And set up the input register:
for qubit in range(n):
dj_circuit.h(qubit)
# Let's append the oracle gate to our circuit:
dj_circuit.append(oracle, range(n+1))
# Finally, perform the H-gates again and measure:
for qubit in range(n):
dj_circuit.h(qubit)
for i in range(n):
dj_circuit.measure(i, i)
return dj_circuit
```
Finally, let's use these functions to play around with the algorithm:
```
n = 4
oracle_gate = dj_oracle('balanced', n)
dj_circuit = dj_algorithm(oracle_gate, n)
dj_circuit.draw()
```
And see the results of running this circuit:
```
results = execute(dj_circuit, backend=backend, shots=1024).result()
answer = results.get_counts()
plot_histogram(answer)
```
## 5. Experiment with Real Devices <a id='device'></a>
We can run the circuit on a real device as shown below. We first look for the least-busy device that can handle our circuit.
```
# Load our saved IBMQ accounts and get the least busy backend device with greater than or equal to (n+1) qubits
IBMQ.load_account()
provider = IBMQ.get_provider(hub='ibm-q')
backend = least_busy(provider.backends(filters=lambda x: x.configuration().n_qubits >= (n+1) and
not x.configuration().simulator and x.status().operational==True))
print("least busy backend: ", backend)
# Run our circuit on the least busy backend. Monitor the execution of the job in the queue
from qiskit.tools.monitor import job_monitor
shots = 1024
job = execute(dj_circuit, backend=backend, shots=shots, optimization_level=3)
job_monitor(job, interval = 2)
# Get the results of the computation
results = job.result()
answer = results.get_counts()
plot_histogram(answer)
```
As we can see, the most likely result is `1111`. The other results are due to errors in the quantum computation.
## 6. Problems <a id='problems'></a>
1. Are you able to create a balanced or constant oracle of a different form?
```
from qiskit_textbook.problems import dj_problem_oracle
oracle = dj_problem_oracle(1)
```
2. The function `dj_problem_oracle` (shown above) returns a Deutsch-Jozsa oracle for `n = 4` in the form of a gate. The gate takes 5 qubits as input where the final qubit (`q_4`) is the output qubit (as with the example oracles above). You can get different oracles by giving `dj_problem_oracle` different integers between 1 and 5. Use the Deutsch-Jozsa algorithm to decide whether each oracle is balanced or constant (**Note:** It is highly recommended you try this example using the `qasm_simulator` instead of a real device).
## 7. References <a id='references'></a>
1. David Deutsch and Richard Jozsa (1992). "Rapid solutions of problems by quantum computation". Proceedings of the Royal Society of London A. 439: 553–558. [doi:10.1098/rspa.1992.0167](https://doi.org/10.1098%2Frspa.1992.0167).
2. R. Cleve; A. Ekert; C. Macchiavello; M. Mosca (1998). "Quantum algorithms revisited". Proceedings of the Royal Society of London A. 454: 339–354. [doi:10.1098/rspa.1998.0164](https://doi.org/10.1098%2Frspa.1998.0164).
```
import qiskit
qiskit.__qiskit_version__
```
# Operationalizing Machine Learning models with Azure ML and Power BI
> I presented this session at Power Break on March 22, 2021. Topics covered - limitations of using Python query in Power BI, importance of MLOps, deploying ML models in Azure ML and consuming them in Power BI
- toc: true
- badges: true
- comments: true
- categories: [azureml, mlops, powerbi]
- hide: false
### Machine Learning in Power BI
Power BI and Azure ML have native integration with each other, which means not only that you can consume the deployed models in Power BI but also use the resources/tools in Azure ML to manage the model lifecycle. There is a misconception that you need Power BI Premium to use Azure ML. In this session I show that's not the case. Below are the topics covered:
- Limitation of using Python or R in Power BI
- Formula firewall
- Privacy setting of the datasource must be Public
- 30 min query timeout
- Have to use personal gateway
- Cannot use enhanced metadata
- Dependency management
 - Not scalable
- No MLOps
- Steps to access Azure ML models in Power BI
- Give Azure ML access to Power BI tenant in IAM
- Use scoring script with input/output schema
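A scoring script follows the `init()`/`run()` entry-point pattern, and for the Power BI integration the deployed service must also expose an input/output schema (in Azure ML this is usually declared with the `inference_schema` decorators). The sketch below is a simplified, hypothetical example: the placeholder "model", the column names, and the payload shape are illustrative, and the schema decorators are omitted so it stays dependency-free.

```python
import json

def init():
    # In a real score.py this would load the registered model, e.g.
    # joblib.load(os.path.join(os.getenv("AZUREML_MODEL_DIR"), "model.pkl"))
    global model
    model = lambda rows: [sum(r.values()) for r in rows]  # placeholder "model"

def run(raw_data):
    # A common payload shape is {"data": [ {col: value, ...}, ... ]}
    rows = json.loads(raw_data)["data"]
    predictions = model(rows)
    return json.dumps({"result": predictions})

init()
sample = json.dumps({"data": [{"x1": 1.0, "x2": 2.0}]})
print(run(sample))  # {"result": [3.0]}
```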

#### Topics Covered:
- Invoking Azure ML models in Power Query in Desktop and using dataflow in service
- Thoughts on batch-scoring
- Other considerations:
- Azure ML models can only be called for real-time inference.
 - Use ACI for small models and AKS for large models that require GPU, scalable compute, low latency, high availability
- Real-time inferencing is expensive! Use batch scoring if real-time is not needed
- Incremental refreshes in Power BI will work with models invoked from Azure ML but watch out for (query) performance
 - With Azure Arc, you can use your own Kubernetes service! Deploy the model in Azure ML but use your own Kubernetes cluster (AWS, GCP, etc.)
 - Premium Per User gives you access to AutoML. If you have access to PPU, it may be worth first experimenting with AutoML before using an Azure ML model to save cost. PPU AutoML doesn't offer the same level of MLOps capabilities though. I will be presenting a technical deep-dive on using Power BI AutoML on May 26 at the Portland Power BI User Group. Sign up on their page if you are interested in the topic.
>youtube: https://youtu.be/oLdMFJIxWDo
- **References:**
- https://docs.microsoft.com/en-us/power-bi/connect-data/service-aml-integrate
- https://pawarbi.github.io/blog/powerbi/r/python/2020/05/15/powerbi-python-r-tips.html
- https://azure.microsoft.com/en-us/blog/innovate-across-hybrid-and-multicloud-with-new-azure-arc-capabilities/
- https://docs.microsoft.com/en-us/power-bi/connect-data/desktop-python-scripts
- https://docs.microsoft.com/en-us/azure/machine-learning/tutorial-power-bi-custom-model
```
import os
import numpy as np
from eolearn.core import EOPatch, EOTask
from fs_s3fs import S3FS
from matplotlib import pyplot as plt
from sentinelhub import BBox, SHConfig
from sg_utils.processing import multiprocess
```
# Config
```
config = SHConfig()
config.instance_id = ''
config.aws_access_key_id = ''
config.aws_secret_access_key = ''
filesystem = S3FS(bucket_name='',
aws_access_key_id=config.aws_access_key_id,
aws_secret_access_key=config.aws_secret_access_key)
```
# Execute sampling
```
class SamplePatchlets(EOTask):
MS4_DEIMOS_SCALING = 4
def __init__(self, s2_patchlet_size: int, num_samples: int):
self.s2_patchlet_size = s2_patchlet_size
self.num_samples = num_samples
def _calculate_sampled_bbox(self, bbox: BBox, r: int, c: int, s: int, resolution: float) -> BBox:
return BBox(((bbox.min_x + resolution * c, bbox.max_y - resolution * (r + s)),
(bbox.min_x + resolution * (c + s), bbox.max_y - resolution * r)),
bbox.crs)
def _sample_s2(self, eop: EOPatch, row: int, col: int, size: int, resolution: float = 10):
sampled_eop = EOPatch(timestamp=eop.timestamp, scalar=eop.scalar, meta_info=eop.meta_info)
sampled_eop.data['CLP'] = eop.data['CLP'][:, row:row + size, col:col + size, :]
sampled_eop.mask['CLM'] = eop.mask['CLM'][:, row:row + size, col:col + size, :]
sampled_eop.mask['IS_DATA'] = eop.mask['IS_DATA'][:, row:row + size, col:col + size, :]
sampled_eop.data['BANDS'] = eop.data['BANDS'][:, row:row + size, col:col + size, :]
sampled_eop.scalar_timeless['PATCHLET_LOC'] = np.array([row, col, size])
sampled_eop.bbox = self._calculate_sampled_bbox(eop.bbox, r=row, c=col, s=size, resolution=resolution)
sampled_eop.meta_info['size_x'] = size
sampled_eop.meta_info['size_y'] = size
return sampled_eop
def _sample_deimos(self, eop: EOPatch, row: int, col: int, size: int, resolution: float = 2.5):
sampled_eop = EOPatch(timestamp=eop.timestamp, scalar=eop.scalar, meta_info=eop.meta_info)
sampled_eop.data['BANDS-DEIMOS'] = eop.data['BANDS-DEIMOS'][:, row:row + size, col:col + size, :]
sampled_eop.mask['CLM'] = eop.mask['CLM'][:, row:row + size, col:col + size, :]
sampled_eop.mask['IS_DATA'] = eop.mask['IS_DATA'][:, row:row + size, col:col + size, :]
sampled_eop.scalar_timeless['PATCHLET_LOC'] = np.array([row, col, size])
sampled_eop.bbox = self._calculate_sampled_bbox(eop.bbox, r=row, c=col, s=size, resolution=resolution)
sampled_eop.meta_info['size_x'] = size
sampled_eop.meta_info['size_y'] = size
return sampled_eop
def execute(self, eopatch_s2, eopatch_deimos, buffer=20, seed=42):
_, n_rows, n_cols, _ = eopatch_s2.data['BANDS'].shape
np.random.seed(seed)
eops_out = []
for patchlet_num in range(0, self.num_samples):
row = np.random.randint(buffer, n_rows - self.s2_patchlet_size - buffer)
col = np.random.randint(buffer, n_cols - self.s2_patchlet_size - buffer)
sampled_s2 = self._sample_s2(eopatch_s2, row, col, self.s2_patchlet_size)
sampled_deimos = self._sample_deimos(eopatch_deimos,
row*self.MS4_DEIMOS_SCALING,
col*self.MS4_DEIMOS_SCALING,
self.s2_patchlet_size*self.MS4_DEIMOS_SCALING)
eops_out.append((sampled_s2, sampled_deimos))
return eops_out
def sample_patch(eop_path_s2: str, eop_path_deimos,
sampled_folder_s2, sampled_folder_deimos,
s2_patchlet_size, num_samples, filesystem, buffer=20) -> None:
task = SamplePatchlets(s2_patchlet_size=s2_patchlet_size, num_samples=num_samples)
eop_name = os.path.basename(eop_path_s2)
try:
eop_s2 = EOPatch.load(eop_path_s2, filesystem=filesystem, lazy_loading=True)
eop_deimos = EOPatch.load(eop_path_deimos, filesystem=filesystem, lazy_loading=True)
patchlets = task.execute(eop_s2, eop_deimos, buffer=buffer)
for i, (patchlet_s2, patchlet_deimos) in enumerate(patchlets):
patchlet_s2.save(os.path.join(sampled_folder_s2, f'{eop_name}_{i}'),
filesystem=filesystem)
patchlet_deimos.save(os.path.join(sampled_folder_deimos, f'{eop_name}_{i}'),
filesystem=filesystem)
except KeyError as e:
print(f'Key error. Could not find key: {e}')
except ValueError as e:
print(f'Value error. Value does not exist: {e}')
EOPS_S2 = ''
EOPS_DEIMOS = ''
SAMPLED_S2_PATH = ''
SAMPLED_DEIMOS_3M_PATH = ''
eop_names = filesystem.listdir(EOPS_DEIMOS) # Both folders contain the same EOPatches
def sample_single(eop_name):
path_s2 = os.path.join(EOPS_S2, eop_name)
path_deimos = os.path.join(EOPS_DEIMOS, eop_name)
sample_patch(path_s2, path_deimos, SAMPLED_S2_PATH, SAMPLED_DEIMOS_3M_PATH,
s2_patchlet_size=32, num_samples=140, filesystem=filesystem, buffer=20)
multiprocess(sample_single, eop_names, max_workers=16)
```
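The key geometric invariant in `SamplePatchlets` is that a 10 m Sentinel-2 window and the corresponding 2.5 m DEIMOS window cover the same ground area: pixel offsets and sizes scale by `MS4_DEIMOS_SCALING = 4` while the metric extent stays fixed. A plain-Python check of that bookkeeping (no eo-learn required; the helper below is only for illustration):

```python
SCALING = 4  # 10 m / 2.5 m

def window_extent(row, col, size, resolution):
    # Metric x-offset, y-offset, and extent of a pixel window
    # sampled at the given resolution (metres per pixel).
    return (col * resolution, row * resolution, size * resolution)

row, col, size = 7, 12, 32
s2 = window_extent(row, col, size, resolution=10)
deimos = window_extent(row * SCALING, col * SCALING, size * SCALING, resolution=2.5)
assert s2 == deimos  # identical ground footprints
print(s2)
```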
# Look at an example
```
sampled_s2 = EOPatch.load(os.path.join(SAMPLED_S2_PATH, 'eopatch-0000_122'), filesystem=filesystem)
sampled_deimos = EOPatch.load(os.path.join(SAMPLED_DEIMOS_3M_PATH, 'eopatch-0000_122'), filesystem=filesystem)
def _get_closest_timestamp_idx(eop, ref_timestamp):
closest_idx = 0
for i, ts in enumerate(eop.timestamp):
if abs((ts - ref_timestamp).days) < abs((eop.timestamp[closest_idx] - ref_timestamp).days):
closest_idx = i
return closest_idx
fig, ax = plt.subplots(ncols=2, figsize=(15, 15))
idx_deimos = 1
closest_idx = _get_closest_timestamp_idx(sampled_s2, sampled_deimos.timestamp[idx_deimos])
ax[0].imshow(sampled_s2.data['BANDS'][closest_idx][..., [2, 1, 0]] / 10000*3.5)
ax[1].imshow(sampled_deimos.data['BANDS-DEIMOS'][idx_deimos][..., [2, 1, 0]] / 12000)
```
```
import numpy as np
import matplotlib.pyplot as plt
from matplotlib import path
import matplotlib.patches as patches
from skimage import draw
import scipy.ndimage as ndimage
import Utils
import georasters as gr
import cv2
from Utils import doubleMADsfromMedian
from skimage.transform import resize
import pickle
from sklearn.preprocessing import LabelEncoder
from sklearn.svm import SVC
from IPython.display import clear_output, display
import tensorflow as tf
path_T = "El_Aguila_2020/Thermo.tif"
path_save_list = "El_Aguila_2020/List_strings_panels_PV03.txt"
path_save_list_print = "El_Aguila_2020/List_strings_panels_print_PV03.txt"
GR_T = gr.from_file(path_T)
## Load list of strings (latitude/longitude coordinates, converted to pixels below) ###
with open(path_save_list, "rb") as fp:
L_strings_coord_load = pickle.load(fp)
plt.figure(0)
plt.figure(figsize=(8, 8))
plt.imshow(GR_T.raster.data[0,:,:], cmap = 'gray')
```
# Load Classifier
```
'''
path_dataset = "Classifier/dataset_prueba_1/"
output_recognizer = path_dataset + "model_SVM/recognizer.pickle"
output_label = path_dataset + "model_SVM/le.pickle"
# load the actual face recognition model along with the label encoder
classifier = pickle.loads(open(output_recognizer, "rb").read())
le = pickle.loads(open(output_label, "rb").read())
img_width, img_height = 224, 224
base_model = tf.keras.applications.Xception(input_shape=(img_height, img_width, 3), weights='imagenet', include_top=False)
x = base_model.output
x = tf.keras.layers.GlobalAveragePooling2D()(x)
base_model = tf.keras.models.Model(inputs=base_model.input, outputs=x)
'''
```
# Panel Classification
```
'''
epsilon = 0
matrix_expand_bounds = [[-epsilon, -epsilon],[+epsilon, -epsilon], [+epsilon, +epsilon], [-epsilon, +epsilon]]
#geot = GR_RGB.geot
geot = GR_T.geot
for string_key in ['6', '10']:# L_strings_coord_load.keys():
print(string_key)
string = L_strings_coord_load[string_key]
for panel_key in string['panels'].keys():
panel = string['panels'][panel_key]
Points = Utils.gps2pixel(panel['points'], geot) + matrix_expand_bounds
if not GR_T.raster.data[0,Points[0][1] : Points[2][1], Points[0][0]: Points[2][0]].size == 0:
Im = np.zeros((img_height, img_width, 3))
Im[:,:,0] = cv2.resize(GR_T.raster.data[0,Points[0][1] : Points[2][1], Points[0][0]: Points[2][0]], (img_width, img_height))
Im[:,:,1] = Im[:,:,0].copy()
Im[:,:,2] = Im[:,:,0].copy()
v = base_model.predict(tf.keras.backend.expand_dims(Im,0)).flatten()
falla = le.classes_[classifier.predict(tf.keras.backend.expand_dims(v,0))][0]
panel['prob'] = np.max(classifier.predict_proba(tf.keras.backend.expand_dims(v,0)))
if panel['prob'] < .5 and falla != le.classes_[0]:
panel['status'] = le.classes_[0]
else:
panel['status'] = falla
else:
print('problem with coords panel: ', string_key, '_', panel_key)
'''
```
# T° mean string and panel
```
epsilon = 0
matrix_expand_bounds = [[-epsilon, -epsilon],[+epsilon, -epsilon], [+epsilon, +epsilon], [-epsilon, +epsilon]]
t_min = np.min(GR_T.raster.data)
t_max = np.max(GR_T.raster.data)
t_new_min = 0
t_new_max = 70
geot = GR_T.geot
for string_key in L_strings_coord_load.keys():
clear_output(wait=True)
print(string_key)
string = L_strings_coord_load[string_key]
Points = Utils.gps2pixel(string['points'], geot) + matrix_expand_bounds
string['T'] = t_new_min + (t_new_max - t_new_min) * np.mean(GR_T.raster.data[0,Points[0][1] : Points[2][1], Points[0][0]: Points[2][0]]) / (t_max - t_min)
for panel_key in string['panels'].keys():
panel = string['panels'][panel_key]
Points = Utils.gps2pixel(panel['points'], geot) + matrix_expand_bounds
panel['T'] = t_new_min + (t_new_max - t_new_min) * np.mean(GR_T.raster.data[0,Points[0][1] : Points[2][1], Points[0][0]: Points[2][0]]) / (t_max - t_min)
geot = GR_T.geot
plt.figure(0)
plt.figure(figsize=(16, 16))
plt.imshow(GR_T.raster.data[0,:,:])
ax = plt.gca()
for Poly_key in L_strings_coord_load.keys():
Poly = L_strings_coord_load[Poly_key]
poly = patches.Polygon( Utils.gps2pixel(Poly['points'], geot),
linewidth=2,
edgecolor='red',
alpha=0.5,
fill = True)
plt.text(np.mean([x[0] for x in Utils.gps2pixel(Poly['points'], geot)]), np.mean([y[1] for y in Utils.gps2pixel(Poly['points'], geot)]) , str(Poly['id']), bbox=dict(facecolor='red', alpha=0.8))
ax.add_patch(poly)
string = L_strings_coord_load['2']
panels = string['panels']
Points = Utils.gps2pixel(string['points'], geot)
plt.figure(1)
plt.figure(figsize=(16, 16))
plt.imshow(GR_T.raster[0,Points[0][1] : Points[2][1], Points[0][0]: Points[2][0]])
#plt.imshow((GR_RGB.raster[:3,:,:]).transpose((1, 2, 0))[Points[0][1] : Points[2][1], Points[0][0]: Points[2][0],:])
plt.title('Subdivision of the Original Image')
ax = plt.gca()
for Poly_key in panels.keys():
Poly = panels[Poly_key]
poly = patches.Polygon(Utils.gps2pixel(Poly['points'],geot) - (Points[0][0], Points[0][1]),
linewidth=2,
edgecolor='red',
fill = False)
plt.text(np.mean([x[0] for x in Utils.gps2pixel(Poly['points'],geot) - (Points[0][0], Points[0][1])]), np.mean([y[1] for y in Utils.gps2pixel(Poly['points'],geot) - (Points[0][0], Points[0][1])]) ,
str(Poly['id']), bbox=dict(facecolor='red', alpha=0.0), fontsize=8)
ax.add_patch(poly)
for i in [17]:#[12,13,14,15,16,17]:
L_strings_coord_load['2']['panels'][str(i)]['status'] = '3.Más de 5 celdas afect.'
with open(path_save_list, "wb") as fp:
pickle.dump(L_strings_coord_load, fp)
with open(path_save_list_print, 'w') as f:
print(L_strings_coord_load, file=f)
epsilon = 10
matrix_expand_bounds = [[-epsilon, -epsilon],[+epsilon, -epsilon], [+epsilon, +epsilon], [-epsilon, +epsilon]]
Points = Utils.gps2pixel(L_strings_coord_load['1']['points'], geot) + matrix_expand_bounds
plt.figure(figsize=(6, 6))
plt.imshow(GR_T.raster.data[0,Points[0][1] : Points[2][1], Points[0][0]: Points[2][0]],cmap = 'gray')
#plt.figure(0)
#plt.figure(figsize=(6, 6))
#plt.imshow(cv2.resize(GR_T.raster.data[0,Points[0][1] : Points[2][1], Points[0][0]: Points[2][0]], (224, 224)),cmap = 'gray')
#plt.imshow((GR_RGB.raster[:3,:,:]).transpose((1, 2, 0))[Points[0][1] : Points[2][1], Points[0][0]: Points[2][0],:])
```
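The temperature normalisation used above (mapping raw raster values into a 0–70 display range) can be isolated into a small helper. This mirrors the notebook's exact formula; note that it divides the raw mean by `t_max - t_min` without first subtracting `t_min`, so it coincides with a standard min–max rescale only when `t_min` is 0:

```python
def rescale_temperature(raw_mean, t_min, t_max, new_min=0, new_max=70):
    # Same linear mapping as applied to string/panel means in the cell above.
    return new_min + (new_max - new_min) * raw_mean / (t_max - t_min)

print(rescale_temperature(17.5, t_min=0, t_max=70))  # 17.5
```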
# Mask R-CNN - Train on Shapes Dataset
This notebook shows how to train Mask R-CNN on your own dataset. To keep things simple we use a synthetic dataset of shapes (squares, triangles, and circles) which enables fast training. You'd still need a GPU, though, because the network backbone is a Resnet101, which would be too slow to train on a CPU. On a GPU, you can start to get okay-ish results in a few minutes, and good results in less than an hour.
The code of the *Shapes* dataset is included below. It generates images on the fly, so it doesn't require downloading any data. And it can generate images of any size, so we pick a small image size to train faster.
```
import os
import sys
import random
import math
import re
import time
import numpy as np
import cv2
import matplotlib
import matplotlib.pyplot as plt
# Root directory of the project
ROOT_DIR = os.path.abspath("../../")
# Import Mask RCNN
sys.path.append(ROOT_DIR) # To find local version of the library
from mrcnn.config import Config
from mrcnn import utils
import mrcnn.model as modellib
from mrcnn import visualize
from mrcnn.model import log
%matplotlib inline
# Directory to save logs and trained model
MODEL_DIR = os.path.join(ROOT_DIR, "logs")
# Local path to trained weights file
COCO_MODEL_PATH = os.path.join(ROOT_DIR, "mask_rcnn_coco.h5")
# Download COCO trained weights from Releases if needed
if not os.path.exists(COCO_MODEL_PATH):
utils.download_trained_weights(COCO_MODEL_PATH)
```
## Configurations
```
class ShapesConfig(Config):
"""Configuration for training on the toy shapes dataset.
Derives from the base Config class and overrides values specific
to the toy shapes dataset.
"""
# Give the configuration a recognizable name
NAME = "shapes"
# Train on 1 GPU and 8 images per GPU. We can put multiple images on each
# GPU because the images are small. Batch size is 8 (GPUs * images/GPU).
GPU_COUNT = 1
IMAGES_PER_GPU = 8
# Number of classes (including background)
NUM_CLASSES = 1 + 3 # background + 3 shapes
# Use small images for faster training. Set the limits of the small side
# the large side, and that determines the image shape.
IMAGE_MIN_DIM = 128
IMAGE_MAX_DIM = 128
# Use smaller anchors because our image and objects are small
RPN_ANCHOR_SCALES = (8, 16, 32, 64, 128) # anchor side in pixels
# Reduce training ROIs per image because the images are small and have
# few objects. Aim to allow ROI sampling to pick 33% positive ROIs.
TRAIN_ROIS_PER_IMAGE = 32
# Use a small epoch since the data is simple
STEPS_PER_EPOCH = 100
# use small validation steps since the epoch is small
VALIDATION_STEPS = 5
config = ShapesConfig()
config.display()
```
## Notebook Preferences
```
def get_ax(rows=1, cols=1, size=8):
"""Return a Matplotlib Axes array to be used in
all visualizations in the notebook. Provide a
central point to control graph sizes.
Change the default size attribute to control the size
of rendered images
"""
_, ax = plt.subplots(rows, cols, figsize=(size*cols, size*rows))
return ax
```
## Dataset
Create a synthetic dataset
Extend the Dataset class and add a method to load the shapes dataset, `load_shapes()`, and override the following methods:
* load_image()
* load_mask()
* image_reference()
```
class ShapesDataset(utils.Dataset):
"""Generates the shapes synthetic dataset. The dataset consists of simple
shapes (triangles, squares, circles) placed randomly on a blank surface.
The images are generated on the fly. No file access required.
"""
def load_shapes(self, count, height, width):
"""Generate the requested number of synthetic images.
count: number of images to generate.
height, width: the size of the generated images.
"""
# Add classes
self.add_class("shapes", 1, "square")
self.add_class("shapes", 2, "circle")
self.add_class("shapes", 3, "triangle")
# Add images
# Generate random specifications of images (i.e. color and
# list of shapes sizes and locations). This is more compact than
# actual images. Images are generated on the fly in load_image().
for i in range(count):
bg_color, shapes = self.random_image(height, width)
self.add_image("shapes", image_id=i, path=None,
width=width, height=height,
bg_color=bg_color, shapes=shapes)
def load_image(self, image_id):
"""Generate an image from the specs of the given image ID.
Typically this function loads the image from a file, but
in this case it generates the image on the fly from the
specs in image_info.
"""
info = self.image_info[image_id]
bg_color = np.array(info['bg_color']).reshape([1, 1, 3])
image = np.ones([info['height'], info['width'], 3], dtype=np.uint8)
image = image * bg_color.astype(np.uint8)
for shape, color, dims in info['shapes']:
image = self.draw_shape(image, shape, dims, color)
return image
def image_reference(self, image_id):
"""Return the shapes data of the image."""
info = self.image_info[image_id]
if info["source"] == "shapes":
return info["shapes"]
else:
super(self.__class__).image_reference(self, image_id)
def load_mask(self, image_id):
"""Generate instance masks for shapes of the given image ID.
"""
info = self.image_info[image_id]
shapes = info['shapes']
count = len(shapes)
mask = np.zeros([info['height'], info['width'], count], dtype=np.uint8)
for i, (shape, _, dims) in enumerate(info['shapes']):
mask[:, :, i:i+1] = self.draw_shape(mask[:, :, i:i+1].copy(),
shape, dims, 1)
# Handle occlusions
occlusion = np.logical_not(mask[:, :, -1]).astype(np.uint8)
for i in range(count-2, -1, -1):
mask[:, :, i] = mask[:, :, i] * occlusion
occlusion = np.logical_and(occlusion, np.logical_not(mask[:, :, i]))
# Map class names to class IDs.
class_ids = np.array([self.class_names.index(s[0]) for s in shapes])
return mask.astype(np.bool), class_ids.astype(np.int32)
def draw_shape(self, image, shape, dims, color):
"""Draws a shape from the given specs."""
# Get the center x, y and the size s
x, y, s = dims
if shape == 'square':
cv2.rectangle(image, (x-s, y-s), (x+s, y+s), color, -1)
elif shape == "circle":
cv2.circle(image, (x, y), s, color, -1)
elif shape == "triangle":
points = np.array([[(x, y-s),
(x-s/math.sin(math.radians(60)), y+s),
(x+s/math.sin(math.radians(60)), y+s),
]], dtype=np.int32)
cv2.fillPoly(image, points, color)
return image
def random_shape(self, height, width):
"""Generates specifications of a random shape that lies within
the given height and width boundaries.
Returns a tuple of three values:
* The shape name (square, circle, ...)
* Shape color: a tuple of 3 values, RGB.
* Shape dimensions: A tuple of values that define the shape size
and location. Differs per shape type.
"""
# Shape
shape = random.choice(["square", "circle", "triangle"])
# Color
color = tuple([random.randint(0, 255) for _ in range(3)])
# Center x, y
buffer = 20
y = random.randint(buffer, height - buffer - 1)
x = random.randint(buffer, width - buffer - 1)
# Size
s = random.randint(buffer, height//4)
return shape, color, (x, y, s)
def random_image(self, height, width):
"""Creates random specifications of an image with multiple shapes.
Returns the background color of the image and a list of shape
specifications that can be used to draw the image.
"""
# Pick random background color
bg_color = np.array([random.randint(0, 255) for _ in range(3)])
# Generate a few random shapes and record their
# bounding boxes
shapes = []
boxes = []
N = random.randint(1, 4)
for _ in range(N):
shape, color, dims = self.random_shape(height, width)
shapes.append((shape, color, dims))
x, y, s = dims
boxes.append([y-s, x-s, y+s, x+s])
# Apply non-max suppression with 0.3 threshold to avoid
# shapes covering each other
keep_ixs = utils.non_max_suppression(np.array(boxes), np.arange(N), 0.3)
shapes = [s for i, s in enumerate(shapes) if i in keep_ixs]
return bg_color, shapes
# Training dataset
dataset_train = ShapesDataset()
dataset_train.load_shapes(500, config.IMAGE_SHAPE[0], config.IMAGE_SHAPE[1])
dataset_train.prepare()
# Validation dataset
dataset_val = ShapesDataset()
dataset_val.load_shapes(50, config.IMAGE_SHAPE[0], config.IMAGE_SHAPE[1])
dataset_val.prepare()
# Load and display random samples
image_ids = np.random.choice(dataset_train.image_ids, 4)
for image_id in image_ids:
image = dataset_train.load_image(image_id)
mask, class_ids = dataset_train.load_mask(image_id)
visualize.display_top_masks(image, mask, class_ids, dataset_train.class_names)
```
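The occlusion handling in `load_mask` above is worth seeing in isolation: shapes are drawn back-to-front, and each lower mask is ANDed with the complement of everything drawn on top of it, so every pixel ends up belonging only to the topmost shape. A tiny 1-D sketch of the same loop using plain lists:

```python
def apply_occlusions(masks):
    # masks[i] is drawn below masks[i+1]; clip each mask by all masks above it,
    # mirroring the count-2 .. 0 loop in load_mask.
    occlusion = [1 - m for m in masks[-1]]  # complement of the topmost mask
    result = [list(masks[-1])]
    for mask in reversed(masks[:-1]):
        clipped = [m & o for m, o in zip(mask, occlusion)]
        occlusion = [o & (1 - c) for o, c in zip(occlusion, clipped)]
        result.insert(0, clipped)
    return result

# Two overlapping "shapes" on a 6-pixel row; the second is on top.
bottom = [1, 1, 1, 1, 0, 0]
top    = [0, 0, 1, 1, 1, 0]
print(apply_occlusions([bottom, top]))  # [[1, 1, 0, 0, 0, 0], [0, 0, 1, 1, 1, 0]]
```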
## Create Model
```
# Create model in training mode
model = modellib.MaskRCNN(mode="training", config=config,
model_dir=MODEL_DIR)
# Which weights to start with?
init_with = "coco" # imagenet, coco, or last
if init_with == "imagenet":
model.load_weights(model.get_imagenet_weights(), by_name=True)
elif init_with == "coco":
# Load weights trained on MS COCO, but skip layers that
# are different due to the different number of classes
# See README for instructions to download the COCO weights
model.load_weights(COCO_MODEL_PATH, by_name=True,
exclude=["mrcnn_class_logits", "mrcnn_bbox_fc",
"mrcnn_bbox", "mrcnn_mask"])
elif init_with == "last":
# Load the last model you trained and continue training
model.load_weights(model.find_last(), by_name=True)
```
## Training
Train in two stages:
1. Only the heads. Here we're freezing all the backbone layers and training only the randomly initialized layers (i.e. the ones that we didn't use pre-trained weights from MS COCO). To train only the head layers, pass `layers='heads'` to the `train()` function.
2. Fine-tune all layers. For this simple example it's not necessary, but we're including it to show the process. Simply pass `layers="all"` to train all layers.
```
# Train the head branches
# Passing layers="heads" freezes all layers except the head
# layers. You can also pass a regular expression to select
# which layers to train by name pattern.
model.train(dataset_train, dataset_val,
learning_rate=config.LEARNING_RATE,
epochs=1,
layers='heads')
# Fine tune all layers
# Passing layers="all" trains all layers. You can also
# pass a regular expression to select which layers to
# train by name pattern.
model.train(dataset_train, dataset_val,
learning_rate=config.LEARNING_RATE / 10,
epochs=2,
layers="all")
# Save weights
# Typically not needed because callbacks save after every epoch
# Uncomment to save manually
# model_path = os.path.join(MODEL_DIR, "mask_rcnn_shapes.h5")
# model.keras_model.save_weights(model_path)
```
## Detection
```
class InferenceConfig(ShapesConfig):
GPU_COUNT = 1
IMAGES_PER_GPU = 1
inference_config = InferenceConfig()
# Recreate the model in inference mode
model = modellib.MaskRCNN(mode="inference",
config=inference_config,
model_dir=MODEL_DIR)
# Get path to saved weights
# Either set a specific path or find last trained weights
# model_path = os.path.join(ROOT_DIR, ".h5 file name here")
model_path = model.find_last()
# Load trained weights
print("Loading weights from ", model_path)
model.load_weights(model_path, by_name=True)
# Test on a random image
image_id = random.choice(dataset_val.image_ids)
original_image, image_meta, gt_class_id, gt_bbox, gt_mask =\
modellib.load_image_gt(dataset_val, inference_config,
image_id, use_mini_mask=False)
log("original_image", original_image)
log("image_meta", image_meta)
log("gt_class_id", gt_class_id)
log("gt_bbox", gt_bbox)
log("gt_mask", gt_mask)
visualize.display_instances(original_image, gt_bbox, gt_mask, gt_class_id,
dataset_train.class_names, figsize=(8, 8))
results = model.detect([original_image], verbose=1)
r = results[0]
visualize.display_instances(original_image, r['rois'], r['masks'], r['class_ids'],
dataset_val.class_names, r['scores'], ax=get_ax())
```
## Evaluation
```
# Compute VOC-Style mAP @ IoU=0.5
# Running on 10 images. Increase for better accuracy.
image_ids = np.random.choice(dataset_val.image_ids, 10)
APs = []
for image_id in image_ids:
# Load image and ground truth data
image, image_meta, gt_class_id, gt_bbox, gt_mask =\
modellib.load_image_gt(dataset_val, inference_config,
image_id, use_mini_mask=False)
molded_images = np.expand_dims(modellib.mold_image(image, inference_config), 0)
# Run object detection
results = model.detect([image], verbose=0)
r = results[0]
# Compute AP
AP, precisions, recalls, overlaps =\
utils.compute_ap(gt_bbox, gt_class_id, gt_mask,
r["rois"], r["class_ids"], r["scores"], r['masks'])
APs.append(AP)
print("mAP: ", np.mean(APs))
```
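`utils.compute_ap` matches detections to ground truth at IoU ≥ 0.5 and integrates the precision–recall curve. The core of that computation — detections sorted by score and marked true/false positive, cumulative precision and recall, then the area under the precision envelope — can be sketched in plain Python (a simplified stand-in, not the Matterport implementation):

```python
def average_precision(tp_flags, num_gt):
    # tp_flags: detections sorted by descending score; True = matched a GT box.
    precisions, recalls = [], []
    tp = 0
    for i, flag in enumerate(tp_flags, start=1):
        tp += int(flag)
        precisions.append(tp / i)
        recalls.append(tp / num_gt)
    # Integrate using the precision envelope (max precision at recall >= r);
    # recalls are non-decreasing, so the envelope at step i is max(precisions[i:]).
    ap, prev_recall = 0.0, 0.0
    for i, r in enumerate(recalls):
        ap += (r - prev_recall) * max(precisions[i:])
        prev_recall = r
    return ap

print(average_precision([True, False, True], num_gt=2))  # 0.8333...
```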
# Goal

1. monthly-active - get from stats.wikimedia, do some subtraction of newcomers.
2. active users - active last 90 and have minimum 4 edits (before ores) - get from candidates table
3. meet quality standards* - get from candidates table
4. two-week retention of control
5. daily labor hours
```
import os
# import civilservant
import pandas as pd
# cant get this working at the moment
# since its just static i will do it in sql and copy paste
# from civilservant import db
# db_engine = db.init_engine()
# print(db_engine)
# db_engine.execute('show databases;').fetchall()
#db_engine.execute('select count(*) from candidates;').fetchall()
l_arrays = [['Newcomer', 'Newcomer', 'Newcomer', 'Newcomer', 'Experienced', 'Experienced', 'Experienced'],
['table.1.index.2', 'table.1.index.3', 'table.1.index.7', 'table.1.index.1', 'table.1.index.6', 'table.1.index.7', 'table.1.index.5']]
mi = pd.MultiIndex.from_arrays(l_arrays, names=('Experience level', 'Language'))
mau = pd.DataFrame([[12281],[8555],[2414], [12281+8555+2414] ,[1899],[1985], [1899+1985]], index=mi, columns=['table.1.subtitle.left.1'])
# select lang, user_experience_level='bin_0', count(*) from candidates where lang != 'en' group by lang, user_experience_level='bin_0' order by user_experience_level='bin_0' desc;
# lang,user_experience_level='bin_0',count(*)
# ar,1,5936
# de,1,10476
# pl,1,2756
# fa,0,4648
# pl,0,8189
edit_qualifying_array = [
[5936],
[10476],
[2756],
[5936+10476+2756],
[4648],
[8189],
[4648+8189],
]
edit_qualifying = pd.DataFrame(data=edit_qualifying_array,
index=mi,
columns=['table.1.subtitle.left.2']
)
# select lang, user_experience_level='bin_0', count(*) from candidates
# where lang != 'en' and user_editcount_quality>=4
# group by lang, user_experience_level='bin_0' order by user_experience_level='bin_0' desc, lang;
# lang,user_experience_level='bin_0',count(*)
# ar,1,3142
# de,1,5834
# pl,1,1653
# fa,0,2537
# pl,0,3384
quality_qualifying_array = [
[3142],
[5834],
[1653],
[3142+5834+1653],
[2537],
[3384],
[2537+3384],
]
quality_qualifying = pd.DataFrame(data=quality_qualifying_array,
index=mi,
columns=['table.1.subtitle.left.3']
)
perc_treatment_thanked_array = [
[0.13],
[0.48],
[0.14],
[0.31],
[0.06],
[0.72],
[0.43],
]
perc_treatment_thanked = pd.DataFrame(data=perc_treatment_thanked_array,
index=mi,
columns=['table.1.subtitle.left.4']
)
datadir = os.getenv('TRESORDIR')
tresorpath = 'CivilServant/projects/wikipedia-integration/gratitude-study/Data Drills/thankee/post_experiment_analysis'
fname = 'grat-thankee-all-pre-post-treatment-vars-max-cols.csv'
fpath = os.path.join(datadir, tresorpath, fname)
all_partici = pd.read_csv(fpath)
deleted_cond = all_partici['thanks.not.received.user.deleted']==True
multi_cond = all_partici['received.multiple.thanks']==True
unobservable_blocks = all_partici[(deleted_cond) | (multi_cond)]['randomization.block.id']
partici = all_partici[~all_partici['randomization.block.id'].isin(unobservable_blocks)]
t_f = os.path.join(datadir, 'CivilServant/projects/wikipedia-integration/gratitude-study/datasets/debrief-translations/l10ns_thanker_thankee_all.csv')
translations = pd.read_csv(t_f, index_col='key')
f'Len all participants {len(all_partici)}, observable only {len(partici)}'
control = partici[partici['randomization.arm']==0]
assert len(control)*2 == len(partici)
full_lang_names = {'ar':'table.1.index.2', 'de': 'table.1.index.3', 'pl': 'table.1.index.7', 'fa': 'table.1.index.6'}
control['Language'] = control['lang'].apply(lambda l: full_lang_names[l])
control['Experience level'] = control['prev.experience.assignment'].apply(lambda b: 'Newcomer' if b=='bin_0' else 'Experienced')
control_aggs = control.groupby(['Experience level', 'Language',]).agg({'two.week.retention': 'mean',
'labor.hours.per.day.pre.treatment': 'mean',
# 'labor.hours.per.day.post.treatment': 'mean',
'labor.hours.per.day.diff': 'mean',
'thanks.sent': 'mean'})
control_all_aggs = control.groupby(['Experience level',]).agg({'two.week.retention': 'mean',
'labor.hours.per.day.pre.treatment': 'mean',
# 'labor.hours.per.day.post.treatment': 'mean',
'labor.hours.per.day.diff': 'mean',
'thanks.sent': 'mean'})
control_aggs = control_aggs.rename(columns={'labor.hours.per.day.pre.treatment':"table.1.subtitle.right.2",
# 'labor.hours.per.day.post.treatment':"daily labor hours, post-treatment",
'labor.hours.per.day.diff':"table.1.subtitle.right.3",
'two.week.retention':"table.1.subtitle.right.1",
'thanks.sent':"table.1.subtitle.right.4"}, )
control_all_aggs = control_all_aggs.rename(columns={'labor.hours.per.day.pre.treatment':"table.1.subtitle.right.2",
# 'labor.hours.per.day.post.treatment':"daily labor hours, post-treatment",
'labor.hours.per.day.diff':"table.1.subtitle.right.3",
'two.week.retention':"table.1.subtitle.right.1",
'thanks.sent':"table.1.subtitle.right.4"}, )
c_arrays = [
['table.1.title.right']*len(control_aggs.columns),
control_aggs.columns.values
]
control_aggs_cols = pd.MultiIndex.from_arrays(c_arrays)
control_aggs.columns = control_aggs_cols
control_all_aggs.columns = control_aggs_cols
eligible_participants = pd.concat([mau, edit_qualifying, quality_qualifying, perc_treatment_thanked], axis=1)
cc_arrays = [
['table.1.title.left']*len(eligible_participants.columns),
eligible_participants.columns.values
]
eligible_participants_cols = pd.MultiIndex.from_arrays(cc_arrays)
eligible_participants.columns = eligible_participants_cols
eligible_participants
control_aggs
control_all_aggs
c_a_arrays = [control_all_aggs.index.values, ['table.1.index.5', 'table.1.index.1',] ]
control_all_aggs.index = pd.MultiIndex.from_arrays(c_a_arrays)
control_all_aggs
caggs = pd.concat([control_all_aggs, control_aggs])
caggs
eligible_participants
# eligible_participants
table1 = pd.concat([eligible_participants, caggs], axis=1)
table1 = table1.sort_index(level=0,ascending=False, sort_remaining=True)
table1[('table.1.title.left','table.1.subtitle.left.4')] = table1[('table.1.title.left','table.1.subtitle.left.4')].apply(lambda x: "{0:.0%}".format(x))
table1[('table.1.title.right','table.1.subtitle.right.1')] = table1[('table.1.title.right','table.1.subtitle.right.1')].apply(lambda x: "{0:.0%}".format(x))
table1
def make_blog_html(lang):
translation_dict = translations[lang].to_dict()
print(translation_dict)
# translation_dict['Newcomer']='' #no string, but keep the spacing
# translation_dict['Experienced']='' #no string but keep the spacing
table1_lang = table1.rename(translation_dict, axis=1).rename(translation_dict, axis=0)
# table1.index = table1.index.get_level_values(0)
table1_html = table1_lang.to_html(float_format='{:,.3f}'.format)
table1_captions_html = f'<caption>{translations.loc["table.1.caption"][lang]}</caption>'
header_tag = '<table border="1" class="dataframe">'
output_html = table1_html.replace(header_tag, f'{header_tag}{table1_captions_html}')
print(output_html)
fname = f'table1_{lang}.html'
table1_html_f = os.path.join(datadir,
'CivilServant/projects/wikipedia-integration/gratitude-study/datasets/debrief-translations/blog_assets/thankee_table_html',
fname)
with open(table1_html_f, 'w') as f:
f.write(output_html)
for lang in translations.columns:
make_blog_html(lang)
```
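The `<caption>` splice in `make_blog_html` works by string replacement: `DataFrame.to_html` as used here offers no caption hook, so the caption element is inserted right after the opening `<table>` tag. A small standalone sketch of that step:

```python
# Hedged sketch of the caption-injection step used above.
def inject_caption(table_html, caption_text):
    header_tag = '<table border="1" class="dataframe">'
    # Splice the <caption> element directly after the opening <table> tag.
    return table_html.replace(header_tag, f'{header_tag}<caption>{caption_text}</caption>', 1)

demo = '<table border="1" class="dataframe"><tr><td>1</td></tr></table>'
print(inject_caption(demo, 'Table 1'))
```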
## thanker table
1. make the multiindexes
2. put in the localisation strings
3. run `make_thanker_blog_html` for each language
```
tresorpath_thanker = 'CivilServant/projects/wikipedia-integration/gratitude-study/datasets/misc'
fname_thanker = 'thanker.table.1.csv'
fpath_thanker = os.path.join(datadir, tresorpath_thanker, fname_thanker)
# print(fpath_thanker)
tdf = pd.read_csv(fpath_thanker,index_col=0)
tdf = tdf.set_index('Group.1')
tdf['total'] = ['3%', '37%', '37%', '23%']
correct_order_cols = ['total','num.reverts.56.pre.treatment','previous.supportive.actions','labor.hours.56.pre.treatment','pre.newcomer.capability','pre.newcomer.intent']
correct_order_index = ['Monitor & Mentor','Mentor Only','Monitor Only', 'Neither']
tdf = tdf.reindex(correct_order_index)
tdf = tdf[correct_order_cols]
top_level_cols = [' ', 'average activity', 'average activity', 'average activity', 'average survey', 'average survey']
tdf.columns = pd.MultiIndex.from_arrays([top_level_cols, tdf.columns])
tdf.index.name=None
view_to_label = {'average activity':'thanker.Table.1.column.title.1',
'title':'thanker.Table.1.title',
'':'thanker.Table.1.title.highlight.1',
'':'thanker.Table.1.title.highlight.2',
'':'thanker.Table.1.title.highlight.3',
'average survey':'thanker.Table.1.column.title.2',
'total':'thanker.Table.1.column.label.1',
'num.reverts.56.pre.treatment':'thanker.Table.1.column.label.2',
'previous.supportive.actions':'thanker.Table.1.column.label.3',
'labor.hours.56.pre.treatment':'thanker.Table.1.column.label.4',
'pre.newcomer.capability':'thanker.Table.1.column.label.5',
'pre.newcomer.intent':'thanker.Table.1.column.label.6',
'Monitor & Mentor':'thanker.Table.1.row.1',
'Mentor Only':'thanker.Table.1.row.2',
'Monitor Only':'thanker.Table.1.row.3',
'Neither':'thanker.Table.1.row.4',}
def make_thanker_blog_html(lang):
translation_dict = translations[lang].to_dict()
table1_label = tdf.rename(view_to_label, axis=1).rename(view_to_label, axis=0)
table1_lang = table1_label.rename(translation_dict, axis=1).rename(translation_dict, axis=0)
table1_html = table1_lang.to_html(float_format='{:,.1f}'.format)
table1_captions_html = f'<caption><h4>{translations.loc["thanker.Table.1.title"][lang]}</h4></caption>'
header_tag = '<table border="1" class="dataframe">'
output_html = table1_html.replace(header_tag, f'{header_tag}{table1_captions_html}')
fname = f'table1_{lang}.html'
table1_html_f = os.path.join(datadir,
'CivilServant/projects/wikipedia-integration/gratitude-study/datasets/debrief-translations/blog_assets/thanker_table_html',
fname)
with open(table1_html_f, 'w') as f:
f.write(output_html)
for lang in translations.columns:
make_thanker_blog_html(lang)
```
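The two-stage rename in `make_thanker_blog_html` (view name → localisation key → translated string) can be illustrated with a tiny standalone sketch — the dictionaries here are made-up stand-ins for `view_to_label` and the `translations` table:

```python
# Hedged sketch of the two-stage rename: readable view names map to stable
# localisation keys, which in turn map to per-language strings.
view_to_label = {'total': 'thanker.Table.1.column.label.1'}
translations_ar = {'thanker.Table.1.column.label.1': 'المجموع'}

def localise(name, view_to_label, translation_dict):
    key = view_to_label.get(name, name)    # view name -> l10n key
    return translation_dict.get(key, key)  # l10n key -> localised string

print(localise('total', view_to_label, translations_ar))
```

Keeping the keys stable means the table code never changes when a translation is updated.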

# Hierarchical Clustering Lab
In this notebook, we will be using sklearn to conduct hierarchical clustering on the [Iris dataset](https://archive.ics.uci.edu/ml/datasets/iris) which contains 4 dimensions/attributes and 150 samples. Each sample is labeled as one of the three types of Iris flowers.
In this exercise, we'll ignore the labeling and cluster based on the attributes, then we'll compare the results of different hierarchical clustering techniques with the original labels to see which one does a better job in this scenario. We'll then proceed to visualize the resulting cluster hierarchies.
## 1. Importing the Iris dataset
```
from sklearn import datasets
iris = datasets.load_iris()
```
A look at the first 10 samples in the dataset
```
iris.data[:10]
```
```iris.target``` contains the labels that indicate which type of Iris flower each sample is
```
iris.target
```
## 2. Clustering
Let's now use sklearn's [```AgglomerativeClustering```](http://scikit-learn.org/stable/modules/generated/sklearn.cluster.AgglomerativeClustering.html) to conduct the hierarchical clustering
```
from sklearn.cluster import AgglomerativeClustering
# Hierarchical clustering
# Ward is the default linkage algorithm, so we'll start with that
ward = AgglomerativeClustering(n_clusters=3)
ward_pred = ward.fit_predict(iris.data)
```
Let's also try complete and average linkages
**Exercise**:
* Conduct hierarchical clustering with complete linkage, store the predicted labels in the variable ```complete_pred```
* Conduct hierarchical clustering with average linkage, store the predicted labels in the variable ```avg_pred```
Note: look at the documentation of [```AgglomerativeClustering```](http://scikit-learn.org/stable/modules/generated/sklearn.cluster.AgglomerativeClustering.html) to find the appropriate value to pass as the ```linkage``` value
```
# Hierarchical clustering using complete linkage
# TODO: Create an instance of AgglomerativeClustering with the appropriate parameters
complete = AgglomerativeClustering(n_clusters=3, linkage="complete")
# Fit & predict
# TODO: Make AgglomerativeClustering fit the dataset and predict the cluster labels
complete_pred = complete.fit_predict(iris.data)
# Hierarchical clustering using average linkage
# TODO: Create an instance of AgglomerativeClustering with the appropriate parameters
avg = AgglomerativeClustering(n_clusters=3, linkage="average")
# Fit & predict
# TODO: Make AgglomerativeClustering fit the dataset and predict the cluster labels
avg_pred = avg.fit_predict(iris.data)
```
To determine which clustering result better matches the original labels of the samples, we can use ```adjusted_rand_score``` which is an *external cluster validation index* which results in a score between -1 and 1, where 1 means two clusterings are identical in how they grouped the samples in a dataset (regardless of what label is assigned to each cluster).
Cluster validation indices are discussed later in the course.
```
from sklearn.metrics import adjusted_rand_score
ward_ar_score = adjusted_rand_score(iris.target, ward_pred)
```
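To see why `adjusted_rand_score` ignores the actual label values, here is a sketch of the plain (unadjusted) Rand index, which counts pairs of samples that the two clusterings treat the same way. The adjusted variant additionally corrects for chance agreement, which this sketch omits:

```python
from itertools import combinations

# Hedged sketch: the plain (unadjusted) Rand index. A pair of samples "agrees"
# if both clusterings put them in the same cluster, or both keep them apart.
def rand_index(labels_a, labels_b):
    agree = 0
    pairs = list(combinations(range(len(labels_a)), 2))
    for i, j in pairs:
        same_a = labels_a[i] == labels_a[j]
        same_b = labels_b[i] == labels_b[j]
        agree += same_a == same_b
    return agree / len(pairs)

# Identical clusterings score 1.0 even though the label names differ.
print(rand_index([0, 0, 1, 1], [1, 1, 0, 0]))  # → 1.0
```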
**Exercise**:
* Calculate the Adjusted Rand score of the clusters resulting from complete linkage and average linkage
```
# TODO: Calculate the adjusted Rand score for the complete linkage clustering labels
complete_ar_score = adjusted_rand_score(iris.target, complete_pred)
# TODO: Calculate the adjusted Rand score for the average linkage clustering labels
avg_ar_score = adjusted_rand_score(iris.target, avg_pred)
```
Which algorithm results in the higher Adjusted Rand Score?
```
print( "Scores: \nWard:", ward_ar_score,"\nComplete: ", complete_ar_score, "\nAverage: ", avg_ar_score)
```
## 3. The Effect of Normalization on Clustering
Can we improve on this clustering result?
Let's take another look at the dataset
```
iris.data[:15]
```
Looking at this, we can see that the fourth column has smaller values than the rest of the columns, and so its variance counts for less in the clustering process (since clustering is based on distance). Let us [normalize](https://en.wikipedia.org/wiki/Feature_scaling) the dataset so that each dimension lies between 0 and 1, so they have equal weight in the clustering process.
One way to do this is min-max scaling: subtract the minimum from each column, then divide the difference by the range.
sklearn provides a useful utility called ```preprocessing.normalize()``` that rescales the data for us (note that it actually scales each sample/row to unit norm rather than applying per-column min-max, but it serves the same goal of putting the features on a comparable scale)
```
from sklearn import preprocessing
normalized_X = preprocessing.normalize(iris.data)
normalized_X[:10]
```
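For comparison, the per-column min-max recipe described above (subtract the column minimum, divide by the range) can be sketched by hand — this is an illustration; in sklearn the per-column version is what `MinMaxScaler` implements:

```python
# Hedged sketch: per-column min-max scaling, (x - min) / (max - min).
def min_max_scale(rows):
    cols = list(zip(*rows))                      # column-wise view
    scaled_cols = []
    for col in cols:
        lo, hi = min(col), max(col)
        rng = hi - lo or 1.0                     # guard against constant columns
        scaled_cols.append([(x - lo) / rng for x in col])
    return [list(r) for r in zip(*scaled_cols)]  # back to row-wise

print(min_max_scale([[5.1, 3.5], [4.9, 3.0], [4.7, 3.2]]))
```

After this transform every column's minimum maps to 0 and its maximum to 1, regardless of the original units.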
Now all the columns are in the range between 0 and 1. Would clustering the dataset after this transformation lead to a better clustering? (one that better matches the original labels of the samples)
```
ward = AgglomerativeClustering(n_clusters=3)
ward_pred = ward.fit_predict(normalized_X)
complete = AgglomerativeClustering(n_clusters=3, linkage="complete")
complete_pred = complete.fit_predict(normalized_X)
avg = AgglomerativeClustering(n_clusters=3, linkage="average")
avg_pred = avg.fit_predict(normalized_X)
ward_ar_score = adjusted_rand_score(iris.target, ward_pred)
complete_ar_score = adjusted_rand_score(iris.target, complete_pred)
avg_ar_score = adjusted_rand_score(iris.target, avg_pred)
print( "Scores: \nWard:", ward_ar_score,"\nComplete: ", complete_ar_score, "\nAverage: ", avg_ar_score)
```
## 4. Dendrogram visualization with scipy
Let's visualize the highest scoring clustering result.
To do that, we'll need to use scipy's [```linkage```](https://docs.scipy.org/doc/scipy/reference/generated/scipy.cluster.hierarchy.linkage.html) function to perform the clustering again, so we can obtain the linkage matrix we will later use to visualize the hierarchy
```
# Import scipy's linkage function to conduct the clustering
from scipy.cluster.hierarchy import linkage
# Specify the linkage type. Scipy accepts 'ward', 'complete', 'average', as well as other values
# Pick the one that resulted in the highest Adjusted Rand Score
linkage_type = 'ward'
linkage_matrix = linkage(normalized_X, linkage_type)
```
Plot using scipy's [dendrogram](https://docs.scipy.org/doc/scipy-0.14.0/reference/generated/scipy.cluster.hierarchy.dendrogram.html) function
```
from scipy.cluster.hierarchy import dendrogram
import matplotlib.pyplot as plt
%matplotlib inline
plt.figure(figsize=(22,18))
# plot using 'dendrogram()'
dendrogram(linkage_matrix)
plt.show()
```
## 5. Visualization with Seaborn's ```clustermap```
The [seaborn](http://seaborn.pydata.org/index.html) plotting library for python can plot a [clustermap](http://seaborn.pydata.org/generated/seaborn.clustermap.html), which is a detailed dendrogram that also visualizes the dataset itself. It conducts the clustering as well, so we only need to pass it the dataset and the linkage type we want, and it will use scipy internally to conduct the clustering
```
import seaborn as sns
sns.clustermap(normalized_X, figsize=(12,18), method=linkage_type, cmap='viridis')
# Expand figsize to a value like (18, 50) if you want the sample labels to be readable
# The drawback is that you'll need more scrolling to observe the dendrogram
plt.show()
```
Looking at the colors of the dimensions, can you observe how they differ between the three types of flowers? You should at least be able to notice how one is vastly different from the other two (in the top third of the image).

<a href="https://colab.research.google.com/github/ahmedoumar/ArSarcasm/blob/master/Fine_tune_MARBERT_from_checkpoint.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a>
# Download MARBERT checkpoint
```
!wget https://huggingface.co/UBC-NLP/MARBERT/resolve/main/MARBERT_pytorch_verison.tar.gz
!tar -xvf MARBERT_pytorch_verison.tar.gz
!wget https://raw.githubusercontent.com/UBC-NLP/marbert/main/examples/UBC_AJGT_final_shuffled_train.tsv
!wget https://raw.githubusercontent.com/UBC-NLP/marbert/main/examples/UBC_AJGT_final_shuffled_test.tsv
!mkdir -p AJGT
!mv UBC_AJGT_final_shuffled_train.tsv ./AJGT/UBC_AJGT_final_shuffled_train.tsv
!mv UBC_AJGT_final_shuffled_test.tsv ./AJGT/UBC_AJGT_final_shuffled_test.tsv
!pip install GPUtil pytorch_pretrained_bert transformers
```
# Fine-tuning code
```
# (1)load libraries
import json, sys, regex
import torch
import GPUtil
import torch.nn as nn
from torch.utils.data import TensorDataset, DataLoader, RandomSampler, SequentialSampler
from keras.preprocessing.sequence import pad_sequences
from sklearn.model_selection import train_test_split
from pytorch_pretrained_bert import BertTokenizer, BertConfig, BertAdam, BertForSequenceClassification
from tqdm import tqdm, trange
import pandas as pd
import os
import numpy as np
from sklearn.metrics import accuracy_score, f1_score, recall_score, precision_score, classification_report, confusion_matrix
##----------------------------------------------------
from transformers import *
from transformers import XLMRobertaConfig
from transformers import XLMRobertaModel
from transformers import AutoTokenizer, AutoModelWithLMHead
from transformers import XLMRobertaForSequenceClassification, XLMRobertaTokenizer, XLMRobertaModel
from tokenizers import Tokenizer, models, pre_tokenizers, decoders, processors
from transformers import AdamW, get_linear_schedule_with_warmup
from transformers import AutoTokenizer, AutoModel
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
print ("your device ", device)
def create_label2ind_file(file, label_col):
labels_json={}
#load train_dev_test file
df = pd.read_csv(file, sep="\t")
df.head(5)
#get labels and sort it A-Z
labels = df[label_col].unique()
labels.sort()
#convert labels to indexes
for idx in range(0, len(labels)):
labels_json[labels[idx]]=idx
#save labels with indexes to file
with open(label2idx_file, 'w') as json_file:
json.dump(labels_json, json_file)
def data_prepare_BERT(file_path, lab2ind, tokenizer, content_col, label_col, MAX_LEN):
# Use pandas to load dataset
df = pd.read_csv(file_path, delimiter='\t', header=0)
df = df[df[content_col].notnull()]
df = df[df[label_col].notnull()]
print("Data size ", df.shape)
# Create sentence and label lists
sentences = df[content_col].values
sentences = ["[CLS] " + sentence+ " [SEP]" for sentence in sentences]
print ("The first sentence:")
print (sentences[0])
# Create sentence and label lists
labels = df[label_col].values
#print (labels)
labels = [lab2ind[i] for i in labels]
# Import the BERT tokenizer, used to convert our text into tokens that correspond to BERT's vocabulary.
tokenized_texts = [tokenizer.tokenize(sent) for sent in sentences]
print ("Tokenize the first sentence:")
print (tokenized_texts[0])
#print("Label is ", labels[0])
# Use the BERT tokenizer to convert the tokens to their index numbers in the BERT vocabulary
input_ids = [tokenizer.convert_tokens_to_ids(x) for x in tokenized_texts]
print ("Index numbers of the first sentence:")
print (input_ids[0])
# Pad our input seqeunce to the fixed length (i.e., max_len) with index of [PAD] token
# ~ input_ids = pad_sequences(input_ids, maxlen=MAX_LEN, dtype="long", truncating="post", padding="post")
pad_ind = tokenizer.convert_tokens_to_ids(['[PAD]'])[0]
input_ids = pad_sequences(input_ids, maxlen=MAX_LEN+2, dtype="long", truncating="post", padding="post", value=pad_ind)
print ("Index numbers of the first sentence after padding:\n",input_ids[0])
# Create attention masks
attention_masks = []
# Create a mask of 1s for each token followed by 0s for padding
for seq in input_ids:
seq_mask = [float(i > 0) for i in seq]
attention_masks.append(seq_mask)
# Convert all of our data into torch tensors, the required datatype for our model
inputs = torch.tensor(input_ids)
labels = torch.tensor(labels)
masks = torch.tensor(attention_masks)
return inputs, labels, masks
# Function to calculate the accuracy of our predictions vs labels
# def flat_accuracy(preds, labels):
# pred_flat = np.argmax(preds, axis=1).flatten()
# labels_flat = labels.flatten()
# return np.sum(pred_flat == labels_flat) / len(labels_flat)
def flat_pred(preds, labels):
pred_flat = np.argmax(preds, axis=1).flatten()
labels_flat = labels.flatten()
return pred_flat.tolist(), labels_flat.tolist()
def train(model, iterator, optimizer, scheduler, criterion):
model.train()
epoch_loss = 0
for i, batch in enumerate(iterator):
# Add batch to GPU
batch = tuple(t.to(device) for t in batch)
# Unpack the inputs from our dataloader
input_ids, input_mask, labels = batch
outputs = model(input_ids, input_mask, labels=labels)
loss, logits = outputs[:2]
# delete used variables to free GPU memory
del batch, input_ids, input_mask, labels
optimizer.zero_grad()
if torch.cuda.device_count() == 1:
loss.backward()
epoch_loss += loss.cpu().item()
else:
loss.sum().backward()
epoch_loss += loss.sum().cpu().item()
torch.nn.utils.clip_grad_norm_(model.parameters(), max_grad_norm) # clip before stepping; gradient clipping is not in AdamW anymore
optimizer.step()
scheduler.step()
# free GPU memory
if device == 'cuda':
torch.cuda.empty_cache()
return epoch_loss / len(iterator)
def evaluate(model, iterator, criterion):
model.eval()
epoch_loss = 0
all_pred=[]
all_label = []
with torch.no_grad():
for i, batch in enumerate(iterator):
# Add batch to GPU
batch = tuple(t.to(device) for t in batch)
# Unpack the inputs from our dataloader
input_ids, input_mask, labels = batch
outputs = model(input_ids, input_mask, labels=labels)
loss, logits = outputs[:2]
# delete used variables to free GPU memory
del batch, input_ids, input_mask
if torch.cuda.device_count() == 1:
epoch_loss += loss.cpu().item()
else:
epoch_loss += loss.sum().cpu().item()
# identify the predicted class for each example in the batch
probabilities, predicted = torch.max(logits.cpu().data, 1)
# put all the true labels and predictions to two lists
all_pred.extend(predicted)
all_label.extend(labels.cpu())
accuracy = accuracy_score(all_label, all_pred)
f1score = f1_score(all_label, all_pred, average='macro')
recall = recall_score(all_label, all_pred, average='macro')
precision = precision_score(all_label, all_pred, average='macro')
report = classification_report(all_label, all_pred)
return (epoch_loss / len(iterator)), accuracy, f1score, recall, precision
def fine_tuning(config):
#---------------------------------------
print ("[INFO] step (1) load train_test config file")
# config_file = open(config_file, 'r', encoding="utf8")
# config = json.load(config_file)
task_name = config["task_name"]
content_col = config["content_col"]
label_col = config["label_col"]
train_file = config["data_dir"]+config["train_file"]
dev_file = config["data_dir"]+config["dev_file"]
sortby = config["sortby"]
max_seq_length= int(config["max_seq_length"])
batch_size = int(config["batch_size"])
lr_var = float(config["lr"])
model_path = config['pretrained_model_path']
num_epochs = config['epochs'] # Number of training epochs (authors recommend between 2 and 4)
global label2idx_file
label2idx_file = config["data_dir"]+config["task_name"]+"_labels-dict.json"
#-------------------------------------------------------
print ("[INFO] step (2) convert labels2index")
create_label2ind_file(train_file, label_col)
print (label2idx_file)
#---------------------------------------------------------
print ("[INFO] step (3) check checkpoint directory and report file")
ckpt_dir = config["data_dir"]+task_name+"_bert_ckpt/"
report = ckpt_dir+task_name+"_report.tsv"
sorted_report = ckpt_dir+task_name+"_report_sorted.tsv"
if not os.path.exists(ckpt_dir):
os.mkdir(ckpt_dir)
#-------------------------------------------------------
print ("[INFO] step (4) load label to number dictionary")
lab2ind = json.load(open(label2idx_file))
print ("[INFO] train_file", train_file)
print ("[INFO] dev_file", dev_file)
print ("[INFO] num_epochs", num_epochs)
print ("[INFO] model_path", model_path)
print ("max_seq_length", max_seq_length, "batch_size", batch_size)
#-------------------------------------------------------
print ("[INFO] step (5) use the defined function to tokenize the data")
# tokenizer from pre-trained BERT model
print ("loading BERT setting")
tokenizer = BertTokenizer.from_pretrained(model_path)
train_inputs, train_labels, train_masks = data_prepare_BERT(train_file, lab2ind, tokenizer,content_col, label_col, max_seq_length)
validation_inputs, validation_labels, validation_masks = data_prepare_BERT(dev_file, lab2ind, tokenizer, content_col, label_col,max_seq_length)
# Load BertForSequenceClassification, the pretrained BERT model with a single linear classification layer on top.
model = BertForSequenceClassification.from_pretrained(model_path, num_labels=len(lab2ind))
#--------------------------------------
print ("[INFO] step (6) Create an iterator of data with torch DataLoader.")
# This helps save on memory during training because, unlike a for loop,
# with an iterator the entire dataset does not need to be loaded into memory
train_data = TensorDataset(train_inputs, train_masks, train_labels)
train_dataloader = DataLoader(train_data, batch_size=batch_size)
#---------------------------
validation_data = TensorDataset(validation_inputs, validation_masks, validation_labels)
validation_dataloader = DataLoader(validation_data, batch_size=batch_size)
#------------------------------------------
print ("[INFO] step (7) run with parallel GPUs")
if torch.cuda.is_available():
if torch.cuda.device_count() == 1:
print("Run", "with one GPU")
model = model.to(device)
else:
n_gpu = torch.cuda.device_count()
print("Run", "with", n_gpu, "GPUs with max 4 GPUs")
device_ids = GPUtil.getAvailable(limit = 4)
torch.backends.cudnn.benchmark = True
model = model.to(device)
model = nn.DataParallel(model, device_ids=device_ids)
else:
print("Run", "with CPU")
model = model
#---------------------------------------------------
print ("[INFO] step (8) set Parameters, schedules, and loss function")
global max_grad_norm
max_grad_norm = 1.0
warmup_proportion = 0.1
num_training_steps = len(train_dataloader) * num_epochs
num_warmup_steps = num_training_steps * warmup_proportion
### In Transformers, optimizer and schedules are instantiated like this:
# Note: AdamW is a class from the huggingface library
# the 'W' stands for 'Weight Decay"
optimizer = AdamW(model.parameters(), lr=lr_var, correct_bias=False)
# schedules
scheduler = get_linear_schedule_with_warmup(optimizer, num_warmup_steps=num_warmup_steps, num_training_steps=num_training_steps) # PyTorch scheduler
# We use nn.CrossEntropyLoss() as our loss function.
criterion = nn.CrossEntropyLoss()
#---------------------------------------------------
print ("[INFO] step (9) start fine_tuning")
for epoch in trange(num_epochs, desc="Epoch"):
train_loss = train(model, train_dataloader, optimizer, scheduler, criterion)
val_loss, val_acc, val_f1, val_recall, val_precision = evaluate(model, validation_dataloader, criterion)
# print (train_loss, val_acc)
# Create checkpoint at end of each epoch
if not os.path.exists(ckpt_dir + 'model_' + str(int(epoch + 1)) + '/'): os.mkdir(ckpt_dir + 'model_' + str(int(epoch + 1)) + '/')
model.save_pretrained(ckpt_dir+ 'model_' + str(int(epoch + 1)) + '/')
epoch_eval_results = {"epoch_num":int(epoch + 1),"train_loss":train_loss,
"val_acc":val_acc, "val_recall":val_recall, "val_precision":val_precision, "val_f1":val_f1,"lr":lr_var }
with open(report,"a") as fOut:
fOut.write(json.dumps(epoch_eval_results)+"\n")
fOut.flush()
#------------------------------------
report_df = pd.read_json(report, orient='records', lines=True)
report_df.sort_values(by=[sortby],ascending=False, inplace=True)
report_df.to_csv(sorted_report,sep="\t",index=False)
return report_df
```
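The padding and attention-mask step inside `data_prepare_BERT` can be boiled down to a few lines. Here is a hedged sketch (`pad_id=0` is an assumption of this illustration; the notebook looks the real id up with `tokenizer.convert_tokens_to_ids(['[PAD]'])`):

```python
# Hedged sketch of the padding + attention-mask logic in data_prepare_BERT:
# right-truncate, right-pad with the [PAD] id, mask 1.0 for real tokens.
def pad_and_mask(token_ids, max_len, pad_id=0):
    ids = token_ids[:max_len]                    # truncate ("post")
    ids = ids + [pad_id] * (max_len - len(ids))  # pad ("post")
    mask = [1.0 if t != pad_id else 0.0 for t in ids]
    return ids, mask

ids, mask = pad_and_mask([101, 2054, 102], max_len=5)
print(ids)   # padded to length 5 with the pad id
print(mask)  # 1.0 over real tokens, 0.0 over padding
```

Like the notebook's `float(i > 0)` mask, this relies on the pad id never colliding with a real token id.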
# Run fine-tuning for 5 epochs
```
config={"task_name": "AJGT_MARBERT", #output directory name
"data_dir": "./AJGT/", #data directory
"train_file": "UBC_AJGT_final_shuffled_train.tsv", #train file path
"dev_file": "UBC_AJGT_final_shuffled_test.tsv", #dev file path or test file path
"pretrained_model_path": 'MARBERT_pytorch_verison', #MARBERT checkpoint path
"epochs": 5, #number of epochs
"content_col": "content", #text column
"label_col": "label", #label column
"lr": 2e-06, #learning rate
"max_seq_length": 128, #max sequance length
"batch_size": 32, #batch shize
"sortby":"val_acc"} #sort results based on val_acc or val_f1
report_df = fine_tuning(config)
report_df.head(5)
```

[**Blueprints for Text Analysis Using Python**](https://github.com/blueprints-for-text-analytics-python/blueprints-text)
Jens Albrecht, Sidharth Ramachandran, Christian Winkler
# Chapter 6:<div class='tocSkip'/>
# How to use classification algorithms to label text into multiple categories
## Remark<div class='tocSkip'/>
The code in this notebook differs slightly from the printed book. For example we frequently use pretty print (`pp.pprint`) instead of `print` and `tqdm`'s `progress_apply` instead of Pandas' `apply`.
Moreover, several layout and formatting commands, like `figsize` to control figure size or subplot commands are removed in the book.
You may also find some lines marked with three hashes ###. Those are not in the book as well as they don't contribute to the concept.
All of this is done to simplify the code in the book and put the focus on the important parts instead of formatting.
## Setup<div class='tocSkip'/>
Set directory locations. If working on Google Colab: copy files and install required libraries.
```
import sys, os
ON_COLAB = 'google.colab' in sys.modules
if ON_COLAB:
GIT_ROOT = 'https://github.com/blueprints-for-text-analytics-python/blueprints-text/raw/master'
os.system(f'wget {GIT_ROOT}/ch06/setup.py')
%run -i setup.py
```
## Load Python Settings<div class="tocSkip"/>
```
%run "$BASE_DIR/settings.py"
%reload_ext autoreload
%autoreload 2
%config InlineBackend.figure_format = 'png'
# to print output of all statements and not just the last
from IPython.core.interactiveshell import InteractiveShell
InteractiveShell.ast_node_interactivity = "all"
# otherwise text between $ signs will be interpreted as formula and printed in italic
pd.set_option('display.html.use_mathjax', False)
# path to import blueprints packages
sys.path.append(BASE_DIR + '/packages')
import pandas as pd
import matplotlib.pyplot as plt
import html
import re
from sklearn.model_selection import train_test_split
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.svm import SVC
from sklearn.metrics import accuracy_score
from sklearn.metrics import confusion_matrix
from sklearn.metrics import classification_report
from sklearn.metrics import plot_confusion_matrix
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import Pipeline
from sklearn.model_selection import GridSearchCV
from sklearn.svm import LinearSVC
from sklearn.dummy import DummyClassifier
```
## What you'll learn and what we will build
# Introducing the Java Development Tools Bug Dataset
```
file = "eclipse_jdt.csv"
file = f"{BASE_DIR}/data/jdt-bugs-dataset/eclipse_jdt.csv.gz" ### real location
df = pd.read_csv(file)
print (df.columns)
df[['Issue_id','Priority','Component','Title','Description']].sample(2, random_state=42)
df = df.drop(columns=['Duplicated_issue']) ###
pd.set_option('display.max_colwidth', None)  # show full cell contents
df.sample(1, random_state=123).T
df['Priority'].value_counts().sort_index().plot(kind='bar')
df['Component'].value_counts()
```
# Blueprint: Building a Text Classification system
## Step 1 - Data Preparation
```
df = df[['Title','Description','Priority']]
df = df.dropna()
df['text'] = df['Title'] + ' ' + df['Description']
df = df.drop(columns=['Title','Description'])
df.columns
from blueprints.preparation import clean
df['text'] = df['text'].apply(clean)
df = df[df['text'].str.len() > 50]
df.sample(2, random_state=0)
```
## Step 2 - Train-Test Split
```
X_train, X_test, Y_train, Y_test = train_test_split(df['text'],
df['Priority'],
test_size=0.2,
random_state=42,
stratify=df['Priority'])
print('Size of Training Data ', X_train.shape[0])
print('Size of Test Data ', X_test.shape[0])
```
## Step 3 - Training the machine learning model
```
tfidf = TfidfVectorizer(min_df = 10, ngram_range=(1,2), stop_words="english")
X_train_tf = tfidf.fit_transform(X_train)
model1 = LinearSVC(random_state=0, tol=1e-5)
model1.fit(X_train_tf, Y_train)
```
## Step 4 - Model Evaluation
```
X_test_tf = tfidf.transform(X_test)
Y_pred = model1.predict(X_test_tf)
print ('Accuracy Score - ', accuracy_score(Y_test, Y_pred))
clf = DummyClassifier(strategy='most_frequent', random_state=42)
clf.fit(X_train, Y_train)
Y_pred_baseline = clf.predict(X_test)
print ('Accuracy Score - ', accuracy_score(Y_test, Y_pred_baseline))
```
### Precision and Recall
```
Y_pred = model1.predict(X_test_tf)
confusion_matrix(Y_test, Y_pred)
plot_confusion_matrix(model1,X_test_tf,
Y_test, values_format='d',
cmap=plt.cm.Blues)
plt.show()
print(classification_report(Y_test, Y_pred))
```
### Class Imbalance
```
# Filter bug reports with priority P3 and sample 4000 rows from it
df_sampleP3 = df[df['Priority'] == 'P3'].sample(n=4000, random_state=123)
# Create a separate dataframe containing all other bug reports
df_sampleRest = df[df['Priority'] != 'P3']
# Concatenate the two dataframes to create the new balanced bug reports dataset
df_balanced = pd.concat([df_sampleRest, df_sampleP3])
# Check the status of the class imbalance
df_balanced['Priority'].value_counts()
```
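Undersampling, as above, throws data away. A different lever, not used in this blueprint, is to reweight classes during training: scikit-learn's `class_weight='balanced'` option weights each class by n_samples / (n_classes * count(class)). A minimal pure-Python sketch of that formula, with hypothetical priority counts:

```python
from collections import Counter

def balanced_class_weights(labels):
    # Per-class weight n_samples / (n_classes * count), the formula
    # behind scikit-learn's class_weight='balanced'
    counts = Counter(labels)
    n_samples, n_classes = len(labels), len(counts)
    return {c: n_samples / (n_classes * n) for c, n in counts.items()}

# Hypothetical imbalance: P3 dominates, as in the bug dataset
labels = ['P3'] * 80 + ['P1'] * 10 + ['P2'] * 10
print(balanced_class_weights(labels))
```

Passing `class_weight='balanced'` to `LinearSVC` applies this reweighting without discarding any rows.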
# Final Blueprint for Text Classification
```
# Loading the balanced dataframe
df = df_balanced[['text', 'Priority']]
df = df.dropna()
# Step 1 - Data Preparation
df['text'] = df['text'].apply(clean)
# Step 2 - Train-Test Split
X_train, X_test, Y_train, Y_test = train_test_split(df['text'],
df['Priority'],
test_size=0.2,
random_state=42,
stratify=df['Priority'])
print('Size of Training Data ', X_train.shape[0])
print('Size of Test Data ', X_test.shape[0])
# Step 3 - Training the Machine Learning model
tfidf = TfidfVectorizer(min_df=10, ngram_range=(1, 2), stop_words="english")
X_train_tf = tfidf.fit_transform(X_train)
model1 = LinearSVC(random_state=0, tol=1e-5)
model1.fit(X_train_tf, Y_train)
# Step 4 - Model Evaluation
X_test_tf = tfidf.transform(X_test)
Y_pred = model1.predict(X_test_tf)
print('Accuracy Score - ', accuracy_score(Y_test, Y_pred))
print(classification_report(Y_test, Y_pred))
clf = DummyClassifier(strategy='stratified', random_state=21)
clf.fit(X_train, Y_train)
Y_pred_baseline = clf.predict(X_test)
print ('Accuracy Score - ', accuracy_score(Y_test, Y_pred_baseline))
## Create a dataframe combining the Title and Description,
## Actual and Predicted values that we can explore
frame = { 'text': X_test, 'actual': Y_test, 'predicted': Y_pred }
result = pd.DataFrame(frame)
result[((result['actual'] == 'P1') | (result['actual'] == 'P2')) &
(result['actual'] == result['predicted'])].sample(2, random_state=22)
result[((result['actual'] == 'P1') | (result['actual'] == 'P2')) &
(result['actual'] != result['predicted'])].sample(2, random_state=33)
```
# Cross-Validation
```
# Vectorization
tfidf = TfidfVectorizer(min_df = 10, ngram_range=(1,2), stop_words="english")
df_tf = tfidf.fit_transform(df['text']).toarray()
# Cross Validation with 5 folds
scores = cross_val_score(estimator=model1,
X=df_tf,
y=df['Priority'],
cv=5)
print ("Validation scores from each iteration of the cross validation ", scores)
print ("Mean value across of validation scores ", scores.mean())
print ("Standard deviation of validation scores ", scores.std())
```
# Hyperparameter Tuning with Grid Search
```
training_pipeline = Pipeline(
steps=[('tfidf', TfidfVectorizer(
stop_words="english")), ('model',
LinearSVC(random_state=21, tol=1e-5))])
grid_param = [{
'tfidf__min_df': [5, 10],
'tfidf__ngram_range': [(1, 3), (1, 6)],
'model__penalty': ['l2'],
'model__loss': ['hinge'],
'model__max_iter': [10000]
}, {
'tfidf__min_df': [5, 10],
'tfidf__ngram_range': [(1, 3), (1, 6)],
'model__C': [1, 10],
'model__tol': [1e-2, 1e-3]
}]
gridSearchProcessor = GridSearchCV(estimator=training_pipeline,
param_grid=grid_param,
cv=5)
gridSearchProcessor.fit(df['text'], df['Priority'])
best_params = gridSearchProcessor.best_params_
print("Best alpha parameter identified by grid search ", best_params)
best_result = gridSearchProcessor.best_score_
print("Best result identified by grid search ", best_result)
gridsearch_results = pd.DataFrame(gridSearchProcessor.cv_results_)
gridsearch_results[['rank_test_score', 'mean_test_score',
'params']].sort_values(by=['rank_test_score'])[:5]
```
# Blueprint recap and conclusion
```
# Flag that determines the choice of SVC (True) and LinearSVC (False)
runSVC = True
# Loading the dataframe
file = "eclipse_jdt.csv"
file = f"{BASE_DIR}/data/jdt-bugs-dataset/eclipse_jdt.csv.gz" ### real location
df = pd.read_csv(file)
df = df[['Title', 'Description', 'Component']]
df = df.dropna()
df['text'] = df['Title'] + ' ' + df['Description']
df = df.drop(columns=['Title', 'Description'])
# Step 1 - Data Preparation
df['text'] = df['text'].apply(clean)
df = df[df['text'].str.len() > 50]
if (runSVC):
# Sample the data when running SVC to ensure reasonable run-times
df = df.groupby('Component', as_index=False).apply(pd.DataFrame.sample,
random_state=42,
frac=.2)
# Step 2 - Train-Test Split
X_train, X_test, Y_train, Y_test = train_test_split(df['text'],
df['Component'],
test_size=0.2,
random_state=42,
stratify=df['Component'])
print('Size of Training Data ', X_train.shape[0])
print('Size of Test Data ', X_test.shape[0])
# Step 3 - Training the Machine Learning model
tfidf = TfidfVectorizer(stop_words="english")
if (runSVC):
model = SVC(random_state=42, probability=True)
grid_param = [{
'tfidf__min_df': [5, 10],
'tfidf__ngram_range': [(1, 3), (1, 6)],
'model__C': [1, 100],
'model__kernel': ['linear']
}]
else:
model = LinearSVC(random_state=42, tol=1e-5)
grid_param = {
'tfidf__min_df': [5, 10],
'tfidf__ngram_range': [(1, 3), (1, 6)],
'model__C': [1, 100],
'model__loss': ['hinge']
}
training_pipeline = Pipeline(
steps=[('tfidf', TfidfVectorizer(stop_words="english")), ('model', model)])
gridSearchProcessor = GridSearchCV(estimator=training_pipeline,
param_grid=grid_param,
cv=5)
gridSearchProcessor.fit(X_train, Y_train)
best_params = gridSearchProcessor.best_params_
print("Best alpha parameter identified by grid search ", best_params)
best_result = gridSearchProcessor.best_score_
print("Best result identified by grid search ", best_result)
best_model = gridSearchProcessor.best_estimator_
# Step 4 - Model Evaluation
Y_pred = best_model.predict(X_test)
print('Accuracy Score - ', accuracy_score(Y_test, Y_pred))
print(classification_report(Y_test, Y_pred))
clf = DummyClassifier(strategy='most_frequent', random_state=21)
clf.fit(X_train, Y_train)
Y_pred_baseline = clf.predict(X_test)
print ('Accuracy Score - ', accuracy_score(Y_test, Y_pred_baseline))
## Create a dataframe combining the Title and Description,
## Actual and Predicted values that we can explore
frame = { 'text': X_test, 'actual': Y_test, 'predicted': Y_pred }
result = pd.DataFrame(frame)
result[result['actual'] == result['predicted']].sample(2, random_state=21)
result[result['actual'] != result['predicted']].sample(2, random_state=42)
```
# Closing Remarks
# Further Reading
| github_jupyter |
```
odd=[1,3,4,7,9]
!dir
a=[1,2,3]
a=[1,2,3]
a
a=[1,2,3]
a
a=[1,2,3]
a[0]
a[0]+a[2]
a[-1]
a = [1, 2, 3, ['a','b','c']]
a=[0]
a=[0
]
a=[1,2,3['a','b','c']]
a[0]
a=[1,2,3,['a','b','b']]
a[0]
a[-1][0]
a= [ 1,2,3,4,5]
a[0:2]
b=[a:2]
b=a[:2]
c=a[2:]
b
c
a= [1,2,3,['a','b','c'],4,5]
a[2:5]
a[3][:2]
a=[1,2,3]
b=[4,5,6]
a+b
a=[1,2,3]
a*3
a=[1,2,3]
len(a
)
a=[1,2,3]
len(a
)
a=[1,2,3]
a[2]=4
a
a
a[2]=1
a
a = [1,2,3]
del a[1]
a
a=[1,2,3,4,5]
ded a[2:]
a
a=[1,2,3,4,5]
del a[2:]
a
a=[1,2,3]
a.append(4)
a
b
a
a.append([5,6]
)
a
a=[1,4,3,2]
a.short()
a
a=[1,4,3,2]
a.sort()
a
a=['a','c','b']
a.sort()
a
a=['a','c','b']
a.reverse()
a
a = [1,2,3]
a.index(3)
a=[1,2,3]
a.insert(0,4)
a
a= [1, 2, 3, 1, 2, 3]
a.remove(3)
a
a=[1,2,3]
a.pop()
a
a=[1,2,3]
a.pop()
a
a = [1,2,3]
a.pop(1)
# 2
a
# [1, 3]
a=[1,2,3,1]
a.count(1)
a=[1,2,3]
a.extend([4,5])
a
b=[6,7]
a.extend(b)
a
t1=()
t2=(1,)
t3=(1,2,3)
t4=1,2,3
t5=('a','b',('ab','cd'))
t1=(1,2,'a','b')
t1[0]
# 1
t1=(1,2,'a','b')
t1[1:]
# (2, 'a', 'b')
t2=(3,4)
t1+t2
t2*3
t1=(1,2,'a','b')
len(t1)
a = {1:'a'}
a[2]='b'
a
# {1: 'a', 2: 'b'}
a['name'] = 'pey'
a
a[3]=[1,2,3]
a
del a[1]
a
grade = {'pey':10,'julliet':99}
grade['pey']
# 10
grade = {'pey':10,'julliet':99}
grade['pey']
grade['julliet']
a = {1:'a',2:'b'}
a[1]
a[2]
a = {'a':1, 'b':2}
a['a']
a['b']
a={1:'a',1:'b'}
a
a = { 'name':'pey', 'phone':'0119993323', 'birth':'1118' }
a.keys()
for k in a.keys():
for k in a.keys():print(k)
list(a.keys())
# ['name', 'phone', 'birth']
a.values()
a.item()
a.items()
a.clear()
a
a = { 'name':'pey', 'phone':'0119993323', 'birth':'1118'}
a.get('name')
a.get('phone')
a.get('foo','bar')
a = { 'name':'pey', 'phone':'0119993323', 'birth':'1118'}
'name' in a
# True
'email' in a
# False
s1=set([1,2,3])
s1
s2=set("Hello")
s2
s1= set([1,2,3])
l1=list(s1)
l1
l1[0]
t1=tuple(s1)
t1
t1[0]
s1=set([1,2,3,4,5,6])
s2=set([5,6,7,8,9])
s1&s2
s1=set([1,2,3,4,5,6])
s2=set([4, 5,6,7,8,9])
s1&s2
s1.intersection(s2)
s1 | s2
s1.union(s2)
s1 - s2
s2- s1
s1.difference(s2)
s2.difference(s1)
s1= set([1,2,3])
s1.add(4)
s1
s1=set([1,2,3])
s1.update([4,5,6])  # s1 is now {1, 2, 3, 4, 5, 6}
s1
s1=set([1,2,3])
s1.remove(2)
s1
a=True
b=False
type(a)
type(b)
1 == 1
2 < 1
2 > 1
a = [1,2,3,4]
while a:
a.pop()
a = [1,2,3,4]
while a:
... a.pop()
...
bool('python')
bool('')
bool([1,2,3])
bool([])
bool(0)
bool(3)
a = [1,2,3]
id(a)
a=[1,2,3]
b=a
id(a)
a is b
a[1]=4
a
b
a=[1,2,3]
b=a[:]
a[1]=4
a
b
from copy import copy
a = [1,2,3]
b= copy(a)
a
b
b is a
a,b = ( 'python','life')
(a,b)='python','life'
[a,b]=['python','life']
a=b='python'
a=3
b=5
a,b=b,a
a
b
money = True
if money:
... print("택시를 타고 가라")
...else:
... print("걸어 가라")
...
money = True
if money:
print("택시를 타고 가라")
else:
print("걸어 가라")
money = True
if money:
... print("택시를 타고 가라")
...else:
... print("걸어 가라")
money = True
if money:
... print("택시를 타고 가라")
...else:
print("걸어 가라")
money = True
if money:
... print("택시를 타고 가라")
else:
print("걸어 가라")
money = True
if money:
print("택시를 타고 가라")
... else:
print("걸어 가라")
money = True
if money:
... print("택시를 타고 가라")
... else:
print("걸어 가라")
money = True
if money:
... print("택시를 타고 가라")
...else:
... print("걸어 가라")
...
money = True
if money
print("택시를")
print("타고")
print("가라")
money = True
if money:
print("택시를")
print("타고")
print("가라")
money = True
if money:
x=3
y=3
x>y
x=3
y=2
x>3
x=3
y=2
x>y
money=2000
if money >= 3000:
print("택시를 타고 가라")
else:
print("걸어 가라")
money = 2000
if money >= 3000:
print("택시를 타고 가라")
else:
print("걸어 가라")
money=2000
if money >= 3000:
... print("택시를 타고 가라")
... else:
... print("걸어 가라")
```
| github_jupyter |
850 hPa Temperature Advection
=============================
Plot an 850 hPa map with calculating advection using MetPy.
Beyond just plotting 850-hPa level data, this uses calculations from `metpy.calc` to find
the temperature advection. Currently, this needs an extra helper function to calculate
the distance between lat/lon grid points.
Imports
```
from datetime import datetime
import cartopy.crs as ccrs
import cartopy.feature as cfeature
import matplotlib.gridspec as gridspec
import matplotlib.pyplot as plt
import metpy.calc as mpcalc
import numpy as np
import scipy.ndimage as ndimage
from cftime import num2pydate
from metpy.units import units
from siphon.catalog import TDSCatalog
```
Helper functions
```
# Helper function for finding proper time variable
def find_time_var(var, time_basename='time'):
for coord_name in var.coordinates.split():
if coord_name.startswith(time_basename):
return coord_name
raise ValueError('No time variable found for ' + var.name)
```
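The helper simply scans the variable's `coordinates` attribute for the first name starting with `time`. A quick illustration with a stand-in object (`FakeVar` is hypothetical, not a real netCDF variable):

```python
def find_time_var(var, time_basename='time'):
    # Same logic as the helper above: return the first coordinate
    # name that starts with time_basename
    for coord_name in var.coordinates.split():
        if coord_name.startswith(time_basename):
            return coord_name
    raise ValueError('No time variable found for ' + var.name)

class FakeVar:
    # Stand-in mimicking a netCDF variable's metadata attributes
    coordinates = 'time1 isobaric lat lon'
    name = 'Temperature_isobaric'

print(find_time_var(FakeVar()))  # -> time1
```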
Create NCSS object to access the NetcdfSubset
---------------------------------------------
Data from NCEI GFS 0.5 deg Analysis Archive
```
dt = datetime(2017, 4, 5, 12)
# Assemble our URL to the THREDDS Data Server catalog,
# and access our desired dataset within via NCSS
base_url = 'https://www.ncei.noaa.gov/thredds/model-gfs-g4-anl-files-old/'
cat = TDSCatalog(f'{base_url}{dt:%Y%m}/{dt:%Y%m%d}/catalog.xml')
ncss = cat.datasets[f'gfsanl_4_{dt:%Y%m%d}_{dt:%H}00_000.grb2'].subset()
# Create NCSS query for our desired time, region, and data variables
query = ncss.query()
query.time(dt)
query.lonlat_box(north=65, south=15, east=310, west=220)
query.accept('netcdf')
query.variables('Geopotential_height_isobaric',
'Temperature_isobaric',
'u-component_of_wind_isobaric',
'v-component_of_wind_isobaric')
# Obtain the queried data
data = ncss.get_data(query)
# Pull out variables you want to use
hght_var = data.variables['Geopotential_height_isobaric']
temp_var = data.variables['Temperature_isobaric']
u_wind_var = data.variables['u-component_of_wind_isobaric']
v_wind_var = data.variables['v-component_of_wind_isobaric']
time_var = data.variables[find_time_var(temp_var)]
lat_var = data.variables['lat']
lon_var = data.variables['lon']
# Get actual data values and remove any size 1 dimensions
lat = lat_var[:].squeeze()
lon = lon_var[:].squeeze()
hght = hght_var[:].squeeze()
temp = units.Quantity(temp_var[:].squeeze(), temp_var.units)
u_wind = units.Quantity(u_wind_var[:].squeeze(), u_wind_var.units)
v_wind = units.Quantity(v_wind_var[:].squeeze(), v_wind_var.units)
# Convert number of hours since the reference time into an actual date
time = num2pydate(time_var[:].squeeze(), time_var.units)
lev_850 = np.where(data.variables['isobaric'][:] == 850*100)[0][0]
hght_850 = hght[lev_850]
temp_850 = temp[lev_850]
u_wind_850 = u_wind[lev_850]
v_wind_850 = v_wind[lev_850]
# Combine 1D latitude and longitudes into a 2D grid of locations
lon_2d, lat_2d = np.meshgrid(lon, lat)
# Gridshift for barbs
lon_2d[lon_2d > 180] = lon_2d[lon_2d > 180] - 360
```
Begin data calculations
-----------------------
```
# Use helper function defined above to calculate distance
# between lat/lon grid points
dx, dy = mpcalc.lat_lon_grid_deltas(lon_var, lat_var)
# Calculate temperature advection using metpy function
adv = mpcalc.advection(temp_850, [u_wind_850, v_wind_850],
(dx, dy), dim_order='yx')
# Smooth heights and advection a little
# Be sure to only put in a 2D lat/lon or Y/X array for smoothing
Z_850 = mpcalc.smooth_gaussian(hght_850, 2)
adv = mpcalc.smooth_gaussian(adv, 2)
```
Begin map creation
------------------
```
# Set Projection of Data
datacrs = ccrs.PlateCarree()
# Set Projection of Plot
plotcrs = ccrs.LambertConformal(central_latitude=[30, 60], central_longitude=-100)
# Create new figure
fig = plt.figure(figsize=(11, 8.5))
gs = gridspec.GridSpec(2, 1, height_ratios=[1, .02], bottom=.07, top=.99,
hspace=0.01, wspace=0.01)
# Add the map and set the extent
ax = plt.subplot(gs[0], projection=plotcrs)
plt.title(f'850mb Temperature Advection for {time:%d %B %Y %H:%MZ}', fontsize=16)
ax.set_extent([235., 290., 20., 55.])
# Add state/country boundaries to plot
ax.add_feature(cfeature.STATES)
ax.add_feature(cfeature.BORDERS)
# Plot Height Contours
clev850 = np.arange(900, 3000, 30)
cs = ax.contour(lon_2d, lat_2d, Z_850, clev850, colors='black', linewidths=1.5,
linestyles='solid', transform=datacrs)
plt.clabel(cs, fontsize=10, inline=1, inline_spacing=10, fmt='%i',
rightside_up=True, use_clabeltext=True)
# Plot Temperature Contours
clevtemp850 = np.arange(-20, 20, 2)
cs2 = ax.contour(lon_2d, lat_2d, temp_850.to(units('degC')), clevtemp850,
colors='grey', linewidths=1.25, linestyles='dashed',
transform=datacrs)
plt.clabel(cs2, fontsize=10, inline=1, inline_spacing=10, fmt='%i',
rightside_up=True, use_clabeltext=True)
# Plot Colorfill of Temperature Advection
cint = np.arange(-8, 9)
cf = ax.contourf(lon_2d, lat_2d, 3*adv.to(units('delta_degC/hour')), cint[cint != 0],
extend='both', cmap='bwr', transform=datacrs)
cax = plt.subplot(gs[1])
cb = plt.colorbar(cf, cax=cax, orientation='horizontal', extendrect=True, ticks=cint)
cb.set_label(r'$^{o}C/3h$', size='large')
# Plot Wind Barbs
ax.barbs(lon_2d, lat_2d, u_wind_850.magnitude, v_wind_850.magnitude,
length=6, regrid_shape=20, pivot='middle', transform=datacrs)
```
| github_jupyter |
# Get Predictions from [AIforThai (xiaofan)](https://aiforthai.in.th/aiplatform/#/chinesetothai)
Predictions from AIforThai xiaofan were run on 1,500 randomly selected sentence pairs of the test set (a subset, due to limitations of the AIforThai API) on 2021-06-21 by [@wannaphong](http://python3.wannaphong.com/)
```
import pandas as pd
import requests
from urllib.parse import quote_plus
headers = {
'Apikey': "", # It is my cat.
}
import sqlite3
conn = sqlite3.connect('ch2.db')
print("เปิดฐานข้อมูลสำเร็จ")
conn.execute('''CREATE TABLE zhth
(ID INTEGER PRIMARY KEY AUTOINCREMENT,
DFID INT NOT NULL,
ZH TEXT NOT NULL,
TH TEXT NOT NULL);''')
conn.execute('''CREATE TABLE thzh
(ID INTEGER PRIMARY KEY AUTOINCREMENT,
DFID INT NOT NULL,
TH TEXT NOT NULL,
ZH TEXT NOT NULL);''')
conn.commit()
print("สร้างตารางสำเร็จ :D ")
conn.close()
df = pd.read_csv('test1500_seed42.csv')
df
conn = sqlite3.connect('ch2.db', check_same_thread=False)
th2zhlist = []
def th2zh(text,id):
url = 'https://api.aiforthai.in.th/th-zh-nmt/' +quote_plus(text)
response = requests.get(url, headers=headers)
if "The upstream server is timing out" in response.text:
response = requests.get(url, headers=headers)
conn.execute("INSERT INTO thzh (DFID,TH,ZH) VALUES (?, ?,?)",(id,text,response.text))
conn.commit()
th2zh("ผมรักคุณ",-2)
#th2zh("แมว",0)
th2zhlist
from concurrent.futures import ThreadPoolExecutor
from tqdm import tqdm
executor = ThreadPoolExecutor(max_workers=5)
th2zhlist = []
j=0
for i in tqdm(df['th'].tolist()):
executor.submit(th2zh,i,j)
j+=1
len(th2zhlist)
zh2thlist = []
def zh2th(text,id):
url = 'https://api.aiforthai.in.th/zh-th-nmt/' +quote_plus(text)
response = requests.get(url, headers=headers)
if "The upstream server is timing out" in response.text:
response = requests.get(url, headers=headers)
conn.execute("INSERT INTO zhth (DFID,ZH,TH) VALUES (?, ?,?)",(id,text,response.text))
conn.commit()
def zh2th_reload(text,id):
url = 'https://api.aiforthai.in.th/zh-th-nmt/' +quote_plus(text)
response = requests.get(url, headers=headers)
if "The upstream server is timing out" in response.text:
response = requests.get(url, headers=headers)
conn.execute("UPDATE zhth set TH=? WHERE id=?",(response.text,id))
conn.commit()
def th2zh_reload(text,id):
url = 'https://api.aiforthai.in.th/th-zh-nmt/' +quote_plus(text)
response = requests.get(url, headers=headers)
if "The upstream server is timing out" in response.text:
response = requests.get(url, headers=headers)
conn.execute("UPDATE thzh set ZH=? WHERE id=?",(response.text,id))
conn.commit()
# zh2thlist = []
# jj=0
# for i in tqdm(df['zh_cn'].tolist()):
# executor2.submit(zh2th,i,jj)
# jj+=1
cursor = conn.execute("SELECT id,zh from zhth WHERE th='The upstream server is timing out\n';")
for row in tqdm(cursor):
#zh2th_reload(row[1],row[0])
executor.submit(zh2th_reload,row[1],row[0])
cursor2 = conn.execute("SELECT id,th from thzh WHERE zh='The upstream server is timing out\n';")
executor2 = ThreadPoolExecutor(max_workers=5)
for row in tqdm(cursor2):
#print(row)
executor2.submit(th2zh_reload,row[1],row[0])
zh_th_cursor = conn.execute("SELECT DFID,zh,th from zhth")
zh_th_tuples = []
for row in tqdm(zh_th_cursor):
if row[0]>=0:
zh_th_tuples.append(row)
zh_th_tuples[0]
zh_th_tuples[13]
zh_th_tuples = sorted(zh_th_tuples, key=lambda x: x[0])
zh_th_tuples[13]
th_zh_cursor = conn.execute("SELECT DFID,zh,th from thzh")
th_zh_tuples = []
for row in tqdm(th_zh_cursor):
if row[0]>=0:
th_zh_tuples.append(row)
th_zh_tuples[0]
len(th_zh_tuples)
th_zh_tuples[13]
th_zh_tuples = sorted(th_zh_tuples, key=lambda x: x[0])
th_zh_tuples[13]
def check_error(txt):
if 'The' in txt or 'the' in txt:
return None
else:
return txt
dict_data = {
'TH': [i[2] for i in th_zh_tuples],
'ZH': [i[1] for i in zh_th_tuples],
'pred_th': [check_error(i[2]) for i in zh_th_tuples],
'pred_zh': [check_error(i[1]) for i in th_zh_tuples]
}
test_aiforthai=pd.DataFrame.from_dict(dict_data)
test_aiforthai
test_aiforthai.to_csv('aiforthai.csv',index=False)
```
## Benchmark
```
df = pd.read_csv('../data/aiforthai.csv')
df.columns = ['th','zh','pred_th','pred_zh']
df.shape, df.dropna().shape
#choose only those without NAs
df = df.dropna().reset_index(drop=True)
df
df.to_csv('test1226.csv',index=False)
from datasets import load_metric
metric = load_metric("sacrebleu")
#benchmark zh->th
predictions = df.loc[:,'pred_th'].tolist()
references = [[i] for i in df.loc[:,'th'].tolist()]
predictions[:10]
references[:10]
%%time
metric.compute(predictions=predictions,references=references,tokenize='th_syllable')
%%time
metric.compute(predictions=predictions,references=references,tokenize='th_word')
#benchmark th->zh
predictions = df.loc[:,'pred_zh'].tolist()
references = [[i] for i in df.loc[:,'zh'].tolist()]
%%time
metric.compute(predictions=predictions,references=references,tokenize='zh')
predictions[:10]
references[:10]
```
| github_jupyter |
Copyright (c) 2015, 2016 [Sebastian Raschka](sebastianraschka.com)
https://github.com/rasbt/python-machine-learning-book
[MIT License](https://github.com/rasbt/python-machine-learning-book/blob/master/LICENSE.txt)
# Python Machine Learning - Code Examples
# Chapter 8 - Applying Machine Learning To Sentiment Analysis
Note that the optional watermark extension is a small IPython notebook plugin that I developed to make the code reproducible. You can just skip the following line(s).
```
%load_ext watermark
%watermark -a 'Sebastian Raschka' -u -d -v -p numpy,pandas,matplotlib,sklearn,nltk
```
*The use of `watermark` is optional. You can install this IPython extension via "`pip install watermark`". For more information, please see: https://github.com/rasbt/watermark.*
<br>
<br>
### Overview
- [Obtaining the IMDb movie review dataset](#Obtaining-the-IMDb-movie-review-dataset)
- [Introducing the bag-of-words model](#Introducing-the-bag-of-words-model)
- [Transforming words into feature vectors](#Transforming-words-into-feature-vectors)
- [Assessing word relevancy via term frequency-inverse document frequency](#Assessing-word-relevancy-via-term-frequency-inverse-document-frequency)
- [Cleaning text data](#Cleaning-text-data)
- [Processing documents into tokens](#Processing-documents-into-tokens)
- [Training a logistic regression model for document classification](#Training-a-logistic-regression-model-for-document-classification)
- [Working with bigger data – online algorithms and out-of-core learning](#Working-with-bigger-data-–-online-algorithms-and-out-of-core-learning)
- [Summary](#Summary)
<br>
<br>
```
# Added version check for recent scikit-learn 0.18 checks
from distutils.version import LooseVersion as Version
from sklearn import __version__ as sklearn_version
```
# Obtaining the IMDb movie review dataset
The IMDB movie review set can be downloaded from [http://ai.stanford.edu/~amaas/data/sentiment/](http://ai.stanford.edu/~amaas/data/sentiment/).
After downloading the dataset, decompress the files.
A) If you are working with Linux or MacOS X, open a new terminal window, `cd` into the download directory and execute
`tar -zxf aclImdb_v1.tar.gz`
B) If you are working with Windows, download an archiver such as [7Zip](http://www.7-zip.org) to extract the files from the download archive.
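A platform-independent alternative (a sketch, not from the book) is to download and unpack the archive with Python's standard library; the URL below is the one given above:

```python
import os
import tarfile
import urllib.request

def download_and_extract(url, dest='.'):
    # Fetch a .tar.gz archive (skipping the download if the file
    # already exists locally) and unpack it into dest
    archive = os.path.join(dest, os.path.basename(url))
    if not os.path.exists(archive):
        urllib.request.urlretrieve(url, archive)
    with tarfile.open(archive, 'r:gz') as tar:
        tar.extractall(path=dest)

# download_and_extract('http://ai.stanford.edu/~amaas/data/sentiment/aclImdb_v1.tar.gz')
```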
### Compatibility Note:
I received an email from a reader who was having troubles with reading the movie review texts due to encoding issues. Typically, Python's default encoding is set to `'utf-8'`, which shouldn't cause troubles when running this IPython notebook. You can simply check the encoding on your machine by firing up a new Python interpreter from the command line terminal and execute
>>> import sys
>>> sys.getdefaultencoding()
If the returned result is **not** `'utf-8'`, you probably need to change your Python's encoding to `'utf-8'`, for example by typing `export PYTHONIOENCODING=utf8` in your terminal shell prior to running this IPython notebook. (Note that this is a temporary change, and it needs to be executed in the same shell that you'll use to launch `ipython notebook`.)
Alternatively, you can replace the lines
with open(os.path.join(path, file), 'r') as infile:
...
pd.read_csv('./movie_data.csv')
...
df.to_csv('./movie_data.csv', index=False)
by
with open(os.path.join(path, file), 'r', encoding='utf-8') as infile:
...
pd.read_csv('./movie_data.csv', encoding='utf-8')
...
df.to_csv('./movie_data.csv', index=False, encoding='utf-8')
in the following cells to achieve the desired effect.
```
import pyprind
import pandas as pd
import os
# change the `basepath` to the directory of the
# unzipped movie dataset
#basepath = '/Users/Sebastian/Desktop/aclImdb/'
basepath = './aclImdb'
labels = {'pos': 1, 'neg': 0}
pbar = pyprind.ProgBar(50000)
df = pd.DataFrame()
for s in ('test', 'train'):
for l in ('pos', 'neg'):
path = os.path.join(basepath, s, l)
for file in os.listdir(path):
with open(os.path.join(path, file), 'r', encoding='utf-8') as infile:
txt = infile.read()
df = df.append([[txt, labels[l]]], ignore_index=True)
pbar.update()
df.columns = ['review', 'sentiment']
```
Shuffling the DataFrame:
```
import numpy as np
np.random.seed(0)
df = df.reindex(np.random.permutation(df.index))
```
Optional: Saving the assembled data as CSV file:
```
df.to_csv('./movie_data.csv', index=False)
import pandas as pd
df = pd.read_csv('./movie_data.csv')
df.head(3)
```
<hr>
### Note
If you have problems with creating the `movie_data.csv` file in the previous chapter, you can find a download a zip archive at
https://github.com/rasbt/python-machine-learning-book/tree/master/code/datasets/movie
<hr>
<br>
<br>
# Introducing the bag-of-words model
...
## Transforming documents into feature vectors
By calling the fit_transform method on CountVectorizer, we just constructed the vocabulary of the bag-of-words model and transformed the following three sentences into sparse feature vectors:
1. The sun is shining
2. The weather is sweet
3. The sun is shining, the weather is sweet, and one and one is two
```
import numpy as np
from sklearn.feature_extraction.text import CountVectorizer
count = CountVectorizer()
docs = np.array([
'The sun is shining',
'The weather is sweet',
'The sun is shining, the weather is sweet, and one and one is two'])
bag = count.fit_transform(docs)
```
Now let us print the contents of the vocabulary to get a better understanding of the underlying concepts:
```
print(count.vocabulary_)
```
As we can see from executing the preceding command, the vocabulary is stored in a Python dictionary, which maps the unique words to integer indices. Next let us print the feature vectors that we just created:
Each index position in the feature vectors shown here corresponds to the integer values that are stored as dictionary items in the CountVectorizer vocabulary. For example, the first feature at index position 0 resembles the count of the word and, which only occurs in the last document, and the word is, at index position 1 (the 2nd feature in the document vectors), occurs in all three sentences. Those values in the feature vectors are also called the raw term frequencies: *tf (t,d)*—the number of times a term t occurs in a document *d*.
```
print(bag.toarray())
```
<br>
## Assessing word relevancy via term frequency-inverse document frequency
```
np.set_printoptions(precision=2)
```
When we are analyzing text data, we often encounter words that occur across multiple documents from both classes. Those frequently occurring words typically don't contain useful or discriminatory information. In this subsection, we will learn about a useful technique called term frequency-inverse document frequency (tf-idf) that can be used to downweight those frequently occurring words in the feature vectors. The tf-idf can be defined as the product of the term frequency and the inverse document frequency:
$$\text{tf-idf}(t,d)=\text{tf (t,d)}\times \text{idf}(t,d)$$
Here the tf(t, d) is the term frequency that we introduced in the previous section,
and the inverse document frequency *idf(t, d)* can be calculated as:
$$\text{idf}(t,d) = \text{log}\frac{n_d}{1+\text{df}(d, t)},$$
where $n_d$ is the total number of documents, and *df(d, t)* is the number of documents *d* that contain the term *t*. Note that adding the constant 1 to the denominator is optional and serves the purpose of assigning a non-zero value to terms that occur in all training samples; the log is used to ensure that low document frequencies are not given too much weight.
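These definitions are easy to check by hand; the following sketch plugs in the document frequencies from our three example sentences (counts hard-coded, natural log as in the text):

```python
import math

n_d = 3  # total number of documents

def idf(df_t, n_docs=n_d):
    # Textbook idf with the optional +1 in the denominator
    return math.log(n_docs / (1 + df_t))

print(idf(3))  # 'is' occurs in all 3 docs -> log(3/4), slightly negative
print(idf(2))  # 'shining' occurs in 2 docs -> log(3/3) = 0
```

Note that with this variant a term present in every document even gets a small negative weight; the smoothed scikit-learn variant shown later avoids this.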
Scikit-learn implements yet another transformer, the `TfidfTransformer`, that takes the raw term frequencies from `CountVectorizer` as input and transforms them into tf-idfs:
```
from sklearn.feature_extraction.text import TfidfTransformer
tfidf = TfidfTransformer(use_idf=True, norm='l2', smooth_idf=True)
print(tfidf.fit_transform(count.fit_transform(docs)).toarray())
```
As we saw in the previous subsection, the word is had the largest term frequency in the 3rd document, being the most frequently occurring word. However, after transforming the same feature vector into tf-idfs, we see that the word is is
now associated with a relatively small tf-idf (0.45) in document 3 since it is
also contained in documents 1 and 2 and thus is unlikely to contain any useful, discriminatory information.
However, if we'd manually calculated the tf-idfs of the individual terms in our feature vectors, we'd have noticed that the `TfidfTransformer` calculates the tf-idfs slightly differently compared to the standard textbook equations that we defined earlier. The equations for the idf and tf-idf that were implemented in scikit-learn are:
$$\text{idf} (t,d) = log\frac{1 + n_d}{1 + \text{df}(d, t)}$$
The tf-idf equation that was implemented in scikit-learn is as follows:
$$\text{tf-idf}(t,d) = \text{tf}(t,d) \times (\text{idf}(t,d)+1)$$
While it is also more typical to normalize the raw term frequencies before calculating the tf-idfs, the `TfidfTransformer` normalizes the tf-idfs directly.
By default (`norm='l2'`), scikit-learn's TfidfTransformer applies the L2-normalization, which returns a vector of length 1 by dividing an un-normalized feature vector *v* by its L2-norm:
$$v_{\text{norm}} = \frac{v}{||v||_2} = \frac{v}{\sqrt{v_{1}^{2} + v_{2}^{2} + \dots + v_{n}^{2}}} = \frac{v}{\big (\sum_{i=1}^{n} v_{i}^{2}\big)^\frac{1}{2}}$$
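The normalization step is easy to check with plain NumPy; the sketch below uses an arbitrary example vector rather than one of our document vectors:

```python
import numpy as np

v = np.array([3.0, 4.0])  # arbitrary un-normalized feature vector

# Divide by the L2-norm, i.e. the square root of the sum of squared components
v_norm = v / np.sqrt(np.sum(v**2))

print(v_norm)                  # [0.6 0.8]
print(np.linalg.norm(v_norm))  # a unit vector: its L2-norm is 1.0
```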
To make sure that we understand how TfidfTransformer works, let us walk
through an example and calculate the tf-idf of the word is in the 3rd document.
The word is has a term frequency of 3 (tf = 3) in document 3, and the document frequency of this term is 3 since the term is occurs in all three documents (df = 3). Thus, we can calculate the idf as follows:
$$\text{idf}("is", d3) = log \frac{1+3}{1+3} = 0$$
Now in order to calculate the tf-idf, we simply need to add 1 to the inverse document frequency and multiply it by the term frequency:
$$\text{tf-idf}("is",d3)= 3 \times (0+1) = 3$$
```
tf_is = 3
n_docs = 3
idf_is = np.log((n_docs+1) / (3+1))
tfidf_is = tf_is * (idf_is + 1)
print('tf-idf of term "is" = %.2f' % tfidf_is)
```
If we repeated these calculations for all terms in the 3rd document, we'd obtain the following tf-idf vectors: [3.39, 3.0, 3.39, 1.29, 1.29, 1.29, 2.0, 1.69, 1.29]. However, we notice that the values in this feature vector are different from the values that we obtained from the TfidfTransformer that we used previously. The final step that we are missing in this tf-idf calculation is the L2-normalization, which can be applied as follows:
$$\text{tf-idf}_{norm} = \frac{[3.39, 3.0, 3.39, 1.29, 1.29, 1.29, 2.0, 1.69, 1.29]}{\sqrt{3.39^2 + 3.0^2 + 3.39^2 + 1.29^2 + 1.29^2 + 1.29^2 + 2.0^2 + 1.69^2 + 1.29^2}}$$
$$=[0.5, 0.45, 0.5, 0.19, 0.19, 0.19, 0.3, 0.25, 0.19]$$
$$\Rightarrow \text{tf-idf}_{norm}(\text{"is"}, d3) = 0.45$$
As we can see, the results match the results returned by scikit-learn's `TfidfTransformer` (below). Since we now understand how tf-idfs are calculated, let us proceed to the next sections and apply those concepts to the movie review dataset.
```
tfidf = TfidfTransformer(use_idf=True, norm=None, smooth_idf=True)
raw_tfidf = tfidf.fit_transform(count.fit_transform(docs)).toarray()[-1]
raw_tfidf
l2_tfidf = raw_tfidf / np.sqrt(np.sum(raw_tfidf**2))
l2_tfidf
```
<br>
## Cleaning text data
```
df.loc[0, 'review'][-50:]
import re
def preprocessor(text):
    text = re.sub('<[^>]*>', '', text)
    emoticons = re.findall('(?::|;|=)(?:-)?(?:\)|\(|D|P)', text)
    text = (re.sub('[\W]+', ' ', text.lower()) +
            ' '.join(emoticons).replace('-', ''))
    return text
preprocessor(df.loc[0, 'review'][-50:])
preprocessor("</a>This :) is :( a test :-)!")
df['review'] = df['review'].apply(preprocessor)
```
<br>
## Processing documents into tokens
```
from nltk.stem.porter import PorterStemmer
porter = PorterStemmer()
def tokenizer(text):
    return text.split()

def tokenizer_porter(text):
    return [porter.stem(word) for word in text.split()]
tokenizer('runners like running and thus they run')
tokenizer_porter('runners like running and thus they run')
import nltk
nltk.download('stopwords')
from nltk.corpus import stopwords
stop = stopwords.words('english')
[w for w in tokenizer_porter('a runner likes running and runs a lot')[-10:]
if w not in stop]
```
<br>
<br>
# Training a logistic regression model for document classification
Strip HTML and punctuation to speed up the GridSearch later:
```
X_train = df.loc[:25000, 'review'].values
y_train = df.loc[:25000, 'sentiment'].values
X_test = df.loc[25000:, 'review'].values
y_test = df.loc[25000:, 'sentiment'].values
from sklearn.pipeline import Pipeline
from sklearn.linear_model import LogisticRegression
from sklearn.feature_extraction.text import TfidfVectorizer
from distutils.version import LooseVersion as Version
from sklearn import __version__ as sklearn_version
if Version(sklearn_version) < '0.18':
    from sklearn.grid_search import GridSearchCV
else:
    from sklearn.model_selection import GridSearchCV
tfidf = TfidfVectorizer(strip_accents=None,
lowercase=False,
preprocessor=None)
param_grid = [{'vect__ngram_range': [(1, 1)],
'vect__stop_words': [stop, None],
'vect__tokenizer': [tokenizer, tokenizer_porter],
'clf__penalty': ['l1', 'l2'],
'clf__C': [1.0, 10.0, 100.0]},
{'vect__ngram_range': [(1, 1)],
'vect__stop_words': [stop, None],
'vect__tokenizer': [tokenizer, tokenizer_porter],
'vect__use_idf':[False],
'vect__norm':[None],
'clf__penalty': ['l1', 'l2'],
'clf__C': [1.0, 10.0, 100.0]},
]
lr_tfidf = Pipeline([('vect', tfidf),
('clf', LogisticRegression(random_state=0))])
gs_lr_tfidf = GridSearchCV(lr_tfidf, param_grid,
scoring='accuracy',
cv=5,
verbose=1,
n_jobs=-1)
```
**Note:** Some readers [encountered problems](https://github.com/rasbt/python-machine-learning-book/issues/50) running the following code on Windows. Unfortunately, problems with multiprocessing on Windows are not uncommon. So, if the following code cell should result in issues on your machine, try setting `n_jobs=1` (instead of `n_jobs=-1` in the previous code cell).
```
gs_lr_tfidf.fit(X_train, y_train)
print('Best parameter set: %s ' % gs_lr_tfidf.best_params_)
print('CV Accuracy: %.3f' % gs_lr_tfidf.best_score_)
clf = gs_lr_tfidf.best_estimator_
print('Test Accuracy: %.3f' % clf.score(X_test, y_test))
```
<hr>
<hr>
#### Start comment:
Please note that `gs_lr_tfidf.best_score_` is the average k-fold cross-validation score. I.e., if we have a `GridSearchCV` object with 5-fold cross-validation (like the one above), the `best_score_` attribute returns the average score over the 5-folds of the best model. To illustrate this with an example:
```
from sklearn.linear_model import LogisticRegression
import numpy as np
from distutils.version import LooseVersion as Version
from sklearn import __version__ as sklearn_version
if Version(sklearn_version) < '0.18':
    from sklearn.cross_validation import StratifiedKFold
    from sklearn.cross_validation import cross_val_score
else:
    from sklearn.model_selection import StratifiedKFold
    from sklearn.model_selection import cross_val_score
np.random.seed(0)
np.set_printoptions(precision=6)
y = [np.random.randint(3) for i in range(25)]
X = (y + np.random.randn(25)).reshape(-1, 1)
if Version(sklearn_version) < '0.18':
    cv5_idx = list(StratifiedKFold(y, n_folds=5, shuffle=False, random_state=0))
else:
    cv5_idx = list(StratifiedKFold(n_splits=5, shuffle=False, random_state=0).split(X, y))
cross_val_score(LogisticRegression(random_state=123), X, y, cv=cv5_idx)
```
By executing the code above, we created a simple dataset of random integers that shall represent our class labels. Next, we fed the indices of 5 cross-validation folds (`cv5_idx`) to the `cross_val_score` scorer, which returned 5 accuracy scores -- these are the 5 accuracy values for the 5 test folds.
Next, let us use the `GridSearchCV` object and feed it the same 5 cross-validation sets (via the pre-generated `cv5_idx` indices):
```
if Version(sklearn_version) < '0.18':
    from sklearn.grid_search import GridSearchCV
else:
    from sklearn.model_selection import GridSearchCV
gs = GridSearchCV(LogisticRegression(), {}, cv=cv5_idx, verbose=3).fit(X, y)
```
As we can see, the scores for the 5 folds are exactly the same as the ones from `cross_val_score` earlier.
Now, the `best_score_` attribute of the `GridSearchCV` object, which becomes available after `fit`ting, returns the average accuracy score of the best model:
```
gs.best_score_
```
As we can see, the result above is consistent with the average score computed by `cross_val_score`.
```
cross_val_score(LogisticRegression(), X, y, cv=cv5_idx).mean()
```
#### End comment.
<hr>
<hr>
<br>
<br>
# Working with bigger data - online algorithms and out-of-core learning
```
import numpy as np
import re
from nltk.corpus import stopwords
stop = stopwords.words('english')

def tokenizer(text):
    text = re.sub('<[^>]*>', '', text)
    emoticons = re.findall('(?::|;|=)(?:-)?(?:\)|\(|D|P)', text.lower())
    text = (re.sub('[\W]+', ' ', text.lower()) +
            ' '.join(emoticons).replace('-', ''))
    tokenized = [w for w in text.split() if w not in stop]
    return tokenized

def stream_docs(path):
    with open(path, 'r', encoding='utf-8') as csv:
        next(csv)  # skip header
        for line in csv:
            text, label = line[:-3], int(line[-2])
            yield text, label

next(stream_docs(path='./movie_data.csv'))

def get_minibatch(doc_stream, size):
    docs, y = [], []
    try:
        for _ in range(size):
            text, label = next(doc_stream)
            docs.append(text)
            y.append(label)
    except StopIteration:
        return None, None
    return docs, y
from sklearn.feature_extraction.text import HashingVectorizer
from sklearn.linear_model import SGDClassifier
vect = HashingVectorizer(decode_error='ignore',
n_features=2**21,
preprocessor=None,
tokenizer=tokenizer)
clf = SGDClassifier(loss='log', random_state=1, n_iter=1)
doc_stream = stream_docs(path='./movie_data.csv')
import pyprind
pbar = pyprind.ProgBar(45)
classes = np.array([0, 1])
for _ in range(45):
    X_train, y_train = get_minibatch(doc_stream, size=1000)
    if not X_train:
        break
    X_train = vect.transform(X_train)
    clf.partial_fit(X_train, y_train, classes=classes)
    pbar.update()
X_test, y_test = get_minibatch(doc_stream, size=5000)
X_test = vect.transform(X_test)
print('Accuracy: %.3f' % clf.score(X_test, y_test))
clf = clf.partial_fit(X_test, y_test)
```
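A property worth noting about the `HashingVectorizer` used above: it is stateless. Tokens are mapped to column indices by a hash function rather than by a fitted vocabulary, which is what allows each minibatch to be transformed independently without ever seeing the full dataset. A minimal sketch of this (using a plain whitespace tokenizer instead of the `tokenizer` defined above):

```python
from sklearn.feature_extraction.text import HashingVectorizer

vect = HashingVectorizer(n_features=2**10, tokenizer=str.split)

# No fit step: two independent transform calls still map the same
# token to the same column index
X1 = vect.transform(["the weather is sweet"])
X2 = vect.transform(["sweet dreams"])

shared = set(X1.nonzero()[1]) & set(X2.nonzero()[1])
print(shared)  # the token "sweet" hashes to the same column in both
```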
<br>
<br>
# Summary
```
from google.colab import drive
drive.mount('/content/drive')
GOOGLE_COLAB = True
%reload_ext autoreload
%autoreload 2
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
import seaborn as sns
import random
import pickle
import sys
if GOOGLE_COLAB:
    sys.path.append('drive/My Drive/yelp_sentiment_analysis')
else:
    sys.path.append('../')
from yelpsent import data
from yelpsent import features
from yelpsent import metrics
from yelpsent import visualization
from yelpsent import models
import importlib
def reload():
    importlib.reload(data)
    importlib.reload(features)
    importlib.reload(metrics)
    importlib.reload(visualization)
    importlib.reload(models)
```
# Load Dataset
```
if GOOGLE_COLAB:
    data_train, data_test = data.load_dataset("drive/My Drive/yelp_sentiment_analysis/data/yelp_train_balanced.json",
                                              "drive/My Drive/yelp_sentiment_analysis/data/yelp_test.json")
else:
    data_train, data_test = data.load_dataset("../data/yelp_train.json",
                                              "../data/yelp_test.json")
X_train = data_train['review'].tolist()
y_train = data_train['sentiment'].tolist()
X_test = data_test['review'].tolist()
y_test = data_test['sentiment'].tolist()
```
# Load DTMs
```
with open('drive/My Drive/yelp_sentiment_analysis/pickles/vectorizer.pickle', 'rb') as f:
    vectorizer = pickle.load(f)
with open('drive/My Drive/yelp_sentiment_analysis/pickles/X_train_dtm.pickle', 'rb') as f:
    X_train_dtm = pickle.load(f)
with open('drive/My Drive/yelp_sentiment_analysis/pickles/X_test_dtm.pickle', 'rb') as f:
    X_test_dtm = pickle.load(f)
```
# XGBoost Classifier
```
from xgboost import XGBClassifier
from sklearn.model_selection import GridSearchCV
# Grid-search
params = {'max_depth' : [3, 5, 7]}
gscv = GridSearchCV(XGBClassifier(num_class = 3,
objective = 'multi:softmax',
eval_metric = "mlogloss",
n_estimators = 500),
params,
scoring='f1_macro',
cv=3,
verbose=1,
n_jobs=-1)
gscv.fit(X_train_dtm, y_train)
print(gscv.best_params_)
# Final model
model = XGBClassifier(num_class = 3,
objective = 'multi:softmax',
eval_metric = 'mlogloss',
max_depth=7,
n_estimators = 800,
seed = 647,
n_jobs = -1)
%time model.fit(X_train_dtm, y_train)
# with open('drive/My Drive/yelp_sentiment_analysis/models/xgboost.pickle', 'wb') as f:
# pickle.dump(model, f)
with open('drive/My Drive/yelp_sentiment_analysis/models/xgboost.pickle', 'rb') as f:
    model = pickle.load(f)
y_train_pred, y_test_pred, f1_train, f1_test =\
models.evaluate_pipeline(X_train = X_train_dtm,
y_train = y_train,
X_test = X_test_dtm,
y_test = y_test,
pipeline = model)
print("Macro F1 Scores: \n Training: {0:.3f} \n Testing: {1:.3f}\n\n".format(f1_train, f1_test))
```
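A note on the `f1_macro` scoring used in the grid search above: macro-averaging computes the F1 score for each class separately and then takes their unweighted mean, so all three sentiment classes contribute equally regardless of how many reviews each has. A small sketch with made-up labels (not the Yelp data):

```python
from sklearn.metrics import f1_score

y_true = [0, 0, 0, 0, 1, 1, 2, 2]
y_pred = [0, 0, 0, 1, 1, 1, 2, 0]

per_class = f1_score(y_true, y_pred, average=None)   # one F1 per class
macro = f1_score(y_true, y_pred, average='macro')    # unweighted mean of those

print(per_class)
print(macro)  # equals per_class.mean()
```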
### learning rate = 0.005, used convolution for extracting features
```
import tensorflow as tf
tf.logging.set_verbosity(tf.logging.WARN)
import pickle
import numpy as np
import os
from sklearn.model_selection import train_test_split
from sklearn.metrics import f1_score
from sklearn.metrics import accuracy_score
import os
from tensorflow.python.client import device_lib
from collections import Counter
import time
f = open('../../Glove/word_embedding_glove', 'rb')
word_embedding = pickle.load(f)
f.close()
word_embedding = word_embedding[: len(word_embedding)-1]
f = open('../../Glove/vocab_glove', 'rb')
vocab = pickle.load(f)
f.close()
word2id = dict((w, i) for i,w in enumerate(vocab))
id2word = dict((i, w) for i,w in enumerate(vocab))
unknown_token = "UNKNOWN_TOKEN"
# Model Description
model_name = 'model-aw-lex-1-3'
model_dir = '../output/all-word/' + model_name
save_dir = os.path.join(model_dir, "save/")
log_dir = os.path.join(model_dir, "log")
if not os.path.exists(model_dir):
    os.mkdir(model_dir)
if not os.path.exists(save_dir):
    os.mkdir(save_dir)
if not os.path.exists(log_dir):
    os.mkdir(log_dir)
with open('/data/aviraj/dataset/train_val_data_fine/all_word_lex','rb') as f:
    train_data, val_data = pickle.load(f)
# Parameters
mode = 'train'
num_senses = 45
num_pos = 12
batch_size = 64
vocab_size = len(vocab)
unk_vocab_size = 1
word_emb_size = len(word_embedding[0])
max_sent_size = 200
hidden_size = 256
num_filter = 256
kernel_size = 5
keep_prob = 0.3
l2_lambda = 0.001
init_lr = 0.001
decay_steps = 500
decay_rate = 0.99
clip_norm = 1
clipping = True
moving_avg_deacy = 0.999
num_gpus = 6
def average_gradients(tower_grads):
    average_grads = []
    for grad_and_vars in zip(*tower_grads):
        # Note that each grad_and_vars looks like the following:
        #   ((grad0_gpu0, var0_gpu0), ... , (grad0_gpuN, var0_gpuN))
        grads = []
        for g, _ in grad_and_vars:
            # Add a 0 dimension to the gradients to represent the tower.
            expanded_g = tf.expand_dims(g, 0)
            # Append on a 'tower' dimension which we will average over below.
            grads.append(expanded_g)
        # Average over the 'tower' dimension.
        grad = tf.concat(grads, 0)
        grad = tf.reduce_mean(grad, 0)
        # Keep in mind that the Variables are redundant because they are shared
        # across towers. So we will just return the first tower's pointer to
        # the Variable.
        v = grad_and_vars[0][1]
        grad_and_var = (grad, v)
        average_grads.append(grad_and_var)
    return average_grads
# MODEL
device_num = 0
tower_grads = []
losses = []
predictions = []
predictions_pos = []
x = tf.placeholder('int32', [num_gpus, batch_size, max_sent_size], name="x")
y = tf.placeholder('int32', [num_gpus, batch_size, max_sent_size], name="y")
y_pos = tf.placeholder('int32', [num_gpus, batch_size, max_sent_size], name="y_pos")
x_mask = tf.placeholder('bool', [num_gpus, batch_size, max_sent_size], name='x_mask')
sense_mask = tf.placeholder('bool', [num_gpus, batch_size, max_sent_size], name='sense_mask')
is_train = tf.placeholder('bool', [], name='is_train')
word_emb_mat = tf.placeholder('float', [None, word_emb_size], name='emb_mat')
input_keep_prob = tf.cond(is_train,lambda:keep_prob, lambda:tf.constant(1.0))
pretrain = tf.placeholder('bool', [], name="pretrain")
global_step = tf.Variable(0, trainable=False, name="global_step")
learning_rate = tf.train.exponential_decay(init_lr, global_step, decay_steps, decay_rate, staircase=True)
summaries = []
def global_attention(input_x, input_mask, W_att):
    h_masked = tf.boolean_mask(input_x, input_mask)
    h_tanh = tf.tanh(h_masked)
    u = tf.matmul(h_tanh, W_att)
    a = tf.nn.softmax(u)
    c = tf.reduce_sum(tf.multiply(h_tanh, a), 0)
    return c
with tf.variable_scope("word_embedding"):
    unk_word_emb_mat = tf.get_variable("word_emb_mat", dtype='float', shape=[unk_vocab_size, word_emb_size], initializer=tf.contrib.layers.xavier_initializer(uniform=True, seed=0, dtype=tf.float32))
    final_word_emb_mat = tf.concat([word_emb_mat, unk_word_emb_mat], 0)
with tf.variable_scope(tf.get_variable_scope()):
    for gpu_idx in range(num_gpus):
        if gpu_idx > int(num_gpus/2)-1:
            device_num = 1
        with tf.name_scope("model_{}".format(gpu_idx)) as scope, tf.device('/gpu:%d' % device_num):
            if gpu_idx > 0:
                tf.get_variable_scope().reuse_variables()
            with tf.name_scope("word"):
                Wx = tf.nn.embedding_lookup(final_word_emb_mat, x[gpu_idx])
                x_len = tf.reduce_sum(tf.cast(x_mask[gpu_idx], 'int32'), 1)
            with tf.variable_scope("lstm1"):
                cell_fw1 = tf.contrib.rnn.BasicLSTMCell(hidden_size, state_is_tuple=True)
                cell_bw1 = tf.contrib.rnn.BasicLSTMCell(hidden_size, state_is_tuple=True)
                d_cell_fw1 = tf.contrib.rnn.DropoutWrapper(cell_fw1, input_keep_prob=input_keep_prob)
                d_cell_bw1 = tf.contrib.rnn.DropoutWrapper(cell_bw1, input_keep_prob=input_keep_prob)
                (fw_h1, bw_h1), _ = tf.nn.bidirectional_dynamic_rnn(d_cell_fw1, d_cell_bw1, Wx, sequence_length=x_len, dtype='float', scope='lstm1')
                h1 = tf.concat([fw_h1, bw_h1], 2)
            with tf.variable_scope("lstm2"):
                cell_fw2 = tf.contrib.rnn.BasicLSTMCell(hidden_size, state_is_tuple=True)
                cell_bw2 = tf.contrib.rnn.BasicLSTMCell(hidden_size, state_is_tuple=True)
                d_cell_fw2 = tf.contrib.rnn.DropoutWrapper(cell_fw2, input_keep_prob=input_keep_prob)
                d_cell_bw2 = tf.contrib.rnn.DropoutWrapper(cell_bw2, input_keep_prob=input_keep_prob)
                (fw_h2, bw_h2), _ = tf.nn.bidirectional_dynamic_rnn(d_cell_fw2, d_cell_bw2, h1, sequence_length=x_len, dtype='float', scope='lstm2')
                h = tf.concat([fw_h2, bw_h2], 2)
            with tf.variable_scope("global_attention"):
                W_att = tf.get_variable("W_att", shape=[2*hidden_size, 1], initializer=tf.truncated_normal_initializer(mean=0.0, stddev=0.1, seed=gpu_idx*10))
                c = tf.expand_dims(global_attention(h[0], x_mask[gpu_idx][0], W_att), 0)
                for i in range(1, batch_size):
                    c = tf.concat([c, tf.expand_dims(global_attention(h[i], x_mask[gpu_idx][i], W_att), 0)], 0)
                cc = tf.expand_dims(c, 1)
                c_final = tf.tile(cc, [1, max_sent_size, 1])
            rev_bw_h2 = tf.reverse(bw_h2, [1])
            with tf.variable_scope("convolution"):
                conv1_fw = tf.layers.conv1d(inputs=fw_h2, filters=num_filter, kernel_size=[kernel_size], padding='same', activation=tf.nn.relu)
                conv2_fw = tf.layers.conv1d(inputs=conv1_fw, filters=num_filter, kernel_size=[kernel_size], padding='same')
                conv1_bw_rev = tf.layers.conv1d(inputs=rev_bw_h2, filters=num_filter, kernel_size=[kernel_size], padding='same', activation=tf.nn.relu)
                conv2_bw_rev = tf.layers.conv1d(inputs=conv1_bw_rev, filters=num_filter, kernel_size=[kernel_size], padding='same')
                conv2_bw = tf.reverse(conv2_bw_rev, [1])
                h_final = tf.concat([c_final, conv2_fw, conv2_bw], 2)
                flat_h_final = tf.reshape(h_final, [-1, tf.shape(h_final)[2]])
            with tf.variable_scope("hidden_layer"):
                W = tf.get_variable("W", shape=[4*hidden_size, 2*hidden_size], initializer=tf.truncated_normal_initializer(mean=0.0, stddev=0.1, seed=gpu_idx*20))
                b = tf.get_variable("b", shape=[2*hidden_size], initializer=tf.zeros_initializer())
                drop_flat_h_final = tf.nn.dropout(flat_h_final, input_keep_prob)
                flat_hl = tf.matmul(drop_flat_h_final, W) + b
            with tf.variable_scope("softmax_layer"):
                W = tf.get_variable("W", shape=[2*hidden_size, num_senses], initializer=tf.truncated_normal_initializer(mean=0.0, stddev=0.1, seed=gpu_idx*20))
                b = tf.get_variable("b", shape=[num_senses], initializer=tf.zeros_initializer())
                drop_flat_hl = tf.nn.dropout(flat_hl, input_keep_prob)
                flat_logits_sense = tf.matmul(drop_flat_hl, W) + b
                logits = tf.reshape(flat_logits_sense, [batch_size, max_sent_size, num_senses])
                predictions.append(tf.arg_max(logits, 2))
            with tf.variable_scope("softmax_layer_pos"):
                W = tf.get_variable("W", shape=[2*hidden_size, num_pos], initializer=tf.truncated_normal_initializer(mean=0.0, stddev=0.1, seed=gpu_idx*30))
                b = tf.get_variable("b", shape=[num_pos], initializer=tf.zeros_initializer())
                drop_flat_hl = tf.nn.dropout(flat_hl, input_keep_prob)
                flat_logits_pos = tf.matmul(drop_flat_hl, W) + b
                logits_pos = tf.reshape(flat_logits_pos, [batch_size, max_sent_size, num_pos])
                predictions_pos.append(tf.arg_max(logits_pos, 2))
            float_sense_mask = tf.cast(sense_mask[gpu_idx], 'float')
            float_x_mask = tf.cast(x_mask[gpu_idx], 'float')
            loss = tf.contrib.seq2seq.sequence_loss(logits, y[gpu_idx], float_sense_mask, name="loss")
            loss_pos = tf.contrib.seq2seq.sequence_loss(logits_pos, y_pos[gpu_idx], float_x_mask, name="loss_")
            l2_loss = l2_lambda * tf.losses.get_regularization_loss()
            total_loss = tf.cond(pretrain, lambda: loss_pos, lambda: loss + loss_pos + l2_loss)
            summaries.append(tf.summary.scalar("loss_{}".format(gpu_idx), loss))
            summaries.append(tf.summary.scalar("loss_pos_{}".format(gpu_idx), loss_pos))
            summaries.append(tf.summary.scalar("total_loss_{}".format(gpu_idx), total_loss))
            optimizer = tf.train.AdamOptimizer(learning_rate)
            grads_vars = optimizer.compute_gradients(total_loss)
            clipped_grads = grads_vars
            if clipping == True:
                clipped_grads = [(tf.clip_by_norm(grad, clip_norm), var) for grad, var in clipped_grads]
            tower_grads.append(clipped_grads)
            losses.append(total_loss)
tower_grads = average_gradients(tower_grads)
losses = tf.add_n(losses)/len(losses)
apply_grad_op = optimizer.apply_gradients(tower_grads, global_step=global_step)
summaries.append(tf.summary.scalar('total_loss', losses))
summaries.append(tf.summary.scalar('learning_rate', learning_rate))
for var in tf.trainable_variables():
    summaries.append(tf.summary.histogram(var.op.name, var))
variable_averages = tf.train.ExponentialMovingAverage(moving_avg_deacy, global_step)
variables_averages_op = variable_averages.apply(tf.trainable_variables())
train_op = tf.group(apply_grad_op, variables_averages_op)
saver = tf.train.Saver(tf.global_variables())
summary = tf.summary.merge(summaries)
os.environ["CUDA_DEVICE_ORDER"]="PCI_BUS_ID" # see issue #152
os.environ["CUDA_VISIBLE_DEVICES"]="0,1"
# print (device_lib.list_local_devices())
config = tf.ConfigProto()
config.gpu_options.allow_growth = True
config.allow_soft_placement = True
sess = tf.Session(config=config)
sess.run(tf.global_variables_initializer()) # For initializing all the variables
summary_writer = tf.summary.FileWriter(log_dir, sess.graph) # For writing Summaries
save_period = 100
log_period = 100
def model(xx, yy, yy_pos, mask, smask, train_cond=True, pretrain_cond=False):
    num_batches = int(len(xx)/(batch_size*num_gpus))
    _losses = 0
    temp_loss = 0
    preds_sense = []
    true_sense = []
    preds_pos = []
    true_pos = []
    for j in range(num_batches):
        s = j * batch_size * num_gpus
        e = (j+1) * batch_size * num_gpus
        xx_re = xx[s:e].reshape([num_gpus, batch_size, -1])
        yy_re = yy[s:e].reshape([num_gpus, batch_size, -1])
        yy_pos_re = yy_pos[s:e].reshape([num_gpus, batch_size, -1])
        mask_re = mask[s:e].reshape([num_gpus, batch_size, -1])
        smask_re = smask[s:e].reshape([num_gpus, batch_size, -1])
        feed_dict = {x:xx_re, y:yy_re, y_pos:yy_pos_re, x_mask:mask_re, sense_mask:smask_re, pretrain:pretrain_cond, is_train:train_cond, input_keep_prob:keep_prob, word_emb_mat:word_embedding}
        if train_cond == True:
            _, _loss, step, _summary = sess.run([train_op, losses, global_step, summary], feed_dict)
            summary_writer.add_summary(_summary, step)
            temp_loss += _loss
            if (j+1) % log_period == 0:
                print("Steps: {}".format(step), "Loss:{0:.4f}".format(temp_loss/log_period), ", Current Loss: {0:.4f}".format(_loss))
                temp_loss = 0
            if (j+1) % save_period == 0:
                saver.save(sess, save_path=save_dir)
        else:
            _loss, pred, pred_pos = sess.run([total_loss, predictions, predictions_pos], feed_dict)
            for i in range(num_gpus):
                preds_sense.append(pred[i][smask_re[i]])
                true_sense.append(yy_re[i][smask_re[i]])
                preds_pos.append(pred_pos[i][mask_re[i]])
                true_pos.append(yy_pos_re[i][mask_re[i]])
        _losses += _loss
    if train_cond == False:
        sense_preds = []
        sense_true = []
        pos_preds = []
        pos_true = []
        for preds in preds_sense:
            for ps in preds:
                sense_preds.append(ps)
        for trues in true_sense:
            for ts in trues:
                sense_true.append(ts)
        for preds in preds_pos:
            for ps in preds:
                pos_preds.append(ps)
        for trues in true_pos:
            for ts in trues:
                pos_true.append(ts)
        return _losses/num_batches, sense_preds, sense_true, pos_preds, pos_true
    return _losses/num_batches, step
def eval_score(yy, pred, yy_pos, pred_pos):
    f1 = f1_score(yy, pred, average='macro')
    accu = accuracy_score(yy, pred)
    f1_pos = f1_score(yy_pos, pred_pos, average='macro')
    accu_pos = accuracy_score(yy_pos, pred_pos)
    return f1*100, accu*100, f1_pos*100, accu_pos*100
x_id_train = train_data['x']
mask_train = train_data['x_mask']
sense_mask_train = train_data['sense_mask']
y_train = train_data['y']
y_pos_train = train_data['pos']
x_id_val = val_data['x']
mask_val = val_data['x_mask']
sense_mask_val = val_data['sense_mask']
y_val = val_data['y']
y_pos_val = val_data['pos']
num_epochs = 1
val_period = 1
loss_collection = []
val_collection = []
pre_train_cond = True
for i in range(num_epochs):
    random = np.random.choice(len(y_train), size=(len(y_train)), replace=False)
    x_id_train = x_id_train[random]
    y_train = y_train[random]
    mask_train = mask_train[random]
    sense_mask_train = sense_mask_train[random]
    y_pos_train = y_pos_train[random]
    start_time = time.time()
    train_loss, step = model(x_id_train, y_train, y_pos_train, mask_train, sense_mask_train, pretrain_cond=pre_train_cond)
    time_taken = time.time() - start_time
    loss_collection.append(train_loss)
    print("Epoch: {}".format(i+1), ", Step: {}".format(step), ", loss: {0:.4f}".format(train_loss), ", Time: {0:.1f}".format(time_taken))
    saver.save(sess, save_path=save_dir)
    print("Model Saved")
    if (i+1) % val_period == 0:
        start_time = time.time()
        val_loss, val_pred, val_true, val_pred_pos, val_true_pos = model(x_id_val, y_val, y_pos_val, mask_val, sense_mask_val, train_cond=False)
        f1_, accu_, f1_pos_, accu_pos_ = eval_score(val_true, val_pred, val_true_pos, val_pred_pos)
        time_taken = time.time() - start_time
        val_collection.append([f1_, accu_, f1_pos_, accu_pos_])
        print("Val: F1 Score:{0:.2f}".format(f1_), "Accuracy:{0:.2f}".format(accu_), " POS: F1 Score:{0:.2f}".format(f1_pos_), "Accuracy:{0:.2f}".format(accu_pos_), "Loss:{0:.4f}".format(val_loss), ", Time: {0:.1f}".format(time_taken))
num_epochs = 10
val_period = 2
loss_collection = []
val_collection = []
pre_train_cond = False
for i in range(num_epochs):
    random = np.random.choice(len(y_train), size=(len(y_train)), replace=False)
    x_id_train = x_id_train[random]
    y_train = y_train[random]
    mask_train = mask_train[random]
    sense_mask_train = sense_mask_train[random]
    y_pos_train = y_pos_train[random]
    start_time = time.time()
    train_loss, step = model(x_id_train, y_train, y_pos_train, mask_train, sense_mask_train, pretrain_cond=pre_train_cond)
    time_taken = time.time() - start_time
    loss_collection.append(train_loss)
    print("Epoch: {}".format(i+1), ", Step: {}".format(step), ", loss: {0:.4f}".format(train_loss), ", Time: {0:.1f}".format(time_taken))
    saver.save(sess, save_path=save_dir)
    print("Model Saved")
    if (i+1) % val_period == 0:
        start_time = time.time()
        val_loss, val_pred, val_true, val_pred_pos, val_true_pos = model(x_id_val, y_val, y_pos_val, mask_val, sense_mask_val, train_cond=False)
        f1_, accu_, f1_pos_, accu_pos_ = eval_score(val_true, val_pred, val_true_pos, val_pred_pos)
        time_taken = time.time() - start_time
        val_collection.append([f1_, accu_, f1_pos_, accu_pos_])
        print("Val: F1 Score:{0:.2f}".format(f1_), "Accuracy:{0:.2f}".format(accu_), " POS: F1 Score:{0:.2f}".format(f1_pos_), "Accuracy:{0:.2f}".format(accu_pos_), "Loss:{0:.4f}".format(val_loss), ", Time: {0:.1f}".format(time_taken))
for v in val_collection:
    print("Val: F1 Score:{0:.2f}".format(v[0]), "Accuracy:{0:.2f}".format(v[1]), " POS: F1 Score:{0:.2f}".format(v[2]), "Accuracy:{0:.2f}".format(v[3]))
loss_collection
sess.run(learning_rate)
num_epochs = 10
val_period = 2
loss_collection = []
val_collection = []
pre_train_cond = False
for i in range(num_epochs):
    random = np.random.choice(len(y_train), size=(len(y_train)), replace=False)
    x_id_train = x_id_train[random]
    y_train = y_train[random]
    mask_train = mask_train[random]
    sense_mask_train = sense_mask_train[random]
    y_pos_train = y_pos_train[random]
    start_time = time.time()
    train_loss, step = model(x_id_train, y_train, y_pos_train, mask_train, sense_mask_train, pretrain_cond=pre_train_cond)
    time_taken = time.time() - start_time
    loss_collection.append(train_loss)
    print("Epoch: {}".format(i+1), ", Step: {}".format(step), ", loss: {0:.4f}".format(train_loss), ", Time: {0:.1f}".format(time_taken))
    saver.save(sess, save_path=save_dir)
    print("Model Saved")
    if (i+1) % val_period == 0:
        start_time = time.time()
        val_loss, val_pred, val_true, val_pred_pos, val_true_pos = model(x_id_val, y_val, y_pos_val, mask_val, sense_mask_val, train_cond=False)
        f1_, accu_, f1_pos_, accu_pos_ = eval_score(val_true, val_pred, val_true_pos, val_pred_pos)
        time_taken = time.time() - start_time
        val_collection.append([f1_, accu_, f1_pos_, accu_pos_])
        print("Val: F1 Score:{0:.2f}".format(f1_), "Accuracy:{0:.2f}".format(accu_), " POS: F1 Score:{0:.2f}".format(f1_pos_), "Accuracy:{0:.2f}".format(accu_pos_), "Loss:{0:.4f}".format(val_loss), ", Time: {0:.1f}".format(time_taken))
start_time = time.time()
val_loss, val_pred, val_true, val_pred_pos, val_true_pos = model(x_id_val, y_val, y_pos_val, mask_val, sense_mask_val, train_cond=False)
f1_, accu_, f1_pos_, accu_pos_ = eval_score(val_true, val_pred, val_true_pos, val_pred_pos)
time_taken = time.time() - start_time
print("Val: F1 Score:{0:.2f}".format(f1_), "Accuracy:{0:.2f}".format(accu_), " POS: F1 Score:{0:.2f}".format(f1_pos_), "Accuracy:{0:.2f}".format(accu_pos_), "Loss:{0:.4f}".format(val_loss), ", Time: {0:.1f}".format(time_taken))
num_epochs = 10
val_period = 2
loss_collection = []
val_collection = []
pre_train_cond = False
for i in range(num_epochs):
    random = np.random.choice(len(y_train), size=(len(y_train)), replace=False)
    x_id_train = x_id_train[random]
    y_train = y_train[random]
    mask_train = mask_train[random]
    sense_mask_train = sense_mask_train[random]
    y_pos_train = y_pos_train[random]
    start_time = time.time()
    train_loss, step = model(x_id_train, y_train, y_pos_train, mask_train, sense_mask_train, pretrain_cond=pre_train_cond)
    time_taken = time.time() - start_time
    loss_collection.append(train_loss)
    print("Epoch: {}".format(i+1), ", Step: {}".format(step), ", loss: {0:.4f}".format(train_loss), ", Time: {0:.1f}".format(time_taken))
    saver.save(sess, save_path=save_dir)
    print("Model Saved")
    if (i+1) % val_period == 0:
        start_time = time.time()
        val_loss, val_pred, val_true, val_pred_pos, val_true_pos = model(x_id_val, y_val, y_pos_val, mask_val, sense_mask_val, train_cond=False)
        f1_, accu_, f1_pos_, accu_pos_ = eval_score(val_true, val_pred, val_true_pos, val_pred_pos)
        time_taken = time.time() - start_time
        val_collection.append([f1_, accu_, f1_pos_, accu_pos_])
        print("Val: F1 Score:{0:.2f}".format(f1_), "Accuracy:{0:.2f}".format(accu_), " POS: F1 Score:{0:.2f}".format(f1_pos_), "Accuracy:{0:.2f}".format(accu_pos_), "Loss:{0:.4f}".format(val_loss), ", Time: {0:.1f}".format(time_taken))
start_time = time.time()
val_loss, val_pred, val_true, val_pred_pos, val_true_pos = model(x_id_val, y_val, y_pos_val, mask_val, sense_mask_val, train_cond=False)
f1_, accu_, f1_pos_, accu_pos_ = eval_score(val_true, val_pred, val_true_pos, val_pred_pos)
time_taken = time.time() - start_time
print("Val: F1 Score:{0:.2f}".format(f1_), "Accuracy:{0:.2f}".format(accu_), " POS: F1 Score:{0:.2f}".format(f1_pos_), "Accuracy:{0:.2f}".format(accu_pos_), "Loss:{0:.4f}".format(val_loss), ", Time: {0:.1f}".format(time_taken))
num_epochs = 10
val_period = 2
loss_collection = []
val_collection = []
pre_train_cond = False
for i in range(num_epochs):
    random = np.random.choice(len(y_train), size=(len(y_train)), replace=False)
    x_id_train = x_id_train[random]
    y_train = y_train[random]
    mask_train = mask_train[random]
    sense_mask_train = sense_mask_train[random]
    y_pos_train = y_pos_train[random]
    start_time = time.time()
    train_loss, step = model(x_id_train, y_train, y_pos_train, mask_train, sense_mask_train, pretrain_cond=pre_train_cond)
    time_taken = time.time() - start_time
    loss_collection.append(train_loss)
    print("Epoch: {}".format(i+1), ", Step: {}".format(step), ", loss: {0:.4f}".format(train_loss), ", Time: {0:.1f}".format(time_taken))
    saver.save(sess, save_path=save_dir)
    print("Model Saved")
    if (i+1) % val_period == 0:
        start_time = time.time()
        val_loss, val_pred, val_true, val_pred_pos, val_true_pos = model(x_id_val, y_val, y_pos_val, mask_val, sense_mask_val, train_cond=False)
        f1_, accu_, f1_pos_, accu_pos_ = eval_score(val_true, val_pred, val_true_pos, val_pred_pos)
        time_taken = time.time() - start_time
        val_collection.append([f1_, accu_, f1_pos_, accu_pos_])
        print("Val: F1 Score:{0:.2f}".format(f1_), "Accuracy:{0:.2f}".format(accu_), " POS: F1 Score:{0:.2f}".format(f1_pos_), "Accuracy:{0:.2f}".format(accu_pos_), "Loss:{0:.4f}".format(val_loss), ", Time: {0:.1f}".format(time_taken))
num_epochs = 10
val_period = 2
loss_collection = []
val_collection = []
pre_train_cond = False
for i in range(num_epochs):
random = np.random.choice(len(y_train), size=(len(y_train)), replace=False)
x_id_train = x_id_train[random]
y_train = y_train[random]
mask_train = mask_train[random]
sense_mask_train = sense_mask_train[random]
y_pos_train = y_pos_train[random]
start_time = time.time()
train_loss, step = model(x_id_train, y_train, y_pos_train, mask_train, sense_mask_train, pretrain_cond=pre_train_cond)
time_taken = time.time() - start_time
loss_collection.append(train_loss)
print("Epoch: {}".format(i+1),", Step: {}".format(step), ", loss: {0:.4f}".format(train_loss), ", Time: {0:.1f}".format(time_taken))
saver.save(sess, save_path=save_dir)
print("Model Saved")
if((i+1)%val_period==0):
start_time = time.time()
val_loss, val_pred, val_true, val_pred_pos, val_true_pos = model(x_id_val, y_val, y_pos_val, mask_val, sense_mask_val, train_cond=False)
f1_, accu_, f1_pos_, accu_pos_ = eval_score(val_true, val_pred, val_true_pos, val_pred_pos)
time_taken = time.time() - start_time
val_collection.append([f1_, accu_, f1_pos_, accu_pos_])
print("Val: F1 Score:{0:.2f}".format(f1_), "Accuracy:{0:.2f}".format(accu_), " POS: F1 Score:{0:.2f}".format(f1_pos_), "Accuracy:{0:.2f}".format(accu_pos_), "Loss:{0:.4f}".format(val_loss), ", Time: {0:.1f}".format(time_taken))
num_epochs = 10
val_period = 2
loss_collection = []
val_collection = []
pre_train_cond = False
for i in range(num_epochs):
random = np.random.choice(len(y_train), size=(len(y_train)), replace=False)
x_id_train = x_id_train[random]
y_train = y_train[random]
mask_train = mask_train[random]
sense_mask_train = sense_mask_train[random]
y_pos_train = y_pos_train[random]
start_time = time.time()
train_loss, step = model(x_id_train, y_train, y_pos_train, mask_train, sense_mask_train, pretrain_cond=pre_train_cond)
time_taken = time.time() - start_time
loss_collection.append(train_loss)
print("Epoch: {}".format(i+1),", Step: {}".format(step), ", loss: {0:.4f}".format(train_loss), ", Time: {0:.1f}".format(time_taken))
saver.save(sess, save_path=save_dir)
print("Model Saved")
if((i+1)%val_period==0):
start_time = time.time()
val_loss, val_pred, val_true, val_pred_pos, val_true_pos = model(x_id_val, y_val, y_pos_val, mask_val, sense_mask_val, train_cond=False)
f1_, accu_, f1_pos_, accu_pos_ = eval_score(val_true, val_pred, val_true_pos, val_pred_pos)
time_taken = time.time() - start_time
val_collection.append([f1_, accu_, f1_pos_, accu_pos_])
print("Val: F1 Score:{0:.2f}".format(f1_), "Accuracy:{0:.2f}".format(accu_), " POS: F1 Score:{0:.2f}".format(f1_pos_), "Accuracy:{0:.2f}".format(accu_pos_), "Loss:{0:.4f}".format(val_loss), ", Time: {0:.1f}".format(time_taken))
num_epochs = 10
val_period = 2
loss_collection = []
val_collection = []
pre_train_cond = False
for i in range(num_epochs):
random = np.random.choice(len(y_train), size=(len(y_train)), replace=False)
x_id_train = x_id_train[random]
y_train = y_train[random]
mask_train = mask_train[random]
sense_mask_train = sense_mask_train[random]
y_pos_train = y_pos_train[random]
start_time = time.time()
train_loss, step = model(x_id_train, y_train, y_pos_train, mask_train, sense_mask_train, pretrain_cond=pre_train_cond)
time_taken = time.time() - start_time
loss_collection.append(train_loss)
print("Epoch: {}".format(i+1),", Step: {}".format(step), ", loss: {0:.4f}".format(train_loss), ", Time: {0:.1f}".format(time_taken))
saver.save(sess, save_path=save_dir)
print("Model Saved")
if((i+1)%val_period==0):
start_time = time.time()
val_loss, val_pred, val_true, val_pred_pos, val_true_pos = model(x_id_val, y_val, y_pos_val, mask_val, sense_mask_val, train_cond=False)
f1_, accu_, f1_pos_, accu_pos_ = eval_score(val_true, val_pred, val_true_pos, val_pred_pos)
time_taken = time.time() - start_time
val_collection.append([f1_, accu_, f1_pos_, accu_pos_])
print("Val: F1 Score:{0:.2f}".format(f1_), "Accuracy:{0:.2f}".format(accu_), " POS: F1 Score:{0:.2f}".format(f1_pos_), "Accuracy:{0:.2f}".format(accu_pos_), "Loss:{0:.4f}".format(val_loss), ", Time: {0:.1f}".format(time_taken))
for v in val_collection:
    print("Val: F1 Score:{0:.2f}".format(v[0]), "Accuracy:{0:.2f}".format(v[1]), " POS: F1 Score:{0:.2f}".format(v[2]), "Accuracy:{0:.2f}".format(v[3]))

loss_collection
num_epochs = 10
val_period = 2
loss_collection = []
val_collection = []
pre_train_cond = False

for i in range(num_epochs):
    perm = np.random.choice(len(y_train), size=len(y_train), replace=False)
    x_id_train = x_id_train[perm]
    y_train = y_train[perm]
    mask_train = mask_train[perm]
    sense_mask_train = sense_mask_train[perm]
    y_pos_train = y_pos_train[perm]

    start_time = time.time()
    train_loss, step = model(x_id_train, y_train, y_pos_train, mask_train, sense_mask_train, pretrain_cond=pre_train_cond)
    time_taken = time.time() - start_time
    loss_collection.append([step, train_loss])  # log the global step together with the loss
    print("Epoch: {}".format(i+1), ", Step: {}".format(step), ", loss: {0:.4f}".format(train_loss), ", Time: {0:.1f}".format(time_taken))

    saver.save(sess, save_path=save_dir)
    print("Model Saved")

    if (i+1) % val_period == 0:
        start_time = time.time()
        val_loss, val_pred, val_true, val_pred_pos, val_true_pos = model(x_id_val, y_val, y_pos_val, mask_val, sense_mask_val, train_cond=False)
        f1_, accu_, f1_pos_, accu_pos_ = eval_score(val_true, val_pred, val_true_pos, val_pred_pos)
        time_taken = time.time() - start_time
        val_collection.append([f1_, accu_, f1_pos_, accu_pos_])
        print("Val: F1 Score:{0:.2f}".format(f1_), "Accuracy:{0:.2f}".format(accu_), " POS: F1 Score:{0:.2f}".format(f1_pos_), "Accuracy:{0:.2f}".format(accu_pos_), "Loss:{0:.4f}".format(val_loss), ", Time: {0:.1f}".format(time_taken))
# Evaluate on the full training set
start_time = time.time()
train_loss, train_pred, train_true, train_pred_pos, train_true_pos = model(x_id_train, y_train, y_pos_train, mask_train, sense_mask_train, train_cond=False)
f1_, accu_, f1_pos_, accu_pos_ = eval_score(train_true, train_pred, train_true_pos, train_pred_pos)
time_taken = time.time() - start_time
print("Train: F1 Score:{0:.2f}".format(f1_), "Accuracy:{0:.2f}".format(accu_), " POS: F1 Score:{0:.2f}".format(f1_pos_), "Accuracy:{0:.2f}".format(accu_pos_), "Loss:{0:.4f}".format(train_loss), ", Time: {0:.1f}".format(time_taken))
saver.restore(sess, save_dir)
```
# Logistic Regression with a Neural Network mindset
Welcome to your first (required) programming assignment! You will build a logistic regression classifier to recognize cats. This assignment will step you through how to do this with a Neural Network mindset, and so will also hone your intuitions about deep learning.
**Instructions:**
- Do not use loops (for/while) in your code, unless the instructions explicitly ask you to do so.
**You will learn to:**
- Build the general architecture of a learning algorithm, including:
- Initializing parameters
- Calculating the cost function and its gradient
- Using an optimization algorithm (gradient descent)
- Gather all three functions above into a main model function, in the right order.
## 1 - Packages ##
First, let's run the cell below to import all the packages that you will need during this assignment.
- [numpy](https://www.numpy.org) is the fundamental package for scientific computing with Python.
- [h5py](http://www.h5py.org) is a common package to interact with a dataset that is stored on an H5 file.
- [matplotlib](http://matplotlib.org) is a famous library to plot graphs in Python.
- [PIL](http://www.pythonware.com/products/pil/) and [scipy](https://www.scipy.org/) are used here to test your model with your own picture at the end.
```
import numpy as np
import matplotlib.pyplot as plt
import h5py
import scipy
from PIL import Image
from scipy import ndimage
from lr_utils import load_dataset
%matplotlib inline
```
## 2 - Overview of the Problem set ##
**Problem Statement**: You are given a dataset ("data.h5") containing:
- a training set of m_train images labeled as cat (y=1) or non-cat (y=0)
- a test set of m_test images labeled as cat or non-cat
- each image is of shape (num_px, num_px, 3) where 3 is for the 3 channels (RGB). Thus, each image is square (height = num_px) and (width = num_px).
You will build a simple image-recognition algorithm that can correctly classify pictures as cat or non-cat.
Let's get more familiar with the dataset. Load the data by running the following code.
```
# Loading the data (cat/non-cat)
train_set_x_orig, train_set_y, test_set_x_orig, test_set_y, classes = load_dataset()
```
We added "_orig" at the end of image datasets (train and test) because we are going to preprocess them. After preprocessing, we will end up with train_set_x and test_set_x (the labels train_set_y and test_set_y don't need any preprocessing).
Each line of your train_set_x_orig and test_set_x_orig is an array representing an image. You can visualize an example by running the following code. Feel free also to change the `index` value and re-run to see other images.
```
# Example of a picture
index = 25
plt.imshow(train_set_x_orig[index])
print ("y = " + str(train_set_y[:, index]) + ", it's a '" + classes[np.squeeze(train_set_y[:, index])].decode("utf-8") + "' picture.")
```
Many software bugs in deep learning come from having matrix/vector dimensions that don't fit. If you can keep your matrix/vector dimensions straight you will go a long way toward eliminating many bugs.
**Exercise:** Find the values for:
- m_train (number of training examples)
- m_test (number of test examples)
- num_px (= height = width of a training image)
Remember that `train_set_x_orig` is a numpy-array of shape (m_train, num_px, num_px, 3). For instance, you can access `m_train` by writing `train_set_x_orig.shape[0]`.
```
### START CODE HERE ### (≈ 3 lines of code)
m_train = train_set_x_orig.shape[0]
m_test = test_set_x_orig.shape[0]
num_px = train_set_x_orig.shape[1]
### END CODE HERE ###
print ("Number of training examples: m_train = " + str(m_train))
print ("Number of testing examples: m_test = " + str(m_test))
print ("Height/Width of each image: num_px = " + str(num_px))
print ("Each image is of size: (" + str(num_px) + ", " + str(num_px) + ", 3)")
print ("train_set_x shape: " + str(train_set_x_orig.shape))
print ("train_set_y shape: " + str(train_set_y.shape))
print ("test_set_x shape: " + str(test_set_x_orig.shape))
print ("test_set_y shape: " + str(test_set_y.shape))
```
**Expected Output for m_train, m_test and num_px**:
<table style="width:15%">
<tr>
<td>**m_train**</td>
<td> 209 </td>
</tr>
<tr>
<td>**m_test**</td>
<td> 50 </td>
</tr>
<tr>
<td>**num_px**</td>
<td> 64 </td>
</tr>
</table>
For convenience, you should now reshape images of shape (num_px, num_px, 3) into a numpy array of shape (num_px $*$ num_px $*$ 3, 1). After this, our training (and test) dataset is a numpy array where each column represents a flattened image. There should be m_train (respectively m_test) columns.
**Exercise:** Reshape the training and test data sets so that images of size (num_px, num_px, 3) are flattened into single vectors of shape (num\_px $*$ num\_px $*$ 3, 1).
A trick when you want to flatten a matrix X of shape (a,b,c,d) to a matrix X_flatten of shape (b$*$c$*$d, a) is to use:
```python
X_flatten = X.reshape(X.shape[0], -1).T # X.T is the transpose of X
```
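As a quick sanity check of this trick, here is a minimal sketch on a hypothetical 4-D array of my own (not the actual dataset):

```python
import numpy as np

# Hypothetical batch of 4 "images", each 2x2 with 3 channels: shape (a, b, c, d) = (4, 2, 2, 3)
X = np.arange(4 * 2 * 2 * 3).reshape(4, 2, 2, 3)

# Flatten each image into one column: resulting shape is (b*c*d, a) = (12, 4)
X_flatten = X.reshape(X.shape[0], -1).T
print(X_flatten.shape)  # (12, 4)

# Column i of X_flatten is image i flattened in row-major (C) order
print(np.array_equal(X_flatten[:, 0], X[0].ravel()))  # True
```

The `-1` lets numpy infer the second dimension, so the same line works for any image size.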
```
# Reshape the training and test examples
### START CODE HERE ### (≈ 2 lines of code)
train_set_x_flatten = train_set_x_orig.reshape(train_set_x_orig.shape[0], -1).T
test_set_x_flatten = test_set_x_orig.reshape(test_set_x_orig.shape[0], -1).T
### END CODE HERE ###
print ("train_set_x_flatten shape: " + str(train_set_x_flatten.shape))
print ("train_set_y shape: " + str(train_set_y.shape))
print ("test_set_x_flatten shape: " + str(test_set_x_flatten.shape))
print ("test_set_y shape: " + str(test_set_y.shape))
print ("sanity check after reshaping: " + str(train_set_x_flatten[0:5,0]))
```
**Expected Output**:
<table style="width:35%">
<tr>
<td>**train_set_x_flatten shape**</td>
<td> (12288, 209)</td>
</tr>
<tr>
<td>**train_set_y shape**</td>
<td>(1, 209)</td>
</tr>
<tr>
<td>**test_set_x_flatten shape**</td>
<td>(12288, 50)</td>
</tr>
<tr>
<td>**test_set_y shape**</td>
<td>(1, 50)</td>
</tr>
<tr>
<td>**sanity check after reshaping**</td>
<td>[17 31 56 22 33]</td>
</tr>
</table>
To represent color images, the red, green and blue channels (RGB) must be specified for each pixel, and so the pixel value is actually a vector of three numbers ranging from 0 to 255.
One common preprocessing step in machine learning is to center and standardize your dataset, meaning that you subtract the mean of the whole numpy array from each example, and then divide each example by the standard deviation of the whole numpy array. But for picture datasets, it is simpler and more convenient and works almost as well to just divide every row of the dataset by 255 (the maximum value of a pixel channel).
<!-- During the training of your model, you're going to multiply weights and add biases to some initial inputs in order to observe neuron activations. Then you backpropogate with the gradients to train the model. But, it is extremely important for each feature to have a similar range such that our gradients don't explode. You will see that more in detail later in the lectures. !-->
Let's standardize our dataset.
```
train_set_x = train_set_x_flatten/255.
test_set_x = test_set_x_flatten/255.
```
<font color='blue'>
**What you need to remember:**
Common steps for pre-processing a new dataset are:
- Figure out the dimensions and shapes of the problem (m_train, m_test, num_px, ...)
- Reshape the datasets such that each example is now a vector of size (num_px \* num_px \* 3, 1)
- "Standardize" the data
## 3 - General Architecture of the learning algorithm ##
It's time to design a simple algorithm to distinguish cat images from non-cat images.
You will build a Logistic Regression, using a Neural Network mindset. The following Figure explains why **Logistic Regression is actually a very simple Neural Network!**
<img src="images/LogReg_kiank.png" style="width:650px;height:400px;">
**Mathematical expression of the algorithm**:
For one example $x^{(i)}$:
$$z^{(i)} = w^T x^{(i)} + b \tag{1}$$
$$\hat{y}^{(i)} = a^{(i)} = sigmoid(z^{(i)})\tag{2}$$
$$ \mathcal{L}(a^{(i)}, y^{(i)}) = - y^{(i)} \log(a^{(i)}) - (1-y^{(i)} ) \log(1-a^{(i)})\tag{3}$$
The cost is then computed by summing over all training examples:
$$ J = \frac{1}{m} \sum_{i=1}^m \mathcal{L}(a^{(i)}, y^{(i)})\tag{6}$$
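For intuition, the per-example loss and the averaged cost can be computed directly; a small sketch with toy activations and labels of my own choosing (not assignment data):

```python
import numpy as np

a = np.array([0.9, 0.2, 0.7])  # toy predicted probabilities a^(i)
y = np.array([1, 0, 1])        # toy true labels y^(i)

# Per-example cross-entropy loss L(a^(i), y^(i))
losses = -(y * np.log(a) + (1 - y) * np.log(1 - a))

# Cost J = average of the losses over the m examples
J = losses.mean()
print(np.round(J, 4))  # 0.2284
```

Note how a confident wrong prediction (e.g. a close to 0 when y = 1) would blow the loss up, which is exactly what drives learning.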
**Key steps**:
In this exercise, you will carry out the following steps:
- Initialize the parameters of the model
- Learn the parameters for the model by minimizing the cost
- Use the learned parameters to make predictions (on the test set)
- Analyse the results and conclude
## 4 - Building the parts of our algorithm ##
The main steps for building a Neural Network are:
1. Define the model structure (such as number of input features)
2. Initialize the model's parameters
3. Loop:
- Calculate current loss (forward propagation)
- Calculate current gradient (backward propagation)
- Update parameters (gradient descent)
You often build 1-3 separately and integrate them into one function we call `model()`.
### 4.1 - Helper functions
**Exercise**: Using your code from "Python Basics", implement `sigmoid()`. As you've seen in the figure above, you need to compute $sigmoid( w^T x + b) = \frac{1}{1 + e^{-(w^T x + b)}}$ to make predictions. Use np.exp().
```
# GRADED FUNCTION: sigmoid
def sigmoid(z):
"""
Compute the sigmoid of z
Arguments:
z -- A scalar or numpy array of any size.
Return:
s -- sigmoid(z)
"""
### START CODE HERE ### (≈ 1 line of code)
s = 1 / (1 + np.exp(-z))
### END CODE HERE ###
return s
print ("sigmoid([0, 2]) = " + str(sigmoid(np.array([0,2]))))
```
**Expected Output**:
<table>
<tr>
<td>**sigmoid([0, 2])**</td>
<td> [ 0.5 0.88079708]</td>
</tr>
</table>
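A side note, not required by the grader: for very negative z, `np.exp(-z)` can overflow and emit a warning. A common numerically stable variant, sketched here with a helper name of my own (`stable_sigmoid`), routes negative inputs through `np.exp(z)` instead:

```python
import numpy as np

def stable_sigmoid(z):
    """Sigmoid that avoids overflow in np.exp for large |z| (hypothetical helper)."""
    z = np.asarray(z, dtype=float)
    out = np.empty_like(z)
    pos = z >= 0
    # For z >= 0, exp(-z) <= 1, so 1 / (1 + exp(-z)) is safe
    out[pos] = 1.0 / (1.0 + np.exp(-z[pos]))
    # For z < 0, rewrite as exp(z) / (1 + exp(z)), where exp(z) <= 1
    ez = np.exp(z[~pos])
    out[~pos] = ez / (1.0 + ez)
    return out

print(stable_sigmoid(np.array([0.0, 2.0])))        # matches sigmoid([0, 2]) above
print(stable_sigmoid(np.array([-1000.0, 1000.0]))) # [0. 1.], no overflow warning
```

Both branches compute the same mathematical function; only the floating-point path differs.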
### 4.2 - Initializing parameters
**Exercise:** Implement parameter initialization in the cell below. You have to initialize w as a vector of zeros. If you don't know what numpy function to use, look up np.zeros() in the Numpy library's documentation.
```
# GRADED FUNCTION: initialize_with_zeros
def initialize_with_zeros(dim):
"""
This function creates a vector of zeros of shape (dim, 1) for w and initializes b to 0.
Argument:
dim -- size of the w vector we want (or number of parameters in this case)
Returns:
w -- initialized vector of shape (dim, 1)
b -- initialized scalar (corresponds to the bias)
"""
### START CODE HERE ### (≈ 1 line of code)
w = np.zeros((dim,1))
b = 0
### END CODE HERE ###
assert(w.shape == (dim, 1))
assert(isinstance(b, float) or isinstance(b, int))
return w, b
dim = 2
w, b = initialize_with_zeros(dim)
print ("w = " + str(w))
print ("b = " + str(b))
```
**Expected Output**:
<table style="width:15%">
<tr>
<td> ** w ** </td>
<td> [[ 0.]
[ 0.]] </td>
</tr>
<tr>
<td> ** b ** </td>
<td> 0 </td>
</tr>
</table>
For image inputs, w will be of shape (num_px $\times$ num_px $\times$ 3, 1).
### 4.3 - Forward and Backward propagation
Now that your parameters are initialized, you can do the "forward" and "backward" propagation steps for learning the parameters.
**Exercise:** Implement a function `propagate()` that computes the cost function and its gradient.
**Hints**:
Forward Propagation:
- You get X
- You compute $A = \sigma(w^T X + b) = (a^{(1)}, a^{(2)}, ..., a^{(m-1)}, a^{(m)})$
- You calculate the cost function: $J = -\frac{1}{m}\sum_{i=1}^{m}\left(y^{(i)}\log(a^{(i)})+(1-y^{(i)})\log(1-a^{(i)})\right)$
Here are the two formulas you will be using:
$$ \frac{\partial J}{\partial w} = \frac{1}{m}X(A-Y)^T\tag{7}$$
$$ \frac{\partial J}{\partial b} = \frac{1}{m} \sum_{i=1}^m (a^{(i)}-y^{(i)})\tag{8}$$
```
# GRADED FUNCTION: propagate
def propagate(w, b, X, Y):
"""
Implement the cost function and its gradient for the propagation explained above
Arguments:
w -- weights, a numpy array of size (num_px * num_px * 3, 1)
b -- bias, a scalar
X -- data of size (num_px * num_px * 3, number of examples)
Y -- true "label" vector (containing 0 if non-cat, 1 if cat) of size (1, number of examples)
Return:
cost -- negative log-likelihood cost for logistic regression
dw -- gradient of the loss with respect to w, thus same shape as w
db -- gradient of the loss with respect to b, thus same shape as b
Tips:
- Write your code step by step for the propagation. np.log(), np.dot()
"""
m = X.shape[1]
# FORWARD PROPAGATION (FROM X TO COST)
### START CODE HERE ### (≈ 2 lines of code)
A = sigmoid(np.dot(w.T, X) + b) # compute activation
cost = -1/m * np.sum(np.multiply(np.log(A), Y) + np.multiply(1 - Y, np.log(1 - A))) # compute cost
### END CODE HERE ###
# BACKWARD PROPAGATION (TO FIND GRAD)
### START CODE HERE ### (≈ 2 lines of code)
dw = 1/m * np.dot(X, (A - Y).T)
db = 1/m * np.sum(A - Y)
### END CODE HERE ###
assert(dw.shape == w.shape)
assert(db.dtype == float)
cost = np.squeeze(cost)
assert(cost.shape == ())
grads = {"dw": dw,
"db": db}
return grads, cost
w, b, X, Y = np.array([[1.],[2.]]), 2., np.array([[1.,2.,-1.],[3.,4.,-3.2]]), np.array([[1,0,1]])
grads, cost = propagate(w, b, X, Y)
print ("dw = " + str(grads["dw"]))
print ("db = " + str(grads["db"]))
print ("cost = " + str(cost))
```
**Expected Output**:
<table style="width:50%">
<tr>
<td> ** dw ** </td>
<td> [[ 0.99845601]
[ 2.39507239]]</td>
</tr>
<tr>
<td> ** db ** </td>
<td> 0.00145557813678 </td>
</tr>
<tr>
<td> ** cost ** </td>
<td> 5.801545319394553 </td>
</tr>
</table>
### 4.4 - Optimization
- You have initialized your parameters.
- You are also able to compute a cost function and its gradient.
- Now, you want to update the parameters using gradient descent.
**Exercise:** Write down the optimization function. The goal is to learn $w$ and $b$ by minimizing the cost function $J$. For a parameter $\theta$, the update rule is $ \theta = \theta - \alpha \text{ } d\theta$, where $\alpha$ is the learning rate.
```
# GRADED FUNCTION: optimize
def optimize(w, b, X, Y, num_iterations, learning_rate, print_cost = False):
"""
This function optimizes w and b by running a gradient descent algorithm
Arguments:
w -- weights, a numpy array of size (num_px * num_px * 3, 1)
b -- bias, a scalar
X -- data of shape (num_px * num_px * 3, number of examples)
Y -- true "label" vector (containing 0 if non-cat, 1 if cat), of shape (1, number of examples)
num_iterations -- number of iterations of the optimization loop
learning_rate -- learning rate of the gradient descent update rule
print_cost -- True to print the loss every 100 steps
Returns:
params -- dictionary containing the weights w and bias b
grads -- dictionary containing the gradients of the weights and bias with respect to the cost function
costs -- list of all the costs computed during the optimization, this will be used to plot the learning curve.
Tips:
You basically need to write down two steps and iterate through them:
1) Calculate the cost and the gradient for the current parameters. Use propagate().
2) Update the parameters using gradient descent rule for w and b.
"""
costs = []
for i in range(num_iterations):
# Cost and gradient calculation (≈ 1-4 lines of code)
### START CODE HERE ###
grads, cost = propagate(w, b, X, Y)
### END CODE HERE ###
# Retrieve derivatives from grads
dw = grads["dw"]
db = grads["db"]
# update rule (≈ 2 lines of code)
### START CODE HERE ###
w = w - learning_rate * dw
b = b - learning_rate * db
### END CODE HERE ###
# Record the costs
if i % 100 == 0:
costs.append(cost)
# Print the cost every 100 training iterations
if print_cost and i % 100 == 0:
print ("Cost after iteration %i: %f" %(i, cost))
params = {"w": w,
"b": b}
grads = {"dw": dw,
"db": db}
return params, grads, costs
params, grads, costs = optimize(w, b, X, Y, num_iterations= 100, learning_rate = 0.009, print_cost = False)
print ("w = " + str(params["w"]))
print ("b = " + str(params["b"]))
print ("dw = " + str(grads["dw"]))
print ("db = " + str(grads["db"]))
```
**Expected Output**:
<table style="width:40%">
<tr>
<td> **w** </td>
<td>[[ 0.19033591]
[ 0.12259159]] </td>
</tr>
<tr>
<td> **b** </td>
<td> 1.92535983008 </td>
</tr>
<tr>
<td> **dw** </td>
<td> [[ 0.67752042]
[ 1.41625495]] </td>
</tr>
<tr>
<td> **db** </td>
<td> 0.219194504541 </td>
</tr>
</table>
**Exercise:** The previous function will output the learned w and b. We are able to use w and b to predict the labels for a dataset X. Implement the `predict()` function. There are two steps to computing predictions:
1. Calculate $\hat{Y} = A = \sigma(w^T X + b)$
2. Convert the entries of A into 0 (if activation <= 0.5) or 1 (if activation > 0.5), storing the predictions in a vector `Y_prediction`. If you wish, you can use an `if`/`else` statement in a `for` loop (though there is also a way to vectorize this).
```
# GRADED FUNCTION: predict
def predict(w, b, X):
'''
Predict whether the label is 0 or 1 using learned logistic regression parameters (w, b)
Arguments:
w -- weights, a numpy array of size (num_px * num_px * 3, 1)
b -- bias, a scalar
X -- data of size (num_px * num_px * 3, number of examples)
Returns:
Y_prediction -- a numpy array (vector) containing all predictions (0/1) for the examples in X
'''
m = X.shape[1]
Y_prediction = np.zeros((1,m))
w = w.reshape(X.shape[0], 1)
# Compute vector "A" predicting the probabilities of a cat being present in the picture
### START CODE HERE ### (≈ 1 line of code)
A = sigmoid(np.dot(w.T, X) + b)
### END CODE HERE ###
for i in range(A.shape[1]):
# Convert probabilities A[0,i] to actual predictions p[0,i]
### START CODE HERE ### (≈ 4 lines of code)
if A[0,i] > 0.5:
Y_prediction[0][i] = 1
else:
Y_prediction[0][i] = 0
### END CODE HERE ###
assert(Y_prediction.shape == (1, m))
return Y_prediction
w = np.array([[0.1124579],[0.23106775]])
b = -0.3
X = np.array([[1.,-1.1,-3.2],[1.2,2.,0.1]])
print ("predictions = " + str(predict(w, b, X)))
```
**Expected Output**:
<table style="width:30%">
<tr>
<td>
**predictions**
</td>
<td>
[[ 1. 1. 0.]]
</td>
</tr>
</table>
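The loop over `A.shape[1]` can also be vectorized, as the hint suggests. A sketch using the same toy `w`, `b`, `X` as above:

```python
import numpy as np

def sigmoid(z):
    return 1 / (1 + np.exp(-z))

w = np.array([[0.1124579], [0.23106775]])
b = -0.3
X = np.array([[1., -1.1, -3.2], [1.2, 2., 0.1]])

A = sigmoid(np.dot(w.T, X) + b)
# Threshold every probability at once instead of looping over columns
Y_prediction = (A > 0.5).astype(float)
print(Y_prediction)  # [[1. 1. 0.]]
```

The boolean comparison broadcasts over the whole row vector, so this gives the same result as the `for` loop version for any number of examples.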
<font color='blue'>
**What to remember:**
You've implemented several functions that:
- Initialize (w,b)
- Optimize the loss iteratively to learn parameters (w,b):
- computing the cost and its gradient
- updating the parameters using gradient descent
- Use the learned (w,b) to predict the labels for a given set of examples
## 5 - Merge all functions into a model ##
You will now see how the overall model is structured by putting together all the building blocks (functions implemented in the previous parts) together, in the right order.
**Exercise:** Implement the model function. Use the following notation:
- Y_prediction_test for your predictions on the test set
- Y_prediction_train for your predictions on the train set
- w, costs, grads for the outputs of optimize()
```
# GRADED FUNCTION: model
def model(X_train, Y_train, X_test, Y_test, num_iterations = 2000, learning_rate = 0.5, print_cost = False):
"""
Builds the logistic regression model by calling the function you've implemented previously
Arguments:
X_train -- training set represented by a numpy array of shape (num_px * num_px * 3, m_train)
Y_train -- training labels represented by a numpy array (vector) of shape (1, m_train)
X_test -- test set represented by a numpy array of shape (num_px * num_px * 3, m_test)
Y_test -- test labels represented by a numpy array (vector) of shape (1, m_test)
num_iterations -- hyperparameter representing the number of iterations to optimize the parameters
learning_rate -- hyperparameter representing the learning rate used in the update rule of optimize()
print_cost -- Set to true to print the cost every 100 iterations
Returns:
d -- dictionary containing information about the model.
"""
### START CODE HERE ###
# initialize parameters with zeros (≈ 1 line of code)
w, b = initialize_with_zeros(X_train.shape[0])
# Gradient descent (≈ 1 line of code)
parameters, grads, costs = optimize(w, b, X_train, Y_train, num_iterations, learning_rate, print_cost)
# Retrieve parameters w and b from dictionary "parameters"
w = parameters["w"]
b = parameters["b"]
# Predict test/train set examples (≈ 2 lines of code)
Y_prediction_test = predict(w, b, X_test)
Y_prediction_train = predict(w, b, X_train)
### END CODE HERE ###
# Print train/test Errors
print("train accuracy: {} %".format(100 - np.mean(np.abs(Y_prediction_train - Y_train)) * 100))
print("test accuracy: {} %".format(100 - np.mean(np.abs(Y_prediction_test - Y_test)) * 100))
d = {"costs": costs,
"Y_prediction_test": Y_prediction_test,
"Y_prediction_train" : Y_prediction_train,
"w" : w,
"b" : b,
"learning_rate" : learning_rate,
"num_iterations": num_iterations}
return d
```
Run the following cell to train your model.
```
d = model(train_set_x, train_set_y, test_set_x, test_set_y, num_iterations = 2000, learning_rate = 0.005, print_cost = True)
```
**Expected Output**:
<table style="width:40%">
<tr>
<td> **Cost after iteration 0 ** </td>
<td> 0.693147 </td>
</tr>
<tr>
<td> <center> $\vdots$ </center> </td>
<td> <center> $\vdots$ </center> </td>
</tr>
<tr>
<td> **Train Accuracy** </td>
<td> 99.04306220095694 % </td>
</tr>
<tr>
<td>**Test Accuracy** </td>
<td> 70.0 % </td>
</tr>
</table>
**Comment**: Training accuracy is close to 100%. This is a good sanity check: your model is working and has high enough capacity to fit the training data. Test accuracy is 70%. It is actually not bad for this simple model, given the small dataset we used and that logistic regression is a linear classifier. But no worries, you'll build an even better classifier next week!
Also, you see that the model is clearly overfitting the training data. Later in this specialization you will learn how to reduce overfitting, for example by using regularization. Using the code below (and changing the `index` variable) you can look at predictions on pictures of the test set.
```
# Example of a picture that was wrongly classified.
index = 1
plt.imshow(test_set_x[:,index].reshape((num_px, num_px, 3)))
print ("y = " + str(test_set_y[0,index]) + ", you predicted that it is a \"" + classes[int(d["Y_prediction_test"][0,index])].decode("utf-8") + "\" picture.")
```
Let's also plot the cost function and the gradients.
```
# Plot learning curve (with costs)
costs = np.squeeze(d['costs'])
plt.plot(costs)
plt.ylabel('cost')
plt.xlabel('iterations (per hundreds)')
plt.title("Learning rate =" + str(d["learning_rate"]))
plt.show()
```
**Interpretation**:
You can see the cost decreasing. It shows that the parameters are being learned. However, you see that you could train the model even more on the training set. Try to increase the number of iterations in the cell above and rerun the cells. You might see that the training set accuracy goes up, but the test set accuracy goes down. This is called overfitting.
## 6 - Further analysis (optional/ungraded exercise) ##
Congratulations on building your first image classification model. Let's analyze it further, and examine possible choices for the learning rate $\alpha$.
#### Choice of learning rate ####
**Reminder**:
In order for Gradient Descent to work you must choose the learning rate wisely. The learning rate $\alpha$ determines how rapidly we update the parameters. If the learning rate is too large we may "overshoot" the optimal value. Similarly, if it is too small we will need too many iterations to converge to the best values. That's why it is crucial to use a well-tuned learning rate.
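The overshoot effect is easy to see on a toy 1-D problem of my own, $J(\theta) = \theta^2$ (minimized at $\theta = 0$, gradient $2\theta$):

```python
# Gradient descent on J(theta) = theta**2, whose gradient is 2 * theta
def run(theta, alpha, steps=10):
    for _ in range(steps):
        theta = theta - alpha * (2 * theta)
    return theta

print(run(1.0, alpha=0.1))  # small rate: converges smoothly toward 0
print(run(1.0, alpha=0.9))  # rate near 1: theta flips sign each step but still shrinks
print(run(1.0, alpha=1.5))  # too large: each step overshoots further, |theta| diverges
```

Each update multiplies theta by $(1 - 2\alpha)$, so convergence requires $|1 - 2\alpha| < 1$ here; real cost surfaces are messier, but the same intuition applies.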
Let's compare the learning curve of our model with several choices of learning rates. Run the cell below. This should take about 1 minute. Feel free to try values other than the three the `learning_rates` variable is initialized to, and see what happens.
```
learning_rates = [0.01, 0.001, 0.0001]
models = {}
for i in learning_rates:
print ("learning rate is: " + str(i))
models[str(i)] = model(train_set_x, train_set_y, test_set_x, test_set_y, num_iterations = 1500, learning_rate = i, print_cost = False)
print ('\n' + "-------------------------------------------------------" + '\n')
for i in learning_rates:
plt.plot(np.squeeze(models[str(i)]["costs"]), label= str(models[str(i)]["learning_rate"]))
plt.ylabel('cost')
plt.xlabel('iterations (hundreds)')
legend = plt.legend(loc='upper center', shadow=True)
frame = legend.get_frame()
frame.set_facecolor('0.90')
plt.show()
```
**Interpretation**:
- Different learning rates give different costs and thus different predictions results.
- If the learning rate is too large (0.01), the cost may oscillate up and down. It may even diverge (though in this example, using 0.01 still eventually ends up at a good value for the cost).
- A lower cost doesn't mean a better model. You have to check if there is possibly overfitting. It happens when the training accuracy is a lot higher than the test accuracy.
- In deep learning, we usually recommend that you:
    - Choose the learning rate that best minimizes the cost function.
- If your model overfits, use other techniques to reduce overfitting. (We'll talk about this in later videos.)
## 7 - Test with your own image (optional/ungraded exercise) ##
Congratulations on finishing this assignment. You can use your own image and see the output of your model. To do that:
1. Click on "File" in the upper bar of this notebook, then click "Open" to go on your Coursera Hub.
2. Add your image to this Jupyter Notebook's directory, in the "images" folder
3. Change your image's name in the following code
4. Run the code and check if the algorithm is right (1 = cat, 0 = non-cat)!
```
## START CODE HERE ## (PUT YOUR IMAGE NAME)
my_image = "my_image.jpg" # change this to the name of your image file
## END CODE HERE ##
# We preprocess the image to fit your algorithm.
fname = "images/" + my_image
# scipy.ndimage.imread and scipy.misc.imresize were removed from SciPy; Pillow does the same job
from PIL import Image
image = np.array(Image.open(fname))
my_image = np.array(Image.fromarray(image).resize((num_px, num_px))).reshape((1, num_px*num_px*3)).T
my_predicted_image = predict(d["w"], d["b"], my_image)
plt.imshow(image)
print("y = " + str(np.squeeze(my_predicted_image)) + ", your algorithm predicts a \"" + classes[int(np.squeeze(my_predicted_image)),].decode("utf-8") + "\" picture.")
```
<font color='blue'>
**What to remember from this assignment:**
1. Preprocessing the dataset is important.
2. You implemented each function separately: initialize(), propagate(), optimize(). Then you built a model().
3. Tuning the learning rate (which is an example of a "hyperparameter") can make a big difference to the algorithm. You will see more examples of this later in this course!
Finally, if you'd like, we invite you to try different things on this Notebook. Make sure you submit before trying anything. Once you submit, things you can play with include:
- Play with the learning rate and the number of iterations
- Try different initialization methods and compare the results
- Test other preprocessings (center the data, or divide each row by its standard deviation)
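For instance, the "center the data, divide each row by its standard deviation" preprocessing could be sketched as follows (a hedged example on random data; `train_x` here is an illustrative stand-in for the flattened image matrix of shape (features, examples)):

```
import numpy as np

rng = np.random.default_rng(1)
train_x = rng.uniform(0, 255, size=(12288, 100))   # (features, examples)

mu = train_x.mean(axis=1, keepdims=True)           # per-row (per-feature) mean
sigma = train_x.std(axis=1, keepdims=True) + 1e-8  # small epsilon avoids division by zero
train_x_std = (train_x - mu) / sigma               # ~zero mean, unit variance per row
```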
Bibliography:
- http://www.wildml.com/2015/09/implementing-a-neural-network-from-scratch/
- https://stats.stackexchange.com/questions/211436/why-do-we-normalize-images-by-subtracting-the-datasets-image-mean-and-not-the-c
| github_jupyter |
```
# from google.colab import drive
# drive.mount('/content/drive')
!pwd
path = '/kaggle/working/run_1'
import torch.nn as nn
import torch.nn.functional as F
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
import torch
import torchvision
import torchvision.transforms as transforms
from torch.utils.data import Dataset, DataLoader
from torchvision import utils  # transforms and pyplot are already imported above
import copy
# Ignore warnings
import warnings
warnings.filterwarnings("ignore")
n_seed = 0
k = 0.005
torch.backends.cudnn.deterministic = True
torch.backends.cudnn.benchmark= False
transform = transforms.Compose(
[transforms.ToTensor(),
transforms.Normalize((0.5, 0.5, 0.5), (0.5, 0.5, 0.5))])
trainset = torchvision.datasets.CIFAR10(root='./data', train=True, download=True, transform=transform)
testset = torchvision.datasets.CIFAR10(root='./data', train=False, download=True, transform=transform)
trainloader = torch.utils.data.DataLoader(trainset, batch_size=10, shuffle=False)
testloader = torch.utils.data.DataLoader(testset, batch_size=10, shuffle=False)
classes = ('plane', 'car', 'bird', 'cat', 'deer', 'dog', 'frog', 'horse', 'ship', 'truck')
foreground_classes = {'plane', 'car', 'bird'}
#foreground_classes = {'bird', 'cat', 'deer'}
background_classes = {'cat', 'deer', 'dog', 'frog', 'horse', 'ship', 'truck'}
#background_classes = {'plane', 'car', 'dog', 'frog', 'horse','ship', 'truck'}
fg1,fg2,fg3 = 0,1,2
dataiter = iter(trainloader)
background_data=[]
background_label=[]
foreground_data=[]
foreground_label=[]
batch_size=10
for i in range(5000):
images, labels = next(dataiter)  # dataiter.next() was removed in recent PyTorch
for j in range(batch_size):
if(classes[labels[j]] in background_classes):
img = images[j].tolist()
background_data.append(img)
background_label.append(labels[j])
else:
img = images[j].tolist()
foreground_data.append(img)
foreground_label.append(labels[j])
foreground_data = torch.tensor(foreground_data)
foreground_label = torch.tensor(foreground_label)
background_data = torch.tensor(background_data)
background_label = torch.tensor(background_label)
def create_mosaic_img(bg_idx,fg_idx,fg):
"""
bg_idx : list of indexes of background_data[] to be used as background images in mosaic
fg_idx : index of image to be used as foreground image from foreground data
fg : at what position/index foreground image has to be stored out of 0-8
"""
image_list=[]
j=0
for i in range(9):
if i != fg:
image_list.append(background_data[bg_idx[j]])#.type("torch.DoubleTensor"))
j+=1
else:
image_list.append(foreground_data[fg_idx])#.type("torch.DoubleTensor"))
label = foreground_label[fg_idx] - fg1  # minus fg1 because our foreground classes are fg1, fg2, fg3 but we store them as 0, 1, 2
#image_list = np.concatenate(image_list ,axis=0)
image_list = torch.stack(image_list)
return image_list,label
desired_num = 30000
mosaic_list_of_images =[] # list of mosaic images, each mosaic image is saved as list of 9 images
fore_idx =[] # list of indexes at which foreground image is present in a mosaic image, i.e. from 0 to 8
mosaic_label=[] # label of mosaic image = foreground class present in that mosaic
for i in range(desired_num):
np.random.seed(i)
bg_idx = np.random.randint(0,35000,8)
fg_idx = np.random.randint(0,15000)
fg = np.random.randint(0,9)
fore_idx.append(fg)
image_list,label = create_mosaic_img(bg_idx,fg_idx,fg)
mosaic_list_of_images.append(image_list)
mosaic_label.append(label)
plt.imshow(torch.transpose(mosaic_list_of_images[0][1],dim0= 0,dim1 = 2))
class MosaicDataset(Dataset):
"""MosaicDataset dataset."""
def __init__(self, mosaic_list_of_images, mosaic_label, fore_idx):
"""
Args:
csv_file (string): Path to the csv file with annotations.
root_dir (string): Directory with all the images.
transform (callable, optional): Optional transform to be applied
on a sample.
"""
self.mosaic = mosaic_list_of_images
self.label = mosaic_label
self.fore_idx = fore_idx
def __len__(self):
return len(self.label)
def __getitem__(self, idx):
return self.mosaic[idx] , self.label[idx], self.fore_idx[idx]
batch = 250
msd = MosaicDataset(mosaic_list_of_images, mosaic_label , fore_idx)
train_loader = DataLoader( msd,batch_size= batch ,shuffle=True)
class Focus(nn.Module):
def __init__(self):
super(Focus, self).__init__()
self.conv1 = nn.Conv2d(in_channels=3, out_channels=32, kernel_size=3, padding=0,bias=False)
self.conv2 = nn.Conv2d(in_channels=32, out_channels=64, kernel_size=3, padding=0,bias=False)
self.conv3 = nn.Conv2d(in_channels=64, out_channels=64, kernel_size=3, padding=0,bias=False)
self.conv4 = nn.Conv2d(in_channels=64, out_channels=128, kernel_size=3, padding=0,bias=False)
self.conv5 = nn.Conv2d(in_channels=128, out_channels=256, kernel_size=3, padding=0,bias=False)
self.conv6 = nn.Conv2d(in_channels=256, out_channels=256, kernel_size=3, padding=1,bias=False)
self.pool = nn.MaxPool2d(kernel_size=2, stride=2)
self.batch_norm1 = nn.BatchNorm2d(32,track_running_stats=False)
self.batch_norm2 = nn.BatchNorm2d(64,track_running_stats=False)
self.batch_norm3 = nn.BatchNorm2d(256,track_running_stats=False)
self.dropout1 = nn.Dropout2d(p=0.05)
self.dropout2 = nn.Dropout2d(p=0.1)
self.fc1 = nn.Linear(256,64,bias=False)
self.fc2 = nn.Linear(64, 32,bias=False)
self.fc3 = nn.Linear(32, 10,bias=False)
self.fc4 = nn.Linear(10, 1,bias=False)
torch.nn.init.xavier_normal_(self.conv1.weight)
torch.nn.init.xavier_normal_(self.conv2.weight)
torch.nn.init.xavier_normal_(self.conv3.weight)
torch.nn.init.xavier_normal_(self.conv4.weight)
torch.nn.init.xavier_normal_(self.conv5.weight)
torch.nn.init.xavier_normal_(self.conv6.weight)
torch.nn.init.xavier_normal_(self.fc1.weight)
torch.nn.init.xavier_normal_(self.fc2.weight)
torch.nn.init.xavier_normal_(self.fc3.weight)
torch.nn.init.xavier_normal_(self.fc4.weight)
def forward(self,z): #y is avg image #z batch of list of 9 images
y = torch.zeros([batch,3, 32,32], dtype=torch.float64)
x = torch.zeros([batch,9],dtype=torch.float64)
#ftr = torch.zeros([batch,9,3,32,32])
y = y.to("cuda")
x = x.to("cuda")
#ftr = ftr.to("cuda")
for i in range(9):
out = self.helper(z[:,i])
#print(out.shape)
x[:,i] = out[:,0]
#ftr[:,i] = ftrs
log_x = F.log_softmax(x,dim=1)
x = F.softmax(x,dim=1)
for i in range(9):
x1 = x[:,i]
y = y + torch.mul(x1[:,None,None,None],z[:,i])
return x, y,log_x #alpha,avg_data
def helper(self, x):
#x1 = x
#x1 =x
x = self.conv1(x)
x = F.relu(self.batch_norm1(x))
x = (F.relu(self.conv2(x)))
x = self.pool(x)
x = self.conv3(x)
x = F.relu(self.batch_norm2(x))
x = (F.relu(self.conv4(x)))
x = self.pool(x)
x = self.dropout1(x)
x = self.conv5(x)
x = F.relu(self.batch_norm3(x))
x = self.conv6(x)
x = F.relu(x)
x = self.pool(x)
x = x.view(x.size(0), -1)
x = self.dropout2(x)
x = F.relu(self.fc1(x))
x = F.relu(self.fc2(x))
x = self.dropout2(x)
x = F.relu(self.fc3(x))
x = self.fc4(x)
return x
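# ----------------------------------------------------------------------
# Illustrative aside (not part of the original notebook): Focus.forward
# blends the 9 patch images with softmax weights, y = sum_i alpha_i * z_i,
# so the classifier sees a convex combination of the patches. A tiny NumPy
# version of that blend (variable names here are purely illustrative):
import numpy as np

_scores = np.array([2.0, 0.0, -2.0])                       # per-patch logits
_alpha = np.exp(_scores) / np.exp(_scores).sum()           # softmax weights, sum to 1
_patches = np.stack([np.full((2, 2), v) for v in (1.0, 2.0, 3.0)])
_avg_img = np.tensordot(_alpha, _patches, axes=1)          # weighted-average "image"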
torch.manual_seed(n_seed)
focus_net = Focus().double()
focus_net = focus_net.to("cuda")
class Classification(nn.Module):
def __init__(self):
super(Classification, self).__init__()
self.conv1 = nn.Conv2d(in_channels=3, out_channels=128, kernel_size=3, padding=1)
self.conv2 = nn.Conv2d(in_channels=128, out_channels=128, kernel_size=3, padding=1)
self.conv3 = nn.Conv2d(in_channels=128, out_channels=256, kernel_size=3, padding=1)
self.conv4 = nn.Conv2d(in_channels=256, out_channels=256, kernel_size=3, padding=1)
self.conv5 = nn.Conv2d(in_channels=256, out_channels=512, kernel_size=3, padding=1)
self.conv6 = nn.Conv2d(in_channels=512, out_channels=512, kernel_size=3, padding=1)
self.pool = nn.MaxPool2d(kernel_size=2, stride=2,padding=1)
self.batch_norm1 = nn.BatchNorm2d(128,track_running_stats=False)
self.batch_norm2 = nn.BatchNorm2d(256,track_running_stats=False)
self.batch_norm3 = nn.BatchNorm2d(512,track_running_stats=False)
self.dropout1 = nn.Dropout2d(p=0.05)
self.dropout2 = nn.Dropout2d(p=0.1)
self.global_average_pooling = nn.AvgPool2d(kernel_size=4)
self.fc1 = nn.Linear(512,128)
# self.fc2 = nn.Linear(128, 64)
# self.fc3 = nn.Linear(64, 10)
self.fc2 = nn.Linear(128, 3)
torch.nn.init.xavier_normal_(self.conv1.weight)
torch.nn.init.xavier_normal_(self.conv2.weight)
torch.nn.init.xavier_normal_(self.conv3.weight)
torch.nn.init.xavier_normal_(self.conv4.weight)
torch.nn.init.xavier_normal_(self.conv5.weight)
torch.nn.init.xavier_normal_(self.conv6.weight)
torch.nn.init.zeros_(self.conv1.bias)
torch.nn.init.zeros_(self.conv2.bias)
torch.nn.init.zeros_(self.conv3.bias)
torch.nn.init.zeros_(self.conv4.bias)
torch.nn.init.zeros_(self.conv5.bias)
torch.nn.init.zeros_(self.conv6.bias)
torch.nn.init.xavier_normal_(self.fc1.weight)
torch.nn.init.xavier_normal_(self.fc2.weight)
torch.nn.init.zeros_(self.fc1.bias)
torch.nn.init.zeros_(self.fc2.bias)
def forward(self, x):
x = self.conv1(x)
x = F.relu(self.batch_norm1(x))
x = (F.relu(self.conv2(x)))
x = self.pool(x)
x = self.conv3(x)
x = F.relu(self.batch_norm2(x))
x = (F.relu(self.conv4(x)))
x = self.pool(x)
x = self.dropout1(x)
x = self.conv5(x)
x = F.relu(self.batch_norm3(x))
x = (F.relu(self.conv6(x)))
x = self.pool(x)
#print(x.shape)
x = self.global_average_pooling(x)
x = x.squeeze()
#x = x.view(x.size(0), -1)
#print(x.shape)
x = self.dropout2(x)
x = F.relu(self.fc1(x))
#x = F.relu(self.fc2(x))
#x = self.dropout2(x)
#x = F.relu(self.fc3(x))
x = self.fc2(x)
return x
torch.manual_seed(n_seed)
classify = Classification().double()
classify = classify.to("cuda")
test_images =[] #list of mosaic images, each mosaic image is saved as a list of 9 images
fore_idx_test =[] #list of indexes at which foreground image is present in a mosaic image
test_label=[] # label of mosaic image = foreground class present in that mosaic
for i in range(10000):
np.random.seed(i+30000)
bg_idx = np.random.randint(0,35000,8)
fg_idx = np.random.randint(0,15000)
fg = np.random.randint(0,9)
fore_idx_test.append(fg)
image_list,label = create_mosaic_img(bg_idx,fg_idx,fg)
test_images.append(image_list)
test_label.append(label)
test_data = MosaicDataset(test_images,test_label,fore_idx_test)
test_loader = DataLoader( test_data,batch_size= batch ,shuffle=False)
criterion = nn.CrossEntropyLoss()
def my_cross_entropy(x, y,alpha,log_alpha,k):
# log_prob = -1.0 * F.log_softmax(x, 1)
# loss = log_prob.gather(1, y.unsqueeze(1))
# loss = loss.mean()
loss = criterion(x,y)
#alpha = torch.clamp(alpha,min=1e-10)
b = -1.0* alpha * log_alpha
b = torch.mean(torch.sum(b,dim=1))
closs = loss
entropy = b
loss = (1-k)*loss + ((k)*b)
return loss,closs,entropy
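# ----------------------------------------------------------------------
# Illustrative aside (not part of the original notebook): the loss above
# mixes cross-entropy with k * H(alpha), where H is the Shannon entropy of
# the 9 attention weights. Because the entropy term is *added*, minimizing
# the loss pushes alpha toward a near-one-hot (sparse) distribution.
# A quick NumPy sanity check of that intuition:
import numpy as np

def _shannon_entropy(p):
    return -np.sum(p * np.log(p + 1e-12))

_peaked = np.array([0.92] + [0.01] * 8)    # nearly one-hot over 9 patches
_uniform = np.full(9, 1.0 / 9.0)
assert _shannon_entropy(_peaked) < _shannon_entropy(_uniform)  # sparse => lower penalty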
import torch.optim as optim
# criterion_classify = nn.CrossEntropyLoss()
optimizer_focus = optim.Adam(focus_net.parameters(), lr=0.001)#, momentum=0.9)
optimizer_classify = optim.Adam(classify.parameters(), lr=0.001)#, momentum=0.9)
col1=[]
col2=[]
col3=[]
col4=[]
col5=[]
col6=[]
col7=[]
col8=[]
col9=[]
col10=[]
col11=[]
col12=[]
col13=[]
col14 = [] # train average sparsity
col15 = [] # test average sparsity
correct = 0
total = 0
count = 0
flag = 1
focus_true_pred_true =0
focus_false_pred_true =0
focus_true_pred_false =0
focus_false_pred_false =0
argmax_more_than_half = 0
argmax_less_than_half =0
sparse_val = 0
focus_net.eval()
classify.eval()
with torch.no_grad():
for data in train_loader:
inputs, labels , fore_idx = data
inputs = inputs.double()
inputs, labels , fore_idx = inputs.to("cuda"),labels.to("cuda"), fore_idx.to("cuda")
alphas, avg_images,_ = focus_net(inputs)
outputs = classify(avg_images)
_, predicted = torch.max(outputs.data, 1)
sparse_val += torch.sum(torch.sum(alphas>0.01,dim=1)).item()
for j in range(labels.size(0)):
count += 1
focus = torch.argmax(alphas[j])
if alphas[j][focus] >= 0.5 :
argmax_more_than_half += 1
else:
argmax_less_than_half += 1
if(focus == fore_idx[j] and predicted[j] == labels[j]):
focus_true_pred_true += 1
elif(focus != fore_idx[j] and predicted[j] == labels[j]):
focus_false_pred_true += 1
elif(focus == fore_idx[j] and predicted[j] != labels[j]):
focus_true_pred_false += 1
elif(focus != fore_idx[j] and predicted[j] != labels[j]):
focus_false_pred_false += 1
total += labels.size(0)
correct += (predicted == labels).sum().item()
print('Accuracy of the network on the 30000 train images: %f %%' % ( 100 * correct / total))
print("total correct", correct)
print("total train set images", total)
print("focus_true_pred_true %d =============> FTPT : %f %%" % (focus_true_pred_true , (100 * focus_true_pred_true / total) ) )
print("focus_false_pred_true %d =============> FFPT : %f %%" % (focus_false_pred_true, (100 * focus_false_pred_true / total) ) )
print("focus_true_pred_false %d =============> FTPF : %f %%" %( focus_true_pred_false , ( 100 * focus_true_pred_false / total) ) )
print("focus_false_pred_false %d =============> FFPF : %f %%" % (focus_false_pred_false, ( 100 * focus_false_pred_false / total) ) )
print("Sparsity_Value %d =============> AVG Sparsity : %f " % (sparse_val,(sparse_val)/total))
print("argmax_more_than_half ==================> ",argmax_more_than_half)
print("argmax_less_than_half ==================> ",argmax_less_than_half)
print(count)
print("="*100)
col1.append(0)
col2.append(argmax_more_than_half)
col3.append(argmax_less_than_half)
col4.append(focus_true_pred_true)
col5.append(focus_false_pred_true)
col6.append(focus_true_pred_false)
col7.append(focus_false_pred_false)
col14.append(sparse_val)
correct = 0
total = 0
count = 0
flag = 1
focus_true_pred_true =0
focus_false_pred_true =0
focus_true_pred_false =0
focus_false_pred_false =0
argmax_more_than_half = 0
argmax_less_than_half =0
sparse_val = 0
focus_net.eval()
classify.eval()
with torch.no_grad():
for data in test_loader:
inputs, labels , fore_idx = data
inputs = inputs.double()
inputs, labels , fore_idx = inputs.to("cuda"),labels.to("cuda"), fore_idx.to("cuda")
alphas, avg_images,_ = focus_net(inputs)
outputs = classify(avg_images)
_, predicted = torch.max(outputs.data, 1)
sparse_val += torch.sum(torch.sum(alphas>0.01,dim=1)).item()
for j in range(labels.size(0)):
focus = torch.argmax(alphas[j])
if alphas[j][focus] >= 0.5 :
argmax_more_than_half += 1
else:
argmax_less_than_half += 1
if(focus == fore_idx[j] and predicted[j] == labels[j]):
focus_true_pred_true += 1
elif(focus != fore_idx[j] and predicted[j] == labels[j]):
focus_false_pred_true += 1
elif(focus == fore_idx[j] and predicted[j] != labels[j]):
focus_true_pred_false += 1
elif(focus != fore_idx[j] and predicted[j] != labels[j]):
focus_false_pred_false += 1
total += labels.size(0)
correct += (predicted == labels).sum().item()
print('Accuracy of the network on the 10000 test images: %f %%' % (
100 * correct / total))
print("total correct", correct)
print("total test set images", total)
print("focus_true_pred_true %d =============> FTPT : %f %%" % (focus_true_pred_true , (100 * focus_true_pred_true / total) ) )
print("focus_false_pred_true %d =============> FFPT : %f %%" % (focus_false_pred_true, (100 * focus_false_pred_true / total) ) )
print("focus_true_pred_false %d =============> FTPF : %f %%" %( focus_true_pred_false , ( 100 * focus_true_pred_false / total) ) )
print("focus_false_pred_false %d =============> FFPF : %f %%" % (focus_false_pred_false, ( 100 * focus_false_pred_false / total) ) )
print("Sparsity_Value %d =============> AVG Sparsity : %f " % (sparse_val,(sparse_val)/total))
print("argmax_more_than_half ==================> ",argmax_more_than_half)
print("argmax_less_than_half ==================> ",argmax_less_than_half)
col8.append(argmax_more_than_half)
col9.append(argmax_less_than_half)
col10.append(focus_true_pred_true)
col11.append(focus_false_pred_true)
col12.append(focus_true_pred_false)
col13.append(focus_false_pred_false)
col15.append(sparse_val)
nos_epochs = 100
focus_true_pred_true =0
focus_false_pred_true =0
focus_true_pred_false =0
focus_false_pred_false =0
argmax_more_than_half = 0
argmax_less_than_half =0
focus_net.train()
classify.train()
tr_loss = []
for epoch in range(nos_epochs): # loop over the dataset multiple times
focus_net.train()
classify.train()
focus_true_pred_true =0
focus_false_pred_true =0
focus_true_pred_false =0
focus_false_pred_false =0
argmax_more_than_half = 0
argmax_less_than_half =0
sparse_val = 0
running_loss = 0.0
epoch_loss = []
cnt=0
iteration = desired_num // batch
#training data set
for i, data in enumerate(train_loader):
inputs , labels , fore_idx = data
inputs = inputs.double()
inputs, labels = inputs.to("cuda"), labels.to("cuda")
# zero the parameter gradients
optimizer_focus.zero_grad()
optimizer_classify.zero_grad()
alphas, avg_images,log_alphas = focus_net(inputs)
outputs = classify(avg_images)
# outputs, alphas, avg_images = classify(inputs)
_, predicted = torch.max(outputs.data, 1)
# print(outputs)
# print(outputs.shape,labels.shape , torch.argmax(outputs, dim=1))
loss,_,_ = my_cross_entropy(outputs, labels,alphas,log_alphas,k)
loss.backward()
optimizer_focus.step()
optimizer_classify.step()
running_loss += loss.item()
mini = 60
if cnt % mini == mini-1:    # print every 60 mini-batches
print('[%d, %5d] loss: %.3f' %(epoch + 1, cnt + 1, running_loss / mini))
epoch_loss.append(running_loss/mini)
running_loss = 0.0
cnt=cnt+1
if epoch % 1 == 0:
sparse_val += torch.sum(torch.sum(alphas>0.01,dim=1)).item()
for j in range (batch):
focus = torch.argmax(alphas[j])
if(alphas[j][focus] >= 0.5):
argmax_more_than_half +=1
else:
argmax_less_than_half +=1
if(focus == fore_idx[j] and predicted[j] == labels[j]):
focus_true_pred_true += 1
elif(focus != fore_idx[j] and predicted[j] == labels[j]):
focus_false_pred_true +=1
elif(focus == fore_idx[j] and predicted[j] != labels[j]):
focus_true_pred_false +=1
elif(focus != fore_idx[j] and predicted[j] != labels[j]):
focus_false_pred_false +=1
tr_loss.append(np.mean(epoch_loss))
if epoch % 1 == 0:
col1.append(epoch+1)
col2.append(argmax_more_than_half)
col3.append(argmax_less_than_half)
col4.append(focus_true_pred_true)
col5.append(focus_false_pred_true)
col6.append(focus_true_pred_false)
col7.append(focus_false_pred_false)
col14.append(sparse_val)
#************************************************************************
#testing data set
focus_net.eval()
classify.eval()
with torch.no_grad():
focus_true_pred_true =0
focus_false_pred_true =0
focus_true_pred_false =0
focus_false_pred_false =0
argmax_more_than_half = 0
argmax_less_than_half =0
sparse_val = 0
for data in test_loader:
inputs, labels , fore_idx = data
inputs = inputs.double()
inputs, labels = inputs.to("cuda"), labels.to("cuda")
alphas, avg_images,log_alphas = focus_net(inputs)
outputs = classify(avg_images)
#outputs, alphas, avg_images = classify(inputs)
_, predicted = torch.max(outputs.data, 1)
sparse_val += torch.sum(torch.sum(alphas>0.01,dim=1)).item()
for j in range (batch):
focus = torch.argmax(alphas[j])
if(alphas[j][focus] >= 0.5):
argmax_more_than_half +=1
else:
argmax_less_than_half +=1
if(focus == fore_idx[j] and predicted[j] == labels[j]):
focus_true_pred_true += 1
elif(focus != fore_idx[j] and predicted[j] == labels[j]):
focus_false_pred_true +=1
elif(focus == fore_idx[j] and predicted[j] != labels[j]):
focus_true_pred_false +=1
elif(focus != fore_idx[j] and predicted[j] != labels[j]):
focus_false_pred_false +=1
col8.append(argmax_more_than_half)
col9.append(argmax_less_than_half)
col10.append(focus_true_pred_true)
col11.append(focus_false_pred_true)
col12.append(focus_true_pred_false)
col13.append(focus_false_pred_false)
col15.append(sparse_val)
if(np.mean(epoch_loss) <= 0.05):
break
print('Finished Training')
torch.save(focus_net.state_dict(),path+"weights_focus_0.pt")
torch.save(classify.state_dict(),path+"weights_classify_0.pt")
columns = ["epochs", "argmax > 0.5" ,"argmax < 0.5", "focus_true_pred_true", "focus_false_pred_true", "focus_true_pred_false", "focus_false_pred_false" ,"sparse_val"]
df_train = pd.DataFrame()
df_test = pd.DataFrame()
len(col1),col9
plt.plot(np.arange(1,epoch+2),tr_loss)
plt.xlabel("epochs", fontsize=14, fontweight = 'bold')
plt.ylabel("Loss", fontsize=14, fontweight = 'bold')
plt.title("Train Loss")
plt.grid()
plt.show()
np.save("train_loss.npy",{"training_loss":tr_loss})
df_train[columns[0]] = col1
df_train[columns[1]] = col2
df_train[columns[2]] = col3
df_train[columns[3]] = col4
df_train[columns[4]] = col5
df_train[columns[5]] = col6
df_train[columns[6]] = col7
df_train[columns[7]] = col14
df_test[columns[0]] = col1
df_test[columns[1]] = col8
df_test[columns[2]] = col9
df_test[columns[3]] = col10
df_test[columns[4]] = col11
df_test[columns[5]] = col12
df_test[columns[6]] = col13
df_test[columns[7]] = col15
df_train
df_train.to_csv(path+"_train.csv",index=False)
# plt.figure(12,12)
plt.plot(col1,col2, label='argmax > 0.5')
plt.plot(col1,col3, label='argmax < 0.5')
plt.legend(loc='center left', bbox_to_anchor=(1, 0.5))
plt.xlabel("epochs")
plt.ylabel("training data")
plt.title("On Training set")
plt.show()
plt.figure(figsize=(6,5))
plt.plot(col1,np.array(col4)/300, label ="FTPT ")
plt.plot(col1,np.array(col5)/300, label ="FFPT ")
plt.plot(col1,np.array(col6)/300, label ="FTPF ")
plt.plot(col1,np.array(col7)/300, label ="FFPF ")
plt.title("On Training set")
#plt.legend(loc='center left', bbox_to_anchor=(1, 0.5))
plt.xlabel("epochs", fontsize=14, fontweight = 'bold')
plt.ylabel("percentage train data", fontsize=14, fontweight = 'bold')
# plt.xlabel("epochs")
# plt.ylabel("training data")
plt.legend()
plt.savefig(path + "_train.png",bbox_inches="tight")
plt.savefig(path + "_train.pdf",bbox_inches="tight")
plt.grid()
plt.show()
plt.figure(figsize=(6,5))
plt.plot(col1,np.array(col14)/30000, label ="sparsity_val")
plt.title("On Training set")
#plt.legend(loc='center left', bbox_to_anchor=(1, 0.5))
plt.xlabel("epochs", fontsize=14, fontweight = 'bold')
plt.ylabel("average sparsity value", fontsize=14, fontweight = 'bold')
# plt.xlabel("epochs")
# plt.ylabel("sparsity_value")
plt.savefig(path + "sparsity_train.png",bbox_inches="tight")
plt.savefig(path + "sparsity_train.pdf",bbox_inches="tight")
plt.grid()
plt.show()
df_test
df_test.to_csv(path+"_test.csv")
# plt.figure(12,12)
plt.plot(col1,col8, label='argmax > 0.5')
plt.plot(col1,col9, label='argmax < 0.5')
plt.legend(loc='center left', bbox_to_anchor=(1, 0.5))
plt.xlabel("epochs")
plt.ylabel("Testing data")
plt.title("On Testing set")
plt.show()
plt.figure(figsize=(6,5))
plt.plot(col1,np.array(col10)/100, label ="FTPT ")
plt.plot(col1,np.array(col11)/100, label ="FFPT ")
plt.plot(col1,np.array(col12)/100, label ="FTPF ")
plt.plot(col1,np.array(col13)/100, label ="FFPF ")
plt.title("On Testing set")
#plt.legend(loc='center left', bbox_to_anchor=(1, 0.5))
plt.xlabel("epochs", fontsize=14, fontweight = 'bold')
plt.ylabel("percentage test data", fontsize=14, fontweight = 'bold')
# plt.xlabel("epochs")
# plt.ylabel("training data")
plt.legend()
plt.savefig(path + "_test.png",bbox_inches="tight")
plt.savefig(path + "_test.pdf",bbox_inches="tight")
plt.grid()
plt.show()
plt.figure(figsize=(6,5))
plt.plot(col1,np.array(col15)/10000, label ="sparsity_val")
plt.title("On Testing set")
#plt.legend(loc='center left', bbox_to_anchor=(1, 0.5))
plt.xlabel("epochs", fontsize=14, fontweight = 'bold')
plt.ylabel("average sparsity value", fontsize=14, fontweight = 'bold')
plt.grid()
plt.savefig(path + "sparsity_test.png",bbox_inches="tight")
plt.savefig(path + "sparsity_test.pdf",bbox_inches="tight")
plt.show()
correct = 0
total = 0
count = 0
flag = 1
focus_true_pred_true =0
focus_false_pred_true =0
focus_true_pred_false =0
focus_false_pred_false =0
argmax_more_than_half = 0
argmax_less_than_half =0
sparse_val = 0
focus_net.eval()
classify.eval()
with torch.no_grad():
for data in train_loader:
inputs, labels , fore_idx = data
inputs = inputs.double()
inputs, labels , fore_idx = inputs.to("cuda"),labels.to("cuda"), fore_idx.to("cuda")
alphas, avg_images,_ = focus_net(inputs)
outputs = classify(avg_images)
_, predicted = torch.max(outputs.data, 1)
sparse_val += torch.sum(torch.sum(alphas>0.01,dim=1)).item()
for j in range(labels.size(0)):
count += 1
focus = torch.argmax(alphas[j])
if alphas[j][focus] >= 0.5 :
argmax_more_than_half += 1
else:
argmax_less_than_half += 1
if(focus == fore_idx[j] and predicted[j] == labels[j]):
focus_true_pred_true += 1
elif(focus != fore_idx[j] and predicted[j] == labels[j]):
focus_false_pred_true += 1
elif(focus == fore_idx[j] and predicted[j] != labels[j]):
focus_true_pred_false += 1
elif(focus != fore_idx[j] and predicted[j] != labels[j]):
focus_false_pred_false += 1
total += labels.size(0)
correct += (predicted == labels).sum().item()
print('Accuracy of the network on the 30000 train images: %f %%' % ( 100 * correct / total))
print("total correct", correct)
print("total train set images", total)
print("focus_true_pred_true %d =============> FTPT : %f %%" % (focus_true_pred_true , (100 * focus_true_pred_true / total) ) )
print("focus_false_pred_true %d =============> FFPT : %f %%" % (focus_false_pred_true, (100 * focus_false_pred_true / total) ) )
print("focus_true_pred_false %d =============> FTPF : %f %%" %( focus_true_pred_false , ( 100 * focus_true_pred_false / total) ) )
print("focus_false_pred_false %d =============> FFPF : %f %%" % (focus_false_pred_false, ( 100 * focus_false_pred_false / total) ) )
print("Sparsity_Value %d =============> AVG Sparsity : %f " % (sparse_val,(sparse_val)/total))
print("argmax_more_than_half ==================> ",argmax_more_than_half)
print("argmax_less_than_half ==================> ",argmax_less_than_half)
print(count)
print("="*100)
correct = 0
total = 0
count = 0
flag = 1
focus_true_pred_true =0
focus_false_pred_true =0
focus_true_pred_false =0
focus_false_pred_false =0
argmax_more_than_half = 0
argmax_less_than_half =0
sparse_val = 0
focus_net.eval()
classify.eval()
with torch.no_grad():
for data in test_loader:
inputs, labels , fore_idx = data
inputs = inputs.double()
inputs, labels , fore_idx = inputs.to("cuda"),labels.to("cuda"), fore_idx.to("cuda")
alphas, avg_images,_ = focus_net(inputs)
outputs = classify(avg_images)
_, predicted = torch.max(outputs.data, 1)
sparse_val += torch.sum(torch.sum(alphas>0.01,dim=1)).item()
for j in range(labels.size(0)):
focus = torch.argmax(alphas[j])
if alphas[j][focus] >= 0.5 :
argmax_more_than_half += 1
else:
argmax_less_than_half += 1
if(focus == fore_idx[j] and predicted[j] == labels[j]):
focus_true_pred_true += 1
elif(focus != fore_idx[j] and predicted[j] == labels[j]):
focus_false_pred_true += 1
elif(focus == fore_idx[j] and predicted[j] != labels[j]):
focus_true_pred_false += 1
elif(focus != fore_idx[j] and predicted[j] != labels[j]):
focus_false_pred_false += 1
total += labels.size(0)
correct += (predicted == labels).sum().item()
print('Accuracy of the network on the 10000 test images: %f %%' % (
100 * correct / total))
print("total correct", correct)
print("total test set images", total)
print("focus_true_pred_true %d =============> FTPT : %f %%" % (focus_true_pred_true , (100 * focus_true_pred_true / total) ) )
print("focus_false_pred_true %d =============> FFPT : %f %%" % (focus_false_pred_true, (100 * focus_false_pred_true / total) ) )
print("focus_true_pred_false %d =============> FTPF : %f %%" %( focus_true_pred_false , ( 100 * focus_true_pred_false / total) ) )
print("focus_false_pred_false %d =============> FFPF : %f %%" % (focus_false_pred_false, ( 100 * focus_false_pred_false / total) ) )
print("Sparsity_Value %d =============> AVG Sparsity : %f " % (sparse_val,(sparse_val)/total))
print("argmax_more_than_half ==================> ",argmax_more_than_half)
print("argmax_less_than_half ==================> ",argmax_less_than_half)
correct = 0
total = 0
focus_net.eval()
classify.eval()
with torch.no_grad():
for data in train_loader:
inputs, labels , fore_idx = data
inputs = inputs.double()
inputs, labels = inputs.to("cuda"), labels.to("cuda")
alphas, avg_images,_ = focus_net(inputs)
outputs = classify(avg_images)
_, predicted = torch.max(outputs.data, 1)
total += labels.size(0)
correct += (predicted == labels).sum().item()
print('Accuracy of the network on the 30000 train images: %f %%' % ( 100 * correct / total))
print("total correct", correct)
print("total train set images", total)
correct = 0
total = 0
focus_net.eval()
classify.eval()
with torch.no_grad():
    for data in test_loader:
        inputs, labels, fore_idx = data
        inputs = inputs.double()
        inputs, labels = inputs.to("cuda"), labels.to("cuda")
        alphas, avg_images, _ = focus_net(inputs)
        outputs = classify(avg_images)
        _, predicted = torch.max(outputs.data, 1)
        total += labels.size(0)
        correct += (predicted == labels).sum().item()

print('Accuracy of the network on the 10000 test images: %f %%' % (100 * correct / total))
print("total correct", correct)
print("total test set images", total)
```
```
# Copyright (c) Facebook, Inc. and its affiliates. All rights reserved.
```
# Fit a mesh via rendering
This tutorial shows how to:
- Load a mesh and textures from an `.obj` file.
- Create a synthetic dataset by rendering a textured mesh from multiple viewpoints
- Fit a mesh to the observed synthetic images using differential silhouette rendering
- Fit a mesh and its textures using differential textured rendering
## 0. Install and Import modules
Ensure `torch` and `torchvision` are installed. If `pytorch3d` is not installed, install it using the following cell:
```
import os
import sys
import torch
need_pytorch3d = False
try:
    import pytorch3d
except ModuleNotFoundError:
    need_pytorch3d = True
if need_pytorch3d:
    if torch.__version__.startswith("1.10.") and sys.platform.startswith("linux"):
        # We try to install PyTorch3D via a released wheel.
        version_str = "".join([
            f"py3{sys.version_info.minor}_cu",
            torch.version.cuda.replace(".", ""),
            f"_pyt{torch.__version__[0:5:2]}"
        ])
        !pip install pytorch3d -f https://dl.fbaipublicfiles.com/pytorch3d/packaging/wheels/{version_str}/download.html
    else:
        # We try to install PyTorch3D from source.
        !curl -LO https://github.com/NVIDIA/cub/archive/1.10.0.tar.gz
        !tar xzf 1.10.0.tar.gz
        os.environ["CUB_HOME"] = os.getcwd() + "/cub-1.10.0"
        !pip install 'git+https://github.com/facebookresearch/pytorch3d.git@stable'
import os
import torch
import matplotlib.pyplot as plt
from pytorch3d.utils import ico_sphere
import numpy as np
from tqdm.notebook import tqdm
# Util function for loading meshes
from pytorch3d.io import load_objs_as_meshes, save_obj
from pytorch3d.loss import (
    chamfer_distance,
    mesh_edge_loss,
    mesh_laplacian_smoothing,
    mesh_normal_consistency,
)
# Data structures and functions for rendering
from pytorch3d.structures import Meshes
from pytorch3d.renderer import (
    look_at_view_transform,
    OpenGLPerspectiveCameras,
    PointLights,
    DirectionalLights,
    Materials,
    RasterizationSettings,
    MeshRenderer,
    MeshRasterizer,
    SoftPhongShader,
    SoftSilhouetteShader,
    TexturesVertex
)
# add path for demo utils functions
import sys
import os
sys.path.append(os.path.abspath(''))
```
If using **Google Colab**, fetch the utils file for plotting image grids:
```
!wget https://raw.githubusercontent.com/facebookresearch/pytorch3d/main/docs/tutorials/utils/plot_image_grid.py
from plot_image_grid import image_grid
```
OR if running **locally** uncomment and run the following cell:
```
# from utils.plot_image_grid import image_grid
```
## 1. Load a mesh and texture file
Load an `.obj` file and its associated `.mtl` file and create a **Textures** and **Meshes** object.
**Meshes** is a unique datastructure provided in PyTorch3D for working with batches of meshes of different sizes.
**TexturesVertex** is an auxiliary datastructure for storing vertex rgb texture information about meshes.
**Meshes** has several class methods which are used throughout the rendering pipeline.
If running this notebook using **Google Colab**, run the following cell to fetch the mesh obj and texture files and save them at the path `data/cow_mesh`:
If running locally, the data is already available at the correct path.
```
!mkdir -p data/cow_mesh
!wget -P data/cow_mesh https://dl.fbaipublicfiles.com/pytorch3d/data/cow_mesh/cow.obj
!wget -P data/cow_mesh https://dl.fbaipublicfiles.com/pytorch3d/data/cow_mesh/cow.mtl
!wget -P data/cow_mesh https://dl.fbaipublicfiles.com/pytorch3d/data/cow_mesh/cow_texture.png
# Setup
if torch.cuda.is_available():
device = torch.device("cuda:0")
torch.cuda.set_device(device)
else:
device = torch.device("cpu")
# Set paths
DATA_DIR = "./data"
obj_filename = os.path.join(DATA_DIR, "cow_mesh/cow.obj")
# Load obj file
mesh = load_objs_as_meshes([obj_filename], device=device)
# We scale normalize and center the target mesh to fit in a sphere of radius 1
# centered at (0,0,0). (scale, center) will be used to bring the predicted mesh
# to its original center and scale. Note that normalizing the target mesh
# speeds up the optimization but is not necessary!
verts = mesh.verts_packed()
N = verts.shape[0]
center = verts.mean(0)
scale = max((verts - center).abs().max(0)[0])
mesh.offset_verts_(-center)
mesh.scale_verts_((1.0 / float(scale)));
```
## 2. Dataset Creation
We sample different camera positions that encode multiple viewpoints of the cow. We create a renderer with a shader that performs texture map interpolation. We render a synthetic dataset of images of the textured cow mesh from multiple viewpoints.
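As a quick sanity check of the sampling scheme described above, here is a plain-NumPy sketch mirroring the `torch.linspace` calls in the cell below: evenly spaced elevation and azimuth values are paired up to form the viewpoints. The values match the tutorial; the pairing list itself is only illustrative.

```python
import numpy as np

# Sample num_views (elevation, azimuth) pairs evenly over the viewing sphere.
num_views = 20
elev = np.linspace(0, 360, num_views)      # elevation angles in degrees
azim = np.linspace(-180, 180, num_views)   # azimuth angles in degrees

# Each viewpoint i uses the pair (elev[i], azim[i]).
view_angles = list(zip(elev, azim))
print(len(view_angles))   # 20 viewpoints
print(view_angles[0])     # (0.0, -180.0)
```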
```
# the number of different viewpoints from which we want to render the mesh.
num_views = 20
# Get a batch of viewing angles.
elev = torch.linspace(0, 360, num_views)
azim = torch.linspace(-180, 180, num_views)
# Place a point light in front of the object. As mentioned above, the front of
# the cow is facing the -z direction.
lights = PointLights(device=device, location=[[0.0, 0.0, -3.0]])
# Initialize an OpenGL perspective camera that represents a batch of different
# viewing angles. All the cameras helper methods support mixed type inputs and
# broadcasting. So we can place the camera at a distance of dist=2.7, and
# then specify elevation and azimuth angles for each viewpoint as tensors.
R, T = look_at_view_transform(dist=2.7, elev=elev, azim=azim)
cameras = OpenGLPerspectiveCameras(device=device, R=R, T=T)
# We arbitrarily choose one particular view that will be used to visualize
# results
camera = OpenGLPerspectiveCameras(device=device, R=R[None, 1, ...],
                                  T=T[None, 1, ...])
# Define the settings for rasterization and shading. Here we set the output
# image to be of size 128X128. As we are rendering images for visualization
# purposes only we will set faces_per_pixel=1 and blur_radius=0.0. Refer to
# rasterize_meshes.py for explanations of these parameters. We also leave
# bin_size and max_faces_per_bin to their default values of None, which sets
# their values using heuristics and ensures that the faster coarse-to-fine
# rasterization method is used. Refer to docs/notes/renderer.md for an
# explanation of the difference between naive and coarse-to-fine rasterization.
raster_settings = RasterizationSettings(
    image_size=128,
    blur_radius=0.0,
    faces_per_pixel=1,
)
# Create a Phong renderer by composing a rasterizer and a shader. The textured
# Phong shader will interpolate the texture uv coordinates for each vertex,
# sample from a texture image and apply the Phong lighting model
renderer = MeshRenderer(
    rasterizer=MeshRasterizer(
        cameras=camera,
        raster_settings=raster_settings
    ),
    shader=SoftPhongShader(
        device=device,
        cameras=camera,
        lights=lights
    )
)
# Create a batch of meshes by repeating the cow mesh and associated textures.
# Meshes has a useful `extend` method which allows us do this very easily.
# This also extends the textures.
meshes = mesh.extend(num_views)
# Render the cow mesh from each viewing angle
target_images = renderer(meshes, cameras=cameras, lights=lights)
# Our multi-view cow dataset will be represented by these 2 lists of tensors,
# each of length num_views.
target_rgb = [target_images[i, ..., :3] for i in range(num_views)]
target_cameras = [OpenGLPerspectiveCameras(device=device, R=R[None, i, ...],
                                           T=T[None, i, ...]) for i in range(num_views)]
```
Visualize the dataset:
```
# RGB images
image_grid(target_images.cpu().numpy(), rows=4, cols=5, rgb=True)
plt.show()
```
Later in this tutorial, we will fit a mesh to the rendered RGB images, as well as to images of just the cow silhouette. For the latter case, we will render a dataset of silhouette images. Most shaders in PyTorch3D output an alpha channel along with the RGB image, as the 4th channel of an RGBA image. The alpha channel encodes the probability that each pixel belongs to the foreground of the object. We construct a soft silhouette shader to render this alpha channel.
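To make the role of the alpha channel concrete, here is a minimal NumPy sketch of the silhouette comparison used later in the optimization loop: the mean squared difference between a predicted alpha mask and a target mask. The arrays are made up for illustration.

```python
import numpy as np

# Target: per-pixel foreground probability in [0, 1]; here a 2x2 square.
target_alpha = np.zeros((4, 4))
target_alpha[1:3, 1:3] = 1.0

# A slightly "blurred" prediction: everything shifted up by 0.1, clipped.
pred_alpha = np.clip(target_alpha + 0.1, 0.0, 1.0)

# Mean squared error between predicted and target silhouettes.
loss = ((pred_alpha - target_alpha) ** 2).mean()
print(round(float(loss), 4))   # 0.0075
```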
```
# Rasterization settings for silhouette rendering
sigma = 1e-4
raster_settings_silhouette = RasterizationSettings(
    image_size=128,
    blur_radius=np.log(1. / 1e-4 - 1.) * sigma,
    faces_per_pixel=50,
)
# Silhouette renderer
renderer_silhouette = MeshRenderer(
rasterizer=MeshRasterizer(
cameras=camera,
raster_settings=raster_settings_silhouette
),
shader=SoftSilhouetteShader()
)
# Render silhouette images. The 4th channel (index 3) of the rendering output
# is the alpha/silhouette channel
silhouette_images = renderer_silhouette(meshes, cameras=cameras, lights=lights)
target_silhouette = [silhouette_images[i, ..., 3] for i in range(num_views)]
# Visualize silhouette images
image_grid(silhouette_images.cpu().numpy(), rows=4, cols=5, rgb=False)
plt.show()
```
## 3. Mesh prediction via silhouette rendering
In the previous section, we created a dataset of images of multiple viewpoints of a cow. In this section, we predict a mesh by observing those target images without any knowledge of the ground truth cow mesh. We assume we know the position of the cameras and lighting.
We first define some helper functions to visualize the results of our mesh prediction:
```
# Show a visualization comparing the rendered predicted mesh to the ground
# truth mesh
def visualize_prediction(predicted_mesh, renderer=renderer_silhouette,
                         target_image=target_rgb[1], title='',
                         silhouette=False):
    inds = 3 if silhouette else range(3)
    with torch.no_grad():
        predicted_images = renderer(predicted_mesh)
    plt.figure(figsize=(20, 10))
    plt.subplot(1, 2, 1)
    plt.imshow(predicted_images[0, ..., inds].cpu().detach().numpy())
    plt.subplot(1, 2, 2)
    plt.imshow(target_image.cpu().detach().numpy())
    plt.title(title)
    plt.axis("off")

# Plot losses as a function of optimization iteration
def plot_losses(losses):
    fig = plt.figure(figsize=(13, 5))
    ax = fig.gca()
    for k, l in losses.items():
        ax.plot(l['values'], label=k + " loss")
    ax.legend(fontsize="16")
    ax.set_xlabel("Iteration", fontsize="16")
    ax.set_ylabel("Loss", fontsize="16")
    ax.set_title("Loss vs iterations", fontsize="16")
```
Starting from a sphere mesh, we will learn offsets of each vertex such that the predicted mesh silhouette is more similar to the target silhouette image at each optimization step. We begin by loading our initial sphere mesh:
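The idea of learning per-vertex offsets can be sketched without any rendering. Below is a toy NumPy example, illustrative only, using a hand-written gradient step instead of autograd: it optimizes one offset per point so that the deformed source points match targets, analogous to the `deform_verts` tensor used in this section. The points and learning rate are made up.

```python
import numpy as np

# Fixed source points (the analogue of the sphere vertices).
src = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0]])
# Targets the deformed points should reach.
target = src + np.array([[0.5, -0.2], [0.1, 0.3], [-0.4, 0.0]])

offsets = np.zeros_like(src)   # learnable per-point offsets
lr = 0.5
for _ in range(100):
    deformed = src + offsets
    # Analytic gradient of the mean squared error w.r.t. the offsets.
    grad = 2.0 * (deformed - target) / deformed.size
    offsets -= lr * grad

print(np.allclose(src + offsets, target, atol=1e-3))   # True
```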
```
# We initialize the source shape to be a sphere of radius 1.
src_mesh = ico_sphere(4, device)
```
We create a new differentiable renderer for rendering the silhouette of our predicted mesh:
```
# Rasterization settings for differentiable rendering, where the blur_radius
# initialization is based on Liu et al, 'Soft Rasterizer: A Differentiable
# Renderer for Image-based 3D Reasoning', ICCV 2019
sigma = 1e-4
raster_settings_soft = RasterizationSettings(
    image_size=128,
    blur_radius=np.log(1. / 1e-4 - 1.) * sigma,
    faces_per_pixel=50,
)
# Silhouette renderer
renderer_silhouette = MeshRenderer(
    rasterizer=MeshRasterizer(
        cameras=camera,
        raster_settings=raster_settings_soft
    ),
    shader=SoftSilhouetteShader()
)
```
We initialize settings, losses, and the optimizer that will be used to iteratively fit our mesh to the target silhouettes:
```
# Number of views to optimize over in each SGD iteration
num_views_per_iteration = 2
# Number of optimization steps
Niter = 2000
# Plot period for the losses
plot_period = 250
%matplotlib inline
# Optimize using rendered silhouette image loss, mesh edge loss, mesh normal
# consistency, and mesh laplacian smoothing
losses = {"silhouette": {"weight": 1.0, "values": []},
          "edge": {"weight": 1.0, "values": []},
          "normal": {"weight": 0.01, "values": []},
          "laplacian": {"weight": 1.0, "values": []},
          }
# Losses to smooth / regularize the mesh shape
def update_mesh_shape_prior_losses(mesh, loss):
    # mesh edge length regularization
    loss["edge"] = mesh_edge_loss(mesh)
    # mesh normal consistency
    loss["normal"] = mesh_normal_consistency(mesh)
    # mesh laplacian smoothing
    loss["laplacian"] = mesh_laplacian_smoothing(mesh, method="uniform")
# We will learn to deform the source mesh by offsetting its vertices
# The shape of the deform parameters is equal to the total number of vertices in
# src_mesh
verts_shape = src_mesh.verts_packed().shape
deform_verts = torch.full(verts_shape, 0.0, device=device, requires_grad=True)
# The optimizer
optimizer = torch.optim.SGD([deform_verts], lr=1.0, momentum=0.9)
```
We write an optimization loop to iteratively refine our predicted mesh from the sphere mesh into a mesh that matches the silhouettes of the target images:
```
loop = tqdm(range(Niter))
for i in loop:
    # Initialize optimizer
    optimizer.zero_grad()
    # Deform the mesh
    new_src_mesh = src_mesh.offset_verts(deform_verts)
    # Losses to smooth / regularize the mesh shape
    loss = {k: torch.tensor(0.0, device=device) for k in losses}
    update_mesh_shape_prior_losses(new_src_mesh, loss)
    # Compute the average silhouette loss over two random views, as the average
    # squared L2 distance between the predicted silhouette and the target
    # silhouette from our dataset
    for j in np.random.permutation(num_views).tolist()[:num_views_per_iteration]:
        images_predicted = renderer_silhouette(new_src_mesh, cameras=target_cameras[j], lights=lights)
        predicted_silhouette = images_predicted[..., 3]
        loss_silhouette = ((predicted_silhouette - target_silhouette[j]) ** 2).mean()
        loss["silhouette"] += loss_silhouette / num_views_per_iteration
    # Weighted sum of the losses
    sum_loss = torch.tensor(0.0, device=device)
    for k, l in loss.items():
        sum_loss += l * losses[k]["weight"]
        losses[k]["values"].append(float(l.detach().cpu()))
    # Print the losses
    loop.set_description("total_loss = %.6f" % sum_loss)
    # Plot mesh
    if i % plot_period == 0:
        visualize_prediction(new_src_mesh, title="iter: %d" % i, silhouette=True,
                             target_image=target_silhouette[1])
    # Optimization step
    sum_loss.backward()
    optimizer.step()

visualize_prediction(new_src_mesh, silhouette=True,
                     target_image=target_silhouette[1])
plot_losses(losses)
```
## 4. Mesh and texture prediction via textured rendering
We can predict both the mesh and its texture if we add an additional loss based on comparing a predicted rendered RGB image to the target image. As before, we start with a sphere mesh. We learn both translational offsets and RGB texture colors for each vertex in the sphere mesh. Since our loss is based on rendered RGB pixel values instead of just the silhouette, we use a **SoftPhongShader** instead of a **SoftSilhouetteShader**.
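The bookkeeping pattern used throughout this section, one weight and one history list per loss term, with the training objective being the weighted sum, can be sketched in isolation. The loss values below are made up.

```python
# Each loss term carries a weight and a history list for later plotting.
losses = {
    "rgb":        {"weight": 1.0, "values": []},
    "silhouette": {"weight": 1.0, "values": []},
    "edge":       {"weight": 1.0, "values": []},
}
# Made-up per-term values for one optimization step.
step_losses = {"rgb": 0.20, "silhouette": 0.05, "edge": 0.01}

total = 0.0
for name, value in step_losses.items():
    total += value * losses[name]["weight"]   # weighted sum = objective
    losses[name]["values"].append(value)      # keep history for plotting

print(round(total, 2))   # 0.26
```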
```
# Rasterization settings for differentiable rendering, where the blur_radius
# initialization is based on Liu et al, 'Soft Rasterizer: A Differentiable
# Renderer for Image-based 3D Reasoning', ICCV 2019
sigma = 1e-4
raster_settings_soft = RasterizationSettings(
    image_size=128,
    blur_radius=np.log(1. / 1e-4 - 1.) * sigma,
    faces_per_pixel=50,
)
# Differentiable soft renderer using per vertex RGB colors for texture
renderer_textured = MeshRenderer(
    rasterizer=MeshRasterizer(
        cameras=camera,
        raster_settings=raster_settings_soft
    ),
    shader=SoftPhongShader(device=device,
                           cameras=camera,
                           lights=lights)
)
```
We initialize settings, losses, and the optimizer that will be used to iteratively fit our mesh to the target RGB images:
```
# Number of views to optimize over in each SGD iteration
num_views_per_iteration = 2
# Number of optimization steps
Niter = 2000
# Plot period for the losses
plot_period = 250
%matplotlib inline
# Optimize using rendered RGB image loss, rendered silhouette image loss, mesh
# edge loss, mesh normal consistency, and mesh laplacian smoothing
losses = {"rgb": {"weight": 1.0, "values": []},
          "silhouette": {"weight": 1.0, "values": []},
          "edge": {"weight": 1.0, "values": []},
          "normal": {"weight": 0.01, "values": []},
          "laplacian": {"weight": 1.0, "values": []},
          }
# We will learn to deform the source mesh by offsetting its vertices
# The shape of the deform parameters is equal to the total number of vertices in
# src_mesh
verts_shape = src_mesh.verts_packed().shape
deform_verts = torch.full(verts_shape, 0.0, device=device, requires_grad=True)
# We will also learn per vertex colors for our sphere mesh that define texture
# of the mesh
sphere_verts_rgb = torch.full([1, verts_shape[0], 3], 0.5, device=device, requires_grad=True)
# The optimizer
optimizer = torch.optim.SGD([deform_verts, sphere_verts_rgb], lr=1.0, momentum=0.9)
```
We write an optimization loop to iteratively refine our predicted mesh and its vertex colors from the sphere mesh into a mesh that matches the target images:
```
loop = tqdm(range(Niter))
for i in loop:
    # Initialize optimizer
    optimizer.zero_grad()
    # Deform the mesh
    new_src_mesh = src_mesh.offset_verts(deform_verts)
    # Add per vertex colors to texture the mesh
    new_src_mesh.textures = TexturesVertex(verts_features=sphere_verts_rgb)
    # Losses to smooth / regularize the mesh shape
    loss = {k: torch.tensor(0.0, device=device) for k in losses}
    update_mesh_shape_prior_losses(new_src_mesh, loss)
    # Randomly select two views to optimize over in this iteration. Compared
    # to using just one view, this helps resolve ambiguities between updating
    # mesh shape vs. updating mesh texture
    for j in np.random.permutation(num_views).tolist()[:num_views_per_iteration]:
        images_predicted = renderer_textured(new_src_mesh, cameras=target_cameras[j], lights=lights)
        # Squared L2 distance between the predicted silhouette and the target
        # silhouette from our dataset
        predicted_silhouette = images_predicted[..., 3]
        loss_silhouette = ((predicted_silhouette - target_silhouette[j]) ** 2).mean()
        loss["silhouette"] += loss_silhouette / num_views_per_iteration
        # Squared L2 distance between the predicted RGB image and the target
        # image from our dataset
        predicted_rgb = images_predicted[..., :3]
        loss_rgb = ((predicted_rgb - target_rgb[j]) ** 2).mean()
        loss["rgb"] += loss_rgb / num_views_per_iteration
    # Weighted sum of the losses
    sum_loss = torch.tensor(0.0, device=device)
    for k, l in loss.items():
        sum_loss += l * losses[k]["weight"]
        losses[k]["values"].append(float(l.detach().cpu()))
    # Print the losses
    loop.set_description("total_loss = %.6f" % sum_loss)
    # Plot mesh
    if i % plot_period == 0:
        visualize_prediction(new_src_mesh, renderer=renderer_textured, title="iter: %d" % i, silhouette=False)
    # Optimization step
    sum_loss.backward()
    optimizer.step()

visualize_prediction(new_src_mesh, renderer=renderer_textured, silhouette=False)
plot_losses(losses)
```
## 5. Save the final predicted mesh
```
# Fetch the verts and faces of the final predicted mesh
final_verts, final_faces = new_src_mesh.get_mesh_verts_faces(0)
# Scale normalize back to the original target size
final_verts = final_verts * scale + center
# Store the predicted mesh using save_obj
final_obj = os.path.join('./', 'final_model.obj')
save_obj(final_obj, final_verts, final_faces)
```
## 6. Conclusion
In this tutorial, we learned how to load a textured mesh from an obj file and create a synthetic dataset by rendering the mesh from multiple viewpoints. We showed how to set up an optimization loop to fit a mesh to the observed dataset images based on a rendered silhouette loss. We then augmented this optimization loop with an additional loss based on rendered RGB images, which allowed us to predict both a mesh and its texture.
# T006 · Maximum common substructure
**Note:** This talktorial is a part of TeachOpenCADD, a platform that aims to teach domain-specific skills and to provide pipeline templates as starting points for research projects.
Authors:
- Oliver Nagel, CADD Seminars, 2017, Charité/FU Berlin
- Jaime Rodríguez-Guerra, 2019-2020, [Volkamer lab](https://volkamerlab.org), Charité
- Andrea Volkamer, 2019-2020, [Volkamer lab](https://volkamerlab.org), Charité
__Talktorial T006__: This talktorial is part of the TeachOpenCADD pipeline described in the [first TeachOpenCADD paper](https://jcheminf.biomedcentral.com/articles/10.1186/s13321-019-0351-x), comprising talktorials T001-T010.
## Aim of this talktorial
Clustering and classification of large scale chemical data is essential for navigation, analysis and knowledge discovery in a wide variety of chemical application domains in drug discovery.
In the last talktorial, we learned how to group molecules (clustering) and found that the molecules in one cluster look similar to each other and share a common scaffold. Besides visual inspection, we will learn here how to calculate the maximum substructure that a set of molecules has in common.
### Contents in *Theory*
* Introduction to identification of maximum common substructure in a set of molecules
* Detailed explanation of the FMCS algorithm
### Contents in *Practical*
* Load and draw molecules
* Run the FMCS algorithm with different input parameters
* A more diverse set: the EGFR compounds downloaded from ChEMBL
* Identification of MCS using interactive cut-off adaption
### References
* Dalke A, Hastings J., FMCS: a novel algorithm for the multiple MCS problem. [*J. Cheminf.* 2013; **5** (Suppl 1): O6](https://www.ncbi.nlm.nih.gov/pmc/articles/PMC3606201/)
* Raymond JW., Willet P., Maximum common subgraph isomorphism algorithms for the matching of chemical structures. [*J Comput Aided Mol Des.* 2002 Jul; **16**(7):521-33](https://link.springer.com/article/10.1023/A:1021271615909)
* [Dalke's website with info on algorithm](http://dalkescientific.com/writings/diary/archive/2012/05/12/mcs_background.html)
* [RDKit Cookbook documentation on MCS](http://www.rdkit.org/docs/Cookbook.html#using-custom-mcs-atom-types)
## Theory
### Introduction to identification of maximum common substructure in a set of molecules

The maximum common substructure (MCS) is defined as the largest substructure that appears in two or more candidate molecules.
* Finding the MCS = maximum common subgraph isomorphism problem
* Has many applications in the field of cheminformatics: similarity search, hierarchical clustering, or molecule alignment
* Advantages:
* Intuitive $\rightarrow$ shared structure among candidates likely to be important
* Provides insight into possible activity patterns
* Easy visualization by simply highlighting the substructure
Details on MCS algorithms (see review: [*J Comput Aided Mol Des.* 2002 Jul; **16**(7):521-33](https://link.springer.com/article/10.1023/A:1021271615909))
* Determining an MCS between two or more graphs is an NP-complete problem
* Algorithms for exact determination as well as approximations exist
* Exact: Maximum-clique, backtracking, dynamic programming
* Approximate: Genetic algorithm, combinatorial optimization, fragment storage, ...
* Problem reduction: Simplify the molecular graphs
Example of an implementation: [FMCS](http://dalkescientific.com/writings/diary/archive/2012/05/12/mcs_background.html) algorithm
* Models MCS problem as a graph isomorphism problem
* Based on subgraph enumeration and subgraph isomorphism testing
### Detailed explanation of FMCS algorithm
As explained in [*J. Cheminf.* 2013; **5**(Suppl 1): O6](https://www.ncbi.nlm.nih.gov/pmc/articles/PMC3606201/) and the respective [RDKit FMCS documentation](https://www.rdkit.org/docs/source/rdkit.Chem.fmcs.fmcs.html).
#### The simplified algorithm description
```
best_substructure = None
pick one structure in the set as query, all others as targets
for each substructure in the query:
    convert into a SMARTS string based on the desired match properties
    if SMARTS pattern exists in all of the targets:
        then it is a common substructure
        keep track of the maximum of such substructures
```
This simple approach alone will usually take a long time, but there are certain tricks used to speed up the run time.
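As a rough intuition for why this brute-force enumeration is expensive, here is an illustrative Python analogue on strings instead of molecular graphs: enumerate substrings of a query (longest first) and keep the first one present in all targets. This is not the FMCS algorithm itself, just a sketch of the naive idea; the SMILES-like strings are made up.

```python
def longest_common_substring(query, targets):
    """Naive analogue of the pseudocode above, on strings instead of graphs:
    enumerate every substring of the query (longest first) and return the
    first one contained in all targets. O(n^2) candidates per query."""
    n = len(query)
    for length in range(n, 0, -1):
        for start in range(n - length + 1):
            candidate = query[start:start + length]
            if all(candidate in t for t in targets):
                return candidate
    return ""

print(longest_common_substring("CCOC", ["CCNCC", "OCCC"]))   # 'CC'
```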
#### Tricks to speed up the run time
<img src="images/speed_tricks.jpg" width=800 />
#### A) Bond elimination
* Remove bonds that cannot be part of the MCS
* Atom and bond type information has to be present in every input structure
* Bond type: string consisting of SMARTS of the first atom, the bond and the second atom
* Exclude all bond types not present in all input structures, delete respective edges
* Result: fragmented structures with all atom information, but fewer edges (bonds)
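Step A can be sketched with plain Python sets: represent each structure by its set of bond-type labels and intersect them; any bond whose type is missing from the intersection cannot appear in the MCS and can be deleted up front. The "atom-bond-atom" labels below are made up for illustration.

```python
# Each structure as a set of bond-type labels (made-up examples).
structures = [
    {"C-C", "C=O", "C-N", "C-O"},
    {"C-C", "C=O", "C-S"},
    {"C-C", "C=O", "C-N"},
]

# Only bond types present in every structure can be part of the MCS.
common_bond_types = set.intersection(*structures)
print(sorted(common_bond_types))   # ['C-C', 'C=O']

# Bonds of any other type can be removed from every structure before the
# subgraph search starts, leaving fragmented but much smaller graphs.
```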
#### B) Use the structure with the smallest largest fragment as the query
* Heuristic approach:
* Find largest fragment of each input structure
* Sort input structures ascending by number of bonds in largest fragment
* Solve ties with number of atoms or input order as alternative
* The structure with the smallest largest fragment becomes the query structure
* The ones from the other input structures are the targets
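The heuristic in step B amounts to a sort key: compare structures by the size of their largest fragment, breaking ties by total size. A sketch with made-up fragment bond counts:

```python
# Each structure as a list of fragment sizes in bonds (made-up numbers).
fragments_per_structure = {
    "mol_a": [12, 3],     # largest fragment: 12 bonds
    "mol_b": [7, 6, 2],   # largest fragment: 7 bonds
    "mol_c": [9],         # largest fragment: 9 bonds
}

# Query = structure whose largest fragment is smallest; ties broken by
# total bond count.
query = min(
    fragments_per_structure,
    key=lambda name: (max(fragments_per_structure[name]),
                      sum(fragments_per_structure[name])),
)
print(query)   # 'mol_b'
```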
#### C) Use a breadth-first search (BFS) and a priority queue to enumerate the fragment subgraphs
__C1__
* Enumeration based on growing a so called seed
* Seed: atoms/bonds in current subgraph, exclusion set (bonds not applicable for growth)
* To prevent redundancy:
* Initial seed is first bond in the fragment, can potentially grow to size of whole fragment
* Second seed is second bond, is excluded from using the first bond
* Third seed starts from the third bond, excluded from using the first and second
* ...
__C2__
* Seed grows along connected bonds (not in exclusions set or already in seed)
* All growing possibilities are considered at every step
* E.g. if there are $N$ possible bonds for extension, $2^N - 1$ possible new seeds are added to the queue
* Enumeration ends when there are no new bonds to add to subgraph (exclude seed from queue)
* Largest seed will be processed first
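The seed-growth counting above can be checked with a small sketch: with N candidate bonds for extension, every non-empty subset of those bonds yields one new seed. The bond labels are made up.

```python
from itertools import combinations

# N = 3 candidate bonds along which the current seed could grow.
candidate_bonds = ["b1", "b2", "b3"]

# Every non-empty subset of the candidate bonds produces a new seed.
new_seeds = [
    subset
    for size in range(1, len(candidate_bonds) + 1)
    for subset in combinations(candidate_bonds, size)
]
print(len(new_seeds))   # 7 non-empty subsets of 3 bonds
```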
#### D) Prune seeds not present in all of the other structures
* At each growth state $\rightarrow$ check if new seed exists in all other structures
* Else: Exclude seed from queue
#### E) Prune seeds without sufficient growing potential
* Evaluation of growth potential from exclusion list and possible edges for extension
* If potential smaller than current best subgraph $\rightarrow$ exclude seed from queue
Utilizing these approaches it is then trivial to keep track of the largest subgraph which corresponds to the maximum common substructure.
## Practical
```
from collections import defaultdict
from pathlib import Path
from copy import deepcopy
import random
from ipywidgets import interact, fixed, widgets
import pandas as pd
import matplotlib.pyplot as plt
from rdkit import Chem, Geometry
from rdkit.Chem import AllChem
from rdkit.Chem import Draw
from rdkit.Chem import rdFMCS
from rdkit.Chem import PandasTools
from teachopencadd.utils import seed_everything
seed_everything()
HERE = Path(_dh[-1])
DATA = HERE / "data"
```
### Load and draw molecules
Cluster data is taken from **Talktorial T005**; later we will also use the EGFR molecules from **Talktorial T001**.
```
sdf = str(HERE / "../T005_compound_clustering/data/molecule_set_largest_cluster.sdf")
supplier = Chem.ForwardSDMolSupplier(sdf)
mols = list(supplier)
print(f"Set with {len(mols)} molecules loaded.")
# NBVAL_CHECK_OUTPUT
# Show only first 10 molecules -- use slicing
num_mols = 10
legends = [mol.GetProp("_Name") for mol in mols]
Draw.MolsToGridImage(mols[:num_mols], legends=legends[:num_mols], molsPerRow=5)
```
### Run the FMCS algorithm with different input parameters
The FMCS algorithm is implemented in RDKit: [rdFMCS](https://rdkit.org/docs/source/rdkit.Chem.rdFMCS.html)
#### Default values
In the simplest case only a list of molecules is given as a parameter.
```
mcs1 = rdFMCS.FindMCS(mols)
print(f"MCS1 contains {mcs1.numAtoms} atoms and {mcs1.numBonds} bonds.")
print("MCS SMARTS string:", mcs1.smartsString)
# NBVAL_CHECK_OUTPUT
# Draw substructure from Smarts
m1 = Chem.MolFromSmarts(mcs1.smartsString)
Draw.MolToImage(m1, legend="MCS1")
```
Define a helper function to draw the molecules with the highlighted MCS.
```
def highlight_molecules(molecules, mcs, number, label=True, same_orientation=True, **kwargs):
    """Highlight the MCS in our query molecules"""
    molecules = deepcopy(molecules)
    # convert MCS to molecule
    pattern = Chem.MolFromSmarts(mcs.smartsString)
    # find the matching atoms in each molecule
    matching = [molecule.GetSubstructMatch(pattern) for molecule in molecules[:number]]
    legends = None
    if label is True:
        legends = [x.GetProp("_Name") for x in molecules]
    # Align by matched substructure so they are depicted in the same orientation
    # Adapted from: https://gist.github.com/greglandrum/82d9a86acb3b00d3bb1df502779a5810
    if same_orientation:
        mol, match = molecules[0], matching[0]
        AllChem.Compute2DCoords(mol)
        coords = [mol.GetConformer().GetAtomPosition(x) for x in match]
        coords2D = [Geometry.Point2D(pt.x, pt.y) for pt in coords]
        for mol, match in zip(molecules[1:number], matching[1:number]):
            if not match:
                continue
            coord_dict = {match[i]: coord for i, coord in enumerate(coords2D)}
            AllChem.Compute2DCoords(mol, coordMap=coord_dict)
    return Draw.MolsToGridImage(
        molecules[:number],
        legends=legends,
        molsPerRow=5,
        highlightAtomLists=matching[:number],
        subImgSize=(200, 200),
        **kwargs,
    )

highlight_molecules(mols, mcs1, 5)
```
Save image to disk
```
img = highlight_molecules(mols, mcs1, 3, useSVG=True)
# Get SVG data
molsvg = img.data
# Set background to transparent & Enlarge size of label
molsvg = molsvg.replace("opacity:1.0", "opacity:0.0").replace("12px", "20px")
# Save altered SVG data to file
with open(DATA / "mcs_largest_cluster.svg", "w") as f:
    f.write(molsvg)
```
#### Set a threshold
It is possible to lower the threshold for the substructure match, so that the MCS only has to occur in, e.g., 80% of the input structures.
```
mcs2 = rdFMCS.FindMCS(mols, threshold=0.8)
print(f"MCS2 contains {mcs2.numAtoms} atoms and {mcs2.numBonds} bonds.")
print("SMARTS string:", mcs2.smartsString)
# NBVAL_CHECK_OUTPUT
# Draw substructure
m2 = Chem.MolFromSmarts(mcs2.smartsString)
Draw.MolsToGridImage([m1, m2], legends=["MCS1", "MCS2: +threshold=0.8"])
highlight_molecules(mols, mcs2, 5)
```
<!-- FIXME: CI - CHEMBL27685 id is not in this subset, check that T005 produces deterministic outputs -->
As we can see in this example, some molecules were left out due to the threshold of `0.8`. Lowering the threshold allows us to find a larger common substructure that contains, e.g., a benzene ring with a meta-substituted fluorine and a longer alkane chain.
#### Match ring bonds
In the above example it may not be obvious, but by default ring bonds can match non-ring bonds.
From an application point of view, we often want to retain rings. To do so, set the `ringMatchesRingOnly` parameter to `True`; then ring bonds only match other ring bonds.
```
mcs3 = rdFMCS.FindMCS(mols, threshold=0.8, ringMatchesRingOnly=True)
print(f"MCS3 contains {mcs3.numAtoms} atoms and {mcs3.numBonds} bonds.")
print("SMARTS string:", mcs3.smartsString)
# NBVAL_CHECK_OUTPUT
# Draw substructure
m3 = Chem.MolFromSmarts(mcs3.smartsString)
Draw.MolsToGridImage([m1, m2, m3], legends=["MCS1", "MCS2: +threshold=0.8", "MCS3: +ringmatch"])
```
We can see here that depending on the chosen thresholds and parameters, we get slightly different MCS. Note that there are more parameter options available in the [RDKit FMCS module](https://www.rdkit.org/docs/GettingStartedInPython.html#maximum-common-substructure), e.g. considering atom, bond or valence matching.
```
highlight_molecules(mols, mcs3, 5)
```
### A more diverse set: the EGFR compounds downloaded from ChEMBL
We restrict the data to only highly active molecules (pIC50>9) and detect the maximum common scaffold in this subset.
```
# Read full EGFR data
mol_df = pd.read_csv(HERE / "../T001_query_chembl/data/EGFR_compounds.csv", index_col=0)
print("Total number of compounds:", mol_df.shape[0])
# Only keep molecules with pIC50 > 9 (IC50 > 1nM)
mol_df = mol_df[mol_df.pIC50 > 9]
print("Number of compounds with pIC50 > 9:", mol_df.shape[0])
# NBVAL_CHECK_OUTPUT
# Add molecule column to data frame
PandasTools.AddMoleculeColumnToFrame(mol_df, "smiles")
mol_df.head(3)
```
We do our calculations on the selected highly active molecules only.
```
mols_diverse = []
# Note: discarded variables we do not care about are usually referred to with a single underscore
for _, row in mol_df.iterrows():
m = Chem.MolFromSmiles(row.smiles)
m.SetProp("_Name", row.molecule_chembl_id)
mols_diverse.append(m)
```
In the interest of time, we randomly pick 50 molecules from this set.
```
# We have fixed the random seed above (imports) for deterministic results
mols_diverse_sample = random.sample(mols_diverse, 50)
```
We now calculate the same three variants of MCSs as described above and draw the respective substructures. We use a slightly lower threshold to account for the larger diversity in the set.
```
threshold_diverse = 0.7
mcs1 = rdFMCS.FindMCS(mols_diverse_sample)
print("SMARTS string1:", mcs1.smartsString)
mcs2 = rdFMCS.FindMCS(mols_diverse_sample, threshold=threshold_diverse)
print("SMARTS string2:", mcs2.smartsString)
mcs3 = rdFMCS.FindMCS(mols_diverse_sample, ringMatchesRingOnly=True, threshold=threshold_diverse)
print("SMARTS string3:", mcs3.smartsString)
# NBVAL_CHECK_OUTPUT
# Draw substructures
m1 = Chem.MolFromSmarts(mcs1.smartsString)
m2 = Chem.MolFromSmarts(mcs2.smartsString)
m3 = Chem.MolFromSmarts(mcs3.smartsString)
Draw.MolsToGridImage(
[m1, m2, m3],
legends=[
"MCS1",
f"MCS2: +threshold={threshold_diverse}",
"MCS3: +ringmatch",
],
)
```
This time it becomes clearer that setting `ringMatchesRingOnly=True` provides a more intuitive representation of the scaffold the molecules share.
```
highlight_molecules(mols_diverse_sample, mcs3, 5)
```
### Identification of MCS using interactive cut-off adaption
We can also change the threshold interactively. For that, we need the helper function defined below. It takes two arguments: `molecules` (fixed, so it is not configurable with a widget) and `percentage` (whose value is determined by the interactive widget). Every time you modify the state of the slider, this helper function is called.
```
def render_mcs(molecules, percentage):
"""Interactive widget helper. `molecules` must be wrapped in `ipywidgets.fixed()`,
while `percentage` will be determined by an IntSlider widget."""
tmcs = rdFMCS.FindMCS(molecules, threshold=percentage / 100.0)
if tmcs is None:
print("No MCS found")
return None
m = Chem.MolFromSmarts(tmcs.smartsString)
print(tmcs.smartsString)
return m
# Note that the slider may take a few seconds to react
interact(
render_mcs,
molecules=fixed(mols_diverse_sample),
percentage=widgets.IntSlider(min=0, max=100, step=10, value=70),
);
```
## Discussion
Maximum substructures that a set of molecules have in common can be a useful way to visualize common scaffolds. In this talktorial, we have used the FMCS algorithm to find the maximum common substructure within the largest cluster from **Talktorial T005**. An interactive widget allowed us to explore the impact of different thresholds on the maximum common substructure in our dataset.
## Quiz
* Why is the calculation of the maximum common substructure (MCS) useful?
* Can you briefly describe how an MCS can be calculated?
* What does the typical fragment of an active EGFR compound look like?

[](https://githubtocolab.com/JohnSnowLabs/spark-nlp-workshop/blob/master/tutorials/streamlit_notebooks/GRAMMAR_EN.ipynb)
# **Extract Part of speech tags and perform dependency parsing on a text**
## 1. Colab Setup
```
!wget http://setup.johnsnowlabs.com/colab.sh -O - | bash
# !bash colab.sh
# -p is for pyspark
# -s is for spark-nlp
# !bash colab.sh -p 3.1.1 -s 3.0.1
# by default they are set to the latest
import pandas as pd
import numpy as np
import json
from sklearn.model_selection import train_test_split
from sklearn.metrics import classification_report, accuracy_score
from pyspark.ml import Pipeline
from pyspark.sql import SparkSession
import pyspark.sql.functions as F
from sparknlp.annotator import *
from sparknlp.base import *
import sparknlp
from sparknlp.pretrained import PretrainedPipeline
```
## 2. Start Spark Session
```
spark = sparknlp.start()
```
## 3. Select the DL model
```
MODEL_NAME='dependency_typed_conllu'
```
## 4. Some sample examples
```
## Generating Example Files ##
text_list = [
"""John Snow is a good man. He knows a lot about science.""",
"""In what country is the WTO headquartered?""",
"""I was wearing my dark blue shirt and tie.""",
"""The Geneva Motor Show is the most popular car show of the year.""",
"""Bill Gates and Steve Jobs had periods of civility.""",
]
```
## 5. Define Spark NLP pipeline
```
documentAssembler = DocumentAssembler()\
.setInputCol("text")\
.setOutputCol("document")
tokenizer = Tokenizer() \
.setInputCols(["document"]) \
.setOutputCol("token")
pos = PerceptronModel.pretrained("pos_anc", 'en')\
.setInputCols("document", "token")\
.setOutputCol("pos")
dep_parser = DependencyParserModel.pretrained('dependency_conllu')\
.setInputCols(["document", "pos", "token"])\
.setOutputCol("dependency")
typed_dep_parser = TypedDependencyParserModel.pretrained('dependency_typed_conllu')\
.setInputCols(["token", "pos", "dependency"])\
.setOutputCol("dependency_type")
nlpPipeline = Pipeline(
stages = [
documentAssembler,
tokenizer,
pos,
dep_parser,
typed_dep_parser
])
```
## 6. Select the example to test
```
index=0
```
## 7. Run the pipeline on selected example
```
empty_df = spark.createDataFrame([['']]).toDF("text")
pipelineModel = nlpPipeline.fit(empty_df)
df = spark.createDataFrame(pd.DataFrame({"text":[text_list[index]]}))
result = pipelineModel.transform(df)
```
## 8. Visualize results
```
result.select(F.explode(F.arrays_zip('token.result',
'token.begin',
'token.end',
'pos.result',
'dependency.result',
'dependency.metadata',
'dependency_type.result')).alias("cols"))\
.select(F.expr("cols['0']").alias("chunk"),
F.expr("cols['1']").alias("begin"),
F.expr("cols['2']").alias("end"),
F.expr("cols['3']").alias("pos"),
F.expr("cols['4']").alias("dependency"),
F.expr("cols['5']").alias("dependency_start"),
F.expr("cols['6']").alias("dependency_type")).show(truncate=False)
```
## PP2P protocol
By Roman Sasik (rsasik@ucsd.edu)
This Notebook describes the steps used in Gene Ontology analysis, which produces both conditional and unconditional posterior probabilities that a GO term is differentially regulated in a given experiment. It is assumed that posterior probabilities for all genes have been calculated, either directly using _eBayes_ in the _limma R_ package, or indirectly using the _lfdr_ function of the _qvalue R_ package. The conditional probabilities are defined in the context of the GO graph structure:
<img src = "files/GOstructure.png">
The node in red can be pronounced conditionally significant if it is significant _given_ the status of its descendant nodes. For instance, if the dark grey node had been found significant and the light grey nodes had been found not significant, the red node can be declared significant only if there are more significant genes in it than in the dark grey node.
The program _PP2P_ works both for "continuous" PP's and for a simple cutoff, which is equivalent to the usual two-gene-list GO analysis (one list of significant genes, the other of all expressed genes).
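The simple-cutoff case reduces to the classical two-list enrichment test, whose p-value is a hypergeometric tail probability. A minimal sketch with the standard library only (the numbers are toy values for illustration, and this is not the PP2P implementation itself, which additionally conditions on descendant terms):

```python
from math import comb

def hypergeom_pvalue(N, K, n, k):
    """Upper-tail probability P(X >= k) when drawing n genes out of N,
    of which K are significant: the classical enrichment p-value.
    N: expressed genes, K: significant genes overall,
    n: genes annotated to the GO term, k: significant genes in the term."""
    return sum(
        comb(K, i) * comb(N - K, n - i) for i in range(k, min(K, n) + 1)
    ) / comb(N, n)

# Toy numbers (not from the paper): 10000 expressed genes, 500 significant,
# a GO term with 40 genes of which 12 are significant
p = hypergeom_pvalue(10000, 500, 40, 12)
print(f"enrichment p-value: {p:.3e}")
```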
The algorithm is described in this paper:
_"Posterior conditional probabilities for gene ontologies",_ R Sasik and A Chang, (to be published)
The GNU Fortran compiler _gfortran_ (part of gcc) is assumed to be installed.
### Compilation of the code
Execute this command:
```
!gfortran -ffree-line-length-none PP2p_branch_conditional_exact_pvalues.f90
!ls
```
### Input file
There is one input file: a tab-delimited list of genes, which must meet these requirements:

1. The first line is the header line.
2. The first column contains Entrez gene ID's of all expressed genes, in no particular order. Each gene ID must be present only once.
3. The second column contains posterior probabilities (PP) of the genes in the first column. PP is the probability that, given some prior assumptions, the gene is differentially expressed (DE).

An example of such a file is C_T_PP.txt. The genes in it are ordered by their PP, but they don't have to be. This is the top of that file:
<img src = "files/input.png">
There are times when we do not have the PP's, but instead have a "list of DE genes." In that case, define PP's in the second column as 1 when the corresponding gene is among the significant genes and 0 otherwise. An example of such a file is C_T_1652.txt. (The 1652 in the file name indicates the number of significant genes, but it has no other significance).
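Constructing such a 0/1 input file from a plain list of DE genes can be sketched as follows (the gene IDs and file name below are made-up examples, not taken from C_T_1652.txt):

```python
import csv

expressed = ["5243", "7124", "3845", "672", "1956"]  # Entrez IDs of all expressed genes
de_genes = {"7124", "1956"}                          # the "list of DE genes"

with open("example_PP.txt", "w", newline="") as fh:
    writer = csv.writer(fh, delimiter="\t")
    writer.writerow(["EntrezID", "PP"])              # header line
    for gene in expressed:                           # one row per expressed gene
        writer.writerow([gene, 1 if gene in de_genes else 0])  # PP is 1 for DE genes, else 0
```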
## Running PP2p
Enter this command if you want to find differentially regulated GO terms in the Biological Process ontology, in the experiment defined by the input file C_T_PP.txt, and if you want a term reported as significant with posterior error probability of 0.01:
```
!./a.out BP C_T_PP.txt 0.01
```
The output is a number of files:
#### Conditional reporting is done in these files:
BP_C_T_PP_0.01_conditional.dot
BP_C_T_PP_0.01_conditional_lfdr_expanded.txt
BP_C_T_PP_0.01_conditional_lfdr.txt
#### Unconditional reporting is done in these files (BH indicates Benjamini-Hochberg adjustment of raw p-values; lfdr indicates local false discovery rate (Storey) corresponding to the raw p-values):
BP_C_T_PP_0.01_unconditional_BH_expanded.txt
BP_C_T_PP_0.01_unconditional_BH.txt
BP_C_T_PP_0.01_unconditional.dot
BP_C_T_PP_0.01_unconditional_lfdr_expanded.txt
BP_C_T_PP_0.01_unconditional_lfdr.txt
For instance, the simple list of conditionally significant GO terms is in BP_C_T_PP_0.01_conditional_lfdr.txt and looks like this:
<img src = "files/xls_conditional.png">
This is the entire file. There are no more conditionally significant GO terms. The way to read this output is from top to bottom, as GO terms are reported in levels depending on the significance (or not) of their child terms. Therefore, the "level" column also corresponds to the level of the GO organization - the lower the level, the more specific (and smaller) the term is.
The expanded files contain all the genes from the reported GO terms. For instance, the top of BP_C_T_PP_0.01_conditional_lfdr_expanded.txt looks like this:
<img src = "files/xls_conditional_expanded.png">
The .dot files encode the ontology structure of the significant terms. Convert them into pdf files using the following commands:
```
!dot -Tfig BP_C_T_PP_0.01_conditional.dot > BP_C_T_PP_0.01_conditional.fig
!fig2dev -L pdf BP_C_T_PP_0.01_conditional.fig BP_C_T_PP_0.01_conditional.pdf
!ls *pdf
```
The graph looks like this:
<img src = "files/dot_conditional.png">
Cleanup after the exercise:
```
!rm BP_C_T*
```
# Kaggle kernel transcription 2
* [Kernel](https://www.kaggle.com/bogorodvo/upd-lightgbm-baseline-model-using-sparse-matrix)
* Sorted by score
```
import pandas as pd
import numpy as np
from tqdm import tqdm_notebook
import lightgbm as lgb
#import xgboost as xgb
from scipy.sparse import vstack, csr_matrix, save_npz, load_npz
from sklearn.preprocessing import LabelEncoder, OneHotEncoder
from sklearn.model_selection import StratifiedKFold
#from sklearn.metrics import roc_auc_score
import gc
gc.enable()
dtypes = {
'MachineIdentifier': 'category',
'ProductName': 'category',
'EngineVersion': 'category',
'AppVersion': 'category',
'AvSigVersion': 'category',
'IsBeta': 'int8',
'RtpStateBitfield': 'float16',
'IsSxsPassiveMode': 'int8',
'DefaultBrowsersIdentifier': 'float16',
'AVProductStatesIdentifier': 'float32',
'AVProductsInstalled': 'float16',
'AVProductsEnabled': 'float16',
'HasTpm': 'int8',
'CountryIdentifier': 'int16',
'CityIdentifier': 'float32',
'OrganizationIdentifier': 'float16',
'GeoNameIdentifier': 'float16',
'LocaleEnglishNameIdentifier': 'int8',
'Platform': 'category',
'Processor': 'category',
'OsVer': 'category',
'OsBuild': 'int16',
'OsSuite': 'int16',
'OsPlatformSubRelease': 'category',
'OsBuildLab': 'category',
'SkuEdition': 'category',
'IsProtected': 'float16',
'AutoSampleOptIn': 'int8',
'PuaMode': 'category',
'SMode': 'float16',
'IeVerIdentifier': 'float16',
'SmartScreen': 'category',
'Firewall': 'float16',
'UacLuaenable': 'float32',
'Census_MDC2FormFactor': 'category',
'Census_DeviceFamily': 'category',
'Census_OEMNameIdentifier': 'float16',
'Census_OEMModelIdentifier': 'float32',
'Census_ProcessorCoreCount': 'float16',
'Census_ProcessorManufacturerIdentifier': 'float16',
'Census_ProcessorModelIdentifier': 'float16',
'Census_ProcessorClass': 'category',
'Census_PrimaryDiskTotalCapacity': 'float32',
'Census_PrimaryDiskTypeName': 'category',
'Census_SystemVolumeTotalCapacity': 'float32',
'Census_HasOpticalDiskDrive': 'int8',
'Census_TotalPhysicalRAM': 'float32',
'Census_ChassisTypeName': 'category',
'Census_InternalPrimaryDiagonalDisplaySizeInInches': 'float16',
'Census_InternalPrimaryDisplayResolutionHorizontal': 'float16',
'Census_InternalPrimaryDisplayResolutionVertical': 'float16',
'Census_PowerPlatformRoleName': 'category',
'Census_InternalBatteryType': 'category',
'Census_InternalBatteryNumberOfCharges': 'float32',
'Census_OSVersion': 'category',
'Census_OSArchitecture': 'category',
'Census_OSBranch': 'category',
'Census_OSBuildNumber': 'int16',
'Census_OSBuildRevision': 'int32',
'Census_OSEdition': 'category',
'Census_OSSkuName': 'category',
'Census_OSInstallTypeName': 'category',
'Census_OSInstallLanguageIdentifier': 'float16',
'Census_OSUILocaleIdentifier': 'int16',
'Census_OSWUAutoUpdateOptionsName': 'category',
'Census_IsPortableOperatingSystem': 'int8',
'Census_GenuineStateName': 'category',
'Census_ActivationChannel': 'category',
'Census_IsFlightingInternal': 'float16',
'Census_IsFlightsDisabled': 'float16',
'Census_FlightRing': 'category',
'Census_ThresholdOptIn': 'float16',
'Census_FirmwareManufacturerIdentifier': 'float16',
'Census_FirmwareVersionIdentifier': 'float32',
'Census_IsSecureBootEnabled': 'int8',
'Census_IsWIMBootEnabled': 'float16',
'Census_IsVirtualDevice': 'float16',
'Census_IsTouchEnabled': 'int8',
'Census_IsPenCapable': 'int8',
'Census_IsAlwaysOnAlwaysConnectedCapable': 'float16',
'Wdft_IsGamer': 'float16',
'Wdft_RegionIdentifier': 'float16',
'HasDetections': 'int8'
}
%%time
print('Download Train and Test Data.\n')
train = pd.read_csv('./data/train.csv', dtype=dtypes, low_memory=True)
# train['MachineIdentifier'] = train.index.astype('uint32')
test = pd.read_csv('./data/test.csv', dtype=dtypes, low_memory=True)
# test['MachineIdentifier'] = test.index.astype('uint32')
debug = False
if debug:
train = train[:10000]
test = test[:10000]
gc.collect()
train.MachineIdentifier = range(len(train))
train.reset_index(drop=True, inplace=True)
test.MachineIdentifier = range(len(test))
test.reset_index(drop=True, inplace=True)
print('Transform all features to category.\n')
for usecol in tqdm_notebook(train.columns.tolist()[1:-1]):
train[usecol] = train[usecol].astype('str')
test[usecol] = test[usecol].astype('str')
#Fit LabelEncoder
le = LabelEncoder().fit(
np.unique(train[usecol].unique().tolist()+
test[usecol].unique().tolist()))
#At the end 0 will be used for dropped values
train[usecol] = le.transform(train[usecol])+1
test[usecol] = le.transform(test[usecol])+1
agg_tr = (train
.groupby([usecol])
.aggregate({'MachineIdentifier':'count'})
.reset_index()
.rename({'MachineIdentifier':'Train'}, axis=1))
agg_te = (test
.groupby([usecol])
.aggregate({'MachineIdentifier':'count'})
.reset_index()
.rename({'MachineIdentifier':'Test'}, axis=1))
agg = pd.merge(agg_tr, agg_te, on=usecol, how='outer').replace(np.nan, 0)
#Select values with more than 1000 observations
agg = agg[(agg['Train'] > 800)].reset_index(drop=True)
agg['Total'] = agg['Train'] + agg['Test']
#Drop unbalanced values
agg = agg[(agg['Train'] / agg['Total'] > 0.1) & (agg['Train'] / agg['Total'] < 0.9)]
agg[usecol+'Copy'] = agg[usecol]
train[usecol] = (pd.merge(train[[usecol]],
agg[[usecol, usecol+'Copy']],
on=usecol, how='left')[usecol+'Copy']
.replace(np.nan, 0).astype('int').astype('category'))
test[usecol] = (pd.merge(test[[usecol]],
agg[[usecol, usecol+'Copy']],
on=usecol, how='left')[usecol+'Copy']
.replace(np.nan, 0).astype('int').astype('category'))
del le, agg_tr, agg_te, agg, usecol
gc.collect()
train.shape
y_train = np.array(train['HasDetections'])
train_ids = train.index
test_ids = test.index
del train['HasDetections'], train['MachineIdentifier'], test['MachineIdentifier']
gc.collect()
print("If you don't want use Sparse Matrix choose Kernel Version 2 to get simple solution.\n")
print('--------------------------------------------------------------------------------------------------------')
print('Transform Data to Sparse Matrix.')
print('Sparse Matrix can be used to fit a lot of models, eg. XGBoost, LightGBM, Random Forest, K-Means and etc.')
print('To concatenate Sparse Matrices by column use hstack()')
print('Read more about Sparse Matrix https://docs.scipy.org/doc/scipy/reference/sparse.html')
print('Good Luck!')
print('--------------------------------------------------------------------------------------------------------')
#Fit OneHotEncoder
ohe = OneHotEncoder(categories='auto', sparse=True, dtype='uint8').fit(train)
#Transform data using small groups to reduce memory usage
m = 100000
train = vstack([ohe.transform(train[i*m:(i+1)*m]) for i in range(train.shape[0] // m + 1)])
test = vstack([ohe.transform(test[i*m:(i+1)*m]) for i in range(test.shape[0] // m + 1)])
train.shape
save_npz('./data_temp/train.npz', train, compressed=True)
save_npz('./data_temp/test.npz', test, compressed=True)
del ohe, train, test
gc.collect()
skf = StratifiedKFold(n_splits=5, shuffle=True, random_state=42)
skf.get_n_splits(train_ids, y_train)
lgb_test_result = np.zeros(test_ids.shape[0])
lgb_train_result = np.zeros(train_ids.shape[0])
#xgb_test_result = np.zeros(test_ids.shape[0])
#xgb_train_result = np.zeros(train_ids.shape[0])
counter = 0
print('\nLightGBM\n')
for train_index, test_index in skf.split(train_ids, y_train):
print('Fold {}\n'.format(counter + 1))
train = load_npz('./data_temp/train.npz')
X_fit = vstack([train[train_index[i*m:(i+1)*m]] for i in range(train_index.shape[0] // m + 1)])
X_val = vstack([train[test_index[i*m:(i+1)*m]] for i in range(test_index.shape[0] // m + 1)])
X_fit, X_val = csr_matrix(X_fit, dtype='float32'), csr_matrix(X_val, dtype='float32')
y_fit, y_val = y_train[train_index], y_train[test_index]
del train
gc.collect()
lgb_model = lgb.LGBMClassifier(max_depth=-1,
n_estimators=30000,
learning_rate=0.05,
num_leaves=2**12-1,
colsample_bytree=0.28,
objective='binary',
n_jobs=-1)
#xgb_model = xgb.XGBClassifier(max_depth=6,
# n_estimators=30000,
# colsample_bytree=0.2,
# learning_rate=0.1,
# objective='binary:logistic',
# n_jobs=-1)
lgb_model.fit(X_fit, y_fit, eval_metric='auc',
eval_set=[(X_val, y_val)],
verbose=100, early_stopping_rounds=100)
#xgb_model.fit(X_fit, y_fit, eval_metric='auc',
# eval_set=[(X_val, y_val)],
# verbose=1000, early_stopping_rounds=300)
#lgb_train_result[test_index] += lgb_model.predict_proba(X_val)[:,1]
#xgb_train_result[test_index] += xgb_model.predict_proba(X_val)[:,1]
del X_fit, X_val, y_fit, y_val, train_index, test_index
gc.collect()
test = load_npz('./data_temp/test.npz')
test = csr_matrix(test, dtype='float32')
lgb_test_result += lgb_model.predict_proba(test)[:,1]
#xgb_test_result += xgb_model.predict_proba(test)[:,1]
counter += 1
del test
gc.collect()
```
Fold 1
Training until validation scores don't improve for 100 rounds.
* [100] valid_0's auc: 0.731814 valid_0's binary_logloss: 0.604756
* [200] valid_0's auc: 0.737255 valid_0's binary_logloss: 0.598171
* [300] valid_0's auc: 0.738762 valid_0's binary_logloss: 0.596577
* [400] valid_0's auc: 0.73902 valid_0's binary_logloss: 0.596246
* [500] valid_0's auc: 0.738941 valid_0's binary_logloss: 0.596295
Early stopping, best iteration is:
* [413] valid_0's auc: 0.739032 valid_0's binary_logloss: 0.596234
Fold 2
Training until validation scores don't improve for 100 rounds.
* [100] valid_0's auc: 0.732085 valid_0's binary_logloss: 0.604716
* [200] valid_0's auc: 0.737355 valid_0's binary_logloss: 0.598296
* [300] valid_0's auc: 0.738891 valid_0's binary_logloss: 0.596623
* [400] valid_0's auc: 0.739114 valid_0's binary_logloss: 0.596321
Early stopping, best iteration is:
* [392] valid_0's auc: 0.739125 valid_0's binary_logloss: 0.596318
Fold 3
Training until validation scores don't improve for 100 rounds.
* [100] valid_0's auc: 0.731732 valid_0's binary_logloss: 0.604695
* [200] valid_0's auc: 0.7373 valid_0's binary_logloss: 0.598301
* [300] valid_0's auc: 0.739042 valid_0's binary_logloss: 0.596534
* [400] valid_0's auc: 0.73933 valid_0's binary_logloss: 0.596197
* [500] valid_0's auc: 0.739239 valid_0's binary_logloss: 0.596242
Early stopping, best iteration is:
* [403] valid_0's auc: 0.739335 valid_0's binary_logloss: 0.596189
Fold 4
Training until validation scores don't improve for 100 rounds.
* [100] valid_0's auc: 0.732696 valid_0's binary_logloss: 0.60421
* [200] valid_0's auc: 0.738141 valid_0's binary_logloss: 0.597535
* [300] valid_0's auc: 0.739715 valid_0's binary_logloss: 0.595869
* [400] valid_0's auc: 0.739938 valid_0's binary_logloss: 0.595555
Early stopping, best iteration is:
* [350] valid_0's auc: 0.739944 valid_0's binary_logloss: 0.595605
Fold 5
Training until validation scores don't improve for 100 rounds.
* [100] valid_0's auc: 0.731629 valid_0's binary_logloss: 0.60482
* [200] valid_0's auc: 0.737059 valid_0's binary_logloss: 0.598237
* [300] valid_0's auc: 0.738603 valid_0's binary_logloss: 0.596627
* [400] valid_0's auc: 0.738839 valid_0's binary_logloss: 0.596299
Early stopping, best iteration is:
* [396] valid_0's auc: 0.73884 valid_0's binary_logloss: 0.596306
```
sub = pd.DataFrame({"MachineIdentifier":test.MachineIdentifier, "HasDetections": lgb_test_result / counter})
submission = pd.read_csv('./data/sample_submission.csv')
submission.to_csv('./data/submission_lgb_more_feature.csv', index=False)
submission.HasDetections = lgb_test_result / counter
t1 = set(range(len(submission.index)))
t2 = set(sub.index)
submission.iloc[list(t1.difference(t2))].append(sub).sort_values('MachineIdentifier').to_csv('./data/submission_split_av.csv', index=False)
# for machine_id in tqdm_notebook(sub.MachineIdentifier):
# submission.loc[submission.MachineIdentifier == machine_id, 'HasDetections'] = sub[sub.MachineIdentifier == machine_id].HasDetections
submission = pd.read_csv('./data/sample_submission.csv')
# submission['HasDetections'] = lgb_test_result / counter
# submission.to_csv('lgb_submission.csv', index=False)
submission['HasDetections'] = lgb_test_result / counter
submission.to_csv('./data/submission_temp.csv', index=False)
```
# Model blending test
```
sub2 = pd.read_csv('./data/nffm_submission.csv')
sub3 = pd.read_csv('./data/ms_malware.csv')
submission.HasDetections = (2*submission.HasDetections + 2*sub2.HasDetections + sub3.HasDetections) / 5
submission.to_csv('./data/submission_temp3.csv', index=False)
```
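The blend above is a weighted average of the three submissions with weights 2:2:1. A toy sketch of the same operation (the arrays are illustrative values standing in for the models' test predictions):

```python
import numpy as np

# Toy probabilities standing in for the three models' test predictions
p_lgb = np.array([0.2, 0.8, 0.5])
p_nffm = np.array([0.3, 0.7, 0.6])
p_other = np.array([0.1, 0.9, 0.4])

# Same 2:2:1 weighting as the submission above
blend = (2 * p_lgb + 2 * p_nffm + p_other) / 5
print(blend)  # approximately [0.22 0.78 0.52]
```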
```
# !pip install prince
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
plt.style.use('fivethirtyeight')
# Dimensionality reduction
from sklearn.preprocessing import *
from sklearn.decomposition import PCA
import prince
df = pd.read_excel('../data/preprocess_data.xlsx')
categorical_variable = ['5일장', '복합장', '상설장','공중화장실보유여부', '주차장보유여부','농산물이 주요품목']
df_number = df.drop(categorical_variable, axis = 1)
scaled_data = StandardScaler().fit_transform(np.log1p(df_number))
data_scale_log=pd.DataFrame(data=scaled_data, columns=df_number.columns)
```
## Dimensionality reduction
- Dimensionality reduction is performed separately for market characteristics, nearby-facility characteristics, and consumer characteristics.
- Continuous variables are reduced with PCA; categorical variables with the MCA technique.
- The explained-variance target is set to 90%.
- The floating-population analysis used non-public data accessed on site at Statistics Korea, so the results here may differ somewhat from the presentation (PPT).
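The notebook uses scikit-learn's `PCA(n_components=0.9)` to keep enough components for 90% explained variance; the same selection rule can be sketched with plain NumPy on synthetic data (illustrative only, not the market data itself):

```python
import numpy as np

# Synthetic stand-in for the scaled market data: 100 rows, 5 features,
# with one feature nearly duplicating another so fewer components suffice
rng = np.random.default_rng(0)
X = rng.normal(size=(100, 5))
X[:, 3] = X[:, 0] + 0.01 * rng.normal(size=100)

Xc = X - X.mean(axis=0)                      # center the data
U, s, Vt = np.linalg.svd(Xc, full_matrices=False)
explained = s**2 / np.sum(s**2)              # explained-variance ratio per component

# Smallest number of components whose cumulative explained variance reaches 90%
k = int(np.searchsorted(np.cumsum(explained), 0.90)) + 1
scores = Xc @ Vt[:k].T                       # principal-component scores
print(k, scores.shape)
```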
```
categorical = ['5일장', '복합장', '상설장','공중화장실보유여부', '주차장보유여부','농산물이 주요품목']
continuous = ['점포수','운영기간', '대형마트', '주차장', '대중교통', '학교', '편의점', '주유소,충전소',
'문화시설', '관광명소', '음식점', '카페', '공시지가', '행정동 인구', '행정시구 인구',
'시도별 소득월액', '시구 미성년자', '시구 젊은청년', '시구 소득인구', '시구 노년인구',
'동별 미성년자', '동별 젊은청년', '동별 소득인구','동별 노년인구']
df_categorical = df[categorical]
df_continuous = df[continuous]
```
### Market characteristics
The following results can be drawn from this analysis.
For the continuous market-characteristic variables, the floating-population data from Statistics Korea is missing in this run, so the results differ slightly.
Continuous variables
- Principal component 1
    - Many stores, a high official land price, and a short operating period
    - A large, newly established market
- Principal component 2
    - Few stores and a short operating period
    - A small, newly established market
- Principal component 3
    - Many stores, a low official land price, and a short operating period
    - A large, newly established market on the city outskirts
Categorical variables
- Component 1
    - A five-day and hybrid market with a parking lot
    - Five-day/hybrid market with a parking lot
- Component 2
    - The hybrid-market component is large
    - Hybrid market
- Component 3
    - Agricultural products are the main items; no public restroom or parking lot
    - Agricultural market without amenities
- Component 4
    - No parking lot, but has a public restroom
    - Market with a restroom but no parking lot
#### Market characteristics - continuous variables
```
market_feature = ['점포수', '운영기간', '공시지가']
pca = PCA(n_components=0.9)
market_scale_data = data_scale_log[market_feature]
pca.fit(market_scale_data)
pca_market = pd.DataFrame((pca.transform(market_scale_data)))
num_of_principal = pca_market.shape[1]
print('Number of principal components:', num_of_principal)
print('Explained variance of each principal component:')
for i in range(num_of_principal):
    print(f"Principal component {i+1}:", pca.explained_variance_ratio_[i])
pd.DataFrame(data=pca.components_,columns=market_scale_data.columns)
```
#### Market characteristics - categorical variables
```
mca = prince.MCA(n_components=4)
mca = mca.fit(df_categorical)
mca_eigen = mca.eigenvalues_
mca_score = np.array(mca_eigen)/sum(mca_eigen)
print('MCA explained variance:', round(mca_score[0], 3), round(mca_score[1], 3), round(mca_score[2], 3), round(mca_score[3], 3))
mca.column_coordinates(df_categorical)
df_mca = mca.transform(df_categorical)
```
### Nearby-facility characteristics
The following results can be drawn from this analysis.
- Principal component 1
    - Low component values overall
    - Market with underdeveloped nearby facilities
- Principal component 2
    - Many tourist attractions and cultural facilities nearby, and no large supermarket
    - Tourist-district market without a large supermarket
- Principal component 3
    - Many gas and charging stations nearby, and no tourist attractions or large supermarket
    - Non-tourist-district market with many gas/charging stations and no large supermarket
- Principal component 4
    - Many public-transit facilities nearby, and no gas or charging stations
    - Market with convenient public transit
- Principal component 5
    - Many schools nearby, along with public transit, tourist attractions, and gas/charging stations
    - Market near schools
- Principal component 6
    - Many cultural facilities, and no schools or tourist attractions
    - Market near cultural facilities, away from schools and tourist attractions
```
infra_feature = ['대형마트', '주차장', '대중교통', '학교', '편의점', '주유소,충전소', '문화시설', '관광명소', '음식점', '카페']
pca = PCA(n_components=0.9)
infra_scale_data = data_scale_log[infra_feature]
pca.fit(infra_scale_data)
pca_infra = pd.DataFrame((pca.transform(infra_scale_data)))
num_of_principal = pca_infra.shape[1]
print('Number of principal components:', num_of_principal)
print('Explained variance of each principal component:')
for i in range(num_of_principal):
    print(f"Principal component {i+1}:", pca.explained_variance_ratio_[i])
pd.DataFrame(data=pca.components_,columns=infra_scale_data.columns)
```
### Consumer characteristics
The following results can be drawn from this analysis.
- Principal component 1
    - Population figures are negative overall
    - Market in an outlying area
- Principal component 2
    - The district-level and neighborhood-level populations are inversely related
    - Market away from residential areas
```
consumer_feature = ['행정동 인구', '행정시구 인구', '시도별 소득월액', '시구 미성년자',
'시구 젊은청년', '시구 소득인구', '시구 노년인구', '동별 미성년자', '동별 젊은청년', '동별 소득인구', '동별 노년인구']
pca = PCA(n_components=0.9)
consumer_scale_data = data_scale_log[consumer_feature]
pca.fit(consumer_scale_data)
pca_consumer = pd.DataFrame((pca.transform(consumer_scale_data)))
num_of_principal = pca_consumer.shape[1]
print('Number of principal components:', num_of_principal)
print('Explained variance of each principal component:')
for i in range(num_of_principal):
    print(f"Principal component {i+1}:", pca.explained_variance_ratio_[i])
pd.DataFrame(data=pca.components_,columns=consumer_scale_data.columns).T
```
## Dimensionality reduction results
```
pca_mca=pd.concat([pca_market,df_mca,pca_infra,pca_consumer],axis=1)
pca_mca.columns=['규모가 있는 신설시장',
'규모가 작은 신설시장',
'도시 외각 지역의 규모가 큰 신설시장',
'주차장을 보유한 5일장, 복합장',
'복합장 시장',
'편의시설이 없는 농산물 시장',
'화장실만 보유한 시장',
'주변 시설이 낙후된 시장',
'대형마트가 없는 관광지구 시장',
'주유소,충전소가 많은 대형마트가 없는 비관광지구 시장',
'대중교통 이용이 편리한 시장',
'학교 주변 시장',
'학교, 관광명소와 떨어진 문화시설 주변 시장',
'외각지역의 시장',
'주거지에서 벗어난 시장']
pca_mca.head()
pca_mca.to_excel('../data/pca_mca_data.xlsx', index = False)
```
# 2. Numpy with images
All images can be seen as matrices where each element represents the value of one pixel. Multiple images (or matrices) can be stacked together into a "multi-dimensional matrix" or tensor. The different dimensions and the pixel values can have very different meanings depending on the type of image considered (e.g. time-lapse, volume, etc.), but the structure remains the same.
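As a quick illustration of the stacking idea, a few toy "images" combined into one tensor (the data is made up for the example):

```python
import numpy as np

# Three toy 4x5 grayscale "images" (e.g. frames of a time-lapse),
# stacked into a single (3, 4, 5) tensor
frames = [np.full((4, 5), i, dtype=np.uint8) for i in range(3)]
stack = np.stack(frames)
print(stack.shape)     # (3, 4, 5)
print(stack[1, 0, 0])  # pixel (0, 0) of the second frame -> 1
```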
Python does not allow by default to gracefully handle such multi-dimensional data. In particular it is not designed to handle matrix operations. [Numpy](https://numpy.org/) was developed to fill this blank and offers a clean way to handle creation and operations on such data called here **```arrays```**. Numpy is underlying a large number of packages and has become absolutely essential to Python scientific programming. In particular it underlies the functions of [scikit-image](https://scikit-image.org/), the main package used in this course for image processing. Scikit-image is then itself an important basis of other "higher-level" software like [CellProfiler](https://cellprofiler.org/). So Numpy really sits at the basis of the Python scientific world, and it is thus essential to have an understanding of it.
Instead of introducing Numpy in an abstract way, we are going here to present it through the lens of image processing in order to focus on the most useful features in the context of this course. For the moment, let's just import Numpy:
```
import numpy as np
```
We also import scikit-image and matplotlib. You will learn more about these packages in later chapters. At the moment we just want to be able to load and display an image (i.e. a matrix) to understand Numpy.
```
import skimage.io
import skimage
import matplotlib.pyplot as plt
plt.gray();
```
## 2.1 Exploring an image
We start by importing an image using scikit-image (see the next notebook [03-Image_import](03-Image_import.ipynb) for details). We have some data already available in the Data folder:
```
image = skimage.io.imread('../Data/neuron.tif')
```
### 2.1.1 Image size
The first thing we can do with the image is simply look at the output:
```
image
```
We see that Numpy tells us we have an array and we don't have a simple list of pixels, but a sort of *list of lists* representing the fact that we are dealing with a two-dimensional object. Each list represents one row of pixels. Jupyter smartly only shows us the first/last rows/columns. As mentioned before, the array is the fundamental data container implemented by Numpy, and **ALL** the images we are going to process in this course will be Numpy arrays.
We have seen before that all Python variables are also objects with methods and attributes. Numpy arrays are no exception. For example we can get information such as the ```shape``` of the array, i.e. number of rows, columns, planes etc.
```
image.shape
```
This means that we have an image of 1024 rows and 1280 columns. We can now look at it using matplotlib (see next chapter for details) using the ```plt.imshow()``` function:
```
plt.imshow(image);
```
### 2.1.2 Image type
```
image
```
In the output above we see that we have one additional piece of information: the array has ```dtype = uint8``` , which means that the image is of type *unsigned integer 8 bit*. We can also get the type of an array by using:
```
image.dtype
```
Standard formats we are going to see are 8bit (uint8), 16bit (uint16) and non-integers (usually float64). The image type sets what values pixels can take. For example 8bit means values from $0$ to $2^8 -1= 256-1 = 255$. Just like in Fiji, for example, one can change the type of the image. If we know we are going to do operations requiring non-integers we can turn the pixels into floats through the ```.astype()``` method.
```
image_float = image.astype(np.float64)
image_float.dtype
```
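The value ranges of the standard formats mentioned above can be queried directly from Numpy:

```python
import numpy as np

print(np.iinfo(np.uint8).min, np.iinfo(np.uint8).max)    # 0 255
print(np.iinfo(np.uint16).min, np.iinfo(np.uint16).max)  # 0 65535
print(np.finfo(np.float64).max)                          # largest representable float64
```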
The importance of the image type goes slightly against Python's philosophy of dynamic typing (no need to specify a type when creating a variable), but it is a necessity when handling scientific images. We are now going to see what sort of operations we can do with arrays, and the importance of *types* will become more obvious.
## 2.2 Operations on arrays
### 2.2.1 Arithmetics on arrays
Numpy is written in a smart way such that it is able to handle operations between arrays of different sizes. In the simplest case, one can combine a scalar and an array, for example through an addition:
```
image+10
```
Here Numpy automatically added the scalar 10 to **each** element of the array. Beyond the scalar case, operations between arrays of different sizes are also possible through a mechanism called broadcasting. This is an advanced (and sometimes confusing) feature that we won't use in this course but about which you can read for example [here](https://jakevdp.github.io/PythonDataScienceHandbook/02.05-computation-on-arrays-broadcasting.html).
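For a quick taste of broadcasting (we will not rely on it later), a 1D array combined with a 2D array:

```python
import numpy as np

image = np.ones((3, 4))                # a tiny 3x4 "image"
gain = np.array([1.0, 2.0, 3.0, 4.0])  # one value per column

# The 1D array is "broadcast" across every row of the 2D array
result = image * gain
print(result.shape)  # (3, 4)
print(result[0])     # [1. 2. 3. 4.]
```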
The only case we are going to consider here is operations between arrays of the same size. For example we can multiply the image by itself. We first use the float version of the image:
```
image_sq = image_float*image_float
image_sq
image_float
```
Looking at the first row we see $32^2 = 1024$ and $31^2=961$ etc. which means that the multiplication operation has happened **pixel-wise**. Note that this is **NOT** a classical matrix multiplication. We can also see that the output has the same size as the original arrays:
```
image_sq.shape
image_float.shape
```
Let's see now what happens when we square the original 8bit image:
```
image*image
```
We see that we don't get the expected result at all. Since we multiplied two 8-bit images, Numpy assumes we want an 8-bit output, and therefore the values are bounded between 0 and 255. For example the second value of the first row is just the remainder modulo 256:
```
961%256
```
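The wrap-around is easy to reproduce on a single value: squaring 31 as uint8 keeps only the remainder modulo 256.

```python
import numpy as np

a = np.array([31], dtype=np.uint8)
squared = a * a        # 31*31 = 961, but uint8 silently wraps modulo 256
print(squared)         # [193], since 961 % 256 == 193
```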
The same thing happens e.g. if we add an integer scalar to the matrix. We can also visualize the result:
```
print(image+230)
plt.imshow(image+230)
```
Clearly something went wrong, as we get values smaller than 230. The image also looks strange, with dark regions suddenly appearing bright. Again, any value **overflowing** above 255 wraps around to 0.
This problem can be alleviated in different ways. For example we can combine an integer array with a float scalar, and Numpy will automatically give a result using the "most complex" type:
```
image_plus_float = image+230.0
print(image_plus_float)
```
To be on the safe side we can also explicitly change the type when we know we might run into this kind of trouble. This can be done via the ```.astype()``` method:
```
image_float = image.astype(float)
image_float.dtype
```
Again, if we combine floats and integers the output is going to be a float:
```
image_float+230
```
It is a very common error to use the wrong image type for a certain application. When you do a series of operations, it's always a good idea to regularly look at the image itself to catch such problems, which otherwise remain "silent" (no error message).
### 2.2.2 Logical operations
A set of important operations when processing images are logical (or boolean) operations that allow e.g. to create object masks. Those have a very simple syntax in Numpy. For example, let's compare pixel intensities to some value ```threshold```:
```
threshold = 100
image > threshold
```
We see that the result is again a pixel-wise comparison with ```threshold```, generating in the end a boolean or logical matrix. We can directly assign this logical matrix to a variable and verify its shape and type and plot it:
```
image_threshold = image > threshold
image_threshold.shape
image_threshold.dtype
image_threshold
plt.imshow(image_threshold);
```
Of course other logical operators can be used (<, >, ==, !=) and the resulting boolean matrices can be combined:
```
threshold1 = 70
threshold2 = 100
image_threshold1 = image > threshold1
image_threshold2 = image < threshold2
image_AND = image_threshold1 & image_threshold2
image_XOR = image_threshold1 ^ image_threshold2
fig, ax = plt.subplots(nrows=1,ncols=4,figsize=(15,15))
ax[0].imshow(image_threshold1)
ax[1].imshow(image_threshold2)
ax[2].imshow(image_AND)
ax[3].imshow(image_XOR);
```
## 2.3 Numpy functions
To broadly summarize, one can say that Numpy offers four types of operations: 1. Creation of various types of arrays, 2. Pixel-wise modifications of arrays, 3. Operations changing array dimensions, 4. Combinations of arrays.
### 2.3.1 Array creation
Often we are going to create new arrays and later transform them. Functions creating arrays usually take arguments specifying both the content of the array and its dimensions.
Some of the most useful functions create 1D arrays of ordered values. For example to create a sequence of numbers separated by a given step size:
```
np.arange(0,20,2)
```
Or to create an array with a given number of equidistant values:
```
np.linspace(0,20,5)
```
In higher dimensions, the simplest example is the creation of arrays full of ones or zeros. In that case one only has to specify the dimensions. For example to create a 3x5 array of zeros:
```
np.zeros((3,5))
```
Same for an array filled with ones:
```
np.ones((3,5))
```
Until now we have only created one- and two-dimensional arrays. However Numpy is designed to work with arrays of arbitrary dimensions. For example we can easily create a three-dimensional "ones-array" of dimensions 2x6x5:
```
array3D = np.ones((2,6,5))
array3D
array3D.shape
```
And all operations that we have seen until now and the following ones apply to such high-dimensional arrays exactly in the same way as before:
```
array3D*5
```
We can also create more complex arrays. For example an array filled with numbers drawn from a normal distribution:
```
np.random.standard_normal((3,5))
```
As mentioned before, some array-creating functions take additional arguments. For example we can draw samples from a Gaussian distribution whose mean and standard deviation we can specify:
```
np.random.normal(loc=10, scale=2, size=(5,2))
```
### 2.3.2 Pixel-wise operations
Numpy has a large trove of functions to do all common mathematical operations matrix-wise. For example you can take the cosine of all elements of your array:
```
angles = np.random.random_sample(5)
angles
np.cos(angles)
```
Or to calculate exponential values:
```
np.exp(angles)
```
And many many more. If you need any specific one, look for it in the docs or just Google it.
### 2.3.3 Operations changing dimensions
Some functions are accessible in the form of *methods*, i.e. they are called using the *dot-notation*. For example to find the maximum in an array:
```
angles.max()
```
Alternatively there's also a maximum function:
```
np.max(angles)
```
The ```max``` function like many others (min, mean, median etc.) can also be applied to a given axis. Let's imagine we have an image of dimensions 10x10x4 (e.g. 10 pixels high, 10 pixels wide and 4 planes):
```
volume = np.random.random((10,10,4))
```
If we want to do a maximum projection along the third axis, we can specify ```axis=2``` (remember that we start counting at 0 in Python). We expect to get a single 10x10-pixel image out of this:
```
projection = np.max(volume, axis = 2)
projection.shape
```
We see that we have indeed a new array with one dimension less because of the projection.
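The ```axis``` argument selects which dimension is collapsed; projecting the same volume along different axes gives differently shaped results:

```python
import numpy as np

volume = np.random.random((10, 10, 4))
print(np.max(volume, axis=2).shape)   # (10, 10): the 4 planes are collapsed
print(np.max(volume, axis=0).shape)   # (10, 4): the rows are collapsed instead
```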
### 2.3.4 Combination of arrays
Finally arrays can be combined in multiple ways. For example if we want to assemble two images with the same size into a stack, we can use the ```np.stack``` function:
```
image1 = np.ones((4,4))
image2 = np.zeros((4,4))
stack = np.stack([image1, image2],axis = 2)
stack.shape
```
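To see the difference between creating a new axis and extending an existing one, compare ```np.stack``` with ```np.concatenate``` on the same pair of images:

```python
import numpy as np

image1 = np.ones((4, 4))
image2 = np.zeros((4, 4))
print(np.stack([image1, image2], axis=0).shape)        # (2, 4, 4): new leading axis
print(np.stack([image1, image2], axis=2).shape)        # (4, 4, 2): new trailing axis
print(np.concatenate([image1, image2], axis=1).shape)  # (4, 8): widened, no new axis
```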
## 2.4 Slicing and indexing
Just like broadcasting, the selection of parts of arrays by slicing or indexing can become very sophisticated. We present here only the very basics to avoid confusion. There are often multiple ways to do slicing/indexing and we favor here easier to understand but sometimes less efficient solutions.
Let's look at a painting (Paul Klee, "Übermut"):
```
import skimage.io  # make sure skimage's io module is imported
image = skimage.io.imread('../Data/Klee.jpg')
image.shape
```
We see that the image has three dimensions; it is probably a stack of three images of size 643x471. Let us try to have a look at this image, hoping that the dimensions are handled gracefully:
```
plt.imshow(image);
```
Whenever an image has three planes in its third dimension like here, matplotlib assumes that it is dealing with a natural image where planes encode the colors Red, Green and Blue (RGB) and displays it accordingly.
### 2.4.1 Array slicing
Let us now just look at one of the three planes composing the image. To do that, we are going the select a portion of the image array by slicing it. For each dimension we can:
- give a single index e.g. ```0``` to recover the first element
- give a range e.g. ```0:10``` to recover the first 10 elements
- recover **all** elements using the sign ```:```
In our particular case, we want to select all rows ```:``` for the first axis, all columns ```:``` for the second axis, and a single plane ```0``` for the third axis of the image:
```
image[:,:,0].shape
plt.imshow(image[:,:,0],cmap='gray')
plt.title('First plane: Red');
```
We now see the red layer of the image, and indeed the bright regions in this grayscale image correspond to red regions. We can do the same for the others by specifying planes 0, 1, and 2:
```
fig, ax = plt.subplots(nrows=1, ncols=4, figsize=(10,10))
ax[0].imshow(image[:,:,0],cmap='gray')
ax[0].set_title('First plane: Red')
ax[1].imshow(image[:,:,1],cmap='gray')
ax[1].set_title('Second plane: Green')
ax[2].imshow(image[:,:,2],cmap='gray')
ax[2].set_title('Third plane: Blue');
ax[3].imshow(image)
ax[3].set_title('Complete image');
```
Logically intensities are high for the red channel (red, brown regions) and low for the blue channel (few blue shapes). We can confirm that by measuring the mean of each plane. To do that we use the same function as above but apply it to a single sliced plane:
```
image0 = image[:,:,0]
np.mean(image0)
```
For all planes, we can either use a list comprehension to loop over the planes:
```
[np.mean(image[:,:,i]) for i in range(3)]
```
Or, more elegantly, calculate the mean over axes 0 and 1:
```
np.mean(image,axis = (0,1))
```
To look at some more details let us focus on a smaller portion of the image e.g. the head of the character. For that we are going to take a slice of the red image and store it in a new variable and display the selection. We consider pixel rows from 170 to 300 and columns from 200 to 330 of the first plane (0).
```
image_red = image[170:300,200:330,0]
plt.imshow(image_red,cmap='gray');
```
There are different ways to select parts of an array. For example one can select every n-th element by giving a step size. In the case of an image, this subsamples the data:
```
image_subsample = image[170:300:3,200:330:3,0]
plt.imshow(image_subsample,cmap='gray');
```
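The step can also be negative, which reverses the selected axis and gives a cheap way to flip an image. A small synthetic example:

```python
import numpy as np

img = np.arange(12).reshape(3, 4)
flipped = img[::-1, :]      # negative step: rows reversed (vertical flip)
subsampled = img[:, ::2]    # every second column
print(flipped)
print(subsampled)
```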
### 2.4.2 Array indexing
In addition to slicing an array, we can also select specific values out of it. There are [many](https://docs.scipy.org/doc/numpy-1.13.0/reference/arrays.indexing.html) different ways to achieve that, but we focus here on two main ones.
First, one might have a list of pixel positions and wish to get the values of those pixels. By passing two lists of the same size containing the row and column positions of those pixels, one can recover them:
```
row_position = [0,1,2,3]
col_position = [0,1,0,1]
print(image_red[0:5,0:5])
image_red[row_position,col_position]
```
Alternatively, one can pass a logical array of the same dimensions as the original array, and only the ```True```
pixels are selected. For example, let us create a logical array by picking values below a threshold:
```
threshold_image = image_red<120
```
Let's visualize it. Matplotlib handles logical arrays simply as a binary image:
```
plt.imshow(threshold_image)
```
We can recover the value of all the "white" (True) pixels in the original image by **indexing one array with the other**:
```
selected_pixels = image_red[threshold_image]
print(selected_pixels)
```
And now we can ask how many pixels fall below the threshold and what their average value is.
```
len(selected_pixels)
np.mean(selected_pixels)
```
We now know that there are 5054 pixels below the threshold and that their mean is 64.5.
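Boolean indexing also works for assignment: the same mask can overwrite the selected pixels, e.g. to suppress everything below the threshold. A tiny synthetic example:

```python
import numpy as np

img = np.array([[10, 200], [50, 130]], dtype=np.uint8)
mask = img < 120            # True where pixels fall below the threshold
img[mask] = 0               # overwrite only the masked pixels
print(img)
```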
#Introduction
This notebook will help you train YOLOv3 and YOLOv4 (as well as their tiny variants), apply different hyperparameters, and create a separate instance for each training project.
[](https://colab.research.google.com/drive/1brPZDy_yVDo38ixxpI6iSu7V4fj82Ohq?usp=sharing)
```
import os
from shutil import copyfile
```
###Link Google Drive
```
from google.colab import drive
drive.mount('/content/drive')
```
### Move to the path where you want to save in gdrive
```
%cd /content/drive/MyDrive
```
###Clone the repo ==> <font color='red'>RunFirstTime</font> when starting from zero
```
!git clone https://github.com/MNaseerSubhani/TheEasyYolo.git
```
### Get inside the repo folder
```
%cd TheEasyYolo
```
###Update Submodule : <font color='red'>RunFirstTime</font> when starting from zero
```
!git submodule init
!git submodule update
```
### <font color='yellow'>Yolo Instance Init</font> : <font color='red'>RunFirstTime</font> when creating new Instance
```
%cd /content/drive/MyDrive/TheEasyYolo
instance_name = "yolo_stage_2" # Put any name of the Instance
#set hyperparameters
num_of_classes = 1 # Total number of classes
channel = 1 # Channel used "1 for Grayscale or 3 for RGB"
sub_division = 16
width = 320 # Width of input image
height = 256 # Height of input image
batch = 16 # batch size
yolo_ver = "YoloV4-tiny" #The input should be YoloV4, YoloV4-tiny, YoloV3, YoloV3-tiny
if not os.path.exists(instance_name):
os.mkdir(instance_name)
if yolo_ver == "YoloV3-tiny":
cfg_name = "yolov3-tiny.cfg"
elif yolo_ver == "YoloV3":
cfg_name = "yolov3.cfg"
elif yolo_ver == "YoloV4-tiny":
cfg_name = "yolov4-tiny.cfg"
elif yolo_ver == "YoloV4":
cfg_name = "yolov4.cfg"
cfg_file_name = f'{cfg_name[:-4]}_{instance_name}.cfg'
copyfile(os.path.join('./darknet/cfg/', cfg_name),os.path.join(instance_name, cfg_file_name))
%cd {instance_name}
!sed -i 's/subdivisions=1/subdivisions=$sub_division/' {cfg_file_name}
!sed -i 's/width=416/width=$width/' {cfg_file_name}
!sed -i 's/batch=64/batch=$batch/' {cfg_file_name}
!sed -i 's/height=416/height=$height/' {cfg_file_name}
!sed -i 's/channels=3/channels=$channel/' {cfg_file_name}
!sed -i 's/classes=80/classes=$num_of_classes/' {cfg_file_name}
filters = int ((num_of_classes + 5) * 3)
!sed -i 's/filters=255/filters=$filters/' {cfg_file_name}
%cd ..
copyfile('generate_test.py',os.path.join(instance_name, 'generate_test.py'))
copyfile('generate_train.py',os.path.join(instance_name, 'generate_train.py'))
```
### <font color='red'>NOTE: At this stage add dataset to the instance folder as structured below</font>
```
data
├── train
└── test
```
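For reference, scripts like `generate_train.py` typically just list every image under the data folder into a text file darknet can read. The sketch below is our own illustration of that idea (the file extensions and function name are assumptions, not the repo's actual code):

```python
import os

def generate_list(image_dir, out_file, exts=(".jpg", ".jpeg", ".png")):
    """Write the absolute path of every image in image_dir into out_file,
    one per line, in the format darknet's train/valid lists expect."""
    with open(out_file, "w") as f:
        for name in sorted(os.listdir(image_dir)):
            if name.lower().endswith(exts):
                f.write(os.path.abspath(os.path.join(image_dir, name)) + "\n")
```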
### Generate text files for the data : <font color='red'>RunFirstTime</font> when creating new Instance
```
%cd {instance_name}
!python generate_train.py
!python generate_test.py
%cd ..
```
###Generate train.names file for the instance : <font color='red'>RunFirstTime</font> when creating new Instance
```
%cd {instance_name}
with open("train.names", "w") as f:
f.write("defect")
#f.write("\nother classe")
%cd ..
```
###Generate train.data file for the instance : <font color='red'>RunFirstTime</font> when creating new Instance
```
%cd {instance_name}
if not os.path.exists('backup'):
os.mkdir('backup')
with open("train.data", "w") as f:
f.write(f"classes={num_of_classes}")
f.write(f"\ntrain = {os.path.join(os.getcwd(), 'data', 'train.txt')}")
f.write(f"\nvalid = {os.path.join(os.getcwd(), 'data', 'test.txt')}")
f.write(f"\nnames = {os.path.join(os.getcwd(), 'train.names')}")
f.write(f"\nbackup = {os.path.join(os.getcwd(), 'backup/')}")
#f.write("\nother classe")
%cd ..
```
#Training
```
%cd darknet
!sed -i 's/OPENCV=0/OPENCV=1/' Makefile
!sed -i 's/GPU=0/GPU=1/' Makefile
!sed -i 's/CUDNN=0/CUDNN=1/' Makefile
!sed -i 's/CUDNN_HALF=0/CUDNN_HALF=1/' Makefile
```
### Download pre-trained weights for the convolutional layers : <font color='red'>RunFirstTime</font> when creating new Instance
```
if yolo_ver == "YoloV4" or yolo_ver == "YoloV4-tiny":
!wget https://github.com/AlexeyAB/darknet/releases/download/darknet_yolo_v3_optimal/yolov4.conv.137
pre_trained_weights = 'yolov4.conv.137'
elif yolo_ver == "YoloV3":
    !wget https://pjreddie.com/media/files/darknet53.conv.74
    pre_trained_weights = 'darknet53.conv.74'
elif yolo_ver == "YoloV3-tiny":
    !wget https://github.com/smarthomefans/darknet-test/raw/master/yolov3-tiny.weights
pre_trained_weights = 'yolov3-tiny.weights'
```
### <font color='yellow'>Run Make</font> : <font color='red'>RunFirstTime</font> when starting from zero
```
!make
```
###Training
```
!./darknet detector train ../{instance_name}/train.data ../{instance_name}/{cfg_file_name} {pre_trained_weights} -dont_show -map -clear
```
###Continue Training
```
!./darknet detector train ../{instance_name}/train.data ../{instance_name}/{cfg_file_name} ../{instance_name}/backup/{cfg_file_name[:-4] + '_last.weights'} -dont_show -map
```
#Test
```
%cd ..
%cd {instance_name}
!sed -i 's/batch=$batch/batch=1/' {cfg_file_name}
!sed -i 's/subdivisions=$sub_division/subdivisions=1/' {cfg_file_name}
%cd ..
%cd darknet
!./darknet detector test ../{instance_name}/train.data ../{instance_name}/{cfg_file_name} ../{instance_name}/backup/{cfg_file_name[:-4] + '_last.weights'} ../{instance_name}/data/test/IMG_0296.jpg -thresh 0.1
%cd /content/drive/MyDrive/TheEasyYolo
!jupyter nbconvert --ClearOutputPreprocessor.enabled=True --inplace TheEasyYolo.ipynb
!ls
```
## DEEPER MULTILAYER PERCEPTRON WITH DROPOUT
```
import numpy as np
import tensorflow as tf
import matplotlib.pyplot as plt
from tensorflow.examples.tutorials.mnist import input_data
%matplotlib inline
```
# LOAD MNIST
```
mnist = input_data.read_data_sets('data/', one_hot=True)
```
# DEFINE MODEL
```
# NETWORK TOPOLOGIES
n_input = 784 # MNIST data input (img shape: 28*28)
n_hidden_1 = 512 # 1st layer num features
n_hidden_2 = 512 # 2nd layer num features
n_hidden_3 = 256 # 3rd layer num features
n_classes = 10 # MNIST total classes (0-9 digits)
# INPUT AND OUTPUT
x = tf.placeholder("float", [None, n_input])
y = tf.placeholder("float", [None, n_classes])
dropout_keep_prob = tf.placeholder("float")
# VARIABLES
stddev = 0.05
weights = {
'h1': tf.Variable(tf.random_normal([n_input, n_hidden_1], stddev=stddev)),
'h2': tf.Variable(tf.random_normal([n_hidden_1, n_hidden_2], stddev=stddev)),
'h3': tf.Variable(tf.random_normal([n_hidden_2, n_hidden_3], stddev=stddev)),
'out': tf.Variable(tf.random_normal([n_hidden_3, n_classes], stddev=stddev))
}
biases = {
'b1': tf.Variable(tf.random_normal([n_hidden_1])),
'b2': tf.Variable(tf.random_normal([n_hidden_2])),
'b3': tf.Variable(tf.random_normal([n_hidden_3])),
'out': tf.Variable(tf.random_normal([n_classes]))
}
print ("NETWORK READY")
def multilayer_perceptron(_X, _weights, _biases, _keep_prob):
x_1 = tf.nn.relu(tf.add(tf.matmul(_X, _weights['h1']), _biases['b1']))
layer_1 = x_1
x_2 = tf.nn.relu(tf.add(tf.matmul(layer_1, _weights['h2']), _biases['b2']))
layer_2 = x_2
x_3 = tf.nn.relu(tf.add(tf.matmul(layer_2, _weights['h3']), _biases['b3']))
layer_3 = tf.nn.dropout(x_3, _keep_prob)
return (tf.matmul(layer_3, _weights['out']) + _biases['out'])
```
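The dropout step above (`tf.nn.dropout`) zeroes each unit with probability `1 - keep_prob` and rescales the survivors by `1/keep_prob`, so the expected activation is unchanged. A plain-NumPy sketch of the same idea (an illustration only, not TensorFlow's implementation):

```python
import numpy as np

def dropout(x, keep_prob, rng=None):
    # "inverted" dropout: drop units with probability 1 - keep_prob and
    # rescale survivors by 1/keep_prob so E[output] equals x
    if rng is None:
        rng = np.random.default_rng(0)
    mask = rng.random(x.shape) < keep_prob
    return np.where(mask, x / keep_prob, 0.0)

x = np.ones((4, 3))
print(dropout(x, 1.0))   # keep_prob=1: identity
print(dropout(x, 0.5))   # entries are either 0.0 or 2.0
```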
# DEFINE FUNCTIONS
```
# PREDICTION
pred = multilayer_perceptron(x, weights, biases, dropout_keep_prob)
# LOSS AND OPTIMIZER
cost = tf.reduce_mean(tf.nn.softmax_cross_entropy_with_logits(logits=pred, labels=y))
optm = tf.train.AdamOptimizer(learning_rate=0.001).minimize(cost)
corr = tf.equal(tf.argmax(pred, 1), tf.argmax(y, 1))
accr = tf.reduce_mean(tf.cast(corr, "float"))
# INITIALIZER
init = tf.global_variables_initializer()
print ("FUNCTIONS READY")
```
# RUN
```
# PARAMETERS
training_epochs = 20
batch_size = 100
display_step = 4
# LAUNCH THE GRAPH
sess = tf.Session()
sess.run(init)
# OPTIMIZE
for epoch in range(training_epochs):
avg_cost = 0.
total_batch = int(mnist.train.num_examples/batch_size)
# ITERATION
for i in range(total_batch):
batch_xs, batch_ys = mnist.train.next_batch(batch_size)
feeds = {x: batch_xs, y: batch_ys, dropout_keep_prob: 0.6}
sess.run(optm, feed_dict=feeds)
feeds = {x: batch_xs, y: batch_ys, dropout_keep_prob: 1.0}
avg_cost += sess.run(cost, feed_dict=feeds)
avg_cost = avg_cost / total_batch
# DISPLAY
if (epoch+1) % display_step == 0:
print ("Epoch: %03d/%03d cost: %.9f" % (epoch, training_epochs, avg_cost))
feeds = {x: batch_xs, y: batch_ys, dropout_keep_prob: 1.0}
train_acc = sess.run(accr, feed_dict=feeds)
print ("TRAIN ACCURACY: %.3f" % (train_acc))
feeds = {x: mnist.test.images, y: mnist.test.labels, dropout_keep_prob: 1.0}
test_acc = sess.run(accr, feed_dict=feeds)
print ("TEST ACCURACY: %.3f" % (test_acc))
print ("OPTIMIZATION FINISHED")
```
# PyStorm Walkthrough
```
# basic imports for working with matrices and plotting
import numpy as np
import matplotlib.pyplot as plt
import scipy.interpolate
import nengo
```
This guide will introduce you to PyStorm, a Python package that exposes abstractions of Braindrop's hardware constructs as network objects. Basic familiarity with PyStorm's network objects is assumed (see first 2 pages of __braindrop_pystorm_documentation.pdf__).
The outline for this walkthrough is as follows:
1. Creating a 1D pool and its tap-point matrix
2. Sending inputs and measuring spikes
3. Decoding functions
4. Visualizing tuning curves
*** NOTE ***
Below we've provided a cell for resetting the chip's connection to the server. In most cases, this code will be unnecessary. However, sometimes when the initial handshake occurs (during the `HAL()` call in the tutorial code below), the USB connection fails and the Python call quickly raises the error message `Comm failed to init`. If this happens, run the cell below to reset the connection and try again. It may take a few tries to reconnect.
```
!sudo /home/ee207student/usbreset.sh # reset the USB connection, which sometimes dies.
# if Comm below is repeatedly & quickly failing to init, you can run this cell.
```
*** END NOTE ***
## PyStorm Imports
```
############################################################################
# Importing essential PyStorm libraries
# HAL, or Hardware Abstraction Layer, must be imported and initialized
from pystorm.hal import HAL
HAL = HAL()
# data_utils contains a useful function for binning spikes and estimating spike rates
from pystorm.hal import data_utils
# RunControl objects are responsible for executing command sequences on the chip, such as initiating
# tuning curve measurements
from pystorm.hal.run_control import RunControl
# Graph objects represent properties of your network
# so that HAL can allocate resources for them on the chip
from pystorm.hal.neuromorph import graph
```
## 1 Creating a 1D Pool and Tap Point Matrix
Let's review basic requirements for building and simulating an ensemble with PyStorm (for syntax definitions, see __braindrop_pystorm_documentation.pdf__).
### 1.1 Defining a pool's parameters and its tap-point matrix
Define your neuron array's width $W$ and height $H$, such that the ensemble's size is $N = WH$, and the dimensionality $D$ of its input space. Then, construct a $N$-by-$D$ tap-point matrix `tap_matrix`, which specifies an _anchor encoder_ for each tap-point used. For instance, in Problem Set 3, you defined tap points on a 1-D diffuser with plus and minus signs ($+1$ or $-1$). In $D$ dimensions, these anchor encoders are $D$-dimensional vectors in $\{-1, 0, +1\}^D$. When we use a 2-D diffuser to create two-dimensional encoders, we draw the tap-points' anchor encoders from $\{-1, 0, +1\}^2$.
Although a $N$-by-$D$ matrix can specify all $N$ anchor encoders, a $N$-neuron pool has only $N/4$ tap-points, since there is 1 synaptic filter per $2\times2$ cluster of neurons. If you specify more than $N/4$ anchor encoders, HAL will throw an error when it tries to create your network.
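Before calling HAL, it can be convenient to check the $N/4$ budget yourself. The helper below is our own illustration (not part of PyStorm); it counts the rows of the tap-point matrix with a nonzero anchor encoder:

```python
import numpy as np

def check_tap_budget(tap_matrix):
    """Return True if the number of tap points fits the N/4 hardware budget."""
    N = tap_matrix.shape[0]
    # a row is a tap point if any of its D entries is nonzero
    n_taps = np.count_nonzero(np.any(tap_matrix != 0, axis=1))
    return n_taps <= N // 4

# a 16-neuron pool may use at most 16/4 = 4 tap points
tm = np.zeros((16, 1))
tm[[0, 2, 8, 10], 0] = [1, 1, -1, -1]
print(check_tap_budget(tm))   # True
```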
The code below implements an algorithm that assigns 1-D anchor encoders to tap-points. It's quite simple:
- Divide the set of tap-points in half along the width axis
- Assign one side $[+1]$ anchor encoders and the other half $[-1]$ anchor encoders.
`plot_tap_matrix()` visualizes the results within the $W$-by-$H$ neuron array. Free parameter $\rho$ controls tap-point density by skipping synapses. Try changing this parameter to create different tap-point matrices.
```
def tap_mat_1d(width, height, rho=0):
'''
Creates a structured 1D tap-point matrix
Parameters
----------
width: int
Width of neuron array
height: int
Height of neuron array
rho: int
1/2**rho is the fraction of total tap-points the
algorithm should use
Returns
-------
tap_matrix: numpy matrix
        width*height x 1 tap-point matrix
'''
# creating an empty tap-point matrix
    tap_matrix = np.zeros((width * height, 1))
# one synaptic filter per 4 neurons
for x in range(2**rho - 1, width, 2 * 2**rho):
for y in range(2**rho - 1, height, 2 * 2**rho):
n = y * width + x
if x < width // 2:
tap_matrix[n, 0] = 1
else:
tap_matrix[n, 0] = -1
return tap_matrix
def plot_tap_matrix(matrix, width, height):
'''
Plots single dimension of a tap-point matrix with a given width and height
'''
plt.imshow(matrix.reshape((height, width)), cmap='bwr', vmin=-1, vmax=1, interpolation='none', aspect='equal')
plt.colorbar(fraction=0.046, pad=0.04)
ax = plt.gca()
ax.set_yticks(np.arange(0, height, 1))
ax.set_xticks(np.arange(0, width, 1))
ax.set_yticklabels(np.arange(1, height+1, 1));
ax.set_xticklabels(np.arange(1, width+1, 1));
ax.set_yticks(np.arange(-.5, height, 1), minor=True);
ax.set_xticks(np.arange(-.5, width, 1), minor=True);
plt.grid(which='minor', color='k', linestyle='--')
############################################################################
# Creating pool parameters and a tap-point matrix
# width, height, and size of a 1D pool
width = 16
height = 16
width_height = (width, height)
N = width * height
Din = 1
rho = 0
tap_matrix = tap_mat_1d(width, height, rho=rho)
# plotting the matrix
plt.figure(figsize=(6,6))
plot_tap_matrix(tap_matrix, width, height)
```
### 1.2 Creating the pool and its connections
Now that we've selected pool's width (16), height (16), dimensionality (1), and tap-point matrix, we may create an actual network. We do this by defining input and output nodes, for incoming and outgoing signals, a pool of neurons, a bucket to accumulate its neurons' spikes, and then connect these network objects to create a graph object.
```
############################################################################
# Creating a pool and its connections
# create the network as a graph object, named "net" here, which tracks connections and configuration parameters
# for the network's objects.
net = graph.Network("net")
# decoders are initially zero, we remap them later (without touching the rest of the network)
# using HAL.remap_weights()
Dout = 1
decoders = np.zeros((Dout, N))
# create and name the input and specify its dimensionality
i1 = net.create_input("i1", Din)
# create and name the neuron pool and supply the computed tap-point matrix.
# we also explicitly set the biases here; you will optimize these in
# the next problem. All neurons have their bias set to -3.
p1 = net.create_pool("p1", tap_matrix, biases=-3)
# create and name the bucket and specify its dimensionality
b1 = net.create_bucket("b1", Dout)
# create and name the output and specify its dimensionality
o1 = net.create_output("o1", Dout)
# connect the input to the pool
# "None" is for the weights field; only connections to buckets can have weights
net.create_connection("c_i1_to_p1", i1, p1, None)
# connect the pool to the bucket and specify initial weights
decoder_conn = net.create_connection("c_p1_to_b1", p1, b1, decoders)
# connect the bucket to the output
net.create_connection("c_b1_to_o1", b1, o1, None)
```
### 1.3 Setting up HAL to map the defined network
HAL instantiates our network on Braindrop and allocates resources to it. To do this, we first configure Braindrop's and the host machine's time bins (i.e., time resolution). Then we call `HAL.map(net)` on the Graph object defined in section 1.2. We must repeat this call any time we change `net`'s properties.
```
############################################################################
# Setting up the HAL interface and mapping the network
# misc driver parameters: downstream refers to Braindrop and upstream refers to the host
downstream_time_res = 10000 # ns, or 10us
upstream_time_res = 1000000 # ns, or 1ms
HAL.set_time_resolution(downstream_time_res, upstream_time_res)
# invoke HAL.map(), make tons of neuromorph/driver calls under the hood
# to map the network
print("calling map")
HAL.map(net)
```
## 2 Sending Inputs and Measuring Spike Rates
In order to drive Braindrop with a stimulus $x(t)$, PyStorm requires you to specify its values at the particular times they occur, along with a value for the parameter $f_\mathrm{max}$. The stimulus $-1 < x(t) < +1$ is streamed to Braindrop as a periodic train of impulses with instantaneous rate $-f_\mathrm{max} < f_\mathrm{max}x(t) < +f_\mathrm{max}$, where a minus sign denotes an impulse with area -1 instead of +1.
### 2.1 Example input signals
The functions `make_cons_stimulus()` and `make_sine_stimulus()` demonstrate two different inputs that are useful for measuring neurons' firing properties.
`make_cons_stimulus()` creates a D-dimensional stimulus that steps through all points of a uniform grid on $[-f_\mathrm{max},+f_\mathrm{max}]^D$, holding each of the $Q^D$ points for a constant amount of time. It's used to sweep a $D$-dimensional space and measure the neurons' responses, yielding firing-rate estimates with bin widths equal to `hold_time`. Such box-car filtering is not what the synapse does, however, so it produces firing-rate estimates that differ from the synapse's. Decoders trained on these estimates therefore do not operate well with the synapses' firing-rate estimates.
`make_sine_stimulus()` creates a multidimensional sinusoidal stimulus with amplitude $f_\mathrm{max}$, period `base_period`, and a duration given in `cycles`. Note that it's constant over a step size `input_dt`. Each dimension's ($D_i$) sinusoid oscillates at frequency $\phi^{-D_i}/T$, where $\phi$ is the golden ratio $\frac{1+\sqrt{5}}{2}$ and $T$ is the first dimension's period, given by `base_period`. This scaling ensures no pair of dimensions ever synchronizes, so the full domain $[-1,1]^D$ is swept. When used to train decoders with `base_period` equal to $2\pi\tau_s$, where $\tau_s$ is the synapses' time-constant, it yields reasonable firing-rate noise estimates (after lowpass filtering with time-constant $\tau_s$). Training decoders will be demonstrated in section 3.
```
def make_cons_stimulus(Din, hold_time, points_per_dim, fmax=1000):
'''
Creates multidimensional staircase-like signal
Parameters
----------
Din: int
Input dimensionality
hold_time: float (seconds)
Duration that each input is applied
points_per_dim: int
Number of points per dimension
fmax: float
Units converting x = +/-1 to firing rate
Returns
-------
inp_times: numpy array
Length-S array of times a new input starts (in seconds)
inputs: numpy matrix
S-by-Din matrix of inputs (in units of firing rate)
'''
total_points = points_per_dim ** Din
stim_rates_1d = np.linspace(-fmax, fmax, points_per_dim).astype(int)
if Din > 1:
stim_rates_mesh = np.meshgrid(*([stim_rates_1d]*Din))
else:
stim_rates_mesh = [stim_rates_1d]
stim_rates = [r.flatten() for r in stim_rates_mesh]
inputs = np.array(stim_rates).T
inp_times = np.arange(0, total_points) * hold_time
return inp_times, inputs
def make_sine_stimulus(Din, base_period, cycles, fmax=1000, input_dt=1e-3):
'''
Creates multidimensional sinusoidal input
Parameters
----------
Din: int
Input dimensionality
base_period: float (seconds)
Period of oscillation for first dimension
cycles: float
Number of cycles the signal is generated for
fmax: float
Units converting x = +/-1 to firing rate
input_dt: float
Step-size between each consecutive input (in seconds)
Returns
-------
inp_times: numpy array
Length-S array of times a new input starts (in seconds)
inputs: numpy matrix
S-by-Din matrix of inputs (in units of firing rate)
'''
phi = (1 + np.sqrt(5))/2.0
freq = 1/base_period
f = ((1/phi)**np.arange(Din)) * freq
stim_func = lambda t: np.sin(2*np.pi*f*t)
sim_time = cycles * base_period
inp_times = np.arange(int(sim_time/input_dt))*input_dt
inputs = np.zeros((len(inp_times), Din))
for index, t in enumerate(inp_times):
v = stim_func(t)
for i, vv in enumerate(v):
ff = int(vv*fmax)
if ff == 0:
ff = 1
inputs[index, i] = float(ff)
return inp_times, inputs
############################################################################
# Plotting example stimuli
Dstim = 2
hold_time = 0.1
Q = 10
times, inp = make_cons_stimulus(Dstim, hold_time, Q)
plt.figure()
plt.plot(inp[:,0], inp[:,1], '--o')
plt.xlabel(r'$x_1$')
plt.ylabel(r'$x_2$')
plt.title('Constant Sweep, Total Length = %.1f sec' % times[-1])
Dstim = 2
base_period = 0.1
cycles = 30
times, inp = make_sine_stimulus(Dstim, base_period, cycles)
plt.figure()
plt.plot(inp[:,0], inp[:,1])
plt.xlabel(r'$x_1$')
plt.ylabel(r'$x_2$')
plt.title('Sinusoidal Sweep, Total Length = %.1f sec' % times[-1]);
```
### 2.2 Sending inputs and collecting spikes
To send stimuli to Braindrop, we use the `RunControl` object, telling it what the stimulus is, where it should go, and for how long it should be measured. It returns spike or output data back to the host, binned with a resolution defined by the `upstream_time_res` parameter (set earlier).
In this example, you will use `make_sine_stimulus()` as the stimulus to your pool, with `base_period` equal to $2\pi\tau_s$, where $\tau_s$ is the average synaptic time-constant. In order to select the number of cycles, note that each cycle results in approximately 2 uncorrelated samples of the input space. If you want $Q$ samples per dimension, you should use $(Q/2)^D$ cycles to measure the firing rate. To be conservative, quadruple this number.
```
############################################################################
# Sending inputs and collecting spikes
# define input parameters
Din = 1
fmax = 1000 # Hz
input_dt = 1e-3 # s
tau_syn = 0.02 # 20 ms
base_period = 2*np.pi*tau_syn # approximately 125 ms
# approximate samples per dimension
Q = 20
cycles = 4* ((Q / 2)**Din) # 4*10^D cycles, or 5 sec for D=1, 50 sec for D=2
# create the input
inp_times, inputs_sin = make_sine_stimulus(Din, base_period, cycles, fmax=fmax, input_dt=input_dt)
# package together the inputs and their timestamps
# times must be sent in ns (and thus are multiplied by 1e9)
FUDGE = 2
curr_time = HAL.get_time()
times = curr_time + FUDGE * 1e9 + inp_times * 1e9
times_w_end = np.hstack((times, times[-1] + input_dt * 1e9))
input_vals = {i1 : (times, inputs_sin)}
# create the RunControl object for our defined network
rc = RunControl(HAL, net)
# send inputs and collect spikes
print("getting spikes")
_, spikes_and_bin_times = rc.run_input_sweep(input_vals, get_raw_spikes=True,
end_time=times_w_end[-1], rel_time=False)
print("done")
# spike_bin_times is length-S, bin-size set by upstream_time_res
# spike_dict is a dictionary, spike_ct is a S-by-N matrix of spike counts within S bins
spike_dict, spike_bin_times = spikes_and_bin_times
spike_ct = spike_dict[p1]
# convert spike counts to spike trains
spikes = spike_ct / input_dt
# truncate inputs to length of measured outputs
inp_times_trunc = inp_times[:spikes.shape[0]]
inputs_sin_trunc = inputs_sin[:spikes.shape[0]]
```
## 3 Decoding Functions
<img src="brainstorm_diagram.png" width="600">
Let's analyze how to decode functions using the above architecture as an example. It depicts a `graph` object with two pools, `poolA` and `poolB`, each with their own set of tap-point matrices and synapses. The input node sends spikes at rate $f_\mathrm{max}x(t)$ to `poolA`'s synapses, specified by the pool's tap-point matrix. The synaptic current spreads through the diffuser and drives neurons in `poolA` to spike. These spikes are collected in `bucketAB` and thinned to decode a desired function $f(x)$. Once `bucketAB` emits a decoded spike-train, it is filtered through `poolB`'s synapses (specified in its tap-point matrix) before spreading through the diffuser and driving `poolB`'s neurons. If we want to compute and output some other function $g(x)$ on `poolB`'s spike-train, we connect `poolB` to `bucketBOut`. This bucket thins the spike-train with its own set of decode weights, and the result is sent to the host through an output node.
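The decoding-by-thinning step can be sketched in plain NumPy. This is a probabilistic caricature (Braindrop's accumulators thin deterministically); `thin_spikes` and its data are hypothetical illustrations, not a pystorm API:

```python
import numpy as np

def thin_spikes(spike_counts, weight, rng=None):
    """Sketch of decode-by-thinning: pass each spike with probability |weight|.

    Assumes |weight| <= 1 (the largest weight thinning can implement);
    the sign of the weight is carried on the output counts.
    """
    rng = np.random.default_rng() if rng is None else rng
    thinned = rng.binomial(spike_counts, abs(weight))
    return int(np.sign(weight)) * thinned

counts = np.array([0, 3, 10, 2])        # hypothetical per-bin spike counts
passed_all = thin_spikes(counts, 1.0)   # weight 1: every spike passes
passed_none = thin_spikes(counts, 0.0)  # weight 0: nothing passes
```

A weight of 0.5 would pass roughly half the spikes, which is why decode weights larger than 1 must be rescaled before they can be realized this way.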
We first build this graph.
* The input is $u(t)$.
* Pool A's synapse filters the input: $\tau_a\dot{x} + x = u(t)$.
* Apply $f$ to $x(t)$ to get $f(x(t))$.
* Then filter through pool B's synapse $\tau_b$: $\tau_b\dot{y} + y = f(x(t))$.
```
DinA = 1
DoutA = 1
widthA = 16
heightA = 16
rhoA = 0
Na = widthA * heightA
tap_mat_A = tap_mat_1d(widthA, heightA, rho=rhoA)
xy_loc_A = (0, 0) # select pool starting at top-left = (0, 0)
DinB = 1
DoutB = 1
widthB = 16
heightB = 16
rhoB = 0
Nb = widthB * heightB
tap_mat_B = tap_mat_1d(widthB, heightB, rho=rhoB)
xy_loc_B = (16, 0) # select pool starting at top-left = (16,0)
# decoders are initially zero, we remap them later (without touching the rest of the network)
# using HAL.remap_weights()
decoders_AB = np.zeros((DoutA, Na))
decoders_BOut = np.zeros((DoutB, Nb))
net = graph.Network("net")
i1 = net.create_input("i1", DinA)
pA = net.create_pool("pA", tap_mat_A, biases=-3, user_xy_loc=xy_loc_A)
pB = net.create_pool("pB", tap_mat_B, biases=-3, user_xy_loc=xy_loc_B)
bAB = net.create_bucket("bAB", DoutA)
bBOut = net.create_bucket("bBOut", DoutB)
o1 = net.create_output("o1", DoutB)
net.create_connection("c_i1_to_pA", i1, pA, None)
decoder_AB_conn = net.create_connection("c_pA_to_bAB", pA, bAB, decoders_AB)
net.create_connection("c_bAB_to_pB", bAB, pB)
decoder_BOut_conn = net.create_connection("c_pB_to_bBOut", pB, bBOut, decoders_BOut)
net.create_connection("c_bBOut_to_o1", bBOut, o1, None)
HAL.map(net)
```
### 3.1 Training decoders
To train decoders, first filter the input signal $x(t)$ by the synaptic time-constant (since the spikes were generated by filtering the input through the synapse as well), and then apply the transformation $f(x)$ to the resulting time-series. You could then take your spike-train matrix $\mathbf{S}$ and the transformed, filtered input $\mathbf{f}(x_\mathrm{filt})$ and find decoders $\mathbf{d}$ such that $\mathbf{S}\mathbf{d} = \mathbf{f}(x_\mathrm{filt})$. However, decoding directly on the spike train will result in extremely noisy function decoders. Instead, you should convolve both the spikes and the target function with a longer time-constant filter. This procedure will give an $S$-by-$N$ matrix of firing rates $\mathbf{A}$ and a length-$S$ vector of target firing-rates $\mathbf{f}_\mathrm{filt}(x_\mathrm{filt})$, allowing you to solve $\mathbf{A}\mathbf{d} = \mathbf{f}_\mathrm{filt}(x_\mathrm{filt})$ for decoders $\mathbf{d}$.
After finding this decoder, you must check to ensure that all decoding weights are at most 1, since this is the largest weight that can be implemented by thinning. If this is not the case, rescale all the decoders by $\max(|\mathbf{d}|)$ and redefine $f_\mathrm{max}' = f_\mathrm{max}/\max(|\mathbf{d}|)$. This will be used to rescale our firing rate back to the units of the function space by dividing the decoded output by $f_\mathrm{max}'$ instead of $f_\mathrm{max}$.
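The solve-and-rescale procedure can also be sketched without nengo, using a plain ridge (L2-regularized) least-squares solve. The shapes, the regularization strength, and the random stand-in data below are all hypothetical:

```python
import numpy as np

rng = np.random.default_rng(0)
S, N = 500, 32                                      # hypothetical: S bins, N neurons
A = rng.poisson(50.0, size=(S, N)).astype(float)    # stand-in filtered firing rates
f_target = A @ rng.uniform(-0.05, 0.05, size=N)     # stand-in filtered target

# ridge solve: d = (A'A + lam*S*I)^-1 A' f
lam = 1e-4 * np.max(A) ** 2                         # hypothetical regularization scale
d = np.linalg.solve(A.T @ A + lam * S * np.eye(N), A.T @ f_target)

# enforce |d| <= 1 (the largest weight thinning can implement), tracking fmax'
fmax = 1000.0
max_d = np.max(np.abs(d))
if max_d > 1:
    d /= max_d
    fmax_out = fmax / max_d
else:
    fmax_out = fmax
```

Decoded outputs are then divided by `fmax_out` rather than `fmax` to return to the units of the function space.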
```
############################################################################
# Training decoders and evaluating training error
# create filter objects
f_in = nengo.synapses.Lowpass(tau_syn)
tau_out = 0.1
f_out = nengo.synapses.Lowpass(tau_out)
# here we demonstrate the target function f(x) = x
func = lambda x: x
# first filter the input signal
x_filt = f_in.filt(inputs_sin_trunc, y0=0, dt=input_dt)
# then apply the function
fx_filt = func(x_filt)
# create a firing-rate matrix by filtering with f_out
A = f_out.filt(spikes, y0=0, dt=input_dt)
# create a filtered target function
f_filt = f_out.filt(fx_filt, y0=0, dt=input_dt)
# use nengo's built-in solver
solver = nengo.solvers.LstsqL2(reg=0.0001)
# use solver to find decoders
dec, _ = solver(A, f_filt)
# make sure decoders are <= 1
max_d = np.max(np.abs(dec))
if max_d > 1:
# if larger, rescale
fmax_out = fmax / max_d
dec /= max_d
print("rescaling by", max_d)
else:
# otherwise, we use the same scaling
fmax_out = fmax
# apply the decode on the training data
fhat_filt = A @ dec
# rescale to output units
fhat_filt_out = fhat_filt / fmax_out
f_filt_out = f_filt / fmax
# compute training error
rmse = np.sqrt(np.mean((f_filt_out - fhat_filt_out)**2))
# plotting 4 periods of data
plt.figure()
plt.plot(inp_times_trunc[:int(4*base_period/input_dt)],
fhat_filt_out[:int(4*base_period/input_dt)], label='decoded')
plt.plot(inp_times_trunc[:int(4*base_period/input_dt)],
f_filt_out[:int(4*base_period/input_dt)], '--k', label='true')
plt.xlabel(r'$t$ [s]')
plt.legend()
plt.title('RMSE = %.3f' % rmse)
```
### 3.2 Changing the decoder and testing with new inputs
In order to update the network's decoder to the weights from 3.1, you must reassign the weights to the decoder connection (which connects the pool to the bucket) and call HAL to remap the network with the new weights.
To test this model, we've chosen the sinusoidal signal generated by `make_sine_stimulus()`, this time with a base period of 1 second for a duration of 4 cycles. We package this input along with its timestamps as in section 2, and use the RunControl object to send these inputs to Braindrop. Instead of measuring spikes directly from the pool, we set `get_raw_spikes=False` to receive spikes from the output node `o1`. This will pass the neurons' spikes through the `bucket`, which decodes by thinning the pool's merged spike-train in Braindrop's accumulators. The thinned spike-train ultimately reaches the output node `o1` and is sent back to the host, binned according to the resolution set by `upstream_time_res`.
```
############################################################################
# Testing the trained decoders
# all we have to do is change the decoders
decoder_AB_conn.reassign_weights(dec.T)
# then call remap
print("remapping weights")
HAL.remap_weights()
# define input parameters
Din = 1
base_period = 1 # seconds
cycles = 4
# create the input
inp_times, inputs_sin = make_sine_stimulus(Din, base_period, cycles, fmax=fmax, input_dt=input_dt)
# package together inputs and their timestamps
# times must be sent in ns (and thus are multiplied by 1e9)
FUDGE = 2
curr_time = HAL.get_time()
times = curr_time + FUDGE * 1e9 + inp_times * 1e9
times_w_end = np.hstack((times, times[-1] + input_dt * 1e9))
input_vals = {i1 : (times, inputs_sin)}
# create the RunControl object for our defined network
rc = RunControl(HAL, net)
# send the input and collect outputs from the output node
# out_dict is a dictionary of output spikes for each output node,
# out_bin_times is an array of bin times
outputs_and_bin_times, _ = rc.run_input_sweep(input_vals, get_raw_spikes=False,
end_time=times_w_end[-1], rel_time=False)
# out_bin_times is length-S, bin-size set by upstream_time_res
# out_dict is a dictionary, out_ct is a S-by-N matrix of output spike counts within S bins
out_dict, out_bin_times = outputs_and_bin_times
out_ct = out_dict[o1]
# convert spike counts to spike train
out_spikes = out_ct / input_dt
# truncate inputs to length of measured outputs
inp_times_trunc = inp_times[:out_spikes.shape[0]]
inputs_sin_trunc = inputs_sin[:out_spikes.shape[0]]
```
### 3.3 Plotting the decoded output
In order to plot the decoded spike-train, we estimate the instantaneous spike-rate by filtering with `f_out`, which has a time constant set by `tau_out`. Then for comparison, we compute the desired function by first filtering by `f_in` to model the initial filtering by Braindrop's synapses, applying the transformation $f(x)$, then filtering by `f_out` to align with the filtered spike-train.
When plotting, both are rescaled back to the same range by dividing the desired function by `fmax` and the decoded function by `fmax_out`.
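The filtering used for this estimate is just a first-order lowpass; a minimal forward-Euler sketch on a synthetic spike train (hypothetical numbers, not the Braindrop data) looks like:

```python
import numpy as np

def lowpass(x, tau, dt):
    """Forward-Euler discretization of tau*dy/dt + y = x (assumes dt << tau)."""
    y = np.zeros(len(x))
    a = dt / tau
    for i in range(1, len(x)):
        y[i] = y[i - 1] + a * (x[i] - y[i - 1])
    return y

dt, tau = 1e-3, 0.1
spikes = np.zeros(2000)
spikes[::100] = 1.0 / dt          # a 10 Hz spike train, spikes encoded as rates
rate_est = lowpass(spikes, tau, dt)
```

After the transient dies out (a few multiples of `tau`), the filtered signal fluctuates around the true 10 Hz rate, which is exactly what `f_out.filt` does with `tau_out`.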
```
############################################################################
# Visualizing the decoded spikes
# filter the decoded spikes to generate the output signal
fh_filt = f_out.filt(out_spikes, y0=0, dt=input_dt)
# create the desired function for comparison
x_filt = f_in.filt(inputs_sin_trunc, y0=0, dt=input_dt)
fx_filt = func(x_filt)
f_filt = f_out.filt(fx_filt, y0=0, dt=input_dt)
# rescale by fmax and fmax_out to return to consistent units
fh_filt_out = fh_filt / fmax_out
f_filt_out = f_filt / fmax
# compute rmse
rmse = np.sqrt(np.mean((f_filt_out - fh_filt_out)**2))
plt.figure()
plt.title('RMSE = %.3f' % rmse)
plt.plot(inp_times_trunc, fh_filt_out, label='decoded')
plt.plot(inp_times_trunc, f_filt_out, label='true')
plt.xlabel(r'$t$ [s]')
```
## 4 Visualizing Tuning Curves
Generating tuning curves from spike data can be done in one of two ways, depending on the way spike data is represented. We'll begin with the most straightforward method: driving our ensemble with $Q^D$ inputs uniformly covering $[-f_\mathrm{max}, +f_\mathrm{max}]$ for a fixed duration and estimating each neuron's firing rate. Then we'll demonstrate how we can still extract this firing-rate information when the tuning curves are represented parametrically.
### 4.1 Measuring tuning curves from constant input signals
To create this input signal, we use `make_cons_stimulus()`. This function naturally traverses a uniformly-gridded domain and holds inputs for constant durations. After generating the stimulus and creating timestamps for the input, we'll use the RunControl method `run_input_sweep()` to measure binned spike counts. Then, we'll use a utility function in `pystorm.hal.data_utils`, `bins_to_rates()`, to estimate each neuron's spike rate from the bin spike counts.
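The effect of `bins_to_rates()` can be sketched as follows; this is a simplified stand-in (the real utility's signature and edge handling may differ), checked here on a synthetic constant-rate neuron:

```python
import numpy as np

def counts_to_rates(counts, bin_times, hold_edges, init_discard_frac=0.2):
    """Average binned spike counts into one rate per constant-hold interval,
    discarding the first fraction of each hold to skip the transient."""
    dt = bin_times[1] - bin_times[0]
    rates = []
    for lo, hi in zip(hold_edges[:-1], hold_edges[1:]):
        start = lo + init_discard_frac * (hi - lo)
        mask = (bin_times >= start) & (bin_times < hi)
        rates.append(counts[mask].sum(axis=0) / (mask.sum() * dt))
    return np.array(rates)

# synthetic check: two neurons firing at a constant 100 Hz across two holds
dt = 1e-3
bin_times = np.arange(1000) * dt
counts = np.full((1000, 2), 100.0 * dt)   # expected counts per 1 ms bin
rates = counts_to_rates(counts, bin_times, np.array([0.0, 0.5, 1.0]))
```

The result is one row per hold interval and one column per neuron, i.e. exactly the shape of the tuning-curve matrix.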
```
############################################################################
# Measuring tuning curves and plotting
# the longer the hold time, the more accurately we estimate the spike rate
hold_time = 0.5
# stimulus parameters
Din = 1
Q = 10
fmax = 1000
# generate the input signal
inp_times, inputs_cons = make_cons_stimulus(Din, hold_time, Q, fmax=fmax)
# package together the stopwatch times and input stimuli
# times must be sent in ns (and thus are multiplied by 1e9)
FUDGE = 2
curr_time = HAL.get_time()
times = curr_time + FUDGE * 1e9 + inp_times * 1e9
times_w_end = np.hstack((times, times[-1] + hold_time * 1e9))
input_vals = {i1 : (times, inputs_cons)}
# send inputs and collect spikes
print("getting spikes")
_, spikes_and_bin_times = rc.run_input_sweep(input_vals, get_raw_spikes=True,
end_time=times_w_end[-1], rel_time=False)
print("done")
# parse spike data into bin times and binned spike counts
spike_dict, spike_bin_times = spikes_and_bin_times
spike_ct = spike_dict[p1]
# its arguments are spike counts, original bin times, bin edges, and a discard fraction
# it averages spike count over the holding time, discarding some fraction of initial spikes
# it returns a Q**Din-by-N matrix, exactly corresponding to the tuning curve matrix
A = data_utils.bins_to_rates(spike_ct, spike_bin_times, times_w_end, init_discard_frac=.2)
# plot the tuning curves
x = inputs_cons / fmax
plt.figure(figsize=(10,4))
plt.subplot(121)
plt.plot(x, A);
plt.xlabel(r'$x$')
plt.ylabel('Firing Rate [Hz]');
plt.subplot(122)
plt.plot(x, A)
plt.ylim([0, 2e3])
plt.xlabel(r'$x$')
plt.ylabel('Firing Rate [Hz]');
```
### 4.2 Estimating tuning curves from parametric data
We'll now show you how you can estimate tuning curves when neurons' firing-rates are not explicit functions of the input $\mathbf{x}$. Instead, both the inputs $\mathbf{x}$ and the activities $\mathbf{a}$ are functions of time $t$. So long as the input set $\mathcal{X} = \{\mathbf{x}(t) | t \leq T\}$ adequately samples the domain $[-f_\mathrm{max}, +f_\mathrm{max}]$, you can estimate the pool's firing-rate $\mathbf{a}(\mathbf{x}')$ for a new input $\mathbf{x}'$ by interpolating over $\{\mathbf{a}(\mathbf{x}_j), \mathbf{x}_j \in \mathcal{X} \cap\mathcal{N}(\mathbf{x}')\}$.
In the code below, we demonstrate this principle using inputs $x(t)$ generated by `make_sine_stimulus()`. To improve the accuracy of the firing-rate estimates, we slow the input oscillation with a longer `base_period` and additional cycles.
```
def tuning_curve_from_parametric(x_times, x_vals, binned_spk_ct, curve_pts,
sigma=0.1, threshold=1e-3, default_value=None):
"""
Estimate the tuning curve, given time-varying spike counts
and a time-varying signal
Parameters
----------
x_times: Python list
list of times (in seconds) that we know the input for
x_vals: numpy array
the corresponding input for each time in x_times, in abstract units (not fmax)
binned_spk_ct: numpy array
spike counts for each bin
curve_pts: numpy array
the x values we want a frequency estimate for, in abstract units (not fmax)
sigma: float
std.dev of the gaussian weighting (in x-space)
threshold: float
if the total weight is less than this, we have no good estimate
for this x point
default_value: int or None
the value to use if the total weight is less than the threshold
(usually 0 or None)
Returns
-------
curve: numpy array
tuning curve for the given neuron's binned spike counts
"""
if len(curve_pts.shape) == 1:
curve_pts = curve_pts.reshape(curve_pts.shape[0],1)
if len(x_vals.shape) == 1:
x_vals = x_vals.reshape(x_vals.shape[0],1)
# convert binned spike counts to list of spike times
spike_times = []
for j, ct in enumerate(binned_spk_ct[:-1]):
if ct > 0:
# ct spikes within a dt are uniformly spaced within the dt
spike_times.extend(list(np.linspace(x_times[j], x_times[j+1], ct)))
# number of dimensions
D = x_vals.shape[1]
# throw away spikes that occur at the same time
t = np.unique(spike_times)
# compute frequency for each inter-spike interval
freq = 1.0/np.diff(t)
# the midpoints of each inter-spike interval
t_mid = (t[:-1] + t[1:])/2
# find the x values for each time point in t_mid
x_mid = scipy.interpolate.griddata(x_times, x_vals, (t_mid,), method='nearest')
# compute the L2 norm (squared) for each pair of points
# in x_mid and curve_pts. That is, find the squared distance in
# x-space between each of the points we're interested in (curve_pts)
# and each point we have (x_mid)
v = np.sum([(x_mid[:,i][:,None]-curve_pts[:,i][None,:])**2 for i in range(D)], axis=0)
# compute the weighting for each point (gaussian weighting)
w = np.exp(-v/(2*sigma**2))/np.sqrt(2*np.pi*sigma**2)
# for each point in curve_pts, determine the frequency using
# a weighted average (where the weight is the gaussian based on distance)
total = np.sum(w, axis=0)
safe_total = np.where(total<threshold, 1.0, total)
curve = np.sum(w*freq[:,None], axis=0)/safe_total
# for values where the weights are below some threshold,
# set them to some default value (usually None or 0)
curve[total<threshold] = default_value
return curve
############################################################################
# Estimating tuning curves and plotting
# the longer the period and higher the cycle count, the more densely
# we sample the domain
base_period = 1
cycles = 10
# stimulus parameters
Din = 1
fmax = 1000
input_dt = 1e-3
# generate the input signal
inp_times, inputs_sin = make_sine_stimulus(Din, base_period, cycles, fmax=fmax, input_dt=input_dt)
# package together the stopwatch times and input stimuli
# times must be sent in ns (and thus are multiplied by 1e9)
FUDGE = 2
curr_time = HAL.get_time()
times = curr_time + FUDGE * 1e9 + inp_times * 1e9
times_w_end = np.hstack((times, times[-1] + input_dt * 1e9))
input_vals = {i1 : (times, inputs_sin)}
# send inputs and collect spikes
print("getting spikes")
_, spikes_and_bin_times = rc.run_input_sweep(input_vals, get_raw_spikes=True,
end_time=times_w_end[-1], rel_time=False)
print("done")
# parse spike data into bin times and binned spike counts
spike_dict, spike_bin_times = spikes_and_bin_times
spike_ct = spike_dict[p1]
# we define the x-values that we will estimate firing rates for
Q = 20
x = np.linspace(-1, 1, Q)
A = np.zeros((Q, N))
# we go through each neuron's binned spike counts and extract its tuning curve
for n in range(N):
# since interpolation radius is not defined in units of fmax, we convert inputs_sin
# and x back to abstract units
A[:,n] = tuning_curve_from_parametric(inp_times, inputs_sin / fmax, spike_ct[:,n], x)
# plot the tuning curves
plt.figure(figsize=(10,4))
plt.subplot(121)
plt.plot(x, A);
plt.xlabel(r'$x$')
plt.ylabel('Firing Rate [Hz]');
plt.subplot(122)
plt.plot(x, A)
plt.ylim([0, 2e3])
plt.xlim([-1, 1])
plt.xlabel(r'$x$')
plt.ylabel('Firing Rate [Hz]');
```
## Test
```
from dcapy.schedule import Well, WellsGroup, Period, Scenario
from dcapy import dca
from dcapy.wiener import MeanReversion
from dcapy.cashflow import CashFlowParams
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
import seaborn as sns
from scipy.stats import rv_histogram
oil_mr = MeanReversion(
initial_condition = 66,
ti = 0,
generator = {'dist':'norm','kw':{'loc':0,'scale':3}},
m=50,
steps=36,
eta=0.112652,
freq_input = 'M'
)
cp = CashFlowParams(**{
'name':'Income',
'value':dca.ProbVar(dist='uniform',kw={'loc':30,'scale':35}),
'multiply': 'oil_volume',
'target':'income',
'wi':0.92
})
cp.get_value(5,freq_output='M',seed=21)
prob_prod = dca.ProbVar(dist='triang',kw={'c':0.2,'loc':15,'scale':35})
prob_oil_price = dca.ProbVar(dist='uniform',kw={'loc':30,'scale':35})
prod_rand = prob_prod.get_sample(20)
price_rand = prob_oil_price.get_sample(20)
sns.scatterplot(x=prod_rand, y=price_rand)
p = Period(
name='base',
dca = dca.Arps(
qi = prod_rand.tolist(),
di = 0.005102753514909305,
freq_di='D',
ti = 0,
b=1,
),
start = 0,
end=36,
freq_output='M',
freq_input='M',
cashflow_params = [
{
'name':'wo',
'value':-212000,
'periods':1,
'target':'capex'
},
{
'name':'Opex',
'value':-7,
'multiply': 'oil_volume',
'target':'opex'
},
{
'name':'Income',
'value':price_rand.tolist(),
'multiply': 'oil_volume',
'target':'income',
'wi':0.92
},
{
'name':'abandon',
'value':-200000,
'periods':-1,
'target':'capex'
},
]
)
s = Scenario(name='sc', periods = [p])
w = Well(
name = f'well_eagle',
scenarios = [
s
]
)
gw = WellsGroup(name='wells',wells=[w], seed=21)
gw.tree()
f = gw.generate_forecast(freq_output='M')
sns.lineplot(data=f,x=f.index, y=f.oil_rate,hue='iteration')
c = gw.generate_cashflow(freq_output='M')
print(len(c))
npv_df = gw.npv(0.15,freq_cashflow='M')/1000
npv_df['rate'] = prod_rand
npv_df['price'] = price_rand
npv_df['npv_b'] = npv_df['npv'].apply(lambda x: 1 if x>0 else 0)
sns.displot(npv_df['npv'], kde=True)
npv_dist = rv_histogram(np.histogram(npv_df['npv'], bins=100))
sns.ecdfplot(data=npv_df, x="npv", complementary=True)
npv_dist = rv_histogram(np.histogram(npv_df['npv'], bins=100))
npv_dist.sf(0)
sns.scatterplot(data=npv_df, x="rate", y="price", hue='npv_b',palette='coolwarm_r')
sns.jointplot(data=npv_df, x="rate", y="price",hue='npv_b')
```
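The probability-of-success calculation above works on any set of NPV samples; here is a self-contained sketch with synthetic draws standing in for `npv_df['npv']`:

```python
import numpy as np
from scipy.stats import rv_histogram

rng = np.random.default_rng(21)
# hypothetical NPV samples (same units as npv_df['npv'])
npv_samples = rng.normal(loc=150.0, scale=300.0, size=5000)

# fit an empirical distribution to the samples and evaluate
# the survival function at 0: P(NPV > 0), the probability of success
npv_dist = rv_histogram(np.histogram(npv_samples, bins=100))
prob_success = npv_dist.sf(0)
```

With these stand-in draws (mean 150, standard deviation 300), the probability of a positive NPV lands near 0.69, matching the normal-distribution tail probability.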
## Group Wells
```
list_wells = ['DIJAR-3','DIJAR-4','DIJAR-8','DIJAR-1-1']
initial_time = np.arange(len(list_wells))
capex = [-212000,-276000,-385000,-212000]
list_wells_sched=[]
prod_eagle = prob_prod.get_sample(1000)
price_eagle = prob_oil_price.get_sample(1000)
for i, well in enumerate(list_wells):
p = Period(
name='base',
dca = dca.Arps(
qi = prod_eagle.tolist(),
di = 0.005102753514909305/5,
freq_di='D',
ti = initial_time[i],
b=1,
),
start = 0,
end=36,
freq_output='M',
freq_input='M',
cashflow_params = [
{
'name':'wo',
'value':capex[i],
'periods':1,
'target':'capex'
},
{
'name':'Opex',
'value':-7,
'multiply': 'oil_volume',
'target':'opex'
},
{
'name':'Income',
'value':price_eagle.tolist(),
'multiply': 'oil_volume',
'target':'income',
'wi':0.92
},
{
'name':'abandon',
'value':-200000,
'periods':-1,
'target':'capex'
},
]
)
s = Scenario(name='sc', periods = [p])
w = Well(
name = list_wells[i],
scenarios = [
s
]
)
list_wells_sched.append(w)
eagle_wells = WellsGroup(name='eagle',wells=list_wells_sched)
eagle_wells.tree()
f = eagle_wells.generate_forecast(freq_output='M')
c = eagle_wells.generate_cashflow(freq_output='M')
print(len(c))
fig, ax = plt.subplots()
sns.lineplot(data=f, x='date', y='oil_rate',hue='well', ax=ax)
ax2 = ax.twinx()
sns.lineplot(data=f, x='date', y='oil_cum', hue='well',ax=ax2, palette='Pastel1')
fig, ax = plt.subplots()
total_prod = f.reset_index().groupby(['date','iteration'])[['oil_rate','oil_cum']].sum().reset_index()
total_prod.head()
sns.lineplot(data=total_prod, x='date', y='oil_rate', ax=ax)
ax2 = ax.twinx()
sns.lineplot(data=total_prod, x='date', y='oil_cum', ax=ax2, palette='Pastel1')
npv_eagle = eagle_wells.npv(0.15,freq_cashflow='M')/1000
npv_eagle['rate'] = prod_eagle
npv_eagle['price'] = price_eagle
npv_eagle['npv_b'] = npv_eagle['npv'].apply(lambda x: 1 if x>0 else 0)
sns.scatterplot(data=npv_eagle, x="rate", y="price", hue='npv_b',palette='coolwarm_r')
npv_eagle_dist = rv_histogram(np.histogram(npv_eagle['npv'], bins=100))
print(f'Prob success {npv_eagle_dist.sf(0)}')
sns.ecdfplot(data=npv_eagle, x="npv", complementary=True)
```
```
import numpy as np
import matplotlib.pyplot as plt
%matplotlib inline
from __future__ import print_function
```
# Load Data
Keras comes with an MNIST data loader: `mnist.load_data()` downloads the data from Keras's servers if it is not already present, and returns it split into training and test sets.
```
from keras.datasets import mnist
(train_images, train_labels), (test_images, test_labels) = mnist.load_data()
```
# Checkout the data
The data consists of handwritten digits from 0 to 9, along with their ground-truth labels. It has 60,000 training samples and 10,000 test samples. Each sample is a 28x28 grayscale image.
```
from keras.utils import to_categorical
print('Training data shape : ', train_images.shape, train_labels.shape)
print('Testing data shape : ', test_images.shape, test_labels.shape)
# Find the unique numbers from the train labels
classes = np.unique(train_labels)
nClasses = len(classes)
print('Total number of outputs : ', nClasses)
print('Output classes : ', classes)
plt.figure(figsize=[10,5])
# Display the first image in training data
plt.subplot(121)
plt.imshow(train_images[0,:,:], cmap='gray')
plt.title("Ground Truth : {}".format(train_labels[0]))
# Display the first image in testing data
plt.subplot(122)
plt.imshow(test_images[0,:,:], cmap='gray')
plt.title("Ground Truth : {}".format(test_labels[0]))
```
# Process the data
* The images are grayscale and the pixel values range from 0 to 255.
* Convert each image matrix ( 28x28 ) to an array ( 28*28 = 784 dimensional ) which will be fed to the network as a single feature.
* We convert the data to float and **scale** the values between 0 to 1.
* We also convert the labels from integer to **categorical ( one-hot ) encoding** since that is the format required by Keras to perform multiclass classification. One-hot encoding is a type of boolean representation of integer data. It converts the integer to an array of all zeros except a 1 at the index of the number. For example, using a one-hot encoding of 10 classes, the integer 5 will be encoded as 0000010000
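For intuition, the one-hot encoding can be reproduced in plain NumPy; this is a minimal stand-in for what `to_categorical` does with integer labels:

```python
import numpy as np

def one_hot(labels, n_classes):
    """Minimal stand-in for keras.utils.to_categorical on integer labels."""
    out = np.zeros((len(labels), n_classes))
    out[np.arange(len(labels)), labels] = 1.0
    return out

encoded = one_hot(np.array([5]), 10)   # a 1 at index 5, zeros elsewhere
```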
```
# Change each image matrix of dimension 28x28 to an array of dimension 784
dimData = np.prod(train_images.shape[1:])
train_data = train_images.reshape(train_images.shape[0], dimData)
test_data = test_images.reshape(test_images.shape[0], dimData)
# Change to float datatype
train_data = train_data.astype('float32')
test_data = test_data.astype('float32')
# Scale the data to lie between 0 to 1
train_data /= 255
test_data /= 255
# Change the labels from integer to categorical data
train_labels_one_hot = to_categorical(train_labels)
test_labels_one_hot = to_categorical(test_labels)
# Display the change for category label using one-hot encoding
print('Original label 0 : ', train_labels[0])
print('After conversion to categorical ( one-hot ) : ', train_labels_one_hot[0])
```
# Create the network
```
from keras.models import Sequential
from keras.layers import Dense
model = Sequential()
model.add(Dense(512, activation='relu', input_shape=(dimData,)))
model.add(Dense(512, activation='relu'))
model.add(Dense(nClasses, activation='softmax'))
```
# Configure the Network
```
model.compile(optimizer='rmsprop', loss='categorical_crossentropy', metrics=['accuracy'])
```
# Train the Network
```
history = model.fit(train_data, train_labels_one_hot, batch_size=256, epochs=20, verbose=1,
validation_data=(test_data, test_labels_one_hot))
```
# Plot the loss and accuracy curves
```
plt.figure(figsize=[8,6])
plt.plot(history.history['loss'],'r',linewidth=3.0)
plt.plot(history.history['val_loss'],'b',linewidth=3.0)
plt.legend(['Training loss', 'Validation Loss'],fontsize=18)
plt.xlabel('Epochs ',fontsize=16)
plt.ylabel('Loss',fontsize=16)
plt.title('Loss Curves',fontsize=16)
plt.figure(figsize=[8,6])
plt.plot(history.history['acc'],'r',linewidth=3.0)
plt.plot(history.history['val_acc'],'b',linewidth=3.0)
plt.legend(['Training Accuracy', 'Validation Accuracy'],fontsize=18)
plt.xlabel('Epochs ',fontsize=16)
plt.ylabel('Accuracy',fontsize=16)
plt.title('Accuracy Curves',fontsize=16)
```
# Evaluate the trained network on test data
```
[test_loss, test_acc] = model.evaluate(test_data, test_labels_one_hot)
print("Evaluation result on Test Data : Loss = {}, accuracy = {}".format(test_loss, test_acc))
```
# Create a new network with Dropout Regularization
```
from keras.layers import Dropout
model_reg = Sequential()
model_reg.add(Dense(512, activation='relu', input_shape=(dimData,)))
model_reg.add(Dropout(0.5))
model_reg.add(Dense(512, activation='relu'))
model_reg.add(Dropout(0.5))
model_reg.add(Dense(nClasses, activation='softmax'))
model_reg.compile(optimizer='rmsprop', loss='categorical_crossentropy', metrics=['accuracy'])
history_reg = model_reg.fit(train_data, train_labels_one_hot, batch_size=256, epochs=20, verbose=1,
validation_data=(test_data, test_labels_one_hot))
plt.figure(figsize=[8,6])
plt.plot(history_reg.history['loss'],'r',linewidth=3.0)
plt.plot(history_reg.history['val_loss'],'b',linewidth=3.0)
plt.legend(['Training loss', 'Validation Loss'],fontsize=18)
plt.xlabel('Epochs ',fontsize=16)
plt.ylabel('Loss',fontsize=16)
plt.title('Loss Curves',fontsize=16)
plt.figure(figsize=[8,6])
plt.plot(history_reg.history['acc'],'r',linewidth=3.0)
plt.plot(history_reg.history['val_acc'],'b',linewidth=3.0)
plt.legend(['Training Accuracy', 'Validation Accuracy'],fontsize=18)
plt.xlabel('Epochs ',fontsize=16)
plt.ylabel('Accuracy',fontsize=16)
plt.title('Accuracy Curves',fontsize=16)
[test_loss, test_acc] = model_reg.evaluate(test_data, test_labels_one_hot)
print("Evaluation result on Test Data : Loss = {}, accuracy = {}".format(test_loss, test_acc))
```
# Predict the first image from test data
We have seen that the first image is the number 7. Let us predict using the model
```
# Predict the probabilities for each class
model_reg.predict(test_data[[0],:])
# Predict the most likely class
model_reg.predict_classes(test_data[[0],:])
```
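`predict_classes` is simply an argmax over the predicted class probabilities, which is also how you would compute it on newer Keras versions where `predict_classes` was removed. A sketch with a hypothetical probability row:

```python
import numpy as np

# hypothetical probability row, as model_reg.predict(...) might return for a "7"
probs = np.array([[0.01, 0.02, 0.01, 0.0, 0.0, 0.0, 0.01, 0.9, 0.03, 0.02]])
predicted = np.argmax(probs, axis=-1)   # the most likely class per row
```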
# Create a SageMaker MLOps Project for Pipelines
Automatically run pipelines when code changes using Amazon SageMaker Projects
Note: This requires that you have enabled products within SageMaker Studio

```
import os
import sagemaker
import logging
import boto3
import sagemaker
import pandas as pd
from pprint import pprint
sess = sagemaker.Session()
bucket = sess.default_bucket()
role = sagemaker.get_execution_role()
region = boto3.Session().region_name
sm = boto3.Session().client(service_name="sagemaker", region_name=region)
sc = boto3.Session().client(service_name="servicecatalog", region_name=region)
sts = boto3.Session().client(service_name="sts", region_name=region)
iam = boto3.Session().client(service_name="iam", region_name=region)
codepipeline = boto3.Session().client("codepipeline", region_name=region)
```
```
describe_response = sc.describe_product(Id=sagemaker_pipeline_product_id)
sagemaker_pipeline_product_provisioning_artifact_id = describe_response["ProvisioningArtifacts"][0]["Id"]
print(sagemaker_pipeline_product_provisioning_artifact_id)
```
# Create a SageMaker Project
```
import time
timestamp = int(time.time())
sagemaker_project_name = "dsoaws-{}".format(timestamp)
create_response = sm.create_project(
ProjectName=sagemaker_project_name,
ProjectDescription="dsoaws-{}".format(timestamp),
ServiceCatalogProvisioningDetails={
"ProductId": sagemaker_pipeline_product_id,
"ProvisioningArtifactId": sagemaker_pipeline_product_provisioning_artifact_id,
},
)
sagemaker_project_id = create_response["ProjectId"]
sagemaker_project_arn = create_response["ProjectArn"]
print("Project ID {}".format(sagemaker_project_id))
print("Project ARN {}".format(sagemaker_project_arn))
sagemaker_project_name_and_id = "{}-{}".format(sagemaker_project_name, sagemaker_project_id)
print("Combined Project Name and ID: {}".format(sagemaker_project_name_and_id))
```
# _Wait for the Project to be Created_
```
%%time
import time
try:
describe_project_response = sm.describe_project(ProjectName=sagemaker_project_name)
project_status = describe_project_response["ProjectStatus"]
print("Creating Project...")
while project_status in ["Pending", "CreateInProgress"]:
print("Please wait...")
time.sleep(30)
describe_project_response = sm.describe_project(ProjectName=sagemaker_project_name)
project_status = describe_project_response["ProjectStatus"]
print("Project status: {}".format(project_status))
if project_status == "CreateCompleted":
print("Project {}".format(project_status))
else:
print("Project status: {}".format(project_status))
raise Exception("Project not created.")
except Exception as e:
print(e)
print(describe_project_response)
```
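The describe-and-sleep loop above is a pattern repeated throughout this notebook; it can be factored into a small generic helper. This is a sketch under assumptions: `describe_fn` is any callable returning the current status string, and the function/parameter names are illustrative, not part of the SageMaker SDK. The example uses a stub instead of a real AWS call:

```python
import time

def wait_for_status(describe_fn, in_progress, success, poll_seconds=30, max_polls=120):
    """Poll describe_fn() until the status leaves the in_progress set,
    then verify it reached the expected success status."""
    status = describe_fn()
    for _ in range(max_polls):
        if status not in in_progress:
            break
        time.sleep(poll_seconds)
        status = describe_fn()
    if status != success:
        raise Exception("Unexpected terminal status: {}".format(status))
    return status

# Stub that returns "Pending" twice, then "CreateCompleted"
statuses = iter(["Pending", "Pending", "CreateCompleted"])
final = wait_for_status(lambda: next(statuses),
                        in_progress={"Pending", "CreateInProgress"},
                        success="CreateCompleted",
                        poll_seconds=0)
print(final)  # -> CreateCompleted
```

The same helper would cover the later waits on pipeline executions and endpoints by swapping in the appropriate `describe` call and status sets.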
# _Wait for Project to be Created ^^ Above ^^_
# Attach IAM Policies for FeatureStore
This is the role used by CodeBuild when it starts the pipeline.
```
sc_role_name = "AmazonSageMakerServiceCatalogProductsUseRole"
account_id = sts.get_caller_identity()["Account"]
print(account_id)
response = iam.attach_role_policy(RoleName=sc_role_name, PolicyArn="arn:aws:iam::aws:policy/AmazonSageMakerFullAccess")
print(response)
response = iam.attach_role_policy(
RoleName=sc_role_name, PolicyArn="arn:aws:iam::aws:policy/AmazonSageMakerFeatureStoreAccess"
)
print(response)
response = iam.attach_role_policy(RoleName=sc_role_name, PolicyArn="arn:aws:iam::aws:policy/IAMFullAccess")
print(response)
```
# Stop the `Abalone` Sample Pipeline that Ships with SageMaker Pipelines
The sample "abalone" pipeline starts automatically when we create the project. We want to stop this pipeline to release these resources and use them for our own pipeline.
```
sample_abalone_pipeline_execution_arn = sm.list_pipeline_executions(PipelineName=sagemaker_project_name_and_id)[
"PipelineExecutionSummaries"
][0]["PipelineExecutionArn"]
print(sample_abalone_pipeline_execution_arn)
sm.stop_pipeline_execution(PipelineExecutionArn=sample_abalone_pipeline_execution_arn)
%%time
try:
describe_pipeline_execution_response = sm.describe_pipeline_execution(
PipelineExecutionArn=sample_abalone_pipeline_execution_arn
)
pipeline_execution_status = describe_pipeline_execution_response["PipelineExecutionStatus"]
while pipeline_execution_status not in ["Stopped", "Failed"]:
print("Please wait...")
time.sleep(30)
describe_pipeline_execution_response = sm.describe_pipeline_execution(
PipelineExecutionArn=sample_abalone_pipeline_execution_arn
)
pipeline_execution_status = describe_pipeline_execution_response["PipelineExecutionStatus"]
print("Pipeline execution status: {}".format(pipeline_execution_status))
if pipeline_execution_status in ["Stopped", "Failed"]:
print("Pipeline execution status {}".format(pipeline_execution_status))
else:
print("Pipeline execution status: {}".format(pipeline_execution_status))
raise Exception("Pipeline execution not stopped.")
except Exception as e:
print(e)
print(describe_pipeline_execution_response)
sm.delete_pipeline(PipelineName=sagemaker_project_name_and_id)
```
# Clone the MLOps Repositories in AWS CodeCommit
```
import os
sm_studio_root_path = "/root/"
sm_notebooks_root_path = "/home/ec2-user/SageMaker/"
root_path = sm_notebooks_root_path if os.path.isdir(sm_notebooks_root_path) else sm_studio_root_path
print(root_path)
print(region)
code_commit_repo1 = "https://git-codecommit.{}.amazonaws.com/v1/repos/sagemaker-{}-modelbuild".format(
region, sagemaker_project_name_and_id
)
print(code_commit_repo1)
sagemaker_mlops_build_code = "{}{}/sagemaker-{}-modelbuild".format(
root_path, sagemaker_project_name_and_id, sagemaker_project_name_and_id
)
print(sagemaker_mlops_build_code)
code_commit_repo2 = "https://git-codecommit.{}.amazonaws.com/v1/repos/sagemaker-{}-modeldeploy".format(
region, sagemaker_project_name_and_id
)
print(code_commit_repo2)
sagemaker_mlops_deploy_code = "{}{}/sagemaker-{}-modeldeploy".format(
root_path, sagemaker_project_name_and_id, sagemaker_project_name_and_id
)
print(sagemaker_mlops_deploy_code)
!git config --global credential.helper '!aws codecommit credential-helper $@'
!git config --global credential.UseHttpPath true
!git clone $code_commit_repo1 $sagemaker_mlops_build_code
!git clone $code_commit_repo2 $sagemaker_mlops_deploy_code
```
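The repository URL and local path construction above can be wrapped in a small helper so the `modelbuild` and `modeldeploy` repos are handled uniformly. This is a sketch (the helper name and arguments are assumptions); the URL scheme matches the CodeCommit HTTPS format used above:

```python
def codecommit_clone_targets(region, project_name_and_id, root_path, suffix):
    """Return (remote_url, local_path) for a SageMaker project CodeCommit repo."""
    repo = "sagemaker-{}-{}".format(project_name_and_id, suffix)
    remote_url = "https://git-codecommit.{}.amazonaws.com/v1/repos/{}".format(region, repo)
    local_path = "{}{}/{}".format(root_path, project_name_and_id, repo)
    return remote_url, local_path

url, path = codecommit_clone_targets("us-east-1", "dsoaws-123-p-abc", "/root/", "modelbuild")
print(url)   # -> https://git-codecommit.us-east-1.amazonaws.com/v1/repos/sagemaker-dsoaws-123-p-abc-modelbuild
print(path)  # -> /root/dsoaws-123-p-abc/sagemaker-dsoaws-123-p-abc-modelbuild
```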
# Remove Sample `Abalone` Example Code
```
!rm -rf $sagemaker_mlops_build_code/pipelines/abalone
```
# Copy Workshop Code Into Local Project Folders
```
workshop_project_build_code = "{}workshop/10_pipeline/mlops/sagemaker-project-modelbuild".format(root_path)
print(workshop_project_build_code)
workshop_project_deploy_code = "{}workshop/10_pipeline/mlops/sagemaker-project-modeldeploy".format(root_path)
print(workshop_project_deploy_code)
!cp -R $workshop_project_build_code/* $sagemaker_mlops_build_code/
!cp -R $workshop_project_deploy_code/* $sagemaker_mlops_deploy_code/
```
# Commit New Code
```
print(sagemaker_mlops_build_code)
!cd $sagemaker_mlops_build_code; git status; git add --all .; git commit -m "Data Science on AWS"; git push
!cd $sagemaker_mlops_deploy_code; git status; git add --all .; git commit -m "Data Science on AWS"; git push
```
# Store the Variables
```
%store sagemaker_mlops_build_code
%store sagemaker_mlops_deploy_code
%store sagemaker_project_name
%store sagemaker_project_id
%store sagemaker_project_name_and_id
%store sagemaker_project_arn
%store sagemaker_pipeline_product_id
%store sagemaker_pipeline_product_provisioning_artifact_id
!ls -al $sagemaker_mlops_build_code/pipelines/dsoaws/
!pygmentize $sagemaker_mlops_build_code/pipelines/dsoaws/pipeline.py
```
# Wait for Pipeline Execution to Start
Now that we have committed code, our pipeline will start automatically. Let's wait for the first execution to appear.
```
%%time
import time
from pprint import pprint
while True:
try:
print("Listing executions for our pipeline...")
list_executions_response = sm.list_pipeline_executions(PipelineName=sagemaker_project_name_and_id)[
"PipelineExecutionSummaries"
]
break
except Exception as e:
print("Please wait...")
time.sleep(30)
pprint(list_executions_response)
build_pipeline_name = "sagemaker-{}-modelbuild".format(sagemaker_project_name_and_id)
from IPython.core.display import display, HTML
display(
HTML(
'<b>Check <a target="blank" href="https://console.aws.amazon.com/codesuite/codepipeline/pipelines/{}/view?region={}">ModelBuild</a> Pipeline</b>'.format(
build_pipeline_name, region
)
)
)
```
# Wait For Pipeline Execution To Complete
```
%%time
import time
from pprint import pprint
executions_response = sm.list_pipeline_executions(PipelineName=sagemaker_project_name_and_id)[
"PipelineExecutionSummaries"
]
pipeline_execution_status = executions_response[0]["PipelineExecutionStatus"]
print(pipeline_execution_status)
while pipeline_execution_status == "Executing":
try:
executions_response = sm.list_pipeline_executions(PipelineName=sagemaker_project_name_and_id)[
"PipelineExecutionSummaries"
]
pipeline_execution_status = executions_response[0]["PipelineExecutionStatus"]
# print('Executions for our pipeline...')
# print(pipeline_execution_status)
except Exception as e:
print("Please wait...")
time.sleep(30)
pprint(executions_response)
```
# _Wait for the Pipeline Execution ^^ Above ^^ to Complete_
# List Pipeline Execution Steps
```
pipeline_execution_status = executions_response[0]["PipelineExecutionStatus"]
print(pipeline_execution_status)
pipeline_execution_arn = executions_response[0]["PipelineExecutionArn"]
print(pipeline_execution_arn)
pd.set_option("display.max_colwidth", 1000)
from pprint import pprint
steps = sm.list_pipeline_execution_steps(PipelineExecutionArn=pipeline_execution_arn)
pprint(steps)
import time
from sagemaker.lineage.visualizer import LineageTableVisualizer
viz = LineageTableVisualizer(sagemaker.session.Session())
for execution_step in reversed(steps["PipelineExecutionSteps"]):
print(execution_step)
# Workaround: LineageTableVisualizer appears to have a bug when handling the Processing step
if execution_step["StepName"] == "Processing":
processing_job_name = execution_step["Metadata"]["ProcessingJob"]["Arn"].split("/")[-1]
print(processing_job_name)
display(viz.show(processing_job_name=processing_job_name))
elif execution_step["StepName"] == "Train":
training_job_name = execution_step["Metadata"]["TrainingJob"]["Arn"].split("/")[-1]
print(training_job_name)
display(viz.show(training_job_name=training_job_name))
else:
display(viz.show(pipeline_execution_step=execution_step))
time.sleep(5)
```
# Approve the Registered Model for Staging
The pipeline that was executed created a Model Package version within the specified Model Package Group. Of particular note, the Model Package was registered with an approval status of `PendingManualApproval`.
Notes:
* You can do this within SageMaker Studio, as well. However, we are approving programmatically here in this example.
* This approval is only for Staging. For Production, you must go through the CodePipeline (deep link is below.)
```
%%time
import time
from pprint import pprint
while True:
try:
print("Listing model packages for our project...")
list_model_packages_response = sm.list_model_packages(ModelPackageGroupName=sagemaker_project_name_and_id)
break
except Exception as e:
print("Please wait...")
time.sleep(30)
pprint(list_model_packages_response)
time.sleep(30)
model_package_arn = list_model_packages_response["ModelPackageSummaryList"][0]["ModelPackageArn"]
print(model_package_arn)
model_package_update_response = sm.update_model_package(
ModelPackageArn=model_package_arn,
ModelApprovalStatus="Approved",
)
print(model_package_update_response)
time.sleep(30)
model_name = sm.list_models()["Models"][0]["ModelName"]
print(model_name)
from IPython.core.display import display, HTML
display(
HTML(
'<b>Review <a target="blank" href="https://console.aws.amazon.com/sagemaker/home?region={}#/models/{}">Model</a></b>'.format(
region, model_name
)
)
)
deploy_pipeline_name = "sagemaker-{}-modeldeploy".format(sagemaker_project_name_and_id)
from IPython.core.display import display, HTML
display(
HTML(
'<b>Check <a target="blank" href="https://console.aws.amazon.com/codesuite/codepipeline/pipelines/{}/view?region={}">ModelDeploy</a> Pipeline</b>'.format(
deploy_pipeline_name, region
)
)
)
staging_endpoint_name = "{}-staging".format(sagemaker_project_name)
from IPython.core.display import display, HTML
display(
HTML(
'<b>Review <a target="blank" href="https://console.aws.amazon.com/sagemaker/home?region={}#/endpoints/{}">SageMaker Staging REST Endpoint</a></b>'.format(
region, staging_endpoint_name
)
)
)
```
# _Wait Until the Staging Endpoint is Deployed_
```
%%time
while True:
try:
waiter = sm.get_waiter("endpoint_in_service")
print("Waiting for staging endpoint to be in `InService`...")
waiter.wait(EndpointName=staging_endpoint_name)
break
except Exception:
print("Waiting for staging endpoint to be in `Creating`...")
time.sleep(30)
print("Staging endpoint deployed.")
```
# _Wait Until the ^^ Staging Endpoint ^^ is Deployed_
# List Artifact Lineage
```
from pprint import pprint
steps = sm.list_pipeline_execution_steps(PipelineExecutionArn=pipeline_execution_arn)
import time
from sagemaker.lineage.visualizer import LineageTableVisualizer
viz = LineageTableVisualizer(sagemaker.session.Session())
for execution_step in reversed(steps["PipelineExecutionSteps"]):
print(execution_step)
# Workaround: LineageTableVisualizer appears to have a bug when handling the Processing step
if execution_step["StepName"] == "Processing":
processing_job_name = execution_step["Metadata"]["ProcessingJob"]["Arn"].split("/")[-1]
print(processing_job_name)
display(viz.show(processing_job_name=processing_job_name))
elif execution_step["StepName"] == "Train":
training_job_name = execution_step["Metadata"]["TrainingJob"]["Arn"].split("/")[-1]
print(training_job_name)
display(viz.show(training_job_name=training_job_name))
else:
display(viz.show(pipeline_execution_step=execution_step))
time.sleep(5)
```
# Run A Sample Prediction in Staging
```
import json
from sagemaker.tensorflow.model import TensorFlowPredictor
from sagemaker.serializers import JSONLinesSerializer
from sagemaker.deserializers import JSONLinesDeserializer
predictor = TensorFlowPredictor(
endpoint_name=staging_endpoint_name,
sagemaker_session=sess,
model_name="saved_model",
model_version=0,
content_type="application/jsonlines",
accept_type="application/jsonlines",
serializer=JSONLinesSerializer(),
deserializer=JSONLinesDeserializer(),
)
inputs = [{"features": ["This is great!"]}, {"features": ["This is bad."]}]
predicted_classes = predictor.predict(inputs)
for predicted_class in predicted_classes:
print("Predicted star_rating: {}".format(predicted_class))
```
# Deploy to Production
```
from IPython.core.display import display, HTML
display(
HTML(
'<b>Review <a target="blank" href="https://console.aws.amazon.com/codesuite/codepipeline/pipelines/sagemaker-{}-modeldeploy/view?region={}"> Deploy to Production </a> Pipeline</b> '.format(
sagemaker_project_name_and_id, region
)
)
)
stage_name = "DeployStaging"
action_name = "ApproveDeployment"
time.sleep(30)
stage_states = codepipeline.get_pipeline_state(name=deploy_pipeline_name)["stageStates"]
for stage_state in stage_states:
if stage_state["stageName"] == stage_name:
for action_state in stage_state["actionStates"]:
if action_state["actionName"] == action_name:
token = action_state["latestExecution"]["token"]
print(token)
response = codepipeline.put_approval_result(
pipelineName=deploy_pipeline_name,
stageName=stage_name,
actionName=action_name,
result={"summary": "Approve from Staging to Production", "status": "Approved"},
token=token,
)
```
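The nested loop that digs the approval token out of `get_pipeline_state` can be isolated into a helper and checked against a mocked response. The mock below reproduces only the fields the code above reads (`stageStates` / `actionStates` / `latestExecution` / `token`); the helper name is an assumption:

```python
def find_approval_token(stage_states, stage_name, action_name):
    """Return the latest approval token for the given stage/action, or None."""
    for stage_state in stage_states:
        if stage_state["stageName"] != stage_name:
            continue
        for action_state in stage_state["actionStates"]:
            if action_state["actionName"] == action_name:
                return action_state["latestExecution"]["token"]
    return None

# Mocked shape of codepipeline.get_pipeline_state(...)["stageStates"]
mock_stage_states = [
    {"stageName": "Build", "actionStates": []},
    {"stageName": "DeployStaging",
     "actionStates": [{"actionName": "ApproveDeployment",
                       "latestExecution": {"token": "abc-123"}}]},
]
token = find_approval_token(mock_stage_states, "DeployStaging", "ApproveDeployment")
print(token)  # -> abc-123
```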
# Get the Production Endpoint Name
```
time.sleep(30)
production_endpoint_name = "{}-prod".format(sagemaker_project_name)
from IPython.core.display import display, HTML
display(
HTML(
'<b>Review <a target="blank" href="https://console.aws.amazon.com/sagemaker/home?region={}#/endpoints/{}">SageMaker Production REST Endpoint</a></b>'.format(
region, production_endpoint_name
)
)
)
```
# _Wait Until the Production Endpoint is Deployed_
```
%%time
while True:
try:
waiter = sm.get_waiter("endpoint_in_service")
print("Waiting for production endpoint to be in `InService`...")
waiter.wait(EndpointName=production_endpoint_name)
break
except Exception:
print("Waiting for production endpoint to be in `Creating`...")
time.sleep(30)
print("Production endpoint deployed.")
```
# _Wait Until the ^^ Production Endpoint ^^ is Deployed_
```
import json
from sagemaker.tensorflow.model import TensorFlowPredictor
from sagemaker.serializers import JSONLinesSerializer
from sagemaker.deserializers import JSONLinesDeserializer
predictor = TensorFlowPredictor(
endpoint_name=production_endpoint_name,
sagemaker_session=sess,
model_name="saved_model",
model_version=0,
content_type="application/jsonlines",
accept_type="application/jsonlines",
serializer=JSONLinesSerializer(),
deserializer=JSONLinesDeserializer(),
)
inputs = [{"features": ["This is great!"]}, {"features": ["This is bad."]}]
predicted_classes = predictor.predict(inputs)
for predicted_class in predicted_classes:
print("Predicted star_rating: {}".format(predicted_class))
```
# List Artifact Lineage
```
from pprint import pprint
steps = sm.list_pipeline_execution_steps(PipelineExecutionArn=pipeline_execution_arn)
import time
from sagemaker.lineage.visualizer import LineageTableVisualizer
viz = LineageTableVisualizer(sagemaker.session.Session())
for execution_step in reversed(steps["PipelineExecutionSteps"]):
print(execution_step)
# Workaround: LineageTableVisualizer appears to have a bug when handling the Processing step
if execution_step["StepName"] == "Processing":
processing_job_name = execution_step["Metadata"]["ProcessingJob"]["Arn"].split("/")[-1]
print(processing_job_name)
display(viz.show(processing_job_name=processing_job_name))
elif execution_step["StepName"] == "Train":
training_job_name = execution_step["Metadata"]["TrainingJob"]["Arn"].split("/")[-1]
print(training_job_name)
display(viz.show(training_job_name=training_job_name))
else:
display(viz.show(pipeline_execution_step=execution_step))
time.sleep(5)
```
# Release Resources
```
%%html
<p><b>Shutting down your kernel for this notebook to release resources.</b></p>
<button class="sm-command-button" data-commandlinker-command="kernelmenu:shutdown" style="display:none;">Shutdown Kernel</button>
<script>
try {
els = document.getElementsByClassName("sm-command-button");
els[0].click();
}
catch(err) {
// NoOp
}
</script>
%%javascript
try {
Jupyter.notebook.save_checkpoint();
Jupyter.notebook.session.delete();
}
catch(err) {
// NoOp
}
```
```
import pandas as pd
import numpy as np
paths_df = pd.read_csv('s3://daniel.le-work/MEL_project/DL20190110_subset2_paths.csv')
paths_vec = paths_df.path.values.tolist()
len(paths_vec)
import time
for i in range(10):
start_time = time.time()
time.sleep(0.1)
etime = time.time() - start_time
with open('/home/ubuntu/data/DL20181011_melanocyte_test_data/DL20190110_outrigger_timelog.txt', 'a') as f:
f.write(f'{etime}\n')
# must establish outrigger environment
# create a docker image to run outrigger
from joblib import Parallel, delayed
from subprocess import run
import os
from shutil import copyfile,rmtree
import multiprocessing
num_cores = multiprocessing.cpu_count()
try:
rmtree('/GB100_1/outrigger_wkdir/results')
except:
pass
os.mkdir('/GB100_1/outrigger_wkdir/results')
def myfun(s3path):
start_time = time.time()
# parse path for prefix to name outputs
file_prefix = s3path.split('.')[0].split('/')[-1]
prefix = '_'.join(file_prefix.split('_')[:2])
plate = file_prefix.split('_')[1]
wkdir = f'/GB100_1/outrigger_wkdir/{prefix}'
output_dir = '/GB100_1/outrigger_wkdir/results'
results_subdir = f'{output_dir}/{plate}'
# create dir structure
os.mkdir(wkdir)
for target_dir in [output_dir, results_subdir]:
if not os.path.isdir(target_dir):
os.mkdir(target_dir)
gtf_file = '/GB100_1/ref/HG38-PLUS/HG38-PLUS/genes/genes.gtf'
fa_file = '/GB100_1/ref/HG38-PLUS/HG38-PLUS/fasta/genome.fa'
# pull input from s3
os.chdir('/home/ubuntu/')
run(['aws', 's3', 'cp',
s3path, f'{wkdir}/'])
# run outrigger (approx. 10 min per sample)
os.chdir(wkdir)
run(['outrigger', 'index',
'--sj-out-tab', f'{file_prefix}.homo.SJ.out.tab',
'--gtf', gtf_file])
try:
os.chdir(wkdir)
run(['outrigger', 'validate',
'--genome', 'hg38',
'--fasta', fa_file])
except:
pass
# compile results
for subtype in ['se','mxe']:
try:
# /GB100_1/outrigger_wkdir/A10_B000873/outrigger_output/index/se/validated/events.csv
copyfile(f'{wkdir}/outrigger_output/index/{subtype}/validated/events.csv',
f'{results_subdir}/{prefix}_{subtype}.csv')
except:
os.mknod(f'{results_subdir}/{prefix}_{subtype}.csv')
# remove subdir
rmtree(wkdir)
# record execution time
etime = time.time() - start_time
with open('/home/ubuntu/data/DL20181011_melanocyte_test_data/DL20190110_outrigger_timelog.txt', 'a') as f:
f.write(f'{etime}\n')
# randomly sample 10 paths to time and process
matched_path = np.random.choice(paths_vec, 10)
Parallel(n_jobs=1,
backend="threading")(map(delayed(myfun), matched_path))
df = pd.read_csv('/home/ubuntu/data/DL20181011_melanocyte_test_data/DL20190110_outrigger_timelog.txt', header = None)
df.columns = ['sec']
df['min'] = df.sec / 60
df.describe()
paths_vec[:2]
jobs_queue = pd.DataFrame({'ec2_id': ['foo', 'i-0f95ea0e27dc6f375'],'path':paths_vec[:2]})
jobs_queue.to_csv('/home/ubuntu/data/DL20181011_melanocyte_test_data/jobs_queue.csv')
import subprocess
jobs_path = 's3://daniel.le-work/MEL_project/DL20190111_outrigger/jobs_queue.csv'
def pull_job(jobs_path):
s3path = None
# get instance id
proc = subprocess.run(['ec2metadata', '--instance-id'],
encoding='utf-8',
stdout=subprocess.PIPE)
ec2_id = proc.stdout.split('\n')[0]
# pull jobs queue
jobs_df = pd.read_csv(jobs_path)
if ec2_id in jobs_df.ec2_id.values:
s3path = jobs_df[jobs_df.ec2_id == ec2_id].path.tolist()[0]
elif len(jobs_df) > 0:
print('No matching jobs')
else:
print('No jobs in queue')
return s3path
s3path = pull_job(jobs_path)
if s3path is None:
print('failed')
jobs_df[jobs_df.ec2_id == ec2_id].path.tolist()[0]
ec2_id = 'i-0f95ea0e27dc6f375'
df = pd.DataFrame({'path':['s3://czbiohub-seqbot/fastqs/180301_NB501961_0074_AH5HKKBGX5/homo_results/A10_B000873_S714.homo.SJ.out.tab']})
df.to_csv(f'/home/ubuntu/data/DL20181011_melanocyte_test_data/{ec2_id}.job')
jobs_file = 's3://daniel.le-work/MEL_project/DL20190111_outrigger/queue/i-0f95ea0e27dc6f375.job'
try:
jobs_df = pd.read_csv(jobs_file)
s3path = jobs_df.path.values[0]
print(s3path)
except:
pass
```
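The filename parsing inside `myfun` (deriving `file_prefix`, `prefix`, and `plate` from the S3 path) is pure string manipulation and can be checked standalone. A sketch (the function name is an assumption):

```python
def parse_sj_path(s3path):
    """Derive (file_prefix, prefix, plate) from an SJ.out.tab S3 path.
    Note: splits on the first '.', so a bucket name containing a dot
    (e.g. daniel.le-work) would break this parsing."""
    file_prefix = s3path.split('.')[0].split('/')[-1]  # e.g. A10_B000873_S714
    prefix = '_'.join(file_prefix.split('_')[:2])      # e.g. A10_B000873
    plate = file_prefix.split('_')[1]                  # e.g. B000873
    return file_prefix, prefix, plate

path = ('s3://czbiohub-seqbot/fastqs/180301_NB501961_0074_AH5HKKBGX5/'
        'homo_results/A10_B000873_S714.homo.SJ.out.tab')
print(parse_sj_path(path))  # -> ('A10_B000873_S714', 'A10_B000873', 'B000873')
```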
```
from __future__ import division, print_function
%matplotlib inline
from importlib import reload # Python 3
import utils; reload(utils)
from utils import *
```
## Setup
```
batch_size=64
from keras.datasets import mnist
(X_train, y_train), (X_test, y_test) = mnist.load_data()
(X_train.shape, y_train.shape, X_test.shape, y_test.shape)
X_test = np.expand_dims(X_test,1)
X_train = np.expand_dims(X_train,1)
X_train.shape
y_train[:5]
y_train = onehot(y_train)
y_test = onehot(y_test)
y_train[:5]
mean_px = X_train.mean().astype(np.float32)
std_px = X_train.std().astype(np.float32)
def norm_input(x): return (x-mean_px)/std_px
```
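`onehot` is imported from the course's `utils` module. A minimal NumPy equivalent of the one-hot encoding, plus the pixel standardization that `norm_input` applies (assuming integer labels 0-9; the helper name is an assumption):

```python
import numpy as np

def onehot_np(labels, num_classes=10):
    """One-hot encode an integer label vector: shape (n,) -> (n, num_classes)."""
    out = np.zeros((len(labels), num_classes), dtype=np.float32)
    out[np.arange(len(labels)), labels] = 1.0
    return out

y = np.array([5, 0, 4])
print(onehot_np(y))  # rows with a single 1.0 at positions 5, 0, 4

# Standardization mirrors norm_input: subtract the training mean, divide by std
x = np.array([0.0, 127.5, 255.0], dtype=np.float32)
normed = (x - x.mean()) / x.std()
print(normed.mean())  # -> 0.0
```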
## Linear model
```
def get_lin_model():
model = Sequential([
Lambda(norm_input, input_shape=(1,28,28)),
Flatten(),
Dense(10, activation='softmax')
])
model.compile(Adam(), loss='categorical_crossentropy', metrics=['accuracy'])
return model
lm = get_lin_model()
gen = image.ImageDataGenerator()
batches = gen.flow(X_train, y_train, batch_size=batch_size)
test_batches = gen.flow(X_test, y_test, batch_size=batch_size)
steps_per_epoch = int(np.ceil(batches.n/batch_size))
validation_steps = int(np.ceil(test_batches.n/batch_size))
lm.fit_generator(batches, steps_per_epoch=steps_per_epoch, epochs=1,
validation_data=test_batches, validation_steps=validation_steps)
lm.optimizer.lr=0.1
lm.fit_generator(batches, steps_per_epoch=steps_per_epoch, epochs=1,
validation_data=test_batches, validation_steps=validation_steps)
lm.optimizer.lr=0.01
lm.fit_generator(batches, steps_per_epoch=steps_per_epoch, epochs=4,
validation_data=test_batches, validation_steps=validation_steps)
```
## Single dense layer
```
def get_fc_model():
model = Sequential([
Lambda(norm_input, input_shape=(1,28,28)),
Flatten(),
Dense(512, activation='relu'),
Dense(10, activation='softmax')
])
model.compile(Adam(), loss='categorical_crossentropy', metrics=['accuracy'])
return model
fc = get_fc_model()
fc.fit_generator(batches, steps_per_epoch=steps_per_epoch, epochs=1,
validation_data=test_batches, validation_steps=validation_steps)
fc.optimizer.lr=0.1
fc.fit_generator(batches, steps_per_epoch=steps_per_epoch, epochs=4,
validation_data=test_batches, validation_steps=validation_steps)
fc.optimizer.lr=0.01
fc.fit_generator(batches, steps_per_epoch=steps_per_epoch, epochs=4,
validation_data=test_batches, validation_steps=validation_steps)
```
## Basic 'VGG-style' CNN
```
def get_model():
model = Sequential([
Lambda(norm_input, input_shape=(1,28,28)),
Conv2D(32,(3,3), activation='relu'),
Conv2D(32,(3,3), activation='relu'),
MaxPooling2D(),
Conv2D(64,(3,3), activation='relu'),
Conv2D(64,(3,3), activation='relu'),
MaxPooling2D(),
Flatten(),
Dense(512, activation='relu'),
Dense(10, activation='softmax')
])
model.compile(Adam(), loss='categorical_crossentropy', metrics=['accuracy'])
return model
model = get_model()
model.fit_generator(batches, steps_per_epoch=steps_per_epoch, epochs=1,
validation_data=test_batches, validation_steps=validation_steps)
model.optimizer.lr=0.1
model.fit_generator(batches, steps_per_epoch=steps_per_epoch, epochs=1,
validation_data=test_batches, validation_steps=validation_steps)
model.optimizer.lr=0.01
model.fit_generator(batches, steps_per_epoch=steps_per_epoch, epochs=8,
validation_data=test_batches, validation_steps=validation_steps)
```
## Data augmentation
```
model = get_model()
gen = image.ImageDataGenerator(rotation_range=8, width_shift_range=0.08, shear_range=0.3,
height_shift_range=0.08, zoom_range=0.08)
batches = gen.flow(X_train, y_train, batch_size=batch_size)
test_batches = gen.flow(X_test, y_test, batch_size=batch_size)
steps_per_epoch = int(np.ceil(batches.n/batch_size))
validation_steps = int(np.ceil(test_batches.n/batch_size))
model.fit_generator(batches, steps_per_epoch=steps_per_epoch, epochs=1,
validation_data=test_batches, validation_steps=validation_steps)
model.optimizer.lr=0.1
model.fit_generator(batches, steps_per_epoch=steps_per_epoch, epochs=4,
validation_data=test_batches, validation_steps=validation_steps)
model.optimizer.lr=0.01
model.fit_generator(batches, steps_per_epoch=steps_per_epoch, epochs=8,
validation_data=test_batches, validation_steps=validation_steps)
model.optimizer.lr=0.001
model.fit_generator(batches, steps_per_epoch=steps_per_epoch, epochs=14,
validation_data=test_batches, validation_steps=validation_steps)
model.optimizer.lr=0.0001
model.fit_generator(batches, steps_per_epoch=steps_per_epoch, epochs=10,
validation_data=test_batches, validation_steps=validation_steps)
```
## Batchnorm + data augmentation
```
def get_model_bn():
model = Sequential([
Lambda(norm_input, input_shape=(1,28,28)),
Conv2D(32,(3,3), activation='relu'),
BatchNormalization(axis=1),
Conv2D(32,(3,3), activation='relu'),
MaxPooling2D(),
BatchNormalization(axis=1),
Conv2D(64,(3,3), activation='relu'),
BatchNormalization(axis=1),
Conv2D(64,(3,3), activation='relu'),
MaxPooling2D(),
Flatten(),
BatchNormalization(),
Dense(512, activation='relu'),
BatchNormalization(),
Dense(10, activation='softmax')
])
model.compile(Adam(), loss='categorical_crossentropy', metrics=['accuracy'])
return model
model = get_model_bn()
model.fit_generator(batches, steps_per_epoch=steps_per_epoch, epochs=1,
validation_data=test_batches, validation_steps=validation_steps)
model.optimizer.lr=0.1
model.fit_generator(batches, steps_per_epoch=steps_per_epoch, epochs=4,
validation_data=test_batches, validation_steps=validation_steps)
model.optimizer.lr=0.01
model.fit_generator(batches, steps_per_epoch=steps_per_epoch, epochs=12,
validation_data=test_batches, validation_steps=validation_steps)
model.optimizer.lr=0.001
model.fit_generator(batches, steps_per_epoch=steps_per_epoch, epochs=12,
validation_data=test_batches, validation_steps=validation_steps)
```
## Batchnorm + dropout + data augmentation
```
def get_model_bn_do():
model = Sequential([
Lambda(norm_input, input_shape=(1,28,28)),
Conv2D(32,(3,3), activation='relu'),
BatchNormalization(axis=1),
Conv2D(32,(3,3), activation='relu'),
MaxPooling2D(),
BatchNormalization(axis=1),
Conv2D(64,(3,3), activation='relu'),
BatchNormalization(axis=1),
Conv2D(64,(3,3), activation='relu'),
MaxPooling2D(),
Flatten(),
BatchNormalization(),
Dense(512, activation='relu'),
BatchNormalization(),
Dropout(0.5),
Dense(10, activation='softmax')
])
model.compile(Adam(), loss='categorical_crossentropy', metrics=['accuracy'])
return model
model = get_model_bn_do()
model.fit_generator(batches, steps_per_epoch=steps_per_epoch, epochs=1,
validation_data=test_batches, validation_steps=validation_steps)
model.optimizer.lr=0.1
model.fit_generator(batches, steps_per_epoch=steps_per_epoch, epochs=4,
validation_data=test_batches, validation_steps=validation_steps)
model.optimizer.lr=0.01
model.fit_generator(batches, steps_per_epoch=steps_per_epoch, epochs=12,
validation_data=test_batches, validation_steps=validation_steps)
model.optimizer.lr=0.001
model.fit_generator(batches, steps_per_epoch=steps_per_epoch, epochs=1,
validation_data=test_batches, validation_steps=validation_steps)
```
## Ensembling
```
def fit_model():
model = get_model_bn_do()
model.fit_generator(batches, steps_per_epoch=steps_per_epoch, epochs=1, verbose=0,
validation_data=test_batches, validation_steps=validation_steps)
model.optimizer.lr=0.1
model.fit_generator(batches, steps_per_epoch=steps_per_epoch, epochs=4, verbose=0,
validation_data=test_batches, validation_steps=validation_steps)
model.optimizer.lr=0.01
model.fit_generator(batches, steps_per_epoch=steps_per_epoch, epochs=12, verbose=0,
validation_data=test_batches, validation_steps=validation_steps)
model.optimizer.lr=0.001
model.fit_generator(batches, steps_per_epoch=steps_per_epoch, epochs=18, verbose=0,
validation_data=test_batches, validation_steps=validation_steps)
return model
models = [fit_model() for i in range(6)]
import os
user_home = os.path.expanduser('~')
path = os.path.join(user_home, "pj/fastai/data/MNIST_data/")
model_path = path + 'models/'
# path = "data/mnist/"
# model_path = path + 'models/'
for i,m in enumerate(models):
m.save_weights(model_path+'cnn-mnist23-'+str(i)+'.pkl')
eval_batch_size = 256
evals = np.array([m.evaluate(X_test, y_test, batch_size=eval_batch_size) for m in models])
evals.mean(axis=0)
all_preds = np.stack([m.predict(X_test, batch_size=eval_batch_size) for m in models])
all_preds.shape
avg_preds = all_preds.mean(axis=0)
keras.metrics.categorical_accuracy(y_test, avg_preds).eval()
```
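The ensembling step above (stack per-model probability predictions, average over models, then take the most likely class) can be illustrated with toy arrays:

```python
import numpy as np

# Two toy "models" predicting probabilities for 3 samples x 2 classes
preds_model_a = np.array([[0.9, 0.1], [0.4, 0.6], [0.2, 0.8]])
preds_model_b = np.array([[0.7, 0.3], [0.3, 0.7], [0.1, 0.9]])

all_preds = np.stack([preds_model_a, preds_model_b])  # shape (n_models, n_samples, n_classes)
avg_preds = all_preds.mean(axis=0)                    # average over the models axis
pred_classes = avg_preds.argmax(axis=1)               # most likely class per sample
print(pred_classes)  # -> [0 1 1]
```

Averaging probabilities before the argmax is what lets disagreeing models cancel each other's errors, which is why the averaged accuracy typically beats the mean of the individual accuracies.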
<img alt="QuantRocket logo" src="https://www.quantrocket.com/assets/img/notebook-header-logo.png">
<a href="https://www.quantrocket.com/disclaimer/">Disclaimer</a>
# Quantitative Finance Lecture Series
Learn quantitative finance with this comprehensive lecture series. Adapted from the Quantopian Lecture Series ([GitHub source repo](https://github.com/quantopian/research_public/tree/3bd730f292aa76d7c546f9a400583399030c8f65)) and updated to use QuantRocket APIs and data.
## Background
Quantopian was a crowd-sourced hedge fund that operated a research and backtesting platform where users could develop trading algorithms, a selection of which were licensed by Quantopian for inclusion in its hedge fund. To support its business objective, Quantopian developed several open-source Python libraries, including Zipline, one of QuantRocket's two backtesters. Quantopian also developed a comprehensive lecture series on quantitative finance, which it released under a Creative Commons license. The lectures were originally developed to use Python 2 and Quantopian data, but have here been updated to use Python 3 and QuantRocket data.
Following the closure of its crowd-sourced hedge fund, Quantopian shut down in 2020.
## Lecture Series Overview
The lectures are divided into three sections:
1. **Intro to Python**: These lectures provide an introduction to Jupyter notebooks, Python, and several key data processing libraries in Python.
2. **Topics in Statistics**: Covering a variety of topics that are broadly applicable in the field of statistics, these lectures are not specific to finance but typically include example applications to finance.
3. **Topics in Finance**: Building on the statistics lectures, these lectures dive into topics more specific to finance and portfolio management.
For computational analysis, the lectures rely most heavily on Python's numerous scientific computing libraries. Some of the lectures on financial topics also rely on the Pipeline API, which is part of Zipline, a backtester which was created by Quantopian and is available in QuantRocket. For data, the lectures utilize randomly generated data as well as actual financial data from QuantRocket.
The primary focus of Quantopian's crowd-sourced hedge fund was to develop market-neutral, long-short equity strategies for institutional investors. Consequently, the lectures are tilted toward this particular style of investment.
Although the lectures use QuantRocket APIs to load data, they are not primarily QuantRocket tutorials. The purpose of the lectures is to provide a theoretical foundation in quantitative finance. Most of the computational methods employed in the lectures (excluding the Pipeline API) are readily available in any Python research environment. For practical guidance on how to backtest and deploy trading strategies in QuantRocket, other examples in the [Code Library](https://www.quantrocket.com/code/) may be more suitable.
## Data Requirements
Most lectures can be completed using free sample data from QuantRocket. Some lectures are designed to use larger datasets that require a QuantRocket subscription. These lectures can still be completed without a QuantRocket subscription by simply reading along (without querying data) or by substituting free sample data.
The data requirements for each lecture are noted in the table below.
## Version
Due to the large variety of computational libraries used, the lectures are somewhat sensitive to version changes in the underlying libraries. The lectures were last updated and verified to be working with the library versions available in QuantRocket version 2.6.0.
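As a quick sanity check before running the lectures, you can print the versions of the core scientific libraries in your own environment (a minimal sketch, not an official QuantRocket command):

```python
# Print the versions of the core scientific libraries the lectures rely on
import matplotlib
import numpy
import pandas

for lib in (numpy, pandas, matplotlib):
    print(lib.__name__, lib.__version__)
```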
## Contents
| | Title | Description | Required Dataset(s)
| :- | :- | :- | :-
| **Data Collection Tutorials** | | |
| Data Collection 1 | [Sample Dataset for US Stocks](Part1-Data-USStock-Sample.ipynb) | Collect free sample data (minute and daily) for US Stocks | -
| Data Collection 2 | [Full Dataset for US Stocks](Part2-Data-USStock.ipynb) | Collect the full dataset for US Stocks. QuantRocket subscription required. | -
| Data Collection 3 | [Sharadar US Fundamentals](Part3-Data-Sharadar-Fundamentals.ipynb) | Collect fundamental data for US Stocks from Sharadar. QuantRocket subscription and Sharadar subscription required. | -
| **Intro to Python** | | |
| Lecture 1 | [Introduction to Notebooks](Lecture01-Introduction-to-Notebooks.ipynb) | Introductory tutorial demonstrating how to use Jupyter Notebooks | Sample Data
| Lecture 2 | [Introduction to Python](Lecture02-Introduction-to-Python.ipynb) | Basic introduction to Python semantics and data structures | -
| Lecture 3 | [Introduction to NumPy](Lecture03-Introduction-to-NumPy.ipynb) | Introduction to NumPy, a data computation library | -
| Lecture 4 | [Introduction to pandas](Lecture04-Introduction-to-Pandas.ipynb) | Introduction to pandas, a library for managing and analyzing data | Sample Data
| Lecture 5 | [Plotting Data](Lecture05-Plotting-Data.ipynb) | How to plot data with matplotlib | Sample Data
| **Topics in Statistics** | | |
| Lecture 6 | [Means](Lecture06-Means.ipynb) | Understanding and calculating different types of means | Sample Data
| Lecture 7 | [Variance](Lecture07-Variance.ipynb) | Understanding and calculating measures of dispersion | -
| Lecture 8 | [Statistical Moments](Lecture08-Statistical-Moments.ipynb) | Understanding skewness and kurtosis | Sample Data
| Lecture 9 | [Linear Correlation Analysis](Lecture09-Linear-Correlation-Analysis.ipynb) | Understanding correlation and its relation to variance | Sample Data
| Lecture 10 | [Instability of Estimates](Lecture10-Instability-of-Estimates.ipynb) | How estimates can change with new data observations | Sample Data
| Lecture 11 | [Random Variables](Lecture11-Random-Variables.ipynb) | Understanding discrete and continuous random variables and probability distributions | Sample Data
| Lecture 12 | [Linear Regression](Lecture12-Linear-Regression.ipynb) | Using linear regression to understand the relationship between two variables | Sample Data
| Lecture 13 | [Maximum Likelihood Estimation](Lecture13-Maximum-Likelihood-Estimation.ipynb) | Basic introduction to maximum likelihood estimation, a method of estimating a probability distribution | Sample Data
| Lecture 14 | [Regression Model Instability](Lecture14-Regression-Model-Instability.ipynb) | Why regression coefficients can change due to factors like regime change and multicollinearity | Sample Data
| Lecture 15 | [Multiple Linear Regression](Lecture15-Multiple-Linear-Regression.ipynb) | Multiple linear regression generalizes linear regression to multiple variables | Sample Data
| Lecture 16 | [Violations of Regression Models](Lecture16-Violations-of-Regression-Models.ipynb) | Different scenarios that can violate regression assumptions | Full US Stock
| Lecture 17 | [Model Misspecification](Lecture17-Model-Misspecification.ipynb) | What can cause a bad model to look good | Sample Data<br>Full US Stock<br>Sharadar fundamentals
| Lecture 18 | [Residual Analysis](Lecture18-Residual-Analysis.ipynb) | How to analyze residuals to build healthier models | Sample Data
| Lecture 19 | [Dangers of Overfitting](Lecture19-Dangers-of-Overfitting.ipynb) | How overfitting can make a bad model seem attractive | Sample Data
| Lecture 20 | [Hypothesis Testing](Lecture20-Hypothesis-Testing.ipynb) | Statistical techniques for rejecting the null hypothesis | Sample Data
| Lecture 21 | [Confidence Intervals](Lecture21-Confidence-Intervals.ipynb) | How to measure and interpret confidence intervals | -
| Lecture 22 | [Spearman Rank Correlation](Lecture22-Spearman-Rank-Correlation.ipynb) | How to measure monotonic but non-linear relationships | Sample Data
| Lecture 23 | [p-Hacking and Multiple Comparisons Bias](Lecture23-p-Hacking-and-Multiple-Comparisons-Bias.ipynb) | How to avoid getting tricked by false positives | -
| **Topics in Finance** | | |
| Lecture 24 | [Leverage](Lecture24-Leverage.ipynb) | Using borrowed money to amplify returns | -
| Lecture 25 | [Position Concentration Risk](Lecture25-Position-Concentration-Risk.ipynb) | The riskiness of investing in a small number of assets | -
| Lecture 26 | [Estimating Covariance Matrices](Lecture26-Estimating-Covariance-Matrices.ipynb) | Using covariance matrices to model portfolio volatility | Full US Stock
| Lecture 27 | [Introduction to Volume, Slippage, and Liquidity](Lecture27-Introduction-to-Volume-Slippage-and-Liquidity.ipynb) | An overview of liquidity and how it can affect your trading strategies | Sample Data
| Lecture 28 | [Market Impact Models](Lecture28-Market-Impact-Models.ipynb) | Understanding how your own trading activity moves the market price | Sample Data
| Lecture 29 | [Universe Selection](Lecture29-Universe-Selection.ipynb) | Defining a trading universe | Full US Stock<br>Sharadar fundamentals
| Lecture 30 | [The Capital Asset Pricing Model and Arbitrage Pricing Theory](Lecture30-CAPM-and-Arbitrage-Pricing-Theory.ipynb) | Using CAPM and Arbitrage Pricing Theory to evaluate risk | Full US Stock<br>Sharadar fundamentals
| Lecture 31 | [Beta Hedging](Lecture31-Beta-Hedging.ipynb) | Hedging your algorithm's market risk | Sample Data
| Lecture 32 | [Fundamental Factor Models](Lecture32-Fundamental-Factor-Models.ipynb) | Using fundamental data in factor models | Full US Stock<br>Sharadar fundamentals
| Lecture 33 | [Portfolio Analysis with pyfolio](Lecture33-Portfolio-Analysis-with-pyfolio.ipynb) | Evaluating backtest performance using pyfolio | -
| Lecture 34 | [Factor Risk Exposure](Lecture34-Factor-Risk-Exposure.ipynb) | Understanding and measuring your algorithm's exposure to common risk factors | Full US Stock<br>Sharadar fundamentals
| Lecture 35 | [Risk-Constrained Portfolio Optimization](Lecture35-Risk-Constrained-Portfolio-Optimization.ipynb) | Managing risk factor exposure | Full US Stock<br>Sharadar fundamentals
| Lecture 36 | [Principal Component Analysis](Lecture36-PCA.ipynb) | Using PCA to understand the key drivers of portfolio returns | Full US Stock
| Lecture 37 | [Long-Short Equity](Lecture37-Long-Short-Equity.ipynb) | Introduction to market-neutral strategies | -
| Lecture 38 | [Factor Analysis with Alphalens](Lecture38-Factor-Analysis-with-Alphalens.ipynb) | Using Alphalens to evaluate alpha factors | Full US Stock
| Lecture 39 | [Why You Should Hedge Beta and Sector Exposures](Lecture39-Why-Hedge.ipynb) | How hedging common risk exposures can improve portfolio performance | Full US Stock
| Lecture 40 | [VaR and CVaR](Lecture40-VaR-and-CVaR.ipynb) | Using Value at Risk to estimate potential loss | Full US Stock
| Lecture 41 | [Integration, Cointegration, and Stationarity](Lecture41-Integration-Cointegration-and-Stationarity.ipynb) | Introduction to stationarity and cointegration, which underpins pairs trading | Sample Data<br>Full US Stock
| Lecture 42 | [Introduction to Pairs Trading](Lecture42-Introduction-to-Pairs-Trading.ipynb) | A theoretical and practical introduction to pairs trading | Full US Stock
| Lecture 43 | [Autocorrelation and AR Models](Lecture43-Autocorrelation-and-AR-Models.ipynb) | Understanding how autocorrelation creates tail risk | -
| Lecture 44 | [ARCH, GARCH, and GMM](Lecture44-ARCH-GARCH-and-GMM.ipynb) | Introduction to volatility forecasting models | -
| Lecture 45 | [Kalman Filters](Lecture45-Kalman-Filters.ipynb) | Using Kalman filters to extract signals from noisy data | Sample Data
## License
© Copyright Quantopian, Inc.<br>
© Modifications Copyright QuantRocket LLC
Licensed under the [Creative Commons Attribution 4.0](https://creativecommons.org/licenses/by/4.0/legalcode).
# Implementing Logistic Regression with PaddlePaddle - Recognizing Cats
Welcome to this experiment! Here you will learn to use PaddlePaddle to implement a logistic regression model that recognizes cats. Work through the material step by step to complete the training, deepen your understanding of the theory behind logistic regression, connect the individual concepts, and gain an overall grasp of neural networks and deep learning.
**You will learn to:**
- Preprocess image data
- Implement a logistic regression model with the PaddlePaddle framework
Before starting the experiment, here is a brief introduction to image processing:
**Image processing**
Because recognizing cats involves image processing, let us briefly describe how computers store images. In a computer, a picture is stored as three separate matrices, corresponding to the red, green, and blue color channels shown below. If the picture is 64*64 pixels, there are three 64*64 matrices. To put these pixel values into a feature vector, define a vector X that lists all the pixel values from the three color channels. For a 64*64 picture, the total dimension of X is 64*64*3, i.e. 12288. Such a 12288-dimensional vector is one training example for the logistic regression model.
<img src="images/image_to_vector.png" style="width:550px;height:300px;">
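The flattening described above can be sketched in a few lines of NumPy (using a random stand-in image rather than the actual dataset):

```python
import numpy as np

# A hypothetical 64x64 RGB image: one 0-255 value per pixel per channel
image = np.random.randint(0, 256, size=(64, 64, 3))

# List every pixel value from the three channels in a single column vector
x = image.reshape(-1, 1)
print(x.shape)  # (12288, 1)
```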
Now, let's formally begin the experiment!
## 1 - Importing libraries
First, load the libraries we will use:
- numpy: a fundamental Python library for scientific computing
- matplotlib.pyplot: used for plotting, e.g. when validating model accuracy and showing how the cost evolves
- lr_utils: defines the load_dataset() method for loading the data
- paddle.v2: the PaddlePaddle deep learning framework
```
import sys
import numpy as np
import lr_utils
import matplotlib.pyplot as plt
import paddle.v2 as paddle
%matplotlib inline
```
## 2 - Data preprocessing
Here is a brief description of the dataset and its structure. The dataset is stored as an HDF5 file and contains:
- Training set: m_train images, each labeled cat (y=1) or non-cat (y=0).
- Test set: m_test images, labeled the same way.
Each image is stored with shape (num_px, num_px, 3), where num_px is the image's height or width (the dataset's images are square) and 3 is the number of color channels (RGB).
A single line of code reads the data. You do not need to understand the loading process yet; just call load_dataset() and store the five return values for later use.
Note that the "_orig" suffix marks raw data that will be processed further; data without the suffix will not be processed further.
```
train_set_x_orig, train_set_y, test_set_x_orig, test_set_y, classes = lr_utils.load_dataset()
# Example image
index = 23
plt.imshow(train_set_x_orig[index])
print ("y = " + str(train_set_y[:, index]) + ", it's a '" + classes[np.squeeze(train_set_y[:, index])].decode("utf-8") + "' picture.")
```
After loading the data, the next step is to extract its basic information, such as the number of training examples m_train, the number of test examples m_test, and the image height/width num_px, using numpy.array.shape.
**Exercise:** inspect the dataset:
- m_train (number of training examples)
- m_test (number of test examples)
- num_px (height/width of each image)
`train_set_x_orig` is a numpy array of shape (m_train, num_px, num_px, 3). For example, you can use `train_set_x_orig.shape[0]` to get `m_train`.
```
### START CODE HERE ### (≈ 3 lines of code)
m_train = train_set_x_orig.shape[0]
m_test = test_set_x_orig.shape[0]
num_px = test_set_x_orig.shape[1]
### END CODE HERE ###
print ("Number of training examples: m_train = " + str(m_train))
print ("Number of test examples: m_test = " + str(m_test))
print ("Height/width of each image: num_px = " + str(num_px))
print ("Each image is of size: (" + str(num_px) + ", " + str(num_px) + ", 3)")
print ("train_set_x shape: " + str(train_set_x_orig.shape))
print ("train_set_y shape: " + str(train_set_y.shape))
print ("test_set_x shape: " + str(test_set_x_orig.shape))
print ("test_set_y shape: " + str(test_set_y.shape))
```
**Expected output:**
<table style="width:15%">
<tr>
<td>**m_train**</td>
<td> 209 </td>
</tr>
<tr>
<td>**m_test**</td>
<td> 50 </td>
</tr>
<tr>
<td>**num_px**</td>
<td> 64 </td>
</tr>
</table>
Next, the data needs further processing. For training convenience you can ignore the spatial structure of the image and flatten the three-dimensional array holding height, width, and channels into a one-dimensional array, turning each image's shape from (64, 64, 3) into (64 * 64 * 3, 1).
**Exercise:**
Reshape the data from (64, 64, 3) to (64 * 64 * 3, 1).
**Hint:**
A trick for reshaping a matrix of shape (a, b, c, d) into shape (a, b$*$c$*$d):
```python
X_flatten = X.reshape(X.shape[0], -1)
```
```
# Reshape the data
### START CODE HERE ### (≈ 2 lines of code)
train_set_x_flatten = train_set_x_orig.reshape(m_train,-1)
test_set_x_flatten = test_set_x_orig.reshape(m_test,-1)
### END CODE HERE ###
print ("train_set_x_flatten shape: " + str(train_set_x_flatten.shape))
print ("train_set_y shape: " + str(train_set_y.shape))
print ("test_set_x_flatten shape: " + str(test_set_x_flatten.shape))
print ("test_set_y shape: " + str(test_set_y.shape))
```
**Expected output:**
<table style="width:35%">
<tr>
<td>**train_set_x_flatten shape**</td>
<td> (209, 12288)</td>
</tr>
<tr>
<td>**train_set_y shape**</td>
<td>(1, 209)</td>
</tr>
<tr>
<td>**test_set_x_flatten shape**</td>
<td>(50, 12288)</td>
</tr>
<tr>
<td>**test_set_y shape**</td>
<td>(1, 50)</td>
</tr>
</table>
Before training we also need to normalize the data. Colors are represented with red, green, and blue channels, and every pixel in each channel stores a value between 0 and 255, so normalizing an image is simple: divide every value in the dataset by 255. Note that the result should be a float; in Python 2, dividing by the integer 255 performs integer division and gives wrong results, so divide by 255. to obtain floats.
Now let's normalize the data!
```
### START CODE HERE ### (≈ 2 lines of code)
train_set_x = train_set_x_flatten/255.
test_set_x = test_set_x_flatten/255.
### END CODE HERE ###
```
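The float-division caveat mentioned above can be verified with a small NumPy sketch (hypothetical pixel values, not the actual dataset):

```python
import numpy as np

# Images are stored as 8-bit unsigned integers (values 0-255)
pixels = np.array([0, 128, 255], dtype=np.uint8)

# Dividing by 255. upcasts to float, producing values in [0, 1]
normalized = pixels / 255.
print(normalized.dtype)  # float64
```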
To simplify later testing, we also merge the data and labels, using numpy.hstack to concatenate numpy arrays horizontally.
```
train_set = np.hstack((train_set_x, train_set_y.T))
test_set = np.hstack((test_set_x, test_set_y.T))
```
<font color='blue'>
**What you should remember from the experiment so far:**
The usual steps for preprocessing a dataset are:
- Figure out the dimensions and shapes of the data (m_train, m_test, num_px, ...)
- Reduce the data's dimensionality, e.g. reshape each example from (num_px, num_px, 3) to (num_px \* num_px \* 3, 1)
- Normalize the data
That completes the data preprocessing! In the next exercise we will build a reader for loading the data.
## 3 - Building a reader
Build a read_data() function that reads the training set train_set or the test set test_set. Internally, read_data() defines a reader() that uses the yield keyword, which turns reader() into a generator. yield behaves much like return, except that it makes the function a generator. Although we could build a single list holding all the data, memory limits make a huge list impractical, and often after creating a list with millions of entries we only use the first few elements, which is very wasteful. A generator instead computes the next value on each iteration, producing subsequent elements on demand without materializing the whole list, which saves memory.
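As a minimal, dataset-independent illustration of the yield mechanism described above, a generator computes one value per iteration instead of materializing a full list:

```python
def squares(n):
    """Yield the squares 0, 1, 4, ... one at a time, on demand."""
    for i in range(n):
        yield i * i

gen = squares(10**6)   # no million-element list is built here
print(next(gen))  # 0
print(next(gen))  # 1
```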
**Exercise:** now let's build a reader() using yield!
```
# Read the training or test data
def read_data(data_set):
"""
A reader factory
Args:
data_set -- the dataset to read
Return:
reader -- a generator yielding the training data and its labels
"""
def reader():
"""
A reader
Args:
Return:
data[:-1], data[-1:] -- returned via yield as a generator;
data[:-1] is the first n-1 elements (the training data), data[-1:] is the last element (the corresponding label)
"""
for data in data_set:
### START CODE HERE ### (≈ 2 lines of code)
yield data[:-1], data[-1:]
### END CODE HERE ###
return reader
test_array = [[1,1,1,1,0],
[2,2,2,2,1],
[3,3,3,3,0]]
print("test_array for read_data:")
for value in read_data(test_array)():
print(value)
```
**Expected output:**
<table>
<tr>
<td>([1, 1, 1, 1], [0])</td>
</tr>
<tr>
<td>([2, 2, 2, 2], [1])</td>
</tr>
<tr>
<td>([3, 3, 3, 3], [0])</td>
</tr>
</table>
## 4 - Training
Having preprocessed the data and built read_data() to load it, we now move on to training. We use PaddlePaddle to define and configure a trainable logistic regression model. The key steps are:
- Initialize
- Configure the network structure and set parameters
    - Configure the network structure
    - Define the cost function
    - Create the parameters
    - Define the optimizer
- Train the model
- Validate the model
- Predict
- Plot the learning curve
**(1) Initialization**
First perform basic initialization. In PaddlePaddle this is done with paddle.init(use_gpu=False, trainer_count=1):
- use_gpu=False means training without a GPU
- trainer_count=1 means using a single trainer
```
# Initialization
paddle.init(use_gpu=False, trainer_count=1)
```
**(2) Configuring the network structure and setting parameters**
**Configuring the network structure**
A logistic regression model is structurally equivalent to a neural network containing a single neuron, as shown below: it has only the input data and an output layer, with no hidden layer, so we only need to configure an input layer (image), an output layer (predict), and a label layer (label).
<img src="images/logistic.png" width="200px">
**Exercise:** let's use the interfaces PaddlePaddle provides to configure this simple logistic regression network. Three layers need to be configured:
**Input layer:**
Define image=paddle.layer.data(name="image", type=paddle.data_type.dense_vector(data_dim)) to create a data input layer named "image" whose type is a data_dim-dimensional dense vector;
Before defining the input layer, use the previously computed num_px to get the data dimension: data_dim = num_px \* num_px \* 3
**Output layer:**
Define predict=paddle.layer.fc(input=image, size=1, act=paddle.activation.Sigmoid()) to create a fully connected layer whose input is image, with one neuron and a Sigmoid activation;
**Label layer:**
Define label=paddle.layer.data(name="label", type=paddle.data_type.dense_vector(1)) to create a data layer named "label" of type 1-dimensional vector.
```
# Configure the network structure
# The data layer needs the data dimension data_dim, computed from num_px
### START CODE HERE ### (≈ 2 lines of code)
data_dim = num_px * num_px * 3
### END CODE HERE ###
# Input layer; paddle.layer.data creates a data layer
### START CODE HERE ### (≈ 2 lines of code)
image = paddle.layer.data(
name='image', type=paddle.data_type.dense_vector(data_dim))
### END CODE HERE ###
# Output layer; paddle.layer.fc creates a fully connected layer
### START CODE HERE ### (≈ 2 lines of code)
predict = paddle.layer.fc(
input=image, size=1, act=paddle.activation.Sigmoid())
### END CODE HERE ###
# Label data layer; paddle.layer.data creates a data layer
### START CODE HERE ### (≈ 2 lines of code)
label = paddle.layer.data(
name='label', type=paddle.data_type.dense_vector(1))
### END CODE HERE ###
```
**Defining the cost function**
After configuring the network, we need a cost function to compute gradients and optimize the parameters. Here we use the cross-entropy cost provided by PaddlePaddle: define cost = paddle.layer.multi_binary_label_cross_entropy_cost(input=predict, label=label), which computes the cost from predict and label.
```
# Cost function: cross entropy
### START CODE HERE ### (≈ 2 lines of code)
cost = paddle.layer.multi_binary_label_cross_entropy_cost(input=predict, label=label)
### END CODE HERE ###
```
**Creating the parameters**
PaddlePaddle provides the interface paddle.parameters.create(cost) to create and initialize the parameters; the cost argument means the parameters are created and initialized based on the cost function we just defined.
```
# Create the parameters
### START CODE HERE ### (≈ 2 lines of code)
parameters = paddle.parameters.create(cost)
### END CODE HERE ###
```
**Optimizer**
After the parameters are created, define the optimizer optimizer = paddle.optimizer.Momentum(momentum=0, learning_rate=0.00002), which uses Momentum with the momentum set to zero and a learning rate of 0.00002. You do not need to understand Momentum yet; just learn how to use it.
```
# Create the optimizer
### START CODE HERE ### (≈ 2 lines of code)
optimizer = paddle.optimizer.Momentum(momentum=0, learning_rate=0.00002)
### END CODE HERE ###
```
**Other configuration**
feeding={'image':0, 'label':1} maps data layer names to array indices and is used to feed data during training; the costs array stores the cost values to track how the cost evolves.
Finally, define event_handler(event) for event handling; the event contains batch_id, pass_id, cost, and other information, which we can print or use for other actions.
```
# Mapping from data layer names to array indices, used to feed data to the trainer
feeding = {
'image': 0,
'label': 1}
# Record the cost
costs = []
# Event handling
def event_handler(event):
"""
Event handler that can react to information produced during training
Args:
event -- event object containing event.pass_id, event.batch_id, event.cost, etc.
Return:
"""
if isinstance(event, paddle.event.EndIteration):
if event.pass_id % 100 == 0:
print("Pass %d, Batch %d, Cost %f" % (event.pass_id, event.batch_id, event.cost))
costs.append(event.cost)
```
**Training the model**
With the initialization, the network configuration, the cost function, the parameters, and the optimizer in place, we can now train the model.
First define a stochastic gradient descent trainer with three arguments, cost, parameters, and update_equation, which are the cost function, the parameters, and the update rule, respectively.
Then call trainer.train() to start the actual training. The parameters can be set as follows:
<img src="images/code1.png" width="500px">
- paddle.reader.shuffle(train(), buf_size=5000) makes the trainer read buf_size=5000 records from the train() reader and shuffle their order
- paddle.batch(reader(), batch_size=256) then takes batch_size=256 records from the shuffled data for one training iteration
- The feeding parameter uses the previously defined feeding mapping to pass the image and label data layers into the trainer, i.e. the source of the training data.
- The event_handler parameter is the event-handling mechanism; you can define your own event_handler and react to the event information as you wish.
- num_passes=5000 means training stops after 5000 passes.
**Exercise:** define the trainer and start training the model (you may pick your own buf_size, batch_size, num_passes, and other parameters, but it is recommended to start with the settings above and come back to tune them after finishing the whole exercise to see how the results change)
```
# Build the trainer
### START CODE HERE ### (≈ 2 lines of code)
trainer = paddle.trainer.SGD(
cost=cost, parameters=parameters, update_equation=optimizer)
### END CODE HERE ###
# Train the model
### START CODE HERE ### (≈ 2 lines of code)
trainer.train(
reader=paddle.batch(
paddle.reader.shuffle(read_data(train_set), buf_size=5000),
batch_size=256),
feeding=feeding,
event_handler=event_handler,
num_passes=2000)
### END CODE HERE ###
```
**Validating the model**
After training, we check the model's accuracy.
First define a function get_data() that collects the data used to measure accuracy, drawing from the training and test data returned by read_data().
```
# Collect the data
def get_data(data_creator):
"""
Collect evaluation data via data_creator
Args:
data_creator -- the data source, e.g. train() or test()
Return:
result -- a Python dict containing the evaluation data (image) and labels (label)
"""
data_creator = data_creator
data_image = []
data_label = []
for item in data_creator():
data_image.append((item[0],))
data_label.append(item[1])
### START CODE HERE ### (≈ 4 lines of code)
result = {
"image": data_image,
"label": data_label
}
### END CODE HERE ###
return result
```
Once we have the data, we can predict with paddle.infer(); the output_layer parameter is the output layer, parameters the model parameters, and input the test data to feed in.
**Exercise:**
- Use get_data() to collect the test and training data
- Predict with paddle.infer()
```
# Collect the test and training data to check the model's accuracy
### START CODE HERE ### (≈ 2 lines of code)
train_data = get_data(read_data(train_set))
test_data = get_data(read_data(test_set))
### END CODE HERE ###
# Predict from train_data and test_data; output_layer is the output layer, parameters the model parameters, input the input test data
### START CODE HERE ### (≈ 6 lines of code)
probs_train = paddle.infer(
output_layer=predict, parameters=parameters, input=train_data['image']
)
probs_test = paddle.infer(
output_layer=predict, parameters=parameters, input=test_data['image']
)
### END CODE HERE ###
```
With the predictions probs_train and probs_test in hand, we convert them to binary classifications and count the correct ones, defining train_accuracy and test_accuracy to compute the training and test accuracy respectively. Note that test_accuracy not only computes the accuracy but also receives a test_data_y array parameter that stores the predictions for later inspection.
```
# Training set accuracy
def train_accuracy(probs_train, train_data):
"""
Compute the training accuracy on the training set
Args:
probs_train -- predictions on the training set, obtained via paddle.infer()
train_data -- the training set
Return:
train_accuracy -- the training accuracy
"""
train_right = 0
train_total = len(train_data['label'])
for i in range(len(probs_train)):
if float(probs_train[i][0]) > 0.5 and train_data['label'][i] == 1:
train_right += 1
elif float(probs_train[i][0]) < 0.5 and train_data['label'][i] == 0:
train_right += 1
train_accuracy = (float(train_right) / float(train_total)) * 100
return train_accuracy
# Test set accuracy
def test_accuracy(probs_test, test_data, test_data_y):
"""
Compute the test accuracy on the test set
Args:
probs_test -- predictions on the test set, obtained via paddle.infer()
test_data -- the test set
test_data_y -- list that receives the binary predictions
Return:
test_accuracy -- the test accuracy
"""
test_right = 0
test_total = len(test_data['label'])
for i in range(len(probs_test)):
if float(probs_test[i][0]) > 0.5:
test_data_y.append(1)
if test_data['label'][i] == 1:
test_right += 1
elif float(probs_test[i][0]) < 0.5:
test_data_y.append(0)
if test_data['label'][i] == 0:
test_right += 1
test_accuracy = (float(test_right) / float(test_total)) * 100
return test_accuracy
```
Call the two functions above and print the results
```
# Compute train_accuracy and test_accuracy
test_data_y = []
### START CODE HERE ### (≈ 6 lines of code)
print("train_accuracy: {} %".format(train_accuracy(probs_train, train_data)))
print("test_accuracy: {} %".format(test_accuracy(probs_test, test_data, test_data_y)))
### END CODE HERE ###
```
**Expected output:**
<table style="width:40%">
<tr>
<td> **train_accuracy** </td>
<td> $\approx$ 98% </td>
</tr>
<tr>
<td> **test_accuracy** </td>
<td> $\approx$ 70% </td>
</tr>
</table>
Given the limits of the dataset and of the logistic regression model, and with no other optimizations added, a 70% test accuracy is already quite a good result. If you obtained similar numbers, roughly 98% training accuracy and 70% test accuracy, then congratulations: your work so far is solid and you have configured a decent model and parameters. You can of course go back and tune parameters such as learning_rate/batch_size/num_passes, or consult the official [PaddlePaddle documentation](http://paddlepaddle.org/docs/develop/documentation/zh/getstarted/index_cn.html) to modify your model; both making mistakes and improving the results through tuning will help you get familiar with deep learning and the PaddlePaddle framework!
**Learning curve**
Next we use the previously saved costs data to plot how the cost changes and analyze the model through its learning curve.
```
costs = np.squeeze(costs)
plt.plot(costs)
plt.ylabel('cost')
plt.xlabel('iterations (per hundreds)')
plt.title("Learning rate = 0.00002")
plt.show()
```
The cost converges quickly at first; as the iterations accumulate, convergence slows down, and the cost finally settles at a small value.
Next, use the previously saved predictions test_data_y to inspect individual images in the test set. Pick an image via index and see whether your model predicted it correctly!
```
# Example of a picture that was wrongly classified.
index = 14
plt.imshow((np.array(test_data['image'][index])).reshape((64, 64, 3)))
print ("y = " + str(test_data_y[index]) + ", you predicted that it is a \"" + classes[test_data_y[index]].decode("utf-8") + "\" picture.")
```
# 5 - Summary
What you should remember from this exercise:
1. Preprocessing the data is necessary work before training a model; sometimes understanding the data and preprocessing it well takes more time and patience than building the model!
2. The basic steps of training a model with PaddlePaddle:
    - Initialize
    - Configure the network structure and set parameters:
        - Define the cost function cost
        - Create the parameters
        - Define the optimizer
        - Define the event_handler
    - Define the trainer
    - Start training
    - Predict with infer() and report train_accuracy and test_accuracy
3. Many parameters in this exercise can be adjusted; for example, changing the learning rate strongly affects the results. Try different settings in this exercise and in later ones.
This completes training the logistic regression model. Notice that when configuring and training the model with PaddlePaddle we did not have to deal with parameter initialization, the cost function, activation functions, gradient descent, parameter updates, or prediction in detail; we only had to configure the network structure and the trainer. PaddlePaddle also provides many interfaces for changing the learning rate, the cost function, the batch size, and many other settings that shape the learning, making it flexible and convenient for experimentation. We will become more familiar with PaddlePaddle in the later exercises.
# Preparing Environment
```
# Install packages
!pip install keras
# Import libraries
import gzip
import time
from random import randint

import numpy as np
import matplotlib.pyplot as plt
import requests
import tensorflow as tf

import keras
from keras import backend as K
from keras import callbacks, layers, losses, metrics, optimizers, regularizers
from keras.callbacks import Callback
from keras.layers.core import Dense, Dropout, Activation
from keras.models import Model
from keras.optimizers import Adam, Adadelta
from keras.utils import to_categorical

from google.colab import files
# Download the dataset from cullpdb for ICML2014
!wget -O cullpdb+profile_6133_filtered.npy.gz "http://www.princeton.edu/~jzthree/datasets/ICML2014/cullpdb+profile_6133_filtered.npy.gz"
#Prepare the dataset
dataset = gzip.open('cullpdb+profile_6133_filtered.npy.gz', 'rb')
dataset = np.load(dataset)
print("Before: ",dataset.shape)
dataset = np.reshape(dataset, (dataset.shape[0], 700, 57))
print("After: ",dataset.shape)
# OR use a uploaded file
uploaded = files.upload()
uploaded_file=''
for fn in uploaded.keys():
print('User uploaded file "{name}" with length {length} bytes'.format(
name=fn, length=len(uploaded[fn])))
uploaded_file = fn
# Prepare the dataset
dataset = uploaded_file
dataset = np.load(dataset)
print("Before: ",dataset.shape)
dataset = np.reshape(dataset, (dataset.shape[0], 700, 57)) # [Proteins|Aminoacids|Features]
print("After: ",dataset.shape)
# Prepare the datasets for classification
print('Preparing dataset...dataset shape ',dataset.shape )
dataindex = range(30) # aminoacids + secondary structure => input
t_end_idx=int(dataset.shape[0]*0.7)# 4427
v_idx = t_end_idx+ int(dataset.shape[0]*0.1)# 4980
np.random.seed(1234)
idx_arr = np.arange(dataset.shape[0])
np.random.shuffle(idx_arr)
traindataset = dataset[idx_arr,:,:]
print("traindataset.. ",traindataset.shape)
traindata = traindataset[:t_end_idx,:,dataindex]
trainlabel = traindataset[:t_end_idx,:,34:35] # acc relative (33)
valdata = traindataset[t_end_idx:v_idx,:,dataindex]
vallabel = traindataset[t_end_idx:v_idx,:,34:35]
traindata = np.concatenate((traindata, valdata), axis=0)
trainlabel = np.concatenate((trainlabel, vallabel), axis=0)
testdata = traindataset[v_idx:,:,dataindex]
testlabel = traindataset[v_idx:,:,34:35]
print(
"\n********Before:********\ntraindata.shape: ",traindata.shape,
"\nvaldata.shape: ",valdata.shape,
"\ntestdata.shape: ",testdata.shape,
"\ntrainlabel.shape: ",trainlabel.shape,
"\nvallabel.shape: ",vallabel.shape,
"\ntestlabel.shape: ",testlabel.shape
)
max_num_amino = 700
trainmask = dataset[:t_end_idx,:,30]* -1 + 1
valmask = dataset[t_end_idx:v_idx,:,30]* -1 + 1
testmask = dataset[v_idx:,:,30]* -1 + 1
trainvalmask = np.concatenate((trainmask, valmask), axis=0)
trainlabel = to_categorical(trainlabel,2)
testlabel = to_categorical(testlabel,2)
vallabel = to_categorical(vallabel,2)
train_tmp = []
train_lab_tmp = []
val_tmp = []
val_lab_tmp = []
test_tmp = []
test_lab_tmp = []
min_ratio = 0.1
max_ratio = 0.9
do_ratio = False
for i in range(valdata.shape[0]):
p = valdata[i,:max_num_amino,:]
pl = vallabel[i,:max_num_amino,:]
num_amino = int(sum(valmask[i]))
if(num_amino<=max_num_amino):
ratio_first = (np.sum(pl[:num_amino,0])/num_amino)
if do_ratio:
if( min_ratio < ratio_first and ratio_first < max_ratio ):
val_tmp.append(p)
pl[num_amino:,:]=[0,0]
val_lab_tmp.append(pl)
else:
val_tmp.append(p)
pl[num_amino:,:]=[0,0]
val_lab_tmp.append(pl)
for i in range(traindata.shape[0]):
p = traindata[i,:max_num_amino,:]
pl = trainlabel[i,:max_num_amino,:]
num_amino = int(sum(trainvalmask[i]))
if(num_amino<=max_num_amino):
ratio_first = (np.sum(pl[:num_amino,0])/num_amino)
if do_ratio:
if( min_ratio < ratio_first and ratio_first < max_ratio ):
train_tmp.append(p)
pl[num_amino:,:]=[0,0]
train_lab_tmp.append(pl)
else:
train_tmp.append(p)
pl[num_amino:,:]=[0,0]
train_lab_tmp.append(pl)
for i in range(testdata.shape[0]):
p = testdata[i,:max_num_amino,:]
pl = testlabel[i,:max_num_amino,:]
num_amino = int(sum(testmask[i]))
if(num_amino<=max_num_amino):
ratio_first = (np.sum(pl[:num_amino,0])/num_amino)
if do_ratio:
if( min_ratio < ratio_first and ratio_first < max_ratio ):
test_tmp.append(p)
pl[num_amino:,:]=[0,0]
test_lab_tmp.append(pl)
else:
test_tmp.append(p)
pl[num_amino:,:]=[0,0]
test_lab_tmp.append(pl)
traindata = np.array(train_tmp).astype(float)
valdata = np.array(val_tmp).astype(float)
testdata = np.array(test_tmp).astype(float)
trainlabel = np.array(train_lab_tmp).astype(float)
vallabel = np.array(val_lab_tmp).astype(float)
testlabel = np.array(test_lab_tmp).astype(float)
print("\n********After:********\ntraindata.shape: ",traindata.shape,"\nvaldata.shape: ",valdata.shape,"\ntestdata.shape: ",testdata.shape,
"\ntrainlabel.shape: ",trainlabel.shape,"\nvallabel.shape: ",vallabel.shape,"\ntestlabel.shape: ",testlabel.shape)
def proteinCategoricalCrossentropy(y_true,y_pred):
loss = y_true * K.log(y_pred)
loss = -K.sum(loss, -1)
return loss
def stable_softmax(x, axis=-1):# based on keras implementation
ndim = K.ndim(x)
if ndim == 2:
return K.softmax(x - K.max(x,axis=axis, keepdims=True))
elif ndim > 2:
e = K.exp(x - K.max(x, axis=axis, keepdims=True))
s = K.sum(e, axis=axis, keepdims=True)
return e / s
else:
raise ValueError('Cannot apply softmax to a tensor that is 1D. '
'Received input: %s' % x)
def weighted_accuracy(y_true, y_pred):
acc = K.sum(K.cast( K.equal(
K.argmax(y_true, axis=-1),
K.argmax((y_pred), axis=-1)
),"float32") * K.sum(y_true, axis=-1)
) / K.sum(y_true)
return acc
train_accs = []
val_accs = []
class ProteinCallback(Callback):
def __init__(self, eval_data):
self.eval_data = eval_data
def on_epoch_end(self, epoch, logs={}):
x, y = self.eval_data
loss, acc = self.model.evaluate(x, y, verbose=0)
print('Testing loss: {}, weighted_accuracy: {}'.format(loss, acc))
#Save Data in gdrive
!pip install -U -q PyDrive
from pydrive.auth import GoogleAuth
from pydrive.drive import GoogleDrive
from google.colab import auth
from oauth2client.client import GoogleCredentials
# Authenticate and create the PyDrive client.
# This only needs to be done once in a notebook.
auth.authenticate_user()
gauth = GoogleAuth()
gauth.credentials = GoogleCredentials.get_application_default()
drive = GoogleDrive(gauth)
```
# Convolutional neural network
```
num_epoche = 80
conv = 16
conv_kernel=7
drop = 0
activation = 'tanh'
poolSize = 5
batchSize=32
validationSplit=0.1
adam = Adam(lr=1e-1)
sgd = optimizers.SGD(lr=0.01, decay=1e-6, momentum=0.9, nesterov=True)
rmsprop = optimizers.RMSprop(lr=0.0001, rho=0.9, epsilon=None, decay=0.0)
optimizer = 'adam'
input_layer = layers.Input(shape=(max_num_amino,len(dataindex)), name='input')
net = layers.Conv1D(conv,conv_kernel, padding='same')(input_layer)
net = layers.Activation(activation)(net)
#net = layers.MaxPooling1D(pool_size=poolSize,strides=1,padding='same')(net)
net = layers.Dropout(drop)(net)
net = layers.Conv1D(conv*2, conv_kernel, padding='same')(net)
net = layers.Activation(activation)(net)
net = layers.MaxPooling1D(pool_size=poolSize,strides=1,padding='same')(net)
net = layers.Dropout(drop)(net)
net = layers.Conv1D(conv*3, conv_kernel, padding='same')(net)
net = layers.Activation(activation)(net)
net = layers.MaxPooling1D(pool_size=poolSize,strides=1,padding='same')(net)
net = layers.Dropout(drop)(net)
net = layers.Dense(32)(net)
net = layers.Activation(activation)(net)
net = layers.Dropout(drop)(net)
output_layer = layers.Dense(2, activation=stable_softmax, name='output')(net)
model = Model(inputs=input_layer, outputs=output_layer)
model.compile(optimizer=optimizer,#rmsprop,adam,SGD,adamax
#loss='categorical_crossentropy',
loss=proteinCategoricalCrossentropy,
metrics=[weighted_accuracy])
model.summary()
model_summary= (str(model.to_json()))
# Fit the model
time_start = time.time()
history = model.fit(traindata,trainlabel,epochs=num_epoche,verbose=1,batch_size=batchSize,#validation_split=validationSplit)
callbacks=[ProteinCallback((testdata,testlabel))],
validation_data=(valdata,vallabel))
time_execution = time.time() - time_start
print("Fit time: ",time_execution)
test_score= model.evaluate(testdata, testlabel)
print("Loss ",test_score[0])
print("Accuracy ",test_score[1])
k=randint(0, 3000)
testd=traindata[k:(k+700),:,:]
testl=trainlabel[k:(k+700),:,:]
print('Random test set: from ',k,' to ',k+700)
test_score= model.evaluate(testd, testl)
print("Loss ",test_score[0])
print("Accuracy ",test_score[1])
loss = [x for x in history.history['loss']]
val_loss = [x for x in history.history['val_loss'] ]
epochs = range(1, len(loss) + 1)
plt.plot(epochs, loss, 'b', label='Training loss')
plt.plot(epochs, val_loss, 'r', label='Validation loss')
plt.title('Training and validation loss')
plt.xlabel('Epochs')
plt.ylabel('Loss')
plt.legend()
plt.show()
acc_values =[x for x in history.history['weighted_accuracy']]
val_acc_values = [x for x in history.history['val_weighted_accuracy']]
plt.plot(epochs, acc_values, 'b', label='Training acc')
plt.plot(epochs, val_acc_values, 'r', label='Validation acc')
plt.title('Training and validation accuracy')
plt.xlabel('Epochs')
plt.ylabel('Accuracy')
plt.legend()
plt.show()
#GRID SEARCH
#TEST DROP + TEST ACTIVATION + TEST CONV FILTERS + TEST CONV SIZE
import keras
def create_model(traindata,trainlabel,valdata,vallabel,opt,batch,nep,conv_filters,conv_size,activation,do_pooling,poolSize,drop,cb_list):
input_layer = layers.Input(shape=(max_num_amino,len(dataindex)), name='input')
net = layers.Conv1D(conv_filters,conv_size, padding='same')(input_layer)
net = layers.Activation(activation)(net)
#net = layers.MaxPooling1D(pool_size=poolSize,strides=1,padding='same')(net)
net = layers.Dropout(drop)(net)
net = layers.Conv1D(conv_filters, conv_size, padding='same')(net)
net = layers.Activation(activation)(net)
if do_pooling:
net = layers.MaxPooling1D(pool_size=poolSize,strides=1,padding='same')(net)
net = layers.Dropout(drop)(net)
net = layers.Conv1D(conv_filters, conv_size, padding='same')(net)
net = layers.Activation(activation)(net)
if do_pooling:
net = layers.MaxPooling1D(pool_size=poolSize,strides=1,padding='same')(net)
net = layers.Dropout(drop)(net)
net = layers.Dense(conv_filters)(net)
net = layers.Activation(activation)(net)
net = layers.Dropout(drop)(net)
output_layer = layers.Dense(2, activation="softmax", name='output')(net)
model = Model(inputs=input_layer, outputs=output_layer)
model.compile(optimizer=opt,
loss=proteinCategoricalCrossentropy,
metrics=[weighted_accuracy])
history = model.fit(traindata,trainlabel,epochs=nep,verbose=2,callbacks=cb_list,batch_size=batch,validation_data=(valdata,vallabel))
return model,history
adam = Adam(lr=0.001)
ep = 120
optimizers = [adam]
batch_sizes = [32]
pool_sizes = [2]
conv_units = [32]#[16,32,64,128]
conv_kernel_size = [7]#[2,5,7]
drops = [0]#,0.1,0.25,0.5]
activations = ['tanh','relu','sigmoid']  # TODO: try dropout 0.25 + sigmoid
do_pool = [True]  # TODO: rerun without pooling
histories = []
test_scores = []
time_start_tests = time.time()
with open("grid_search_"+str(ep)+"eps.txt", "w") as txt:
txt.write("")
k=0
tot = len(optimizers)*len(conv_units)*len(conv_kernel_size)*len(pool_sizes)*len(drops)*len(activations)
for optimizer in optimizers:
for act in activations:
for pool in pool_sizes:
for do_p in do_pool:
for batch in batch_sizes:
for conv in conv_units:
for conv_kernel in conv_kernel_size:
for drop in drops:
k=k+1
if k<1:  # skip tests already run
break
print("Test ",k," of ",tot," Epochs ",ep," - Optimizer ",optimizer," - BatchSize ",batch," - Activation ",act," - conv filters ",conv,
" - conv kernel ",conv_kernel," - do pool ",do_p," - pool size ",pool," - drop ",drop)
time_start = time.time()
callbacks_list = []
m,h=create_model(traindata,trainlabel,valdata,vallabel,optimizer,batch,ep,conv,conv_kernel,act,do_p,pool,drop,callbacks_list)
histories.append(h)
time_execution = time.time() - time_start
model_summary= (str(m.to_json()))
test_score= m.evaluate(testdata, testlabel,verbose=0)
test_scores.append(test_score)
print("TEST: Loss ",test_score[0]," - Accuracy ",test_score[1],' in sec. ',time_execution)
with open("grid_search_"+str(ep)+"eps.txt", "a") as txt:
txt.write("Test %s of %s, activation: %s, conv_filters: %s, conv_kernel_size: %s, do_pooling:%s, pool_size: %s, drop: %s, time:%s --- ACC: %s | LOSS: %s\n"% (k,tot,act,conv,conv_kernel,do_p,pool,drop,time_execution,test_score[1],test_score[0]))
txt.write("Losses: [")
for x in h.history['loss']:
txt.write(" %s ," % (x))
txt.write("]\n")
txt.write("Val losses: [")
for x in h.history['val_loss']:
txt.write(" %s ," % (x))
txt.write("]\n")
txt.write("Acc: [")
for x in h.history['weighted_accuracy']:
txt.write(" %s ," % (x))
txt.write("]\n")
txt.write("Val acc: [")
for x in h.history['val_weighted_accuracy']:
txt.write(" %s ," % (x))
txt.write("]\n")
print('-----Total execution: ',time.time() - time_start_tests)
# Create & upload a file.
uploaded = drive.CreateFile({'title': 'grid_search_'+str(ep)+'eps.txt'})
uploaded.SetContentFile('grid_search_'+str(ep)+'eps.txt')
uploaded.Upload()
print('Uploaded file with ID {}'.format(uploaded.get('id')))
!ls
from google.colab import files
files.download('grid_search_120eps.txt')
```
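The grid search above enumerates the Cartesian product of all hyperparameter lists through eight nested `for` loops. As a minimal sketch (variable names illustrative, not the notebook's exact code), the same enumeration can be flattened with `itertools.product`, which yields one tuple per configuration and keeps the loop body at a single indentation level:

```python
import itertools

# Illustrative hyperparameter grid, mirroring the lists above.
activations = ['tanh', 'relu', 'sigmoid']
conv_units = [32]
drops = [0]

# One tuple per configuration, e.g. ('tanh', 32, 0).
combos = list(itertools.product(activations, conv_units, drops))
print(len(combos))  # 3 * 1 * 1 = 3
```

This also makes resuming a partially completed sweep simpler: instead of the `if k < 1: break` guard, one can slice `combos[k:]` directly.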
# Convolution + Bidirectional LSTM
```
#CONVOLUTION + LSTM
#Note: run this only if you have 6 hours to spare :-)
time_start_tests = time.time()
input_layer = layers.Input(shape=(max_num_amino,30), name='input')
cnn1 = layers.Conv1D(16,3, padding='same')(input_layer)
cnn2 = layers.Conv1D(16,5, padding='same')(input_layer)
cnn3 = layers.Conv1D(16,7, padding='same')(input_layer)
max_p1 = layers.MaxPool1D(5,strides=1,padding='same')(cnn1)
max_p2 = layers.MaxPool1D(5,strides=1,padding='same')(cnn2)
max_p3 = layers.MaxPool1D(5,strides=1,padding='same')(cnn3)
mergeCNN = layers.Concatenate(axis=-1)([max_p1,max_p2,max_p3])
dense1 = layers.Dense(64, activation="relu")(mergeCNN)
#drop1 = layers.Dropout(0.5)(dense1)
bidi = layers.Bidirectional(layers.LSTM(400,kernel_initializer=initializers.RandomUniform(minval=-0.05, maxval=0.05, seed=None),
bias_initializer='zeros', return_sequences=True), input_shape=(max_num_amino, 30))(dense1)
dense2 = layers.Dense(64, activation="relu")(bidi)
drop2 = layers.Dropout(0.5)(dense2)
dense3 = layers.Dense(32, activation="relu")(drop2)
output_layer = layers.Dense(2, activation="softmax")(dense3)
model = Model(inputs=input_layer, outputs=output_layer)
model.compile(optimizer=optimizers.RMSprop(lr=0.001, rho=0.95),#'adam',
loss='categorical_crossentropy',
metrics=[weighted_accuracy])
model.summary()
history_lstm_120ep = model.fit(traindata,trainlabel,epochs=120,batch_size=128,callbacks=[ProteinCallback((testdata,testlabel))],
validation_data=(valdata,vallabel))
print('-----Total execution: ',time.time() - time_start_tests)
test_score= model.evaluate(testdata, testlabel)
print("Loss ",test_score[0])
print("Accuracy ",test_score[1])
loss = history_lstm_120ep.history['loss']
val_loss = history_lstm_120ep.history['val_loss']
epochs = range(1, len(loss) + 1)
plt.plot(epochs, loss, 'b', label='Training loss')
plt.plot(epochs, val_loss, 'r', label='Validation loss')
plt.title('Training and validation loss')
plt.xlabel('Epochs')
plt.ylabel('Loss')
plt.legend()
plt.show()
acc_values =history_lstm_120ep.history['weighted_accuracy']
val_acc_values = history_lstm_120ep.history['val_weighted_accuracy']
plt.plot(epochs, acc_values, 'b', label='Training acc')
plt.plot(epochs, val_acc_values, 'r', label='Validation acc')
plt.title('Training and validation accuracy')
plt.xlabel('Epochs')
plt.ylabel('Accuracy')
plt.legend()
plt.show()
model_json = model.to_json()
with open("model.json", "w") as json_file:
json_file.write(model_json)
# serialize weights to HDF5
model.save_weights("model_30epoche_lstm_v3.h5")
# Create & upload a file.
uploaded = drive.CreateFile({'title': 'model_30epoche_lstm_v3.h5'})
uploaded.SetContentFile('model_30epoche_lstm_v3.h5')
uploaded.Upload()
print('Uploaded file with ID {}'.format(uploaded.get('id')))
np.save('history_30epoche_lstm_v3.npy', history_lstm_120ep.history)
# Create & upload a file.
uploaded = drive.CreateFile({'title': 'history_30epoche_lstm_v3.npy'})
uploaded.SetContentFile('history_30epoche_lstm_v3.npy')
uploaded.Upload()
print('Uploaded file with ID {}'.format(uploaded.get('id')))
```
```
from database.market import Market
from transformer.column_transformer import ColumnTransformer
from transformer.date_transformer import DateTransformer
from utils.date_utils import DateUtils
import pandas as pd
import matplotlib.pyplot as plt
from datetime import datetime, timedelta, timezone
from tqdm import tqdm
import math
import numpy as np
import statistics
## Loading Constants
start = (datetime.now() - timedelta(days=730)).strftime("%Y-%m-%d")
end = datetime.now().strftime("%Y-%m-%d")
# Loading Databases
market = Market()
market.connect()
tickers = market.retrieve_data("sp500").sort_values("Symbol")
market.close()
model_range = range(len(tickers))
datasets = [
"pdr"
,"tiingo"
,"finnhub"
]
refined_daily_classification = []
refined_quarterly_classification = []
refined_weekly_classification = []
refined_model_regression = []
market.connect()
for i in tqdm(model_range):
try:
refined_regression = []
for dataset in datasets:
ticker = tickers["Symbol"][i]
if "." in ticker:
ticker = ticker.replace(".","-")
prices = market.retrieve_price_data("portfolio_{}_prices".format(dataset),ticker)
if dataset == "pdr":
prices = ColumnTransformer.rename_columns(prices, " ")
else:
prices = ColumnTransformer.rename_columns(prices, "_")
prices = DateTransformer.convert_to_date(dataset,prices,"date")
prices.reset_index(inplace=True)
relev = prices[["date","adjclose"]]
relev["ticker"] = ticker
relev.sort_values("date",inplace=True)
relev.rename(columns={"adjclose":dataset},inplace=True)
relev["date"] = [datetime.strptime(str(x).split(" ")[0],"%Y-%m-%d") for x in relev["date"]]
## daily transformations
refined_regression.append(relev)
base = refined_regression[0]
for rr in refined_regression[1:]:
base = base.merge(rr,on=["date","ticker"],how="left")
adjclose = []
for row in base.iterrows():
values = []
for x in datasets:
try:
values.append(row[1][x])
except:
continue
adjclose.append(np.nanmean(values))
base["adjclose"] = adjclose
relev = base.copy()
relev["week"] = [x.week for x in relev["date"]]
relev["quarter"] = [x.quarter for x in relev["date"]]
relev["year"] = [x.year for x in relev["date"]]
refined_model_regression.append(relev.copy())
relev_classification = relev.copy()
relev_classification["adjclose"] = [1 if x > 0 else 0 for x in relev_classification["adjclose"].diff()]
refined_daily_classification.append(relev_classification)
## weekly transformations
relev["week"] = [x.week for x in relev["date"]]
relev["quarter"] = [x.quarter for x in relev["date"]]
relev["year"] = [x.year for x in relev["date"]]
relev_weekly_classification = relev.groupby(["year","quarter","week"]).mean().reset_index()
relev_weekly_classification["adjclose"] = [1 if x > 0 else 0 for x in relev_weekly_classification["adjclose"].diff()]
relev_weekly_classification["ticker"] = ticker
refined_weekly_classification.append(relev_weekly_classification)
## quarterly transformations
relev_quarterly_classification = relev.groupby(["year","quarter"]).mean().reset_index().drop("week",axis=1)
relev_quarterly_classification["adjclose"] = [1 if x > 0 else 0 for x in relev_quarterly_classification["adjclose"].diff()]
relev_quarterly_classification["ticker"] = ticker
refined_quarterly_classification.append(relev_quarterly_classification)
except Exception as e:
print(str(e),ticker)
classification_sets = {"date":refined_daily_classification,
"quarter":refined_quarterly_classification,
"week":refined_weekly_classification}
for ds in classification_sets:
base = pd.concat(classification_sets[ds])
if ds == "date":
base["year"] = [x.year for x in base["date"]]
base["quarter"] = [x.quarter for x in base["date"]]
base["week"] = [x.week for x in base["date"]]
final = base.pivot_table(index=ds,values="adjclose",columns="ticker").reset_index()
else:
if ds == "week":
final = base.pivot_table(index=["year","quarter","week"],values="adjclose",columns="ticker").reset_index()
else:
final = base.pivot_table(index=["year",ds],values="adjclose",columns="ticker").reset_index()
name = "portfolio_dataset_{}_classification".format(ds)
final.fillna(-99999,inplace=True)
market.drop_table(name)
market.store_data(name,final)
base = pd.concat(refined_model_regression)
market.drop_table("portfolio_prices")
market.store_data("portfolio_prices",base)
final = base.pivot_table(index=["year","quarter","week"],values="adjclose",columns="ticker").reset_index()
final.fillna(-99999,inplace=True)
for timeframe in ["week","quarter"]:
if timeframe == "week":
relev = final.groupby(["year","quarter","week"]).mean().reset_index()
else:
relev = final.groupby(["year","quarter"]).mean().reset_index()
relev.reset_index(drop=True,inplace=True)
name = "portfolio_dataset_{}_regression".format(timeframe)
market.drop_table(name)
market.store_data(name,relev)
market.close()
```
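The classification sets above turn a price series into binary up/down labels via `[1 if x > 0 else 0 for x in series.diff()]`. A minimal pure-Python sketch of that labeling rule (illustrative prices, not real data; note that pandas' `diff()` also produces a leading NaN, which maps to 0 under this rule):

```python
# Toy adjusted-close series.
prices = [100.0, 101.5, 101.0, 103.2]

# Period-over-period differences, then map positive moves to 1, else 0.
diffs = [b - a for a, b in zip(prices, prices[1:])]
labels = [1 if d > 0 else 0 for d in diffs]
print(labels)  # [1, 0, 1]
```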
# Data Wrangling with Dynamic Attributes
```
from urllib.request import urlopen
import warnings
import os
import json
URL = 'http://www.oreilly.com/pub/sc/osconfeed'
JSON = 'data/osconfeed.json'
def load():
if not os.path.exists(JSON):
msg = 'downloading {} to {}'.format(URL, JSON)
warnings.warn(msg)
with urlopen(URL) as remote, open(JSON, 'wb') as local:
local.write(remote.read())
with open(JSON) as fp:
return json.load(fp)
feed = load()
sorted(feed['Schedule'].keys())
for key, value in sorted(feed['Schedule'].items()):
print('{:3} {}'.format(len(value), key))
feed['Schedule']['speakers'][-1]['name']
feed['Schedule']['speakers'][-1]['serial']
feed['Schedule']['events'][40]['name']
feed['Schedule']['events'][40]['speakers']
```
## Exploring JSON-Like Data with Dynamic Attributes
```
from collections import abc
class FrozenJSON:
"""A read-only facade for navigating a JSON-like object
using attribute notation"""
def __init__(self, mapping):
self.__data = dict(mapping) #1
def __getattr__(self, name): #2
if hasattr(self.__data, name):
return getattr(self.__data, name) #3
else:
return FrozenJSON.build(self.__data[name]) #4
@classmethod
def build(cls, obj): #5
if isinstance(obj, abc.Mapping): #6
return cls(obj)
elif isinstance(obj, abc.MutableSequence): #7
return [cls.build(item) for item in obj]
else: #8
return obj
from osconfeed import load
raw_feed = load()
feed = FrozenJSON(raw_feed)
len(feed.Schedule.speakers)
sorted(feed.Schedule.keys())
for key, value in sorted(feed.Schedule.items()):
print('{:3} {}'.format(len(value), key))
feed.Schedule.speakers[-1].name
talk = feed.Schedule.events[40]
type(talk)
talk.name
talk.speakers
talk.flavor
```
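`FrozenJSON` works because `__getattr__` is invoked only when normal attribute lookup fails, so it can act as a fallback that reads from an internal mapping. A stripped-down sketch of that mechanism (a toy class, not the book's `FrozenJSON`):

```python
class AttrDict:
    """Minimal __getattr__ fallback: missing attributes are looked up in a dict."""
    def __init__(self, data):
        self._data = dict(data)

    def __getattr__(self, name):
        # Called only when normal lookup fails (e.g. not for _data itself).
        try:
            return self._data[name]
        except KeyError:
            raise AttributeError(name)

d = AttrDict({'speaker': 'Carina C. Zona'})
print(d.speaker)  # Carina C. Zona
```

Raising `AttributeError` for unknown keys keeps `hasattr` and default-argument `getattr` working as expected, which `talk.flavor` above relies on.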
## The Invalid Attribute Name Problem
```
grad = FrozenJSON({'name': 'Jim Bo', 'class': 1982})
grad.class
getattr(grad,'class')
from collections import abc
import keyword
class FrozenJSON:
"""A read-only facade for navigating a JSON-like object
using attribute notation"""
def __init__(self, mapping):
self.__data = {}
for key, value in mapping.items():
if keyword.iskeyword(key):
key += '_'
self.__data[key] = value
def __getattr__(self, name):
if hasattr(self.__data, name):
return getattr(self.__data, name)
else:
return FrozenJSON.build(self.__data[name])
@classmethod
def build(cls, obj):
if isinstance(obj, abc.Mapping):
return cls(obj)
elif isinstance(obj, abc.MutableSequence):
return [cls.build(item) for item in obj]
else:
return obj
grad = FrozenJSON({'name': 'Jim Bo', 'class': 1982})
grad.class_
x = FrozenJSON({'2be': 'or not'})
x.2be
```
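The fix above maps Python keywords such as `class` to safe attribute names by appending an underscore. The core of that check, isolated as a small helper (the function name `safe_key` is my own, for illustration):

```python
import keyword

def safe_key(key):
    """Append '_' to keys that collide with Python keywords."""
    return key + '_' if keyword.iskeyword(key) else key

print(safe_key('class'))  # class_
print(safe_key('name'))   # name
```

Note this does not solve identifiers that are invalid for other reasons, like `'2be'` above; `str.isidentifier()` can detect those, but they need a different remedy (e.g. `getattr`).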
## Flexible Object Creation with __new__
```
from collections import abc
import keyword
class FrozenJSON:
"""A read-only facade for navigating a JSON-like object
using attribute notation"""
def __new__(cls, arg): #1
if isinstance(arg, abc.Mapping):
return super().__new__(cls) #2
elif isinstance(arg, abc.MutableSequence): #3
return [cls(item) for item in arg]
else:
return arg
def __init__(self, mapping):
self.__data = {}
for key, value in mapping.items():
if keyword.iskeyword(key):
key += '_'
self.__data[key] = value
def __getattr__(self, name):
if hasattr(self.__data, name):
return getattr(self.__data, name)
else:
return FrozenJSON(self.__data[name]) #4
```
## Restructuring the OSCON Feed with shelve
```
import warnings
import osconfeed
DB_NAME = 'data/schedule1_db'
CONFERENCE = 'conference.115'
class Record:
def __init__(self, **kwargs):
self.__dict__.update(kwargs)
def load_db(db):
raw_data = osconfeed.load()
warnings.warn('loading ' + DB_NAME)
for collection, rec_list in raw_data['Schedule'].items():
record_type = collection[:-1]
for record in rec_list:
key = '{}.{}'.format(record_type, record['serial'])
record['serial'] = key
db[key] = Record(**record)
import shelve
db = shelve.open(DB_NAME)
if CONFERENCE not in db:
load_db(db)
speaker = db['speaker.3471']
type(speaker)
speaker.name, speaker.twitter
db.close()
```
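The `Record` trick above hinges on one line: `self.__dict__.update(kwargs)` copies keyword arguments straight into the instance `__dict__`, so every field of a JSON record becomes a plain attribute. A self-contained sketch of just that idiom (sample field values are illustrative):

```python
class Record:
    """Each keyword argument becomes an instance attribute."""
    def __init__(self, **kwargs):
        self.__dict__.update(kwargs)

r = Record(name='Ada', twitter='@ada')
print(r.name, r.twitter)  # Ada @ada
```

This is why `speaker.name` and `speaker.twitter` work above without any property definitions: attribute lookup finds the values directly in `__dict__`.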
## Linked Record Retrieval with Properties
```
import warnings
import inspect
import osconfeed
DB_NAME = 'data/schedule2_db'
CONFERENCE = 'conference.115'
class Record:
def __init__(self, **kwargs):
self.__dict__.update(kwargs)
def __eq__(self, other):
if isinstance(other, Record):
return self.__dict__ == other.__dict__
else:
return NotImplemented
class MissingDatabaseError(RuntimeError):
"""Raised when a database is required but was not set."""
class DbRecord(Record):
__db = None
@staticmethod
def set_db(db):
DbRecord.__db = db
@staticmethod
def get_db():
return DbRecord.__db
@classmethod
def fetch(cls, ident):
db = cls.get_db()
try:
return db[ident]
except TypeError:
if db is None:
msg = "database not set; call '{}.set_db(mydb)'"
raise MissingDatabaseError(msg.format(cls.__name__))
else:
raise
def __repr__(self):
if hasattr(self, 'serial'):
cls_name = self.__class__.__name__
return '<{} serial={!r}>'.format(cls_name, self.serial)
else:
return super().__repr__()
class Event(DbRecord):
@property
def venue(self):
key = 'venue.{}'.format(self.venue_serial)
return self.__class__.fetch(key)
@property
def speakers(self):
if not hasattr(self, '_speaker_objs'):
spkr_serials = self.__dict__['speakers']
fetch = self.__class__.fetch
self._speaker_objs = [fetch('speaker.{}'.format(key))
for key in spkr_serials]
return self._speaker_objs
def __repr__(self):
if hasattr(self, 'name'):
cls_name = self.__class__.__name__
return '<{} {!r}>'.format(cls_name, self.name)
else:
return super().__repr__()
def load_db(db):
raw_data = osconfeed.load()
warnings.warn('loading ' + DB_NAME)
for collection, rec_list in raw_data['Schedule'].items():
record_type = collection[:-1]
cls_name = record_type.capitalize()
cls = globals().get(cls_name, DbRecord)
if inspect.isclass(cls) and issubclass(cls, DbRecord):
factory = cls
else:
factory = DbRecord
for record in rec_list:
key = '{}.{}'.format(record_type, record['serial'])
record['serial'] = key
db[key] = factory(**record)
import shelve
db = shelve.open(DB_NAME)
if CONFERENCE not in db:
load_db(db)
DbRecord.set_db(db)
event = DbRecord.fetch('event.33950')
event
event.venue
event.venue.name
for spkr in event.speakers:
print('{0.serial}: {0.name}'.format(spkr))
event.speakers
db.close()
```
# Using a Property for Attribute Validation
## LineItem Take #1: Class for an Item in an Order
```
class LineItem:
def __init__(self, description, weight, price):
self.description = description
self.weight = weight
self.price = price
def subtotal(self):
return self.weight * self.price
raisins = LineItem('Golden raisins', 10, 6.95)
raisins.subtotal()
raisins.weight = -20
raisins.subtotal()
```
## LineItem Take #2: A Validating Property
```
class LineItem:
def __init__(self, description, weight, price):
self.description = description
self.weight = weight
self.price = price
def subtotal(self):
return self.weight * self.price
@property
def weight(self):
return self.__weight
@weight.setter
def weight(self, value):
if value > 0:
self.__weight = value
else:
raise ValueError('value must be > 0')
walnuts = LineItem('walnuts', 0, 10.00)
```
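With the setter in place, the bug from Take #1 (a negative weight silently producing a negative subtotal) now fails fast at assignment time. A compact sketch of the guard's behavior, using a toy class rather than `LineItem` itself:

```python
class Guarded:
    @property
    def weight(self):
        return self.__dict__['_w']

    @weight.setter
    def weight(self, value):
        if value > 0:
            self.__dict__['_w'] = value
        else:
            raise ValueError('value must be > 0')

g = Guarded()
g.weight = 8
try:
    g.weight = -20  # rejected by the setter
except ValueError as e:
    print(e)  # value must be > 0
print(g.weight)  # 8, unchanged
```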
# A Proper Look at Properties
```
class LineItem:
def __init__(self, description, weight, price):
self.description = description
self.weight = weight
self.price = price
def subtotal(self):
return self.weight * self.price
def get_weight(self):
return self.__weight
def set_weight(self, value):
if value > 0:
self.__weight = value
else:
raise ValueError('value must be > 0')
weight = property(get_weight, set_weight)
```
## Properties Override Instance Attributes
```
class Class:
data = 'the class data attr'
@property
def prop(self):
return 'the prop value'
obj = Class()
vars(obj)
obj.data
obj.data = 'bar'
vars(obj)
obj.data
Class.data
Class.prop
obj.prop
obj.prop = 'foo'
obj.__dict__['prop'] = 'foo'
vars(obj)
obj.prop
Class.prop = 'baz'
obj.prop
obj.data
Class.data
Class.data = property(lambda self: 'the "data" prop value')
obj.data
del Class.data
obj.data
```
## Property Documentation
```
class Foo:
@property
def bar(self):
"""The bar attribute"""
return self.__dict__['bar']
@bar.setter
def bar(self, value):
self.__dict__['bar'] = value
help(Foo.bar)
help(Foo)
```
# Coding a Property Factory
```
def quantity(storage_name):
def qty_getter(instance):
return instance.__dict__[storage_name]
def qty_setter(instance, value):
if value > 0 :
instance.__dict__[storage_name] = value
else:
raise ValueError('value must be > 0')
return property(qty_getter, qty_setter)
class LineItem:
weight = quantity('weight')
price = quantity('price')
def __init__(self, description, weight, price):
self.description = description
self.weight = weight
self.price = price
def subtotal(self):
return self.weight * self.price
nutmeg = LineItem('Moluccan nutmeg', 8, 13.95)
nutmeg.weight, nutmeg.price
sorted(vars(nutmeg).items())
```
# Handling Attribute Deletion
```
class BlackKnight:
def __init__(self):
self.members = ['an arm', 'another arm','a leg', 'another leg']
self.phrases = ["'Tis but a scratch.", "It's just a flesh wound.",
"I'm invincible!", "All right, we'll call it a draw."]
@property
def member(self):
print('next member is:')
return self.members[0]
@member.deleter
def member(self):
text = 'BLACK KNIGHT (loses {})\n-- {}'
print(text.format(self.members.pop(0), self.phrases.pop(0)))
knight = BlackKnight()
knight.member
del knight.member
del knight.member
del knight.member
del knight.member
```
```
from collections import defaultdict
import random
import math
import networkx as nx
import numpy as np
from six import iteritems
from gensim.models.keyedvectors import Vocab
import torch
import torch.nn as nn
from torch.nn.parameter import Parameter
import torch.nn.functional as F
import tqdm
import dgl
import dgl.function as fn
from sklearn.metrics import (auc, f1_score, precision_recall_curve,
roc_auc_score)
class NSLoss(nn.Module):
# e.g. num_nodes=511, num_sampled=5, embedding_size=200
def __init__(self, num_nodes, num_sampled, embedding_size):
super(NSLoss, self).__init__()
self.num_nodes = num_nodes
self.num_sampled = num_sampled
self.embedding_size = embedding_size
self.weights = Parameter(torch.FloatTensor(num_nodes, embedding_size))
self.sample_weights = F.normalize(
torch.Tensor(
[
(math.log(k + 2) - math.log(k + 1)) / math.log(num_nodes + 1)
for k in range(num_nodes)
]
),
dim=0,
)
self.reset_parameters()
def reset_parameters(self):
self.weights.data.normal_(std=1.0 / math.sqrt(self.embedding_size))
def forward(self, input, embs, label):
n = input.shape[0]
log_target = torch.log(
torch.sigmoid(torch.sum(torch.mul(embs, self.weights[label]), 1))
)
negs = torch.multinomial(
self.sample_weights, self.num_sampled * n, replacement=True
).view(n, self.num_sampled)
noise = torch.neg(self.weights[negs])
sum_log_sampled = torch.sum(
torch.log(torch.sigmoid(torch.bmm(noise, embs.unsqueeze(2)))), 1
).squeeze()
loss = log_target + sum_log_sampled
return -loss.sum() / n
def load_testing_data(f_name):
print('We are loading data from:', f_name)
true_edge_data_by_type = dict()
false_edge_data_by_type = dict()
all_edges = list()
all_nodes = list()
with open(f_name, 'r') as f:
for line in f:
words = line[:-1].split(' ')
x, y = words[1], words[2]
if int(words[3]) == 1:
if words[0] not in true_edge_data_by_type:
true_edge_data_by_type[words[0]] = list()
true_edge_data_by_type[words[0]].append((x, y))
else:
if words[0] not in false_edge_data_by_type:
false_edge_data_by_type[words[0]] = list()
false_edge_data_by_type[words[0]].append((x, y))
all_nodes.append(x)
all_nodes.append(y)
all_nodes = list(set(all_nodes))
return true_edge_data_by_type, false_edge_data_by_type
edge_data_by_type = dict()
all_nodes = list()
with open("data/example/train.txt", 'r') as f:
for line in f:
words = line[:-1].split(' ')
if words[0] not in edge_data_by_type:
edge_data_by_type[words[0]] = list()
x, y = words[1], words[2]
edge_data_by_type[words[0]].append((x, y))
all_nodes.append(x)
all_nodes.append(y)
all_nodes = list(set(all_nodes))
print('Total training nodes: ' + str(len(all_nodes)))
training_data_by_type = edge_data_by_type
valid_true_data_by_edge, valid_false_data_by_edge = load_testing_data(
"data/example/valid.txt"
)
testing_true_data_by_edge, testing_false_data_by_edge = load_testing_data(
"data/example/test.txt"
)
true_edge_data_by_type = dict()
false_edge_data_by_type = dict()
all_edges = list()
all_nodes = list()
with open("data/example/test.txt", 'r') as f:
for line in f:
words = line[:-1].split(' ')
x, y = words[1], words[2]
if int(words[3]) == 1:
if words[0] not in true_edge_data_by_type:
true_edge_data_by_type[words[0]] = list()
true_edge_data_by_type[words[0]].append((x, y))
else:
if words[0] not in false_edge_data_by_type:
false_edge_data_by_type[words[0]] = list()
false_edge_data_by_type[words[0]].append((x, y))
all_nodes.append(x)
all_nodes.append(y)
all_nodes = list(set(all_nodes))
# get_G_from_edges(tmp_data) tmp_data: [(node1, node2), (node, node)]
def get_G_from_edges(edges):
edge_dict = dict() # store how many edges between node1 and node2
for edge in edges: # (node1, node2)
edge_key = str(edge[0]) + '_' + str(edge[1])
if edge_key not in edge_dict:
edge_dict[edge_key] = 1
else:
edge_dict[edge_key] += 1
tmp_G = nx.Graph()
for edge_key in edge_dict:
weight = edge_dict[edge_key]
x = edge_key.split('_')[0]
y = edge_key.split('_')[1]
tmp_G.add_edge(x, y)
tmp_G[x][y]['weight'] = weight
return tmp_G
class RWGraph():
# layer_walker = RWGraph(get_G_from_edges(tmp_data))
def __init__(self, nx_G, node_type=None):
self.G = nx_G
self.node_type = node_type
# print("node_type", node_type)
def walk(self, walk_length, start, schema=None):
# Simulate a random walk starting from start node.
G = self.G
rand = random.Random()
if schema:
schema_items = schema.split('-')
assert schema_items[0] == schema_items[-1]
walk = [start]
while len(walk) < walk_length:
cur = walk[-1]
candidates = []
for node in G[cur].keys():
if schema is None or self.node_type[node] == schema_items[len(walk) % (len(schema_items) - 1)]:
candidates.append(node)
if candidates:
walk.append(rand.choice(candidates))
else:
break
return [str(node) for node in walk]
# layer_walker = RWGraph(get_G_from_edges(tmp_data))
# layer_walks = layer_walker.simulate_walks(num_walks=20, walk_length=10, schema=schema)
def simulate_walks(self, num_walks, walk_length, schema=None):
G = self.G
walks = []
nodes = list(G.nodes())
# print('Walk iteration:')
if schema is not None:
schema_list = schema.split(',')
for walk_iter in range(num_walks):
random.shuffle(nodes)
for node in nodes:
if schema is None:
walks.append(self.walk(walk_length=walk_length, start=node))
else:
for schema_iter in schema_list:
if schema_iter.split('-')[0] == self.node_type[node]:
walks.append(self.walk(walk_length=walk_length, start=node, schema=schema_iter))
return walks
# generate_walks(training_data_by_type, 20, 10, None, "data/example")
file_name = "data/example"
def generate_walks(network_data, num_walks, walk_length, schema, file_name):
if schema is not None:
node_type = load_node_type(file_name + '/node_type.txt')
else:
node_type = None
all_walks = []
for layer_id in network_data: # edge_type
tmp_data = network_data[layer_id] # node of each layer (edge type)
# start to do the random walk on a layer
layer_walker = RWGraph(get_G_from_edges(tmp_data))
layer_walks = layer_walker.simulate_walks(num_walks, walk_length, schema=schema)
all_walks.append(layer_walks)
print('Finish generating the walks')
return all_walks
def generate_vocab(all_walks): # (2, [8640, 4660], 10)
index2word = []
raw_vocab = defaultdict(int)
for walks in all_walks:
for walk in walks:
for word in walk:
raw_vocab[word] += 1
vocab = {}
for word, v in raw_vocab.items(): # no duplicate
vocab[word] = Vocab(count=v, index=len(index2word))
index2word.append(word)
index2word.sort(key=lambda word: vocab[word].count, reverse=True)
for i, word in enumerate(index2word):
vocab[word].index = i
return vocab, index2word
def generate_pairs(all_walks, vocab, window_size):
pairs = []
skip_window = window_size // 2
for layer_id, walks in enumerate(all_walks):
for walk in walks:
for i in range(len(walk)):
for j in range(1, skip_window + 1):
if i - j >= 0:
pairs.append((vocab[walk[i]].index, vocab[walk[i - j]].index, layer_id))
if i + j < len(walk):
pairs.append((vocab[walk[i]].index, vocab[walk[i + j]].index, layer_id))
return pairs
window_size = 5
all_walks = generate_walks(training_data_by_type, 20, 10, None, file_name)
vocab, index2word = generate_vocab(all_walks)
train_pairs = generate_pairs(all_walks, vocab, window_size)
edge_types = list(training_data_by_type.keys())
eval_type = 'all'
num_nodes = len(index2word)
edge_type_count = len(edge_types)
epochs = 100
batch_size = 64
embedding_size = 200
embedding_u_size = 15
u_num = edge_type_count
num_sampled = 5
dim_a = 20
att_head = 1
neighbor_samples = 10
device = torch.device("cuda:0" if torch.cuda.is_available() else "cpu")
network_data=training_data_by_type
neighbors = [[[] for __ in range(edge_type_count)] for _ in range(num_nodes)]
for r in range(edge_type_count):
g = network_data[edge_types[r]]
for (x, y) in g:
ix = vocab[x].index
iy = vocab[y].index
neighbors[ix][r].append(iy)
neighbors[iy][r].append(ix)
for i in range(num_nodes):
if len(neighbors[i][r]) == 0: # no neighbor
neighbors[i][r] = [i] * neighbor_samples # regard itself as its neighbor
elif len(neighbors[i][r]) < neighbor_samples: # randomly repeat neighbors to reach neighbor_samples
neighbors[i][r].extend(
list(
np.random.choice(
neighbors[i][r],
size=neighbor_samples - len(neighbors[i][r]),
)
)
)
elif len(neighbors[i][r]) > neighbor_samples: # random pick 10 and remove others
neighbors[i][r] = list(
np.random.choice(neighbors[i][r], size=neighbor_samples)
)
# train_pairs: size: 452200, form: (node1, node2, layer_id)
# neighbors: [num_nodes=511, 2, 10]
def get_batches(pairs, neighbors, batch_size):
n_batches = (len(pairs) + (batch_size - 1)) // batch_size
for idx in range(n_batches):
x, y, t, neigh = [], [], [], []
for i in range(batch_size):
index = idx * batch_size + i
if index >= len(pairs):
break
x.append(pairs[index][0])
y.append(pairs[index][1])
t.append(pairs[index][2])
neigh.append(neighbors[pairs[index][0]])
yield torch.tensor(x), torch.tensor(y), torch.tensor(t), torch.tensor(neigh)
def get_graphs(layers, index2word, neighbors):
graphs = []
for layer in range(layers):
g = dgl.DGLGraph()
g.add_nodes(len(index2word))
graphs.append(g)
for n in range(len(neighbors)):
for layer in range(layers):
graphs[layer].add_edges(n, neighbors[n][layer])
return graphs
class DGLGATNE(nn.Module):
def __init__(
self, graphs, num_nodes, embedding_size, embedding_u_size, edge_type_count, dim_a
):
super(DGLGATNE, self).__init__()
assert len(graphs) == edge_type_count
self.graphs = graphs
self.num_nodes = num_nodes
self.embedding_size = embedding_size
self.embedding_u_size = embedding_u_size
self.edge_type_count = edge_type_count
self.dim_a = dim_a
# for g in self.graphs:
# g.ndata['node_type_embeddings'] = -2 * torch.rand(num_nodes, embedding_u_size) + 1
self.node_embeddings = Parameter(torch.FloatTensor(num_nodes, embedding_size))
self.node_type_embeddings = Parameter(
torch.FloatTensor(num_nodes, edge_type_count, embedding_u_size)
)
self.trans_weights = Parameter(
torch.FloatTensor(edge_type_count, embedding_u_size, embedding_size)
)
self.trans_weights_s1 = Parameter(
torch.FloatTensor(edge_type_count, embedding_u_size, dim_a)
)
self.trans_weights_s2 = Parameter(torch.FloatTensor(edge_type_count, dim_a, 1))
self.reset_parameters()
def reset_parameters(self):
self.node_embeddings.data.uniform_(-1.0, 1.0)
self.node_type_embeddings.data.uniform_(-1.0, 1.0)
self.trans_weights.data.normal_(std=1.0 / math.sqrt(self.embedding_size))
self.trans_weights_s1.data.normal_(std=1.0 / math.sqrt(self.embedding_size))
self.trans_weights_s2.data.normal_(std=1.0 / math.sqrt(self.embedding_size))
# data: node1, node2, layer_id, 10 neighbors of node1. dimension: batch_size/batch_size*10
# embs: [batch_size64, embedding_size200]
# embs = model(data[0].to(device), data[2].to(device), data[3].to(device))
def forward(self, train_inputs, train_types, node_neigh):
for g in self.graphs:
g.ndata['node_type_embeddings'] = self.node_type_embeddings
sub_graphs = []
for layer in range(self.edge_type_count):
edges = self.graphs[layer].edge_ids(train_inputs[0], node_neigh[0][layer])
for node in range(1, train_inputs.shape[0]):
e = self.graphs[layer].edge_ids(train_inputs[node], node_neigh[node][layer])
edges = torch.cat([edges, e], dim = 0)
graph = self.graphs[layer].edge_subgraph(edges, preserve_nodes=True)
graph.ndata['node_type_embeddings'] = self.node_type_embeddings[:, layer, :]
sub_graphs.append(graph)
node_embed = self.node_embeddings
node_type_embed = []
for layer in range(self.edge_type_count):
graph = sub_graphs[layer]
graph.update_all(fn.copy_src('node_type_embeddings', 'm'), fn.sum('m', 'neigh'))
node_type_embed.append(graph.ndata['neigh'][train_inputs])
node_type_embed = torch.stack(node_type_embed, 1) # batch, layers, 10
# [batch_size, embedding_u_size10, embedding_size200]
trans_w = self.trans_weights[train_types]
# [batch_size, embedding_u_size10, dim_a20]
trans_w_s1 = self.trans_weights_s1[train_types]
# [batch_size, dim_a20, 1]
trans_w_s2 = self.trans_weights_s2[train_types]
attention = F.softmax(
torch.matmul(
torch.tanh(torch.matmul(node_type_embed, trans_w_s1)), trans_w_s2
).squeeze(2),
dim=1,
).unsqueeze(1)
node_type_embed = torch.matmul(attention, node_type_embed)
node_embed = node_embed[train_inputs] + torch.matmul(node_type_embed, trans_w).squeeze(1)
last_node_embed = F.normalize(node_embed, dim=1)
return last_node_embed
# def forward(self, train_inputs, train_types, node_neigh):
# graphs = self.graphs
# node_embed = self.node_embeddings[train_inputs]
# node_embed_neighbors = self.node_type_embeddings[node_neigh]
# node_embed_tmp = torch.cat(
# [
# node_embed_neighbors[:, i, :, i, :].unsqueeze(1)
# for i in range(self.edge_type_count)
# ],
# dim=1,
# )
# node_type_embed = torch.sum(node_embed_tmp, dim=2)
# trans_w = self.trans_weights[train_types]
# trans_w_s1 = self.trans_weights_s1[train_types]
# trans_w_s2 = self.trans_weights_s2[train_types]
# attention = F.softmax(
# torch.matmul(
# torch.tanh(torch.matmul(node_type_embed, trans_w_s1)), trans_w_s2
# ).squeeze(2),
# dim=1,
# ).unsqueeze(1)
# node_type_embed = torch.matmul(attention, node_type_embed)
# node_embed = node_embed + torch.matmul(node_type_embed, trans_w).squeeze(1)
# last_node_embed = F.normalize(node_embed, dim=1)
# return last_node_embed
graphs = get_graphs(edge_type_count, index2word, neighbors)
model = DGLGATNE(
graphs, num_nodes, embedding_size, embedding_u_size, edge_type_count, dim_a
)
# model = GATNEModel(
# num_nodes, embedding_size, embedding_u_size, edge_type_count, dim_a
# )
nsloss = NSLoss(num_nodes, num_sampled, embedding_size)
model.to(device)
nsloss.to(device)
optimizer = torch.optim.Adam(
[{"params": model.parameters()}, {"params": nsloss.parameters()}], lr=1e-4
)
best_score = 0
patience = 0
# batches = get_batches(train_pairs, neighbors)
epochs = 1
for epoch in range(epochs):
random.shuffle(train_pairs)
batches = get_batches(train_pairs, neighbors, batch_size) # 7066 batches
data_iter = tqdm.tqdm(
batches,
desc="epoch %d" % (epoch),
total=(len(train_pairs) + (batch_size - 1)) // batch_size,
bar_format="{l_bar}{r_bar}",
)
avg_loss = 0.0
for i, data in enumerate(data_iter):
# batch by batch, 7066 batches in total
optimizer.zero_grad()
# data: node1, node2, layer_id, 10 neighbors of node1. dimension: batch_size/batch_size*10
# embs: [batch_size64, embedding_size200]
embs = model(data[0].to(device), data[2].to(device), data[3].to(device))
loss = nsloss(data[0].to(device), embs, data[1].to(device))
loss.backward()
optimizer.step()
avg_loss += loss.item()
if i % 5000 == 0:
post_fix = {
"epoch": epoch,
"iter": i,
"avg_loss": avg_loss / (i + 1),
"loss": loss.item(),
}
data_iter.write(str(post_fix))
final_model = dict(zip(edge_types, [dict() for _ in range(edge_type_count)]))
for i in range(num_nodes):
train_inputs = torch.tensor([i for _ in range(edge_type_count)]).to(device) # [i, i]
train_types = torch.tensor(list(range(edge_type_count))).to(device) # [0, 1]
node_neigh = torch.tensor(
[neighbors[i] for _ in range(edge_type_count)] # [2, 2, 10]
).to(device)
    node_emb = model(train_inputs, train_types, node_neigh) # [2, 200]; forward() takes no `subgraph` argument
for j in range(edge_type_count):
final_model[edge_types[j]][index2word[i]] = (
node_emb[j].cpu().detach().numpy()
)
def evaluate(model, true_edges, false_edges):
true_list = list()
prediction_list = list()
true_num = 0
for edge in true_edges:
tmp_score = get_score(model, str(edge[0]), str(edge[1]))
if tmp_score is not None:
true_list.append(1)
prediction_list.append(tmp_score)
true_num += 1
for edge in false_edges:
tmp_score = get_score(model, str(edge[0]), str(edge[1]))
if tmp_score is not None:
true_list.append(0)
prediction_list.append(tmp_score)
sorted_pred = prediction_list[:]
sorted_pred.sort()
threshold = sorted_pred[-true_num]
y_pred = np.zeros(len(prediction_list), dtype=np.int32)
for i in range(len(prediction_list)):
if prediction_list[i] >= threshold:
y_pred[i] = 1
y_true = np.array(true_list)
y_scores = np.array(prediction_list)
ps, rs, _ = precision_recall_curve(y_true, y_scores)
return roc_auc_score(y_true, y_scores), f1_score(y_true, y_pred), auc(rs, ps)
def get_score(local_model, node1, node2):
try:
vector1 = local_model[node1]
vector2 = local_model[node2]
return np.dot(vector1, vector2) / (np.linalg.norm(vector1) * np.linalg.norm(vector2))
    except KeyError:
        # one of the nodes has no embedding in this layer's model
        return None
# train_pairs: size: 452200, form: (node1, node2, layer_id)
# neighbors: [num_nodes=511, 2, 10]
def get_batches(pairs, neighbors):
x, y, t, neigh = [], [], [], []
for pair in pairs:
x.append(pair[0])
y.append(pair[1])
t.append(pair[2])
neigh.append(neighbors[pair[0]])
return torch.tensor(x), torch.tensor(y), torch.tensor(t), torch.tensor(neigh)
final_model = dict(zip(edge_types, [dict() for _ in range(edge_type_count)]))
for i in range(num_nodes):
train_inputs = torch.tensor([i for _ in range(edge_type_count)]).to(device) # [i, i]
train_types = torch.tensor(list(range(edge_type_count))).to(device) # [0, 1]
node_neigh = torch.tensor(
[neighbors[i] for _ in range(edge_type_count)] # [2, 2, 10]
).to(device)
    node_emb = model(train_inputs, train_types, node_neigh) # [2, 200]; forward() takes no `subgraph` argument
for j in range(edge_type_count):
final_model[edge_types[j]][index2word[i]] = (
node_emb[j].cpu().detach().numpy()
)
valid_aucs, valid_f1s, valid_prs = [], [], []
test_aucs, test_f1s, test_prs = [], [], []
for i in range(edge_type_count):
if eval_type == "all" or edge_types[i] in eval_type.split(","):
tmp_auc, tmp_f1, tmp_pr = evaluate(
final_model[edge_types[i]],
valid_true_data_by_edge[edge_types[i]],
valid_false_data_by_edge[edge_types[i]],
)
valid_aucs.append(tmp_auc)
valid_f1s.append(tmp_f1)
valid_prs.append(tmp_pr)
tmp_auc, tmp_f1, tmp_pr = evaluate(
final_model[edge_types[i]],
testing_true_data_by_edge[edge_types[i]],
testing_false_data_by_edge[edge_types[i]],
)
test_aucs.append(tmp_auc)
test_f1s.append(tmp_f1)
test_prs.append(tmp_pr)
print("valid auc:", np.mean(valid_aucs))
print("valid pr:", np.mean(valid_prs))
print("valid f1:", np.mean(valid_f1s))
average_auc = np.mean(test_aucs)
average_f1 = np.mean(test_f1s)
average_pr = np.mean(test_prs)
cur_score = np.mean(valid_aucs)
if cur_score > best_score:
best_score = cur_score
patience = 0
else:
patience += 1
if patience > args.patience:
print("Early Stopping")
```
# Creating a GRU model using Trax: Ungraded Lecture Notebook
For this lecture notebook you will be using Trax's layers. These are the building blocks for creating neural networks with Trax.
```
import trax
from trax import layers as tl
```
Trax allows you to define neural network architectures by stacking layers (similarly to other libraries such as Keras). For this, the `Serial()` combinator is often used, as it stacks layers serially using function composition.
Next you can see a simple vanilla NN architecture containing one hidden (dense) layer with 128 cells and an output (dense) layer with 10 cells, followed by a final log-softmax layer.
```
mlp = tl.Serial(
tl.Dense(128),
tl.Relu(),
tl.Dense(10),
tl.LogSoftmax()
)
```
Each of the layers within the `Serial` combinator layer is considered a sublayer. Notice that unlike similar libraries, **in Trax the activation functions are considered layers.** To know more about the `Serial` layer check the docs [here](https://trax-ml.readthedocs.io/en/latest/trax.layers.html#trax.layers.combinators.Serial).
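As a rough mental model (plain Python, not Trax code), `Serial` behaves like left-to-right function composition over its sublayers:

```python
def serial(*layers):
    """Minimal pure-Python analogue of a Serial combinator:
    feeds the output of each layer into the next one."""
    def model(x):
        for layer in layers:
            x = layer(x)
        return x
    return model

# Compose two toy "layers": double the input, then add one.
double_then_increment = serial(lambda x: 2 * x, lambda x: x + 1)
print(double_then_increment(3))  # 7
```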
You can try printing this object:
```
print(mlp)
```
Printing the model gives you the exact same information as the model's definition itself.
By just looking at the definition you can clearly see what is going on inside the neural network. Trax is very straightforward in the way a network is defined; that is one of the things that makes it awesome!
## GRU MODEL
To create a `GRU` model you will need to be familiar with the following layers (Documentation link attached with each layer name):
- [`ShiftRight`](https://trax-ml.readthedocs.io/en/latest/trax.layers.html#trax.layers.attention.ShiftRight) Shifts the tensor to the right by padding on axis 1. The `mode` should be specified; it refers to the context in which the model is being used. Possible values are 'train', 'eval' or 'predict' (predict mode is for fast inference), and it defaults to 'train'.
- [`Embedding`](https://trax-ml.readthedocs.io/en/latest/trax.layers.html#trax.layers.core.Embedding) Maps discrete tokens to vectors. It will have shape `(vocabulary length X dimension of output vectors)`. The dimension of output vectors (also called `d_feature`) is the number of elements in the word embedding.
- [`GRU`](https://trax-ml.readthedocs.io/en/latest/trax.layers.html#trax.layers.rnn.GRU) The GRU layer. It leverages another Trax layer called [`GRUCell`](https://trax-ml.readthedocs.io/en/latest/trax.layers.html#trax.layers.rnn.GRUCell). The number of GRU units should be specified and should match the number of elements in the word embedding. If you want to stack two consecutive GRU layers, this can be done using a Python list comprehension.
- [`Dense`](https://trax-ml.readthedocs.io/en/latest/trax.layers.html#trax.layers.core.Dense) Vanilla Dense layer.
- [`LogSoftMax`](https://trax-ml.readthedocs.io/en/latest/trax.layers.html#trax.layers.core.LogSoftmax) Log Softmax function.
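To build intuition for what `ShiftRight` does before using it below, here is a minimal pure-Python sketch (not Trax code) of shifting a batch of token sequences right by one position, padding at the start of each sequence:

```python
def shift_right(batch, pad=0):
    # Prepend the pad token and drop the last token of each sequence,
    # mimicking a right shift along axis 1.
    return [[pad] + seq[:-1] for seq in batch]

print(shift_right([[5, 6, 7], [1, 2, 3]]))  # [[0, 5, 6], [0, 1, 2]]
```

This is the standard teacher-forcing trick: the model at position *t* sees the token from position *t - 1* as input.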
Putting everything together the GRU model will look like this:
```
mode = 'train'
vocab_size = 256
model_dimension = 512
n_layers = 2
GRU = tl.Serial(
    tl.ShiftRight(mode=mode), # Remember to pass the mode parameter if you are using it for inference/test, as the default is 'train'
tl.Embedding(vocab_size=vocab_size, d_feature=model_dimension),
    [tl.GRU(n_units=model_dimension) for _ in range(n_layers)], # You can play around with n_layers if you want to stack more GRU layers together
tl.Dense(n_units=vocab_size),
tl.LogSoftmax()
)
```
Next is a helper function that prints information for every layer (sublayer within `Serial`):
_Try changing the parameters defined before the GRU model and see how it changes!_
```
def show_layers(model, layer_prefix="Serial.sublayers"):
print(f"Total layers: {len(model.sublayers)}\n")
for i in range(len(model.sublayers)):
print('========')
print(f'{layer_prefix}_{i}: {model.sublayers[i]}\n')
show_layers(GRU)
```
Hopefully you are now more familiar with creating GRU models using Trax.
You will train this model in this week's assignment and see it in action.
**GRU and the trax minions will return, in this week's endgame.**
## Face and Facial Keypoint detection
After you've trained a neural network to detect facial keypoints, you can then apply this network to *any* image that includes faces. The neural network expects a Tensor of a certain size as input, so to detect any face you'll first have to do some pre-processing.
1. Detect all the faces in an image using a face detector (we'll be using a Haar Cascade detector in this notebook).
2. Pre-process those face images so that they are grayscale, and transformed to a Tensor of the input size that your net expects. This step will be similar to the `data_transform` you created and applied in Notebook 2, whose job was to rescale, normalize, and turn any image into a Tensor to be accepted as input to your CNN.
3. Use your trained model to detect facial keypoints on the image.
---
In the next python cell we load in required libraries for this section of the project.
```
import numpy as np
import matplotlib.pyplot as plt
import matplotlib.image as mpimg
%matplotlib inline
```
#### Select an image
Select an image to perform facial keypoint detection on; you can select any image of faces in the `images/` directory.
```
import cv2
# load in color image for face detection
image = cv2.imread('images/obamas.jpg')
# switch red and blue color channels
# --> by default OpenCV assumes BLUE comes first, not RED as in many images
image = cv2.cvtColor(image, cv2.COLOR_BGR2RGB)
# plot the image
fig = plt.figure(figsize=(9,9))
plt.imshow(image)
```
## Detect all faces in an image
Next, you'll use one of OpenCV's pre-trained Haar Cascade classifiers, all of which can be found in the `detector_architectures/` directory, to find any faces in your selected image.
In the code below, we loop over each face in the original image and draw a red square on each face (in a copy of the original image, so as not to modify the original). You can even [add eye detections](https://docs.opencv.org/3.4.1/d7/d8b/tutorial_py_face_detection.html) as an *optional* exercise in using Haar detectors.
An example of face detection on a variety of images is shown below.
<img src='images/haar_cascade_ex.png' width=80% height=80%/>
```
# load in a haar cascade classifier for detecting frontal faces
face_cascade = cv2.CascadeClassifier('detector_architectures/haarcascade_frontalface_default.xml')
# run the detector
# the output here is an array of detections; the corners of each detection box
# if necessary, modify these parameters until you successfully identify every face in a given image
faces = face_cascade.detectMultiScale(image, 1.2, 2)
# make a copy of the original image to plot detections on
image_with_detections = image.copy()
# loop over the detected faces, mark the image where each face is found
for (x,y,w,h) in faces:
# draw a rectangle around each detected face
# you may also need to change the width of the rectangle drawn depending on image resolution
cv2.rectangle(image_with_detections,(x,y),(x+w,y+h),(255,0,0),3)
fig = plt.figure(figsize=(9,9))
plt.imshow(image_with_detections)
```
## Loading in a trained model
Once you have an image to work with (and, again, you can select any image of faces in the `images/` directory), the next step is to pre-process that image and feed it into your CNN facial keypoint detector.
First, load your best model by its filename.
```
import torch
from models import Net
net = Net()
## TODO: load the best saved model parameters (by your path name)
## You'll need to un-comment the line below and add the correct name for *your* saved model
# net.load_state_dict(torch.load('saved_models/keypoints_model_1.pt'))
net.load_state_dict(torch.load('saved_models/keypoints_model_1.pt'))
## print out your net and prepare it for testing (uncomment the line below)
# net.eval()
net.eval()
```
## Keypoint detection
Now, we'll loop over each detected face in an image (again!), only this time you'll transform those faces into Tensors that your CNN can accept as input.
### TODO: Transform each detected face into an input Tensor
You'll need to perform the following steps for each detected face:
1. Convert the face from RGB to grayscale
2. Normalize the grayscale image so that its color range falls in [0,1] instead of [0,255]
3. Rescale the detected face to be the expected square size for your CNN (224x224, suggested)
4. Reshape the numpy image into a torch image.
You may find it useful to consult to transformation code in `data_load.py` to help you perform these processing steps.
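As a shape-and-normalization illustration of steps 1-4 (NumPy only; the grayscale here is a naive channel average, whereas `cv2.cvtColor` uses a weighted sum, so treat this purely as a sketch):

```python
import numpy as np

# Fake RGB face crop standing in for a detected face region.
face = np.random.randint(0, 256, size=(224, 224, 3), dtype=np.uint8)

gray = face.mean(axis=2)               # step 1 (approximate): RGB -> grayscale
gray = gray / 255.0                    # step 2: scale pixel values into [0, 1]
# step 3 (resizing to 224x224) is skipped since this fake crop already has that size
tensor_shaped = gray[np.newaxis, ...]  # step 4: (H, W) -> (C=1, H, W) torch layout
print(tensor_shaped.shape)  # (1, 224, 224)
```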
### TODO: Detect and display the predicted keypoints
After each face has been appropriately converted into an input Tensor for your network, you'll wrap that Tensor in a Variable() and can apply your `net` to each face. The output should be the predicted facial keypoints. These keypoints will need to be "un-normalized" for display, and you may find it helpful to write a helper function like `show_keypoints`. You should end up with an image like the following, with facial keypoints that closely match the facial features on each individual face:
<img src='images/michelle_detected.png' width=30% height=30%/>
```
from torch.autograd import Variable
image_copy = np.copy(image)
img_count = 1
fig = plt.figure(figsize=(20, 5))
# loop over the detected faces from your haar cascade
for (x,y,w,h) in faces:
# Select the region of interest that is the face in the image
head_img = image_copy[y:y+h, x:x+w]
## TODO: Convert the face region from RGB to grayscale
head_img = cv2.cvtColor(head_img, cv2.COLOR_RGB2GRAY)
## TODO: Normalize the grayscale image so that its color range falls in [0,1] instead of [0,255]
head_img= head_img/255.0
## TODO: Rescale the detected face to be the expected square size for your CNN (224x224, suggested)
h, w = head_img.shape[:2]
head_img = cv2.resize(head_img, (224, 224))
## TODO: Reshape the numpy image shape (H x W x C) into a torch image shape (C x H x W)
head_img_torch = head_img.reshape(1,224,224)
## TODO: Make facial keypoint predictions using your loaded, trained network
## perform a forward pass to get the predicted facial keypoints
head_img_torch = head_img_torch.reshape(1,1,224,224)
head_img_torch = torch.from_numpy(head_img_torch)
head_img_torch = Variable(head_img_torch)
head_img_torch = head_img_torch.type(torch.FloatTensor)
output_pts = net(head_img_torch)
## TODO: Display each detected face and the corresponding keypoints
predicted_key_pts = output_pts.data.numpy().reshape(-1, 2)
# undo normalization of keypoints
predicted_key_pts = predicted_key_pts * 60 + 100
print(predicted_key_pts.shape)
print(head_img.shape)
ax = fig.add_subplot(1, img_count, img_count)
ax.imshow(head_img, cmap='gray')
ax.scatter(predicted_key_pts[:, 0], predicted_key_pts[:, 1], s=20, marker='.', c='m')
img_count += 1
```
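If you prefer a named helper for the un-normalization step used in the loop above, here is a sketch; the `scale=60` / `offset=100` values mirror the `* 60 + 100` in the loop and are assumed to match this dataset's keypoint normalization:

```python
import numpy as np

def unnormalize_keypoints(output_pts, scale=60.0, offset=100.0):
    """Reshape a flat network output into (n_keypoints, 2) and undo
    the training-time normalization (pts_norm = (pts - offset) / scale)."""
    pts = np.asarray(output_pts, dtype=np.float64).reshape(-1, 2)
    return pts * scale + offset

print(unnormalize_keypoints([0.0, 0.0, 1.0, -1.0]).tolist())  # [[100.0, 100.0], [160.0, 40.0]]
```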
# Mask R-CNN - Inference Tool
## Google Colab Only
This section executes only when using this notebook on Google Colab (it retrieves the needed files). Errors might appear; do not worry about them. Check that the files you wanted to retrieve are in the ```Files``` tab on the left, represented by this icon : 
As Google Colab does not refresh the files list quickly, you may want to refresh it manually before checking whether the file you are looking for is there. Use the following icon to refresh the files list: 
For better performance, enable GPU under hardware accelerator: ```Runtime``` > ```Change runtime type``` or ```Edit``` > ```Notebook Settings``` and then ```'Hardware Accelerator' dropdown list``` > ```GPU```

```
import re
import sys
import os
IN_COLAB = 'google.colab' in sys.modules
max_px = str(pow(2, 32))
if IN_COLAB:
!export TF_CPP_MIN_LOG_LEVEL=3
!export OPENCV_IO_MAX_IMAGE_PIXELS=$max_px
!export CV_IO_MAX_IMAGE_PIXELS=$max_px
print("Executing in Google Colab")
%tensorflow_version 2.x
import shutil
shutil.rmtree('sample_data/', ignore_errors=True)
else:
os.environ['TF_CPP_MIN_LOG_LEVEL']='3'
os.environ['OPENCV_IO_MAX_IMAGE_PIXELS']=max_px
os.environ['CV_IO_MAX_IMAGE_PIXELS']=max_px
```
### Retrieving needed files
You can use this cell to update the files that have been downloaded during the same session and that have been updated on GitHub
```
if IN_COLAB:
GITHUB_REPO = "https://raw.githubusercontent.com/AdrienJaugey/Mask-R-CNN-Inference-Tool/stable/"
files = ['mrcnn/TensorflowDetector.py', 'mrcnn/utils.py', 'mrcnn/visualize.py', 'mrcnn/post_processing.py',
'mrcnn/Config.py', 'mrcnn/statistics.py', 'datasetTools/datasetDivider.py',
'datasetTools/datasetWrapper.py', 'datasetTools/datasetIsolator.py', 'datasetTools/AnnotationAdapter.py',
'datasetTools/ASAPAdapter.py', 'datasetTools/LabelMeAdapter.py', 'datasetTools/CustomDataset.py',
'InferenceTool.py', 'common_utils.py', "skinet.json", "skinet_inf.json"]
for fileToDownload in files:
url = GITHUB_REPO + fileToDownload
!wget -qN $url
if '/' in fileToDownload:
os.makedirs(os.path.dirname(fileToDownload), exist_ok=True)
fileName = os.path.basename(fileToDownload)
!mv $fileName $fileToDownload
```
### Connecting to Google Drive
The first time this cell is executed, a link should appear, asking you to grant access to the files of a Google account.
1. **Follow the link**;
2. **Choose the account** you want to link;
3. **Accept**;
4. **Copy the key** Google gave you;
5. **Paste the key in the text field** that appeared below the first link you used,
6. **Press ENTER**.
```
if IN_COLAB:
from google.colab import drive
drive.mount('/content/drive')
```
### Retrieving your image(s)
Choose how to get your image(s) from the following list on the right
Use ```.jpg``` or ```.png``` images only !
```
if IN_COLAB:
howToGetImage = "From Google Drive" #@param ["Upload", "From Google Drive"]
if IN_COLAB:
!rm -r images/ || true
!mkdir -p images
!mkdir -p images/chain
!mkdir -p images/cortex
!mkdir -p images/main
!mkdir -p images/mest_main
!mkdir -p images/mest_glom
!mkdir -p images/inflammation
```
#### By upload
```
if IN_COLAB and howToGetImage == "Upload":
imageType = 'chain' #@param ["chain", "cortex", "main", "mest_main", "mest_glom", "inflammation"]
print("Please upload the image(s) you want to run the inference on, you can upload the corresponding annotations files too.")
from google.colab import files
import shutil
uploaded = files.upload()
for fileName in uploaded:
shutil.move(fileName, os.path.join("images", imageType, fileName))
```
#### By copy from Google Drive
Be sure to customize the 2 variables for Google Colab to be able find your file(s) in Google Drive.
Let's say you have this hierarchy in your Google Drive:
```
Root directory of Google Drive
├─── Directory1
└─── Directory2
├─── images
│ ├─── example1.png
│ └─── example2.png
└─── saved_models
├─── mode1
└─── mode2.zip
```
1. `execMode` should match the name of the inference mode you will run with the images that will be retrieved;
2. `customPathInDrive` must represent all the directories between the root directory and your image file. In the example, it would be `Directory2/images/`. Keep it empty if **the file is directly in the root directory** of Google Drive;
3. `imageFilePath` must represent the file you want to copy. In the example, it would be `example1.png`. It can also be empty if you want to import all the folder's images *(and annotation files if the checkbox is checked)* directly to Google Colab; in the example, both `example1.png` and `example2.png` would be imported.
4. `annotationsFile`: if checked (i.e., set to `True`), the tool will retrieve annotation files (JSON and XML files) from the same folder.
Use the text fields available on the right.
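For the example hierarchy above, the variables compose into the full Drive path with plain `os.path.join`, as illustrated here:

```python
import os

pathToDrive = "/content/drive/MyDrive/"
customPathInDrive = "Directory2/images/"  # directories between the Drive root and the file
imageFilePath = "example1.png"

pathToFolder = os.path.join(pathToDrive, customPathInDrive)
pathToImage = os.path.join(pathToFolder, imageFilePath)
print(pathToImage)  # /content/drive/MyDrive/Directory2/images/example1.png
```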
```
if IN_COLAB and howToGetImage == "From Google Drive":
execMode = 'chain' #@param ["chain", "cortex", "main", "mest_main", "mest_glom", "inflammation"]
customPathInDrive = "" #@param {type:"string"}
imageFilePath = "" #@param{type:"string"}
annotationsFile = True #@param {type:"boolean"}
pathToDrive = "/content/drive/MyDrive/"
pathToFolder = os.path.join(pathToDrive, customPathInDrive)
annotationsFile = annotationsFile and execMode != "chain"
if imageFilePath != "":
pathToImage = os.path.join(pathToFolder, imageFilePath)
tempPath = os.path.join("images", execMode, imageFilePath)
print(f"Copying {pathToImage} to {tempPath}")
!cp -u $pathToImage $tempPath
if annotationsFile:
annotationsFileName = imageFilePath.split('.')[0] + '.xml'
pathToAnnotations = os.path.join(pathToFolder, annotationsFileName)
tempPath = os.path.join("images", execMode, annotationsFileName)
print(f"Copying {pathToAnnotations} to {tempPath}")
!cp -u $pathToAnnotations $tempPath
else:
fileList = os.listdir(pathToFolder)
ext = ['png', 'jpg']
if annotationsFile:
ext.extend(['xml', 'json'])
for dataFile in fileList:
if dataFile.split('.')[-1] in ext:
pathToFile = "'" + os.path.join(pathToFolder, dataFile) + "'"
tempPath = os.path.join("images", execMode, dataFile)
print(f"Copying {pathToFile} to {tempPath}")
!cp -u $pathToFile $tempPath
```
### Retrieving Weights File
This works the same way as retrieving an image file from Google Drive, but for the saved weights file (folder or zip) of the inference modes. With the earlier example, `customPathInDrive` would be `Directory2/saved_models/`.
```
if IN_COLAB:
execMode = 'chain' #@param ["chain", "chain_inf", "cortex", "main", "mest_main", "mest_glom", "inflammation"]
# Keep customPathInDrive empty if file directly in root directory of Google Drive
customPathInDrive = "" #@param {type:"string"}
isZipped = True #@param {type:"boolean"}
cortex = "skinet_cortex_v2" #@param {type:"string"}
main = "skinet_main_v1" #@param {type:"string"}
mest_main = "skinet_mest_main_v3" #@param {type:"string"}
mest_glom = "skinet_mest_glom_v3" #@param {type:"string"}
inflammation = "skinet_inflammation_v3" #@param {type:"string"}
pathToDrive = "/content/drive/MyDrive/"
pathToFolder = os.path.join(pathToDrive, customPathInDrive)
paths = {'cortex': cortex, 'main': main, 'mest_main': mest_main,
'mest_glom': mest_glom, 'inflammation': inflammation}
for mode_ in (paths if "chain" in execMode else [execMode]):
path = paths[mode_] + ('.zip' if isZipped else '')
pathToWeights = os.path.join(pathToFolder, path)
print(f"Copying {pathToWeights} to {path}")
if isZipped:
!cp -u $pathToWeights $path
!unzip -q $path
!rm -f $path
else:
!cp -ru $pathToWeights $path
```
## Initialisation
### Configuration
You will have to open the **Files tab** in the **vertical navigation bar on the left** to see the results appearing. They should be downloaded as a zip file automatically at the end, if it is not the case: save them by right-clicking on each file you want to keep (or the zip file that contains everything).
- ```mode```: The inference mode to use. Note that 'chain' will use multiple modes.
- ```displayMode```: Whether to display every step of an inference, or only AP and statistics.
- ```forceFullSizeMasks```: Whether to force usage of full-sized masks instead of mini-masks.
- ```forceLowMemoryUsage```: Force low-memory mode to use the least RAM as possible.
- ```saveResults```: Whether to save the results or not.
- ```saveDebugImages```: Whether to save debug images between post-processing methods.
- ```moveAutoCleanedImageToNextMode```: Whether to automatically move the cleaned image to the next mode's input folder.
```
mode = 'chain' #@param ["chain", "chain_inf", "cortex", "main", "mest_main", "mest_glom", "inflammation"]
displayMode = "All steps" #@param ["All steps", "Only statistics"]
forceFullSizeMasks = False #@param {type:"boolean"}
forceLowMemoryUsage = False #@param {type:"boolean"}
saveResults = True #@param {type:"boolean"}
saveDebugImages = False #@param {type:"boolean"}
moveAutoCleanedImageToNextMode = True #@param {type:"boolean"}
print(f"Inference Tool in {mode} segmentation mode.")
```
### Information
```
import InferenceTool as iT
images = iT.listAvailableImage(os.path.join('images', 'chain' if 'chain' in mode else mode))
nb = len(images)
print(f"Found {nb if nb > 0 else 'no'} image{'s' if nb > 1 else ''}{':' if nb > 0 else ''}")
print("\n".join([f" - {image}" for image in images]))
```
## Inference
```
import shutil
debugMode = False
shutil.rmtree('data/', ignore_errors=True)
if len(images) > 0:
try:
model is None
if mode != lastMode:
raise NameError()
except NameError:
lastMode = mode
configFile = 'skinet_inf.json' if mode in ['chain_inf', 'inflammation'] else 'skinet.json'
model = iT.InferenceTool(configFile, low_memory=IN_COLAB or forceLowMemoryUsage)
if "chain" not in mode:
model.load(mode=mode, forceFullSizeMasks=forceFullSizeMasks)
model.inference(images, "results/inference/", chainMode=("chain" in mode), save_results=saveResults, displayOnlyStats=(displayMode == "Only statistics"), saveDebugImages=saveDebugImages, chainModeForceFullSize=forceFullSizeMasks, verbose=0, debugMode=debugMode)
shutil.rmtree('data/', ignore_errors=True)
```
### Moving Fusion files (Colab + Local) and downloading results (Colab only)
```
import os
if len(images) > 0:
lastDir = 'inference'
remainingPath = "results/"
lastFoundDir = None
fileList = os.listdir(remainingPath)
fileList.sort()
for resultFolder in fileList:
if lastDir in resultFolder and os.path.isdir(os.path.join(remainingPath, resultFolder)):
lastFoundDir = resultFolder
if lastFoundDir is not None:
lastFoundDirPath = os.path.join(remainingPath, lastFoundDir)
if moveAutoCleanedImageToNextMode and 'chain' not in mode:
from mrcnn.Config import Config
conf = Config(configFile, mode)
if conf.has_next_mode() and conf.has_to_export_cleaned_img():
import re
for export_param in conf.get_export_param_cleaned_img():
name = ""
if export_param.get('name', None) is not None:
name = export_param['name'] + '|'
reg = re.compile(r"_cleaned_(" + name + "[0-9]{2,})\.[a-zA-Z0-9\-]+$")
for imageFolder in os.listdir(lastFoundDirPath):
imageFolderPath = os.path.join(lastFoundDirPath, imageFolder)
if os.path.isdir(imageFolderPath):
for aFile in os.listdir(imageFolderPath):
if reg.search(aFile) is not None:
cleanedImagePath = os.path.join(imageFolderPath, aFile)
if os.path.exists(cleanedImagePath):
shutil.copy2(cleanedImagePath, os.path.join("images", conf.get_next_mode(), imageFolder + "_cleaned.jpg"))
if IN_COLAB:
zipName = lastFoundDir + '.zip'
!zip -qr $zipName $lastFoundDirPath
print("Results can be downloaded on the Files tab on the left")
print("Zip file name is :", zipName)
```
This cell may run indefinitely; Google Colab has problems automatically downloading large files.
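If the download stalls, a common workaround (assuming Drive is already mounted as shown earlier; the helper name here is hypothetical) is to copy the archive into Drive instead and download it from there:

```python
import os
import shutil

def save_to_drive(zip_path, drive_root="/content/drive/MyDrive"):
    """Copy a results archive into the mounted Drive folder as a
    fallback when the browser download hangs."""
    dest = os.path.join(drive_root, os.path.basename(zip_path))
    shutil.copy2(zip_path, dest)
    return dest
```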
```
if IN_COLAB and len(images) > 0:
from google.colab import files
files.download(zipName)
```
## Icolos Docking Workflow Demo
Icolos can perform automated docking, with support for advanced features such as ensemble docking and pose rescoring.
In this notebook, we demonstrate a minimal working example of a docking workflow, using LigPrep and Glide. More complex workflow examples, including rescoring methods and ensemble docking, as well as a comprehensive list of additional settings, can be found in the documentation.
Files required to execute the workflow are provided in the accompanying IcolosData repository, available at https://github.com/MolecularAI/IcolosData.
Note that we provide an `icoloscommunity` environment which should be used for this notebook. It contains the `jupyter` dependencies in addition to the Icolos production environment requirements, allowing you to execute workflows from within the notebook.
### Step 1: Prepare input files
The following files are required to start the docking run:
* Receptor grid (normally prepared in the Maestro GUI)
* SMILES strings for the compounds to dock, in `.smi` or `.csv` format
* Icolos config file: a `JSON` file containing the run settings. Templates for the most common workflows can be found in the `examples` folder of the main Icolos repository.
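Once a configuration dictionary like the one built in the next cell is assembled, it is typically serialized to a `JSON` file and handed to the Icolos entrypoint. A minimal sketch (the `icolos -conf` invocation in the comment is an assumption about the CLI; check `icolos --help` in your environment):

```python
import json
import os
import tempfile

# Toy stand-in for the full workflow config assembled below.
conf = {"workflow": {"header": {"workflow_id": "docking_minimal"}, "steps": []}}

config_path = os.path.join(tempfile.mkdtemp(), "docking_minimal.json")
with open(config_path, "w") as f:
    json.dump(conf, f, indent=4)

# The written file would then be run with something like:
#   icolos -conf docking_minimal.json   (flag assumed, not verified here)
with open(config_path) as f:
    print(json.load(f)["workflow"]["header"]["workflow_id"])  # docking_minimal
```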
```
import os
import json
import subprocess
import pandas as pd
# set up some file paths to use the provided test data
# please amend as appropriate
icolos_path = "~/Icolos"
data_dir = "~/IcolosData"
output_dir = "../output"
config_dir = "../config/docking"
for path in [output_dir, config_dir]:
os.makedirs(path, exist_ok=True)
grid_path = os.path.expanduser(os.path.join(data_dir, "Glide/1UYD_grid_constraints.zip"))
smiles_path = os.path.expanduser(os.path.join(data_dir, "molecules/paracetamol.smi"))
conf={
"workflow": {
"header": {
"workflow_id": "docking_minimal",
"description": "demonstration docking job with LigPrep + Glide",
"environment": {
"export": [
]
},
"global_variables": {
}
},
"steps": [{
"step_id": "initialization_smile",
"type": "initialization",
"input": {
# specify compounds parsed from the .smi file
"compounds": [{
"source": smiles_path,
"source_type": "file",
"format": "SMI"
}
]
}
}, {
"step_id": "Ligprep",
"type": "ligprep",
"execution": {
"prefix_execution": "module load schrodinger/2021-2-js-aws",
"parallelization": {
"cores": 2,
"max_length_sublists": 1
},
# automatic resubmission on job failure
"failure_policy": {
"n_tries": 3
}
},
"settings": {
"arguments": {
# flags and params passed straight to LigPrep
"flags": ["-epik"],
"parameters": {
"-ph": 7.0,
"-pht": 2.0,
"-s": 10,
"-bff": 14
}
},
"additional": {
"filter_file": {
"Total_charge": "!= 0"
}
}
},
"input": {
# load initialized compounds from the previous step
"compounds": [{
"source": "initialization_smile",
"source_type": "step"
}
]
}
}, {
"step_id": "Glide",
"type": "glide",
"execution": {
"prefix_execution": "module load schrodinger/2021-2-js-aws",
"parallelization": {
"cores": 4,
"max_length_sublists": 1
},
"failure_policy": {
"n_tries": 3
}
},
"settings": {
"arguments": {
"flags": [],
"parameters": {
"-HOST": "cpu-only"
}
},
"additional": {
# glide configuration for the .in file
"configuration": {
"AMIDE_MODE": "trans",
"EXPANDED_SAMPLING": "True",
"GRIDFILE": [grid_path],
"NENHANCED_SAMPLING": "1",
"POSE_OUTTYPE": "ligandlib_sd",
"POSES_PER_LIG": "3",
"POSTDOCK_NPOSE": "25",
"POSTDOCKSTRAIN": "True",
"PRECISION": "SP",
"REWARD_INTRA_HBONDS": "True"
}
}
},
"input": {
# take embedded compounds from the previous step
"compounds": [{
"source": "Ligprep",
"source_type": "step"
}
]
},
"writeout": [
# write a sdf file with all conformers
{
"compounds": {
"category": "conformers"
},
"destination": {
"resource": os.path.join(output_dir,"docked_conformers.sdf"),
"type": "file",
"format": "SDF"
}
},
# write a csv file with the top docking score per compound
{
"compounds": {
"category": "conformers",
"selected_tags": ["docking_score"],
"aggregation": {
"mode": "best_per_compound",
"key": "docking_score"
}
},
"destination": {
"resource": os.path.join(output_dir, "docked_conformers.csv"),
"type": "file",
"format": "CSV"
}
}
]
}
]
}
}
with open(os.path.join(config_dir, "docking_conf.json"), 'w') as f:
json.dump(conf, f, indent=4)
```
The workflow can be executed by running the following command (with paths amended as necessary) in a terminal, or directly from the notebook as shown below.
```
# this run will take a few seconds to complete
docking_conf = os.path.join(config_dir, "docking_conf.json")
command = f"icolos -conf {docking_conf}"
subprocess.run(command, shell=True)
```
We will briefly inspect the results files.
```
results = pd.read_csv(os.path.join(output_dir, "docked_conformers.csv"))
results.head()
```
| github_jupyter |
# Gutenberg Chapters Dataset with CNN-Kim
CNN-Kim analysis with the Gutenberg chapters dataset. Using the following configuration:
1. Using Learned Embedding
1. Embedding size: 100
1. Using chapter length of 2,500
1. Top vocabulary count 5,000
1. Using filter sizes of 3, 4, 5, 6
1. Adam Learning Rate of 1e-4
1. L2-constraint 0.001
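The architecture these settings configure — parallel 1-D convolutions with filter widths 3, 4, 5 and 6 over the embedded chapter, followed by max-over-time pooling — can be sketched in plain NumPy (an illustration only, with a shortened sequence; the notebook's actual model is built in TensorFlow by `gb_chap_cnn_kim`):

```python
import numpy as np

rng = np.random.default_rng(0)
seq_len, embed_dim, n_filters = 50, 100, 8  # sequence shortened from 2500 for readability
filter_sizes = [3, 4, 5, 6]

# toy "embedded chapter": seq_len tokens, each a 100-d learned embedding
x = rng.normal(size=(seq_len, embed_dim))

pooled = []
for fs in filter_sizes:
    W = 0.01 * rng.normal(size=(fs, embed_dim, n_filters))  # one filter bank per width
    # valid 1-D convolution along the token axis
    conv = np.stack([np.tensordot(x[i:i + fs], W, axes=([0, 1], [0, 1]))
                     for i in range(seq_len - fs + 1)])
    pooled.append(conv.max(axis=0))  # max-over-time pooling: one value per filter

features = np.concatenate(pooled)  # concatenated feature vector fed to the classifier
print(features.shape)  # (32,) = 4 filter widths x 8 filters
```

Dropout and the L2 constraint from the configuration above would then be applied to `features` and the classifier weights, respectively.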
```
%matplotlib inline
import os
import sys
ai_lit_path = os.path.abspath(os.path.join(os.getcwd(), os.pardir, os.pardir, os.pardir))
print("Loading AI Lit system from path", ai_lit_path)
sys.path.append(ai_lit_path)
import numpy as np
import pandas as pd
import tensorflow as tf
import matplotlib.pyplot as plt  # needed for the chapter-distribution plots below
from ai_lit.analysis import analysis_util
from ai_lit.input.gutenberg_dataset import gb_input
from ai_lit.university.gutenberg import gb_chap_cnn_kim
# use the flags imported from the university and the model to set the configuration
tf.flags.FLAGS.chapter_length = 2500
tf.flags.FLAGS.vocab_count = 5000
tf.flags.FLAGS.embedding_size=100
tf.flags.FLAGS.epochs=10
dataset_wkspc = os.path.join(ai_lit_path, 'workspace', 'gb_input')
training_wkspc = os.path.join(ai_lit_path, 'workspace', 'gutenberg_chapters')
model_name = 'cnn_kim_l2_learned_embedding'
subjects = gb_input.get_subjects(dataset_wkspc)
evaluation_name = 'standard_eval'
univ = gb_chap_cnn_kim.GbChaptersCnnKimUniversity(model_name, training_wkspc, dataset_wkspc)
accuracy, f1, cm = analysis_util.train_and_evaluate(univ, model_name, evaluation_name)
evaluation_name = 'book_aggregation_eval'
univ = gb_chap_cnn_kim.GbChaptersCnnKimUniversity(model_name, training_wkspc, dataset_wkspc)
accuracy, f1, cm = analysis_util.train_and_evaluate(univ, model_name, evaluation_name, **{"aggr_books": True})
df = pd.DataFrame([[book_id, record] for book_id, record in records.items()], columns=["book_id", "record"])
df['target'] = df['record'].apply(lambda r: r[0].target)
df['prediction'] = df['record'].apply(lambda r: np.bincount([c.pred for c in r]).argmax())
def get_chapter_distribution(records, bin_count):
c_right = [0] * bin_count
c_total = [0] * bin_count
bins = np.arange(0, 1, 1/bin_count)
for r in records:
max_c = r[-1].chap_idx
c_hist = np.digitize([c.chap_idx / max_c for c in r], bins=bins, right=True)
for c, bin in zip(r, c_hist):
if c.pred == c.target:
c_right[bin-1] = c_right[bin-1] + 1
c_total[bin-1] = c_total[bin-1] + 1
return np.divide(c_right, c_total)
bar_locs = np.arange(0, 10, 1)
for i in range(0, len(subjects)):
dist = get_chapter_distribution(df[df['target'] == i]['record'], 10)
print(subjects[i])
fig, ax = plt.subplots()
ax.bar(bar_locs, dist, align="edge",
tick_label=["0%", "10%", "20%", "30%", "40%",
"50%", "60%", "70%", "80%", "90%"])
ax.set_ylabel('Accuracy by Chapter')
ax.set_title("{} Genre Distribution over Time".format(subjects[i]))
plt.xlim([0,10])
plt.show()
print(dist)
print()
```
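The binning inside `get_chapter_distribution` leans on `np.digitize`; a standalone illustration with made-up chapter positions (a 5-chapter book) shows which bin each normalized position falls into:

```python
import numpy as np

bin_count = 10
bins = np.arange(0, 1, 1 / bin_count)  # [0.0, 0.1, ..., 0.9]

# normalized positions of the chapters of a 5-chapter book
positions = np.array([1, 2, 3, 4, 5]) / 5
hist = np.digitize(positions, bins=bins, right=True)
print(hist)  # [ 2  4  6  8 10]; get_chapter_distribution then shifts these with bin - 1
```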
| github_jupyter |
```
!nvidia-smi
!pip install gdown
!pip install tensorflow-gpu
```
## Import Libraries
```
import numpy as np
import tensorflow as tf
from tensorflow import keras
import pandas as pd
import seaborn as sns
from pylab import rcParams
import matplotlib.pyplot as plt
from matplotlib import rc
from pandas.plotting import register_matplotlib_converters
%matplotlib inline
%config InlineBackend.figure_format='retina'
register_matplotlib_converters()
sns.set(style='whitegrid', palette='muted', font_scale=1.5)
rcParams['figure.figsize'] = 22, 10
RANDOM_SEED = 42
np.random.seed(RANDOM_SEED)
tf.random.set_seed(RANDOM_SEED)
```
## Load and Inspect the SENSEX Index Data
```
df = pd.read_csv('/content/BSE30.csv', parse_dates=['date'], index_col='date')
df.head()
plt.plot(df, label='close price')
plt.legend();
```
## Data Preprocessing
```
train_size = int(len(df) * 0.90)
test_size = len(df) - train_size
train, test = df.iloc[0:train_size], df.iloc[train_size:len(df)]
print(train.shape, test.shape)
from sklearn.preprocessing import StandardScaler
scaler = StandardScaler()
scaler = scaler.fit(train[['close']])
train['close'] = scaler.transform(train[['close']])
test['close'] = scaler.transform(test[['close']])
```
## Create Training and Test Splits
```
def create_dataset(X, y, time_steps=1):
Xs, ys = [], []
for i in range(len(X) - time_steps):
v = X.iloc[i:(i + time_steps)].values
Xs.append(v)
ys.append(y.iloc[i + time_steps])
return np.array(Xs), np.array(ys)
TIME_STEPS = 30
# reshape to [samples, time_steps, n_features]
X_train, y_train = create_dataset(train[['close']], train.close, TIME_STEPS)
X_test, y_test = create_dataset(test[['close']], test.close, TIME_STEPS)
print(X_train.shape)
```
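To see concretely what the windowing above produces, here is a self-contained toy run (re-declaring the same function on a 10-point series):

```python
import numpy as np
import pandas as pd

def create_dataset(X, y, time_steps=1):
    # same sliding-window logic as above
    Xs, ys = [], []
    for i in range(len(X) - time_steps):
        Xs.append(X.iloc[i:(i + time_steps)].values)
        ys.append(y.iloc[i + time_steps])
    return np.array(Xs), np.array(ys)

df_toy = pd.DataFrame({'close': np.arange(10.0)})
X, y = create_dataset(df_toy[['close']], df_toy.close, time_steps=3)
print(X.shape, y.shape)  # (7, 3, 1) (7,)
print(X[0].ravel(), y[0])  # window [0. 1. 2.] predicts the next value 3.0
```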
## Build an LSTM Autoencoder
```
model = keras.Sequential()
model.add(keras.layers.LSTM(
units=64,
input_shape=(X_train.shape[1], X_train.shape[2])
))
model.add(keras.layers.Dropout(rate=0.2))
model.add(keras.layers.RepeatVector(n=X_train.shape[1]))
model.add(keras.layers.LSTM(units=64, return_sequences=True))
model.add(keras.layers.Dropout(rate=0.2))
model.add(keras.layers.TimeDistributed(keras.layers.Dense(units=X_train.shape[2])))
model.compile(loss='mae', optimizer='adam')
```
## Train the Autoencoder
```
history = model.fit(
X_train, y_train,
epochs=10,
batch_size=128,
validation_split=0.2,
shuffle=False
)
```
## Plot Metrics and Evaluate the Model
```
plt.plot(history.history['loss'], label='train')
plt.plot(history.history['val_loss'], label='test')
plt.legend();
X_train_pred = model.predict(X_train)
train_mae_loss = np.mean(np.abs(X_train_pred - X_train), axis=1)
sns.distplot(train_mae_loss, bins=50, kde=True);
X_test_pred = model.predict(X_test)
test_mae_loss = np.mean(np.abs(X_test_pred - X_test), axis=1)
```
## Detect Anomalies in the SENSEX Index Data
```
THRESHOLD = 0.27
test_score_df = pd.DataFrame(index=test[TIME_STEPS:].index)
test_score_df['loss'] = test_mae_loss
test_score_df['threshold'] = THRESHOLD
test_score_df['anomaly'] = test_score_df.loss > test_score_df.threshold
test_score_df['close'] = test[TIME_STEPS:].close
plt.plot(test_score_df.index, test_score_df.loss, label='loss')
plt.plot(test_score_df.index, test_score_df.threshold, label='threshold')
plt.xticks(rotation=25)
plt.legend();
anomalies = test_score_df[test_score_df.anomaly == True]
anomalies
plt.plot(
test[TIME_STEPS:].index,
scaler.inverse_transform(test[TIME_STEPS:].close),
label='close price'
);
sns.scatterplot(
anomalies.index,
scaler.inverse_transform(anomalies.close),
color=sns.color_palette()[3],
s=52,
label='anomaly'
)
plt.xticks(rotation=25)
plt.legend();
import plotly.graph_objects as go
fig = go.Figure()
fig.add_trace(go.Scatter(x = test[TIME_STEPS:].index, y = scaler.inverse_transform(test[TIME_STEPS:].close),mode='lines',name='Close Price'))
fig.add_trace(go.Scatter(x = anomalies.index, y = scaler.inverse_transform(anomalies.close),mode='markers',name='Anomalies'))
fig.update_layout(showlegend = True)
fig.show()
```
| github_jupyter |
# Some references
* Perceptron basics:
> * Introduction au Machine Learning (Chloé-Agathe Azencott) - Chap. 7
* Linear algebra
> * [Linear Algebra Review and Reference - Zico Kolter](http://www.cs.cmu.edu/~zkolter/course/linalg/linalg_notes.pdf)
* Efficient NumPy
> * [intro Numpy - S. Raschka](https://sebastianraschka.com/pdf/books/dlb/appendix_f_numpy-intro.pdf)
> * [Look Ma, no for-loops](https://realpython.com/numpy-array-programming/)
* Pandas \& Matplotlib
> * [Data manipulation](https://pandas.pydata.org/pandas-docs/stable/10min.html)
> * [Plotting and figures](http://matplotlib.org/users/beginner.html)
# Warm-up
To illustrate the benefit of vectorization, we will compare the computation time of a simple method implemented naively versus with vectorization.
First, we provide code to measure the execution time of a code segment.
```
import time
tic = time.time()
# code whose execution time we want to measure
toc = time.time()
print("Measured time: ", toc - tic)
# in practice, for a more robust estimate, we would average over several iterations
```
We will compute, for A and B, two order-3 tensors, the following quantity: \\
$\sum_i \sum_j \sum_k (A_{ijk} - B_{ijk})^2$
1. Implement this computation with a triple loop in the function l2_loop
2. Implement this computation with vectorization in the function l2_vectorise
3. Compare the computation times of these functions (you may play with the tensor dimensions).
```
import timeit
import numpy as np
# dimensions of the tensors A and B
la=500
lb=1200
lc = 3
# Naive version (triple loop)
def l2_loop(image1, image2):
return sum((image1[i,j,k]-image2[i,j,k])**2 for i in range(image1.shape[0])
for j in range(image1.shape[1])
for k in range(image1.shape[2]))
# Efficient version (vectorized)
def l2_vectorise(image1, image2):
return ((image1-image2)**2).sum()
# random generation of the two tensors
image1 = np.random.rand(la,lb,lc)
image2 = np.random.rand(la,lb,lc)
tic = time.time()
l2_loop(image1,image2)
toc = time.time()
print(f'Elapsed Time [Loops] : {toc - tic}')
tic = time.time()
l2_vectorise(image1,image2)
toc = time.time()
print(f'Elapsed Time [Vectors] : {toc - tic}')
# comparison of the computation times of the two functions on the generated tensors
```
## Implementing the perceptron in Python
We will implement a Perceptron following an object-oriented approach similar to the one used in scikit-learn.
The skeleton of the Perceptron class is given below.
1. Implement the predict function, which computes
$\phi(z) = \begin{cases}1 & \text{if } z \geq 0 \\ -1 & \text{otherwise}\end{cases}$ \\
*hint*: $z$ is computed with the net_input function
2. Complete the implementation of fit (which applies the perceptron learning rule)
```
import numpy as np
class Perceptron(object):
"""Perceptron.
Parametres
------------
eta : float
Learning rate (entre 0.0 et 1.0)
n_iter : int
Nombre de passes sur l'ensemble d'apprentissage.
random_state : int
Graine pour le générateur de nombres pseudo-aléatoire pour
l'initialisation aléatoire des poinds.
Attributs
-----------
w_ : 1d-array
Poids apres l'apprentissage.
errors_ : list
Nombre de predictions erronees a chaque passe (epoch).
"""
def __init__(self, eta=0.01, n_iter=50, random_state=1):
self.eta = eta
self.n_iter = n_iter
self.random_state = random_state
def fit(self, X, y):
"""Apprentissage des poids sur les donnees d'apprentissage.
Parameters
----------
X : {array-like}, shape = [n_examples, n_features]
Exemples d'apprentissages (vecteurs), ou n_examples est le nombre
d'echantillons et n_features est le nombre de variables (features).
y : array-like, shape = [n_examples]
Labels.
Retourne
-------
self : object
"""
# random initialization of the weights w
rgen = np.random.RandomState(self.random_state)
self.w_ = rgen.normal(loc=0.0, scale=0.01, size=1 + X.shape[1])
self.errors_ = []  # stores the number of misclassifications at each iteration
for _ in range(self.n_iter):
errors = 0
for x_i,y_i in zip(X,y):
update = self.eta * (y_i - self.predict(x_i))
self.w_[0] += update
self.w_[1:] += update * x_i
errors += int(update != 0.0)
self.errors_.append(errors)
return self
def net_input(self, X):
"""Calcul de la sortie du reseau"""
# cette fonction permet de calculer le produit scalaire de w et de chaque exemple dans X
#la terminologie est un peu trompeuse ici (mais on verra pourquoi après)
return np.dot(X, self.w_[1:]) + self.w_[0]
def predict(self, X):
"""Classes predites par le reseau (avec les poids courants)"""
return np.where(self.net_input(X) >= 0.0 , 1 , -1 )
```
# Training on the Iris dataset
To test the class implemented above, we will load the Iris dataset (restricted to two classes and two features).
The code to load and display the data is provided here.
1. Given the figure displayed, can we expect the perceptron rule to converge?
2. Create a Perceptron object and apply the fit function. Verify that training has converged (by plotting the number of misclassifications at each iteration). You may play with the hyperparameters (learning rate and maximum number of iterations).
3. Code is provided to display the decision boundary of a trained classifier. Apply it to your perceptron. Does the decision boundary look correct?
*Note*: the plot_decision_regions code applies the classifier's predict function on a sufficiently fine grid. (This code also shows the benefit of using objects with similar signatures for classifiers.)
```
import os
import pandas as pd
s = 'https://archive.ics.uci.edu/ml/machine-learning-databases/iris/iris.data'
print('URL:', s)
df = pd.read_csv(s,
header=None,
encoding='utf-8')
df.tail()
%matplotlib inline
import matplotlib.pyplot as plt
import numpy as np
# select setosa and versicolor
y = df.iloc[0:100, 4].values
y = np.where(y == 'Iris-setosa', -1, 1)
# extract sepal length and petal length
X = df.iloc[0:100, [0, 2]].values
# plot data
plt.scatter(X[:50, 0], X[:50, 1],
color='red', marker='o', label='setosa')
plt.scatter(X[50:100, 0], X[50:100, 1],
color='blue', marker='x', label='versicolor')
plt.xlabel('sepal length [cm]')
plt.ylabel('petal length [cm]')
plt.legend(loc='upper left')
plt.show()
# create the Perceptron object
# fit on the iris data
# plot errors vs epochs
from matplotlib.colors import ListedColormap
def plot_decision_regions(X, y, classifier, resolution=0.02):
markers = ('s', 'x', 'o', '^', 'v')
colors = ('red', 'blue', 'lightgreen', 'gray', 'cyan')
cmap = ListedColormap(colors[:len(np.unique(y))])
x1_min, x1_max = X[:, 0].min() - 1, X[:, 0].max() + 1
x2_min, x2_max = X[:, 1].min() - 1, X[:, 1].max() + 1
xx1, xx2 = np.meshgrid(np.arange(x1_min, x1_max, resolution),
np.arange(x2_min, x2_max, resolution))
Z = classifier.predict(np.array([xx1.ravel(), xx2.ravel()]).T)
Z = Z.reshape(xx1.shape)
plt.contourf(xx1, xx2, Z, alpha=0.3, cmap=cmap)
plt.xlim(xx1.min(), xx1.max())
plt.ylim(xx2.min(), xx2.max())
for idx, cl in enumerate(np.unique(y)):
plt.scatter(x=X[y == cl, 0],
y=X[y == cl, 1],
alpha=0.8,
c=colors[idx],
marker=markers[idx],
label=cl,
edgecolor='black')
# display the learned decision boundary
ppn = Perceptron(eta = 0.01 , n_iter = 10)
ppn.fit(X,y)
plt.plot(range(1,len(ppn.errors_)+1),ppn.errors_ , marker='o' , color = 'k')
plt.xlabel('Epochs')
plt.ylabel('Errors')
plt.show()
# plot decision boundaries
plot_decision_regions(X,y,ppn)
```
# ADALINE
We will now implement an Adaline in the same way.
The skeleton of the AdalineGD class is given below.
1. Copy the predict function from the Perceptron class
2. Complete the implementation of fit (which applies the Adaline rule with gradient descent)
3. Create an AdalineGD object and apply the fit function. Verify that training has converged (by plotting the cost function at each iteration). *Pay attention* to the choice of learning rate and maximum number of iterations.
4. Apply the decision-boundary visualization code to your Adaline. Does the decision boundary look correct?
Bonus:
5. Standardize the data and apply your Adaline again.
6. What modifications are needed to use stochastic gradient descent in ADALINE?
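For bonus question 5, standardization simply centers and scales each feature; a minimal sketch on a hypothetical 3×2 feature matrix (in the notebook, `X` holds the two iris features):

```python
import numpy as np

# hypothetical feature matrix standing in for the two iris features
X = np.array([[1.0, 50.0], [2.0, 60.0], [3.0, 70.0]])

# standardization: zero mean and unit variance per feature (column)
X_std = (X - X.mean(axis=0)) / X.std(axis=0)
print(X_std.mean(axis=0), X_std.std(axis=0))  # ~[0. 0.] [1. 1.]
```

For bonus question 6, a stochastic-gradient Adaline shuffles the examples at each epoch and updates the weights after each example rather than after a full pass over the data.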
```
class AdalineGD(object):
"""ADAptive LInear NEuron classifier.
Parametres
------------
eta : float
Learning rate (entre 0.0 et 1.0)
n_iter : int
Nombre de passes sur l'ensemble d'apprentissage.
random_state : int
Graine pour le générateur de nombres pseudo-aléatoire pour
l'initialisation aléatoire des poinds.
Attributs
-----------
w_ : 1d-array
Poids apres l'apprentissage.
cost_ : list
Valeur de la fonction cout a chaque passe (epoch).
"""
def __init__(self, eta=0.01, n_iter=50, random_state=1):
self.eta = eta
self.n_iter = n_iter
self.random_state = random_state
def fit(self, X, y):
"""Apprentissage des poids sur les donnees d'apprentissage.
Parameters
----------
X : {array-like}, shape = [n_examples, n_features]
Exemples d'apprentissages (vecteurs), ou n_examples est le nombre
d'echantillons et n_features est le nombre de variables (features).
y : array-like, shape = [n_examples]
Labels.
Retourne
-------
self : object
"""
# random initialization of the weights
rgen = np.random.RandomState(self.random_state)
self.w_ = rgen.normal(loc=0.0, scale=0.01, size=1 + X.shape[1])
self.cost_ = []  # stores the value of the cost function at each epoch
self.errors_ = []  # number of misclassifications
for _ in range(self.n_iter):
cost = errors = 0
for x_i,y_i in zip(X,y):
phi = self.predict(x_i)
update = self.eta * (y_i - phi)
self.w_[0] += update
self.w_[1:] += update * x_i
cost += l2_vectorise(y_i,phi)
errors += int(update != 0.0)
self.cost_.append(1/2*cost)
self.errors_.append(errors)
return self
def net_input(self, X):
"""Calcul de la sortie du reseau"""
# cette fonction permet de calculer le produit scalaire de w et de chaque exemple dans X
#la terminologie est un peu trompeuse ici (mais on verra pourquoi après)
return np.dot(X, self.w_[1:]) + self.w_[0]
def activation(self, X):
"""Calcul d'une fonction d'activation linéaire (identité)"""
return X
def predict(self, X):
"""Classes predites par le reseau (avec les poids courants)"""
return np.where(self.activation(self.net_input(X)) >= 0.0 , 1 , -1 )
# create the AdalineGD object
# fit on the iris data
# plot errors vs epochs
# display the learned decision boundary
adn = AdalineGD(eta = 0.01 , n_iter = 10)
adn.fit(X,y)
# plot decision boundaries
plot_decision_regions(X,y,adn)
# standardize the data
# create the AdalineGD object
# fit on the iris data
# plot errors vs epochs
# display the learned decision boundary
plt.plot(range(1,len(adn.errors_)+1),adn.errors_ , marker='o' , color = 'b')
plt.xlabel('Epochs')
plt.ylabel('Errors')
plt.show()
plt.plot(range(1,len(adn.cost_)+1),adn.cost_ , marker='o' , color = 'r' , label ='Cost function')
plt.xlabel('Epochs')
plt.ylabel('Cost function')
plt.legend()
plt.show()
```
| github_jupyter |
# Automating Boilerplate for a dbt Project
Setting up a `dbt` project from scratch often involves writing a lot of boilerplate, from configuring the project to bringing in the sources and creating staging models. While there are tools to semi-automate this process, a lot of manual heavy lifting is still required. In this notebook, I explore ways to automate this flow based on a highly opinionated way of organizing staging models. I will turn this into a Python package once I am settled on the API.
## Initialize Project
We start by initializing a dbt project. We will use a postgres adapter, which will allow us to explore the project locally.
```
%%bash
dbt init dbt-foo --adapter postgres
```
## Update project name and profile name
The default configuration file `dbt_project.yml` has a dummy project name (`my_new_project`) and profile (`default`). Let us update it based on the project name.
```
%%bash
sed -i 's/my_new_project/dbt_foo/g' dbt-foo/dbt_project.yml
sed -i 's/default/foo/g' dbt-foo/dbt_project.yml
head -n -5 dbt-foo/dbt_project.yml > tmp.yml && mv tmp.yml dbt-foo/dbt_project.yml
rm -rf dbt-foo/models/example
!dbt debug --project-dir dbt-foo
```
## Identify Sources
The next step is to identify the sources to build the data models on top of. A list of sources can be identified by listing the schemas under the database connection configured in `~/.dbt/profiles.yml`.
```
%load_ext sql
%sql postgresql://corise:corise@localhost:5432/dbt
%config SqlMagic.displaylimit=5
%config SqlMagic.displaycon = False
!psql -U postgres -c 'SELECT nspname AS schema FROM pg_catalog.pg_namespace;'
```
Before we go on to the next step, let us import some useful Python packages and write some handy utility functions that will let us run `dbt` command line operations from the notebook.
```
import subprocess
import yaml
import json
from pathlib import Path
from contextlib import contextmanager
from pathlib import Path
import os
@contextmanager
def cwd(path: Path):
"""Sets the cwd within the context
Args:
path (Path): The path to the cwd
Yields:
None
"""
origin = Path().absolute()
try:
os.chdir(path)
yield
finally:
os.chdir(origin)
def dbt_run_operation(operation, **kwargs):
args_json = json.dumps(kwargs)
cmd = f"dbt run-operation {operation} --args '{args_json}' | tail -n +2"
out = subprocess.getoutput(cmd)
return(out)
def write_as_yaml(x, file=None):
x_yaml = yaml.dump(x, sort_keys=False)
if file is None:
print(x_yaml)
else:
Path(file).write_text(x_yaml)
%%writefile dbt-foo/packages.yml
packages:
- package: dbt-labs/codegen
version: 0.4.0
!dbt deps --project-dir dbt-foo
```
## Generate Source
The next step in modeling with `dbt` is to identify the sources that need to be modeled. `dbt` has a command line tool that makes it easy to query a database schema and identify the tables in it. The `dbt_generate_source` function uses this tool to generate the source configuration based on a `database` and a `schema`. The `dbt_write_source` function writes a yaml file for the source config to `models/staging/<source_name>/<source_name>.yml`. This is a highly opinionated way of organizing the staging layer, and is based on the setup recommended by [dbt Labs](https://github.com/dbt-labs/corp/blob/master/dbt_style_guide.md).
```
def dbt_generate_source(database, schema, name):
if name is None:
name = schema
source_yaml = dbt_run_operation('generate_source', database_name=database, schema_name=schema)
source_dict = yaml.safe_load(source_yaml)
return ({
"version": source_dict['version'],
"sources": [{
"name": name,
"database": database,
"schema": schema,
"tables": source_dict['sources'][0]['tables']
}]
})
def dbt_write_source(source):
source_name = source['sources'][0]['name']
source_dir = Path(f"models/staging/{source_name}")
source_dir.mkdir(parents=True, exist_ok=True)
source_file = source_dir / f"src_{source_name}.yml"
print(f"Writing source.yaml for {source_name} to {source_file}")
write_as_yaml(source, source_file)
with cwd('dbt-foo'):
source = dbt_generate_source('dbt', 'public', 'foo')
dbt_write_source(source)
```
## Generate Staging Models
The next step is to bootstrap staging models for every source table. Once again `dbt` provides a really handy command line tool to generate the models and their configuration. The `dbt_generate_staging_models` function uses this tool to generate the boilerplate SQL for the staging model for every source table. The `dbt_write_staging_models` function writes these models to `models/staging/<source_name>/stg_<source_name>_<table_name>.sql`.
```
from tqdm import tqdm
def dbt_generate_staging_models(source):
source_database = source['sources'][0]['database']
source_schema = source['sources'][0]['schema']
source_name = source['sources'][0]['name']
table_names = [table['name'] for table in source['sources'][0]['tables']]
staging_models = {"name": source_name, "models": {}}
pbar = tqdm(table_names)
for table_name in pbar:
pbar.set_description(f"Generating staging model for {table_name}")
sql = dbt_run_operation('generate_base_model', source_name = source_name, table_name = table_name)
staging_models['models'][table_name] = sql
return staging_models
def dbt_write_staging_models(staging_models):
source_name = staging_models['name']
staging_model_dir = Path(f"models/staging/{source_name}")
staging_model_dir.mkdir(parents=True, exist_ok=True)
staging_model_items = tqdm(staging_models['models'].items())
staging_model_items.set_description(f"Writing staging models to {staging_model_dir}")
for staging_model_name, staging_model_sql in staging_model_items:
staging_model_file = staging_model_dir / f"stg_{source_name}__{staging_model_name}.sql"
staging_model_file.write_text(staging_model_sql)
with cwd('dbt-foo'):
staging_models = dbt_generate_staging_models(source)
dbt_write_staging_models(staging_models)
```
It is very important to think documentation-first while building data models. Once again, `dbt` has a very useful utility to bootstrap the documentation for a single model. The `dbt_generate_staging_models_yaml` function uses this utility to loop through all staging models and returns a dictionary with the boilerplate documentation for all these models. The `dbt_write_staging_models_yaml` function then writes this to `models/staging/<source_name>/stg_<source_name>.yml`. It is important to run `dbt run` before running these two functions, since otherwise the column documentation is NOT generated.
```
!dbt run --project-dir dbt-foo
def dbt_generate_staging_models_yaml(staging_models):
source_name = staging_models['name']
staging_models_yaml_dict = []
staging_model_names = tqdm(list(staging_models['models'].keys()))
for staging_model_name in staging_model_names:
staging_model_names.set_description(f"Preparing model yaml for {staging_model_name}")
staging_model_name = f"stg_{source_name}__{staging_model_name}"
# print(f"Generating yaml for staging model {staging_model_name}")
staging_model_yaml = dbt_run_operation('generate_model_yaml', model_name = staging_model_name)
staging_model_yaml_dict = yaml.safe_load(staging_model_yaml)
staging_models_yaml_dict = staging_models_yaml_dict + staging_model_yaml_dict['models']
return {'name': source_name, 'models': staging_models_yaml_dict}
def dbt_write_staging_models_yaml(staging_models_yaml):
source_name = staging_models_yaml['name']
staging_model_yaml_file = Path(f"models/staging/{source_name}/stg_{source_name}.yml")
out = {'version': 2, 'models': staging_models_yaml['models']}
print(f"Writing model yaml to {staging_model_yaml_file}")
write_as_yaml(out, staging_model_yaml_file)
def dbt_write_staging_models_yaml_one_per_model(staging_models_yaml):
source_name = staging_models_yaml['name']
for staging_model in staging_models_yaml['models']:
model_name = staging_model['name']
staging_model_yaml_file = Path(f"models/staging/{source_name}/{model_name}.yml")
out = {'version': 2, 'models': [staging_model]}
print(f"Writing model yaml to {staging_model_yaml_file}")
write_as_yaml(out, staging_model_yaml_file)
def dbt_write_staging_models_yaml_as_docstrings(staging_models_yaml):
source_name = staging_models_yaml['name']
for staging_model in staging_models_yaml['models']:
model_name = staging_model['name']
staging_model_file = Path(f"models/staging/{source_name}/{model_name}.sql")
staging_model_sql = staging_model_file.read_text()
staging_model_yaml = yaml.dump({"columns": staging_model["columns"]}, sort_keys=False)
out = f"/*\n## Table {model_name}\n\n\n```dbt \n{ staging_model_yaml }```\n*/\n{ staging_model_sql }"
staging_model_file.write_text(out)
with cwd('dbt-foo'):
staging_models_yaml = dbt_generate_staging_models_yaml(staging_models)
dbt_write_staging_models_yaml_one_per_model(staging_models_yaml)
!dbt build --project-dir dbt-foo
%%bash
cp dbt-greenery/target/catalog.json docs
cp dbt-greenery/target/manifest.json docs
cp dbt-greenery/target/run_results.json docs
cp dbt-greenery/target/index.html docs
```
| github_jupyter |
```
import pandas as pd
import numpy as np
import seaborn as sns
import matplotlib.pyplot as plt
```
## OFI computation
OFI preprocessing and computation is carried out in `order_flow_imbalance/ofi_computation.py` by following the steps below.
1. Clean timestamps according to the date of acquisition: we consider only meaningful timestamps that fall within the day of acquisition.
2. Rescale prices by the tick size.
3. Compute the quantities $\Delta W$ and $\Delta V$ for each timestamp.
4. Discretize time and sum the individual $e_n$ over each time interval to compute the OFI; we fix a time interval of 1 min.
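Steps 3–4 can be sketched for the best level of the book as follows (a simplified illustration following Cont et al.; the production implementation lives in `ofi_computation.py` and handles all 10 levels):

```python
import numpy as np

def event_flows(pb, qb, pa, qa):
    """Per-event order-flow contributions e_n from best bid/ask price and size series."""
    e = []
    for n in range(1, len(pb)):
        # Delta W: bid-side contribution
        dW = (pb[n] >= pb[n - 1]) * qb[n] - (pb[n] <= pb[n - 1]) * qb[n - 1]
        # Delta V: ask-side contribution
        dV = (pa[n] <= pa[n - 1]) * qa[n] - (pa[n] >= pa[n - 1]) * qa[n - 1]
        e.append(dW - dV)
    return np.array(e)

# bid size grows at an unchanged best bid: positive (buying) pressure
print(event_flows([10, 10], [5, 7], [11, 11], [4, 4]))  # [2]
```

The OFI over a 1-minute bin is then simply the sum of the $e_n$ falling inside that bin.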
```
ofi = pd.read_csv('../data_cleaned/ofi_10_levels.csv')
OFI_values = ofi.drop(['mid_price_delta', 'time_bin', 'bin_label'], axis=1)
```
## Distributions
First of all, let us have a look at the time distributions of MLOFI at different levels. The plot below and further analysis suggest that the distributions of MLOFI are quite similar at different levels and that every level is characterized by the presence of outliers which are significantly distant from the mean.
The first observation can be formalized by means of a two-sample Kolmogorov-Smirnov test, while the second observation justifies the use of a strategy to remove outliers that may be present in the distributions.
```
sns.set_theme(style='white', font_scale=1.5, palette = 'magma')
fig, ax = plt.subplots(figsize=(17,6))
df_distr = OFI_values.copy()
df_distr = df_distr.drop(['OFI_9', 'OFI_8', 'OFI_7', 'OFI_6', 'OFI_4'], axis=1)
categorical_ofi = []
levels = []
for c in df_distr.columns:
categorical_ofi = np.concatenate([categorical_ofi, OFI_values[c]])
levels = np.concatenate([levels, np.repeat(c, OFI_values.shape[0])])
cat_ofi = pd.DataFrame({'OFI':categorical_ofi, 'level':levels})
sns.violinplot(data=cat_ofi, x='level',y='OFI', ax=ax)
ax.set_title('OFI distribution by level')
ax.set_xlabel('OFI level')
ax.set_ylabel('OFI value')
from scipy.stats import ks_2samp
# 0 vs 1
print('OFI 0 vs OFI 1: KS distance: {:.2f} \t p_value: {:.2f}'.format(*ks_2samp(OFI_values['OFI_0'], OFI_values['OFI_1'])))
# 0 vs 2
print('OFI 0 vs OFI 2: KS distance: {:.2f} \t p_value: {:.2f}'.format(*ks_2samp(OFI_values['OFI_0'], OFI_values['OFI_2'] )))
# 0 vs 3
print('OFI 0 vs OFI 3: KS distance: {:.2f} \t p_value: {:.2f}'.format(*ks_2samp(OFI_values['OFI_0'], OFI_values['OFI_3'])))
# 0 vs 4
print('OFI 0 vs OFI 4: KS distance: {:.2f} \t p_value: {:.2f}'.format(*ks_2samp(OFI_values['OFI_0'], OFI_values['OFI_4'])))
# 0 vs 5
print('OFI 0 vs OFI 5: KS distance: {:.2f} \t p_value: {:.2f}'.format(*ks_2samp(OFI_values['OFI_0'], OFI_values['OFI_5'])))
fig, ax = plt.subplots(figsize=(10,6))
sns.set_theme(style='white', font_scale=1.5, palette = 'magma')
lw=3
sns.histplot(data=ofi, x='OFI_0', ax=ax, cumulative=True, element = 'step', fill=False, linewidth=lw, label='OFI 0', color='darkorange')
sns.histplot(data=ofi, x='OFI_1', ax=ax, cumulative=True, element = 'step', fill=False, linewidth=lw, label='OFI 1')
ax.set_title('Cumulative distribution')
ax.set_xlabel('OFI value')
ax.set_ylabel('Cumulative frequency')
ax.legend()
```
## Outlier detection with Isolation Forest and Linear Fit
The OFI can be a good price predictor, since it has been shown (Cont et al., 2011) to stand in a linear relation with the mid-price, i.e. with the price at which a trade is most likely to occur:
$$ \Delta P_k = \beta \,\, OFI_k + \epsilon$$
where $ \Delta P_k $ is the variation in price at time $\tau_k$, $\beta$ is the price impact coefficient, $OFI_k$ is the order flow imbalance at time $\tau_k$, and $\epsilon$ is the error term.
Here we study not only the first level of the book but the first six levels, in order to verify whether such a linear relation holds across the whole book.
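For the single-regressor model above, the fitted slope can be sanity-checked against its closed form, $\beta = \mathrm{Cov}(OFI, \Delta P)/\mathrm{Var}(OFI)$. A small sketch on synthetic data (all names and the chosen $\beta$ are illustrative, not the fitted values below):

```python
import numpy as np

rng = np.random.default_rng(0)
ofi = rng.normal(size=2000)                        # synthetic OFI series
dp = 1.5 * ofi + rng.normal(scale=0.1, size=2000)  # Delta P = beta*OFI + eps, with beta = 1.5

# Closed-form OLS slope for a single regressor: Cov(x, y) / Var(x).
beta_hat = np.cov(ofi, dp)[0, 1] / np.var(ofi, ddof=1)

# np.polyfit solves the same least-squares problem.
beta_fit = np.polyfit(ofi, dp, 1)[0]
print(beta_hat, beta_fit)  # both close to 1.5
```

The fit below uses Ridge regression instead, which adds an $\ell_2$ penalty but reduces to essentially the same slope for well-conditioned data.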
```
from sklearn.ensemble import IsolationForest
import numpy as np
n_fit = 6
fig, ax = plt.subplots(nrows=2, ncols=3, figsize=(16,8))
sns.set_theme(style='white', font_scale=1.5)
j=0
k=0
a_coeff, b_coeff, r2_scores, mse_scores = [], [], [], []
for i in range(n_fit):
print('Fitting level {}'.format(i))
if i==3:
j=0
k=1
#removing outliers
trend_data = np.array([ofi['OFI_{}'.format(i)], ofi['mid_price_delta']], dtype=np.float64).T
clf = IsolationForest(n_estimators=100)
clf.fit(trend_data)
inliers = clf.predict(trend_data) == 1
trend_data = trend_data[inliers].T
# linear fit
from sklearn.linear_model import Ridge
from sklearn.metrics import r2_score, mean_squared_error
model=Ridge()
model.fit(trend_data[0].reshape(-1,1),trend_data[1])
a, b = model.coef_[0], model.intercept_
a_coeff.append(a)
b_coeff.append(b)
# r2_score: proportion of the variation in the dependent
# variable that is predictable from the independent variable
preds = model.predict(trend_data[0].reshape(-1,1))
r2_scores.append(r2_score(trend_data[1], preds))
mse_scores.append(mean_squared_error(trend_data[1], preds))
#plot
predicted=[a*x+b for x in trend_data[0]]
sns.scatterplot(x=trend_data[0], y=trend_data[1], ax=ax[k,j], \
s=60, marker='o', color ='cornflowerblue',linewidth=0, alpha=0.8, label='Data')
g=sns.lineplot(x=trend_data[0], y=predicted, ax=ax[k,j], lw=3, color='darkred', label='Fit')
g.legend(loc='center left', bbox_to_anchor=(1, 0.8))
if k!=0 and j!=0: ax[k,j].get_legend().remove()
ax[k,j].set_xlabel('')
ax[k,j].set_ylabel('')
ax[k,j].set_xlim(-1.9e7, 1.9e7)
ax[k,j].set_ylim(-3500, 3500)
ax[k,j].text(-1.5e7, 2500, 'Level {}'.format(i), weight='bold')
j+=1
#Options for the plot
fig.suptitle('OFI levels')
ax[0,0].get_shared_x_axes().join(ax[0,0], ax[1,0])
ax[0,0].set_xticklabels([])
ax[0,1].set_yticklabels('')
ax[1,1].set_yticklabels('')
ax[1,2].set_yticklabels('')
ax[0,2].set_yticklabels('')
fig.text(0, 0.5, 'Mid Price variation', rotation=90, va='center', fontsize=25)
fig.text(0.3, 0, 'Order Flow Imbalance (OFI) ', va='center', fontsize=25)
fig.subplots_adjust(hspace=.0, wspace=0.)
#output
import os
os.makedirs('../figures', exist_ok=True)
fig.savefig('../figures/OFI_levels_fit.png', bbox_inches='tight')
#results
from IPython.display import display, Math
for i in range(n_fit):
display(Math(r'Level \,\,{} \quad \quad \Delta \overline P = {:.4f}\,\, OFI_{} + {:.4f}'.format(i, a_coeff[i], i, b_coeff[i])+
'\quad R^2 = {:.2f}, \quad MSE= {:.2f}'.format(r2_scores[i], mse_scores[i])))
```
### Multi dimensional linear fit
Now that we have verified that a linear relation holds, even though the quality of the fit does not allow us to describe all the variance of the mid-price, we can use the same procedure to study the OFI in the first ten levels of the book by applying a multi-dimensional linear fit. Moreover, this strategy can also be seen as defining a new feature as a linear combination of the multi-level OFIs.
So we propose two strategies:
1. We apply the strategy proposed by K. Xu, M. D. Gould, and S. D. Howison (Multi-Level Order-Flow Imbalance in a Limit Order Book), which consists of a multi-dimensional linear fit, by means of Ridge regression, of the OFI in the first ten levels of the book: $$\Delta P_k = \alpha + \sum_m \beta_m \, OFI_m^k$$
where $\Delta P_k$ is defined as before, and $OFI_m^k$ is the OFI at the $m^{th}$ level of the book at time $\tau_k$.
2. We define a new feature as the weighted sum of the first 10 levels OFI and we optimize the r2 score of a linear regression vs the mid price evolution of such feature. Then the weights are employed to define the feature and to perform a second linear fit: $$ f = \sum_m \beta_m OFI_m $$ $$ \Delta P = \alpha+ \gamma f $$
The second strategy was employed to test whether a multiple optimization combining a gradient-based method (sklearn linear regression) with gradient-free approaches (Powell and COBYLA) could lead to better results; nevertheless, the results are statistically similar to those of the first strategy. Thus we conclude that the results do not depend on the computational strategy employed, and that we can describe around 40% of the variance of the mid-price by means of the OFI.
```
mid_price_delta = ofi['mid_price_delta']
# linear regression with sklearn
from sklearn.linear_model import Ridge
from sklearn.metrics import r2_score
model = Ridge()
model.fit(OFI_values, mid_price_delta)
betas, alpha = model.coef_, model.intercept_
r2_scores=r2_score(mid_price_delta, model.predict(OFI_values))
print('MULTIDIMENSIONAL LINEAR REGRESSION')
display(Math(r'\Delta P = \alpha+ \sum_m \beta_m OFI_m'))
display(Math(r'\alpha = {:.4f}'.format(alpha)))
display(Math(r'\beta =['+', \,\,'.join(['{:.6f}'.format(b) for b in betas])+']'))
display(Math(r'R^2 = {:.2f}'.format(r2_scores)))
def linear_combination(weights, data):
"""
args:
weights (list or np.array): list of weights
data (list or np.array): list of OFI
returns:
linear combination of data
"""
return sum([w*d for w,d in zip(weights, data)])
sns.set_theme(style='white', font_scale=1.5)
fig, ax = plt.subplots(figsize=(10,6))
new_feature = [linear_combination(betas, OFI_values.iloc[i,:]) for i in range(len(OFI_values))]
sns.scatterplot(x=new_feature, y=mid_price_delta, ax=ax, s=60, marker='o',
color ='cornflowerblue',linewidth=0, alpha=0.8, label='Data')
sns.lineplot(x=new_feature, y=alpha+new_feature, ax=ax, lw=3, color='darkred', label='Fit')
ax.set_ylabel('Mid Price variation')
ax.set_xlabel('linear combination of OFI')
ax.set_title('Multidimensional fit')
# optimization of the new feature
def loss(weights, data_ofi, mid_price_delta):
"""
args:
weights: list of weights
data_ofi: list of OFI
mid_price_delta: list of mid price delta
returns:
loss of linear combination of data
"""
if len(weights)!=len(data_ofi.columns):
raise ValueError('weights and data_ofi.columns must have the same length')
if len(data_ofi)!=len(mid_price_delta):
raise ValueError('data_ofi and mid_price_delta must have the same length')
new_feature = np.array([linear_combination(weights, data_ofi.iloc[i,:]) for i in range(len(data_ofi))])
# We optimize over the weights, once we have defined a new feature as the weighted sum of the OFIs
# objective is the r2 score of the linear fit
from sklearn.linear_model import LinearRegression
model = LinearRegression()
model.fit(new_feature.reshape(-1,1), mid_price_delta)
r2 = r2_score(mid_price_delta, model.predict(new_feature.reshape(-1,1)))
return -r2
from scipy.optimize import minimize
r = minimize(loss, x0=np.random.uniform(size=10), args=(OFI_values, mid_price_delta),
method='powell', bounds=[(0, None) for i in range(10)], options={'disp': True})
weights = r.x
new_feature = np.array([linear_combination(weights, OFI_values.iloc[i,:]) for i in range(len(OFI_values))])
from sklearn.linear_model import LinearRegression
model = LinearRegression()
model.fit(new_feature.reshape(-1,1), mid_price_delta)
r2 = r2_score(mid_price_delta, model.predict(new_feature.reshape(-1,1)))
alpha = model.intercept_
betas = weights
gamma = model.coef_
print('OPTIMIZATION COMBINED WITH REGRESSION')
display(Math(r'\Delta P = \alpha+ \gamma \sum_m \beta_m OFI_m'))
display(Math(r'\alpha = {:.4f}'.format(alpha)))
display(Math(r'\gamma = {:.5f}'.format(*gamma)))
display(Math(r'\beta =['+', \,\,'.join(['{:.6f}'.format(b) for b in betas])+']'))
display(Math(r'\beta*\gamma =['+', \,\,'.join(['{:.6f}'.format(b*gamma[0]) for b in betas])+']'))
display(Math(r'R^2 = {:.2f}'.format(r2)))
```
## PCA and correlations
Finally, since we have verified that different levels of the book exhibit the same relation with the mid-price evolution, we would expect to observe correlations among the OFIs at different levels.
To formalize this, we use PCA to study the correlation between the OFIs in the first ten levels of the book.
We then provide the correlation matrix, together with the explained variance of the principal components computed after applying PCA to the data.
We can deduce that the first four levels tend to be more correlated with one another than with higher levels, while lower correlations are observed in the rest of the book. The analysis of the explained variance ratio also shows that, in order to explain at least 80% of the variance of the data, we should consider at least four components in the eigenvalue space.
```
from sklearn.preprocessing import StandardScaler
from sklearn.decomposition import PCA
scaler = StandardScaler()
scaled = scaler.fit_transform(OFI_values)
pca = PCA(n_components=None)
pca.fit(scaled)
new_ofi = pca.transform(scaled)
sns.set_theme(style='white', font_scale=1.5, palette = 'magma')
explained_var = pca.explained_variance_ratio_
fig, ax = plt.subplots(1,2, figsize=(17,6))
sns.barplot(x=np.arange(len(explained_var)), y=explained_var, alpha=0.5, color='navy', ax=ax[1], label='Explained Variance')
ax[1].step(np.arange(len(explained_var)), np.cumsum(explained_var),
           color='darkorange', lw=4, where='mid', label='Cumulative')
plt.legend(loc='center right')
sns.heatmap(OFI_values.corr(), cmap='inferno', fmt='.1f', ax=ax[0])#annot=True
ax[1].set_xlabel('Component')
ax[1].set_ylabel('Explained variance')
```
# Option Pricing with PyTorch on GPU
Copyright Matthias Groncki, 2018
This is a port of one of my previous blog posts about using TensorFlow to price options.
After using PyTorch for another project, I was impressed by how straightforward it is, so I've decided to revisit my previous examples and use PyTorch this time.
```
import numpy as np
import torch
import datetime as dt
```
## Monte Carlo Pricing for Single Barrier Option on a GPU vs CPU
```
def monte_carlo_down_out_py(S_0, strike, time_to_expiry, implied_vol, riskfree_rate, barrier, steps, samples):
stdnorm_random_variates = np.random.randn(samples, steps)
S = S_0
K = strike
dt = time_to_expiry / stdnorm_random_variates.shape[1]
sigma = implied_vol
r = riskfree_rate
B = barrier
# See Advanced Monte Carlo methods for barrier and related exotic options by Emmanuel Gobet
B_shift = B*np.exp(0.5826*sigma*np.sqrt(dt))
S_T = S * np.cumprod(np.exp((r-sigma**2/2)*dt+sigma*np.sqrt(dt)*stdnorm_random_variates), axis=1)
non_touch = (np.min(S_T, axis=1) > B_shift)*1
call_payout = np.maximum(S_T[:,-1] - K, 0)
npv = np.mean(non_touch * call_payout)
return np.exp(-time_to_expiry*r)*npv
%%timeit
monte_carlo_down_out_py(100., 110., 2., 0.2, 0.03, 90., 1000, 100000)
def monte_carlo_down_out_torch_cuda(S_0, strike, time_to_expiry, implied_vol, riskfree_rate, barrier, steps, samples):
    stdnorm_random_variates = torch.cuda.FloatTensor(samples, steps).normal_()
S = S_0
K = strike
dt = time_to_expiry / stdnorm_random_variates.shape[1]
sigma = implied_vol
r = riskfree_rate
B = barrier
# See Advanced Monte Carlo methods for barrier and related exotic options by Emmanuel Gobet
B_shift = B*torch.exp(0.5826*sigma*torch.sqrt(dt))
S_T = S * torch.cumprod(torch.exp((r-sigma**2/2)*dt+sigma*torch.sqrt(dt)*stdnorm_random_variates), dim=1)
non_touch = torch.min(S_T, dim=1)[0] > B_shift
non_touch = non_touch.type(torch.cuda.FloatTensor)
call_payout = S_T[:,-1] - K
call_payout[call_payout<0]=0
npv = torch.mean(non_touch * call_payout)
return torch.exp(-time_to_expiry*r)*npv
%%timeit
S = torch.tensor([100.],requires_grad=True, device='cuda')
K = torch.tensor([110.],requires_grad=True, device='cuda')
T = torch.tensor([2.],requires_grad=True, device='cuda')
sigma = torch.tensor([0.2],requires_grad=True, device='cuda')
r = torch.tensor([0.03],requires_grad=True, device='cuda')
B = torch.tensor([90.],requires_grad=True, device='cuda')
monte_carlo_down_out_torch_cuda(S, K, T, sigma, r, B, 1000, 100000)
%%timeit
S = torch.tensor([100.],requires_grad=True, device='cuda')
K = torch.tensor([110.],requires_grad=True, device='cuda')
T = torch.tensor([2.],requires_grad=True, device='cuda')
sigma = torch.tensor([0.2],requires_grad=True, device='cuda')
r = torch.tensor([0.03],requires_grad=True, device='cuda')
B = torch.tensor([90.],requires_grad=True, device='cuda')
npv_torch_mc = monte_carlo_down_out_torch_cuda(S, K, T, sigma, r, B, 1000, 100000)
npv_torch_mc.backward()
```
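For machines without CUDA, the same pricer can be sketched on CPU tensors (the function name and seed below are mine, not from the original post); `backward()` then also delivers first-order Greeks such as Delta and Vega essentially for free:

```python
import torch

def monte_carlo_down_out_torch_cpu(S_0, strike, time_to_expiry, implied_vol,
                                   riskfree_rate, barrier, steps, samples):
    z = torch.randn(samples, steps)
    dt = time_to_expiry / steps
    # Barrier shift for discretely monitored barriers (Broadie-Glasserman-Kou).
    b_shift = barrier * torch.exp(0.5826 * implied_vol * torch.sqrt(dt))
    paths = S_0 * torch.cumprod(
        torch.exp((riskfree_rate - implied_vol**2 / 2) * dt
                  + implied_vol * torch.sqrt(dt) * z), dim=1)
    non_touch = (paths.min(dim=1).values > b_shift).float()
    payoff = torch.clamp(paths[:, -1] - strike, min=0.0)
    return torch.exp(-riskfree_rate * time_to_expiry) * torch.mean(non_touch * payoff)

torch.manual_seed(0)
S = torch.tensor(100., requires_grad=True)
sigma = torch.tensor(0.2, requires_grad=True)
T, K = torch.tensor(2.), torch.tensor(110.)
r, B = torch.tensor(0.03), torch.tensor(90.)
npv = monte_carlo_down_out_torch_cpu(S, K, T, sigma, r, B, 1000, 20000)
npv.backward()
print(float(npv), float(S.grad), float(sigma.grad))  # price, Delta, Vega
```

Note that the barrier indicator is non-differentiable, so its gradient contribution is zero and the Greeks are the pathwise estimates conditional on the knock-out pattern.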
# Convolutional Neural Networks (LeNet)
:label:`sec_lenet`
Through the previous sections, we have learned the building blocks required to assemble a complete convolutional neural network.
Recall that we previously applied the softmax regression model (:numref:`sec_softmax_scratch`) and the multilayer perceptron (:numref:`sec_mlp_scratch`) to images of clothing in the Fashion-MNIST dataset.
To apply softmax regression and multilayer perceptrons, we first flattened each $28\times28$ image into a fixed-length 784-dimensional vector and processed it with fully connected layers.
Now that we have mastered convolutional layers, we can retain the spatial structure of our images.
A further benefit of replacing fully connected layers with convolutional layers is a more parsimonious model that requires far fewer parameters.
In this section we introduce LeNet, one of the earliest published convolutional neural networks, which attracted wide attention for its performance on computer vision tasks.
The model was introduced by (and named for) Yann LeCun, then a researcher at AT&T Bell Labs, for the purpose of recognizing handwritten digits in images :cite:`LeCun.Bottou.Bengio.ea.1998`.
At the time, Yann LeCun published the first study to successfully train convolutional neural networks via backpropagation, the culmination of over a decade of neural network research and development.
LeNet achieved results comparable to those of support vector machines, then a dominant approach in supervised learning.
LeNet was widely used in automatic teller machines (ATMs) to help recognize the digits on processed checks.
To this day, some ATMs still run the code that Yann LeCun and his colleague Leon Bottou wrote in the 1990s!
## LeNet
At a high level, (**LeNet (LeNet-5) consists of two parts:**) (~~a convolutional encoder and a dense block~~)
* a convolutional encoder, consisting of two convolutional layers;
* a dense block, consisting of three fully connected layers.
The architecture is summarized in :numref:`img_lenet`.

:label:`img_lenet`
The basic unit in each convolutional block is a convolutional layer, a sigmoid activation function, and an average pooling layer. Note that while ReLUs and max-pooling work better, they had not yet been discovered in the 1990s. Each convolutional layer uses a $5\times 5$ kernel and a sigmoid activation function. These layers map the input to several two-dimensional feature maps, typically increasing the number of channels at the same time. The first convolutional layer has 6 output channels, while the second has 16. Each $2\times2$ pooling operation (stride 2) reduces dimensionality by a factor of 4 via spatial downsampling. The output shape of a convolution is determined by the batch size, the number of channels, the height, and the width.
To pass the output of the convolutional block to the dense block, we must flatten each example in the minibatch. In other words, we transform this four-dimensional input into the two-dimensional input expected by fully connected layers: the first dimension indexes the examples in the minibatch, and the second gives the flat vector representation of each example. LeNet's dense block has three fully connected layers, with 120, 84, and 10 outputs respectively. Because we are still performing classification, the 10-dimensional output layer corresponds to the number of possible output classes.
The LeNet code below should convince you that implementing such models with a deep learning framework is remarkably simple. We only need to instantiate a `Sequential` block and chain the required layers together.
```
import tensorflow as tf
from d2l import tensorflow as d2l
def net():
return tf.keras.models.Sequential([
tf.keras.layers.Conv2D(filters=6, kernel_size=5, activation='sigmoid',
padding='same'),
tf.keras.layers.AvgPool2D(pool_size=2, strides=2),
tf.keras.layers.Conv2D(filters=16, kernel_size=5,
activation='sigmoid'),
tf.keras.layers.AvgPool2D(pool_size=2, strides=2),
tf.keras.layers.Flatten(),
tf.keras.layers.Dense(120, activation='sigmoid'),
tf.keras.layers.Dense(84, activation='sigmoid'),
tf.keras.layers.Dense(10)])
```
We have taken one small liberty with the original model, removing the Gaussian activation in the final layer. Other than that, this network matches the original LeNet-5 architecture.
Next, we pass a single-channel (black-and-white) image of size $28 \times 28$ through LeNet. By printing the shape of the output at every layer, we can [**inspect the model**] to make sure its operations line up with what we expect from :numref:`img_lenet_vert`.

:label:`img_lenet_vert`
```
X = tf.random.uniform((1, 28, 28, 1))
for layer in net().layers:
X = layer(X)
print(layer.__class__.__name__, 'output shape: \t', X.shape)
```
Note that as we go through the convolutional block, the height and width of the representation at each layer shrink relative to the previous layer.
The first convolutional layer uses 2 pixels of padding to compensate for the reduction caused by the $5 \times 5$ kernel.
In contrast, the second convolutional layer forgoes padding, so the height and width both shrink by 4 pixels.
As we go up the stack of layers, the number of channels increases from 1 in the input to 6 after the first convolutional layer and 16 after the second.
Meanwhile, each pooling layer halves the height and width. Finally, each fully connected layer reduces dimensionality, eventually emitting an output whose dimension matches the number of classes.
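The shape bookkeeping above follows the standard output-size formula for convolutions and pooling, $\lfloor (n + 2p - k)/s \rfloor + 1$; a quick sketch tracing a $28\times28$ input through LeNet:

```python
def conv_out(n, k, p=0, s=1):
    """Output size of a conv/pool layer: floor((n + 2p - k) / s) + 1."""
    return (n + 2 * p - k) // s + 1

n = 28
n = conv_out(n, k=5, p=2)   # conv1 with 'same' padding -> 28
n = conv_out(n, k=2, s=2)   # average pool -> 14
n = conv_out(n, k=5)        # conv2, no padding -> 10
n = conv_out(n, k=2, s=2)   # average pool -> 5
flat = 16 * n * n           # 16 channels -> 400 features into the dense block
print(n, flat)
```

The 400-dimensional flattened vector is exactly what the first `Dense(120)` layer receives.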
## Training
Now that we have implemented LeNet, let us see [**how LeNet fares on the Fashion-MNIST dataset**].
```
batch_size = 256
train_iter, test_iter = d2l.load_data_fashion_mnist(batch_size=batch_size)
```
Although convolutional neural networks have fewer parameters, they can still be more expensive to compute than similarly deep multilayer perceptrons, because each parameter participates in many more multiplications.
If you have access to a GPU, you can use it to speed up training.
[**To use a GPU, we need one more small modification.**]
Unlike `train_epoch_ch3` defined in :numref:`sec_softmax_scratch`, we need to move each minibatch of data to our designated device (e.g., the GPU) before making the forward and backward passes.
The training function `train_ch6`, shown below, is similar to `train_ch3` defined in :numref:`sec_softmax_scratch`.
Since we will be implementing multi-layer networks from now on, we will rely primarily on the high-level API.
The following training function takes a model created with the high-level API as input and optimizes it accordingly.
We initialize the model parameters with the Xavier initialization introduced in :numref:`subsec_xavier`.
Just as with fully connected layers, we use the cross-entropy loss function and minibatch stochastic gradient descent.
```
class TrainCallback(tf.keras.callbacks.Callback): #@save
"""A callback to visualize the training progress"""
def __init__(self, net, train_iter, test_iter, num_epochs, device_name):
self.timer = d2l.Timer()
self.animator = d2l.Animator(
xlabel='epoch', xlim=[1, num_epochs], legend=[
'train loss', 'train acc', 'test acc'])
self.net = net
self.train_iter = train_iter
self.test_iter = test_iter
self.num_epochs = num_epochs
self.device_name = device_name
def on_epoch_begin(self, epoch, logs=None):
self.timer.start()
def on_epoch_end(self, epoch, logs):
self.timer.stop()
test_acc = self.net.evaluate(
self.test_iter, verbose=0, return_dict=True)['accuracy']
metrics = (logs['loss'], logs['accuracy'], test_acc)
self.animator.add(epoch + 1, metrics)
if epoch == self.num_epochs - 1:
batch_size = next(iter(self.train_iter))[0].shape[0]
num_examples = batch_size * tf.data.experimental.cardinality(
self.train_iter).numpy()
print(f'loss {metrics[0]:.3f}, train acc {metrics[1]:.3f}, '
f'test acc {metrics[2]:.3f}')
print(f'{num_examples / self.timer.avg():.1f} examples/sec on '
f'{str(self.device_name)}')
#@save
def train_ch6(net_fn, train_iter, test_iter, num_epochs, lr, device):
"""Train a model with a GPU (defined in Chapter 6)"""
device_name = device._device_name
strategy = tf.distribute.OneDeviceStrategy(device_name)
with strategy.scope():
optimizer = tf.keras.optimizers.SGD(learning_rate=lr)
loss = tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True)
net = net_fn()
net.compile(optimizer=optimizer, loss=loss, metrics=['accuracy'])
callback = TrainCallback(net, train_iter, test_iter, num_epochs,
device_name)
net.fit(train_iter, epochs=num_epochs, verbose=0, callbacks=[callback])
return net
```
Now let us [**train and evaluate the LeNet-5 model**].
```
lr, num_epochs = 0.9, 10
train_ch6(net, train_iter, test_iter, num_epochs, lr, d2l.try_gpu())
```
## Summary
* Convolutional neural networks (CNNs) are a class of networks that use convolutional layers.
* In a convolutional neural network, we combine convolutional layers, nonlinear activation functions, and pooling layers.
* To construct high-performing convolutional neural networks, we typically arrange the convolutional layers so as to gradually decrease the spatial resolution of the representations while increasing the number of channels.
* In traditional convolutional neural networks, the representations encoded by the convolutional blocks are processed by one or more fully connected layers before the output.
* LeNet was among the earliest published convolutional neural networks.
## Exercises
1. Replace the average pooling with max-pooling. What happens?
1. Try to build a more complex network based on LeNet to improve its accuracy.
    1. Adjust the convolution window size.
    1. Adjust the number of output channels.
    1. Adjust the activation function (e.g., ReLU).
    1. Adjust the number of convolutional layers.
    1. Adjust the number of fully connected layers.
    1. Adjust the learning rate and other training details (e.g., initialization and number of epochs).
1. Try the improved network on the original MNIST dataset.
1. Display the activations of the first and second layers of LeNet for different inputs (e.g., sweaters and coats).
[Discussions](https://discuss.d2l.ai/t/1859)
```
import numpy as np
import matplotlib.pyplot as plt
import matplotlib as mpl
import scipy.stats as stats
mpl.rcParams['figure.dpi'] = 100
mpl.rcParams['figure.figsize'] = (8, 6)
%config InlineBackend.figure_format = 'retina'
def instantaneous_slope(y, x):
slope = np.zeros(len(x))
for i in range(len(x)):
if i == 0:
slope[0] = (y[1] - y[0]) / (x[1] - x[0])
elif i == len(x) - 1:
slope[-1] = (y[-1] - y[-2]) / (x[-1] - x[-2])
else:
# slope[i] = (y[i+1] - y[i-1]) / (x[i+1] - x[i-1])
# slope[i] = (y[i] - y[i-1]) / (x[i] - x[i-1])
xp = x[i+1]
xc = x[i]
xm = x[i-1]
yp = y[i+1]
yc = y[i]
ym = y[i-1]
X = np.array([[xp ** 2, xp , 1], [xc **2, xc, 1], [xm ** 2, xm, 1]])
B = np.array([yp, yc, ym])
a = np.linalg.solve(X, B)
slope[i] = 2 * a[0] * xc + a[1]
return slope
path_to_results = '/home/ng213/2TB/pazy_code/pazy-aepw3-results/02_Torsion/'
output_figures_folder = '../figures_aiaaj/'
torsion_results = {}
torsion_results['sharpy_w_skin'] = {'file': path_to_results + '/torsion_SHARPy_w_skin.txt',
'skin': True,
'marker': 'o',
'ms': 6,
'label':'Undeformed ref. line (SHARPy)'}
torsion_results['sharpy_wo_skin'] = {'file': path_to_results + '/torsion_SHARPy_wo_skin.txt',
'skin': False,
'marker': 'o',
'ms': 6,
'label':'Undeformed ref. line (SHARPy)'}
torsion_results['technion_mrm_w_skin'] = {'file': path_to_results + '/torsion_mrm_umbeam_w_skin.txt',
'skin': True,
'marker': '^',
'ms': 6,
'label':'Curvature incl. (MRM)'}
torsion_results['technion_mrm_wo_skin'] = {'file': path_to_results + '/torsion_mrm_umbeam_wo_skin.txt',
'marker': '^',
'ms': 6,
'skin': False,
'label':'Curvature incl. (MRM)'}
# torsion_results['technion_ansys_w_skin'] = {'file': path_to_results + '/torsion_ansys_w_skin.txt',
# 'skin': True,
# 'marker': 's',
# 's': 4,
# 'label': 'MRM Ansys modes', 'linestyle':{'alpha': 0.6}}
# torsion_results['technion_ansys_wo_skin'] = {'file': path_to_results + '/torsion_ansys_wo_skin.txt',
# 'skin': False,
# 'marker': 's',
# 's': 4,
# 'label': 'MRM Ansys modes', 'linestyle':{'alpha': 0.6}}
torsion_results['technion_experimental_w_skin'] = {'file': path_to_results + '/torsion_technion_experimental_w_skin.txt',
'skin': True,
'label': 'Experimental',
'marker': 'x',
'ms': 6,
'ls':'none'
}
torsion_results['technion_experimental_wo_skin'] = {'file': path_to_results + '/torsion_technion_experimental_wo_skin.txt',
'skin': False,
'label': 'Experimental',
'marker': 'x',
'ms': 6,
'ls':'none'
}
# torsion_results['um_w_skin'] = {'file': path_to_results + '/torsion_UMNAST_w_skin.txt',
# 'skin': True,
# 'marker': 's',
# 'ms': 6,
# 'label':'UM/NAST',
# 'linestyle': {'markevery': 5}}
# torsion_results['um_wo_skin'] = {'file': path_to_results + '/torsion_UMNAST_wo_skin.txt',
# 'skin': False,
# 'marker': 's',
# 'ms': 6,
# 'label': 'UM/NAST',
# 'linestyle': {'markevery': 5}}
# torsion_results['nastran'] = {'file': path_to_results + '/torsion_UMNAST_parentFEM_wo_skin.txt',
# 'skin': False,
# 'marker': '+',
# 'ms': 6,
# 'label': 'Nastran FEM',
# 'ls':'none'}
for key, case in torsion_results.items():
case['data'] = np.loadtxt(case['file'])
```
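`instantaneous_slope` estimates the derivative at interior points by fitting a parabola through each triplet of neighbours, so on data that is exactly quadratic it should recover the analytic derivative. A quick sanity check (re-implementing only the interior-point branch as a compact helper):

```python
import numpy as np

def quadratic_slope(y, x, i):
    # Fit y = a*x^2 + b*x + c through points i-1, i, i+1; return dy/dx at x[i].
    X = np.array([[x[i+1]**2, x[i+1], 1],
                  [x[i]**2,   x[i],   1],
                  [x[i-1]**2, x[i-1], 1]])
    a, b, _ = np.linalg.solve(X, np.array([y[i+1], y[i], y[i-1]]))
    return 2 * a * x[i] + b

x = np.linspace(0.0, 2.0, 9)
y = 3 * x**2 + 1  # exact derivative: 6x
slopes = [quadratic_slope(y, x, i) for i in range(1, len(x) - 1)]
print(np.allclose(slopes, 6 * x[1:-1]))  # → True
```

The one-sided first/last points in `instantaneous_slope` fall back to forward/backward differences and are only first-order accurate.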
## Vertical Displacement
```
cm2in = 1/2.54
ar = 1.57
ar = 3
width_cm = 20
remove_offset = True
figsize = (width_cm * cm2in, width_cm / ar * cm2in)
fig, ax = plt.subplots(ncols=2, figsize=figsize)
for case in torsion_results.values():
if case['skin']:
a = ax[0]
else:
a = ax[1]
if remove_offset:
offset = case['data'][0, 1]
else:
offset = 0
a.plot(case['data'][:, 0], case['data'][:, 1] - offset, label=case['label'], marker=case['marker'], ms=4, mfc='none', ls='none',
lw=0.5, color='k', markeredgewidth=0.7,
**case.get('linestyle', {}))
if case['label'] == 'SHARPy':
slope = instantaneous_slope(case['data'][:, 1], case['data'][:, 0])
# for pt in range(7):
# a.plot(case['data'][:, 0], slope[pt] * (case['data'][:, 0] - case['data'][pt, 0]) + case['data'][pt, 1])
# a.scatter(case['data'][pt, 0], case['data'][pt, 1])
for a in ax:
a.legend(fontsize=8)
a.set_xlim(0, 3.5)
a.set_ylim(-0.3, 0.0)
a.set_xlabel('Tip Load, kg')
a.set_ylabel('Wing tip vertical displacement, m')
a.grid()
for item in ([a.title, a.xaxis.label, a.yaxis.label] +
a.get_xticklabels() + a.get_yticklabels()):
item.set_fontsize(8)
plt.tight_layout()
plt.savefig(output_figures_folder + '02a_Torsion_Displacement.pdf')
```
## Twist Angle
```
cm2in = 1/2.54
ar = 1.57
ar = 3
width_cm = 20
remove_offset = True
figsize = (width_cm * cm2in, width_cm / ar * cm2in)
fig, ax = plt.subplots(ncols=2, figsize=figsize)
for key, case in torsion_results.items():
if case['skin']:
a = ax[0]
else:
a = ax[1]
if remove_offset:
offset = case['data'][0, 2]
else:
offset = 0
a.plot(case['data'][:, 0], case['data'][:, 2] - offset, label=case['label'], marker=case['marker'], ms=4, mfc='none', ls='none',
lw=0.5, color='k', markeredgewidth=0.7,
**case.get('linestyle', {}))
for a in ax:
a.legend(fontsize=8)
a.set_xlim(0, 3.5)
a.set_ylim(-12, 0)
a.set_xlabel('Tip Load, kg')
a.set_ylabel('Twist angle, degrees')
a.grid()
a.xaxis.set_tick_params(which='major', direction='in', top='on', width=0.5)
a.xaxis.set_tick_params(which='minor', direction='in', top='on', width=0.5)
a.yaxis.set_tick_params(which='major', direction='in', right='on', width=0.5)
a.yaxis.set_tick_params(which='minor', direction='in', right='on', width=0.5)
for item in ([a.title, a.xaxis.label, a.yaxis.label] +
a.get_xticklabels() + a.get_yticklabels()):
item.set_fontsize(8)
plt.tight_layout()
plt.savefig(output_figures_folder + '02a_Torsion_Angle.pdf')
```
## Instantaneous Slope
```
cm2in = 1/2.54
ar = 1.57
ar = 3
width_cm = 20
figsize = (width_cm * cm2in, width_cm / ar * cm2in)
fig, ax = plt.subplots(ncols=2, figsize=figsize)
for case in torsion_results.values():
if case['skin']:
a = ax[0]
else:
a = ax[1]
if case['label'] == 'Experimental':
continue
slope = instantaneous_slope(case['data'][:, 1], case['data'][:, 0])
a.plot(case['data'][:, 0], 100 * slope, label=case['label'], marker=case['marker'], ms=4, mfc='none', ls='none',
lw=0.5, color='k', markeredgewidth=0.7,
**case.get('linestyle', {}))
for a in ax:
a.legend(fontsize=8)
a.set_xlim(0, 3.5)
# a.set_ylim(-0.35, 0.05)
a.set_xlabel('Tip Load, kg')
a.set_ylabel('Wing displacement gradient, cm/kg')
a.grid()
for item in ([a.title, a.xaxis.label, a.yaxis.label] +
a.get_xticklabels() + a.get_yticklabels()):
item.set_fontsize(8)
plt.tight_layout()
plt.savefig(output_figures_folder + '02a_Torsion_Displacement_slope.pdf')
cm2in = 1/2.54
ar = 1.57
ar = 3
width_cm = 20
figsize = (width_cm * cm2in, width_cm / ar * cm2in)
fig, ax = plt.subplots(ncols=2, figsize=figsize)
for key, case in torsion_results.items():
if case['skin']:
a = ax[0]
else:
a = ax[1]
if case['label'] == 'Experimental':
continue
slope = instantaneous_slope(case['data'][:, 2], case['data'][:, 0])
a.plot(case['data'][:, 0], slope, label=case['label'], marker=case['marker'], ms=4, mfc='none', ls='none',
lw=0.5, color='k', markeredgewidth=0.7,
**case.get('linestyle', {}))
for a in ax:
a.legend(fontsize=8)
a.set_xlim(0, 3.5)
a.set_xlabel('Tip Load, kg')
a.set_ylabel('Twist angle gradient, degrees/kg')
a.grid()
for item in ([a.title, a.xaxis.label, a.yaxis.label] +
a.get_xticklabels() + a.get_yticklabels()):
item.set_fontsize(8)
plt.tight_layout()
plt.savefig(output_figures_folder + '02a_Torsion_Angle_slope.pdf')
```
# Evaluation function
```
import torch
from pytorch_pretrained_bert import BertTokenizer, BertModel, BertForMaskedLM, BertConfig
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
from tqdm import tqdm
def _truncate_seq_pair(tokens_a, tokens_b, max_length):
"""Truncates a sequence pair in place to the maximum length."""
# This is a simple heuristic which will always truncate the longer sequence
# one token at a time. This makes more sense than truncating an equal percent
# of tokens from each, since if one sequence is very short then each token
# that's truncated likely contains more information than a longer sequence.
while True:
total_length = len(tokens_a) + len(tokens_b)
if total_length <= max_length:
break
if len(tokens_a) > len(tokens_b):
tokens_a.pop()
else:
tokens_b.pop()
def sentencepair2tensor(tokenizer, tokens_a, tokens_b, max_seq_length):
_truncate_seq_pair(tokens_a, tokens_b, max_seq_length - 3)
tokens = []
segment_ids = []
tokens.append("[CLS]")
segment_ids.append(0)
for token in tokens_a:
tokens.append(token)
segment_ids.append(0)
tokens.append("[SEP]")
segment_ids.append(0)
assert len(tokens_b) > 0
for token in tokens_b:
tokens.append(token)
segment_ids.append(1)
tokens.append("[SEP]")
segment_ids.append(1)
masked_index = tokens.index("[MASK]")
sep_index = tokens.index("[SEP]")
input_ids = tokenizer.convert_tokens_to_ids(tokens)
# The mask has 1 for real tokens and 0 for padding tokens. Only real
# tokens are attended to.
# Zero-pad up to the sequence length.
while len(input_ids) < max_seq_length:
input_ids.append(0)
segment_ids.append(0)
assert len(input_ids) == max_seq_length
assert len(segment_ids) == max_seq_length
tokens_tensor = torch.tensor([input_ids])
segments_tensors = torch.tensor([segment_ids])
return tokens_tensor, segments_tensors, masked_index, sep_index
df = pd.read_csv('data/generation/count_test.csv')
# Load pre-trained model tokenizer (vocabulary)
modelpath = "bert-base-uncased"
tokenizer = BertTokenizer.from_pretrained(modelpath)
# Load pre-trained model (weights)
model = BertForMaskedLM.from_pretrained(modelpath)
model.eval()
def eval_naive(model,
sentence1,
sentence2,
label,
tokenizer,
max_seq_length,
top=10):
sentence1 = tokenizer.tokenize(sentence1)
sentence2 = tokenizer.tokenize(sentence2)
target = tokenizer.convert_tokens_to_ids([label])[0]
tokens_tensor, segments_tensors, masked_index, sep_index = sentencepair2tensor(tokenizer,
sentence1,
sentence2,
max_seq_length)
predictions = model(tokens_tensor, segments_tensors)
_, indices = torch.topk(predictions[0, masked_index], top)
indices = list(indices.numpy())
try:
v = 1/ (indices.index(target) + 1)
except ValueError:
v = 0
return v
```
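The score returned by `eval_naive` is the reciprocal rank of the gold token among the top-k predictions (0 if absent), so averaging over the dataset gives the mean reciprocal rank (MRR). A minimal sketch of the metric itself (the helper name and token ids are illustrative):

```python
def reciprocal_rank(ranked_ids, target, top=10):
    """1/(position+1) of target within the top-k list, 0 if not present."""
    try:
        return 1 / (ranked_ids[:top].index(target) + 1)
    except ValueError:
        return 0.0

# Three hypothetical predictions: gold token ranked 1st, 3rd, and missing.
scores = [reciprocal_rank([7, 2, 9], 7),
          reciprocal_rank([2, 9, 7], 7),
          reciprocal_rank([2, 9, 4], 7)]
mrr = sum(scores) / len(scores)
print(scores, mrr)
```

The dataset-level numbers reported below (`np.mean(df[...])`) are exactly this average over all sentence pairs.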
## Evaluating the pre-trained model
```
values = []
for a,b,l in tqdm(zip(df.sentence1.values,df.sentence2_masked.values,df.label.values)):
v = eval_naive(model=model,
sentence1=a,
sentence2=b,
label=l,
tokenizer=tokenizer,
max_seq_length=128)
values.append(v)
df["bert_base_uncased_pre_trained"] = values
np.mean(df["bert_base_uncased_pre_trained"].values)
```
## Evaluating the model without pre-training
```
config = BertConfig(vocab_size_or_config_json_file=30522, hidden_size=768,
num_hidden_layers=12, num_attention_heads=12, intermediate_size=3072)
model = BertForMaskedLM(config)
model.eval()
values = []
for a,b,l in tqdm(zip(df.sentence1.values,df.sentence2_masked.values,df.label.values)):
v = eval_naive(model=model,
sentence1=a,
sentence2=b,
label=l,
tokenizer=tokenizer,
max_seq_length=128)
values.append(v)
df["bert_base_uncased_no_pre_trained"] = values
np.mean(df["bert_base_uncased_no_pre_trained"].values)
```
## Evaluating the model with fine-tuning
```
# Load pre-trained model tokenizer (vocabulary)
modelpath = "bert-base-uncased"
output_model_file_ = "out/count_pytorch_model.bin"
model_state_dict = torch.load(output_model_file_)
tokenizer = BertTokenizer.from_pretrained(modelpath)
model = BertForMaskedLM.from_pretrained(modelpath,
state_dict=model_state_dict)
model.eval()
values = []
for a,b,l in tqdm(zip(df.sentence1.values,df.sentence2_masked.values,df.label.values)):
v = eval_naive(model=model,
sentence1=a,
sentence2=b,
label=l,
tokenizer=tokenizer,
max_seq_length=128)
values.append(v)
df["bert_base_uncased_fine_tuned"] = values
np.mean(df["bert_base_uncased_fine_tuned"].values)
```
## Results
```
df.describe()
df.to_csv("results/count_basic.csv")
```
<table class="ee-notebook-buttons" align="left">
<td><a target="_blank" href="https://github.com/giswqs/earthengine-py-notebooks/tree/master/Image/get_image_resolution.ipynb"><img width=32px src="https://www.tensorflow.org/images/GitHub-Mark-32px.png" /> View source on GitHub</a></td>
<td><a target="_blank" href="https://nbviewer.jupyter.org/github/giswqs/earthengine-py-notebooks/blob/master/Image/get_image_resolution.ipynb"><img width=26px src="https://upload.wikimedia.org/wikipedia/commons/thumb/3/38/Jupyter_logo.svg/883px-Jupyter_logo.svg.png" />Notebook Viewer</a></td>
<td><a target="_blank" href="https://mybinder.org/v2/gh/giswqs/earthengine-py-notebooks/master?filepath=Image/get_image_resolution.ipynb"><img width=58px src="https://mybinder.org/static/images/logo_social.png" />Run in binder</a></td>
<td><a target="_blank" href="https://colab.research.google.com/github/giswqs/earthengine-py-notebooks/blob/master/Image/get_image_resolution.ipynb"><img src="https://www.tensorflow.org/images/colab_logo_32px.png" /> Run in Google Colab</a></td>
</table>
## Install Earth Engine API
Install the [Earth Engine Python API](https://developers.google.com/earth-engine/python_install) and [geehydro](https://github.com/giswqs/geehydro). The **geehydro** Python package builds on the [folium](https://github.com/python-visualization/folium) package and implements several methods for displaying Earth Engine data layers, such as `Map.addLayer()`, `Map.setCenter()`, `Map.centerObject()`, and `Map.setOptions()`.
The magic command `%%capture` can be used to hide output from a specific cell.
```
# %%capture
# !pip install earthengine-api
# !pip install geehydro
```
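The **geehydro** package works by attaching Earth Engine helper methods to the existing `folium.Map` class. The general pattern (illustrative only — not geehydro's actual implementation) looks like this:

```
# Stand-in class, illustrating the pattern only; geehydro's real code patches
# folium.Map with Earth Engine helpers such as addLayer() and setCenter().
class Map:
    pass

def set_center(self, lon, lat, zoom):
    """Record a map center and zoom level, folium-style (lat, lon) order."""
    self.center = (lat, lon)
    self.zoom = zoom
    return self

# Attach the method to the existing class after the fact.
Map.setCenter = set_center

m = Map().setCenter(-122.46, 37.77, 17)
print(m.center, m.zoom)  # (37.77, -122.46) 17
```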
Import libraries
```
import ee
import folium
import geehydro
```
Authenticate and initialize Earth Engine API. You only need to authenticate the Earth Engine API once. Uncomment the line `ee.Authenticate()`
if you are running this notebook for the first time or if you are getting an authentication error.
```
# ee.Authenticate()
ee.Initialize()
```
## Create an interactive map
This step creates an interactive map using [folium](https://github.com/python-visualization/folium). The default basemap is OpenStreetMap. Additional basemaps can be added using the `Map.setOptions()` function; the optional basemaps are `ROADMAP`, `SATELLITE`, `HYBRID`, `TERRAIN`, or `ESRI`.
```
Map = folium.Map(location=[40, -100], zoom_start=4)
Map.setOptions('HYBRID')
```
## Add Earth Engine Python script
```
naip = ee.Image('USDA/NAIP/DOQQ/m_3712213_sw_10_1_20140613')
Map.setCenter(-122.466123, 37.769833, 17)
Map.addLayer(naip, {'bands': ['N', 'R', 'G']}, 'NAIP')
naip_resolution = naip.select('N').projection().nominalScale()
print("NAIP resolution: ", naip_resolution.getInfo())
landsat = ee.Image('LANDSAT/LC08/C01/T1/LC08_044034_20140318')
landsat_resolution = landsat.select('B1').projection().nominalScale()
print("Landsat resolution: ", landsat_resolution.getInfo())
```
## Display Earth Engine data layers
```
Map.setControlVisibility(layerControl=True, fullscreenControl=True, latLngPopup=True)
Map
```
| github_jupyter |
```
import os
import datetime
import logging
os.environ['TF_CPP_MIN_LOG_LEVEL'] = '3'
import IPython
import IPython.display
import matplotlib as mpl
import matplotlib.pyplot as plt
import numpy as np
import pandas as pd
import seaborn as sns
import tensorflow as tf
import datetime as dt
from sklearn.preprocessing import MinMaxScaler
mpl.rcParams['figure.figsize'] = (8, 6)
mpl.rcParams['axes.grid'] = False
import warnings
warnings.filterwarnings("ignore")
from tensorflow.python.client import device_lib
#Some settings
strategy = tf.distribute.MirroredStrategy()
print("Num GPUs Available: ", len(tf.config.experimental.list_physical_devices('GPU')))
print(device_lib.list_local_devices())
tf.keras.backend.set_floatx('float64')
for chunk in pd.read_csv("smartmeter.csv", chunksize= 10**6):
print(chunk)
data = pd.DataFrame(chunk)
data = data.drop(['device_id', 'device_name', 'property'], axis = 1)
# Creating daytime input
def time_d(x):
k = datetime.datetime.strptime(x, "%H:%M:%S")
y = k - datetime.datetime(1900, 1, 1)
return y.total_seconds()
daytime = data['timestamp'].str.slice(start=11, stop=19)
secondsperday = daytime.map(lambda i: time_d(i))
data['timestamp'] = data['timestamp'].str.slice(stop=19)
data['timestamp'] = data['timestamp'].map(lambda i: dt.datetime.strptime(i, '%Y-%m-%d %H:%M:%S'))
parse_dates = [data['timestamp']]
# Creating Weekday input
wd_input = np.array(data['timestamp'].map(lambda i: int(i.weekday())))
# Creating sin/cos inputs
seconds_in_day = 24*60*60
data_seconds = np.array(data['timestamp'].map(lambda i: i.weekday()))
input_sin = np.array(np.sin(2*np.pi*secondsperday/seconds_in_day))
input_cos = np.array(np.cos(2*np.pi*secondsperday/seconds_in_day))
# Putting inputs together in array
df = pd.DataFrame(data = {'value':data['value'], 'input_sin':input_sin, 'input_cos':input_cos, 'input_wd': wd_input})
column_indices = {name: i for i, name in enumerate(data.columns)}
n = len(df)
train_df = pd.DataFrame(df[0:int(n*0.7)])
val_df = pd.DataFrame(df[int(n*0.7):int(n*0.9)])
test_df = pd.DataFrame(df[int(n*0.9):])
num_features = df.shape[1]
# Standardization
train_mean = train_df['value'].mean()
train_std = train_df['value'].std()
train_df['value'] = (train_df['value'] - train_mean) / train_std
val_df['value'] = (val_df['value'] - train_mean) / train_std
test_df['value'] = (test_df['value'] - train_mean) / train_std
# 1st degree differencing
train_df['value'] = train_df['value'] - train_df['value'].shift()
# Handle non-positive values in 'value' before taking the log
train_df['value'] = train_df['value'].map(lambda i: abs(i))
train_df.loc[train_df.value <= 0, 'value'] = 0.000000001
train_df['value'] = train_df['value'].map(lambda i: np.log(i))
train_df = train_df.replace(np.nan, 0.000000001)
# 1st degree differencing
val_df['value'] = val_df['value'] - val_df['value'].shift()
# Handle non-positive values in 'value' before taking the log
val_df['value'] = val_df['value'].map(lambda i: abs(i))
val_df.loc[val_df.value <= 0, 'value'] = 0.000000001
val_df['value'] = val_df['value'].map(lambda i: np.log(i))
val_df = val_df.replace(np.nan, 0.000000001)
# 1st degree differencing
test_df['value'] = test_df['value'] - test_df['value'].shift()
# Handle non-positive values in 'value' before taking the log
test_df['value'] = test_df['value'].map(lambda i: abs(i))
test_df.loc[test_df.value <= 0, 'value'] = 0.000000001
test_df['value'] = test_df['value'].map(lambda i: np.log(i))
test_df = test_df.replace(np.nan, 0.000000001)
class WindowGenerator():
def __init__(self, input_width, label_width, shift,
train_df=train_df, val_df=val_df, test_df=test_df,
label_columns=None):
# Store the raw data.
self.train_df = train_df
self.val_df = val_df
self.test_df = test_df
# Work out the label column indices.
self.label_columns = label_columns
if label_columns is not None:
self.label_columns_indices = {name: i for i, name in
enumerate(label_columns)}
self.column_indices = {name: i for i, name in
enumerate(train_df.columns)}
# Work out the window parameters.
self.input_width = input_width
self.label_width = label_width
self.shift = shift
self.total_window_size = input_width + shift
self.input_slice = slice(0, input_width)
self.input_indices = np.arange(self.total_window_size)[self.input_slice]
self.label_start = self.total_window_size - self.label_width
self.labels_slice = slice(self.label_start, None)
self.label_indices = np.arange(self.total_window_size)[self.labels_slice]
def __repr__(self):
return '\n'.join([
f'Total window size: {self.total_window_size}',
f'Input indices: {self.input_indices}',
f'Label indices: {self.label_indices}',
f'Label column name(s): {self.label_columns}'])
def split_window(self, features):
inputs = features[:, self.input_slice, :]
labels = features[:, self.labels_slice, :]
if self.label_columns is not None:
labels = tf.stack(
[labels[:, :, self.column_indices[name]] for name in self.label_columns],
axis=-1)
# Slicing doesn't preserve static shape information, so set the shapes
# manually. This way the `tf.data.Datasets` are easier to inspect.
inputs.set_shape([None, self.input_width, None])
labels.set_shape([None, self.label_width, None])
return inputs, labels
WindowGenerator.split_window = split_window
def plot(self, model=None, plot_col='value', max_subplots=3):
inputs, labels = self.example
plt.figure(figsize=(12, 8))
plot_col_index = self.column_indices[plot_col]
max_n = min(max_subplots, len(inputs))
for n in range(max_n):
plt.subplot(3, 1, n+1)
plt.ylabel(f'{plot_col} [normed]')
plt.plot(self.input_indices, inputs[n, :, plot_col_index],
label='Inputs', marker='.', zorder=-10)
if self.label_columns:
label_col_index = self.label_columns_indices.get(plot_col, None)
else:
label_col_index = plot_col_index
if label_col_index is None:
continue
plt.scatter(self.label_indices, labels[n, :, label_col_index],
edgecolors='k', label='Labels', c='#2ca02c', s=64)
if model is not None:
predictions = model(inputs)
plt.scatter(self.label_indices, predictions[n, :, label_col_index],
marker='X', edgecolors='k', label='Predictions',
c='#ff7f0e', s=64)
if n == 0:
plt.legend()
plt.xlabel('Time [h]')
WindowGenerator.plot = plot
def make_dataset(self, data):
data = np.array(data, dtype=np.float64)
ds = tf.keras.preprocessing.timeseries_dataset_from_array(
data=data,
targets=None,
sequence_length=self.total_window_size,
sequence_stride=1,
shuffle=True,
batch_size=32,)
ds = ds.map(self.split_window)
return ds
WindowGenerator.make_dataset = make_dataset
@property
def train(self):
return self.make_dataset(self.train_df)
@property
def val(self):
return self.make_dataset(self.val_df)
@property
def test(self):
return self.make_dataset(self.test_df)
@property
def example(self):
"""Get and cache an example batch of `inputs, labels` for plotting."""
result = getattr(self, '_example', None)
if result is None:
# No example batch was found, so get one from the `.train` dataset
result = next(iter(self.train))
# And cache it for next time
self._example = result
return result
WindowGenerator.train = train
WindowGenerator.val = val
WindowGenerator.test = test
WindowGenerator.example = example
class Baseline(tf.keras.Model):
def __init__(self, label_index=None):
super().__init__()
self.label_index = label_index
def call(self, inputs):
if self.label_index is None:
return inputs
result = inputs[:, :, self.label_index]
return result[:, :, tf.newaxis]
single_step_window = WindowGenerator(
input_width=1, label_width=1, shift=1,
label_columns=['value'])
baseline = Baseline(label_index=column_indices['value'])
baseline.compile(loss=tf.losses.MeanSquaredError(),
metrics=[tf.metrics.MeanAbsoluteError()])
val_performance = {}
performance = {}
val_performance['Baseline'] = baseline.evaluate(single_step_window.val)
performance['Baseline'] = baseline.evaluate(single_step_window.test, verbose=0)
wide_window = WindowGenerator(
input_width=25, label_width=25, shift=1,
label_columns=['value'])
wide_window
wide_window.plot(baseline)
MAX_EPOCHS = 20
def compile_and_fit(model, window, patience=2):
early_stopping = tf.keras.callbacks.EarlyStopping(monitor='val_loss',
patience=patience,
mode='min')
model.compile(loss=tf.losses.MeanSquaredError(),
optimizer=tf.optimizers.SGD(),
metrics=[tf.metrics.MeanAbsoluteError()])
history = model.fit(window.train, epochs=MAX_EPOCHS,
validation_data=window.val,
callbacks=[early_stopping])
return history
### LSTM ###
lstm_model = tf.keras.models.Sequential([
# Shape [batch, time, features] => [batch, time, lstm_units]
tf.keras.layers.LSTM(32, return_sequences=True),
# Shape => [batch, time, features]
tf.keras.layers.Dense(units=1)
])
wide_window = WindowGenerator(
input_width=50, label_width=50, shift=1,
label_columns=['value'])
history = compile_and_fit(lstm_model, wide_window)
IPython.display.clear_output()
val_performance['LSTM'] = lstm_model.evaluate(wide_window.val)
performance['LSTM'] = lstm_model.evaluate(wide_window.test, verbose=0)
wide_window.plot(lstm_model)
```
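The index arithmetic inside `WindowGenerator` can be checked in isolation. A minimal NumPy sketch of the same slicing (no TensorFlow involved), using the `input_width=25, label_width=1, shift=1` configuration of `wide_window`:

```
import numpy as np

input_width, label_width, shift = 25, 1, 1
total_window_size = input_width + shift          # 26 consecutive time steps
input_slice = slice(0, input_width)              # steps 0..24 are inputs
label_start = total_window_size - label_width    # step 25 is the label
labels_slice = slice(label_start, None)

window = np.arange(total_window_size)            # one window of step indices
inputs, labels = window[input_slice], window[labels_slice]
print(inputs[:3], inputs[-1], labels)            # [0 1 2] 24 [25]
```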
| github_jupyter |
**Chapter 14 – Recurrent Neural Networks**
_This notebook contains all the sample code and solutions to the exercises in chapter 14._
```
#load watermark
%load_ext watermark
%watermark -a 'Gopala KR' -u -d -v -p watermark,numpy,pandas,matplotlib,nltk,sklearn,tensorflow,theano,mxnet,chainer,seaborn,keras,tflearn,bokeh,gensim
```
# Setup
First, let's make sure this notebook works well in both python 2 and 3, import a few common modules, ensure MatplotLib plots figures inline and prepare a function to save the figures:
```
# To support both python 2 and python 3
from __future__ import division, print_function, unicode_literals
# Common imports
import numpy as np
import os
# to make this notebook's output stable across runs
def reset_graph(seed=42):
tf.reset_default_graph()
tf.set_random_seed(seed)
np.random.seed(seed)
# To plot pretty figures
%matplotlib inline
import matplotlib
import matplotlib.pyplot as plt
plt.rcParams['axes.labelsize'] = 14
plt.rcParams['xtick.labelsize'] = 12
plt.rcParams['ytick.labelsize'] = 12
# Where to save the figures
PROJECT_ROOT_DIR = "."
CHAPTER_ID = "rnn"
def save_fig(fig_id, tight_layout=True):
path = os.path.join(PROJECT_ROOT_DIR, "images", CHAPTER_ID, fig_id + ".png")
print("Saving figure", fig_id)
if tight_layout:
plt.tight_layout()
plt.savefig(path, format='png', dpi=300)
```
Then of course we will need TensorFlow:
```
import tensorflow as tf
```
# Basic RNNs
## Manual RNN
```
reset_graph()
n_inputs = 3
n_neurons = 5
X0 = tf.placeholder(tf.float32, [None, n_inputs])
X1 = tf.placeholder(tf.float32, [None, n_inputs])
Wx = tf.Variable(tf.random_normal(shape=[n_inputs, n_neurons],dtype=tf.float32))
Wy = tf.Variable(tf.random_normal(shape=[n_neurons,n_neurons],dtype=tf.float32))
b = tf.Variable(tf.zeros([1, n_neurons], dtype=tf.float32))
Y0 = tf.tanh(tf.matmul(X0, Wx) + b)
Y1 = tf.tanh(tf.matmul(Y0, Wy) + tf.matmul(X1, Wx) + b)
init = tf.global_variables_initializer()
import numpy as np
X0_batch = np.array([[0, 1, 2], [3, 4, 5], [6, 7, 8], [9, 0, 1]]) # t = 0
X1_batch = np.array([[9, 8, 7], [0, 0, 0], [6, 5, 4], [3, 2, 1]]) # t = 1
with tf.Session() as sess:
init.run()
Y0_val, Y1_val = sess.run([Y0, Y1], feed_dict={X0: X0_batch, X1: X1_batch})
print(Y0_val)
print(Y1_val)
```
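The same two-step recurrence can be written in plain NumPy, which makes the math explicit (random weights here, so the numbers differ from the TensorFlow run above):

```
import numpy as np

rng = np.random.RandomState(42)
n_inputs, n_neurons = 3, 5

Wx = rng.randn(n_inputs, n_neurons)   # input-to-hidden weights
Wy = rng.randn(n_neurons, n_neurons)  # hidden-to-hidden weights
b = np.zeros((1, n_neurons))

X0 = np.array([[0, 1, 2], [3, 4, 5], [6, 7, 8], [9, 0, 1]], dtype=float)  # t = 0
X1 = np.array([[9, 8, 7], [0, 0, 0], [6, 5, 4], [3, 2, 1]], dtype=float)  # t = 1

Y0 = np.tanh(X0 @ Wx + b)            # first step: no previous state
Y1 = np.tanh(Y0 @ Wy + X1 @ Wx + b)  # second step: previous output fed back
print(Y0.shape, Y1.shape)            # (4, 5) (4, 5)
```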
## Using `static_rnn()`
```
n_inputs = 3
n_neurons = 5
reset_graph()
X0 = tf.placeholder(tf.float32, [None, n_inputs])
X1 = tf.placeholder(tf.float32, [None, n_inputs])
basic_cell = tf.contrib.rnn.BasicRNNCell(num_units=n_neurons)
output_seqs, states = tf.contrib.rnn.static_rnn(basic_cell, [X0, X1],
dtype=tf.float32)
Y0, Y1 = output_seqs
init = tf.global_variables_initializer()
X0_batch = np.array([[0, 1, 2], [3, 4, 5], [6, 7, 8], [9, 0, 1]])
X1_batch = np.array([[9, 8, 7], [0, 0, 0], [6, 5, 4], [3, 2, 1]])
with tf.Session() as sess:
init.run()
Y0_val, Y1_val = sess.run([Y0, Y1], feed_dict={X0: X0_batch, X1: X1_batch})
Y0_val
Y1_val
from IPython.display import clear_output, Image, display, HTML
def strip_consts(graph_def, max_const_size=32):
"""Strip large constant values from graph_def."""
strip_def = tf.GraphDef()
for n0 in graph_def.node:
n = strip_def.node.add()
n.MergeFrom(n0)
if n.op == 'Const':
tensor = n.attr['value'].tensor
size = len(tensor.tensor_content)
if size > max_const_size:
tensor.tensor_content = b"<stripped %d bytes>" % size
return strip_def
def show_graph(graph_def, max_const_size=32):
"""Visualize TensorFlow graph."""
if hasattr(graph_def, 'as_graph_def'):
graph_def = graph_def.as_graph_def()
strip_def = strip_consts(graph_def, max_const_size=max_const_size)
code = """
<script>
function load() {{
document.getElementById("{id}").pbtxt = {data};
}}
</script>
<link rel="import" href="https://tensorboard.appspot.com/tf-graph-basic.build.html" onload=load()>
<div style="height:600px">
<tf-graph-basic id="{id}"></tf-graph-basic>
</div>
""".format(data=repr(str(strip_def)), id='graph'+str(np.random.rand()))
iframe = """
<iframe seamless style="width:1200px;height:620px;border:0" srcdoc="{}"></iframe>
""".format(code.replace('"', '"'))
display(HTML(iframe))
show_graph(tf.get_default_graph())
```
## Packing sequences
```
n_steps = 2
n_inputs = 3
n_neurons = 5
reset_graph()
X = tf.placeholder(tf.float32, [None, n_steps, n_inputs])
X_seqs = tf.unstack(tf.transpose(X, perm=[1, 0, 2]))
basic_cell = tf.contrib.rnn.BasicRNNCell(num_units=n_neurons)
output_seqs, states = tf.contrib.rnn.static_rnn(basic_cell, X_seqs,
dtype=tf.float32)
outputs = tf.transpose(tf.stack(output_seqs), perm=[1, 0, 2])
init = tf.global_variables_initializer()
X_batch = np.array([
# t = 0 t = 1
[[0, 1, 2], [9, 8, 7]], # instance 1
[[3, 4, 5], [0, 0, 0]], # instance 2
[[6, 7, 8], [6, 5, 4]], # instance 3
[[9, 0, 1], [3, 2, 1]], # instance 4
])
with tf.Session() as sess:
init.run()
outputs_val = outputs.eval(feed_dict={X: X_batch})
print(outputs_val)
print(np.transpose(outputs_val, axes=[1, 0, 2])[1])
```
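The transpose/unstack/stack shuffle above just converts between batch-major `[batch, steps, features]` tensors and the time-major list of per-step batches that `static_rnn()` expects. In NumPy terms:

```
import numpy as np

X_batch = np.array([
    # t = 0      t = 1
    [[0, 1, 2], [9, 8, 7]],   # instance 1
    [[3, 4, 5], [0, 0, 0]],   # instance 2
    [[6, 7, 8], [6, 5, 4]],   # instance 3
    [[9, 0, 1], [3, 2, 1]],   # instance 4
])                            # shape: (batch=4, steps=2, features=3)

time_major = np.transpose(X_batch, (1, 0, 2))            # (steps, batch, features)
steps = [time_major[t] for t in range(len(time_major))]  # like tf.unstack
print(steps[0].shape)                                    # (4, 3): all instances at t = 0
batch_major = np.transpose(np.stack(steps), (1, 0, 2))   # like tf.stack + transpose
print(np.array_equal(batch_major, X_batch))              # True
```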
## Using `dynamic_rnn()`
```
n_steps = 2
n_inputs = 3
n_neurons = 5
reset_graph()
X = tf.placeholder(tf.float32, [None, n_steps, n_inputs])
basic_cell = tf.contrib.rnn.BasicRNNCell(num_units=n_neurons)
outputs, states = tf.nn.dynamic_rnn(basic_cell, X, dtype=tf.float32)
init = tf.global_variables_initializer()
X_batch = np.array([
[[0, 1, 2], [9, 8, 7]], # instance 1
[[3, 4, 5], [0, 0, 0]], # instance 2
[[6, 7, 8], [6, 5, 4]], # instance 3
[[9, 0, 1], [3, 2, 1]], # instance 4
])
with tf.Session() as sess:
init.run()
outputs_val = outputs.eval(feed_dict={X: X_batch})
print(outputs_val)
show_graph(tf.get_default_graph())
```
## Setting the sequence lengths
```
n_steps = 2
n_inputs = 3
n_neurons = 5
reset_graph()
X = tf.placeholder(tf.float32, [None, n_steps, n_inputs])
basic_cell = tf.contrib.rnn.BasicRNNCell(num_units=n_neurons)
seq_length = tf.placeholder(tf.int32, [None])
outputs, states = tf.nn.dynamic_rnn(basic_cell, X, dtype=tf.float32,
sequence_length=seq_length)
init = tf.global_variables_initializer()
X_batch = np.array([
# step 0 step 1
[[0, 1, 2], [9, 8, 7]], # instance 1
[[3, 4, 5], [0, 0, 0]], # instance 2 (padded with zero vectors)
[[6, 7, 8], [6, 5, 4]], # instance 3
[[9, 0, 1], [3, 2, 1]], # instance 4
])
seq_length_batch = np.array([2, 1, 2, 2])
with tf.Session() as sess:
init.run()
outputs_val, states_val = sess.run(
[outputs, states], feed_dict={X: X_batch, seq_length: seq_length_batch})
print(outputs_val)
print(states_val)
```
## Training a sequence classifier
Note: the book uses `tensorflow.contrib.layers.fully_connected()` rather than `tf.layers.dense()` (which did not exist when this chapter was written). It is now preferable to use `tf.layers.dense()`, because anything in the contrib module may change or be deleted without notice. The `dense()` function is almost identical to the `fully_connected()` function. The main differences relevant to this chapter are:
* several parameters are renamed: `scope` becomes `name`, `activation_fn` becomes `activation` (and similarly the `_fn` suffix is removed from other parameters such as `normalizer_fn`), `weights_initializer` becomes `kernel_initializer`, etc.
* the default `activation` is now `None` rather than `tf.nn.relu`.
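In the model below, each 28×28 MNIST image is fed to the RNN as a sequence of 28 time steps with 28 features each, i.e. one image row per step. The reshape involved is just:

```
import numpy as np

n_steps, n_inputs = 28, 28
flat_image = np.arange(784, dtype=np.float32)     # stand-in for one flattened MNIST image
sequence = flat_image.reshape(n_steps, n_inputs)  # 28 time steps of 28 features
print(sequence.shape)                             # (28, 28)
print(np.allclose(sequence[0], flat_image[:28]))  # first "time step" = first row: True
```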
```
reset_graph()
n_steps = 28
n_inputs = 28
n_neurons = 150
n_outputs = 10
learning_rate = 0.001
X = tf.placeholder(tf.float32, [None, n_steps, n_inputs])
y = tf.placeholder(tf.int32, [None])
basic_cell = tf.contrib.rnn.BasicRNNCell(num_units=n_neurons)
outputs, states = tf.nn.dynamic_rnn(basic_cell, X, dtype=tf.float32)
logits = tf.layers.dense(states, n_outputs)
xentropy = tf.nn.sparse_softmax_cross_entropy_with_logits(labels=y,
logits=logits)
loss = tf.reduce_mean(xentropy)
optimizer = tf.train.AdamOptimizer(learning_rate=learning_rate)
training_op = optimizer.minimize(loss)
correct = tf.nn.in_top_k(logits, y, 1)
accuracy = tf.reduce_mean(tf.cast(correct, tf.float32))
init = tf.global_variables_initializer()
from tensorflow.examples.tutorials.mnist import input_data
mnist = input_data.read_data_sets("/tmp/data/")
X_test = mnist.test.images.reshape((-1, n_steps, n_inputs))
y_test = mnist.test.labels
n_epochs = 2
batch_size = 150
with tf.Session() as sess:
init.run()
for epoch in range(n_epochs):
for iteration in range(mnist.train.num_examples // batch_size):
X_batch, y_batch = mnist.train.next_batch(batch_size)
X_batch = X_batch.reshape((-1, n_steps, n_inputs))
sess.run(training_op, feed_dict={X: X_batch, y: y_batch})
acc_train = accuracy.eval(feed_dict={X: X_batch, y: y_batch})
acc_test = accuracy.eval(feed_dict={X: X_test, y: y_test})
print(epoch, "Train accuracy:", acc_train, "Test accuracy:", acc_test)
```
# Multi-layer RNN
```
reset_graph()
n_steps = 28
n_inputs = 28
n_outputs = 10
learning_rate = 0.001
X = tf.placeholder(tf.float32, [None, n_steps, n_inputs])
y = tf.placeholder(tf.int32, [None])
n_neurons = 100
n_layers = 3
layers = [tf.contrib.rnn.BasicRNNCell(num_units=n_neurons,
activation=tf.nn.relu)
for layer in range(n_layers)]
multi_layer_cell = tf.contrib.rnn.MultiRNNCell(layers)
outputs, states = tf.nn.dynamic_rnn(multi_layer_cell, X, dtype=tf.float32)
states_concat = tf.concat(axis=1, values=states)
logits = tf.layers.dense(states_concat, n_outputs)
xentropy = tf.nn.sparse_softmax_cross_entropy_with_logits(labels=y, logits=logits)
loss = tf.reduce_mean(xentropy)
optimizer = tf.train.AdamOptimizer(learning_rate=learning_rate)
training_op = optimizer.minimize(loss)
correct = tf.nn.in_top_k(logits, y, 1)
accuracy = tf.reduce_mean(tf.cast(correct, tf.float32))
init = tf.global_variables_initializer()
n_epochs = 2
batch_size = 150
with tf.Session() as sess:
init.run()
for epoch in range(n_epochs):
for iteration in range(mnist.train.num_examples // batch_size):
X_batch, y_batch = mnist.train.next_batch(batch_size)
X_batch = X_batch.reshape((-1, n_steps, n_inputs))
sess.run(training_op, feed_dict={X: X_batch, y: y_batch})
acc_train = accuracy.eval(feed_dict={X: X_batch, y: y_batch})
acc_test = accuracy.eval(feed_dict={X: X_test, y: y_test})
print(epoch, "Train accuracy:", acc_train, "Test accuracy:", acc_test)
```
# Time series
```
t_min, t_max = 0, 30
resolution = 0.1
def time_series(t):
return t * np.sin(t) / 3 + 2 * np.sin(t*5)
def next_batch(batch_size, n_steps):
t0 = np.random.rand(batch_size, 1) * (t_max - t_min - n_steps * resolution)
Ts = t0 + np.arange(0., n_steps + 1) * resolution
ys = time_series(Ts)
return ys[:, :-1].reshape(-1, n_steps, 1), ys[:, 1:].reshape(-1, n_steps, 1)
t = np.linspace(t_min, t_max, int((t_max - t_min) / resolution))
n_steps = 20
t_instance = np.linspace(12.2, 12.2 + resolution * (n_steps + 1), n_steps + 1)
plt.figure(figsize=(11,4))
plt.subplot(121)
plt.title("A time series (generated)", fontsize=14)
plt.plot(t, time_series(t), label=r"$t . \sin(t) / 3 + 2 . \sin(5t)$")
plt.plot(t_instance[:-1], time_series(t_instance[:-1]), "b-", linewidth=3, label="A training instance")
plt.legend(loc="lower left", fontsize=14)
plt.axis([0, 30, -17, 13])
plt.xlabel("Time")
plt.ylabel("Value")
plt.subplot(122)
plt.title("A training instance", fontsize=14)
plt.plot(t_instance[:-1], time_series(t_instance[:-1]), "bo", markersize=10, label="instance")
plt.plot(t_instance[1:], time_series(t_instance[1:]), "w*", markersize=10, label="target")
plt.legend(loc="upper left")
plt.xlabel("Time")
save_fig("time_series_plot")
plt.show()
X_batch, y_batch = next_batch(1, n_steps)
np.c_[X_batch[0], y_batch[0]]
```
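`time_series()` and `next_batch()` are pure NumPy, so the window/target relationship can be verified directly. A self-contained restatement (same definitions as above):

```
import numpy as np

t_min, t_max = 0, 30
resolution = 0.1

def time_series(t):
    return t * np.sin(t) / 3 + 2 * np.sin(t * 5)

def next_batch(batch_size, n_steps):
    # Random window starts; each target is the series shifted one step ahead.
    t0 = np.random.rand(batch_size, 1) * (t_max - t_min - n_steps * resolution)
    Ts = t0 + np.arange(0., n_steps + 1) * resolution
    ys = time_series(Ts)
    return ys[:, :-1].reshape(-1, n_steps, 1), ys[:, 1:].reshape(-1, n_steps, 1)

X_batch, y_batch = next_batch(batch_size=4, n_steps=20)
print(X_batch.shape, y_batch.shape)  # (4, 20, 1) (4, 20, 1)
# Targets are the inputs shifted by one step within each window:
print(np.allclose(X_batch[:, 1:, 0], y_batch[:, :-1, 0]))  # True
```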
## Using an `OutputProjectionWrapper`
Let's create the RNN. It will contain 100 recurrent neurons and we will unroll it over 20 time steps, since each training instance will be 20 inputs long. Each input will contain only one feature (the value at that time). The targets are also sequences of 20 inputs, each containing a single value:
```
reset_graph()
n_steps = 20
n_inputs = 1
n_neurons = 100
n_outputs = 1
X = tf.placeholder(tf.float32, [None, n_steps, n_inputs])
y = tf.placeholder(tf.float32, [None, n_steps, n_outputs])
cell = tf.contrib.rnn.BasicRNNCell(num_units=n_neurons, activation=tf.nn.relu)
outputs, states = tf.nn.dynamic_rnn(cell, X, dtype=tf.float32)
```
At each time step we now have an output vector of size 100. But what we actually want is a single output value at each time step. The simplest solution is to wrap the cell in an `OutputProjectionWrapper`.
```
reset_graph()
n_steps = 20
n_inputs = 1
n_neurons = 100
n_outputs = 1
X = tf.placeholder(tf.float32, [None, n_steps, n_inputs])
y = tf.placeholder(tf.float32, [None, n_steps, n_outputs])
cell = tf.contrib.rnn.OutputProjectionWrapper(
tf.contrib.rnn.BasicRNNCell(num_units=n_neurons, activation=tf.nn.relu),
output_size=n_outputs)
outputs, states = tf.nn.dynamic_rnn(cell, X, dtype=tf.float32)
learning_rate = 0.001
loss = tf.reduce_mean(tf.square(outputs - y)) # MSE
optimizer = tf.train.AdamOptimizer(learning_rate=learning_rate)
training_op = optimizer.minimize(loss)
init = tf.global_variables_initializer()
saver = tf.train.Saver()
n_iterations = 100
batch_size = 50
with tf.Session() as sess:
init.run()
for iteration in range(n_iterations):
X_batch, y_batch = next_batch(batch_size, n_steps)
sess.run(training_op, feed_dict={X: X_batch, y: y_batch})
if iteration % 100 == 0:
mse = loss.eval(feed_dict={X: X_batch, y: y_batch})
print(iteration, "\tMSE:", mse)
saver.save(sess, "./my_time_series_model") # not shown in the book
with tf.Session() as sess: # not shown in the book
saver.restore(sess, "./my_time_series_model") # not shown
X_new = time_series(np.array(t_instance[:-1].reshape(-1, n_steps, n_inputs)))
y_pred = sess.run(outputs, feed_dict={X: X_new})
y_pred
plt.title("Testing the model", fontsize=14)
plt.plot(t_instance[:-1], time_series(t_instance[:-1]), "bo", markersize=10, label="instance")
plt.plot(t_instance[1:], time_series(t_instance[1:]), "w*", markersize=10, label="target")
plt.plot(t_instance[1:], y_pred[0,:,0], "r.", markersize=10, label="prediction")
plt.legend(loc="upper left")
plt.xlabel("Time")
save_fig("time_series_pred_plot")
plt.show()
```
## Without using an `OutputProjectionWrapper`
```
reset_graph()
n_steps = 20
n_inputs = 1
n_neurons = 100
n_outputs = 1
X = tf.placeholder(tf.float32, [None, n_steps, n_inputs])
y = tf.placeholder(tf.float32, [None, n_steps, n_outputs])
cell = tf.contrib.rnn.BasicRNNCell(num_units=n_neurons, activation=tf.nn.relu)
rnn_outputs, states = tf.nn.dynamic_rnn(cell, X, dtype=tf.float32)
learning_rate = 0.001
stacked_rnn_outputs = tf.reshape(rnn_outputs, [-1, n_neurons])
stacked_outputs = tf.layers.dense(stacked_rnn_outputs, n_outputs)
outputs = tf.reshape(stacked_outputs, [-1, n_steps, n_outputs])
loss = tf.reduce_mean(tf.square(outputs - y))
optimizer = tf.train.AdamOptimizer(learning_rate=learning_rate)
training_op = optimizer.minimize(loss)
init = tf.global_variables_initializer()
saver = tf.train.Saver()
n_iterations = 100
batch_size = 50
with tf.Session() as sess:
init.run()
for iteration in range(n_iterations):
X_batch, y_batch = next_batch(batch_size, n_steps)
sess.run(training_op, feed_dict={X: X_batch, y: y_batch})
if iteration % 100 == 0:
mse = loss.eval(feed_dict={X: X_batch, y: y_batch})
print(iteration, "\tMSE:", mse)
X_new = time_series(np.array(t_instance[:-1].reshape(-1, n_steps, n_inputs)))
y_pred = sess.run(outputs, feed_dict={X: X_new})
saver.save(sess, "./my_time_series_model")
y_pred
plt.title("Testing the model", fontsize=14)
plt.plot(t_instance[:-1], time_series(t_instance[:-1]), "bo", markersize=10, label="instance")
plt.plot(t_instance[1:], time_series(t_instance[1:]), "w*", markersize=10, label="target")
plt.plot(t_instance[1:], y_pred[0,:,0], "r.", markersize=10, label="prediction")
plt.legend(loc="upper left")
plt.xlabel("Time")
plt.show()
```
## Generating a creative new sequence
```
with tf.Session() as sess: # not shown in the book
saver.restore(sess, "./my_time_series_model") # not shown
sequence = [0.] * n_steps
for iteration in range(300):
X_batch = np.array(sequence[-n_steps:]).reshape(1, n_steps, 1)
y_pred = sess.run(outputs, feed_dict={X: X_batch})
sequence.append(y_pred[0, -1, 0])
plt.figure(figsize=(8,4))
plt.plot(np.arange(len(sequence)), sequence, "b-")
plt.plot(t[:n_steps], sequence[:n_steps], "b-", linewidth=3)
plt.xlabel("Time")
plt.ylabel("Value")
plt.show()
with tf.Session() as sess:
saver.restore(sess, "./my_time_series_model")
sequence1 = [0. for i in range(n_steps)]
for iteration in range(len(t) - n_steps):
X_batch = np.array(sequence1[-n_steps:]).reshape(1, n_steps, 1)
y_pred = sess.run(outputs, feed_dict={X: X_batch})
sequence1.append(y_pred[0, -1, 0])
sequence2 = [time_series(i * resolution + t_min + (t_max-t_min/3)) for i in range(n_steps)]
for iteration in range(len(t) - n_steps):
X_batch = np.array(sequence2[-n_steps:]).reshape(1, n_steps, 1)
y_pred = sess.run(outputs, feed_dict={X: X_batch})
sequence2.append(y_pred[0, -1, 0])
plt.figure(figsize=(11,4))
plt.subplot(121)
plt.plot(t, sequence1, "b-")
plt.plot(t[:n_steps], sequence1[:n_steps], "b-", linewidth=3)
plt.xlabel("Time")
plt.ylabel("Value")
plt.subplot(122)
plt.plot(t, sequence2, "b-")
plt.plot(t[:n_steps], sequence2[:n_steps], "b-", linewidth=3)
plt.xlabel("Time")
save_fig("creative_sequence_plot")
plt.show()
```
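The generation loops above are a generic autoregressive rollout: seed a window, predict one step, append the prediction, slide the window. Stripped of TensorFlow, with a stand-in "model" that simply repeats the last value (a persistence forecast):

```
import numpy as np

n_steps = 20

def predict_next(window):
    # Stand-in for sess.run(outputs, ...): persistence "model" repeats last value.
    return window[-1]

sequence = [0.0] * n_steps
sequence[-1] = 1.0                    # seed something non-trivial
for _ in range(5):
    window = np.array(sequence[-n_steps:])
    sequence.append(float(predict_next(window)))
print(sequence[-5:])                  # [1.0, 1.0, 1.0, 1.0, 1.0]
```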
# Deep RNN
## MultiRNNCell
```
reset_graph()
n_inputs = 2
n_steps = 5
X = tf.placeholder(tf.float32, [None, n_steps, n_inputs])
n_neurons = 100
n_layers = 3
layers = [tf.contrib.rnn.BasicRNNCell(num_units=n_neurons)
for layer in range(n_layers)]
multi_layer_cell = tf.contrib.rnn.MultiRNNCell(layers)
outputs, states = tf.nn.dynamic_rnn(multi_layer_cell, X, dtype=tf.float32)
init = tf.global_variables_initializer()
X_batch = np.random.rand(2, n_steps, n_inputs)
with tf.Session() as sess:
init.run()
outputs_val, states_val = sess.run([outputs, states], feed_dict={X: X_batch})
outputs_val.shape
```
## Distributing a Deep RNN Across Multiple GPUs
Do **NOT** do this:
```
with tf.device("/gpu:0"): # BAD! This is ignored.
layer1 = tf.contrib.rnn.BasicRNNCell(num_units=n_neurons)
with tf.device("/gpu:1"): # BAD! Ignored again.
layer2 = tf.contrib.rnn.BasicRNNCell(num_units=n_neurons)
```
Instead, you need a `DeviceCellWrapper`:
```
import tensorflow as tf
class DeviceCellWrapper(tf.contrib.rnn.RNNCell):
def __init__(self, device, cell):
self._cell = cell
self._device = device
@property
def state_size(self):
return self._cell.state_size
@property
def output_size(self):
return self._cell.output_size
def __call__(self, inputs, state, scope=None):
with tf.device(self._device):
return self._cell(inputs, state, scope)
reset_graph()
n_inputs = 5
n_steps = 20
n_neurons = 100
X = tf.placeholder(tf.float32, shape=[None, n_steps, n_inputs])
devices = ["/cpu:0", "/cpu:0", "/cpu:0"] # replace with ["/gpu:0", "/gpu:1", "/gpu:2"] if you have 3 GPUs
cells = [DeviceCellWrapper(dev,tf.contrib.rnn.BasicRNNCell(num_units=n_neurons))
for dev in devices]
multi_layer_cell = tf.contrib.rnn.MultiRNNCell(cells)
outputs, states = tf.nn.dynamic_rnn(multi_layer_cell, X, dtype=tf.float32)
```
Alternatively, since TensorFlow 1.1, you can use the `tf.contrib.rnn.DeviceWrapper` class (alias `tf.nn.rnn_cell.DeviceWrapper` since TF 1.2).
```
init = tf.global_variables_initializer()
with tf.Session() as sess:
init.run()
print(sess.run(outputs, feed_dict={X: np.random.rand(2, n_steps, n_inputs)}))
```
## Dropout
```
reset_graph()
n_inputs = 1
n_neurons = 100
n_layers = 3
n_steps = 20
n_outputs = 1
X = tf.placeholder(tf.float32, [None, n_steps, n_inputs])
y = tf.placeholder(tf.float32, [None, n_steps, n_outputs])
```
Note: the `input_keep_prob` parameter can be a placeholder, making it possible to set it to any value you want during training, and to 1.0 during testing (effectively turning dropout off). This is a much more elegant solution than what was recommended in earlier versions of the book (i.e., writing your own wrapper class or having a separate model for training and testing). Thanks to Shen Cheng for bringing this to my attention.
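`DropoutWrapper` applies standard inverted dropout: activations are zeroed with probability `1 - keep_prob` at training time and the survivors are scaled by `1/keep_prob`, so nothing needs to change at test time. A NumPy sketch of the mechanism (not the TensorFlow implementation itself):

```
import numpy as np

rng = np.random.RandomState(0)

def dropout(x, keep_prob):
    """Inverted dropout: zero out units, rescale survivors by 1/keep_prob."""
    mask = rng.rand(*x.shape) < keep_prob
    return np.where(mask, x / keep_prob, 0.0)

x = np.ones((1000, 100))
y = dropout(x, keep_prob=0.5)
print(abs(y.mean() - 1.0) < 0.05)     # expectation preserved: True
print(0.45 < (y == 0).mean() < 0.55)  # about half the units dropped: True
```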
```
keep_prob = tf.placeholder_with_default(1.0, shape=())
cells = [tf.contrib.rnn.BasicRNNCell(num_units=n_neurons)
for layer in range(n_layers)]
cells_drop = [tf.contrib.rnn.DropoutWrapper(cell, input_keep_prob=keep_prob)
for cell in cells]
multi_layer_cell = tf.contrib.rnn.MultiRNNCell(cells_drop)
rnn_outputs, states = tf.nn.dynamic_rnn(multi_layer_cell, X, dtype=tf.float32)
learning_rate = 0.01
stacked_rnn_outputs = tf.reshape(rnn_outputs, [-1, n_neurons])
stacked_outputs = tf.layers.dense(stacked_rnn_outputs, n_outputs)
outputs = tf.reshape(stacked_outputs, [-1, n_steps, n_outputs])
loss = tf.reduce_mean(tf.square(outputs - y))
optimizer = tf.train.AdamOptimizer(learning_rate=learning_rate)
training_op = optimizer.minimize(loss)
init = tf.global_variables_initializer()
saver = tf.train.Saver()
n_iterations = 1500
batch_size = 50
train_keep_prob = 0.5
with tf.Session() as sess:
init.run()
for iteration in range(n_iterations):
X_batch, y_batch = next_batch(batch_size, n_steps)
_, mse = sess.run([training_op, loss],
feed_dict={X: X_batch, y: y_batch,
keep_prob: train_keep_prob})
if iteration % 100 == 0: # not shown in the book
print(iteration, "Training MSE:", mse) # not shown
saver.save(sess, "./my_dropout_time_series_model")
with tf.Session() as sess:
saver.restore(sess, "./my_dropout_time_series_model")
X_new = time_series(np.array(t_instance[:-1].reshape(-1, n_steps, n_inputs)))
y_pred = sess.run(outputs, feed_dict={X: X_new})
plt.title("Testing the model", fontsize=14)
plt.plot(t_instance[:-1], time_series(t_instance[:-1]), "bo", markersize=10, label="instance")
plt.plot(t_instance[1:], time_series(t_instance[1:]), "w*", markersize=10, label="target")
plt.plot(t_instance[1:], y_pred[0,:,0], "r.", markersize=10, label="prediction")
plt.legend(loc="upper left")
plt.xlabel("Time")
plt.show()
```
Oops, it seems that Dropout does not help at all in this particular case. :/
# LSTM
```
reset_graph()
lstm_cell = tf.contrib.rnn.BasicLSTMCell(num_units=n_neurons)
n_steps = 28
n_inputs = 28
n_neurons = 150
n_outputs = 10
n_layers = 3
learning_rate = 0.001
X = tf.placeholder(tf.float32, [None, n_steps, n_inputs])
y = tf.placeholder(tf.int32, [None])
lstm_cells = [tf.contrib.rnn.BasicLSTMCell(num_units=n_neurons)
for layer in range(n_layers)]
multi_cell = tf.contrib.rnn.MultiRNNCell(lstm_cells)
outputs, states = tf.nn.dynamic_rnn(multi_cell, X, dtype=tf.float32)
top_layer_h_state = states[-1][1]
logits = tf.layers.dense(top_layer_h_state, n_outputs, name="softmax")
xentropy = tf.nn.sparse_softmax_cross_entropy_with_logits(labels=y, logits=logits)
loss = tf.reduce_mean(xentropy, name="loss")
optimizer = tf.train.AdamOptimizer(learning_rate=learning_rate)
training_op = optimizer.minimize(loss)
correct = tf.nn.in_top_k(logits, y, 1)
accuracy = tf.reduce_mean(tf.cast(correct, tf.float32))
init = tf.global_variables_initializer()
states
top_layer_h_state
n_epochs = 2
batch_size = 150
with tf.Session() as sess:
init.run()
for epoch in range(n_epochs):
for iteration in range(mnist.train.num_examples // batch_size):
X_batch, y_batch = mnist.train.next_batch(batch_size)
X_batch = X_batch.reshape((batch_size, n_steps, n_inputs))
sess.run(training_op, feed_dict={X: X_batch, y: y_batch})
acc_train = accuracy.eval(feed_dict={X: X_batch, y: y_batch})
acc_test = accuracy.eval(feed_dict={X: X_test, y: y_test})
print("Epoch", epoch, "Train accuracy =", acc_train, "Test accuracy =", acc_test)
lstm_cell = tf.contrib.rnn.LSTMCell(num_units=n_neurons, use_peepholes=True)
gru_cell = tf.contrib.rnn.GRUCell(num_units=n_neurons)
```
# Embeddings
This section is based on TensorFlow's [Word2Vec tutorial](https://www.tensorflow.org/versions/r0.11/tutorials/word2vec/index.html).
## Fetch the data
```
from six.moves import urllib
import errno
import os
import zipfile
WORDS_PATH = "datasets/words"
WORDS_URL = 'http://mattmahoney.net/dc/text8.zip'
def mkdir_p(path):
"""Create directories, ok if they already exist.
This is for python 2 support. In python >=3.2, simply use:
>>> os.makedirs(path, exist_ok=True)
"""
try:
os.makedirs(path)
except OSError as exc:
if exc.errno == errno.EEXIST and os.path.isdir(path):
pass
else:
raise
def fetch_words_data(words_url=WORDS_URL, words_path=WORDS_PATH):
os.makedirs(words_path, exist_ok=True)
zip_path = os.path.join(words_path, "words.zip")
if not os.path.exists(zip_path):
urllib.request.urlretrieve(words_url, zip_path)
with zipfile.ZipFile(zip_path) as f:
data = f.read(f.namelist()[0])
return data.decode("ascii").split()
words = fetch_words_data()
words[:5]
```
## Build the dictionary
```
from collections import Counter
vocabulary_size = 50000
vocabulary = [("UNK", None)] + Counter(words).most_common(vocabulary_size - 1)
vocabulary = np.array([word for word, _ in vocabulary])
dictionary = {word: code for code, word in enumerate(vocabulary)}
data = np.array([dictionary.get(word, 0) for word in words])
" ".join(words[:9]), data[:9]
" ".join([vocabulary[word_index] for word_index in [5241, 3081, 12, 6, 195, 2, 3134, 46, 59]])
words[24], data[24]
```
## Generate batches
```
import random
from collections import deque
def generate_batch(batch_size, num_skips, skip_window):
global data_index
assert batch_size % num_skips == 0
assert num_skips <= 2 * skip_window
batch = np.ndarray(shape=(batch_size), dtype=np.int32)
labels = np.ndarray(shape=(batch_size, 1), dtype=np.int32)
span = 2 * skip_window + 1 # [ skip_window target skip_window ]
buffer = deque(maxlen=span)
for _ in range(span):
buffer.append(data[data_index])
data_index = (data_index + 1) % len(data)
for i in range(batch_size // num_skips):
target = skip_window # target label at the center of the buffer
targets_to_avoid = [ skip_window ]
for j in range(num_skips):
while target in targets_to_avoid:
target = random.randint(0, span - 1)
targets_to_avoid.append(target)
batch[i * num_skips + j] = buffer[skip_window]
labels[i * num_skips + j, 0] = buffer[target]
buffer.append(data[data_index])
data_index = (data_index + 1) % len(data)
return batch, labels
data_index=0
batch, labels = generate_batch(8, 2, 1)
batch, [vocabulary[word] for word in batch]
labels, [vocabulary[word] for word in labels[:, 0]]
```
## Build the model
```
batch_size = 128
embedding_size = 128 # Dimension of the embedding vector.
skip_window = 1 # How many words to consider left and right.
num_skips = 2 # How many times to reuse an input to generate a label.
# We pick a random validation set to sample nearest neighbors. Here we limit the
# validation samples to the words that have a low numeric ID, which by
# construction are also the most frequent.
valid_size = 16 # Random set of words to evaluate similarity on.
valid_window = 100 # Only pick dev samples in the head of the distribution.
valid_examples = np.random.choice(valid_window, valid_size, replace=False)
num_sampled = 64 # Number of negative examples to sample.
learning_rate = 0.01
reset_graph()
# Input data.
train_labels = tf.placeholder(tf.int32, shape=[batch_size, 1])
valid_dataset = tf.constant(valid_examples, dtype=tf.int32)
vocabulary_size = 50000
embedding_size = 150
# Look up embeddings for inputs.
init_embeds = tf.random_uniform([vocabulary_size, embedding_size], -1.0, 1.0)
embeddings = tf.Variable(init_embeds)
train_inputs = tf.placeholder(tf.int32, shape=[None])
embed = tf.nn.embedding_lookup(embeddings, train_inputs)
# Construct the variables for the NCE loss
nce_weights = tf.Variable(
tf.truncated_normal([vocabulary_size, embedding_size],
stddev=1.0 / np.sqrt(embedding_size)))
nce_biases = tf.Variable(tf.zeros([vocabulary_size]))
# Compute the average NCE loss for the batch.
# tf.nce_loss automatically draws a new sample of the negative labels each
# time we evaluate the loss.
loss = tf.reduce_mean(
tf.nn.nce_loss(nce_weights, nce_biases, train_labels, embed,
num_sampled, vocabulary_size))
# Construct the Adam optimizer
optimizer = tf.train.AdamOptimizer(learning_rate)
training_op = optimizer.minimize(loss)
# Compute the cosine similarity between minibatch examples and all embeddings.
norm = tf.sqrt(tf.reduce_sum(tf.square(embeddings), axis=1, keep_dims=True))
normalized_embeddings = embeddings / norm
valid_embeddings = tf.nn.embedding_lookup(normalized_embeddings, valid_dataset)
similarity = tf.matmul(valid_embeddings, normalized_embeddings, transpose_b=True)
# Add variable initializer.
init = tf.global_variables_initializer()
```
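Before training, it may help to see that `tf.nn.embedding_lookup` is simply row indexing into the embedding matrix. A minimal NumPy sketch with toy sizes (not the graph above):

```python
import numpy as np

rng = np.random.RandomState(42)
embeddings = rng.uniform(-1.0, 1.0, size=(50, 8))  # toy vocabulary of 50 words, 8-dim vectors
train_inputs = np.array([3, 0, 3, 7])              # a mini-batch of word ids

# tf.nn.embedding_lookup(embeddings, train_inputs) corresponds to row indexing:
embed = embeddings[train_inputs]

assert embed.shape == (4, 8)
assert np.array_equal(embed[0], embed[2])  # same word id -> same embedding vector
```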
## Train the model
```
num_steps = 10001
with tf.Session() as session:
init.run()
average_loss = 0
for step in range(num_steps):
print("\rIteration: {}".format(step), end="\t")
batch_inputs, batch_labels = generate_batch(batch_size, num_skips, skip_window)
feed_dict = {train_inputs : batch_inputs, train_labels : batch_labels}
# We perform one update step by evaluating the training op (including it
# in the list of returned values for session.run()
_, loss_val = session.run([training_op, loss], feed_dict=feed_dict)
average_loss += loss_val
if step % 2000 == 0:
if step > 0:
average_loss /= 2000
# The average loss is an estimate of the loss over the last 2000 batches.
print("Average loss at step ", step, ": ", average_loss)
average_loss = 0
# Note that this is expensive (~20% slowdown if computed every 500 steps)
if step % 10000 == 0:
sim = similarity.eval()
for i in range(valid_size):
valid_word = vocabulary[valid_examples[i]]
top_k = 8 # number of nearest neighbors
nearest = (-sim[i, :]).argsort()[1:top_k+1]
log_str = "Nearest to %s:" % valid_word
for k in range(top_k):
close_word = vocabulary[nearest[k]]
log_str = "%s %s," % (log_str, close_word)
print(log_str)
final_embeddings = normalized_embeddings.eval()
```
Let's save the final embeddings (of course you can use a TensorFlow `Saver` if you prefer):
```
np.save("./my_final_embeddings.npy", final_embeddings)
```
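With the (already L2-normalized) embeddings saved, nearest neighbors can be queried offline with plain NumPy. A small sketch, using a toy matrix as a stand-in for `final_embeddings`:

```python
import numpy as np

def nearest_words(normalized_embeddings, word_index, top_k=8):
    """Return the indices of the top_k nearest words by cosine similarity.
    Rows are assumed L2-normalized, so a dot product equals the cosine."""
    sim = normalized_embeddings @ normalized_embeddings[word_index]
    return (-sim).argsort()[1:top_k + 1]  # skip the word itself at rank 0

# Toy stand-in for final_embeddings: 10 words in 4 dimensions.
rng = np.random.RandomState(0)
toy = rng.randn(10, 4)
toy /= np.linalg.norm(toy, axis=1, keepdims=True)

neighbors = nearest_words(toy, word_index=3, top_k=3)
assert len(neighbors) == 3 and 3 not in neighbors
```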
## Plot the embeddings
```
def plot_with_labels(low_dim_embs, labels):
assert low_dim_embs.shape[0] >= len(labels), "More labels than embeddings"
plt.figure(figsize=(18, 18)) #in inches
for i, label in enumerate(labels):
x, y = low_dim_embs[i,:]
plt.scatter(x, y)
plt.annotate(label,
xy=(x, y),
xytext=(5, 2),
textcoords='offset points',
ha='right',
va='bottom')
from sklearn.manifold import TSNE
tsne = TSNE(perplexity=30, n_components=2, init='pca', n_iter=5000)
plot_only = 500
low_dim_embs = tsne.fit_transform(final_embeddings[:plot_only,:])
labels = [vocabulary[i] for i in range(plot_only)]
plot_with_labels(low_dim_embs, labels)
```
# Machine Translation
The `basic_rnn_seq2seq()` function creates a simple Encoder/Decoder model: it first runs an RNN to encode `encoder_inputs` into a state vector, then runs a decoder initialized with the last encoder state on `decoder_inputs`. Encoder and decoder use the same RNN cell type but they don't share parameters.
```
import tensorflow as tf
n_steps = 50
n_neurons = 200
n_layers = 3
num_encoder_symbols = 20000
num_decoder_symbols = 20000
embedding_size = 150
learning_rate = 0.01
X = tf.placeholder(tf.int32, [None, n_steps]) # English sentences
Y = tf.placeholder(tf.int32, [None, n_steps]) # French translations
W = tf.placeholder(tf.float32, [None, n_steps - 1, 1])
Y_input = Y[:, :-1]
Y_target = Y[:, 1:]
encoder_inputs = tf.unstack(tf.transpose(X)) # list of 1D tensors
decoder_inputs = tf.unstack(tf.transpose(Y_input)) # list of 1D tensors
lstm_cells = [tf.contrib.rnn.BasicLSTMCell(num_units=n_neurons)
for layer in range(n_layers)]
cell = tf.contrib.rnn.MultiRNNCell(lstm_cells)
output_seqs, states = tf.contrib.legacy_seq2seq.embedding_rnn_seq2seq(
encoder_inputs,
decoder_inputs,
cell,
num_encoder_symbols,
num_decoder_symbols,
embedding_size)
logits = tf.transpose(tf.unstack(output_seqs), perm=[1, 0, 2])
logits_flat = tf.reshape(logits, [-1, num_decoder_symbols])
Y_target_flat = tf.reshape(Y_target, [-1])
W_flat = tf.reshape(W, [-1])
xentropy = W_flat * tf.nn.sparse_softmax_cross_entropy_with_logits(labels=Y_target_flat, logits=logits_flat)
loss = tf.reduce_mean(xentropy)
optimizer = tf.train.AdamOptimizer(learning_rate=learning_rate)
training_op = optimizer.minimize(loss)
init = tf.global_variables_initializer()
```
# Exercise solutions
## 1. to 6.
See Appendix A.
## 7. Embedded Reber Grammars
First we need to build a function that generates strings based on a grammar. The grammar will be represented as a list of possible transitions for each state. A transition specifies the string to output (or a grammar to generate it) and the next state.
```
from random import choice, seed
# to make this notebook's output stable across runs
seed(42)
np.random.seed(42)
default_reber_grammar = [
[("B", 1)], # (state 0) =B=>(state 1)
[("T", 2), ("P", 3)], # (state 1) =T=>(state 2) or =P=>(state 3)
[("S", 2), ("X", 4)], # (state 2) =S=>(state 2) or =X=>(state 4)
[("T", 3), ("V", 5)], # and so on...
[("X", 3), ("S", 6)],
[("P", 4), ("V", 6)],
[("E", None)]] # (state 6) =E=>(terminal state)
embedded_reber_grammar = [
[("B", 1)],
[("T", 2), ("P", 3)],
[(default_reber_grammar, 4)],
[(default_reber_grammar, 5)],
[("T", 6)],
[("P", 6)],
[("E", None)]]
def generate_string(grammar):
state = 0
output = []
while state is not None:
production, state = choice(grammar[state])
if isinstance(production, list):
production = generate_string(grammar=production)
output.append(production)
return "".join(output)
```
Let's generate a few strings based on the default Reber grammar:
```
for _ in range(25):
print(generate_string(default_reber_grammar), end=" ")
```
Looks good. Now let's generate a few strings based on the embedded Reber grammar:
```
for _ in range(25):
print(generate_string(embedded_reber_grammar), end=" ")
```
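As a quick sanity check (my own sketch, reusing the transition-table representation from above), a string can be validated by walking the grammar deterministically, since no state has two outgoing edges labeled with the same letter. This handles the non-embedded grammar only:

```python
default_reber_grammar = [
    [("B", 1)],
    [("T", 2), ("P", 3)],
    [("S", 2), ("X", 4)],
    [("T", 3), ("V", 5)],
    [("X", 3), ("S", 6)],
    [("P", 4), ("V", 6)],
    [("E", None)]]

def is_valid_reber(string, grammar=default_reber_grammar):
    """Check whether `string` can be produced by the (non-embedded) grammar."""
    state, i = 0, 0
    while state is not None:
        if i >= len(string):
            return False  # string ended before reaching the terminal state
        transitions = dict(grammar[state])
        if string[i] not in transitions:
            return False  # no edge with this letter from the current state
        state = transitions[string[i]]
        i += 1
    return i == len(string)  # no trailing characters allowed

assert is_valid_reber("BTXXVVE")
assert not is_valid_reber("BTXXVVT")
```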
Okay, now we need a function to generate strings that do not respect the grammar. We could generate a random string, but the task would be a bit too easy, so instead we will generate a string that respects the grammar, and we will corrupt it by changing just one character:
```
def generate_corrupted_string(grammar, chars="BEPSTVX"):
good_string = generate_string(grammar)
index = np.random.randint(len(good_string))
good_char = good_string[index]
bad_char = choice(list(set(chars) - set(good_char)))
return good_string[:index] + bad_char + good_string[index + 1:]
```
Let's look at a few corrupted strings:
```
for _ in range(25):
print(generate_corrupted_string(embedded_reber_grammar), end=" ")
```
It's not possible to feed a string directly to an RNN: we need to convert it to a sequence of vectors first. Each vector will represent a single letter, using a one-hot encoding. For example, the letter "B" will be represented as the vector `[1, 0, 0, 0, 0, 0, 0]`, the letter "E" will be represented as `[0, 1, 0, 0, 0, 0, 0]` and so on. Let's write a function that converts a string to a sequence of such one-hot vectors. Note that if the string is shorter than `n_steps`, it will be padded with zero vectors (later, we will tell TensorFlow how long each string actually is using the `sequence_length` parameter).
```
def string_to_one_hot_vectors(string, n_steps, chars="BEPSTVX"):
char_to_index = {char: index for index, char in enumerate(chars)}
output = np.zeros((n_steps, len(chars)), dtype=np.int32)
for index, char in enumerate(string):
output[index, char_to_index[char]] = 1.
return output
string_to_one_hot_vectors("BTBTXSETE", 12)
```
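To double-check the encoding, here is a small round-trip sketch (the encoder is redefined locally so the cell is self-contained; `one_hot_vectors_to_string` is my own helper, not part of the exercise):

```python
import numpy as np

def string_to_one_hot_vectors(string, n_steps, chars="BEPSTVX"):
    char_to_index = {char: index for index, char in enumerate(chars)}
    output = np.zeros((n_steps, len(chars)), dtype=np.int32)
    for index, char in enumerate(string):
        output[index, char_to_index[char]] = 1.
    return output

def one_hot_vectors_to_string(vectors, chars="BEPSTVX"):
    # All-zero rows are padding and are skipped.
    return "".join(chars[np.argmax(row)] for row in vectors if row.any())

encoded = string_to_one_hot_vectors("BTBTXSETE", n_steps=12)
assert one_hot_vectors_to_string(encoded) == "BTBTXSETE"
```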
We can now generate the dataset, with 50% good strings, and 50% bad strings:
```
def generate_dataset(size):
good_strings = [generate_string(embedded_reber_grammar)
for _ in range(size // 2)]
bad_strings = [generate_corrupted_string(embedded_reber_grammar)
for _ in range(size - size // 2)]
all_strings = good_strings + bad_strings
n_steps = max([len(string) for string in all_strings])
X = np.array([string_to_one_hot_vectors(string, n_steps)
for string in all_strings])
seq_length = np.array([len(string) for string in all_strings])
y = np.array([[1] for _ in range(len(good_strings))] +
[[0] for _ in range(len(bad_strings))])
rnd_idx = np.random.permutation(size)
return X[rnd_idx], seq_length[rnd_idx], y[rnd_idx]
X_train, l_train, y_train = generate_dataset(10000)
```
Let's take a look at the first training instances:
```
X_train[0]
```
It's padded with a lot of zeros because the longest string in the dataset is that long. How long is this particular string?
```
l_train[0]
```
What class is it?
```
y_train[0]
```
Perfect! We are ready to create the RNN to identify good strings. We build a sequence classifier very similar to the one we built earlier to classify MNIST images, with two main differences:
* First, the input strings have variable length, so we need to specify the `sequence_length` when calling the `dynamic_rnn()` function.
* Second, this is a binary classifier, so we only need one output neuron that will output, for each input string, the estimated log probability that it is a good string. For multiclass classification, we used `sparse_softmax_cross_entropy_with_logits()` but for binary classification we use `sigmoid_cross_entropy_with_logits()`.
```
reset_graph()
possible_chars = "BEPSTVX"
n_inputs = len(possible_chars)
n_neurons = 30
n_outputs = 1
learning_rate = 0.02
momentum = 0.95
X = tf.placeholder(tf.float32, [None, None, n_inputs], name="X")
seq_length = tf.placeholder(tf.int32, [None], name="seq_length")
y = tf.placeholder(tf.float32, [None, 1], name="y")
gru_cell = tf.contrib.rnn.GRUCell(num_units=n_neurons)
outputs, states = tf.nn.dynamic_rnn(gru_cell, X, dtype=tf.float32,
sequence_length=seq_length)
logits = tf.layers.dense(states, n_outputs, name="logits")
y_pred = tf.cast(tf.greater(logits, 0.), tf.float32, name="y_pred")
y_proba = tf.nn.sigmoid(logits, name="y_proba")
xentropy = tf.nn.sigmoid_cross_entropy_with_logits(labels=y, logits=logits)
loss = tf.reduce_mean(xentropy, name="loss")
optimizer = tf.train.MomentumOptimizer(learning_rate=learning_rate,
momentum=momentum,
use_nesterov=True)
training_op = optimizer.minimize(loss)
correct = tf.equal(y_pred, y, name="correct")
accuracy = tf.reduce_mean(tf.cast(correct, tf.float32), name="accuracy")
init = tf.global_variables_initializer()
saver = tf.train.Saver()
```
Now let's generate a validation set so we can track progress during training:
```
X_val, l_val, y_val = generate_dataset(5000)
n_epochs = 2
batch_size = 50
with tf.Session() as sess:
init.run()
for epoch in range(n_epochs):
X_batches = np.array_split(X_train, len(X_train) // batch_size)
l_batches = np.array_split(l_train, len(l_train) // batch_size)
y_batches = np.array_split(y_train, len(y_train) // batch_size)
for X_batch, l_batch, y_batch in zip(X_batches, l_batches, y_batches):
loss_val, _ = sess.run(
[loss, training_op],
feed_dict={X: X_batch, seq_length: l_batch, y: y_batch})
acc_train = accuracy.eval(feed_dict={X: X_batch, seq_length: l_batch, y: y_batch})
acc_val = accuracy.eval(feed_dict={X: X_val, seq_length: l_val, y: y_val})
print("{:4d} Train loss: {:.4f}, accuracy: {:.2f}% Validation accuracy: {:.2f}%".format(
epoch, loss_val, 100 * acc_train, 100 * acc_val))
saver.save(sess, "my_reber_classifier")
```
Now let's test our RNN on two tricky strings: the first one is bad while the second one is good. They only differ by the second to last character. If the RNN gets this right, it shows that it managed to notice the pattern that the second letter should always be equal to the second to last letter. That requires a fairly long short-term memory (which is the reason why we used a GRU cell).
```
test_strings = [
"BPBTSSSSSSSSSSSSXXTTTTTVPXTTVPXTTTTTTTVPXVPXVPXTTTVVETE",
"BPBTSSSSSSSSSSSSXXTTTTTVPXTTVPXTTTTTTTVPXVPXVPXTTTVVEPE"]
l_test = np.array([len(s) for s in test_strings])
max_length = l_test.max()
X_test = [string_to_one_hot_vectors(s, n_steps=max_length)
for s in test_strings]
with tf.Session() as sess:
saver.restore(sess, "my_reber_classifier")
y_proba_val = y_proba.eval(feed_dict={X: X_test, seq_length: l_test})
print()
print("Estimated probability that these are Reber strings:")
for index, string in enumerate(test_strings):
print("{}: {:.2f}%".format(string, 100 * y_proba_val[index][0]))
```
Ta-da! It worked fine. The RNN found the correct answers with absolute confidence. :)
## 8. and 9.
Coming soon...
```
# Weighted average unit weight of the soil above the footing base
γm = (18*0.5 + 20 * 1.5 + 19.4 * 0.5)/2.5
# Characteristic bearing capacity of the bearing stratum
fa = 195 + 1.6 * 22.4 * (2.5 - 0.5)
# Characteristic loads at the base of column C
Fk = 1496
Mk = 325
Vk = 83
# Embedment depth used when computing Gk (weight of footing plus backfill)
d = 0.5 * (2.5 + 2.65)
γG = 20
# Required footing base area
A0 = Fk / (fa - γG * d)
# To allow for load eccentricity, enlarge the base area by 1.1-1.4; here 1.2
A = 1.2 * A0
# Take A = 12
A = 12
# Length and width
l = 4
b = 3
# Weight of footing plus backfill
Gk = γG * d * A
# Eccentricity
ek = Mk / (Fk + Gk)
# Maximum base pressure
Pkmax = (Fk + Gk) / A * (1 + 6 * ek / l)
print(γm)
print(fa)
print(d)
print(A0, 1.2 * A0)
print(A)
print(Gk)
print(ek)
print(Pkmax, "<", 1.2*fa)
import math
# Design load effects (fundamental combination) at the base of column C
F = 1945
M = 423
V = 108
# Net eccentricity
en0 = (M + V * 0.8) / F
# Maximum and minimum net bearing pressures at the footing edges
pnmax = F / (l*b) * (1 + 6 * en0 / l)
pnmin = F / (l*b) * (1 - 6 * en0 / l)
# Poisson's ratio
μ = 0.4
# Settlement influence coefficient
w = 1.08
# Deformation modulus
E = 9.5
# Settlement at the footing edges
smax = (1- math.pow(μ, 2)) / (E*math.pow(10,6)) * w * b * pnmax * math.pow(10, 3)
smin = (1- math.pow(μ, 2)) / (E*math.pow(10, 6)) * w * b * pnmin * math.pow(10, 3)
Δs = smax - smin
print(en0)
print(pnmax)
print(pnmin)
print(smax)
print(smin)
print(Δs)
# Foundation deformation check
G = γG * A * d
P = (F + G) / A
σc = γm * d
P0 = P - σc
z = d
# Bilinear interpolation of the additional-stress coefficient α from the code
# table, between grid points spaced 0.2 apart in z/b and l/b (the three table
# values are passed in by the caller)
def alpha(lb, lbLess, zb, zbLess, zbBigger, zbllbl, zbblbl, zbllbb: float) -> float:
return 4 * (((zb-zbLess)*(zbblbl-zbllbl)+(0.4-0.2)*zbllbl)/(0.4-0.2)+(lb-lbLess)*(zbllbb-zbllbl)/(0.4-0.2))
# Floor v to the 0.2 grid of the α table
def getLess(v: float) -> float:
i = 0
a = 0
while (a <= v):
a += 0.2
i += 1
return (i-1) * 0.2
# Ceil v to the 0.2 grid of the α table
def getBigger(v: float) -> float:
i = 0
a = 0
while (a <= v):
a += 0.2
i += 1
return i * 0.2
# Interpolate α for the given l/b and z/b from the three supplied table values
def cal_alpha(lb, zb, zbllbl, zbblbl, zbllbb: float) -> float:
return alpha(lb, getLess(lb), zb, getLess(zb), getBigger(zb), zbllbl, zbblbl, zbllbb)
lb = l / b
print(lb, getLess(lb))
zb0 = z / (0.5 * b)
z1 = 0.5 + 1.5 + 1.3
zb1 = z1 / (0.5 * b)
z2 = 0.5 + 1.5 + 1.3 + 1.75
zb2 = (z2 - z) / (0.5 * b)
z3 = 0.5 + 1.5 + 1.3 + 1.75 + 1.75
zb3 = (z3 - z)/ (0.5 * b)
z4 = 0.5 + 1.5 + 1.3 + 1.75 + 1.75 + 0.5
zb4 = (z4 - z) / (0.5 * b)
print("==========")
print(zb0, getLess(zb0), getBigger(zb0))
α0 = cal_alpha(lb, zb0, 0.2006, 0.1912, 0.2049)
print(α0)
print("==========")
print(zb1, getLess(zb1), getBigger(zb1))
α1 = cal_alpha(lb, zb1, 0.1737, 0.1657, 0.1793)
print(α1)
print("==========")
print(zb2, getLess(zb2), getBigger(zb2))
α2 = cal_alpha(lb, zb2, 0.2006, 0.1912, 0.2049)
print(α2)
print("==========")
print(zb3, getLess(zb3), getBigger(zb3))
α3 = cal_alpha(lb, zb3, 0.1514, 0.1449, 0.1574)
print(α3)
print("==========")
print(zb4, getLess(zb4), getBigger(zb4))
α4 = cal_alpha(lb, zb4, 0.1449, 0.1390, 0.1510)
print(α4)
ΔS0 = P0 / 9.5 * (z * α0 - 0)
ΔS1 = P0 / 9.5 * (z1 * α1 - z * α0)
ΔS2 = P0 / 14 * (z2 * α2 - z1 * α1)
ΔS3 = P0 / 14 * (z3 * α3 - z2 * α2)
ΔS4 = P0 / 2.0 * (z4 * α4 - z3 * α3)
print(ΔS0)
print(ΔS1)
print(ΔS2)
print(ΔS3)
print(ΔS4)
Es = (z4 * α4) / (z/9.5 + z1/9.5 + z2/14 + z3/14 + z4/20)
print(z4 * α4, z, z1, z2, z3, z4, Es)
# ψs = 1.4 - (2.5-2.28)/(4.0-2.5) * (1.4-1.3)
ψs = 1.4
print(ψs)
s = ψs * (ΔS0 + ΔS1 + ΔS2 + ΔS3 + ΔS4)
print(s)
print(G)
print(P)
print(σc)
print(P0)
import math
F = 1945
M = 423
V = 108
l=3
b=3
at=0.5
bc=0.5
ac=0.5
h=800
h0=h-(40+10)
ab=at+2*h0/1000
am=(at+ab)/2
e = (M + V * 0.8)/ F
pnmax = F / (l * b) * (1 + 6 * e / l)
pnmin = F / (l * b) * (1 - 6 * e / l)
Fl=pnmax*((l/2-ac/2-h0/1000)*b-math.pow((b/2-bc/2-h0/1000), 2))
Flk = 0.7 * 1.0 * 1.27 * math.pow(10, 3) * am * h0 / 1000
import math
F = 1945
M = 423
V = 108
l=3
b=3
at=1.5
b1=1.5
a1=1.9
h=400
h01=h-(40+10)
ab=at+2*h01/1000
am=(at+ab)/2
e = (M + V * 0.8)/ F
pnmax = F / (l * b) * (1 + 6 * e / l)
pnmin = F / (l * b) * (1 - 6 * e / l)
Fl=pnmax*((l/2-a1/2-h01/1000)*b-math.pow((b/2-b1/2-h01/1000), 2))
Flk = 0.7 * 1.0 * 1.27 * math.pow(10, 3) * am * h01 / 1000
pn1 = pnmin + (l + ac) / (2 * l) * (pnmax - pnmin)
# M at the column face: (1/48)(pmax + p1)(2b + b')(l - a')^2
M1 = 1 / 48 * (pnmax + pn1) * (2*b + bc) * math.pow((l-ac), 2)
As1 = M1 * math.pow(10, 6) / (0.9 * 210 * h0)
pn3 = pnmin + (l + a1) / (2 * l) * (pnmax - pnmin)
M3 = 1 / 48 * (pnmax + pn3) * (2*b + b1) * math.pow((l - a1), 2)
As3 = M3 * math.pow(10, 6) / (0.9 * 210 * h01)
As = As1 if As1 > As3 else As3
n = int(As / 254.5)
gap = int(3000 / n) - int(3000 / n) % 10
true_n = int(3000 / gap) - int(3000 / gap) % 1 + 2
true_n = 3000 / 150
true_as = 254.4 * true_n
print(n, gap, true_n, true_as)
import math
Es = Es
b = b
d = d
Pk = (Fk + Gk) / A
EsEs = Es / 2.0
zr = (z4 - z) / b
θ = 23
tanθ = math.tan(math.radians(θ))
dr = 0.5 + 1.5 + 1.3 + 3.5 - d
σcd = 0.5 * 18 + 1.5 * 20 + 0.5 * 19.4
σz = l*b*(Pk - σcd) / ((l + 2 * dr * tanθ) * (b + 2 * dr * tanθ))
σcz = 18 * 0.5 + 20 * 1.5 + 19.4 * 1.3 + 3.5 * 21
γmx = σcz / (d + dr)
faz = 70 + 1.0 * γmx * (6.8 - 0.5)
print(EsEs, zr, 0.5 * b)
print(tanθ)
print(σz, σcz, γmx, σz + σcz, faz)
```
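The base-pressure expression used repeatedly above, p = N/A · (1 ± 6e/l), can be factored into a small helper (a sketch with my own variable names; it is valid while e ≤ l/6, so that the whole base remains in compression):

```python
def base_pressures(N, l, b, e):
    """Maximum and minimum base pressure under a vertical load N (kN) on an
    l-by-b footing (m), with eccentricity e (m) along the l direction."""
    p_avg = N / (l * b)
    return p_avg * (1 + 6 * e / l), p_avg * (1 - 6 * e / l)

# Example with round numbers (not the design values above):
pmax, pmin = base_pressures(N=1800, l=4, b=3, e=0.2)
assert abs((pmax + pmin) / 2 - 1800 / 12) < 1e-9  # the two average to N/A
assert pmax > pmin
```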
```
# -*- coding: utf-8 -*-
"""
EVCのためのEV-GMMを構築します. そして, 適応学習する.
詳細 : https://pdfs.semanticscholar.org/cbfe/71798ded05fb8bf8674580aabf534c4dbb8bc.pdf
This program make EV-GMM for EVC. Then, it make adaptation learning.
Check detail : https://pdfs.semanticscholar.org/cbfe/71798ded05fb8bf8674580abf534c4dbb8bc.pdf
"""
from __future__ import division, print_function
import os
from shutil import rmtree
import argparse
import glob
import pickle
import time
import numpy as np
from numpy.linalg import norm
from sklearn.decomposition import PCA
from sklearn.mixture import GMM  # removed in sklearn 0.20.0
from sklearn.preprocessing import StandardScaler
import scipy.signal
import scipy.sparse
%matplotlib inline
import matplotlib.pyplot as plt
import IPython
from IPython.display import Audio
import soundfile as sf
import wave
import pyworld as pw
import librosa.display
import pysptk
from dtw import dtw
import warnings
warnings.filterwarnings('ignore')
"""
Parameters
__Mixtured : GMM混合数
__versions : 実験セット
__convert_source : 変換元話者のパス
__convert_target : 変換先話者のパス
"""
# parameters
__Mixtured = 40
__versions = 'pre-stored0.1.2'
__convert_source = 'input/EJM10/V01/T01/TIMIT/000/*.wav'
__convert_target = 'adaptation/EJM04/V01/T01/ATR503/A/*.wav'
# settings
__same_path = './utterance/' + __versions + '/'
__output_path = __same_path + 'output/EJM04/' # EJF01, EJF07, EJM04, EJM05
Mixtured = __Mixtured
pre_stored_pickle = __same_path + __versions + '.pickle'
pre_stored_source_list = __same_path + 'pre-source/**/V01/T01/**/*.wav'
pre_stored_list = __same_path + "pre/**/V01/T01/**/*.wav"
#pre_stored_target_list = "" (not yet)
pre_stored_gmm_init_pickle = __same_path + __versions + '_init-gmm.pickle'
pre_stored_sv_npy = __same_path + __versions + '_sv.npy'
save_for_evgmm_covarXX = __output_path + __versions + '_covarXX.npy'
save_for_evgmm_covarYX = __output_path + __versions + '_covarYX.npy'
save_for_evgmm_fitted_source = __output_path + __versions + '_fitted_source.npy'
save_for_evgmm_fitted_target = __output_path + __versions + '_fitted_target.npy'
save_for_evgmm_weights = __output_path + __versions + '_weights.npy'
save_for_evgmm_source_means = __output_path + __versions + '_source_means.npy'
for_convert_source = __same_path + __convert_source
for_convert_target = __same_path + __convert_target
converted_voice_npy = __output_path + 'sp_converted_' + __versions
converted_voice_wav = __output_path + 'sp_converted_' + __versions
mfcc_save_fig_png = __output_path + 'mfcc3dim_' + __versions
f0_save_fig_png = __output_path + 'f0_converted' + __versions
converted_voice_with_f0_wav = __output_path + 'sp_f0_converted' + __versions
EPSILON = 1e-8
class MFCC:
"""
MFCC() : メル周波数ケプストラム係数(MFCC)を求めたり、MFCCからスペクトルに変換したりするクラス.
動的特徴量(delta)が実装途中.
ref : http://aidiary.hatenablog.com/entry/20120225/1330179868
"""
def __init__(self, frequency, nfft=1026, dimension=24, channels=24):
"""
各種パラメータのセット
nfft : FFTのサンプル点数
frequency : サンプリング周波数
dimension : MFCC次元数
channles : メルフィルタバンクのチャンネル数(dimensionに依存)
fscale : 周波数スケール軸
filterbankl, fcenters : フィルタバンク行列, フィルタバンクの頂点(?)
"""
self.nfft = nfft
self.frequency = frequency
self.dimension = dimension
self.channels = channels
self.fscale = np.fft.fftfreq(self.nfft, d = 1.0 / self.frequency)[: int(self.nfft / 2)]
self.filterbank, self.fcenters = self.melFilterBank()
def hz2mel(self, f):
"""
周波数からメル周波数に変換
"""
return 1127.01048 * np.log(f / 700.0 + 1.0)
def mel2hz(self, m):
"""
メル周波数から周波数に変換
"""
return 700.0 * (np.exp(m / 1127.01048) - 1.0)
def melFilterBank(self):
"""
メルフィルタバンクを生成する
"""
fmax = self.frequency / 2
melmax = self.hz2mel(fmax)
nmax = int(self.nfft / 2)
df = self.frequency / self.nfft
dmel = melmax / (self.channels + 1)
melcenters = np.arange(1, self.channels + 1) * dmel
fcenters = self.mel2hz(melcenters)
indexcenter = np.round(fcenters / df)
indexstart = np.hstack(([0], indexcenter[0:self.channels - 1]))
indexstop = np.hstack((indexcenter[1:self.channels], [nmax]))
filterbank = np.zeros((self.channels, nmax))
for c in np.arange(0, self.channels):
increment = 1.0 / (indexcenter[c] - indexstart[c])
# np.int_ casts np.arange's float output [0. 1. 2. ...] to int
for i in np.int_(np.arange(indexstart[c], indexcenter[c])):
filterbank[c, i] = (i - indexstart[c]) * increment
decrement = 1.0 / (indexstop[c] - indexcenter[c])
# np.int_ casts np.arange's float output [0. 1. 2. ...] to int
for i in np.int_(np.arange(indexcenter[c], indexstop[c])):
filterbank[c, i] = 1.0 - ((i - indexcenter[c]) * decrement)
return filterbank, fcenters
def mfcc(self, spectrum):
"""
スペクトルからMFCCを求める.
"""
mspec = []
mspec = np.log10(np.dot(spectrum, self.filterbank.T))
mspec = np.array(mspec)
return scipy.fftpack.realtransforms.dct(mspec, type=2, norm="ortho", axis=-1)
def delta(self, mfcc):
"""
MFCCから動的特徴量を求める.
現在は,求める特徴量フレームtをt-1とt+1の平均としている.
"""
mfcc = np.concatenate([
[mfcc[0]],
mfcc,
[mfcc[-1]]
]) # prepend the first frame and append the last frame as padding
delta = None
for i in range(1, mfcc.shape[0] - 1):
slope = (mfcc[i+1] - mfcc[i-1]) / 2
if delta is None:
delta = slope
else:
delta = np.vstack([delta, slope])
return delta
def imfcc(self, mfcc, spectrogram):
"""
MFCCからスペクトルを求める.
"""
im_sp = np.array([])
for i in range(mfcc.shape[0]):
mfcc_s = np.hstack([mfcc[i], [0] * (self.channels - self.dimension)])
mspectrum = scipy.fftpack.idct(mfcc_s, norm='ortho')
# splrep computes the spline interpolation coefficients
tck = scipy.interpolate.splrep(self.fcenters, np.power(10, mspectrum))
# splev evaluates the interpolated values at the given coordinates
im_spectrogram = scipy.interpolate.splev(self.fscale, tck)
im_sp = np.concatenate((im_sp, im_spectrogram), axis=0)
return im_sp.reshape(spectrogram.shape)
def trim_zeros_frames(x, eps=1e-7):
"""
無音区間を取り除く.
"""
T, D = x.shape
s = np.sum(np.abs(x), axis=1)
s[s < eps] = 0.  # treat frames quieter than eps as silence
return x[s > eps]
def analyse_by_world_with_harverst(x, fs, fftl=1024, shiftms=5.0, minf0=40.0, maxf0=500.0):
"""
WORLD音声分析合成器で基本周波数F0,スペクトル包絡,非周期成分を求める.
基本周波数F0についてはharvest法により,より精度良く求める.
"""
#f0, time_axis = pw.harvest(x, fs, frame_period=shiftms)
f0, time_axis = pw.harvest(x, fs, f0_floor=minf0, f0_ceil=maxf0, frame_period=shiftms)
spc = pw.cheaptrick(x, f0, time_axis, fs, fft_size=fftl)
ap = pw.d4c(x, f0, time_axis, fs, fft_size=fftl)
# 4 Harvest with F0 refinement (using Stonemask)
"""
frame_period = 5
_f0, t = pw.harvest(x, fs, frame_period=frame_period)
f0 = pw.stonemask(x, _f0, t, fs)
sp = pw.cheaptrick(x, f0, t, fs)
ap = pw.d4c(x, f0, t, fs)
"""
return f0, spc, ap
def wavread(file):
"""
wavファイルから音声トラックとサンプリング周波数を抽出する.
"""
wf = wave.open(file, "r")
fs = wf.getframerate()
x = wf.readframes(wf.getnframes())
x = np.frombuffer(x, dtype= "int16") / 32768.0
wf.close()
return x, float(fs)
def preEmphasis(signal, p=0.97):
"""
MFCC抽出のための高域強調フィルタ.
波形を通すことで,高域成分が強調される.
"""
return scipy.signal.lfilter([1.0, -p], 1, signal)
def alignment(source, target, path):
"""
タイムアライメントを取る.
target音声をsource音声の長さに合うように調整する.
"""
# ここでは814に合わせよう(targetに合わせる)
# p_p = 0 if source.shape[0] > target.shape[0] else 1
#shapes = source.shape if source.shape[0] > target.shape[0] else target.shape
shapes = source.shape
align = np.array([])
for (i, p) in enumerate(path[0]):
if i != 0:
if j != p:
temp = np.array(target[path[1][i]])
align = np.concatenate((align, temp), axis=0)
else:
temp = np.array(target[path[1][i]])
align = np.concatenate((align, temp), axis=0)
j = p
return align.reshape(shapes)
"""
pre-stored学習のためのパラレル学習データを作る。
時間がかかるため、利用できるlearn-data.pickleがある場合はそれを利用する。
それがない場合は一から作り直す。
"""
timer_start = time.time()
dim = 24
alpha = 0.42
if os.path.exists(pre_stored_pickle):
print("exist, ", pre_stored_pickle)
with open(pre_stored_pickle, mode='rb') as f:
total_data = pickle.load(f)
print("open, ", pre_stored_pickle)
print("Load pre-stored time = ", time.time() - timer_start , "[sec]")
else:
source_mfcc = []
#source_data_sets = []
for name in sorted(glob.iglob(pre_stored_source_list, recursive=True)):
print(name)
x, fs = sf.read(name)
f0, sp, ap = analyse_by_world_with_harverst(x, fs)
mc = pysptk.sp2mc(sp, dim, alpha)
#source_data = np.hstack([source_mfcc_temp, mfcc.delta(source_mfcc_temp)]) # static & dynamic featuers
source_mfcc.append(mc)
#source_data_sets.append(source_data)
total_data = []
i = 0
_s_len = len(source_mfcc)
for name in sorted(glob.iglob(pre_stored_list, recursive=True)):
print(name, len(total_data))
x, fs = sf.read(name)
f0, sp, ap = analyse_by_world_with_harverst(x, fs)
mc = pysptk.sp2mc(sp, dim, alpha)
dist, cost, acc, path = dtw(source_mfcc[i%_s_len], mc, dist=lambda x, y: norm(x - y, ord=1))
#print('Normalized distance between the two sounds:' + str(dist))
#print("target_mfcc = {0}".format(target_mfcc.shape))
aligned = alignment(source_mfcc[i%_s_len], mc, path)
#target_data_sets = np.hstack([aligned, mfcc.delta(aligned)]) # static & dynamic features
#learn_data = np.hstack((source_data_sets[i], target_data_sets))
learn_data = np.hstack([source_mfcc[i%_s_len], aligned])
total_data.append(learn_data)
i += 1
with open(pre_stored_pickle, 'wb') as output:
pickle.dump(total_data, output)
print("Make, ", pre_stored_pickle)
print("Make pre-stored time = ", time.time() - timer_start , "[sec]")
"""
全事前学習出力話者からラムダを推定する.
ラムダは適応学習で変容する.
"""
S = len(total_data)
D = int(total_data[0].shape[1] / 2)
print("total_data[0].shape = ", total_data[0].shape)
print("S = ", S)
print("D = ", D)
timer_start = time.time()
if os.path.exists(pre_stored_gmm_init_pickle):
print("exist, ", pre_stored_gmm_init_pickle)
with open(pre_stored_gmm_init_pickle, mode='rb') as f:
initial_gmm = pickle.load(f)
print("open, ", pre_stored_gmm_init_pickle)
print("Load initial_gmm time = ", time.time() - timer_start , "[sec]")
else:
initial_gmm = GMM(n_components = Mixtured, covariance_type = 'full')
initial_gmm.fit(np.vstack(total_data))
with open(pre_stored_gmm_init_pickle, 'wb') as output:
pickle.dump(initial_gmm, output)
print("Make, ", initial_gmm)
print("Make initial_gmm time = ", time.time() - timer_start , "[sec]")
weights = initial_gmm.weights_
source_means = initial_gmm.means_[:, :D]
target_means = initial_gmm.means_[:, D:]
covarXX = initial_gmm.covars_[:, :D, :D]
covarXY = initial_gmm.covars_[:, :D, D:]
covarYX = initial_gmm.covars_[:, D:, :D]
covarYY = initial_gmm.covars_[:, D:, D:]
fitted_source = source_means
fitted_target = target_means
"""
SVはGMMスーパーベクトルで、各pre-stored学習における出力話者について平均ベクトルを推定する。
GMMの学習を見てみる必要があるか?
"""
timer_start = time.time()
if os.path.exists(pre_stored_sv_npy):
print("exist, ", pre_stored_sv_npy)
sv = np.load(pre_stored_sv_npy)
print("open, ", pre_stored_sv_npy)
print("Load pre_stored_sv time = ", time.time() - timer_start , "[sec]")
else:
sv = []
for i in range(S):
gmm = GMM(n_components = Mixtured, params = 'm', init_params = '', covariance_type = 'full')
gmm.weights_ = initial_gmm.weights_
gmm.means_ = initial_gmm.means_
gmm.covars_ = initial_gmm.covars_
gmm.fit(total_data[i])
sv.append(gmm.means_)
sv = np.array(sv)
np.save(pre_stored_sv_npy, sv)
print("Make pre_stored_sv time = ", time.time() - timer_start , "[sec]")
"""
各事前学習出力話者のGMM平均ベクトルに対して主成分分析(PCA)を行う.
PCAで求めた固有値と固有ベクトルからeigenvectorsとbiasvectorsを作る.
"""
timer_start = time.time()
#source_pca
source_n_component, source_n_features = sv[:, :, :D].reshape(S, Mixtured*D).shape
# Standardize (zero mean, unit variance)
source_stdsc = StandardScaler()
# Compute the covariance matrix
source_X_std = source_stdsc.fit_transform(sv[:, :, :D].reshape(S, Mixtured*D))
# Run PCA
source_cov = source_X_std.T @ source_X_std / (source_n_component - 1)
source_W, source_V_pca = np.linalg.eig(source_cov)
print(source_W.shape)
print(source_V_pca.shape)
# Project the data onto the principal-component space
source_X_pca = source_X_std @ source_V_pca
print(source_X_pca.shape)
#target_pca
target_n_component, target_n_features = sv[:, :, D:].reshape(S, Mixtured*D).shape
# Standardize (zero mean, unit variance)
target_stdsc = StandardScaler()
# Compute the covariance matrix
target_X_std = target_stdsc.fit_transform(sv[:, :, D:].reshape(S, Mixtured*D))
# Run PCA
target_cov = target_X_std.T @ target_X_std / (target_n_component - 1)
target_W, target_V_pca = np.linalg.eig(target_cov)
print(target_W.shape)
print(target_V_pca.shape)
# Project the data onto the principal-component space
target_X_pca = target_X_std @ target_V_pca
print(target_X_pca.shape)
eigenvectors = source_X_pca.reshape((Mixtured, D, S)), target_X_pca.reshape((Mixtured, D, S))
source_bias = np.mean(sv[:, :, :D], axis=0)
target_bias = np.mean(sv[:, :, D:], axis=0)
biasvectors = source_bias.reshape((Mixtured, D)), target_bias.reshape((Mixtured, D))
print("Do PCA time = ", time.time() - timer_start , "[sec]")
"""
声質変換に用いる変換元音声と目標音声を読み込む.
"""
timer_start = time.time()
source_mfcc_for_convert = []
source_sp_for_convert = []
source_f0_for_convert = []
source_ap_for_convert = []
fs_source = None
for name in sorted(glob.iglob(for_convert_source, recursive=True)):
print("source = ", name)
x_source, fs_source = sf.read(name)
f0_source, sp_source, ap_source = analyse_by_world_with_harverst(x_source, fs_source)
mc = pysptk.sp2mc(sp_source, dim, alpha)
#mfcc_s_tmp = mfcc_s.mfcc(sp)
#source_mfcc_for_convert = np.hstack([mfcc_s_tmp, mfcc_s.delta(mfcc_s_tmp)])
source_mfcc_for_convert.append(mc)
source_sp_for_convert.append(sp_source)
source_f0_for_convert.append(f0_source)
source_ap_for_convert.append(ap_source)
target_mfcc_for_fit = []
target_f0_for_fit = []
target_ap_for_fit = []
for name in sorted(glob.iglob(for_convert_target, recursive=True)):
print("target = ", name)
x_target, fs_target = sf.read(name)
f0_target, sp_target, ap_target = analyse_by_world_with_harverst(x_target, fs_target)
mc = pysptk.sp2mc(sp_target, dim, alpha)
#mfcc_target_tmp = mfcc_target.mfcc(sp_target)
#target_mfcc_for_fit = np.hstack([mfcc_t_tmp, mfcc_t.delta(mfcc_t_tmp)])
target_mfcc_for_fit.append(mc)
target_f0_for_fit.append(f0_target)
target_ap_for_fit.append(ap_target)
# Convert everything to numpy arrays
source_data_mfcc = np.array(source_mfcc_for_convert)
source_data_sp = np.array(source_sp_for_convert)
source_data_f0 = np.array(source_f0_for_convert)
source_data_ap = np.array(source_ap_for_convert)
target_mfcc = np.array(target_mfcc_for_fit)
target_f0 = np.array(target_f0_for_fit)
target_ap = np.array(target_ap_for_fit)
print("Load Input and Target Voice time = ", time.time() - timer_start , "[sec]")
"""
適応話者学習を行う.
つまり,事前学習出力話者から目標話者の空間を作りだす.
適応話者文数ごとにfitted_targetを集めるのは未実装.
"""
timer_start = time.time()
epoch=100
py = GMM(n_components = Mixtured, covariance_type = 'full')
py.weights_ = weights
py.means_ = target_means
py.covars_ = covarYY
fitted_target = None
for i in range(len(target_mfcc)):
print("adaptation = ", i+1, "/", len(target_mfcc))
target = target_mfcc[i]
for x in range(epoch):
print("epoch = ", x)
predict = py.predict_proba(np.atleast_2d(target))
y = np.sum([predict[:, i: i + 1] * (target - biasvectors[1][i])
for i in range(Mixtured)], axis = 1)
gamma = np.sum(predict, axis = 0)
left = np.sum([gamma[i] * np.dot(eigenvectors[1][i].T,
np.linalg.solve(py.covars_, eigenvectors[1])[i])
for i in range(Mixtured)], axis=0)
right = np.sum([np.dot(eigenvectors[1][i].T,
np.linalg.solve(py.covars_, y)[i])
for i in range(Mixtured)], axis = 0)
weight = np.linalg.solve(left, right)
fitted_target = np.dot(eigenvectors[1], weight) + biasvectors[1]
py.means_ = fitted_target
print("Load Input and Target Voice time = ", time.time() - timer_start , "[sec]")
"""
変換に必要なものを残しておく.
"""
np.save(save_for_evgmm_covarXX, covarXX)
np.save(save_for_evgmm_covarYX, covarYX)
np.save(save_for_evgmm_fitted_source, fitted_source)
np.save(save_for_evgmm_fitted_target, fitted_target)
np.save(save_for_evgmm_weights, weights)
np.save(save_for_evgmm_source_means, source_means)
```
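The script above repeatedly uses the same caching pattern: load a pickled result if the file exists, otherwise rebuild it and store it. A minimal, reusable sketch of that pattern (the helper name and example payload are illustrative, not part of the original script):

```python
import os
import pickle

def load_or_build(path, build_fn):
    """Return the object cached at `path`, rebuilding it with
    `build_fn()` (and caching the result) when the file is missing."""
    if os.path.exists(path):
        with open(path, mode='rb') as f:
            return pickle.load(f)
    obj = build_fn()
    with open(path, 'wb') as out:
        pickle.dump(obj, out)
    return obj

# e.g. total_data = load_or_build(pre_stored_pickle, make_parallel_data)
# where make_parallel_data wraps the DTW-alignment loop above
```

Factoring the pattern out this way avoids the repeated `exists`/`open`/`dump` blocks around `total_data`, `initial_gmm`, and `sv`.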
# Do prediction on a whole folder and create stacked Numpy files for each image
```
import matplotlib.pyplot as plt
import matplotlib.pylab as pylab
import random
#import requests
from io import BytesIO
from PIL import Image
import numpy as np
import os
import cv2
from matplotlib.image import imread
```
These are the relevant imports for the detection model
```
from maskrcnn_benchmark.config import cfg
pylab.rcParams['figure.figsize'] = 20, 12
# importing the prediction class
from predictor import NUCLEIdemo
# make sure that pytorch is installed correctly, check
# https://github.com/rusty1s/pytorch_geometric/issues/114
# for troubleshooting if CUDA errors occur
```
The NUCLEIdemo class loads the config file and performs the image prediction.
There are multiple models to choose from for prediction. All models can be found in the folder `/data/proj/smFISH/Students/Max_Senftleben/files/models/`. Choosing `model_final.pth` in each directory gives the best results.
In the following, some examples are provided:
`/data/proj/smFISH/Students/Max_Senftleben/files/models/20190423_transfer_ale/model_final.pth`
originally trained on nuclei from somatosensory cortex cells and then re-trained on
oligodendrocyte progenitor cells, backbone ResNet 50.
`/data/proj/smFISH/Students/Max_Senftleben/files/models/20190310_offline_augment/model_final.pth`
originally trained on nuclei from somatosensory cortex cells, backbone ResNet 50.
`/data/proj/smFISH/Students/Max_Senftleben/files/models/20190315_poly_t/model_final.pth`
trained on cells from poly-A staining, ResNet 50.
Most models were trained with a learning rate of 0.0025, which was found to work best. Information about the training process of each model can be found in that model's folder in the `log.txt` file, e.g. with `head -100 log.txt`.
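To survey the available checkpoints and their training logs in one pass, a small shell sketch along these lines can help (the function name is illustrative; the directory layout is assumed to match the models folder described above):

```shell
list_checkpoints() {
    # Print each subdirectory that contains a final checkpoint,
    # followed by the head of its training log.
    local root="$1"
    for d in "$root"/*/; do
        if [ -f "${d}model_final.pth" ]; then
            echo "checkpoint: ${d}model_final.pth"
            [ -f "${d}log.txt" ] && head -5 "${d}log.txt"
        fi
    done
}

# e.g. list_checkpoints /data/proj/smFISH/Students/Max_Senftleben/files/models
```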
```
configuration_file = "../configs/nuclei_1gpu_nonorm_offline_res50.yaml"
# update the config options with the config file
cfg.merge_from_file(configuration_file)
# manual override some options
cfg.merge_from_list(["MODEL.DEVICE", "cpu"])
# change dimensions of test images
cfg.merge_from_list(['INPUT.MAX_SIZE_TEST','2049'])
# change number of classes (classes + 1 for background)
cfg.merge_from_list(['MODEL.ROI_BOX_HEAD.NUM_CLASSES','4'])
# change normalization, here model was not normalized
cfg.merge_from_list(['INPUT.PIXEL_MEAN', [0., 0., 0.]])
# define model for prediction to use here
cfg.merge_from_list(['MODEL.WEIGHT', '/data/proj/smFISH/Students/Max_Senftleben/files/models/20190423_transfer_ale/model_final.pth'])
cfg.merge_from_list(['OUTPUT_DIR', '.'])
# show the configuration
print(cfg)
```
## Multiple ways of loading and plotting of images
For my purposes, `load_cv2` worked best because it handles all the relevant image formats.
```
# load image
def load(path):
pil_image = Image.open(path).convert("RGB")
#print(pil_image)
# convert to BGR format
image = np.array(pil_image)[:, :, [2, 1, 0]]
return image
def load_matplot(path):
img = imread(path)
return img
def load_cv2(path):
img = cv2.imread(path,cv2.IMREAD_ANYDEPTH)
img = cv2.cvtColor(img, cv2.COLOR_GRAY2BGR)
img = cv2.normalize(img, img, 0, 255, cv2.NORM_MINMAX)
img = np.uint8(img)
#img = cv2.convertScaleAbs(img)
return img
def load_pil(path):
img = Image.open(path)
image = np.array(img)
info = np.iinfo(image.dtype) # Get the information of the incoming image type
print(info)
data = image.astype(np.int32) / info.max # normalize the data to 0 - 1
data = 255 * data # Now scale by 255
img = data.astype(np.uint8)
cv2.imshow("Window", img)
# show image alongside the result and save if necessary
def imshow(img, result, save_path=None):
fig = plt.figure()
ax1 = fig.add_subplot(1,2,1)
ax1.imshow(img)
plt.axis('off')
ax2 = fig.add_subplot(1,2,2)
ax2.imshow(result)
plt.axis('off')
if save_path:
plt.savefig(save_path, bbox_inches = 'tight')
plt.show()
else:
plt.show()
def imshow_single(result, save_path=None):
fig = plt.figure()
ax = fig.add_subplot(1,1,1)
ax.imshow(result)
plt.axis('off')
if save_path:
plt.savefig(save_path, bbox_inches='tight')
plt.close()
else:
plt.show()
# here the image size and the confidence threshold can be changed
nuclei_detect = NUCLEIdemo(
cfg,
min_image_size=1024,
confidence_threshold=0.3,
)
```
### Define the image paths and do the prediction on the whole folder
```
# make a stacked numpy file from the prediction and the ground-truth image,
# where the first plane is the image and each following plane is one predicted mask
def make_numpy(prediction, image, path):
# get the masks from the prediction variable
list_masks = vars(prediction)['extra_fields']['mask']
masks_to_save = []
# ground truth image
img = np.squeeze(np.dsplit(image,3)[0], axis=2)
masks_to_save.append(img)
# iterate through the list of masks
for i, label in enumerate(vars(prediction)['extra_fields']['labels']):
numpy_mask = list_masks[i].numpy().transpose(1,2,0)
numpy_mask = np.squeeze(numpy_mask, axis=2)
numpy_mask[numpy_mask > 0] = label
masks_to_save.append(numpy_mask)
# save the numpy array
np.save(path, np.dstack(masks_to_save))
# predict for a folder of images
# folder of handled images
img_path = '/data/proj/smFISH/Simone/test_intron/AMEXP20181106/AMEXP20181106_hyb1/test_run_20181123_AMEXP20181106_hyb1_filtered_png/test_run_20181123_AMEXP20181106_hyb1_DAPI_filtered_png/'
# path to subfolder for the results
save_results = '/data/proj/smFISH/Students/Max_Senftleben/files/results/'
# path to save the images with their masks
save_independently = save_results + '20190329_test_run_20181123_AMEXP20181106_hyb1_DAPI_filtered_png/'
# path to save the predicted stacked numpy files
save_npy = save_results + '20190329_test_run_20181123_AMEXP20181106_hyb1_DAPI_filtered_npy/'
def save_pred_as_numpy():
for one_image in os.listdir(img_path):
print("Image {} is handled.".format(one_image))
image = load_cv2(img_path + one_image)
# prediction is done
result, prediction = nuclei_detect.run_on_opencv_image_original(image)
img = Image.fromarray(result)
# png image is saved with masks (for visualization)
img.save(save_independently + one_image[:-4] + '_pred.png')
# numpy files are saved
make_numpy(prediction, image, save_npy + one_image[:-4] + '_pred.npy')
# optionally, the results can be shown
#imshow(image, result)
# check predicted numpy files
# can also be used to check the chunks from below
random_img = random.choice(os.listdir(save_npy))
mask = np.load(save_npy+random_img)
mask_list = np.dsplit(mask, mask.shape[2])
'''
for i in mask_list:
print(i)
plt.imshow(np.squeeze(i, axis=2))
plt.show()
print(np.unique(i))
'''
# numpy arrays from above have the size of the original image
# here, the arrays can be sliced so that they can be used in training in the next step
# after this step, the chunks can further be used in the creation of the data set
def chunking_labeled_images(number_chunks_dimension, old_chunks, new_chunks):
for i in os.listdir(old_chunks):
mask = np.load(old_chunks + i)
height, width = mask.shape[:2]
instance_count = mask.shape[2]
#masklist = np.dsplit(mask, instance_count)
#plt.imshow(np.dstack((masklist[1]*100, masklist[1]*100, masklist[1]*100)))
#plt.show()
hsplits = np.split(mask,number_chunks_dimension,axis=0)
total_images = []
for split in hsplits:
total_images.append(np.split(split,number_chunks_dimension,axis=1))
total_images = [img for cpl in total_images for img in cpl]
for idx,image_chunk in enumerate(total_images):
image_chunks_ids = []
mask = image_chunk != 0
planes_to_keep = np.flatnonzero((mask).sum(axis=(0,1)))
# Make sure that the image has labeled objects
if planes_to_keep.size:
image_chunk_trimmed = image_chunk[:,:,planes_to_keep]
image_chunk_trimmed_id = new_chunks + i.split('.')[0]+'chunk'+str(idx)
np.save(image_chunk_trimmed_id, image_chunk_trimmed)
old = '/data/proj/smFISH/Students/Max_Senftleben/files/results/20190329_test_run_20181123_AMEXP20181106_hyb1_DAPI_filtered_npy/'
new = '/data/proj/smFISH/Students/Max_Senftleben/files/data/20190422_AMEX_transfer_nuclei/npy/'
dim = 2
chunking_labeled_images(dim, old, new)
```
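The chunking step above can be reduced to a small, self-contained numpy sketch: split a `(H, W, planes)` label stack into an n-by-n grid and keep only the planes that actually contain labels in each chunk (array shapes here are illustrative):

```python
import numpy as np

def chunk_label_stack(stack, n):
    """Split a (H, W, planes) array into an n x n grid of chunks and,
    per chunk, drop planes that contain no labeled (non-zero) pixels.
    Chunks without any labels are skipped entirely."""
    chunks = []
    for rows in np.split(stack, n, axis=0):
        for chunk in np.split(rows, n, axis=1):
            keep = np.flatnonzero((chunk != 0).sum(axis=(0, 1)))
            if keep.size:
                chunks.append(chunk[:, :, keep])
    return chunks
```

This mirrors the `np.split`/`np.flatnonzero` logic of `chunking_labeled_images`, minus the file I/O.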
# Mount Drive
```
from google.colab import drive
drive.mount('/content/drive', force_remount=True)
```
# Install packages
```
!pip install -q transformers
# !pip install -q tensorflow==2.2-rc1
!pip install -q tf-models-official==2.2.0
!pip install keras-lr-multiplier
```
# Import libraries
```
import os
import time
import datetime
import random
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
from tqdm import tqdm
from sklearn.metrics import confusion_matrix, classification_report, accuracy_score, roc_auc_score
from sklearn.preprocessing import LabelBinarizer
from sklearn.model_selection import train_test_split
from transformers import TFAutoModel, AutoTokenizer, TFBertForSequenceClassification,AutoConfig
import tensorflow as tf
from tensorflow.keras.utils import to_categorical
from tensorflow.keras.layers import Maximum,LayerNormalization,GlobalMaxPooling2D,Average,Dot, Dense, Input, GlobalAveragePooling1D, BatchNormalization, Activation, Concatenate, Flatten, Dropout, Conv1D, MaxPooling1D, Add, Lambda, GlobalAveragePooling2D, Reshape, RepeatVector, UpSampling1D
from tensorflow.keras.models import Model
from keras.layers import LSTM, Bidirectional
from official import nlp
import official.nlp.optimization
tf.keras.backend.clear_session()
# Set seed value
seed_value = 56
# 1. Set the `PYTHONHASHSEED` environment variable at a fixed value
os.environ['PYTHONHASHSEED']=str(0)
# 2. Set `python` built-in pseudo-random generator at a fixed value
random.seed(seed_value)
# 3. Set `numpy` pseudo-random generator at a fixed value
np.random.seed(seed_value)
# 4. Set `tensorflow` pseudo-random generator at a fixed value
tf.random.set_seed(seed_value)
# for later versions:
# tf.compat.v1.set_random_seed(seed_value)
# 5. Configure a new global `tensorflow` session
from keras import backend as K
# session_conf = tf.ConfigProto(intra_op_parallelism_threads=1, inter_op_parallelism_threads=1)
# sess = tf.Session(graph=tf.get_default_graph(), config=session_conf)
# K.set_session(sess)
# for later versions:
session_conf = tf.compat.v1.ConfigProto(intra_op_parallelism_threads=1, inter_op_parallelism_threads=1)
sess = tf.compat.v1.Session(graph=tf.compat.v1.get_default_graph(), config=session_conf)
tf.compat.v1.keras.backend.set_session(sess)
```
# Parameters
```
base_dir = '/content/drive/MyDrive/mediaeval2016'
train_path = os.path.join(base_dir, 'train-ekphrasis.csv')
val_path = os.path.join(base_dir, 'dev-ekphrasis.csv')
test_path = os.path.join(base_dir, 'test-ekphrasis.csv')
img_train_path = os.path.join(base_dir, 'train-img-224.npy')
img_val_path = os.path.join(base_dir, 'dev-img-224.npy')
img_test_path = os.path.join(base_dir, 'test-img-224.npy')
MAX_LENGTH = 32
MODEL = 'vinai/bertweet-base'
MODEL_NAME = 'vinai/bertweet-base'
N_LABELS = 1
```
# Read data
```
df_train = pd.read_csv(train_path)
print(df_train.shape)
print(df_train.info())
display(df_train.head())
df_val = pd.read_csv(val_path)
print(df_val.shape)
print(df_val.info())
display(df_val.head())
df_test = pd.read_csv(test_path)
print(df_test.shape)
print(df_test.info())
display(df_test.head())
# Get the lists of sentences and their labels.
train_sent = df_train.cleaned_text.values
train_labels = df_train.enc_label.values
val_sent = df_val.cleaned_text.values
val_labels = df_val.enc_label.values
test_sent = df_test.cleaned_text.values
test_labels = df_test.enc_label.values
#Bertweet tokens
import re
for i in range(train_sent.shape[0]):
train_sent[i] = re.sub(r'<url>','HTTPURL',train_sent[i])
train_sent[i] = re.sub(r'<user>','@USER',train_sent[i])
for i in range(val_sent.shape[0]):
val_sent[i] = re.sub(r'<url>','HTTPURL',val_sent[i])
val_sent[i] = re.sub(r'<user>','@USER',val_sent[i])
for i in range(test_sent.shape[0]):
test_sent[i] = re.sub(r'<url>','HTTPURL',test_sent[i])
test_sent[i] = re.sub(r'<user>','@USER',test_sent[i])
train_sent = np.append(train_sent,val_sent)
train_labels = np.append(train_labels,val_labels)
#Image
img_train = np.load(img_train_path)
img_val = np.load(img_val_path)
img_test = np.load(img_test_path)
img_train.shape, img_test.shape, img_val.shape
#Use both train and val to train like other papers
img_train = np.vstack((img_train,img_val))
img_train.shape
print(len(train_sent), len(train_labels))
print(len(val_sent), len(val_labels))
print(len(test_sent), len(test_labels))
```
# Tokenization & Input Formatting
```
!pip install emoji
# Load the BERT tokenizer.
print('Loading BERT tokenizer...')
tokenizer = AutoTokenizer.from_pretrained(MODEL, do_lower_case=False)
print(' Original: ', train_sent[4])
print('Tokenized: ', tokenizer.tokenize(train_sent[4]))
print('Token IDs: ', tokenizer.convert_tokens_to_ids(tokenizer.tokenize(train_sent[4])))
```
## Find max length
```
max_len = 0
stat = []
for sent in tqdm(train_sent):
input_ids = tokenizer.encode(sent, add_special_tokens=True)
# if(len(input_ids)>96):
# print(sent)
max_len = max(max_len, len(input_ids))
stat.append(len(input_ids))
print('\nMax sentence length in train data: ', max_len)
# for sent in tqdm(val_sent):
# input_ids = tokenizer.encode(sent, add_special_tokens=True)
# # if(len(input_ids)>96):
# # print(sent)
# max_len = max(max_len, len(input_ids))
# stat.append(len(input_ids))
# print('\nMax sentence length in both train and val data: ', max_len)
for index,sent in tqdm(enumerate(test_sent)):
input_ids = tokenizer.encode(sent, add_special_tokens=True)
if(len(input_ids)>96):
print(index, sent)
max_len = max(max_len, len(input_ids))
stat.append(len(input_ids))
print('\nMax sentence length across train and test data: ', max_len)
```
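Rather than tracking only the maximum, the length distribution is what justifies a padding length like `MAX_LENGTH = 32`; a minimal sketch for summarizing it (the helper is illustrative and would be called on the `stat` list collected above):

```python
import numpy as np

def length_summary(lengths, percentiles=(50, 95, 99)):
    """Summarize token-length statistics used to pick a padding length."""
    arr = np.asarray(lengths)
    summary = {'max': int(arr.max())}
    for p in percentiles:
        summary[f'p{p}'] = int(np.percentile(arr, p))
    return summary

# e.g. length_summary(stat) -> pick MAX_LENGTH near the 95th/99th percentile
```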
## Tokenize
### Train
```
input_ids = []
attention_masks = []
for sent in tqdm(train_sent):
encoded_dict = tokenizer.encode_plus(
sent,
add_special_tokens = True,
max_length = MAX_LENGTH,
pad_to_max_length = True,
return_attention_mask = True,
return_tensors = 'np',
truncation = True,
)
input_ids.append(encoded_dict['input_ids'])
attention_masks.append(encoded_dict['attention_mask'])
id_train = np.concatenate(input_ids)
mask_train = np.concatenate(attention_masks)
y_train = train_labels
id_train.shape, mask_train.shape, y_train.shape
```
### Val
```
input_ids = []
attention_masks = []
for sent in tqdm(val_sent):
encoded_dict = tokenizer.encode_plus(
sent,
add_special_tokens = True,
max_length = MAX_LENGTH,
pad_to_max_length = True,
return_attention_mask = True,
return_tensors = 'np',
truncation = True,
)
input_ids.append(encoded_dict['input_ids'])
attention_masks.append(encoded_dict['attention_mask'])
id_val = np.concatenate(input_ids)
mask_val = np.concatenate(attention_masks)
y_val = val_labels
id_val.shape, mask_val.shape, y_val.shape
```
## Create iterator for data
```
BATCH_SIZE = 256
X_train = [
id_train,
mask_train,
img_train
]
X_val = [
id_val,
mask_val,
img_val
]
```
# Train TFBertForSequenceClassification Model
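The cross- and self-attention blocks built below out of `Dot`, `Lambda`, and softmax layers all implement standard scaled dot-product attention; as a plain numpy sketch of that operation (shapes are illustrative, single head):

```python
import numpy as np

def scaled_dot_product_attention(query, key, value):
    """attention(Q, K, V) = softmax(Q K^T / sqrt(d_k)) V"""
    d_k = query.shape[-1]
    scores = query @ key.T / np.sqrt(d_k)           # (n_q, n_k)
    scores -= scores.max(axis=-1, keepdims=True)    # numerical stability
    weights = np.exp(scores)
    weights /= weights.sum(axis=-1, keepdims=True)  # softmax over keys
    return weights @ value                          # (n_q, d_v)
```

In the model below, the text features play the query role against image keys/values (and vice versa), with `np.sqrt(att_hid/head)` as the scaling factor.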
```
from tensorflow.keras.applications import VGG19 #Change here for different models
from tensorflow.keras.applications.vgg19 import preprocess_input #Change here for different models
def create_cnn(input_shape):
inputs = Input(shape=input_shape)
x = Lambda(preprocess_input)(inputs)
# load the VGG19 network, keeping the fully connected head since its
# intermediate outputs are used below - Change here for different models
baseModel = VGG19(weights="imagenet", include_top=True, input_tensor=x)
# construct the head of the model that will be placed on top of the base model
headModel_cnn = baseModel.layers[22].output
headModel_fc = baseModel.layers[25].output
###Attention
headModel_cnn = Reshape((-1,headModel_cnn.shape[-1]))(headModel_cnn)
###Non-attention
# headModel = Flatten()(headModel)
# headModel = Dense(512)(headModel)
# headModel = BatchNormalization()(headModel)
# headModel = Activation("relu")(headModel)
# headModel = Dropout(0.2)(headModel)
# headModel = Dense(512)(headModel)
# headModel = BatchNormalization()(headModel)
# headModel = Activation("relu")(headModel)
# headModel = Dropout(0.2)(headModel)
model = Model(inputs=baseModel.input, outputs=[headModel_cnn, headModel_fc])
# loop over all layers in the base model and freeze them so they will
# *not* be updated during the first training process
for layer in baseModel.layers:
layer.trainable = False
return model
def create_model(transformer, max_len=256):
merge = []
input_ids = Input(shape=(max_len,), dtype=tf.int32, name='input_ids')
attention_mask = Input(shape=(max_len,), dtype=tf.int32, name='attention_mask')
image = create_cnn((224,224,3))
sequence_output_1 = transformer(input_ids, attention_mask=attention_mask)[2][-1]
sequence_output_2 = transformer(input_ids, attention_mask=attention_mask)[2][-2]
sequence_output_3 = transformer(input_ids, attention_mask=attention_mask)[2][-3]
sequence_output_4 = transformer(input_ids, attention_mask=attention_mask)[2][-4]
sequence_output = Concatenate(axis=2)([sequence_output_1, sequence_output_2, sequence_output_3, sequence_output_4])
#cls_token = sequence_output[:, 0, :]
#merge.append(cls_token)
# Yoon Kim model (https://arxiv.org/abs/1408.5882)
convs = []
filter_sizes = [2,3,4,5]
size_pool = 3
drop_rate = 0.2
final_hid = 32
for filter_size in filter_sizes:
l_conv = Conv1D(filters=768, kernel_size=filter_size)(sequence_output)
l_conv = BatchNormalization()(l_conv)
l_conv = Activation('relu')(l_conv)
# l_pool = MaxPooling1D(pool_size=max_len-filter_size+1)(l_conv)
l_pool = MaxPooling1D(pool_size=size_pool)(l_conv)
convs.append(l_pool)
#merge.append(Flatten()(l_pool))
l2_pool = Concatenate(axis=1)(convs)
# l2_pool = BatchNormalization()(l2_pool)
for _ in range(2):
origin = l2_pool
l2_conv = Conv1D(filters=768, kernel_size=size_pool,padding='same')(l2_pool)
l2_conv = BatchNormalization()(l2_conv)
l2_conv = Activation('relu')(l2_conv)
#print(origin.shape, l2_conv.shape)
# l2_conv = Add()([Lambda(lambda x: x[0]*x[1])([origin,0.1]), l2_conv])
l2_pool = MaxPooling1D(pool_size=size_pool)(l2_conv)
text = Flatten()(l2_pool)
#append to merge
text_append = Dense(512)(text)
text_append = BatchNormalization()(text_append)
text_append = Activation('relu')(text_append)
text_append = Dropout(drop_rate)(text_append)
text_append = Dense(final_hid)(text_append)
text_append = BatchNormalization()(text_append)
text_append = Activation('relu')(text_append)
# merge.append(text_append)
# text_append = l2_pool
#text for attention
# text = Dropout(0.2)(text)
# text = Dense(128)(text)
# text = BatchNormalization()(text)
# text = Activation("tanh")(text) #change to tanh?
# text = Dropout(0.2)(text)
image_append = Dense(2048)(image.output[1])
image_append = BatchNormalization()(image_append)
image_append = Activation('relu')(image_append)
image_append = Dropout(drop_rate)(image_append)
image_append = Dense(final_hid)(image_append)
image_append = BatchNormalization()(image_append)
image_append = Activation('relu')(image_append)
img = Dense(final_hid)(image.output[0])
img = BatchNormalization()(img)
img = Activation('relu')(img)
# image_append = image.output[1]
# merge.append(image_append)
# image_append = image.output[0]
head = 1
att_layers = 1
att_hid = 32
##With attention - text on image
inpAttImg_key = img
inpAttImg_query = text_append
for layer in range(1):
att_img = []
concat_key = []
for _ in range(head):
img_key = Dense(att_hid/head, use_bias=False)(inpAttImg_key) #change to tanh?
text_query = Dense(att_hid/head, use_bias=False)(inpAttImg_query)
# img_key = BatchNormalization()(img_key)
# img_key = Activation('tanh')(img_key)
# text_query = BatchNormalization()(text_query)
# text_query = Activation('tanh')(text_query)
img_value = Dense(att_hid/head, use_bias=False)(inpAttImg_key)
# img_value = BatchNormalization()(img_value)
# img_value = Activation('tanh')(img_value)
attention = Dot(axes=(1,2))([text_query, img_key])
attention = Lambda(lambda x: x[0]/x[1])([attention,np.sqrt(att_hid/head)])
attention = Activation("softmax")(attention)
head_att_img = Dot(axes=(1,1))([attention, img_value])
att_img = head_att_img
concat_key = img_key
# att_img = Concatenate(axis=1)(att_img)
# att_img = Dense(512, use_bias=False)(att_img)
# att_img = Dropout(0.3)(att_img)
# att_img = Add()([att_img, inpAttImg_query])
# att_img = LayerNormalization()(att_img)
# att_img2 = Dense(1024,activation='relu')(att_img)
# att_img2 = Dense(512)(att_img)
# att_img2 = Dropout(0.1)(att_img2)
# att_img = Add()([att_img, att_img2])
# att_img = LayerNormalization()(att_img)
att_img2 = Average()([ Activation('relu')(BatchNormalization()(Dense(final_hid)(att_img))),
Activation('relu')(BatchNormalization()(Dense(final_hid)(att_img))),
Activation('relu')(BatchNormalization()(Dense(final_hid)(att_img))),
Activation('relu')(BatchNormalization()(Dense(final_hid)(att_img)))])
att_img2 = Dense(final_hid)(att_img2)
att_img2 = BatchNormalization()(att_img2)
att_img2 = Activation('relu')(att_img2)
att_img2 = Dropout(drop_rate)(att_img2)
att_img = Add()([inpAttImg_query, att_img2])
att_img = LayerNormalization()(att_img)
# concat_key = Concatenate(axis=1)(concat_key)
# concat_key = Dense(512)(concat_key)
# concat_key = Dropout(0.3)(inpAttImg_key)
# concat_key = Add()([concat_key, inpAttImg_key])
# concat_key = LayerNormalization()(concat_key)
# concat_key2 = Dense(1024,activation='relu')(concat_key)
# concat_key2 = Dense(512)(concat_key)
# concat_key2 = Dropout(0.1)(concat_key2)
# concat_key = Add()([concat_key, concat_key2])
# concat_key = LayerNormalization()(concat_key)
inpAttImg_query = att_img
inpAttImg_key = concat_key
# merge.append(att_img)
##attention image on text
inpAttText_key = l2_pool
inpAttText_query = image_append
for layer in range(1):
att_text = []
concat_key = []
for _ in range(head):
img_query = Dense(att_hid/head, use_bias=False)(inpAttText_query)
# # # img_query = BatchNormalization()(img_query)
# # # img_query = Activation('tanh')(img_query)
text_key = Dense(att_hid/head, use_bias=False)(inpAttText_key)
# # # text_key = BatchNormalization()(text_key)
# # # text_key = Activation('tanh')(text_key)
text_value = Dense(att_hid/head, use_bias=False)(inpAttText_key)
attention = Dot(axes=(1,2))([img_query, text_key])
attention = Lambda(lambda x: x[0]/x[1])([attention,np.sqrt(att_hid/head)])
attention = Activation("softmax")(attention)
head_att_text = Dot(axes=(1,1))([attention, text_value])
att_text = head_att_text
concat_key = text_key
# att_text = Concatenate(axis=1)(att_text)
# att_text = Dense(512, use_bias=False)(att_text)
# att_text = Dropout(0.3)(att_text)
# att_text = Add()([att_text, inpAttText_query])
# att_text = LayerNormalization()(att_text)
# att_text2 = Dense(1024,activation='relu')(att_text)
# att_text2 = Dense(512)(att_text)
# att_text2 = Dropout(0.1)(att_text2)
# att_text = Add()([att_text, att_text2])
# att_text = LayerNormalization()(att_text)
# concat_key = Concatenate(axis=1)(concat_key)
# concat_key = Dense(512)(concat_key)
# concat_key = Dropout(0.3)(concat_key)
# concat_key = Add()([concat_key, inpAttText_key])
# concat_key = LayerNormalization()(concat_key)
# concat_key2 = Dense(1024,activation='relu')(concat_key)
# concat_key2 = Dense(512)(concat_key)
# concat_key2 = Dropout(0.1)(concat_key2)
# concat_key = Add()([concat_key, concat_key2])
# concat_key = LayerNormalization()(concat_key)
att_text2 = Average()([Activation('relu')(BatchNormalization()(Dense(final_hid)(att_text))),
Activation('relu')(BatchNormalization()(Dense(final_hid)(att_text))),
Activation('relu')(BatchNormalization()(Dense(final_hid)(att_text))),
Activation('relu')(BatchNormalization()(Dense(final_hid)(att_text)))])
att_text2 = Dense(final_hid)(att_text2)
att_text2 = BatchNormalization()(att_text2)
att_text2 = Activation('relu')(att_text2)
att_text2 = Dropout(drop_rate)(att_text2)
att_text = Add()([inpAttText_query, att_text2])
att_text = LayerNormalization()(att_text)
inpAttText_query = att_text
inpAttText_key = concat_key
head = 1
###Self attention image
inpAttSelf_key = img
inpAttSelf_query = img
    for layer in range(att_layers):
        self_img = []
        for _ in range(head):
            self_query = Dense(att_hid // head, use_bias=False)(inpAttSelf_query)
            self_key = Dense(att_hid // head, use_bias=False)(inpAttSelf_key)
            self_value = Dense(att_hid // head, use_bias=False)(inpAttSelf_key)
            # Scaled dot-product attention: QK^T / sqrt(d_k), softmax, then weight V
            attention = Dot(axes=(2, 2))([self_query, self_key])
            attention = Lambda(lambda x: x / np.sqrt(att_hid // head))(attention)
            attention = Activation("softmax")(attention)
            head_att_self = Dot(axes=(2, 1))([attention, self_value])
            self_img.append(head_att_self)
        # Join the per-head outputs (with a single head there is nothing to concatenate)
        self_img = Concatenate(axis=2)(self_img) if head > 1 else self_img[0]
        # self_img = Dense(512, use_bias=False)(self_img)
        # self_img = Dropout(0.3)(self_img)
        # self_img = Add()([self_img, inpAttSelf_query])
        # self_img = LayerNormalization()(self_img)
        # self_img2 = Dense(1024,activation='relu')(self_img)
        # self_img2 = Dense(512)(self_img2)
        # self_img2 = Dropout(0.3)(self_img2)
        # self_img = Add()([self_img, self_img2])
        # self_img = LayerNormalization()(self_img)
        self_img2 = Average()([Activation('relu')(BatchNormalization()(Dense(final_hid)(self_img))),
                               Activation('relu')(BatchNormalization()(Dense(final_hid)(self_img))),
                               Activation('relu')(BatchNormalization()(Dense(final_hid)(self_img))),
                               Activation('relu')(BatchNormalization()(Dense(final_hid)(self_img)))])
        self_img2 = Dense(final_hid)(self_img2)
        self_img2 = BatchNormalization()(self_img2)
        self_img2 = Activation('relu')(self_img2)
        self_img2 = Dropout(drop_rate)(self_img2)
        # Residual connection and layer norm; this layer's output feeds the next layer
        self_img = Add()([inpAttSelf_query, self_img2])
        self_img = LayerNormalization()(self_img)
        inpAttSelf_query = self_img
        inpAttSelf_key = self_img
    # merge.append(att_text)
    self_img = Flatten()(self_img)
    # att_img = Dense(32)(Flatten()(att_img))
    # att_img = BatchNormalization()(att_img)
    # att_img = Activation('relu')(att_img)
    # att_text = Flatten()(att_text)
    # att_text = Concatenate(axis=1)([att_text,self_img])
    # att_text = Dense(32)(att_text)
    # att_text = BatchNormalization()(att_text)
    # att_text = Activation('relu')(att_text)
    self_img = Dense(final_hid)(self_img)
    self_img = BatchNormalization()(self_img)
    self_img = Activation('relu')(self_img)
    # self_img = Dense(32)(self_img)
    # self_img = BatchNormalization()(self_img)
    # self_img = Activation('relu')(self_img)
    # text_append = Dense(512)(Flatten()(text_append))
    # text_append = BatchNormalization()(text_append)
    # text_append = Activation('relu')(text_append)
    # image_append = Dense(512)(Flatten()(image_append))
    # image_append = BatchNormalization()(image_append)
    # image_append = Activation('relu')(image_append)
    # merge.append(att_img)
    # merge.append(att_text)
    merge.append(self_img)
    # merge.append(Dense(512,activation='relu')(Flatten()(self_img)))
    # merge.append(text_append)
    merge.append(image_append)
    l_merge = Concatenate(axis=1)(merge)
    # l_merge = image_append
    l_merge = Dropout(drop_rate)(l_merge)
    l_merge = Dense(final_hid)(l_merge)
    l_merge = BatchNormalization()(l_merge)
    l_merge = Activation('relu')(l_merge)
    l_merge = Dropout(drop_rate)(l_merge)
    # l_merge = Average()(merge)
    out = Dense(N_LABELS, activation='sigmoid')(l_merge)
    model = Model(inputs=[input_ids, attention_mask, image.input],
                  outputs=out)
    return model
%%time
EPOCHS = 8
train_data_size = len(y_train)
steps_per_epoch = int(train_data_size / BATCH_SIZE) + 1
num_train_steps = steps_per_epoch * EPOCHS
# warmup_steps = int(steps_per_epoch * 1)
warmup_steps = 0
# Create the learning rate scheduler.
decay_schedule = tf.keras.optimizers.schedules.PolynomialDecay(
initial_learning_rate=1e-4,
decay_steps=num_train_steps,
end_learning_rate=0)
warmup_schedule = nlp.optimization.WarmUp(
initial_learning_rate=7e-5,
decay_schedule_fn=decay_schedule,
warmup_steps=warmup_steps)
optimizer = nlp.optimization.AdamWeightDecay(
learning_rate=warmup_schedule,
epsilon=1e-8,
exclude_from_weight_decay=['LayerNorm', 'layer_norm', 'bias'])
#Load bert4news
# config = AutoConfig.from_pretrained('/content/drive/MyDrive/bert4news/config.json')
# transformer = TFAutoModel.from_pretrained('/content/drive/MyDrive/bert4news/pytorch_model.bin', from_pt=True, config=config)
transformer = TFAutoModel.from_pretrained(MODEL, output_attentions=False, output_hidden_states=True, return_dict=True)
transformer.trainable = False
model = create_model(transformer, max_len=MAX_LENGTH)
model.compile(optimizer=tf.keras.optimizers.Adam(learning_rate=1e-4),
loss='binary_crossentropy',
metrics='accuracy')
model.summary()
from sklearn.utils import class_weight
class_weights = class_weight.compute_class_weight(class_weight='balanced', classes=np.unique(np.argmax(train_labels, axis=1)), y=np.argmax(train_labels, axis=1))
class_weights = {i : class_weights[i] for i in range(3)}
class_weights
```
## Train model
```
from tensorflow.keras.callbacks import Callback
from sklearn.metrics import f1_score
from sklearn.datasets import make_classification
class roc_auc_callback(Callback):  # despite the name, this callback reports validation F1 scores
    def __init__(self, training_data, validation_data):
        self.x = training_data[0]
        self.y = training_data[1]
        self.x_val = validation_data[0]
        self.y_val = validation_data[1]

    def on_epoch_end(self, epoch, logs={}):
        # y_pred_train = self.model.predict(self.x, verbose=0)
        # y_pred_train = (y_pred_train>=0.5)*1
        # y_true_train = self.y
        # f1_train_micro = f1_score(y_true_train, y_pred_train, average='micro')
        # f1_train_macro = f1_score(y_true_train, y_pred_train, average='macro')
        y_pred_val = self.model.predict(self.x_val, verbose=0)
        y_pred_val = (y_pred_val > 0.5) * 1
        y_true_val = self.y_val
        f1_val_micro = f1_score(y_true_val, y_pred_val, average='micro')
        f1_val_macro = f1_score(y_true_val, y_pred_val, average='macro')
        print('\nf1_val: micro - %s macro - %s' % (str(round(f1_val_micro, 4)), str(round(f1_val_macro, 4))), end=100*' '+'\n')
n_steps = int(np.ceil(y_train.shape[0] / BATCH_SIZE))
# Checkpoint path
ckpt_path = f'/content/drive/My Drive/mediaeval2016/checkpoint-test/{MODEL_NAME}/'
if not os.path.exists(ckpt_path):
os.makedirs(ckpt_path)
ckpt_path += 'cp-{epoch:02d}.h5'
# Callback
my_callbacks = [tf.keras.callbacks.ModelCheckpoint(filepath=ckpt_path,
monitor='val_loss',
save_weights_only=True,
save_freq='epoch'),
tf.keras.callbacks.EarlyStopping(monitor='val_loss', patience=10, verbose=1),
roc_auc_callback(training_data=(X_train, y_train),validation_data=(X_test_data, test_labels))]
H = model.fit(
X_train, y_train,
validation_data=(X_test_data, test_labels),
batch_size=BATCH_SIZE,
epochs=EPOCHS,
#steps_per_epoch=n_steps,
# class_weight=class_weights,
shuffle=False,
callbacks=my_callbacks
)
```
# Saving Fine-Tuned Model
```
model.save_weights(os.path.join(base_dir, f'model/{MODEL_NAME}.h5'))
EPOCHS = 20
%%time
train_data_size = len(y_train)
steps_per_epoch = int(train_data_size / BATCH_SIZE) + 1
num_train_steps = steps_per_epoch * EPOCHS
# warmup_steps = int(num_train_steps * 0.1)
warmup_steps = 0
# Create the learning rate scheduler.
decay_schedule = tf.keras.optimizers.schedules.PolynomialDecay(
initial_learning_rate=2e-5,
decay_steps=num_train_steps,
end_learning_rate=0)
warmup_schedule = nlp.optimization.WarmUp(
initial_learning_rate=2e-5,
decay_schedule_fn=decay_schedule,
warmup_steps=warmup_steps)
optimizer = nlp.optimization.AdamWeightDecay(
learning_rate=warmup_schedule,
epsilon=1e-8,
exclude_from_weight_decay=['LayerNorm', 'layer_norm', 'bias'])
#Load bert4news
# config = AutoConfig.from_pretrained('/content/drive/MyDrive/bert4news/config.json')
# transformer = TFAutoModel.from_pretrained('/content/drive/MyDrive/bert4news/pytorch_model.bin', from_pt=True, config=config)
transformer = TFAutoModel.from_pretrained(MODEL, output_attentions=False, output_hidden_states=True, return_dict=True)
transformer.trainable = False
model2 = create_model(transformer, max_len=MAX_LENGTH)
model2.compile(optimizer=optimizer,
loss='binary_crossentropy',
metrics='accuracy')
model2.summary()
model2.load_weights(os.path.join(base_dir, f'checkpoint-test/{MODEL_NAME}/cp-03.h5'))
# model2.evaluate(X_val,y_val)
```
# Predict
```
input_ids = []
attention_masks = []
for sent in tqdm(test_sent):
    encoded_dict = tokenizer.encode_plus(
        sent,
        add_special_tokens=True,
        max_length=MAX_LENGTH,
        padding='max_length',  # replaces the deprecated pad_to_max_length=True
        return_attention_mask=True,
        return_tensors='np',
        truncation=True,
    )
    input_ids.append(encoded_dict['input_ids'])
    attention_masks.append(encoded_dict['attention_mask'])
id_test_data = np.concatenate(input_ids)
mask_test_data = np.concatenate(attention_masks)
id_test_data.shape, mask_test_data.shape
X_test_data = [
id_test_data,
mask_test_data,
img_test
]
pred = model2.predict(X_test_data, verbose=1)
test_true = test_labels
test_true.shape, pred.shape
from sklearn import metrics
theta = 0.4
# Print the per-label confusion matrices (the targets are multi-label indicators)
print(metrics.multilabel_confusion_matrix(test_true, (pred>theta).astype(int)))
# Print the precision and recall, among other metrics
print(metrics.classification_report(test_true, (pred>theta).astype(int), digits=3))
```
# Measurement Error Mitigation
```
from qiskit import *
```
### Introduction
The effect of noise is to give us outputs that are not quite correct. The effect of noise that occurs throughout a computation will be quite complex in general, as one would have to consider how each gate transforms the effect of each error.
A simpler form of noise is that occurring during the final measurement. At this point, the only job remaining in the circuit is to extract a bit string as an output. For an $n$-qubit final measurement, this means extracting one of the $2^n$ possible $n$-bit strings. As a simple model of the noise in this process, we can imagine that the measurement first selects one of these outputs in a perfect and noiseless manner, and then noise subsequently causes this perfect output to be randomly perturbed before it is returned to the user.
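The perturbation model just described can be sketched in plain NumPy (the function names here are illustrative, not part of any library): a perfect readout is sampled, then each bit is flipped independently with probability $p$.

```python
import numpy as np

def perturb_readout(bitstring, p, rng):
    """Flip each bit of a perfect readout independently with probability p."""
    bits = np.array([int(b) for b in bitstring])
    flips = rng.random(len(bits)) < p
    noisy = bits ^ flips
    return ''.join(str(b) for b in noisy)

def noisy_counts(ideal_bitstring, p, shots=10000, seed=0):
    """Sample `shots` noisy readouts of a single ideal output string."""
    rng = np.random.default_rng(seed)
    counts = {}
    for _ in range(shots):
        out = perturb_readout(ideal_bitstring, p, rng)
        counts[out] = counts.get(out, 0) + 1
    return counts

print(noisy_counts('00', p=0.01))
```

With $p=0.01$ the correct string dominates and single-bit-flip neighbours each appear for roughly $1\%$ of the samples, mirroring the behaviour explored below.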
Given this model, it is very easy to determine exactly what the effects of measurement errors are. We can simply prepare each of the $2^n$ possible basis states, immediately measure them, and see what probability exists for each outcome.
As an example, we will first create a simple noise model, which randomly flips each bit in an output with probability $p$.
```
from qiskit.providers.aer.noise import NoiseModel
from qiskit.providers.aer.noise.errors import pauli_error, depolarizing_error
def get_noise(p):
    error_meas = pauli_error([('X', p), ('I', 1 - p)])
    noise_model = NoiseModel()
    # measurement error is applied to measurements
    noise_model.add_all_qubit_quantum_error(error_meas, "measure")
    return noise_model
```
Let's start with an instance of this in which each bit is flipped $1\%$ of the time.
```
noise_model = get_noise(0.01)
```
Now we can test out its effects. Specifically, let's define a two qubit circuit and prepare the states $\left|00\right\rangle$, $\left|01\right\rangle$, $\left|10\right\rangle$ and $\left|11\right\rangle$. Without noise, these would lead to the definite outputs `'00'`, `'01'`, `'10'` and `'11'`, respectively. Let's see what happens with noise. Here, and in the rest of this section, the number of samples taken for each circuit will be `shots=10000`.
```
for state in ['00','01','10','11']:
    qc = QuantumCircuit(2,2)
    if state[0]=='1':
        qc.x(1)
    if state[1]=='1':
        qc.x(0)
    qc.measure(qc.qregs[0],qc.cregs[0])
    print(state+' becomes',
          execute(qc,Aer.get_backend('qasm_simulator'),noise_model=noise_model,shots=10000).result().get_counts())
```
Here we find that the correct output is certainly the most dominant. Outputs that differ from it on only a single bit (such as `'01'` and `'10'` when the correct output is `'00'` or `'11'`) occur around $1\%$ of the time. Those that differ on two bits occur only a handful of times in 10000 samples, if at all.
So what if we ran a circuit with this same noise model, and got a result like the following?
```
{'10': 98, '11': 4884, '01': 111, '00': 4907}
```
Here `'01'` and `'10'` occur for around $1\%$ of all samples. We know from our analysis of the basis states that such a result can be expected when these outcomes should in fact never occur; instead, the true result should be something that differs from them by only one bit: `'00'` or `'11'`. When we look at the results for those two outcomes, we see that they occur with roughly equal probability. We can therefore conclude that the initial state was not simply $\left|00\right\rangle$ or $\left|11\right\rangle$, but an equal superposition of the two. If true, this means that the result should have been something along the lines of:
```
{'11': 4977, '00': 5023}
```
Here is a circuit that produces results like this (up to statistical fluctuations).
```
qc = QuantumCircuit(2,2)
qc.h(0)
qc.cx(0,1)
qc.measure(qc.qregs[0],qc.cregs[0])
print(execute(qc,Aer.get_backend('qasm_simulator'),noise_model=noise_model,shots=10000).result().get_counts())
```
In this example we first looked at results for each of the definite basis states, and used these results to mitigate the effects of errors for a more general form of state. This is the basic principle behind measurement error mitigation.
### Error mitigation with linear algebra
Now we just need to find a way to perform the mitigation algorithmically rather than manually. We will do this by describing the random process using matrices. For this we need to rewrite our counts dictionaries as column vectors. For example, the dictionary `{'10': 96, '11': 1, '01': 95, '00': 9808}`, obtained when preparing $\left|00\right\rangle$, would be rewritten as
$$
C =
\begin{pmatrix}
9808 \\
95 \\
96 \\
1
\end{pmatrix}.
$$
Here the first element is that for `'00'`, the next is that for `'01'`, and so on.
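This ordering can be sketched as a small helper (the function name is illustrative): enumerate all $n$-bit strings in lexicographic order and read each count off the dictionary, defaulting to zero.

```python
def counts_to_vector(counts, n=2):
    """Order a counts dict as a column vector over all n-bit strings: '00','01','10','11',..."""
    labels = [format(i, '0%db' % n) for i in range(2**n)]
    return [[counts.get(label, 0)] for label in labels]

C = counts_to_vector({'10': 96, '11': 1, '01': 95, '00': 9808})
print(C)  # [[9808], [95], [96], [1]]
```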
The information gathered from the basis states $\left|00\right\rangle$, $\left|01\right\rangle$, $\left|10\right\rangle$ and $\left|11\right\rangle$ can then be used to define a matrix, which rotates from an ideal set of counts to one affected by measurement noise. This is done by simply taking the counts dictionary for $\left|00\right\rangle$, normalizing it so that all elements sum to one, and then using it as the first column of the matrix. The next column is similarly defined by the counts dictionary obtained for $\left|01\right\rangle$, and so on.
There will be statistical variations each time the circuit for each basis state is run. In the following, we will use the data obtained when this section was written, which was as follows.
```
00 becomes {'10': 96, '11': 1, '01': 95, '00': 9808}
01 becomes {'10': 2, '11': 103, '01': 9788, '00': 107}
10 becomes {'10': 9814, '11': 90, '01': 1, '00': 95}
11 becomes {'10': 87, '11': 9805, '01': 107, '00': 1}
```
This gives us the following matrix.
$$
M =
\begin{pmatrix}
0.9808&0.0107&0.0095&0.0001 \\
0.0095&0.9788&0.0001&0.0107 \\
0.0096&0.0002&0.9814&0.0087 \\
0.0001&0.0103&0.0090&0.9805
\end{pmatrix}
$$
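The column-by-column construction just described can be sketched directly from the calibration data above (the `calib_counts` name is illustrative):

```python
import numpy as np

# Counts observed when preparing each basis state (data from the text)
calib_counts = {
    '00': {'10': 96, '11': 1, '01': 95, '00': 9808},
    '01': {'10': 2, '11': 103, '01': 9788, '00': 107},
    '10': {'10': 9814, '11': 90, '01': 1, '00': 95},
    '11': {'10': 87, '11': 9805, '01': 107, '00': 1},
}

labels = ['00', '01', '10', '11']
# Each prepared state contributes one column; rows index the measured outcome
M = np.array([[calib_counts[prep].get(meas, 0) for prep in labels]
              for meas in labels], dtype=float)
M /= M.sum(axis=0)  # normalize each column so it sums to one
print(M)
```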
If we now take the vector describing the perfect results for a given state, applying this matrix gives us a good approximation of the results when measurement noise is present.
$$ C_{noisy} = M \, C_{ideal}. $$
As an example, let's apply this process for the state $(\left|00\right\rangle+\left|11\right\rangle)/\sqrt{2}$,
$$
\begin{pmatrix}
0.9808&0.0107&0.0095&0.0001 \\
0.0095&0.9788&0.0001&0.0107 \\
0.0096&0.0002&0.9814&0.0087 \\
0.0001&0.0103&0.0090&0.9805
\end{pmatrix}
\begin{pmatrix}
0 \\
5000 \\
5000 \\
0
\end{pmatrix}
=
\begin{pmatrix}
101 \\
4894.5 \\
4908 \\
96.5
\end{pmatrix}.
$$
In code, we can express this as follows.
```
import numpy as np
M = [[0.9808,0.0107,0.0095,0.0001],
[0.0095,0.9788,0.0001,0.0107],
[0.0096,0.0002,0.9814,0.0087],
[0.0001,0.0103,0.0090,0.9805]]
Cideal = [[0],
[5000],
[5000],
[0]]
Cnoisy = np.dot( M, Cideal)
print('C_noisy =\n',Cnoisy)
```
Either way, the resulting counts in $C_{noisy}$, predicted for measuring the state $(\left|00\right\rangle+\left|11\right\rangle)/\sqrt{2}$ with measurement noise, come out quite close to the actual data we found earlier. So this matrix method is indeed a good way of predicting noisy results given a knowledge of what the results should be.
Unfortunately, this is the exact opposite of what we need. Instead of a way to transform ideal counts data into noisy data, we need a way to transform noisy data into ideal data. In linear algebra, we do this for a matrix $M$ by finding the inverse matrix $M^{-1}$,
$$C_{ideal} = M^{-1} C_{noisy}.$$
```
import scipy.linalg as la
M = [[0.9808,0.0107,0.0095,0.0001],
[0.0095,0.9788,0.0001,0.0107],
[0.0096,0.0002,0.9814,0.0087],
[0.0001,0.0103,0.0090,0.9805]]
Minv = la.inv(M)
print(Minv)
```
Applying this inverse to $C_{noisy}$, we can obtain an approximation of the true counts.
```
Cmitigated = np.dot( Minv, Cnoisy)
print('C_mitigated =\n',Cmitigated)
```
Of course, counts should be integers, and so these values need to be rounded. This gives us a very nice result.
$$
C_{mitigated} =
\begin{pmatrix}
0 \\
5000 \\
5000 \\
0
\end{pmatrix}
$$
This is exactly the true result we desire. Our mitigation worked extremely well!
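As a quick sketch of that final step: the illustrative vector below also includes a small negative entry, which matrix inversion can produce in general (none appear in the example above); clipping it to zero keeps the counts physical.

```python
import numpy as np

# Illustrative mitigated values, not the ones computed above
Cmitigated = np.array([[0.3], [4999.6], [5000.4], [-0.3]])
counts = np.clip(np.round(Cmitigated), 0, None).astype(int)
print(counts.flatten())  # rounded, non-negative integer counts
```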
### Error mitigation in Qiskit
```
from qiskit.ignis.mitigation.measurement import (complete_meas_cal,CompleteMeasFitter)
```
The process of measurement error mitigation can also be done using tools from Qiskit. This handles the collection of data for the basis states, the construction of the matrices, and the inversion. The inversion can be performed with a pseudoinverse of the calibration matrix, much like the explicit inverse we computed above. However, the default is an even more sophisticated method using least-squares fitting.
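The pseudoinverse route can be sketched with `np.linalg.pinv` (the least-squares fitting that Qiskit uses by default additionally constrains the result to a physical, non-negative distribution, which plain pseudoinversion does not):

```python
import numpy as np

M = np.array([[0.9808, 0.0107, 0.0095, 0.0001],
              [0.0095, 0.9788, 0.0001, 0.0107],
              [0.0096, 0.0002, 0.9814, 0.0087],
              [0.0001, 0.0103, 0.0090, 0.9805]])

Cnoisy = np.array([[101.0], [4894.5], [4908.0], [96.5]])
# For a square well-conditioned M the pseudoinverse equals the inverse,
# but pinv also handles ill-conditioned or non-square calibration matrices.
Cideal = np.linalg.pinv(M) @ Cnoisy
print(np.round(Cideal).flatten())
```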
As an example, let's stick with doing error mitigation for a pair of qubits. For this we define a two qubit quantum register, and feed it into the function `complete_meas_cal`.
```
import qiskit

qr = qiskit.QuantumRegister(2)
meas_calibs, state_labels = complete_meas_cal(qr=qr, circlabel='mcal')
```
This creates a set of circuits to take measurements for each of the four basis states for two qubits: $\left|00\right\rangle$, $\left|01\right\rangle$, $\left|10\right\rangle$ and $\left|11\right\rangle$.
```
for circuit in meas_calibs:
    print('Circuit', circuit.name)
    print(circuit)
    print()
```
Let's now run these circuits without any noise present.
```
# Execute the calibration circuits without noise
backend = qiskit.Aer.get_backend('qasm_simulator')
job = qiskit.execute(meas_calibs, backend=backend, shots=1000)
cal_results = job.result()
```
With the results we can construct the calibration matrix, which we have been calling $M$.
```
meas_fitter = CompleteMeasFitter(cal_results, state_labels, circlabel='mcal')
print(meas_fitter.cal_matrix)
```
With no noise present, this is simply the identity matrix.
Now let's create a noise model. And to make things interesting, let's have the errors be ten times more likely than before.
```
noise_model = get_noise(0.1)
```
Again we can run the circuits, and look at the calibration matrix, $M$.
```
backend = qiskit.Aer.get_backend('qasm_simulator')
job = qiskit.execute(meas_calibs, backend=backend, shots=1000, noise_model=noise_model)
cal_results = job.result()
meas_fitter = CompleteMeasFitter(cal_results, state_labels, circlabel='mcal')
print(meas_fitter.cal_matrix)
```
This time we find a more interesting matrix, and one for which naive inversion can produce unphysical results such as negative counts. Let's see how well we can mitigate for this noise. Again, let's use the Bell state $(\left|00\right\rangle+\left|11\right\rangle)/\sqrt{2}$ for our test.
```
qc = QuantumCircuit(2,2)
qc.h(0)
qc.cx(0,1)
qc.measure(qc.qregs[0],qc.cregs[0])
results = qiskit.execute(qc, backend=backend, shots=10000, noise_model=noise_model).result()
noisy_counts = results.get_counts()
print(noisy_counts)
```
In Qiskit we mitigate the noise by creating a measurement filter object. Then, taking the results from above, we use this to calculate a mitigated set of counts. Qiskit returns this as a dictionary, so that the user doesn't need to use vectors themselves to get the result.
```
# Get the filter object
meas_filter = meas_fitter.filter
# Results with mitigation
mitigated_results = meas_filter.apply(results)
mitigated_counts = mitigated_results.get_counts(0)
```
To see the results most clearly, let's plot both the noisy and mitigated results.
```
from qiskit.visualization import *
%config InlineBackend.figure_format = 'svg' # Makes the images look nice
plot_histogram([noisy_counts, mitigated_counts], legend=['noisy', 'mitigated'])
```
Here we have taken results for which almost $20\%$ of samples are in the wrong state, and turned it into an exact representation of what the true results should be. However, this example does have just two qubits with a simple noise model. For more qubits, and more complex noise models or data from real devices, the mitigation will have more of a challenge. Perhaps you might find methods that are better than those Qiskit uses!
```
import qiskit
qiskit.__qiskit_version__
```
## Creating Time Series
```
# Create the range of dates here
seven_days = pd.date_range(start = '2017-1-1', periods = 7)
# Iterate over the dates and print the number and name of the weekday
for day in seven_days:
    print(day.dayofweek, day.day_name())  # .weekday_name was removed in later pandas
```
## Indexing and resampling
```
data = pd.read_csv('nyc.csv')
# Inspect data
print(data.info())
# Convert the date column to datetime64
data['date'] = pd.to_datetime(data.date)
# Set date column as index
data.set_index('date', inplace = True)
# Inspect data
print(data.info())
# Plot data
data.plot(subplots=True)
plt.show()
```
### Compare annual stock price trends
```
# Create dataframe prices here
prices = pd.DataFrame()
# Select data for each year and concatenate with prices here
for year in ['2013', '2014', '2015']:
    price_per_year = yahoo.loc[year, ['price']].reset_index(drop=True)
    price_per_year.rename(columns={'price': year}, inplace=True)
    prices = pd.concat([prices, price_per_year], axis=1)
# Plot prices
prices.plot()
plt.show()
```
### Set and change time series frequency
```
# Inspect data
print(co.info())
# Set the frequency to calendar daily
co = co.asfreq('D')
# Plot the data
co.plot(subplots= True)
plt.show()
# Set frequency to monthly
co = co.asfreq('M')
# Plot the data
co.plot(subplots= True)
plt.show()
```
### Shifting time series
```
# Import data here
google = pd.read_csv('google.csv', parse_dates = ['Date'], index_col = 'Date')
# Set data frequency to business daily
google = google.asfreq('B')
# Create 'lagged' and 'shifted'
google['lagged'] = google.Close.shift(-90)
google['shifted'] = google.Close.shift(90)
# Plot the google price series
google.plot()
plt.show()
```
### Calculating changes
```
# Created shifted_30 here
yahoo['shifted_30'] = yahoo.price.shift(30)
# Subtract shifted_30 from price
yahoo['change_30'] = yahoo.price.sub(yahoo.shifted_30)
# Get the 30-day price difference
yahoo['diff_30'] = yahoo.price.diff(30)
# Inspect the last five rows of price
print(yahoo.tail())
# Show the value_counts of the difference between change_30 and diff_30
print(yahoo.change_30.sub(yahoo.diff_30).value_counts())
# Create daily_return
google['daily_return'] = google.Close.pct_change(1)*100
# Create monthly_return
google['monthly_return'] = google.Close.pct_change(30)*100
# Create annual_return
google['annual_return'] = google.Close.pct_change(360)*100
# Plot the result
google.plot(subplots=True)
plt.show()
```
## Comparing Time Series Growth Rate
### Create a base-100 stock comparison graphic
```
# Import data here
prices = pd.read_csv('asset_classes.csv', parse_dates = ['DATE'], index_col = 'DATE')
# Inspect prices here
print(prices.info())
# Select first prices
first_prices = prices.iloc[0]
# Create normalized
normalized = prices.div(first_prices).mul(100)
# Plot normalized
normalized.plot()
plt.show()
```
### Compare with a benchmark or subtract the benchmark
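This subsection has no exercise code, so here is a minimal sketch of both approaches; the `stock` and `benchmark` series are synthetic stand-ins, not course data.

```python
import pandas as pd

dates = pd.date_range('2015-01-01', periods=5, freq='D')
stock = pd.Series([100, 102, 105, 103, 108], index=dates)
benchmark = pd.Series([1000, 1010, 1015, 1020, 1030], index=dates)

# Rebase both series to 100 so their growth rates are directly comparable
rebased = pd.concat([stock, benchmark], axis=1, keys=['stock', 'benchmark'])
rebased = rebased.div(rebased.iloc[0]).mul(100)

# Subtracting the benchmark shows out-/under-performance relative to it
excess = rebased['stock'].sub(rebased['benchmark'])
print(excess)
```

Plotting `rebased` gives the comparison view; plotting `excess` gives the difference view.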
### Using reindex() and asfreq()
```
# Set start and end dates
start = '2016-1-1'
end = '2016-2-29'
# Create monthly_dates here
monthly_dates = pd.date_range(start = start, end = end, freq = 'M')
# Create and print monthly here
monthly = pd.Series(data = [1,2], index = monthly_dates)
print(monthly)
# Create weekly_dates here
weekly_dates = pd.date_range(start = start, end = end, freq = 'W')
# Print monthly, reindexed using weekly_dates
print(monthly.reindex(index = weekly_dates, method = None))
print(monthly.reindex(index = weekly_dates, method = 'bfill'))
print(monthly.reindex(index = weekly_dates, method = 'ffill'))
# Import data here
data = pd.read_csv('unemployment.csv', parse_dates=['date'], index_col='date')
# Show first five rows of weekly series
print(data.asfreq('W').head())
# Show first five rows of weekly series with bfill option
print(data.asfreq('W', method='bfill').head())
# Create weekly series with ffill option and show first five rows
weekly_ffill = data.asfreq('W', method='ffill')
print(weekly_ffill.head())
# Plot weekly_fill starting 2015 here
weekly_ffill.loc['2015':].plot()
plt.show()
```
### Upsampling & interpolation with .resample()
```
# Inspect data here
print(monthly.info())
# Create weekly dates
weekly_dates = pd.date_range(start = monthly.index.min(), end = monthly.index.max(), freq = 'W')
# Reindex monthly to weekly data
weekly = monthly.reindex(weekly_dates)
# Create ffill and interpolated columns
weekly['ffill'] = weekly.UNRATE.ffill()
weekly['interpolated'] = weekly.UNRATE.interpolate()
# Plot weekly
weekly.plot()
plt.show()
# Import & inspect data here
data = pd.read_csv('debt_unemployment.csv', parse_dates = ['date'], index_col = 'date')
print(data.info())
# Interpolate and inspect here
interpolated = data.interpolate()
print(interpolated.info())
# Plot interpolated data here
interpolated.plot(secondary_y = 'Unemployment')
plt.show()
```
#### Downsampling & aggregation
```
# Import and inspect data here
ozone = pd.read_csv('ozone.csv', parse_dates = ['date'], index_col = 'date')
print(ozone.info())
# Calculate and plot the weekly average ozone trend
ozone.resample('W').mean().plot()
plt.show()
# Calculate and plot the monthly average ozone trend
ozone.resample('M').mean().plot()
plt.show()
# Calculate and plot the annual average ozone trend
ozone.resample('A').mean().plot()
plt.show()
# Import and inspect data here
stocks = pd.read_csv('stocks.csv', parse_dates = ['date'], index_col = 'date')
print(stocks.info())
# Calculate and plot the monthly averages
monthly_average = stocks.resample('M').mean()
monthly_average.plot(subplots=True)
plt.show()
# Import and inspect gdp_growth here
gdp_growth = pd.read_csv('gdp_growth.csv', parse_dates = ['date'], index_col = 'date')
print(gdp_growth.info())
# Import and inspect djia here
djia = pd.read_csv('djia.csv', parse_dates = ['date'], index_col = 'date')
print(djia.info())
# Calculate djia quarterly returns here
djia_quarterly = djia.resample('QS').first()
djia_quarterly_return = djia_quarterly.pct_change().mul(100)
# Concatenate, rename and plot djia_quarterly_return and gdp_growth here
data = pd.concat([gdp_growth, djia_quarterly_return], axis = 1)
data.columns = ['gdp', 'djia']
data.plot()
plt.show()
# Import data here
sp500 = pd.read_csv('sp500.csv', parse_dates=['date'], index_col = 'date')
print(sp500.info())
# Calculate daily returns here
daily_returns = sp500.squeeze().pct_change()
# Resample and calculate statistics
stats = daily_returns.resample('M').agg(['mean', 'median', 'std'])
# Plot stats here
stats.plot()
plt.show()
```
### Rolling window functions with pandas
```
# Import and inspect ozone data here
data = pd.read_csv('ozone.csv', parse_dates = ['date'], index_col = 'date')
print(data.info())
# Calculate 90d and 360d rolling mean for the last price
data['90D'] = data.Ozone.rolling('90D').mean()
data['360D'] = data.Ozone.rolling('360D').mean()
# Plot data
data['2010':].plot(title = 'New York City')
plt.show()
# Import and inspect ozone data here
data = pd.read_csv('ozone.csv', parse_dates=['date'], index_col='date').dropna()
# Calculate the rolling mean and std here
rolling_stats = data.Ozone.rolling(360).agg(['mean', 'std'])
# Join rolling_stats with ozone data
stats = data.join(rolling_stats)
# Plot stats
stats.plot(subplots=True);
plt.show()
# Resample, interpolate and inspect ozone data here
data = data.resample('D').interpolate()
print(data.info())
# Create the rolling window
rolling = data.Ozone.rolling(360)
# Insert the rolling quantiles to the monthly returns
data['q10'] = rolling.quantile(0.1)
data['q50'] = rolling.quantile(0.5)
data['q90'] = rolling.quantile(0.9)
# Plot the data
data.plot()
plt.show()
```
## Expanding window functions with pandas
### Cumulative sum vs .diff()
```
# Calculate differences
differences = data.diff().dropna()
# Select start price
start_price = data.first('D')
# Calculate cumulative sum
cumulative_sum = start_price.append(differences).cumsum()
# Validate cumulative sum equals data
print(data.equals(cumulative_sum))
```
#### Cumulative return on $1,000 invested in Google vs Apple I
```
# Define your investment
investment = 1000
# Calculate the daily returns here
returns = data.pct_change()
# Calculate the cumulative returns here
returns_plus_one = returns + 1
cumulative_return = returns_plus_one.cumprod()
# Calculate and plot the investment return here
cumulative_return.mul(investment).plot()
plt.show()
# Import numpy
import numpy as np
# Define a multi_period_return function
def multi_period_return(period_returns):
    return np.prod(period_returns + 1) - 1
# Calculate daily returns
daily_returns = data.pct_change()
# Calculate rolling_annual_returns
rolling_annual_returns = daily_returns.rolling('360D').apply(multi_period_return)
# Plot rolling_annual_returns
rolling_annual_returns.mul(100).plot()
plt.show()
```
## Case Study: S&P500 price simulation
### Random Walk Simulation
```
from numpy.random import normal, seed, choice
# Set seed here
seed(42)
# Create random_walk
random_walk = normal(loc = .001, scale = .01, size = 2500)
# Convert random_walk to pd.series
random_walk = pd.Series(random_walk)
# Create random_prices
random_prices = (random_walk + 1).cumprod()
# Plot random_prices here
random_prices.mul(1000).plot()
plt.show()
# Calculate daily_returns here
daily_returns = fb.pct_change().dropna()
# Get n_obs
n_obs = daily_returns.count()
# Create random_walk
random_walk = choice(daily_returns, n_obs)
# Convert random_walk to pd.series
random_walk = pd.Series(random_walk)
# Plot random_walk distribution
sns.distplot(random_walk)
plt.show()
# Select fb start price here
start = fb.price.first('D')
# Add 1 to random walk and append to start
random_walk = random_walk + 1
random_price = start.append(random_walk)
# Calculate cumulative product here
random_price = random_price.cumprod()
# Insert into fb and plot
fb['random'] = random_price
fb.plot()
plt.show()
```
## Correlations
#### Annual return correlations among several stocks
```
# Calculate year-end prices here
annual_prices = data.resample('A').last()
# Calculate annual returns here
annual_returns = annual_prices.pct_change()
# Calculate and print the correlation matrix here
correlations = annual_returns.corr()
print(correlations)
# Visualize the correlations as heatmap here
sns.heatmap(correlations, annot = True)
plt.show()
```
## Select index components and import data
#### Explore and clean company listing information
```
# Inspect listings
print(listings.info())
# Move 'stock symbol' into the index
listings.set_index('Stock Symbol', inplace = True)
# Drop rows with missing 'sector' data
listings.dropna(inplace = True)
# Select companies with IPO Year before 2019
listings = listings[listings['IPO Year']<2019]
# Inspect the new listings data
print(listings.info())
# Show the number of companies per sector
print(listings.groupby('Sector').size().sort_values(ascending = False))
```
#### Select and inspect index components
```
# Select largest company for each sector
components = listings.groupby(['Sector'])['Market Capitalization'].nlargest(1)
# Print components, sorted by market cap
print(components.sort_values(ascending=False))
# Select stock symbols and print the result
tickers = components.index.get_level_values('Stock Symbol')
print(tickers)
# Print company name, market cap, and last price for each component
info_cols = ['Company Name', 'Market Capitalization', 'Last Sale']
print(listings.loc[tickers, info_cols].sort_values('Market Capitalization', ascending=False))
```
#### Import index component price information
```
# Print tickers
print(tickers)
# Import prices and inspect result
stock_prices = pd.read_csv('stock_prices.csv', parse_dates=['Date'], index_col='Date')
print(stock_prices.info())
# Calculate the returns
price_return = stock_prices.iloc[-1].div(stock_prices.iloc[0]).sub(1).mul(100)
# Plot horizontal bar chart of sorted price_return
price_return.sort_values().plot(kind='barh', title='Stock Price Returns')
plt.show()
```
## Build a market-cap weighted index
#### Calculate number of shares outstanding
```
# Inspect listings and print tickers
print(listings.info())
print(tickers)
# Select components and relevant columns from listings
components = listings.loc[tickers][['Market Capitalization', 'Last Sale']]
# Print the first rows of components
print(components.head())
# Calculate the number of shares here
no_shares = components['Market Capitalization'].div(components['Last Sale'])
# Print the sorted no_shares
print(no_shares.sort_values(ascending = False))
```
#### Create time series of market value
```
# Select the number of shares
no_shares = components['Number of Shares']
print(no_shares.sort_values())
# Create the series of market cap per ticker
market_cap = stock_prices.mul(no_shares)
# Select first and last market cap here
first_value = market_cap.iloc[0]
last_value = market_cap.iloc[-1]
# Concatenate and plot first and last market cap here
pd.concat([first_value, last_value], axis = 1).plot(kind = 'barh')
plt.show()
```
#### Calculate & plot the composite index
```
# Aggregate and print the market cap per trading day
raw_index = market_cap_series.sum(axis=1)
print(raw_index)
# Normalize the aggregate market cap here
index = raw_index.div(raw_index.iloc[0]).mul(100)
print(index)
# Plot the index here
index.plot(title = 'Market-Cap Weighted Index')
plt.show()
```
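Rebasing to 100 is just a division by the first observation; a toy series (values are illustrative) makes the effect obvious:

```python
import pandas as pd

# Illustrative aggregate market cap over four days
raw = pd.Series([50.0, 55.0, 52.5, 60.0])

# Divide by the first value and scale, so the series starts at 100
rebased = raw.div(raw.iloc[0]).mul(100)
print(rebased.tolist())  # [100.0, 110.0, 105.0, 120.0]
```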
#### Calculate the contribution of each stock to the index
```
# Calculate and print the index return here
index_return = ((index.iloc[-1]/index.iloc[0])-1)*100
print(index_return)
# Select the market capitalization
market_cap = components['Market Capitalization']
# Calculate the total market cap
total_market_cap = market_cap.sum()
# Calculate the component weights, and print the result
weights = market_cap/total_market_cap
print(weights.sort_values())
# Calculate and plot the contribution by component
weights.mul(index_return).sort_values().plot(kind = 'barh')
plt.show()
```
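Because the weights sum to one, the per-stock contributions plotted above must add back up to the total index return; a toy check with two hypothetical components:

```python
import pandas as pd

# Hypothetical component market caps and an index return of 12%
market_cap = pd.Series({'AAA': 300.0, 'BBB': 100.0})
index_return = 12.0

weights = market_cap / market_cap.sum()    # 0.75 and 0.25
contributions = weights.mul(index_return)  # 9.0 and 3.0
print(contributions.sum())  # 12.0
```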
#### Compare index performance against benchmark I
```
# Convert index series to dataframe here
data = index.to_frame(name = 'Index')
# Normalize djia series and add as new column to data
djia = djia.div(djia.iloc[0]).mul(100)
data['DJIA'] = djia
# Show total return for both index and djia
print(data.iloc[-1].div(data.iloc[0]).sub(1).mul(100))
# Plot both series
data.plot()
plt.show()
```
#### Compare index performance against benchmark II
```
# Inspect data
print(data.info())
print(data.head())
# Create multi_period_return function here
def multi_period_return(r):
    return (np.prod(r + 1) - 1) * 100
# Calculate rolling_return_360
rolling_return_360 = data.pct_change().rolling('360D').apply(multi_period_return)
# Plot rolling_return_360 here
rolling_return_360.plot(title = 'Rolling 360D Return')
plt.show()
```
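`multi_period_return` compounds simple period returns geometrically rather than adding them; a quick sketch shows the difference on two +10% periods:

```python
import numpy as np

def multi_period_return(r):
    # compound simple period returns, result in percent
    return (np.prod(r + 1) - 1) * 100

# Two periods of +10% compound to about +21%, not +20%
r = np.array([0.10, 0.10])
print(multi_period_return(r))
```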
## Index correlation & exporting to Excel
#### Visualize your index constituent correlations
```
# Inspect stock_prices here
print(stock_prices.info())
# Calculate the daily returns
returns = stock_prices.pct_change()
# Calculate and print the pairwise correlations
correlations = returns.corr()
print(correlations)
# Plot a heatmap of daily return correlations
sns.heatmap(correlations, annot = True)
plt.title('Daily Return Correlations')
plt.show()
```
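`DataFrame.corr` computes pairwise Pearson correlations column by column; a toy example with perfectly co-moving and counter-moving returns:

```python
import pandas as pd

# Toy daily returns: Y moves exactly with X, Z exactly against X
returns = pd.DataFrame({'X': [0.01, -0.02, 0.03],
                        'Y': [0.02, -0.04, 0.06],
                        'Z': [-0.01, 0.02, -0.03]})
corr = returns.corr()
print(corr.round(2))  # X/Y correlation is 1.0, X/Z is -1.0
```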
#### Save your analysis to multiple excel worksheets
```
# Inspect index and stock_prices
print(index.info())
print(stock_prices.info())
# Join index to stock_prices (name the Series so join can label the new column)
data = stock_prices.join(index.rename('Index'))
print(data.info())
# Create index & stock price returns
returns = data.pct_change()
# Export data and returns to Excel (.xlsx — the legacy .xls writer is deprecated)
with pd.ExcelWriter('data.xlsx') as writer:
    data.to_excel(writer, sheet_name='data')
    returns.to_excel(writer, sheet_name='returns')
```
```
import cv2
import pytesseract
import numpy as np
import matplotlib.pyplot as pplt
from pytesseract import Output
image = cv2.imread('/home/cwhyse/BloomProj/scribble-stadium-ds/data_management/autopreprocess_testing/data/Photo 3130 .jpg')
d = pytesseract.image_to_data(image, output_type=Output.DICT)
# d["text"]
# get grayscale image
def get_grayscale(image):
    return cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)

# noise removal
def remove_noise(image):
    return cv2.medianBlur(image, 5)

# thresholding
def thresholding(image):
    return cv2.threshold(image, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)[1]

# dilation
def dilate(image):
    kernel = np.ones((5, 5), np.uint8)
    return cv2.dilate(image, kernel, iterations=1)

# erosion
def erode(image):
    kernel = np.ones((5, 5), np.uint8)
    return cv2.erode(image, kernel, iterations=1)

# opening - erosion followed by dilation
def opening(image):
    kernel = np.ones((5, 5), np.uint8)
    return cv2.morphologyEx(image, cv2.MORPH_OPEN, kernel)

# canny edge detection
def canny(image):
    return cv2.Canny(image, 100, 200)

# skew correction
def deskew(image):
    coords = np.column_stack(np.where(image > 0))
    angle = cv2.minAreaRect(coords)[-1]
    if angle < -45:
        angle = -(90 + angle)
    else:
        angle = -angle
    (h, w) = image.shape[:2]
    center = (w // 2, h // 2)
    M = cv2.getRotationMatrix2D(center, angle, 1.0)
    rotated = cv2.warpAffine(image, M, (w, h), flags=cv2.INTER_CUBIC,
                             borderMode=cv2.BORDER_REPLICATE)
    return rotated

# template matching
def match_template(image, template):
    return cv2.matchTemplate(image, template, cv2.TM_CCOEFF_NORMED)

# draw boxes around individual words recognised with confidence > 60
def bounding_boxes(image):
    n_boxes = len(d['text'])
    for i in range(n_boxes):
        if int(d['conf'][i]) > 60:
            (x, y, w, h) = (d['left'][i], d['top'][i], d['width'][i], d['height'][i])
            image = cv2.rectangle(image, (x, y), (x + w, y + h), (0, 255, 0), 2)
    return image
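# Quick numpy-free check of the angle fix-up used in deskew() above
# (assumption: cv2.minAreaRect returns angles in (-90, 0], as in older
# OpenCV releases; _fix_angle is a sketch of that if/else branch only)
def _fix_angle(angle):
    if angle < -45:
        return -(90 + angle)
    return -angle

print(_fix_angle(-80), _fix_angle(-10))  # -10 10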
opened = opening(image)
pplt.imshow(opened)
pplt.title('my picture')
pplt.show()
# use a distinct name so the canny() helper is not shadowed
edges = canny(image)
pplt.imshow(edges)
pplt.title('my picture')
pplt.show()
edges.shape
# img = cv2.imread(image)
image = cv2.imread('/home/cwhyse/BloomProj/scribble-stadium-ds/data_management/autopreprocess_testing/data/Photo 3130 .jpg')
gray = get_grayscale(image)
pplt.imshow(gray)
pplt.title('my picture')
pplt.show()
clean = remove_noise(image)
pplt.imshow(clean)
pplt.title('my picture')
pplt.show()
# thresh = thresholding(opened)
# pplt.imshow(thresh)
# pplt.title('my picture')
# pplt.show()
# dilate = dilate(opened)
# pplt.imshow(dilate)
# pplt.title('my picture')
# pplt.show()
# match = match_template(opened)
# pplt.imshow(match)
# pplt.title('my picture')
# pplt.show()
# bounding_boxes(opened)
# pplt.imshow(opened)
# # pplt.title('my picture')
# pplt.show()
# deskew = deskew(image)
# pplt.imshow(deskew)
# pplt.title('my picture')
# pplt.show()
# import numpy as np
# from scipy.misc import imshow, imsave, imread
imggray = np.mean(image, -1)
imfft = np.fft.fft2(imggray)
mags = np.abs(np.fft.fftshift(imfft))
angles = np.angle(np.fft.fftshift(imfft))
visual = np.log(mags)
visual2 = (visual - visual.min()) / (visual.max() - visual.min())*255
#
mask = image[:,:,:3]
mask = (np.mean(mask,-1) > 20)
visual2[mask] = np.mean(visual)
# !pip install opencv-contrib-python
newmagsshift = np.exp(visual)
newffts = newmagsshift * np.exp(1j*angles)
newfft = np.fft.ifftshift(newffts)
imrev = np.fft.ifft2(newfft)
newim2 = 255 - np.abs(imrev).astype(np.uint8)
# pplt.imshow(newim2)
# pplt.show()
newim2.shape
# # cannyPlus = canny(newim2)
# openedPlus = opening(newim2)
# cleanPlus = remove_noise(newim2)
cv2.imwrite("/home/cwhyse/BloomProj/scribble-stadium-ds/data_management/autopreprocess_testing/data/transformedData/Photo 3130 (fourier).jpg", newim2)
```
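The Fourier cell above splits the shifted spectrum into magnitudes and phase angles and later recombines them with `mags * exp(1j*angles)`; a numpy-only sketch on a tiny synthetic image confirms the round trip is lossless:

```python
import numpy as np

# Small synthetic grayscale "image"
img = np.arange(16, dtype=float).reshape(4, 4)

# Forward FFT, shifted, then split into magnitude and phase
spec = np.fft.fftshift(np.fft.fft2(img))
mags = np.abs(spec)
angles = np.angle(spec)

# Recombine and invert: recovers the original image up to float noise
recon = np.fft.ifft2(np.fft.ifftshift(mags * np.exp(1j * angles)))
print(np.allclose(recon.real, img))  # True
```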
```
import malaya
malays = malaya.texts._malay_words._malay_words
import re
from unidecode import unidecode
def cleaning(string):
    string = unidecode(string).replace('.', '. ').replace(',', ' , ')
    string = re.sub('[^\'"A-Za-z\-/ ]+', ' ', string)
    string = re.sub(r'[ ]+', ' ', string.lower()).strip()
    return string
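# Sketch of what cleaning() does to a string; assumption: this re-only copy
# skips the unidecode ASCII-folding step, which only matters for accented input
def _clean_sketch(string):
    string = string.replace('.', '. ').replace(',', ' , ')
    string = re.sub('[^\'"A-Za-z\-/ ]+', ' ', string)
    return re.sub(r'[ ]+', ' ', string.lower()).strip()

print(_clean_sketch("Hello, WORLD... 123!"))  # hello world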
# http://rembauboy.blogspot.com/2015/10/perkataan-negeri-sembilan.html
additional = """
angka – suhu badan seperti hendak demam
2. ampai – jemur kain
3. acup – dalam / tenggelam
4. asey – saya
5. alih baro –
7. amba – gulai tidak rasa garam
8. amba-ambaan – kain yang tidak kering atau tidak basah
9. ambek kau - terima padahnya
10. arai-arai - pokok bunga yang tidak boleh disuburkan lagi
11. aka-aka - dipermain-mainkan
1. berbiri-biri – bertolak-tolakan
2. bumin – gelap gelita
3. bega – degil
4. boleng – bogel
5. bogek – degil
6. bincek – masalah
7. bingit – bising
8. bocek – anak ikan haruan
9. bekecapuih – buat kerja tak cukup masa
10. beketupeh – buat kerja tak cukup masa
11. boda – pukul kuat dengan kayu
12. bengot – rosak ( dented )
13. baeh – baling dengan kayu
14. bersememaian – bertaburan
15. betokak – berkelahi
16. banga – busuk
17. bingai – gila / bodah
18. bonda – anak air
19. buah baru mencelah – buah baru nak masak
20. buah tengah lahat – buah tengah lebat
21. buah dah teriang – buah dah nak habis
22. bederet-deret – berbaris-baris
23. bepinau – kepala pusing
24. bekerunai – kotor
25. bintat-bintat – kesan gigitan serangga
26. bongka – timbul atau bangun
27. berong – tidak lurus
28. bobot – gemok dan gebu
29. balon – habiskan semua makanan
30. bontuih – kenyang-sekenyangkenyangnya / kenyang dek minum
31. bangang – bodoh
32. basau – lemah / makanan yang lebih tempohnya / basi
33. binawe – perkataan yang bermaksud cacian
34. bingkeh – tercabut / masuk perangkap setelah tertutup
35. bicak – tanah basah dek hujan
36. bincut – bengkak
37. binjek – ambil sedikit makanan dengan jari
38. bok hisap – makan apa sahaja yang dapat
39. batat – pekat dan keras
40. bangka – kayu pokok yang terendam di dalam sungai atau paya
41. buntang – rupa mata yang bulat dan besar
42. boko – makanan seperti wajik atau penganan yang dibawa ke rumah saudara selepas kawin
43. berpondot-pondot - buah dipokok terlalu banyak seperti betik
44. badi – semangat penyakit
45. belobau – terjatuh ke dalam air
46. bongap – bodoh
47. bodobin – jatuh terhempas ke tanah
48. boseit – libas
49. bosa burak – cakap besar
50. bosulumak – mulut penuh dengan sisa makanan
51. bosulumeh - sama seperti di atas
52. bongak / mongak – bohong / tipu
53. biaweh – buah jambu kampung yang banyak biji
54. belorak – merasai akan buah yang belum masak, masih muda lagi.
55. berjuntai-juntai – contohnya buah rambai yang terlalu lebat dan memanjang ke bawah
56. berloghrup – dirasai apabila mengunyah buah yang hampir-hampir masak seperti buah jambu batu.
57. berpiong – berpusing
58. berbalau-balau – berbelang-belang / bersinar-sinar
59. belolah – berlari
60. berjelago – kotoran pada pelita minyak tanah
61. bonak - bodoh
62. bersepah-sepah - terlau banyak bilangannya
63. bungko - bodah teramat sangat
64. burak - berbual-bual
65. bedobet - jatuh dengan kuat
66. bodontung - bunyi yang amat kuat
67. belum poghopan - belum siap lagi
68. bodosing - sakit telinga mendengar akan sesuatu
69. bantai kau - lakukan apa yang kau suka
70. bedorau - terkejut ( contoh: bodorau darah den )
91. bedorau - dah turun ( contoh: hujan dah bedorau )
92. bedama - pening dan terdiam ( bedama muke eh! kena lompang )
93. bersapau - duduk atau tinggal berlama-lamaan di suatu tempat
94. bergasak - makan dengan banyak atau buat bersungguh-sungguh
95. berangguh - berkumpul sanak saudara di suatu rumah sewaktu perkahwinan
96. bangsai - punah atau rosak habis ( rumah moyang eh ! dah bangsai )
97. cun melecun - tersangat cantik ( terbit dari percakapan remaja sekarang )
98. bonggeng - terdedah
99. bongok - bodoh
100.bonto - sejenis tumbuhan rendah dalam paya yang menyebabkan badan gatal jika disentuh
101.begolek - kenduri besar
102.bergolekan - terdapat dengan banyak di atas tanah seperti buah durian yang jatuh waktu malam
103.bergolek-gelantang = tinggal di suatu tempat yang dikunjungi seperti rumah sendiri
104.berenggeir - tinggal di suatu tempat yang dikunjung seperti rumah sendiri
105.beghaghit - bergerak ( malasnya kau ni ! beghaghitlah sikit ).
106.bergayut-gayut - buah yang lebat di dahannya .
107.bajau - tidak tinggal setempat / baju T
108.baning - degil yang teramat sangat
109.bayang - seperti hendak tumbang
110.bentong - kecil tak mahu besar-besar
111.biaweh -jambu batu
112.biak - lecah / kawasan air bertakung
113.bincau - kecoh sambil marah
114.bidik - baling
115.binga -pekak
116.bobok - air masuk ke dalam mulut dengan banyaknya
117.bobek - balut
118.becoghabih - banyak cakap
119.becoghidau -membising
120.bocokak - berkelahi
121.bodahagi - susah / leceh
122.bodaso - berbaloi
123.boghipuk - bertimbun ( kain baju yang bertimbun )
124.bodok - pukul dengan kayu
125.boghoghok - bertengkar
126.boghojin - naik angin / marah
127.bogomunto - bunyi bising
128.bojangek - lama menunggu
129.bogho - muka merah padam kerana marah atau malu
130.boghonjeng - menari-nari
131.boghoroman - kain baju yang berlunggunk-lungguk
132.bojoman - menunggu terlalu lama
133.bojooh - buat kerja ramai-ramai dengan suka ria
134.bojonggeng - berjemur
135.bokiak - suara kanak-kanak menangis secara beramai-ramai
136.bokighah - bergerak
137.bokonak - berkumpul anak beranak sebagai persiapan untuk kenduri kawin
138.botating - menghidangkan makanan
139.bokolelang - sakit yang teramat sangat
140.bokotunjang - menahan sakit yang teramat sangat
141.bokuak - mengarah orang supaya ke tepi
142.bekolintau - berpusing-pusing di tempat yang sama
143.bokoludat - keruh atau air berkarat
144.bokotupak - bau busuk yang terperangkap
145.bokotenteng - terloncat-loncat kerana sakit
146.bokotinjah - memijak-mijak sesuatu dengan sesuka hati
147.bokotuntang - kelam kabut
148.bokotutuih - becok
149.bokundo-kundo - bergerak kesuatu tempat beramai-ramai
150.bolimpap - kain yang berlonggok dan belum dilipat
151.bolo - membersihkan haiwan selepas disembelih
152.bolong - bocor yang besar
153.bondong - angkut atau punggah
154.bonak - bodoh atau bebal
155.bonyo - biskut yang direndam terlalu lama di dalam minuman
156.bosoka - gula yang berlebihan pada kuih wajik
157.bosoko - merayau tanpa tujuan.
158.bosolehet - comot / kotor
159.bosolopeng - kerak hingus yang melekat di pipi
160.bosolopet - najis yang tidak habis dicuci yang melekat di punggung
161.bosolepot - duduk atas lantai
162.bosomemai - bertaburan
163.bosonghayang - bergelandangan
164.botenggong - duduk atas batang kayu yang sudah rebah
165.botighai - kain yang koyak rabak
166.bosolighat - keadaan yang terlalu banyak atau tidak terurus
167.bototam - berhimpit-himpit untuk melihat sesuatu yang menarik ( excited )
168.bukat - air yang berkeladak
169.boepak-epak - berlambak-lambak
170.bobenta - berpusing-pusing
171.bodamar - perik akibat kena tampar
172.bodocit -berbunyi
173.bodosup -laju
174.boghombik - terlalu banyak
175.bolengkar - tidur merata-rata
176.bolongging - tidak memakai baju
177.bejo-ong - untung besar
1. comporo – mandi lama dan bersuka-sukaan dalam sungai
2. cekadak – perangai
3. capal – selipar
4. cobak – gali
5. cetot – hubungan sex
6. cemat – pukul dengan ranting kayu atau lidi
7. cocokehan – nak cepat
8. cuek – tanamkan / pacakkan
9. ceneke – tidak berlaku jujur
10. cikun – curi
11. cikai – bayaran ke atas sesuatu
12. cucut – sedut air
13. congkot – kesejukan
14. calong – bekas air
15. cebok – bekas air atau basuh punggung selepas buang air besar
16. chairmay – kosong ( daun trup )
17. cupil - di tepi / nyaris jatuh kebawah
18. cangap - cabang di hujung buluh untuk mengambil buah
19. cegak mata kau - terang mata selepas minum teh
20. cuak - takut
21. cengini - macam ini ( cengini cara buek eh ! )
22. cokau - tangkap ( cokau ayam dalam reban tu ! )
23. colek - sentuh ( jangan dicolek dinding yang baru dicat tu ! )
24. congkau - capai ( tak tercongkau den den yo ! )
25. cun-melocun - tersangat cantik ( terbit dari percakapan remaja sekarang )
26. cembeng - sedih
27. cibir - menjelir lidah
28. cilut - curi benda-benda kecil
29. coghreng - comot
30. cobau - tetak
31. celait - lampu suluh ( berasal dari BI - touch-light )
32. cogholang - mata yang terbuka dan bersinar
33. capuk-capuk - bertompok-tompok
34. congkuit - pengikut ( followers )
35. copot - sejemput / ambil sedikit dengan tangan
36. cuci - rosak / punah
37. coghuh - lumat
38. colungap - makan dengan golojoh
1. dobik – pukul dengan tangan khususnya di belakang badan
2. den – saya
3. debot – gemok
4. dengket – air cetek
5. dompot – pendek
6. dobush – terlepas
7. dogheh – cepat
8. dodat / dudut – minum tanpa gunakan cawan atau gelas
9. dedulu – zaman dahulu
10. dah kelicit luho – sudah masuk zuhur
11. digodak - didapatkan atau dicari
12. dijujut - ditarik
13. dah bungkuih - gagal atau sudah mati
14. diruih - curahkan
15. dibogolkan - dikumpulkan
16. dek ari - disebabkan cuaca panas
17. dikenok-kenoknya - seseorang yang mengatakan apa sahaja yang dibuat oleh orang lain semuanya salah belaka.
18. dengkot - terhincut-hincut
19. diang - memanaskan badan dekat dengan api
20. domut - lemah / lembab
1. ensot – bergerak sedikit
2. epong – makan buah tanpa mengupas kulitnya
4. engkoh - mengurus / peduli
5. entaher - mengiyakan sesuatu
1. gana – semata-mata
2. gedempong – terlalu gemok
3. ginco – gaulkan
4. ghuyup – basah kuyup
5. ghoroh – nyawa
6. ghubu – jolok
7. ghobu – putih empuk
8. gunjai – tinggi
9. gundu – tinggi lampai
10. gacik – potong kecil-kecil
11. gobok – tempat menyimpan makanan
12. godak – kacau
13. gombo – pecah-pecahkan tanah
14. goyeh – tiang atau gigi longgar
15. gait – ambil buah dengan kayu
16. ghongak – rosak
17. gaha – gesel
18. golo – dapat masalah / susah
19. ghongkoh – menarik dahan ke bawah supaya buah boleh diambil
20. ghruntun – buah digugurkan sebelum masak oleh kera
21. ghrompong – gigi berlubang
22. godang – besar
23. ghamuih – muka penuh dengan bulu ( hairy )
24. gaysot – potong atau kerat dengan pisau atau gergaji
25. gaywai – pukul dengan benda panjang
26. gayat – orang asli
27. ghombay – tak diperdulikan akan cakap seseorang
28. ghayat – takut akan ketinggian
29. ghemah – sisa makan di atas lantai
30. goba – selimut
31. ghoman – raut muka
33. gelesek – gosok dengan kuat
34. gaynyot – ubi atau buah yang sudah masuk angin / lisut
36. gairbong – disondol lalu dimakan seperti babi memakan ubi kayu
37. ghewang - khayal
38. ghodah - dibuat / dicari
39. ghopoi - hancur
40. ghahai - rosak teruk
41. gobek - sejenis alat untuk melumatkan sirih ( berasal dari perkataan Inggeris " go and back " )
42. gudam - kena hentam ( berasal dari perkataan Inggeris - god damn )
43. ghairbeh - tidak terurus
44. gasak kau - buat sendiri dan terimalah akibatnya
45. gesek - sembelih dengan pisau
46. gedobok - besar dan hodoh
47. gilan-gilan - pandir
48. ghontian - minta maaf ( berasal dari perkataan hentikan - ghontian takkan dapek dek kau do ? )
49. ghinek - tanda-tanda
50. ghopang - pangkas / potong
51. gabak - koyak atau kehabisan
52. geboih - terlalu besar
53. ghembat - bergaduh / baling sesuatu ke dinding / dicuri orang
54. ghembeh - sejenis cangkul / tajak
55. ghobeh - hujan renyai
56. gombeh - mengada-ngada keterlaluan
57. ghopeh - makan
58. ghosan - hampir basi
59. ghenceh - makhluk yang kononnya memotong kepala untuk takutkan kanak-kanak
60. ghinso - badan rasa tidak selesa kerana panas atau tidak mandi
61. ghopong - alat ditiup untuk menghidupkan api
62. godek - kacau
63. gelemey - lemak pada daging lembu
64. gomeng - kelapa yang tiada isi / menggoyangkan kepala
65. ghepeng - penyek
66. gaduh -risau
67. golo dek eh - susahlah macam ini
68. gheha - tempat meletakkan al-quran untuk dibaca
1. hobin – pukul
2. hulu-hala – tidak tentu arah kerana dahaga
3. hamput – memarahi
4. hoga – menggoncang pokok supaya buah gugur
5. hurai – buka dan taburkan
6. hongeh – penat berjalan / makan terlalu banyak
7. hampeh – tak berguna
8. hapah – apapun tiada
9. humang ai ! - kagum akan sesuatu
10. herot – tidak lurus
11. haram – tidak melakukan permintaan atau arahan seseorang
12. hadang - halang atau penghalang
13. hunjam - jalan sejauh-jauhnya dengan sepenuh tenaga
14. hapik - tidak diperdulikan
15. hati kaudu - sangat busuk hati
16. hojan - teran / meneran
17. humbeh - gasak / bantai
18. hapak - busuk
I
1. ibu sako – generasi tua
2. imbeh - urus / layan
J
1. jongkeh – mati keras / ketawa bagai nak rak
2. jongkit - degil
3. jobo – tidak tahu malu
4. jobak – perangkap burung
5. Jibam – nama orang tempatan
6. joghan – serik
7. jogheh – buat penat
8. joki – rezeki
9. jigat – kaku kerana menangis
10. juang kail - joran pancing
11. jadah - faedah ( contoh: apa jadah eh ! )
12. jorang air - masak air
13. juntang-kalang - dalam keadaan yang tidak kemas dan tersusun,
14. jalang - cair / tidak pekat
15. jamba - dulang
16. jingang - terpinga-pinga
17. joghan - serik
18. jokin - teramat yakin
19. judu - pasangan
20. jujut - ditarik
21. junujanah - mereka / mereka cerita
K
1. koka – kacau padi yang dijemur supaya cepat kering
2. kolokai – habis semua
3. kono losang – kena sengat
4. kenairneng – tidak tentu arah
5. koghopai – bakul
6. ketuntang – tak duduk diam
7. ketenteng – jalan dengan satu kaki
8. kompot – kudong
9. ketot – pendek
10. kola – potong daging ikan atau daging secara berbaris-baris
11. kuti – cubit dengan jari
12. kelilau – tidak tentu arah
13. kluklaha – tidak tentu arah kerana pedas
14. koporit – terlalu nakal ( berasal dari perkataan Inggeris – culprit )
15. kepiting – kedekut
16. kirap – buang atau tuang air ke dalam longkang
17. kopok – tempat menyimpan padi
18. kongkang – duduk di atas bahu orang yang bergerak
19. kuak – ketepikan
20. kubeh – putih atau cerah
21. kepantangan – pantang sekali
22. kelibat – bayang seseorang
23. kirai kain – goyangkan kain
24. kobeh – curi / upacara berubat
25. kelimbek – bayangan seseorang
26. koting – bahagian kaki dari lutut ke bawa
27. kelihar sedikit – lega sedikit
28. komat – kena ilmu
29. kisai – mengganggu lipatan kain / bertaburan
30. kopam – kehitaman kerana dah lama tak basuh
31. kelolong – suka buat bising
33. kecoloan – tidak dapat menutup malu / kantoi
34. kolinggoman – geli melihat akan sesuatu seperti ulat
35. kataden – saya berkata
36. katakau – kamu yang berkata
37. keirbok – ketepikan
38. konit – kecil
39. kocet – poket
40. kochik – kecil
41. koman – ketinggalan zaman ( berasal dari perkataan Inggeris - common )
42. kitai – kibas
43. kenceng – kenyit
44. kotomeneh – ragam
45. kainsal – tuala mandi
46. kelompong – kosong
47. kolopong – ondeh-ondeh / buah Melaka
48. koyok – sombong
49. koleh – keluarkan isi durian dari bijinya untuk dibuat tempoyak
50. koghroh – dengkur
51. keirpok – lentur atau bengkokkan
52. kederek kebaruh – merayau-rayau dari rumah ke rumah
53. kaba – pati kelapa perahan kedua dan ketiga
54. kuih-kuihkan – ketepikan
55. kohok – busuk / niat buruk
56. kobok – mengambil lauk dengan tangan, sepatutnya dengan sudu ketika makan
57. kebagroh – hala ke bawah
58. kederek – hala ke atas
59. kicak – meminta kerana tidak malu
60. kalang - empangan atau sekatan
61. keladak - kotoran di dalam air
62. kopek - ketiak / buah dada wanita
63. kupek - buka atau belah buah-buahan
64. kandar - pikul
66. keluneh - kecil
67. kompuih - bertambah kecil / menjadikan perut bertambah kecil
68. kame - uli / bancuh dengan tangan
69. karang - selepas ini
70. kawan - suami atau isteri
71. kayu berakal - membuat persiapan untuk kenduri kahwin
72. kemot - kemek
73. kenen - mengurat anak dara orang / nak dijudukan
74. kepeh - basah kuyup
75. kepet - kempis / tidak gebu
76. kecet - tiada biji
77. keset - biji yang kecil
78. kilit - kelecek bola
79. kitai - libas / kibas
80. kobok - ambil sesuatu di tempat yang dalam tanpa melihatnya
81. koghobong - ambil makanan tanpa menggunakan sudu
82. koghomau / koghomuih - ramas atau cakar kerana marah
83. koghajat - juadah
84. koghuntong / kuntong - bekas untuk mengisi ikan di pinggang semasa menjala
85. kokhoghrek - baki makanan yang tinggal sedikit.
86. kolapo - mencabahkan biji benih
87. kolayak - diterbangkan oleh angin
88. kelemot - tidak licin atau tidak rata
89. kolikanak - pintu dari ruang tengah rumah menuju ke dapur
90. kololong - kesakitan akibat terkena benda panas
91. kolompong - buah tiada isi dimakan tupai
92. kolongkiak - sejenis serangga yang menggigit dari jenis anai-anai
93. kolopo - dikebas / dilesang / diserang oleh ayam
94. koluhum - umum / pukul rata
95. kolenset - pukul / lipat ke atas / tanggalkan kulit ayam selepas disembelih
96. kolokati - kacip pinang
97. kolosar - buat kerja tidak cermat
98. kelimbahan - air bertakung di belakang dapur tidak mengalir dan berbau busuk.
99. kolutong / koghutong - kerumun
100.kelimbuaian - perangai tidak senonoh / tidak tahu ditegah orang
101.komap - terdiam / tergamam
102.komat - tarikan / terkena ilmu orang / terkena gula-gula
103.koghumuih - menghurung
104.koliha - kelaparan
105.komeh - menghabiskan nasi atau lauk
106.kopam -hitam / kotor
107.kosan eh - rupanya
108.kosompokan - terjumpa secara tidak sengaja
109.ketulahan - badi / sumpahan
110.kua - kacau nasi supaya masak serata
111.kucung -balut / balutan
112.kudap - makan
113.kuei - pondok tempat simpan padi / kepok
114.kumam - mengulum makanan
115.kutak-katik - buat ikut suka
116.kuyo - sejenis hama putih yg melekat di celah kacang panjang dan lain-lain
L
1. likat – pekat
2. leyteng – jentik telinga dengan jari
3. lapun – makan tanpa izin atau sekenyang-kenyangnya
4. lenyek pisang – hancurkan pisang untuk buat kueh
5. lunyah – dipijak
6. lurut – dipisahkan dari tangkainya
7. lobuk-lobak – jatuh dengan banyaknya
8. longkeh – tanggal
9. lelan – hayal / leka
10. logar – hantukkan
11. locut – baling dengan batu / cabut lari
12. lugeh – tumbuk dengan tangan dari arah bawah
13. lompa – anjung rumah
14. layo – salai
15. layoh – lemah
16. lutap – makan dengan cepat
17. lohong – bodoh / besar / barai
18. longjan – terlalu penat berjalan ( berasal dari perkataan Inggeris – long journey )
19. lontung – langgar
20. lokok – curam / berlubang
21. lolok – makan dengan banyak dan cepat
22. lopok – tampar
23. lopok- lopak – bunyi seperti orang berkelahi
24. licih – potong halus-halus
25. lobo – lepaskan / benamkan / masukkan
26. luruh – buang bisa dari badan pesakit / daun atau buah habis gugur
27. lombong – pinggan yang lekok ke dalam
28. layot – tua / rendah atau ke bawah
29. lumu – dipalitkan muka dengan sesuatu
30. lunyai – rosak / hancur
31. locit – membuang selemur dari hidung
32. locot – lebam terkena panas / melecur
33. longot – kotor
34. lontung-lontang –bunyi yang teramat bising
35. longuk-longak – keadaan yang terlalu teruk seperti jalan raya
36. losang – disengat oleh serangga / penangan seseorang
37. leweh – lemah / tawar
38. lunggah – dilanggar ( bulldoze )
39. lokek – kedekut
40. loghoi – hancur
41. loteng – bilik di tingkat atas ( perkataan Hokien )
42. loleh – potong atau kerat daging
43. longkung-longkang – bunyi seperti orang sedang membuat kerja
44. lilih – potong dengan pisau
45. lari larau - sesuatu yang tidak tepat dan berubah-ubah sepergi ketika menyanyi
46. lompung - tampar / pukul
47. langsai - selesai segala hutang
48. longang - tiada orang di sana
49. luak - berkurangan
50. licau - kehabisan
51. lapek - alas
52. lagu - perangai atau cara ( mungkin ditiru dari bahasa utara Semenanjung )
53. lomau - sudah masuk angin khususnya biskut
54. lungguk - kumpul atau simpan ( dah lamo den lungguk duit ni nak ke mekah ! )
55. losulosau - bunyi seperti sesuatu bergerak pantas di dalam semak
56. lisot - buah yang masak belum cukup tempohnya
57. ligat - gasing yang berpusing dengan lajunya
58. lompung-lompang - berbagai bunyi kedengaran yang teramat bising.
59. lulo - dimakan secara menelan dan tak payah dikunyah
60. loho - menjadi besar kerana selalu digunakan
61. landung - sepak
62. laghah - buruk
63. langgai - tempat membakar lemang
64. lanteh - tembus
65. lantung - tersangat busuk
66. lecok - coret
67. lele ( tolele ) - cuai
68. leweh - lemah
69. lobong - berlubang
70. locun - basah kuyup
71. lodoh - lumat / hancur
72. logat - menangis sampai tak keluar suara
73. lohong - berlubang yang sangat besar
74. lompa - beranda rumah
75. lonco - putus tunang
76. longit - kalah main
77. lonjap - basah
78. lotong - kotor
79. lontuih - panggilan untuk budak degil
80. lueh - muntahkan
81. lukuih - basah dek peluh
82. lulut - selut / lumpur
83. lukuk - ketuk kepala dengan jari
84. luncuih - tirus
85. luyu - mengantuk
M
1. meghasai – merana
2. memboko – terpinga-pinga
3. memancak-mancak – bersilat-silat
4. melanush – nasi penuh dengan kuah
5. mengoning – perut buncit
6. menyungko - jatuh tertangkup
7. mengeletek – mengelupur
8. mengugu – menangis tak berhenti-henti
9. menopong – pecah
10. mendobut – lari dengan pantas
11. mantai – sembelih lembu
12. mengoreh – gugur
13. monung – termenung
14. mengelopai – mengelupur
15. menteneng – terlalu kenyang sehingga nampak perut
16. melengkar – tidur macam ular
17. melangsit – tidur terlalu lama
18. mencerabih – cakap tidak tentu arah
19. mencangut-cangut – seperti nyawa-nyawa ikan
20. megrombik – terlalu banyak seperti ulat
21. mada – tidak dapat atau pandai belajar
22. memburan – buang air besar
23. melopong – memandang kosong ke arah sesuatu
24. menyelansang – memarahi
25. meghomut – banyak seperti semut
26. membibih – terlalu banyak air yang keluar
27. melahu – merayau
28. merambu – merayau
29. menajo – kail cucuk
30. menderu-deru – terlalu ramai
31. mengambur – cuba keluar dari bekas seperti ikan
32. meloncek – melompat
33. membobok – air keluar terlalu banyak
34. meladung – nasi penuh dengan kuah
35. melanguk – ke sana ke mari tanpa tujuan
36. monggo – membakar sesuatu
37. moghron – membakar sampah
38. mentedarah – makan dengan lahapnya
39. mencicit – melarikan diri dengan pantas seperti tikus atau tupai
40. mencelinap – hilang dengan cepat seperti arnab
41. mengelintin – melompat-lompat kerana kesakitan
42. membuto – tidur tidak kira masa
43. mencurek-curek – air keluar tidak henti-henti
44. mengengkot – kesejukan
45. menghengkoh – membawa barang berat
46. memek – lembik ( nasi ini memek )
47. mencelungap – mencuri makanan seperti kucing
48. mencikun – mencuri
49. mencanak – lari atau jalan laju tanpa menoleh ke belakang
50. mentanau – makan sehingga tegak seperti burung tanau
51. menconcong – berjalan sendirian tanpa memperdulikan orang lain
52. meladus – merokok
53. muak – jemu
54. mangga – pelepah kelapa
55. melambak-lambak - menunjukkan banyak
56. melolong – menangis sekuat-kuatnya
57. menitih – air terkeluar sedikit demi sedikit di bawah periuk
58. mendosut – memandu dengan laju
59. mohdoreh – mari cepat
60. mencocoh – tersangat penas
61. marin – hari itu / kelmarin
62. membaysey – kerap membuang air kecil
63. mencieh-cieh - berpeluh-peluh
64. mendhorak – ramai atau banyak
65. mengeliat – regangkan badan khususnya selepas tidur
66. menonoi – kudis atau luka yang bernanah dan sakit
67. menjoram – bunyi motor terlalu buat
68. mencipan – menghilangkan diri dengan pantas seperti lipan
69. memandai-mandai – buat-buat pandai
70. melayot-layot – buah rambutan contohnya terlalu lebat sehingga boleh dicapai tangan
71. menaga koghroh eh – berbunyi dengkurnya
72. memboyot – perut yang besar
73. mencirit – suatu cacian / punggung
74. membaning – berjalan di tengah panas tanpa baju
75. mamai – nyanyok
76. moyok – sakit
77. motan – penting ( berasal dari perkataan Inggeris Important )
78. melingkup - menghilangkan diri atau berambus
79. menggagau - kelam-kabut
80. monceh-monceh - yang kecil-kecil atau kurang penting
81. melempeng - tidak boleh digunakan / keadaan sangat teruk
82. mujor - bernasib baik
83. melehet-lehet - sesuatu yang melekat di sana sini
84. mengelebeh - kulit manusai yang suatu ketika tegang sekarang dah menjadi kendur
85. mengairlor - terlalu panjang
86. menodah - buang masa di suatu tempat
87. meghaban - pergi entah ke mana
88. memalam - waktu malam
89. menyeghingai - menyelongkar
90. melaircet - bahagian kaki yang bengkak memakai kasut terlalu lama
91. mogang - berkumpul dan menjamu selera beberapa hari sebelum masuk tarikh berpuasa
92. menongah - jalan ke tengah tanpa memperdulikan orang lain
93. moh doghreh - mari atau pergi cepat
94. melaghram - berlawa-lawaan
95. melancur - watery stools ( najis air )
96. makan ombong - boleh melakukan sesuatu yang disuruh orang yang diri merasa bangga
97. melancar - menghafal
98. membana - tidur sepanjang hari
99. manjangan - kepala pengamal ilmu hitam yang terbang waktu malam mencari mangsa
100. melilau-lilau - berjalan tidak tentu halatuju
101. mekasi - terima kasih ( terbit dari percakapan remaja sekarang )
102. mangkin - menjadi ( tapai yang kau buat itu tak mangkin )
103. mangai - uh mang eh ! ( expresi terkejut akan kehebatan sesuatu )
104. melantak - makan sebanyak yang mungkin
105. mensiang - sejenis tumbuhan tinggi dalam paya yang menyebabkan badan gatal jika disentuh
106. mengumbo-ngumbo - membesar dengan cepat / api membesar dengan cepat
107. melesa - bahagian bawah kain yang mencecah tanah
108. melonto-lonto - dahan pokok yang condong ke bawah kerana terlalu banyak buah.
109. menginding-ngiding - menghampiri untuk tujuan meminta sesuatu
110. menebeng-nebeng - menghampiri untuk tujuan mendapatkan sesuatu juga.
111. menyampah - meluat / benci akan sesuatu
112. manai - badan tidak sihat / lemah
113. menjala - tidur kekenyangan seperti ular
114. meleweh - pokok kepala tidak lagi berbuah dengan banyaknya
115. melampuh - terlalu banyak khususnya buahan
116. meong - kuat merajuk / otak berapa betul
117. moncongak / tercangok - duduk tanpa tujuan
118. monghahang - melalak
119. moghahau - bercakap kuat-kuat
120. moghelo - panjang
121. moghewai - berjalan yang tidak lurus
122. monghungap - keadaan nafas yang keletihan
123. moghanduk - mengharung air
124. mengeban - menjela-jela
125. monghungap - mengah
126. moghogheh - benda-benda seperti daun atau rambut berguguran
127. moghonggoh - pergi ke suatu tempat beramai-ramai
128. moghoning - bengkak kemerah-merahan
129. moghonyok - merajuk
130. moghosit - mengunyah makan ringan
131. moghotul - tukul besi
132. moghumbo-ghumbo - keadaan api marak dan menjulang-julang
133. moghuboi - rambut panjang yang tidak terurus
134. mogun - termenung
135. molampa - bertaburan
136. melampuh - terlalu banyak
137. molondut - melayut / kendur
138. melehet - hingus yang mengalir di pipi
139. molongoh - menangis berpanjangan
140. molaghah - buah-buahan yang gugur tidak berkutip
141. molayau / moleyak - keadaan air yang melempah ke atas lantai
142. molesa - kain yang labuh sehingga mencecah lantai
143. melese - suka duduk dekat-dekat dengan orang lain
144. moluha / molaha - kepedasan
145. molukah - perempuan yang suka duduk terkangkang
146. molunggun - bertimbun
147. momboko - terlepas sesuatu peluang / padan muka
148. mombumbung - pergi entah ke mana
149. membungkam - tidur
150. memepe - bunyi suara yang sumbang dan sakit telinga
151. mompulon - jatuh
152. monabe - tumpah / berciciran
153. monabir - darah yang banyak akibat luka
154. moncamek - benci sangat
155. moncoghidau - becok
156. moncolongo - harum
157. monculin - lari yang teramat laju
158. mondahagi / bedahagi - susah diajak berbuat sesuatu
159. mendaghek - temberang / cerita yang tak masuk akal
160. meneneng - sakit berdenyut
161. mengicak - meminta-minta
162. mengirap - naik darah
163. monghorik-hoghik - menyanyi atau bercakap sampai ternampak urat leher
164. mongokeh - kais seperti ayam
165. mengelintau - mencari sesuatu dengan perasaan cemas
166. mongolesah / mongolusuh - gelisah / duduk tak diam
167. monghonggang - bernafas dengan laju kerana keletihan
168. mongolinjang - terlompat-lompat kerana gembira
169. mongolujut - menggigil kerana keletihan
170. mengumba - berpatah balik
171. monogoh - jalan pergi dan datang dengan segera
172. menonong - berjalan pantas tanpa lihat kiri dan kanan
173. montegheh - perut yang kekenyangan
174. montibang - tak tau pergi ke mana
175. monueng - mengekor ke mana orang pergi
176. monugheh - tahu
177. monyisik - menyelongkar sama dengan monyoleme
178. monyolangsang - bertandang ke rumah orang
179. monyongit - bau busuk dari badan
180. monyopak - makan berbunyi
181. monyonggau - rasa pedas atau kepanasan
182. menyungkam - jatuh tersadung dengan muka ke bawah
183. menjeje - panjang, melele atau menitik
184. merengkeh - keadaan orang yang membawa bebanan berat
185. morengkoh / moghengkeh - keadaan berat bawa beban
186. manjang rugi - sentiasa rugi.
187. menghayau - dipenuhi air
188. mak datok - sungguh hairan
N
1. nonggok – duduk bersahaja
2. nangnga – pandanglah
3. nasi membubu – nasi penuh sepinggan
4. ngirap – marah
5. ngungkit – membawa cerita lama menyebabkan kemarahan
6. ngarit – bergerak melakukan sesuatu
7. ngosit – makan benda ringan sedikit demi sedikit
8. ngoca – tangkap ikan dengan jeramak ( jala kecil )
9. noneh – nanas
10. ngok atau ngong - bodoh ( ngong berasal dari perkataan Hokien )
11. nyuruk-nyuruk - sejenis permainan kanak-kanak ( hide and seek )
13. naik minyak - marah atau mengada-ngada
14. nijam - masa ini ( terbit dari percakapan remaja sekarang )
15. ngonyei - memakan sesuatu sepertu kuaci secara perlahan-lahan
16. ngosey - memakan sesuatu sepanjang waktu
17. nekmoyang sodap eh ! - tentang makanan yang terlalu sedap ( extremely )
18. nangoi - anak babi
19. ngisai - buat sepah / selongkar
20. ngongilanan - buat perangai gila-gila bahasa
21. nompan - sepadan / bergaya / sesuai
22. nunuk-nanak - terkejar ke sana ke sini dengan pantas untuk mendapatkan sesuatu
23. nyaghi - makan sahur ( untuk puasa )
24. nyinggah - kena / dapat
25. nokoh - sepadan atau sesuai
O
1. okap – tamak
2. onap – mendap
3. ompong – bodoh
4. onceh-onceh – yang tidak penting / remeh
5. ongap – nafas
6. obush – pastikan
7. otuih – tapis / biarkan mendap
8. ontok-ontok – duduk diam-diam
9. onyang – moyang
10. oma – buah yang rosak atau empuk kerana terjatuh dari pokok yang tinggi
11. ompuk - lembut
12. osel - gunakan wang simpanan sedikit demi sedikit sehingga habis
13. olek - kenduri besar
14. osah - benarkah ? ( osah kau tak do duit ni ? )
15. ocup / acup - tenggelam
16. oghuk - bising
17. ongeh - mengah
18. ongkoh - iras ( rupa )
19.
P
1. pelakong – pukul
2. pojo – hujan lebat
3. pengoring – semangat
4. pelaso / poloser – pemalas
5. pioh – pulas
6. pewai – tua gayot, longlai, penat, letih
7. peletok – patahkan
8. pirek-pirek – pisahkan biji padi dari tangkainya / tenyeh / lunyah
9. polak – panas / jemu
10. pengka – jalan tidak betul kerana sakit
11. perut tebuseh – perut terkeluar kerana gemuk
12. peicok – tidak lurus
13. peluto – kuat pembohong
14. pauk – tetak
15. pongolan – buluh untuk menggait buah / galah
16. peteng – tenguk atau perhati sebelum menembak ( aim )
17. pororok – dipergaduhkan
18. porokeh – hadiah wang
19. punat – pusat / bahagian paling penting
20. puga – jolok atau masukkan ke dalam mulut sesuatu makanan / sumbatkan
21. penampa – lempang
22. pokok kek kau – suka hati kamu
23. polosit – nakal / jahat / sejenis belalang akuan orang
24. pencalit api – macis api
25. pangkin – tempat duduk-duduk ( perkataan Hokien )
26. poghan – bahagian di tepi atap di dalam rumah yang boleh disimpan barangan.
27. poi – pergi
28. penyogan – pemalas
29. porak – kawasan kecil yang berpagar di tepi dapur untuk menanam keperluan memasak
30. piun - posman
31. petarang - sarang atau tempat tinggal yang usang
32. phoghoi - terlalu reput
33. pulut - menarik keluar ( berasal dari perkataan Inggeris " pull out " )
34. pipir - meleraikan buah dari tangkai
35. pughruk - sorokan
36. pangkeh - memotong dahan pokok
37. palut - bungkus
38. poghiso - sedap ( gulai ni tak poghiso ! )
39. penanggalan - sama seperti manjangan ( kepala pengamal ilmu hitam yang terbang waktu malam )
40. poghrat - rasa yang kurang enak khususnya sayur terung yang telah digulai
42. peghajin - seseorang yang sangat rajin bekerja .
43. pait - tak mahu
44. panggilan - jemputan kawin
45. panta - dulang untuk meletak makanan
46. pegheh - teruk / susah
47. pencong - tak lurus / bengkok
48. peta - fokus mata sebelum membaling sesuatu
49. piuh - pintal / cubit
50. pocek - komen
51. podulik ajab - tidak mengambil endah lagi
52. poloso - pemalas
53. pomola - lucu / kelakar / lawak
54. pomongak - pembohong
55. pencolek angek - punca
56. ponggit - mengada-ngada
57. ponggo - bakar hingga rentung
58. pongopan ( belum pongopan lagi ) - sedia
59. posuk - berlubang
60. pudek / pondot - menyumbat pakaian ke dalam almari yang padat
61. puga - tenyeh
62. pugeh - main hentam
63. pundung - pondok / dangau
64. puntang-panting - lari lintang pukang
65.
R
1. radak – tikam
2. ronggeng - tarian
S
1. sodoalah – semua ( kosodoalah - kesemuanya )
2. songeh – perangai
3. sonak – sakit perut
4. sonsang – sepak / salah / meluru
5. sigai – tangga buluh untuk mengambil air nira
6. sobun – telah tertimbus / tidak boleh dipakai lagi seperti perigi
7. susup – sorokan
8. sontot – pendek atau gemuk
9. sondot – pendek dan gemok
10. siang – bersihkan ikan untuk dibuat gulai
11. sonta - cuci atau kikis pinggan atau periuk
12. sumpeh / sumpek / - sumbing ( digunakan ke atas mata parang atau pisau )
13. seykeh – ketuk kepala dengan kepala jari
14. sotai atau kotai – rosak atau buruk
15. songkang – tutup atau tahan dengan sesuatu
16. sungkit – keluarkan sesuatu dari bawah dengan kayu
17. sombam – lemah semacam berpenyakit
18. sugon - menenyeh / menyental muka seseorang ke atas permukaan lantai
19. sokah – dahan patah
20. sodo – seadanya
21. selompap – selonggok
22. sosah – pukul dengan kayu
23. sondat – padat
24. serabai – sejenis makanan bulan puasa diperbuat dari beras / tak kemas
25. sangkak – buluh yang dibelah supaya boleh mengambil buah-buahan
26. sekerap – getah kering dari pokok getah
27. selilit – penuh dengan hutang
28. semumeh - comot di mulut
29. sepicing – tidur sekejap
30. singkek – bangsat / miskin
31. sewat – sambar / rentap
32. sengkelet – sekeh / ketuk biasanya di kepala
33. siniya – di sini
34. senua – di sana
35. sanun – di sana
36. senteng – pendek, contohnya seluar
37. sewa – ilham / gambaran
38. sampuk – ditegur kuasa ghaib
39. soka – keras dan kering
40. senaja – tidak dapat mengawal basikal turun dari bukit lalu jatuh ke dalam semak
41. sibak - selak ( kain atau langsir )
42. selepot – duduk sambil kedua kaki ke belakang
43. senayan – hari Isnin
44. selemo – hingus
45. sambal gesek – sambal lada
46. songkon – bodoh
47. sungkok-sangkak – merangkak di dalam semak untuk mencari sesuatu.
48. selorau – hujan dengan lebatnya tetapi sekejap
49. secoet - sedikit
50. sanggo - sambut dengan tangan
51. selompap / selopok - menjelaskan bayaran dengan jumlah yang besar
52. sebar - luaskan ( dia hendak sebar rumahnya )
53. sempong - robek
54. seghoman - rupa sama sahaja
55. Sandar - sejenis tumbuhan di dalam sawah yang boleh dibuat gulai
56. selompar - sebidang kecil tanah
57. sede - diabai-abaikan / tak ambil berat /sambil lewa
58. siapaleh / chapaleh - minta dijauhkan dari bahaya
59. siah - elak
60. simbo - simbah dengan air
61. sitap - sepet ( mata )
62. sobulek - dikumpulkan menjadi satu dalam bekas
63. sojak - segak / bergaya
64. sokolap - tidur sekejap
65. so'eh - kenyang sebab makan
66. solala - sama / sefesyen
67. solanjar - terus menerus tanpa tersekat atau terhenti
68. sonala - pada kebiasaan / selalunya
69. senegheh - senonoh / tepat betul
70. sengayut - bergayut
71. songkalan - batu giling
72. sopalit - sikit sangat
73. sopah - makan
74. songge - sengeh
75. songkeh - sangat kenyang
76. sontak - hingga / sampai
77. sotai - kertas yang hampir hancur dirobek-robek
78. sugi - sental
79. suki - cukup makan ikut sukatan / keperluan
80. stemet - anggaran
81. sunu - perbuatan mengarahkan kepala ke arah suatu tempat
82. songkit - terlalu kenyang
83. sekoli daro - masakan yang penuh sekuali
T
1. tercanak – tegak
2. teleng – juling
3. terkeluceh – kaki tercedera ketika berjalan
4. toek – kosong ( lempeng toek )
5. tebungkeh – tercabut
6. terkelepot – jatuh dan tak boleh bangun
7. taek – sedap sangat
8. tergolek – jatuh ke tanah
9. tengahari gaego – tengahari tepat
10. terkemama – terkesima / kehairanan
11. tenjot – jalan seperti orang cedera
12. terbeliak – mata terbuka luas
13. tuntong – terbalikkan atau curahkan
14. tongka – degil
15. terjeluek – rasa nak muntah
16. tungka – tumbang
17. tutuh – potong dahan kayu dengan parang
18. teriang – penghujung
19. tupang – sokong dahan daripada jatuh atau patah
20. tesonte – terkeluar sedikit
21. terbungkang – mati dalam keadaan kaki dan tangan ke atas
22. tak terhengkoh – tak terlaksana
23. tersongge – ketawa nampak gigi
24. terkincit – buang air besar sedikit demi sedikit
25. tempolok – perut sendiri
26. tak tegembalo – tak terurus
27. tercaduk – terkeluar
28. tak senereh – tak senonoh
29. tak menureh – tidak cekap atau pandai
30. tak nyonyeh – tak rosak
31. tak semonggah – buat kerja yang tak berfaedah
32. tengkot – jalan tak stabil kerana kesakitan
33. titik – pukul sehingga lumat / rotan
34. terembat – terhantuk
35. tukih – naik
36. tebarai – telah dikeluarkan ( dismantle )
37. tercacak atau tercanak – lekat pada sesuatu
38. terkiak-kiak – dalam ketakutan sambil menangis
39. tepeleot – tidak dapat bangun kerana cedera
40. terpico – pantang dibiarkan lalu diambil orang
41. terpaco – muncul dengan tiba-tiba
42. toteh – memotong secara memancong
43. tua tau – arif kerana tua
44. toreh – menoreh getah
45. terlurus – lubang mayat memanjang di tengah-tengah lubang kubur
46. terkelepot – terbaring
47. terbulus – terlepas / jatuh ke bawah
48. terpuruk – tidak dapat keluar dari kawasan berlumpur
49. terpelosok – dalam keadaan yang sukar untuk keluar
50. telayang – hampir nak tertidur
51. ternonok-ternanak - berjalan tidak tentu arah lalu tersungkur
52. terbureh – terbuka dan terkeluar
53. terhencot-hencot - jalan perlahan dan kelihatan sangat serik
54. tersembam – jatuh ke tanah dengan muka dahulu
55. tirih – bocor
56. tuam – tekan
57. tak belocit – tak berbunyi
58. tokih – hampir / nyaris
59. talo – bantai / belasah / rotan
60. tempeleng – lempang
61. tokak – selera
62. tetoyek atau kekorek – lebihan atau kerak rendang dalam kuali
63. togamam – terkejut
64. tarak – tiada
65. terjungkit – ternaik ke atas
66. tak goso dek den yo ! saya malas buat
67. tochlit – lampu suluh ( berasal dari perkataan Inggeris - torchlight )
68. tak berkotak – tak berbunyi
69. tekulu tekilir – kepercayaan dukun tentang penyakit malaria
70. tak goso – malas nak buat
71. tak diapekkannya – tak diperdulikannya
72. tercirit – tidak dapat mengawal membuang air besar
73. terbuseh – perut besar yang terkeluar apabila memakai kain
74. tersantak – terlalu hampir
75. toreh-toreh – penting-penting
76. telapak - rumah atau tempat tinggal
77. tebughai - terkeluar secara tak sengaja
78. tercabut - menyerupai
79. tobek - paya
80. togah dek kau - disebabkan oleh kamu
81. togel - tidak berpakaian
82. terkeghonyok - rosak atau tercemar
83. tak selakuan - tak senonoh
84. tang air - anak sungai
85. tutuh dek kau - buatlah sesuka hati kamu
86. tekelicik - jatuh ke tanah kerana licin
87. taruk - bubuh
88. taghruh - letak / simpan
89. tuang - masukkan atau curahkan air ke dlm sesuatu
90. tak suki - tak cukup
91. tebar - lepaskan jala ke dalam sungai / mencanai roti
92. tinggal lagi - lagipun ( dah lamo dio nak ke mekah tinggal lagi badannya tak sihat )
93. terkopak - pecah ( dinding rumah itu telah terkopak )
94. terkupek - tercabut ( tapak kasutnya telah terkupek )
95. tersonggea - tersenyum lama dan nampak gigi
96. tersengeh - tersenyum sahaja
97. teleher - tekak atau selera seseorang
98. tekau eh ! - kata awak
99. tejorang - sedang dimasak
100. toboh - tempat ( contoh: kek mano toboh eh ? )
101. tak senereh - tidak senonoh / serba tidak tahu
102. tebelesey - keadaan yang tidak berdaya
103. tak kotawanan - tak pasti / ragu-ragu apa yang hendak dibuat
104. tandeh - kehabisan
105. tak selaku - tak senonoh
106. tanjul - ikat
107. tebok / tenok - ditanduk oleh lembu
108. tenang - menumpu perhatian sebelum menembak atau memanah
109. tiek - seliuh ( tak patah tiek )
110. timpan - lepat ( sejenis kueh )
111. tingkeh - sudah habis
112. tobe'eng - sengkak / terlalu kenyang
113. toboba - gempar / terkejut
114. toboghongau - kudis / luka yang terbuka luas
115. toboleghan - hairan
116. tobolengot - merajuk / sensitif
117. toboleseh - cepat atau mudah merajuk
118. tobo'o - perasaan tersangat malu
119. tocangok / tocongak - berdiri tiba-tiba tanpa berkata-kata
120. toghoghau - terkejut besar
121. tojolepot - terduduk
122. tokoliak / tokoliek - salah urat kerana tergelincir atau terjatuh
123. tokoneneng - berpinar-pinar
124. togamang - terperanjat / terkejut
125. tojolubuih - terperosok / terbolos ( jatuh ke dalam lubang )
126. tokinyam - hendak lagi
127. tokolayak - tergelincir
128. tokolebak - terkoyak / tersiat
129. tokhighap - tumpah
130. tokoluceh - terlucut / terlepas
131. tokong - sepotong / sekerat
132. toledan-ledan - melengah-lengahkan kerja
133. tolengkang - terbiar
134. tolopo - terleka sekejap / terbuka
135. tomoleng - berpaling
136. tona - riuh / gegak gempita
137. tongging - malu-maluan
138. tonggong - besar / tegap
139. tongkuluk - kain ikat rambut orang perempuan
140. topodok - terbenam dalam
141. tosunam - tersungkur muka ke bawah
142. totungek - terbalik kaki ke atas dan kepala ke bawah
143. tughik - pekak
144. tumuih - benam / tekan
145. tungkun - kayu untuk hidupkan api
146. tuntung - memasukan air ke dalam bekas
147. tombolang - cerita sebenarnya / tidak baik / busuk
148. takek - memauk dengan parang
149. telentok - dah nak tidur
U
1. uwan - nenek
2. unggih - gigit
3. uceh - melepas / tak dapat
4. ujo - cari momentum / acah
5. ulit - lembab / pelahan
6. ula-ula - resah gelisah / tidak boleh bangun dengan cepat kerana kaki cram
7. umpok - hak
8. upah serayo - buat kerja sukarela / tanpa bayaran
9. uncah - ganggu atau kacau
"""
words = []
for line in additional.split('\n'):
cleaned = cleaning(line)
if len(cleaned) < 3:
continue
c = cleaning(unidecode(line).split('-')[0]).replace('atau', '/').split('/')
words.extend([cleaning(i) for i in c])
len(words)
words.extend(['Dodulu', 'Joki', 'marin', 'Onyang',
'Koporit', 'Longjan', 'Loteng', 'Pangkin',
'Siniya', 'Nangnga', 'Bekecapuih',
'Terkemama', 'Kolinggoman','Comporo'])
words = [i.lower() for i in words if len(i) > 3]
words = set(words) - malays
len(words)
import json
with open('negeri-sembilan-words.json', 'w') as fopen:
json.dump(list(words), fopen)
```
# The Making of a Preconditioner
This is a demonstration of a multigrid-preconditioned Krylov solver in Python 3. The code and more examples are available on GitHub. The problem solved is a Poisson equation on a rectangular domain with homogeneous Dirichlet boundary conditions. A cell-centered finite-difference discretization is used to get a second-order accurate solution, which is further improved to fourth order using deferred correction.
The first step is a multigrid algorithm. This is the simplest 2D geometric multigrid solver.
### Multigrid algorithm
We need some terminology before going further.
- Approximation: $v$, the current estimate of the solution
- Residual: $r = f - Av$
- Exact solution (of the discrete problem): $u$, which satisfies $Au = f$
- Correction: $e = u - v$, so that $u = v + e$
This is a geometric multigrid algorithm, where a series of nested grids is used. There are four parts to a multigrid algorithm:
- Smoothing Operator (a.k.a Relaxation)
- Restriction Operator
- Interpolation Operator (a.k.a Prolongation Operator)
- Bottom solver
We will define each of these in sequence. These operators act on different quantities that are stored at the cell centers; we will see exactly which ones later on. To begin, import NumPy.
```
import numpy as np
```
### Smoothing operator
This can be a number of Jacobi or Gauss-Seidel iterations. Below we define a smoother that uses red-black Gauss-Seidel sweeps and returns the result of the smoothing along with the residual.
```
def GSrelax(nx,ny,u,f,iters=1):
'''
Gauss Seidel smoothing
'''
dx=1.0/nx
dy=1.0/ny
Ax=1.0/dx**2
Ay=1.0/dy**2
Ap=1.0/(2.0*(Ax+Ay))
#BCs. Homogeneous Dirichlet BCs
u[ 0,:] = -u[ 1,:]
u[-1,:] = -u[-2,:]
u[:, 0] = -u[:, 1]
u[:,-1] = -u[:,-2]
for it in range(iters):
for c in [0,1]:#Red Black ordering
for i in range(1,nx+1):
start = 1 + (i%2) if c == 0 else 2 - (i%2)
for j in range(start,ny+1,2):
u[i,j]= Ap*( Ax*(u[i+1,j]+u[i-1,j])
+Ay*(u[i,j+1]+u[i,j-1]) - f[i,j])
#BCs. Homogeneous Dirichlet BCs
u[ 0,:] = -u[ 1,:]
u[-1,:] = -u[-2,:]
u[:, 0] = -u[:, 1]
u[:,-1] = -u[:,-2]
#calculate the residual
res=np.zeros([nx+2,ny+2])
for i in range(1,nx+1):
for j in range(1,ny+1):
res[i,j]=f[i,j] - ((Ax*(u[i+1,j]+u[i-1,j])+Ay*(u[i,j+1]+u[i,j-1]) - 2.0*(Ax+Ay)*u[i,j]))
return u,res
```
### Interpolation Operator
This operator takes values on a coarse grid and transfers them onto a fine grid. It is also called prolongation. The function below uses bilinear interpolation for this purpose: 'v' lives on the coarse grid, and we interpolate it onto the fine grid, storing the result in 'v_f'.
```
def prolong(nx,ny,v):
'''
interpolate 'v' to the fine grid
'''
v_f=np.zeros([2*nx+2,2*ny+2])
for i in range(1,nx+1):
for j in range(1,ny+1):
v_f[2*i-1,2*j-1] = 0.5625*v[i,j]+0.1875*(v[i-1,j]+v[i,j-1])+0.0625*v[i-1,j-1]
v_f[2*i ,2*j-1] = 0.5625*v[i,j]+0.1875*(v[i+1,j]+v[i,j-1])+0.0625*v[i+1,j-1]
v_f[2*i-1,2*j ] = 0.5625*v[i,j]+0.1875*(v[i-1,j]+v[i,j+1])+0.0625*v[i-1,j+1]
v_f[2*i ,2*j ] = 0.5625*v[i,j]+0.1875*(v[i+1,j]+v[i,j+1])+0.0625*v[i+1,j+1]
return v_f
```
### Restriction
This is exactly the opposite of interpolation. It takes values from the fine grid and transfers them onto the coarse grid. It is a kind of averaging process, and *fundamentally different from interpolation*. Each coarse grid point is surrounded by four fine grid points, so quite simply we take the value at the coarse point to be the average of those 4 fine points. Here 'v' is the fine grid quantity and 'v_c' is the coarse grid quantity.
```
def restrict(nx,ny,v):
'''
restrict 'v' to the coarser grid
'''
v_c=np.zeros([nx+2,ny+2])
for i in range(1,nx+1):
for j in range(1,ny+1):
v_c[i,j]=0.25*(v[2*i-1,2*j-1]+v[2*i,2*j-1]+v[2*i-1,2*j]+v[2*i,2*j])
return v_c
```
Note that we have looped over the coarse grid in both cases above; it is easier to access the variables this way. The last part is the bottom solver. This must be something that gives us the exact/converged solution to whatever we feed it, namely the problem at the coarsest level. This generally has very few points (e.g. 2x2=4 in our case) and can be solved exactly by the smoother itself with a few iterations. That is what we do here, using 50 iterations, but any other direct method could also be used. If we coarsen all the way down to a single point, then just one iteration solves it exactly.
### V-cycle
Now that we have all the parts, we are ready to build our multigrid algorithm. First we will look at a V-cycle. It is a recursive function, i.e., it calls itself. It takes as input an initial guess 'u', the RHS 'f', and the number of multigrid levels 'num_levels', among other things. At each level the V-cycle calls another V-cycle; at the lowest level the solve is exact.
```
def V_cycle(nx,ny,num_levels,u,f,level=1):
'''
V cycle
'''
if(level==num_levels):#bottom solve
u,res=GSrelax(nx,ny,u,f,iters=50)
return u,res
#Step 1: Relax Au=f on this grid
u,res=GSrelax(nx,ny,u,f,iters=1)
#Step 2: Restrict residual to coarse grid
res_c=restrict(nx//2,ny//2,res)
#Step 3:Solve A e_c=res_c on the coarse grid. (Recursively)
e_c=np.zeros_like(res_c)
e_c,res_c=V_cycle(nx//2,ny//2,num_levels,e_c,res_c,level+1)
#Step 4: Interpolate(prolong) e_c to fine grid and add to u
u+=prolong(nx//2,ny//2,e_c)
#Step 5: Relax Au=f on this grid
u,res=GSrelax(nx,ny,u,f,iters=1)
return u,res
```
That's it! Now we can see it in action. We can use a problem with a known solution to test our code. The following functions set up an RHS for a problem with homogeneous Dirichlet BCs on the unit square.
```
#analytical solution
def Uann(x,y):
return (x**3-x)*(y**3-y)
#RHS corresponding to above
def source(x,y):
return 6*x*y*(x**2+ y**2 - 2)
```
Let us set up the problem, discretization, and solver details. The number of divisions along each dimension is a power of two determined by the number of levels. In principle this is not required, but having it makes the inter-grid transfers easy.
The coarsest problem is going to have a 2-by-2 grid.
```
#input
max_cycles = 18
nlevels = 6
NX = 2*2**(nlevels-1)
NY = 2*2**(nlevels-1)
tol = 1e-15
#the grid has one layer of ghost cells
uann=np.zeros([NX+2,NY+2])#analytical solution
u =np.zeros([NX+2,NY+2])#approximation
f =np.zeros([NX+2,NY+2])#RHS
#calculate the RHS and exact solution
DX=1.0/NX
DY=1.0/NY
xc=np.linspace(0.5*DX,1-0.5*DX,NX)
yc=np.linspace(0.5*DY,1-0.5*DY,NY)
XX,YY=np.meshgrid(xc,yc,indexing='ij')
uann[1:NX+1,1:NY+1]=Uann(XX,YY)
f[1:NX+1,1:NY+1] =source(XX,YY)
```
Now we can call the solver
```
print('mgd2d.py solver:')
print('NX:',NX,', NY:',NY,', tol:',tol,'levels: ',nlevels)
for it in range(1,max_cycles+1):
u,res=V_cycle(NX,NY,nlevels,u,f)
rtol=np.max(np.max(np.abs(res)))
if(rtol<tol):
break
error=uann[1:NX+1,1:NY+1]-u[1:NX+1,1:NY+1]
print(' cycle: ',it,', L_inf(res.)= ',rtol,',L_inf(true error): ',np.max(np.max(np.abs(error))))
error=uann[1:NX+1,1:NY+1]-u[1:NX+1,1:NY+1]
print('L_inf (true error): ',np.max(np.max(np.abs(error))))
```
**True error** is the difference between the approximation and the analytical solution. It is largely the discretization error, i.e., what would remain even if we solved the discrete equation with a direct/exact method like Gaussian elimination. We see that the true error stops reducing at the 4th cycle; the approximation is not getting any better after this point, so we could stop after 4 cycles. But in general we don't know the true error, so in practice we use the norm of the (relative) residual as a stopping criterion. As the cycles progress, the floating-point round-off limit is reached and the residual also stops decreasing.
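As a small illustrative sketch (the helper name `relative_residual` is ours, not part of the solver above), the residual-based stopping test can look like this:

```python
import numpy as np

def relative_residual(res, f):
    '''L_inf norm of the residual, scaled by the L_inf norm of the RHS.'''
    denom = np.max(np.abs(f))
    # guard against an all-zero RHS
    return np.max(np.abs(res)) / denom if denom > 0 else np.max(np.abs(res))

# stop once the scaled residual drops below the tolerance
f = np.array([[1.0, -2.0], [0.5, 4.0]])
res = np.array([[1e-12, 0.0], [0.0, 2e-12]])
print(relative_residual(res, f) < 1e-10)  # prints: True
```

The same check could replace the absolute test `rtol<tol` in the driver loop, at the cost of one extra norm evaluation of `f`.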
This was the multigrid V cycle. We can use this as preconditioner to a Krylov solver. But before we get to that let's complete the multigrid introduction by looking at the Full Multi-Grid algorithm. You can skip this section safely.
## Full Multi-Grid
We started with a zero initial guess for the V-cycle. Presumably, if we had a better initial guess we would get better results. So we solve a coarse problem exactly and interpolate it onto the fine grid, using that as the initial guess for the V-cycle. The result of doing this recursively is the Full Multi-Grid (FMG) algorithm. Unlike the V-cycle, which is an iterative procedure, FMG is a direct solver: there is no successive improvement of the approximation. It straight away gives us an approximation that is within the discretization error. The FMG algorithm is given below.
```
def FMG(nx,ny,num_levels,f,nv=1,level=1):
if(level==num_levels):#bottom solve
u=np.zeros([nx+2,ny+2])
u,res=GSrelax(nx,ny,u,f,iters=50)
return u,res
#Step 1: Restrict the rhs to a coarse grid
f_c=restrict(nx//2,ny//2,f)
#Step 2: Solve the coarse grid problem using FMG
u_c,_=FMG(nx//2,ny//2,num_levels,f_c,nv,level+1)
#Step 3: Interpolate u_c to the fine grid
u=prolong(nx//2,ny//2,u_c)
#step 4: Execute 'nv' V-cycles
for _ in range(nv):
u,res=V_cycle(nx,ny,num_levels-level,u,f)
return u,res
```
Let's call the FMG solver for the same problem
```
print('mgd2d.py FMG solver:')
print('NX:',NX,', NY:',NY,', levels: ',nlevels)
u,res=FMG(NX,NY,nlevels,f,nv=1)
rtol=np.max(np.max(np.abs(res)))
print(' FMG L_inf(res.)= ',rtol)
error=uann[1:NX+1,1:NY+1]-u[1:NX+1,1:NY+1]
print('L_inf (true error): ',np.max(np.max(np.abs(error))))
```
Well... it works. The residual is large but the true error is at the discretization level. FMG is said to be scalable because the amount of work needed is linearly proportional to the size of the problem; in big-O notation, FMG is $\mathcal{O}(N)$, where N is the number of unknowns. Exact methods (Gaussian elimination, LU decomposition) are typically $\mathcal{O}(N^3)$.
## Stationary iterative methods as preconditioners
A preconditioner matrix is an easily invertible approximation of the coefficient matrix. We don't explicitly need a matrix, because we never access its elements by index, neither for the coefficient matrix nor for the preconditioner. What we do need is the action of the matrix on a vector, i.e., the matrix-vector product. The coefficient matrix can therefore be defined as a function that takes in a vector and returns the matrix-vector product.
Now, a stationary iterative method for solving an equation can be written as a Richardson iteration. When the initial guess is set to zero and one iteration is performed, what you get is the action of the preconditioner on the RHS vector. That is, we get a preconditioner-vector product, which is what we want.
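In symbols: a stationary method built from the splitting/preconditioner $M$ performs the Richardson iteration

$$u_{k+1} = u_k + M^{-1}\,(f - A u_k),$$

so starting from $u_0 = 0$, a single iteration gives $u_1 = M^{-1} f$, which is precisely the preconditioner-vector product a Krylov solver asks for.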
**This allows us to use any black-box stationary iterative method as a preconditioner**
We can use the multigrid V-cycle as a preconditioner this way. We can't use FMG because it is not an iterative method.
The matrix-as-a-function can be defined using **LinearOperator** from **scipy.sparse.linalg**. It gives us an object that works like a matrix insofar as the product with a vector is concerned: it can be used like a regular 2D numpy array in multiplication with a vector. This can be passed to GMRES() or BiCGStab() as a preconditioner.
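As a minimal standalone sketch (the operator below is just 3 times the identity, chosen purely for illustration), a LinearOperator multiplies vectors like an ordinary matrix even though only its matvec is defined:

```python
import numpy as np
from scipy.sparse.linalg import LinearOperator

# matrix-free operator that acts like 3*I on length-4 vectors
A = LinearOperator((4, 4), matvec=lambda v: 3.0 * v)
v = np.arange(4.0)
w = A * v  # matrix-vector product, no explicit matrix stored
print(w)   # the same result as (3*np.eye(4)) @ v
```

Our `MGVP` preconditioner below is built in exactly this shape, with one V-cycle as the matvec.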
Having a symmetric preconditioner would be nice because it retains the symmetry of the original problem, if that problem is symmetric. The multigrid V-cycle above is not symmetric because the Gauss-Seidel smoother is unsymmetric. If we were to use the Jacobi method, or symmetric Gauss-Seidel (SGS), then symmetry would be retained. As such, the Conjugate Gradient method will not work here because our preconditioner is not symmetric.
Below is the code for defining a V-Cycle preconditioner
```
from scipy.sparse.linalg import LinearOperator,bicgstab
def MGVP(nx,ny,num_levels):
'''
Multigrid Preconditioner. Returns a (scipy.sparse.linalg.) LinearOperator that can
be passed to Krylov solvers as a preconditioner.
'''
def pc_fn(v):
u =np.zeros([nx+2,ny+2])
f =np.zeros([nx+2,ny+2])
f[1:nx+1,1:ny+1] =v.reshape([nx,ny])
#perform one V cycle
u,res=V_cycle(nx,ny,num_levels,u,f)
return u[1:nx+1,1:ny+1].reshape(v.shape)
M=LinearOperator((nx*ny,nx*ny), matvec=pc_fn)
return M
```
Let us define the Poisson matrix also as a Linear Operator
```
def Laplace(nx,ny):
'''
Action of the Laplace matrix on a vector v
'''
def mv(v):
u =np.zeros([nx+2,ny+2])
ut=np.zeros([nx,ny])
u[1:nx+1,1:ny+1]=v.reshape([nx,ny])
dx=1.0/nx; dy=1.0/ny
Ax=1.0/dx**2;Ay=1.0/dy**2
#BCs. Homogenous Dirichlet
u[ 0,:] = -u[ 1,:]
u[-1,:] = -u[-2,:]
u[:, 0] = -u[:, 1]
u[:,-1] = -u[:,-2]
for i in range(1,nx+1):
for j in range(1,ny+1):
ut[i-1,j-1]=(Ax*(u[i+1,j]+u[i-1,j])+Ay*(u[i,j+1]+u[i,j-1]) - 2.0*(Ax+Ay)*u[i,j])
return ut.reshape(v.shape)
return mv
```
The nested function is required because "matvec" in LinearOperator takes only one argument, the vector, but we need the grid details and boundary-condition information to create the Poisson matrix. Now we will use these to solve a problem. Unlike earlier, where we used an analytical solution and RHS, we will start with a random vector as our exact solution and multiply it by the Poisson matrix to get the RHS vector for the problem. There is no analytical equation associated with the matrix equation.
The scipy sparse solve routines do not return the number of iterations performed. We can use this wrapper to get the number of iterations
```
def solve_sparse(solver,A, b,tol=1e-10,maxiter=500,M=None):
num_iters = 0
def callback(xk):
nonlocal num_iters
num_iters+=1
x,status=solver(A, b,tol=tol,maxiter=maxiter,callback=callback,M=M)
return x,status,num_iters
```
Let's look at what happens with and without the preconditioner.
```
A = LinearOperator((NX*NY,NX*NY), matvec=Laplace(NX,NY))
#Exact solution and RHS
uex=np.random.rand(NX*NY,1)
b=A*uex
#Multigrid Preconditioner
M=MGVP(NX,NY,nlevels)
u,info,iters=solve_sparse(bicgstab,A,b,tol=1e-10,maxiter=500)
print('without preconditioning. status:',info,', Iters: ',iters)
error=uex.reshape([NX,NY])-u.reshape([NX,NY])
print('error :',np.max(np.max(np.abs(error))))
u,info,iters=solve_sparse(bicgstab,A,b,tol=1e-10,maxiter=500,M=M)
print('With preconditioning. status:',info,', Iters: ',iters)
error=uex.reshape([NX,NY])-u.reshape([NX,NY])
print('error :',np.max(np.max(np.abs(error))))
```
Without the preconditioner ~150 iterations were needed, whereas with the V-cycle preconditioner the solution was obtained in far fewer iterations.
There we have it. A Multigrid Preconditioned Krylov Solver. We did all this without even having to deal with an actual matrix. How great is that! I think the next step should be solving a non-linear problem without having to deal with an actual Jacobian (matrix).
```
%matplotlib inline
```
# Plot Ridge coefficients as a function of the L2 regularization
.. currentmodule:: sklearn.linear_model
:class:`Ridge` Regression is the estimator used in this example.
Each color in the left plot represents a different dimension of the
coefficient vector, displayed as a function of the regularization
parameter. This example illustrates how a well-defined solution is
found by Ridge regression and how regularization affects the
coefficients and their values. The plot on the right shows how
the difference between the coefficients found by the model and the
chosen vector w changes as a function of regularization.
In this example the dependent variable Y is set as a function
of the input features: y = X*w + c. The coefficient vector w is
randomly sampled from a normal distribution, whereas the bias term c is
set to a constant.
As alpha tends toward zero the coefficients found by Ridge
regression stabilize towards the randomly sampled vector w.
For large alpha (strong regularization) the coefficients
shrink (eventually converging to 0), leading to a
simpler but more biased solution.
These dependencies can be observed on the left plot.
The right plot shows the mean squared error between the
coefficients found by the model and the chosen vector w.
Weakly regularized models recover the exact
coefficients (the error is 0), while stronger regularization
increases the error.
Please note that in this example the data is noise-free, hence
it is possible to recover the exact coefficients.
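With `fit_intercept=False`, the ridge estimate has the closed form w = (XᵀX + αI)⁻¹Xᵀy, which is the quantity traced out by the plots below. A quick sanity-check sketch (data shapes chosen arbitrarily for illustration):

```python
import numpy as np
from sklearn.linear_model import Ridge

rng = np.random.RandomState(1)
X = rng.randn(50, 5)
w_true = rng.randn(5)
y = X @ w_true  # noise-free target, as in the example below

alpha = 1.0
# Closed-form ridge solution: w = (X^T X + alpha*I)^{-1} X^T y
w_closed = np.linalg.solve(X.T @ X + alpha * np.eye(5), X.T @ y)

clf = Ridge(alpha=alpha, fit_intercept=False)
clf.fit(X, y)
# clf.coef_ agrees with w_closed to numerical precision
```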
```
# Author: Kornel Kielczewski -- <kornel.k@plusnet.pl>
print(__doc__)
import matplotlib.pyplot as plt
import numpy as np
from sklearn.datasets import make_regression
from sklearn.linear_model import Ridge
from sklearn.metrics import mean_squared_error
clf = Ridge()
X, y, w = make_regression(n_samples=10, n_features=10, coef=True,
random_state=1, bias=3.5)
coefs = []
errors = []
alphas = np.logspace(-6, 6, 200)
# Train the model with different regularisation strengths
for a in alphas:
clf.set_params(alpha=a)
clf.fit(X, y)
coefs.append(clf.coef_)
errors.append(mean_squared_error(clf.coef_, w))
# Display results
plt.figure(figsize=(20, 6))
plt.subplot(121)
ax = plt.gca()
ax.plot(alphas, coefs)
ax.set_xscale('log')
plt.xlabel('alpha')
plt.ylabel('weights')
plt.title('Ridge coefficients as a function of the regularization')
plt.axis('tight')
plt.subplot(122)
ax = plt.gca()
ax.plot(alphas, errors)
ax.set_xscale('log')
plt.xlabel('alpha')
plt.ylabel('error')
plt.title('Coefficient error as a function of the regularization')
plt.axis('tight')
plt.show()
```
```
%run utils.ipynb
import json
def save(data, name):
with open('../data/out.{}.json'.format(name), 'w') as f:
json.dump(data, f, ensure_ascii=False)
lang_locations = pd.read_csv('../data/languages_coordinates.csv')
lang_locations.drop(['glottocode', 'macroarea'], axis=1, inplace=True)
lang_locations.head()
lang_locations.shape
relations = pd.read_csv('../data/etymwn.tsv', sep='\t', header=None)
relations.columns = ['src', 'rel', 'to']
relations = relations[
~relations.src.apply(lambda x: len(x.split(' ')) > 4) &
~relations.to.apply(lambda x: len(x.split(' ')) > 4) &
~relations.src.apply(lambda x: len(x.split(':')) > 2) &
~relations.to.apply(lambda x: len(x.split(':')) > 2) &
~relations.src.str.contains('-') &
~relations.src.str.contains('\[') &
~relations.to.str.contains('-') &
~relations.to.str.contains('\[') &
~relations.rel.isin(['rel:is_derived_from', 'rel:etymologically_related', 'derived'])
]
relations = relations.assign(
src_lang=relations.src.apply(lambda x: x.split(':')[0].strip()),
src_word=relations.src.apply(lambda x: x.split(':')[1].strip().lower()),
to_lang=relations.to.apply(lambda x: x.split(':')[0].strip()),
to_word=relations.to.apply(lambda x: x.split(':')[1].strip().lower()),
)
relations = relations[relations.to_word != relations.src_word]
relations.drop_duplicates(inplace=True)
relations.head()
relations.shape
words_per_lang = relations.groupby(relations.to_lang).count().to_word
words_per_lang.sort_values(ascending=False).plot(logy=True);
min_word = 20
langs = pd.Series(words_per_lang[words_per_lang > min_word].index)
langs.sort_values()
langs.shape
lang_locations[lang_locations.isocode.isin(langs)].shape
macrolangs = pd.read_csv('../data/macrolanguages.tsv', sep='\t')
macrolangs.drop(['I_Status'], axis=1, inplace=True)
macrolangs = macrolangs[~macrolangs.I_Id.isin(langs) & macrolangs.I_Id.isin(lang_locations.isocode)]
macrolangs = dict(macrolangs.groupby(macrolangs.M_Id).first().reset_index().values)
len(macrolangs)
unknown_lang = ~langs.isin(lang_locations.isocode)
langs[unknown_lang] = langs[unknown_lang].apply(macrolangs.get)
langs = langs[langs.notnull()]
langs.shape
lang_locations_patch = np.array([
[34.5, 41],
[37.1, -3.5],
[51, 0],
[40.3, 45],
[28, 84.5],
[52, 5],
[52, -1],
[48, 2],
[48.649, 11.4676],
[48.649, 13.4676],
[59.92, 10.71],
[52, 5],
[52, 0],
[47, 2],
[53.3, 6.3],
[47.649, 12.4676],
[53.2, -7.5],
[55.7, 12],
[32, 50],
[44.3, 4],
[56, 37],
[51.152, 12.692],
[40.4414, -1.11788],
[39.8667, 32.8667],
[52, -4],
[32, 50],
[52, 14]
])
lang_locations_patch.shape
lang_locations.loc[lang_locations.isocode.isin(langs) & lang_locations.latitude.isnull(), ['latitude', 'longitude']] = lang_locations_patch
lang_locations[lang_locations.isocode.isin(langs) & lang_locations.latitude.isnull()]
lang_locations = lang_locations[lang_locations.isocode.isin(langs)]
lang_locations.shape
relations = relations[relations.src_lang.isin(langs) & relations.to_lang.isin(langs)]
relations.shape
parents_rel = relations[relations.rel != 'rel:etymology']
parents_rel.shape
words = set()
words.update(relations.src_word)
words.update(relations.to_word)
len(words)
word_per_lang = pd.DataFrame(dict(
word=np.r_[relations.src_word, relations.to_word],
lang=np.r_[relations.src_lang, relations.to_lang],
))
word_per_lang.shape
word_langs = dict(word_per_lang.groupby(word_per_lang.word).lang.apply(lambda x: list(np.unique(x))).reset_index().values)
len(word_langs)
save(word_langs, 'word_langs')
word_per_lang.head()
lang_cases = word_per_lang.groupby(word_per_lang.lang).word.apply(list)
lang_cases.head()
lang_count = word_per_lang.groupby(word_per_lang.lang).word.count()
lang_count.head()
lang_len_means = lang_cases.apply(lambda w: float(np.mean([len(x) for x in w])))
lang_len_means.head()
lang_len_percentiles = lang_cases.apply(lambda w: np.percentile([len(x) for x in w], [25, 50, 75]))
lang_len_percentiles.head()
lang_len_std = lang_cases.apply(lambda w: float(np.std([len(x) for x in w])))
lang_len_std.head()
lang_cases_letters = lang_cases.apply(lambda w: [x for xx in w for x in xx])
lang_cases_letters.head()
lang_letters = lang_cases_letters.apply(lambda w: [(l, int(c)) for l, c in zip(*np.unique(w, return_counts=True))])
lang_letters.head()
lang_stats = pd.DataFrame(dict(
count=lang_count,
mean=lang_len_means,
std=lang_len_std,
percentile25=lang_len_percentiles.apply(lambda x: float(x[0])),
percentile50=lang_len_percentiles.apply(lambda x: float(x[1])),
percentile75=lang_len_percentiles.apply(lambda x: float(x[2])),
histogram=lang_letters
))
lang_stats.head()
src_to_count = relations.groupby(relations.to_lang).to_word.count()
src_to_count.head()
src_to = parents_rel.groupby([parents_rel.src_lang, parents_rel.to_lang]).count().rel
src_to.shape
src_to.head()
network_to = {}
for (src, to), count in src_to.items():
if src not in network_to:
network_to[src] = []
    ratio = count  # / src_to_count.loc[to]
#assert ratio <= 1
network_to[src].append([to, ratio])
to_src_count = relations.groupby(relations.src_lang).src_word.count()
to_src_count.head()
to_src = parents_rel.groupby([parents_rel.to_lang, parents_rel.src_lang]).count().rel
to_src.shape
to_src.head()
network_from = {}
for (to, src), count in to_src.items():
if to not in network_from:
network_from[to] = []
    ratio = count  # / to_src_count.loc[src]
#assert ratio <= 1
network_from[to].append([src, ratio])
save({
'to': network_to,
'from': network_from,
'locations': lang_locations.set_index('isocode').to_dict('index'),
'stats': lang_stats.to_dict('index')
}, 'lang_network')
pairs = set()
pairs.update(relations.src_lang.str.cat(':' + relations.src_word))
pairs.update(relations.to_lang.str.cat(':' + relations.to_word))
len(pairs)
mappings = pd.read_csv('../data/uwn.tsv', sep='\t', header=None)
mappings.columns = ['src', 'rel', 'to', 'weight']
mappings = mappings[mappings.rel != 'rel:means']
mappings = mappings.assign(
lang=mappings.to.apply(lambda x: x.split('/')[1].strip()),
word=mappings.to.apply(lambda x: x.split('/')[2].strip().lower()),
pair=mappings.to.apply(lambda x: ':'.join(p.strip().lower() for p in x.split('/')[1:])),
)
mappings = mappings[mappings.pair.isin(pairs)]
mappings.drop(['weight', 'rel', 'pair'], axis=1, inplace=True)
mappings.set_index('src', inplace=True)
mappings.head()
mappings.shape
clusters = mappings.groupby(mappings.index).apply(lambda x: list(x.lang.str.cat(':' + x.word)))
clusters.head()
meanings = {}
for _, cluster in tqdm(clusters.items()):
for lang_word in cluster:
if lang_word not in meanings:
meanings[lang_word] = set()
meanings[lang_word].update(cluster)
meanings[lang_word].remove(lang_word)
for key, values in meanings.items():
meanings[key] = list(values)
len(meanings)
save(meanings, 'word_meanings')
parents = pd.DataFrame(dict(
src=parents_rel.src_lang + ':' + parents_rel.src_word + ',',
to=parents_rel.to_lang + ':' + parents_rel.to_word,
)).groupby('to').src.sum()
parents.head()
parents_map = dict(parents.apply(lambda x: x.split(',')[:-1]).reset_index().values)
len(parents_map)
save(parents_map, 'word_parents')
children = pd.DataFrame(dict(
src=parents_rel.src_lang + ':' + parents_rel.src_word,
to=parents_rel.to_lang + ':' + parents_rel.to_word + ',',
)).groupby('src').to.sum()
children.head()
children_map = dict(children.apply(lambda x: x.split(',')[:-1]).reset_index().values)
len(children_map)
save(children_map, 'word_children')
def recurse(lang_word, mapping, seen=None):
if seen is None:
seen = set()
if lang_word in seen:
return []
seen.add(lang_word)
ps = mapping.get(lang_word, [])
return [(p, recurse(p, mapping, seen.copy())) for p in ps]
recurse('eng:dog', parents_map)
recurse('eng:dog', children_map)
def recurse_unfold(lang_word, mapping):
edges = []
depth = 0
def edgify(lang_word, history=[], seen=None):
if seen is None:
seen = set()
ps = mapping.get(lang_word, [])
if lang_word in seen:
edges.append(history)
return
if not len(ps):
edges.append(history + [lang_word])
return
seen.add(lang_word)
[edgify(p, history + [lang_word], seen.copy()) for p in ps]
edgify(lang_word)
return edges
recurse_unfold('eng:dog', parents_map)
recurse_unfold('eng:dog', children_map)
lang_words = (parents_rel.src_lang + ':' + parents_rel.src_word).values
lang_words.shape
lang_influences = {}
for lang_word in tqdm(lang_words):
edges = recurse_unfold(lang_word, parents_map) + recurse_unfold(lang_word, children_map)
for edge in edges:
lang = edge[0].split(':')[0]
if lang not in lang_influences:
lang_influences[lang] = []
lang_influences[lang].append(edge)
lang_influences_ord = { k: sorted(v, key=len, reverse=True) for k, v in tqdm(lang_influences.items()) }
n_samples = 50
lang_samples = {}
for lang in tqdm(langs):
if lang in lang_influences:
top_starters = [lang_word[0].split(':')[1] for lang_word in lang_influences_ord[lang]]
lang_samples[lang] = [top_starters[i] for i in sorted(np.unique(top_starters, return_index=True)[1])][:n_samples]
save(lang_samples, 'lang_samples')
relation_groups = relations.groupby(relations.src_lang).apply(lambda x: x.groupby(x.to_lang).src_word.apply(list))
relation_groups.shape
relation_samples = {}
for _, (src, to, words) in tqdm(relation_groups.reset_index().iterrows()):
relation = '{}{}'.format(src, to)
relation_samples[relation] = np.random.choice(words, min(n_samples, len(words))).tolist()
save(relation_samples, 'relation_samples')
```
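The `recurse` helper above is easiest to understand on a toy mapping. A self-contained sketch (same function, made-up parent map):

```python
def recurse(lang_word, mapping, seen=None):
    # Depth-first walk of the mapping, returning a nested (node, subtree)
    # list; `seen` guards against cycles in the etymology graph.
    if seen is None:
        seen = set()
    if lang_word in seen:
        return []
    seen.add(lang_word)
    return [(p, recurse(p, mapping, seen.copy()))
            for p in mapping.get(lang_word, [])]

# Toy parent chain: eng:hound <- enm:hound <- ang:hund
toy_parents = {'eng:hound': ['enm:hound'], 'enm:hound': ['ang:hund']}
tree = recurse('eng:hound', toy_parents)
# -> [('enm:hound', [('ang:hund', [])])]
```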
##### Copyright 2018 The TensorFlow Authors.
```
#@title Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
```
# Image classification
<table class="tfo-notebook-buttons" align="left">
<td>
<a target="_blank" href="https://www.tensorflow.org/tutorials/images/classification"><img src="https://www.tensorflow.org/images/tf_logo_32px.png" />View on TensorFlow.org</a>
</td>
<td>
<a target="_blank" href="https://colab.research.google.com/github/tensorflow/docs/blob/master/site/en/tutorials/images/classification.ipynb"><img src="https://www.tensorflow.org/images/colab_logo_32px.png" />Run in Google Colab</a>
</td>
<td>
<a target="_blank" href="https://github.com/tensorflow/docs/blob/master/site/en/tutorials/images/classification.ipynb"><img src="https://www.tensorflow.org/images/GitHub-Mark-32px.png" />View source on GitHub</a>
</td>
<td>
<a href="https://storage.googleapis.com/tensorflow_docs/docs/site/en/tutorials/images/classification.ipynb"><img src="https://www.tensorflow.org/images/download_logo_32px.png" />Download notebook</a>
</td>
</table>
This tutorial shows how to classify images of flowers. It creates an image classifier using a `keras.Sequential` model, and loads data using `preprocessing.image_dataset_from_directory`. You will gain practical experience with the following concepts:
* Efficiently loading a dataset off disk.
* Identifying overfitting and applying techniques to mitigate it, including data augmentation and Dropout.
This tutorial follows a basic machine learning workflow:
1. Examine and understand data
2. Build an input pipeline
3. Build the model
4. Train the model
5. Test the model
6. Improve the model and repeat the process
## Import TensorFlow and other libraries
```
import matplotlib.pyplot as plt
import numpy as np
import os
import PIL
import tensorflow as tf
from tensorflow import keras
from tensorflow.keras import layers
from tensorflow.keras.models import Sequential
```
## Download and explore the dataset
This tutorial uses a dataset of about 3,700 photos of flowers. The dataset contains 5 sub-directories, one per class:
```
flower_photo/
daisy/
dandelion/
roses/
sunflowers/
tulips/
```
```
import pathlib
dataset_url = "https://storage.googleapis.com/download.tensorflow.org/example_images/flower_photos.tgz"
data_dir = tf.keras.utils.get_file('flower_photos', origin=dataset_url, untar=True)
data_dir = pathlib.Path(data_dir)
```
After downloading, you should now have a copy of the dataset available. There are 3,670 total images:
```
image_count = len(list(data_dir.glob('*/*.jpg')))
print(image_count)
```
Here are some roses:
```
roses = list(data_dir.glob('roses/*'))
PIL.Image.open(str(roses[0]))
PIL.Image.open(str(roses[1]))
```
And some tulips:
```
tulips = list(data_dir.glob('tulips/*'))
PIL.Image.open(str(tulips[0]))
PIL.Image.open(str(tulips[1]))
```
# Load using keras.preprocessing
Let's load these images off disk using the helpful [image_dataset_from_directory](https://www.tensorflow.org/api_docs/python/tf/keras/preprocessing/image_dataset_from_directory) utility. This will take you from a directory of images on disk to a `tf.data.Dataset` in just a couple lines of code. If you like, you can also write your own data loading code from scratch by visiting the [load images](https://www.tensorflow.org/tutorials/load_data/images) tutorial.
## Create a dataset
Define some parameters for the loader:
```
batch_size = 32
img_height = 180
img_width = 180
```
It's good practice to use a validation split when developing your model. We will use 80% of the images for training, and 20% for validation.
```
train_ds = tf.keras.preprocessing.image_dataset_from_directory(
data_dir,
validation_split=0.2,
subset="training",
seed=123,
image_size=(img_height, img_width),
batch_size=batch_size)
val_ds = tf.keras.preprocessing.image_dataset_from_directory(
data_dir,
validation_split=0.2,
subset="validation",
seed=123,
image_size=(img_height, img_width),
batch_size=batch_size)
```
You can find the class names in the `class_names` attribute on these datasets. These correspond to the directory names in alphabetical order.
```
class_names = train_ds.class_names
print(class_names)
```
## Visualize the data
Here are the first 9 images from the training dataset.
```
import matplotlib.pyplot as plt
plt.figure(figsize=(10, 10))
for images, labels in train_ds.take(1):
for i in range(9):
ax = plt.subplot(3, 3, i + 1)
plt.imshow(images[i].numpy().astype("uint8"))
plt.title(class_names[labels[i]])
plt.axis("off")
```
You will train a model using these datasets by passing them to `model.fit` in a moment. If you like, you can also manually iterate over the dataset and retrieve batches of images:
```
for image_batch, labels_batch in train_ds:
print(image_batch.shape)
print(labels_batch.shape)
break
```
The `image_batch` is a tensor of the shape `(32, 180, 180, 3)`. This is a batch of 32 images of shape `180x180x3` (the last dimension refers to the RGB color channels). The `labels_batch` is a tensor of the shape `(32,)`; these are the labels corresponding to the 32 images.
You can call `.numpy()` on the `image_batch` and `labels_batch` tensors to convert them to a `numpy.ndarray`.
## Configure the dataset for performance
Let's make sure to use buffered prefetching so we can yield data from disk without having I/O become blocking. These are two important methods you should use when loading data.
`Dataset.cache()` keeps the images in memory after they're loaded off disk during the first epoch. This will ensure the dataset does not become a bottleneck while training your model. If your dataset is too large to fit into memory, you can also use this method to create a performant on-disk cache.
`Dataset.prefetch()` overlaps data preprocessing and model execution while training.
Interested readers can learn more about both methods, as well as how to cache data to disk in the [data performance guide](https://www.tensorflow.org/guide/data_performance#prefetching).
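The overlap that `Dataset.prefetch` buys can be illustrated with a plain-Python analogy: a background thread fills a bounded queue while the consumer works, so production and consumption proceed concurrently. (A toy sketch, not TensorFlow's actual implementation.)

```python
import queue
import threading

def prefetch(generator, buffer_size=2):
    # Produce items on a background thread into a bounded queue so the
    # consumer overlaps with production -- the idea behind Dataset.prefetch.
    q = queue.Queue(maxsize=buffer_size)
    done = object()  # sentinel marking the end of the stream

    def producer():
        for item in generator:
            q.put(item)
        q.put(done)

    threading.Thread(target=producer, daemon=True).start()
    while True:
        item = q.get()
        if item is done:
            return
        yield item

batches = list(prefetch(iter(range(5))))  # -> [0, 1, 2, 3, 4]
```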
```
AUTOTUNE = tf.data.experimental.AUTOTUNE
train_ds = train_ds.cache().shuffle(1000).prefetch(buffer_size=AUTOTUNE)
val_ds = val_ds.cache().prefetch(buffer_size=AUTOTUNE)
```
## Standardize the data
The RGB channel values are in the `[0, 255]` range. This is not ideal for a neural network; in general you should seek to make your input values small. Here, we will standardize values to be in the `[0, 1]` range by using a Rescaling layer.
```
normalization_layer = layers.experimental.preprocessing.Rescaling(1./255)
```
Note: The Keras Preprocessing utilities and layers introduced in this section are currently experimental and may change.
There are two ways to use this layer. You can apply it to the dataset by calling map:
```
normalized_ds = train_ds.map(lambda x, y: (normalization_layer(x), y))
image_batch, labels_batch = next(iter(normalized_ds))
first_image = image_batch[0]
# Notice the pixels values are now in `[0,1]`.
print(np.min(first_image), np.max(first_image))
```
Or, you can include the layer inside your model definition, which can simplify deployment. We will use the second approach here.
Note: we previously resized images using the `image_size` argument of `image_dataset_from_directory`. If you want to include the resizing logic in your model as well, you can use the [Resizing](https://www.tensorflow.org/api_docs/python/tf/keras/layers/experimental/preprocessing/Resizing) layer.
# Create the model
The model consists of three convolution blocks with a max pool layer in each of them. There's a fully connected layer with 128 units on top of it that is activated by a `relu` activation function. This model has not been tuned for high accuracy; the goal of this tutorial is to show a standard approach.
```
num_classes = 5
model = Sequential([
layers.experimental.preprocessing.Rescaling(1./255, input_shape=(img_height, img_width, 3)),
layers.Conv2D(16, 3, padding='same', activation='relu'),
layers.MaxPooling2D(),
layers.Conv2D(32, 3, padding='same', activation='relu'),
layers.MaxPooling2D(),
layers.Conv2D(64, 3, padding='same', activation='relu'),
layers.MaxPooling2D(),
layers.Flatten(),
layers.Dense(128, activation='relu'),
layers.Dense(num_classes)
])
```
## Compile the model
For this tutorial, choose the `optimizers.Adam` optimizer and `losses.SparseCategoricalCrossentropy` loss function. To view training and validation accuracy for each training epoch, pass the `metrics` argument.
```
model.compile(optimizer='adam',
loss=tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True),
metrics=['accuracy'])
```
## Model summary
View all the layers of the network using the model's `summary` method:
```
model.summary()
```
## Train the model
```
epochs=10
history = model.fit(
train_ds,
validation_data=val_ds,
epochs=epochs
)
```
## Visualize training results
Create plots of loss and accuracy on the training and validation sets.
```
acc = history.history['accuracy']
val_acc = history.history['val_accuracy']
loss=history.history['loss']
val_loss=history.history['val_loss']
epochs_range = range(epochs)
plt.figure(figsize=(8, 8))
plt.subplot(1, 2, 1)
plt.plot(epochs_range, acc, label='Training Accuracy')
plt.plot(epochs_range, val_acc, label='Validation Accuracy')
plt.legend(loc='lower right')
plt.title('Training and Validation Accuracy')
plt.subplot(1, 2, 2)
plt.plot(epochs_range, loss, label='Training Loss')
plt.plot(epochs_range, val_loss, label='Validation Loss')
plt.legend(loc='upper right')
plt.title('Training and Validation Loss')
plt.show()
```
As you can see from the plots, training accuracy and validation accuracy are off by a large margin, and the model has achieved only around 60% accuracy on the validation set.
Let's look at what went wrong and try to increase the overall performance of the model.
## Overfitting
In the plots above, the training accuracy is increasing linearly over time, whereas validation accuracy stalls around 60% in the training process. Also, the difference in accuracy between training and validation accuracy is noticeable—a sign of [overfitting](https://www.tensorflow.org/tutorials/keras/overfit_and_underfit).
When there are only a few training examples, the model sometimes learns noise or unwanted details from them, to an extent that negatively impacts the performance of the model on new examples. This phenomenon is known as overfitting: the model will have a difficult time generalizing to a new dataset.
There are multiple ways to fight overfitting in the training process. In this tutorial, you'll use *data augmentation* and add *Dropout* to your model.
## Data augmentation
Overfitting generally occurs when there are a small number of training examples. [Data augmentation](https://www.tensorflow.org/tutorials/images/data_augmentation) takes the approach of generating additional training data from your existing examples by augmenting them with random transformations that yield believable-looking images. This helps expose the model to more aspects of the data and generalize better.
We will implement data augmentation using experimental [Keras Preprocessing Layers](https://www.tensorflow.org/api_docs/python/tf/keras/layers/experimental/preprocessing/?version=nightly). These can be included inside your model like other layers, and run on the GPU.
```
data_augmentation = keras.Sequential(
[
layers.experimental.preprocessing.RandomFlip("horizontal",
input_shape=(img_height,
img_width,
3)),
layers.experimental.preprocessing.RandomRotation(0.1),
layers.experimental.preprocessing.RandomZoom(0.1),
]
)
```
Let's visualize what a few augmented examples look like by applying data augmentation to the same image several times:
```
plt.figure(figsize=(10, 10))
for images, _ in train_ds.take(1):
for i in range(9):
augmented_images = data_augmentation(images)
ax = plt.subplot(3, 3, i + 1)
plt.imshow(augmented_images[0].numpy().astype("uint8"))
plt.axis("off")
```
We will use data augmentation to train a model in a moment.
## Dropout
Another technique to reduce overfitting is to introduce [Dropout](https://developers.google.com/machine-learning/glossary#dropout_regularization) to the network, a form of *regularization*.
When you apply Dropout to a layer it randomly drops out (by setting the activation to zero) a number of output units from the layer during the training process. Dropout takes a fractional number as its input value, in the form such as 0.1, 0.2, 0.4, etc. This means dropping out 10%, 20% or 40% of the output units randomly from the applied layer.
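Numerically, Keras uses "inverted" dropout: surviving units are rescaled by 1/(1 - rate) during training so the expected activation is unchanged, and the layer is an identity at inference. A NumPy sketch of that behaviour (an illustration, not Keras's implementation):

```python
import numpy as np

def dropout(x, rate, training, rng):
    # Inverted dropout: zero a `rate` fraction of units and rescale the
    # rest so the expected output equals the input; identity at inference.
    if not training:
        return x
    keep = rng.random(x.shape) >= rate
    return x * keep / (1.0 - rate)

rng = np.random.default_rng(0)
x = np.ones(100_000)
y = dropout(x, rate=0.2, training=True, rng=rng)
# About 20% of units are zeroed, yet the mean stays close to 1.0
```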
Let's create a new neural network using `layers.Dropout`, then train it using augmented images.
```
model = Sequential([
data_augmentation,
layers.experimental.preprocessing.Rescaling(1./255),
layers.Conv2D(16, 3, padding='same', activation='relu'),
layers.MaxPooling2D(),
layers.Conv2D(32, 3, padding='same', activation='relu'),
layers.MaxPooling2D(),
layers.Conv2D(64, 3, padding='same', activation='relu'),
layers.MaxPooling2D(),
layers.Dropout(0.2),
layers.Flatten(),
layers.Dense(128, activation='relu'),
layers.Dense(num_classes)
])
```
## Compile and train the model
```
model.compile(optimizer='adam',
loss=tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True),
metrics=['accuracy'])
model.summary()
epochs = 15
history = model.fit(
train_ds,
validation_data=val_ds,
epochs=epochs
)
```
## Visualize training results
After applying data augmentation and Dropout, there is less overfitting than before, and the training and validation accuracies are more closely aligned.
```
acc = history.history['accuracy']
val_acc = history.history['val_accuracy']
loss = history.history['loss']
val_loss = history.history['val_loss']
epochs_range = range(epochs)
plt.figure(figsize=(8, 8))
plt.subplot(1, 2, 1)
plt.plot(epochs_range, acc, label='Training Accuracy')
plt.plot(epochs_range, val_acc, label='Validation Accuracy')
plt.legend(loc='lower right')
plt.title('Training and Validation Accuracy')
plt.subplot(1, 2, 2)
plt.plot(epochs_range, loss, label='Training Loss')
plt.plot(epochs_range, val_loss, label='Validation Loss')
plt.legend(loc='upper right')
plt.title('Training and Validation Loss')
plt.show()
```
## Predict on new data
Finally, let's use our model to classify an image that wasn't included in the training or validation sets.
Note: Data augmentation and Dropout layers are inactive at inference time.
```
sunflower_url = "https://storage.googleapis.com/download.tensorflow.org/example_images/592px-Red_sunflower.jpg"
sunflower_path = tf.keras.utils.get_file('Red_sunflower', origin=sunflower_url)
img = keras.preprocessing.image.load_img(
sunflower_path, target_size=(img_height, img_width)
)
img_array = keras.preprocessing.image.img_to_array(img)
img_array = tf.expand_dims(img_array, 0) # Create a batch
predictions = model.predict(img_array)
score = tf.nn.softmax(predictions[0])
print(
"This image most likely belongs to {} with a {:.2f} percent confidence."
.format(class_names[np.argmax(score)], 100 * np.max(score))
)
```
```
%matplotlib inline
```
# Recognizing hand-written digits
An example showing how scikit-learn can be used to recognize images of
hand-written digits.
This example is commented in the
`tutorial section of the user manual <introduction>`.
```
# Standard scientific Python imports
import matplotlib.pyplot as plt
import numpy as np
# Import datasets, classifiers and performance metrics
from sklearn import datasets, svm, metrics
from sklearn.neighbors import KNeighborsClassifier, NearestNeighbors
from ipywidgets import interact_manual
# The digits dataset
digits = datasets.load_digits()
# The data that we are interested in is made of 8x8 images of digits, let's
# have a look at the first 4 images, stored in the `images` attribute of the
# dataset. If we were working from image files, we could load them using
# matplotlib.pyplot.imread. Note that each image must have the same size. For these
# images, we know which digit they represent: it is given in the 'target' of
# the dataset.
images_and_labels = list(zip(digits.images, digits.target))
for index, (image, label) in enumerate(images_and_labels[:4]):
plt.subplot(2, 4, index + 1)
plt.axis('off')
plt.imshow(image, cmap=plt.cm.gray_r, interpolation='nearest')
plt.title('Training: %i' % label)
plt.show()
# To apply a classifier on this data, we need to flatten the image, to
# turn the data into a (samples, features) matrix:
n_samples = len(digits.images)
data = digits.images.reshape((n_samples, -1))
# Create a classifier: a support vector classifier
classifier = svm.SVC(gamma=0.001)
# We learn the digits on the first half of the digits
classifier.fit(data[:n_samples // 2], digits.target[:n_samples // 2])
# Now predict the value of the digit on the second half:
expected = digits.target[n_samples // 2:]
predicted = classifier.predict(data[n_samples // 2:])
print("Classification report for classifier %s:\n%s\n"
% (classifier, metrics.classification_report(expected, predicted)))
print("Confusion matrix:\n%s" % metrics.confusion_matrix(expected, predicted))
images_and_predictions = list(zip(digits.images[n_samples // 2:], predicted))
for index, (image, prediction) in enumerate(images_and_predictions[:4]):
plt.subplot(2, 4, index + 5)
plt.axis('off')
plt.imshow(image, cmap=plt.cm.gray_r, interpolation='nearest')
plt.title('Prediction: %i' % prediction)
plt.show()
```
## Nearest Neighbors Classifier
```
neigh = None
@interact_manual(k=(1, 20))
def make_nn_classifier(k):
global neigh
neigh = KNeighborsClassifier(n_neighbors=k)
print('training...')
neigh.fit(data[:n_samples // 2], digits.target[:n_samples // 2])
print('done!')
print(neigh)
# Now predict the value of the digit on the second half:
expected = digits.target[n_samples // 2:]
predicted = neigh.predict(data[n_samples // 2:])
print("Classification report for classifier %s:\n%s\n"
% (neigh, metrics.classification_report(expected, predicted)))
print("Confusion matrix:\n%s" % metrics.confusion_matrix(expected, predicted))
images_and_predictions = list(zip(digits.images[n_samples // 2:], predicted))
for index, (image, prediction) in enumerate(images_and_predictions[:4]):
plt.subplot(2, 4, index + 5)
plt.axis('off')
plt.imshow(image, cmap=plt.cm.gray_r, interpolation='nearest')
plt.title('Prediction: %i' % prediction)
plt.show()
```
## Unsupervised KNN
```
neigh = None
kk = None
rr = None
algo = None
@interact_manual(k=(1, 20),
r=(0, 50, 0.1),
algorithm=['auto', 'ball_tree', 'kd_tree', 'brute'])
def make_knn(k, r, algorithm):
global neigh
global kk, rr, algo
kk, rr, algo = k, r, algorithm
neigh = NearestNeighbors(n_neighbors=k, radius=r, algorithm=algorithm)
print('training...')
neigh.fit(data[:n_samples // 2])
print('done!')
print(neigh)
# Now predict the value of the digit on the second half:
distances, indices = neigh.kneighbors(data[n_samples // 2:], 4)
distances
indices
neigh.kneighbors_graph(data[n_samples // 2:], n_neighbors=kk).toarray()
help(neigh.kneighbors_graph)
```
```
import sys, csv
import unicodecsv
import pymongo
import time
import argparse
def parse_number(num, default):
try:
return int(num)
except ValueError:
try:
return float(num)
except ValueError:
return default
def read_geonames_csv(file_path):
geonames_fields=[
'geonameid',
'name',
'asciiname',
'alternatenames',
'latitude',
'longitude',
'feature class',
'feature code',
'country code',
'cc2',
'admin1 code',
'admin2 code',
'admin3 code',
'admin4 code',
'population',
'elevation',
'dem',
'timezone',
'modification date',
]
#Loading geonames data may cause errors without this line:
csv.field_size_limit(2**32)
with open(file_path, 'rb') as f:
reader = unicodecsv.DictReader(f,
fieldnames=geonames_fields,
encoding='utf-8',
delimiter='\t',
quoting=csv.QUOTE_NONE)
for d in reader:
d['population'] = parse_number(d['population'], 0)
d['latitude'] = parse_number(d['latitude'], 0)
d['longitude'] = parse_number(d['longitude'], 0)
d['elevation'] = parse_number(d['elevation'], 0)
if len(d['alternatenames']) > 0:
d['alternatenames'] = d['alternatenames'].split(',')
else:
d['alternatenames'] = []
yield d
parser = argparse.ArgumentParser()
parser.add_argument(
"--mongo_url", default='localhost'
)
parser.add_argument(
"--db_name", default='geonames'
)
args, unknown = parser.parse_known_args()
# Not in iPython notebook:
# args = parser.parse_args()
print "This takes me about a half hour to run on my machine..."
# pymongo.Connection is deprecated; use MongoClient instead
c = pymongo.MongoClient(args.mongo_url)
db = c[args.db_name]
collection = db.cities1000
collection.drop()
for i, geoname in enumerate(read_geonames_csv('../cities1001.txt')):
total_row_estimate = 10000000
if i % (total_row_estimate // 10) == 0:
print(i, '/', total_row_estimate, '+ geonames imported')
collection.insert_one(geoname)
collection.create_index('name')
collection.create_index('alternatenames')
# Test that the collection contains some of the locations we would expect,
# and that it completes in a reasonable amount of time.
# TODO: Run the geoname extractor here.
start_time = time.time()
test_names = ['El Tarter', 'Riu Valira del Nord', 'Bosque de Soldeu', 'New York', 'Africa', 'Canada', 'Kirkland']
query = collection.find({
'$or' : [
{
'name' : { '$in' : test_names }
},
{
'alternatenames' : { '$in' : test_names }
}
]
})
found_names = set()
for geoname in query:
found_names.add(geoname['name'])
for alt in geoname['alternatenames']:
found_names.add(alt)
difference = set(test_names) - found_names
if difference != set():
print("Test failed!")
print("Missing names:", difference)
if time.time() - start_time > 15:
print("Warning: query took over 15 seconds.")
# That's fine.
c.close()
```
| github_jupyter |
```
%matplotlib widget
%config InlineBackend.figure_format = 'svg'
import addict
import datetime
import os
import pyproj
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
from importlib import reload
from okada_wrapper import dc3dwrapper
import celeri
# Plotting the global model is much much faster with tex fonts turned off
plt.rcParams['text.usetex'] = False
RUN_NAME = datetime.datetime.now().strftime("%y%m%d%H%M%S") + os.sep
command_file_name = './data/western_north_america/western_north_america_command.json'
# command_file_name = './data/global/global_command.json'
command, segment, block, meshes, station, mogi, sar = celeri.read_data(command_file_name)
station = celeri.process_station(station, command)
segment = celeri.process_segment(segment, command, meshes)
sar = celeri.process_sar(sar, command)
closure = celeri.assign_block_labels(segment, station, block, mogi, sar)
# celeri.plot_block_labels(segment, block, station, closure)
assembly = addict.Dict()
operators = addict.Dict()
assembly = celeri.merge_geodetic_data(assembly, station, sar)
assembly, operators.block_motion_constraints = celeri.block_constraints(assembly, block, command)
assembly, operators.slip_rate_constraints = celeri.slip_rate_constraints(assembly, segment, block, command)
def fault_parameters_to_okada_format(sx1, sy1, sx2, sy2, dip, D, bd):
"""
This function takes fault trace, dip, and locking depth information
and calculates the anchor coordinates, length, width and strike of
the fault plane following Okada (1985).
Arguments:
sx1 : x coord of fault trace endpoint 1
sy1 : y coord of fault trace endpoint 1
sx2 : x coord of fault trace endpoint 2
sy2 : y coord of fault trace endpoint 2
dip : dip of fault plane (degrees)
D : fault locking depth
bd : burial depth (top "locking depth")
Returned variables:
strike : strike of fault plane
L : fault length
W : fault width
ofx : x coord of fault anchor
ofy : y coord of fault anchor
ofxe : x coord of other buried corner
ofye : y coord of other buried corner
tfx : x coord of fault anchor (top relative)
tfy : y coord of fault anchor (top relative)
tfxe : x coord of other buried corner (top relative)
tfye : y coord of other buried corner (top relative)
"""
okada_parameters = addict.Dict()
okada_parameters.strike = np.arctan2(sy1 - sy2, sx1 - sx2) + np.pi # This is by convention
okada_parameters.L = np.sqrt((sx2 - sx1)**2 + (sy2 - sy1)**2)
okada_parameters.W = (D - bd) / np.sin(np.deg2rad(dip))
# Calculate fault segment anchor and other buried point
okada_parameters.ofx = sx1 + D / np.tan(np.deg2rad(dip)) * np.sin(np.deg2rad(okada_parameters.strike))
okada_parameters.ofy = sy1 - D / np.tan(np.deg2rad(dip)) * np.cos(np.deg2rad(okada_parameters.strike))
okada_parameters.ofxe = sx2 + D / np.tan(np.deg2rad(dip)) * np.sin(np.deg2rad(okada_parameters.strike))
okada_parameters.ofye = sy2 - D / np.tan(np.deg2rad(dip)) * np.cos(np.deg2rad(okada_parameters.strike))
# Calculate fault segment anchor and other buried point (top relative)
okada_parameters.tfx = sx1 + bd / np.tan(np.deg2rad(dip)) * np.sin(np.deg2rad(okada_parameters.strike))
okada_parameters.tfy = sy1 - bd / np.tan(np.deg2rad(dip)) * np.cos(np.deg2rad(okada_parameters.strike))
okada_parameters.tfxe = sx2 + bd / np.tan(np.deg2rad(dip)) * np.sin(np.deg2rad(okada_parameters.strike))
okada_parameters.tfye = sy2 - bd / np.tan(np.deg2rad(dip)) * np.cos(np.deg2rad(okada_parameters.strike))
return okada_parameters
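# Quick standalone check of the down-dip width formula above: for a vertical
# fault (dip = 90 degrees) the width W reduces to locking depth minus burial
# depth, since sin(90 deg) = 1. (Illustrative values only.)
import numpy as np
check_D, check_bd = 15.0, 5.0
check_W = (check_D - check_bd) / np.sin(np.deg2rad(90.0))
assert np.isclose(check_W, 10.0)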
# def GetElasticPartials(segment, stations):
"""
Calculates the elastic displacement partial derivatives based on the Okada
formulation, using the source and receiver geometries defined in the
segment and station dictionaries. Before calculating the partials for
each segment, a local oblique Mercator projection is performed.
"""
n_stations = len(station)
n_segments = len(segment)
G = np.zeros((3 * n_stations, 3 * n_segments))
v1 = np.zeros(n_segments)
v2 = np.zeros(n_segments)
v3 = np.zeros(n_segments)
# Loop through each segment and calculate displacements
# for i in range(n_segments):
for i in range(1):
print(i)
# Local map projection
projection = celeri.get_segment_oblique_projection(segment.lon1[i], segment.lat1[i], segment.lon2[i], segment.lat2[i])
station_x, station_y = projection(station.lon, station.lat)
segment_x1, segment_y1 = projection(segment.lon1[i], segment.lat1[i])
segment_x2, segment_y2 = projection(segment.lon2[i], segment.lat2[i])
# Calculate fault parameters in Okada form
# okada_parameters = fault_parameters_to_okada_format(f.px1, f.py1, f.px2, f.py2, segment.dip[i], segment.locking_depth[i], segment.burial_depth[i])
okada_parameters = fault_parameters_to_okada_format(segment_x1, segment_y1,
segment_x2, segment_y2, segment.dip[i], segment.locking_depth[i],
segment.burial_depth[i])
# Translate observation coordinates relative to fault anchor
# Rotation observation coordinates to remove strike.
# rot = np.array([[cos(theta), -sin(theta)], [sin(theta), cos(theta)]])
rotation_matrix = np.array([[np.cos(np.deg2rad(okada_parameters.strike)), -np.sin(np.deg2rad(okada_parameters.strike))],
[np.sin(np.deg2rad(okada_parameters.strike)), np.cos(np.deg2rad(okada_parameters.strike))]])
# % Displacements due to unit slip components
# [ves vns vus...
# ved vnd vud...
# vet vnt vut] = okada_partials(ofx, ofy, strike, f.lDep, deg_to_rad(f.dip), L, W, 1, 1, 1, s.fpx, s.fpy, command.poissons_ratio);
# v1{i} = reshape([ves vns vus]', 3*nSta, 1)
# v2{i} = reshape(sign(90 - f.dip).*[ved vnd vud]', 3*nSta, 1)
# v3{i} = reshape((f.dip - 90 == 0).*[vet vnt vut]', 3*nSta, 1)
# v1{i} = xyz2enumat((v1{i}), -f.strike + 90)
# v2{i} = xyz2enumat((v2{i}), -f.strike + 90)
# v3{i} = xyz2enumat((v3{i}), -f.strike + 90)
# Place cell arrays into the partials matrix
# G[:, 0::3] = cell2mat(v1)
# G[:, 1::3] = cell2mat(v2)
# G[:, 2::3] = cell2mat(v3)
# TODO: Locking depths are currently meters rather than KM in inputfiles!!!
# TODO: Do I really need two rotation matrices?
# TODO: Is there another XYZ to ENU conversion needed?
i = 0
segment.locking_depth.values[i] *= celeri.KM2M
segment.burial_depth.values[i] *= celeri.KM2M
# Project coordinates to flat space using a local oblique Mercator projection
projection = celeri.get_segment_oblique_projection(segment.lon1[i], segment.lat1[i], segment.lon2[i], segment.lat2[i], True)
station_x, station_y = projection(station.lon, station.lat)
segment_x1, segment_y1 = projection(segment.lon1[i], segment.lat1[i])
segment_x2, segment_y2 = projection(segment.lon2[i], segment.lat2[i])
# Calculate geometric fault parameters
segment_strike = np.arctan2(segment_y2 - segment_y1, segment_x2 - segment_x1) # radians
segment_length = np.sqrt((segment_y2 - segment_y1)**2.0 + (segment_x2 - segment_x1)**2.0)
segment_up_dip_width = (segment.locking_depth[i] - segment.burial_depth[i]) / np.sin(np.deg2rad(segment.dip[i]))
# Translate stations and segment so that segment mid-point is at the origin
segment_x_mid = (segment_x1 + segment_x2) / 2.0
segment_y_mid = (segment_y1 + segment_y2) / 2.0
station_x -= segment_x_mid
station_y -= segment_y_mid
segment_x1 -= segment_x_mid
segment_x2 -= segment_x_mid
segment_y1 -= segment_y_mid
segment_y2 -= segment_y_mid
# Unrotate coordinates to eliminate strike, segment will lie along y = 0
# np.einsum guidance from: https://stackoverflow.com/questions/26289972/use-numpy-to-multiply-a-matrix-across-an-array-of-points
rotation_matrix = np.array([[np.cos(segment_strike), -np.sin(segment_strike)],
[np.sin(segment_strike), np.cos(segment_strike)]])
un_rotation_matrix = np.array([[np.cos(-segment_strike), -np.sin(-segment_strike)],
[np.sin(-segment_strike), np.cos(-segment_strike)]])
station_x_rotated, station_y_rotated = np.hsplit(np.einsum("ij,kj->ik", np.dstack((station_x, station_y))[0], un_rotation_matrix), 2)
segment_x1_rotated, segment_y1_rotated = un_rotation_matrix.dot([segment_x1, segment_y1])
segment_x2_rotated, segment_y2_rotated = un_rotation_matrix.dot([segment_x2, segment_y2])
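# Standalone sanity check of the rotation convention used here: the "unrotate"
# matrix R(-theta) is the inverse of R(theta), so applying both in sequence
# recovers the original coordinates. (Illustrative angle only.)
import numpy as np
check_theta = 0.7
check_rot = np.array([[np.cos(check_theta), -np.sin(check_theta)],
                      [np.sin(check_theta), np.cos(check_theta)]])
check_unrot = np.array([[np.cos(-check_theta), -np.sin(-check_theta)],
                        [np.sin(-check_theta), np.cos(-check_theta)]])
assert np.allclose(check_unrot @ check_rot, np.eye(2))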
# Elastic displacements due to fault slip from Okada 1985
alpha = (command.material_lambda + command.material_mu) / (command.material_lambda + 2 * command.material_mu)
u_x = np.zeros_like(station_x)
u_y = np.zeros_like(station_x)
u_z = np.zeros_like(station_x)
for j in range(len(station)):
_, u, _ = dc3dwrapper(alpha, # (lambda + mu) / (lambda + 2 * mu)
[station_x_rotated[j], station_y_rotated[j], 0], # (meters) observation point
-segment.locking_depth[i], # (meters) depth of the fault origin
segment.dip[i], # (degrees) the dip-angle of the rectangular dislocation surface
[-segment_length / 2, segment_length / 2], # (meters) the along-strike range of the surface (al1,al2 in the original)
[0, segment_up_dip_width], # (meters) along-dip range of the surface (aw1, aw2 in the original)
[1.0, 0.0, 0.0]) # (meters) strike-slip, dip-slip, tensile-slip
u_x[j] = u[0]
u_y[j] = u[1]
u_z[j] = u[2]
# Rotate displacement to account for projected fault strike
u_x_un_rotated, u_y_un_rotated = np.hsplit(np.einsum("ij,kj->ik", np.dstack((u_x, u_y))[0], rotation_matrix), 2)
plt.figure()
plt.plot([segment.lon1[i], segment.lon2[i]], [segment.lat1[i], segment.lat2[i]], "-r")
plt.plot(station.lon, station.lat, '.b', markersize=1)
plt.xlim([235, 255])
plt.ylim([30, 50])
plt.gca().set_aspect('equal', adjustable='box')
plt.title("Positions: longitude and latitude")
plt.show()
plt.figure()
plt.plot([segment_x1, segment_x2], [segment_y1, segment_y2], "-r")
plt.plot(station_x, station_y, '.b', markersize=1)
plt.xlim([-1e6, 1e6])
plt.ylim([-1e6, 1e6])
plt.gca().set_aspect('equal', adjustable='box')
plt.title("Positions: projected and translated")
plt.show()
plt.figure()
plt.plot([segment_x1_rotated, segment_x2_rotated], [segment_y1_rotated, segment_y2_rotated], "-r")
plt.plot(station_x_rotated, station_y_rotated, '.b', markersize=1)
plt.xlim([-1e6, 1e6])
plt.ylim([-1e6, 1e6])
plt.gca().set_aspect('equal', adjustable='box')
plt.title("Positions: projected, translated, and rotated")
plt.show()
plt.figure()
plt.plot([segment_x1_rotated, segment_x2_rotated], [segment_y1_rotated, segment_y2_rotated], "-r")
# plt.plot(station_x_rotated, station_y_rotated, '.b', markersize=1)
plt.quiver(station_x_rotated, station_y_rotated, u_x, u_y, scale=1e-1, scale_units='inches')
plt.xlim([-1e6, 1e6])
plt.ylim([-1e6, 1e6])
plt.gca().set_aspect('equal', adjustable='box')
plt.title("Displacements: projected, translated, and rotated")
plt.show()
plt.figure()
plt.plot([segment_x1, segment_x2], [segment_y1, segment_y2], "-r")
# plt.plot(station_x, station_y, '.b', markersize=1)
plt.quiver(station_x, station_y, u_x_un_rotated, u_y_un_rotated, scale=1e-1, scale_units='inches')
plt.xlim([-1e6, 1e6])
plt.ylim([-1e6, 1e6])
plt.gca().set_aspect('equal', adjustable='box')
plt.title("Displacements: projected and translated")
plt.show()
plt.figure()
plt.plot([segment.lon1[i], segment.lon2[i]], [segment.lat1[i], segment.lat2[i]], "-r")
# plt.plot(station_x, station_y, '.b', markersize=1)
plt.quiver(station.lon, station.lat, u_x_un_rotated, u_y_un_rotated, scale=1e-1, scale_units='inches')
plt.xlim([235, 255])
plt.ylim([30, 50])
plt.gca().set_aspect('equal', adjustable='box')
plt.title("Displacements: longitude and latitude")
plt.show()
plt.figure()
plt.plot(u_x)
plt.show()
print(u_x)
c[i]
# Zoom on region of interest
# Plot corners and achor point
# Plot strike-slip deformation
# Plot dip-slip deformation
# Plot tensile-slip_deformation
# Try Ben's test Okada
"""
Seven arguments are required:
alpha = (lambda + mu) / (lambda + 2 * mu)
xo = 3-vector representing the observation point (x, y, z in the original)
depth = the depth of the fault origin
dip = the dip-angle of the rectangular dislocation surface
strike_width = the along-strike range of the surface (al1,al2 in the original)
dip_width = the along-dip range of the surface (aw1, aw2 in the original)
dislocation = 3-vector representing the direction of motion on the surface
(DISL1 = strike-slip, DISL2 = dip-slip, DISL3 = opening/overlap)
"""
source_depth = 3.0e3 # meters
dip = 90 # degrees
alpha = (command.material_lambda + command.material_mu) / (command.material_lambda + 2 * command.material_mu)
n_obs = 100
x = np.linspace(-1, 1, n_obs)
y = np.linspace(-1, 1, n_obs)
x, y = np.meshgrid(x, y)
x = x.flatten()
y = y.flatten()
# x = x - x_fault
# y = y - yfault
ux = np.zeros_like(x)
uy = np.zeros_like(x)
uz = np.zeros_like(x)
for i in range(x.size):
_, u, _ = dc3dwrapper(alpha, # (lambda + mu) / (lambda + 2 * mu)
[x[i], y[i], 0], # (meters) observation point
source_depth, # (meters) depth of the fault origin
dip, # (degrees) the dip-angle of the rectangular dislocation surface
[-0.6, 0.6], # (meters) the along-strike range of the surface (al1,al2 in the original)
[0, 3.0], # (meters) along-dip range of the surface (aw1, aw2 in the original)
[1.0, 0.0, 0.0]) # (meters) strike-slip, dip-slip, tensile-slip
ux[i] = u[0]
uy[i] = u[1]
uz[i] = u[2]
plt.figure()
plt.quiver(x, y, ux, uy)
plt.gca().set_aspect('equal', adjustable='box')
plt.show()
# def okada_partials(xf, yf, strike, d, delta, L, W, U1, U2, U3, xs, ys, Pr):
# """
# Arguments:
# xf : x component of fault corner in Okada ref frame
# xf : y component of fault corner in Okada ref frame
# strike : is the azimuth of the fault (should always be zero for blocks_sp1 case)
# d : is the depth (-z) of the origin of the fault
# dip : is the inclination of the fault (measured clockwise from horizontal left, facing along the strike)
# L : is the along strike length of the fault plane
# W : is the down dip length of fault plane
# U1 : is the magnitude of the strike slip dislocation
# U2 : is the magnitude of the dip slip dislocation
# U3 : is the magnitude of the tensile dislocation
# xs : x component of station position
# ys : y component of station position
# Pr : is Poisson's ratio
# Returned variables:
# uxtot : total x displacement
# uytot : total y displacement
# uztot : total z displacement
# """
# uxtot = zeros(length(xs), 1);
# uytot = zeros(length(xr), 1);
# uztot = zeros(length(xr), 1);
# tol = 1.0e-4
# alpha = -2 * Pr + 1
# # Get station locations relative to fault anchor and remove strike
# xt = xs - xf
# yt = ys - yf
# alpha_rot = -strike
# xr, yr = rotate_xy_vec(xt, yt, alpha_rot)
# %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
# %% Calculate some values that are frequently needed %%
# %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
# sind = sin(delta);
# cosd = cos(delta);
# twopi = 2.*pi;
# %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
# %% Find displacements at each station %%
# %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
# x = xr;
# y = yr;
# p = y.*cosd + d.*sind;
# q = repmat(y.*sind - d.*cosd, 1, 4);
# zi = [x x x-L x-L];
# eta = [p p-W p p-W];
# ybar = eta.*cosd + q.*sind;
# dbar = eta.*sind - q.*cosd;
# R = sqrt(zi.^2 + eta.^2 + q.^2);
# X = sqrt(zi.^2 + q.^2);
# %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
# %% Calculate some more commonly used values %%
# %% These are introduced to reduce repetitive %%
# %% calculations. (see okada.m for Okada's %%
# %% form of the equations) %%
# %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
# Reta = R+eta;
# Rzi = R+zi;
# Rdbar = R+dbar;
# qdivR = q./R;
# phi = atan(zi.*eta./q./R);
# if (abs(cosd) >= tol)
# I5 = alpha * 2 ./ cosd * atan((eta.*(X + q.*cosd) ...
# + (X.*(R + X)*sind))./(zi.*(R+X).*cosd));
# I4 = alpha./cosd * (log(Rdbar) - sind.*log(Reta));
# I3 = alpha .* (1./cosd.*ybar./Rdbar - log(Reta) ) ...
# + sind./cosd.*I4;
# I2 = alpha .* (-log(Reta)) - I3;
# I1 = alpha .* (-1./cosd.*zi./Rdbar) - sind./cosd.*I5;
# else
# I5 = -alpha.*(zi.*sind)./Rdbar;
# I4 = -alpha.*q./Rdbar;
# I3 = alpha./2 .*(eta./Rdbar + ybar.*q./Rdbar.^2 - log(Reta));
# I2 = alpha .* (-log(Reta)) - I3;
# I1 = -alpha/2 .* (zi.*q)./Rdbar.^2;
# end
# uxs = -U1./twopi .* (zi.*qdivR./(Reta) + phi + I1.*sind);
# uxd = -U2./twopi .* (qdivR - I3.*sind.*cosd);
# uxt = U3./twopi .* (q.*qdivR./(Reta) - I3.*sind.^2);
# uys = -U1./twopi .* (ybar.*qdivR./(Reta) + q.*cosd./(Reta) + I2.*sind);
# uyd = -U2./twopi .* (ybar.*qdivR./(Rzi) + cosd.*phi - I1.*sind.*cosd);
# uyt = U3./twopi .* (-dbar.*qdivR./(Rzi) - sind.*(zi.*qdivR./(Reta) - phi) - I1.*sind.^2);
# uzs = -U1./twopi .* (dbar.*qdivR./(Reta) + q.*sind./(Reta) + I4.*sind);
# uzd = -U2./twopi .* (dbar.*qdivR./(Rzi) + sind.*phi - I5.*sind.*cosd);
# uzt = U3./twopi .* (ybar.*qdivR./(Rzi) + cosd.*(zi.*qdivR./(Reta) - phi) - I5.*sind.^2);
# uxstot = uxs(:, 1) - uxs(:, 2) - uxs(:, 3) + uxs(:, 4);
# uxdtot = uxd(:, 1) - uxd(:, 2) - uxd(:, 3) + uxd(:, 4);
# uxttot = uxt(:, 1) - uxt(:, 2) - uxt(:, 3) + uxt(:, 4);
# uystot = uys(:, 1) - uys(:, 2) - uys(:, 3) + uys(:, 4);
# uydtot = uyd(:, 1) - uyd(:, 2) - uyd(:, 3) + uyd(:, 4);
# uyttot = uyt(:, 1) - uyt(:, 2) - uyt(:, 3) + uyt(:, 4);
# uzstot = uzs(:, 1) - uzs(:, 2) - uzs(:, 3) + uzs(:, 4);
# uzdtot = uzd(:, 1) - uzd(:, 2) - uzd(:, 3) + uzd(:, 4);
# uzttot = uzt(:, 1) - uzt(:, 2) - uzt(:, 3) + uzt(:, 4);
# %% Rotate the station displacements back to include the effect of the strike
# [uxstot, uystot] = rotate_xy_vec(uxstot, uystot, -alpha_rot);
# [uxdtot, uydtot] = rotate_xy_vec(uxdtot, uydtot, -alpha_rot);
# [uxttot, uyttot] = rotate_xy_vec(uxttot, uyttot, -alpha_rot);
# return uxstot, uystot, uzstot, uxdtot, uydtot, uzdtot, uxttot, uyttot, uzttot
```
| github_jupyter |
```
import scanpy as sc
import pandas as pd
import numpy as np
import scipy as sp
from statsmodels.stats.multitest import multipletests
import matplotlib.pyplot as plt
import seaborn as sns
from anndata import AnnData
import os
import time
from gprofiler import GProfiler
# scTRS tools
import scTRS.util as util
import scTRS.data_loader as dl
import scTRS.method as md
# autoreload
%load_ext autoreload
%autoreload 2
# Read data
DATA_PATH='/n/holystore01/LABS/price_lab/Users/mjzhang/scTRS_data/single_cell_data/liver_atlas'
df_cluster = pd.read_csv(DATA_PATH+'/GSE124395_clusterpartition.txt', sep=' ')
df_barcode = pd.read_csv(DATA_PATH+'/GSE124395_celseq_barcodes.192.txt', sep='\t', header=None, index_col=0)
df_cluster_ct_map = pd.read_excel(DATA_PATH+'/Aizarani_Nature_2019_celltype_marker.xlsx', index_col=0)
df_data = pd.read_csv(DATA_PATH+'/GSE124395_Normalhumanlivercellatlasdata.txt.gz', sep='\t', compression='gzip')
# Make anndata
adata_raw = AnnData(X=df_data.T)
adata_raw.X = sp.sparse.csr_matrix(adata_raw.X)
adata_raw.obs = adata_raw.obs.join(df_cluster)
adata_raw.obs.columns = ['cluster_id']
adata_raw.obs['cell_id'] = adata_raw.obs.index
adata_raw.obs['celltype'] = ''
for celltype in df_cluster_ct_map.index:
for cluster in df_cluster_ct_map.loc[celltype, 'cluster'].split(','):
adata_raw.obs.loc[adata_raw.obs['cluster_id']==int(cluster), 'celltype'] = celltype
print('# Before filtering', adata_raw.shape)
adata_raw = adata_raw[~adata_raw.obs['cluster_id'].isna()]
sc.pp.filter_genes(adata_raw, min_cells=10)
print('# After filtering', adata_raw.shape)
adata_raw.write(DATA_PATH+'/obj_raw.h5ad')
# Make .cov file
df_cov = pd.DataFrame(index=adata_raw.obs.index)
df_cov['const'] = 1
df_cov['n_genes'] = (adata_raw.X>0).sum(axis=1)
df_cov.to_csv(DATA_PATH+'/aizarani.cov', sep='\t')
# Cluster the data to have UMAP plot
adata = adata_raw.copy()
sc.pp.normalize_per_cell(adata, counts_per_cell_after=1e4)
sc.pp.log1p(adata)
print(adata.shape)
sc.pp.highly_variable_genes(adata, subset = False, min_disp=.5,
min_mean=.0125, max_mean=10, n_bins=20, n_top_genes=None)
sc.pp.scale(adata, max_value=10, zero_center=False)
sc.pp.pca(adata, n_comps=50, use_highly_variable=True, svd_solver='arpack')
sc.pp.neighbors(adata, n_neighbors=15, n_pcs=20)
sc.tl.louvain(adata, resolution = 0.5)
sc.tl.leiden(adata, resolution = 0.5)
sc.tl.umap(adata)
sc.tl.diffmap(adata)
# sc.tl.draw_graph(adata)
adata.write(DATA_PATH+'/obj_processed.h5ad')
sc.pl.umap(adata, color='celltype')
df_zonation_hep = pd.read_csv(DATA_PATH+'/Aizarani_Nature_2019_liver_supp_table3.txt', sep='\t')
(adata.obs['celltype'] == 'other').sum()
adata.obs.groupby(['cluster_id']).agg({'cell_id':len})
cell_set = set(df_zonation_hep.columns)
adata.obs.loc[adata.obs['cluster_id']==36]
(adata.X.sum(axis=1)<1000).sum()
adata.X.sum(axis=1)
ind_select = [x in cell_set for x in adata.obs.index]
adata.obs.loc[ind_select].groupby('cluster_id').agg({'cell_id': len})
ind_select = [x for x in adata.obs.index if ('CD45' in x)]
adata.obs.loc[ind_select].groupby('cluster_id').agg({'cell_id': len})
df_cluster
adata.shape[0] - 10372
1052 + 1282
temp_list = [x for x in df_data.columns if 'HEP' in x]
print(set(temp_list))
temp_list = [x.split('_')[0] for x in df_data.columns]
print(set(temp_list))
temp_list = [x.split('_')[1] for x in df_data.columns]
print(set(temp_list))
temp_list = [x.split('_')[3] for x in df_data.columns]
print(set(temp_list))
df_data.columns
```
| github_jupyter |
# A First Shot at Deep Learning with PyTorch
> "Create a hello world for deep learning using PyTorch."
- toc: false
- branch: master
- author: Elvis Saravia
- badges: true
- comments: true
- categories: [deep learning, beginner, neural network]
- image: images/model-nn.png
- hide: false
## About
In this notebook, we are going to take a baby step into the world of deep learning using PyTorch. There are a ton of notebooks out there that teach you the fundamentals of deep learning and PyTorch, so the idea here is to give you a basic introduction to deep learning and PyTorch at a very high level. This notebook therefore targets beginners, but it can also serve as a review for more experienced developers.
After completing this notebook, you should know the basic components involved in training a simple neural network with PyTorch. I have also left a couple of exercises towards the end with the intention of encouraging more research and practice of your deep learning skills.
---
**Author:** Elvis Saravia - [Twitter](https://twitter.com/omarsar0) | [LinkedIn](https://www.linkedin.com/in/omarsar/)
**Complete Code Walkthrough:** [Blog post](https://medium.com/dair-ai/a-first-shot-at-deep-learning-with-pytorch-4a8252d30c75)
## Importing the libraries
Like with any other programming exercise, the first step is to import the necessary libraries. As we are going to be using Google Colab to program our neural network, we need to install and import the necessary PyTorch libraries.
```
!pip3 install torch torchvision
## The usual imports
import torch
import torch.nn as nn
## print out the pytorch version used
print(torch.__version__)
```
## The Neural Network

Before building and training a neural network, the first step is to process and prepare the data. In this notebook, we are going to use synthetic data (i.e., fake data), so we won't be using any real-world data.
For the sake of simplicity, we are going to use the following input and output pairs converted to tensors, which is how data is typically represented in the world of deep learning. The x values represent the input of dimension `(6,1)` and the y values represent the output of the same dimension. The example is taken from this [tutorial](https://github.com/lmoroney/dlaicourse/blob/master/Course%201%20-%20Part%202%20-%20Lesson%202%20-%20Notebook.ipynb).
The objective of the neural network model that we are going to build and train is to automatically learn patterns that better characterize the relationship between the `x` and `y` values. Essentially, the model learns the relationship that exists between inputs and outputs which can then be used to predict the corresponding `y` value for any given input `x`.
```
## our data in tensor form
x = torch.tensor([[-1.0], [0.0], [1.0], [2.0], [3.0], [4.0]], dtype=torch.float)
y = torch.tensor([[-3.0], [-1.0], [1.0], [3.0], [5.0], [7.0]], dtype=torch.float)
## print size of the input tensor
x.size()
```
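If you look closely at the pairs above, every `y` equals `2x - 1`; that is the hidden rule we hope the trained network will approximately recover. A quick standalone check:

```
x_vals = [-1.0, 0.0, 1.0, 2.0, 3.0, 4.0]
y_vals = [-3.0, -1.0, 1.0, 3.0, 5.0, 7.0]
# every pair satisfies y = 2x - 1
assert all(y == 2 * x - 1 for x, y in zip(x_vals, y_vals))
```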
## The Neural Network Components
As said earlier, we are going to first define and build out the components of our neural network before training the model.
### Model
Typically, when building a neural network model, we define the layers and weights which form the basic components of the model. Below we show an example of how to define a hidden layer named `layer1` with size `(1, 1)`. For the purpose of this tutorial, we won't explicitly define the `weights` and allow the built-in functions provided by PyTorch to handle that part for us. By the way, the `nn.Linear(...)` function applies a linear transformation ($y = xA^T + b$) to the data that was provided as its input. We ignore the bias for now by setting `bias=False`.
```
## Neural network with 1 hidden layer
layer1 = nn.Linear(1,1, bias=False)
model = nn.Sequential(layer1)
```
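As a quick sanity check on what was just created, you can inspect the layer's single learnable parameter: with `bias=False` the layer stores one weight matrix of shape `(1, 1)`, and the forward pass is just the matrix product. This standalone snippet assumes PyTorch is installed:

```
import torch
import torch.nn as nn

check_layer = nn.Linear(1, 1, bias=False)
print(check_layer.weight.shape)  # torch.Size([1, 1])

# the forward pass applies y = x @ A.T (no bias term)
x_check = torch.tensor([[2.0]])
assert torch.allclose(check_layer(x_check), x_check @ check_layer.weight.T)
```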
### Loss and Optimizer
The loss function, `nn.MSELoss()`, is in charge of letting the model know how well it has learned the relationship between the input and output. The optimizer's (in this case `SGD`) primary role is to minimize or lower that loss value as it tunes the model's weights.
```
## loss function
criterion = nn.MSELoss()
## optimizer algorithm
optimizer = torch.optim.SGD(model.parameters(), lr=0.01)
```
## Training the Neural Network Model
We have all the components we need to train our model. Below is the code used to train our model.
In simple terms, we train the model by feeding it the input and output pairs for a number of rounds (i.e., epochs). After a series of forward and backward steps, the model gradually learns the relationship between the x and y values. This is evident from the decrease in the computed `loss`. For a more detailed explanation of this code check out this [tutorial](https://medium.com/dair-ai/a-simple-neural-network-from-scratch-with-pytorch-and-google-colab-c7f3830618e0).
```
## training
for i in range(150):
model = model.train()
## forward
output = model(x)
loss = criterion(output, y)
optimizer.zero_grad()
## backward + update model params
loss.backward()
optimizer.step()
model.eval()
print('Epoch: %d | Loss: %.4f' %(i, loss.detach().item()))
```
## Testing the Model
After training the model, we can test its predictive capability by passing it an input. Below is a simple example of how you could achieve this with our model. The result we obtained aligns with the results obtained in this [notebook](https://github.com/lmoroney/dlaicourse/blob/master/Course%201%20-%20Part%202%20-%20Lesson%202%20-%20Notebook.ipynb), which inspired this entire tutorial.
```
## test the model
sample = torch.tensor([10.0], dtype=torch.float)
predicted = model(sample)
print(predicted.detach().item())
```
## Final Words
Congratulations! In this tutorial you learned how to train a simple neural network using PyTorch. You also learned about the basic components that make up a neural network model, such as the linear transformation layer, optimizer, and loss function. We then trained the model and tested its predictive capabilities. You are well on your way to becoming more knowledgeable about deep learning and PyTorch. I have provided a number of references below if you are interested in practicing and learning more.
*I would like to thank Laurence Moroney for his excellent [tutorial](https://github.com/lmoroney/dlaicourse/blob/master/Course%201%20-%20Part%202%20-%20Lesson%202%20-%20Notebook.ipynb) which I used as an inspiration for this tutorial.*
## Exercises
- Add more examples in the input and output tensors. In addition, try to change the dimensions of the data, say by adding an extra value in each array. What needs to be changed to successfully train the network with the new data?
- The model converged really fast, which means it learned the relationship between the x and y values after just a couple of iterations. Do you think it makes sense to continue training? How would you automate the process of stopping the training once the model loss doesn't substantially change?
- In our example, we used a single hidden layer. Try to take a look at the PyTorch documentation to figure out what you need to do to get a model with more layers. What happens if you add more hidden layers?
- We did not discuss the learning rate (`lr=0.01`) and the optimizer in great detail. Check out the [PyTorch documentation](https://pytorch.org/docs/stable/optim.html) to learn more about what other optimizers you can use.
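For the early-stopping question above, here is one minimal, framework-agnostic sketch. `train_step` is a hypothetical callable that runs one epoch and returns the loss; training halts once the loss has failed to improve by at least `min_delta` for `patience` consecutive epochs.

```
def train_with_early_stopping(train_step, max_epochs=150, patience=5, min_delta=1e-4):
    best_loss = float("inf")
    stale_epochs = 0
    for epoch in range(max_epochs):
        loss = train_step(epoch)
        if best_loss - loss > min_delta:  # meaningful improvement
            best_loss = loss
            stale_epochs = 0
        else:
            stale_epochs += 1
        if stale_epochs >= patience:
            break  # loss has plateaued; stop training
    return best_loss
```

In a real training loop, `train_step` would wrap the forward/backward/optimizer-step code shown earlier.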
## References
- [The Hello World of Deep Learning with Neural Networks](https://github.com/lmoroney/dlaicourse/blob/master/Course%201%20-%20Part%202%20-%20Lesson%202%20-%20Notebook.ipynb)
- [A Simple Neural Network from Scratch with PyTorch and Google Colab](https://medium.com/dair-ai/a-simple-neural-network-from-scratch-with-pytorch-and-google-colab-c7f3830618e0?source=collection_category---4------1-----------------------)
- [PyTorch Official Docs](https://pytorch.org/docs/stable/nn.html)
- [PyTorch 1.2 Quickstart with Google Colab](https://medium.com/dair-ai/pytorch-1-2-quickstart-with-google-colab-6690a30c38d)
- [A Gentle Introduction to PyTorch](https://medium.com/dair-ai/pytorch-1-2-introduction-guide-f6fa9bb7597c)
| github_jupyter |
```
import psycopg2
import pandas as pd
import pandas.io.sql as pd_sql
import numpy as np
import matplotlib.pyplot as plt
#from sqlalchemy import create_engine
def connectDB(DB):
# connect to the PostgreSQL server
return psycopg2.connect(
database=DB,
user="postgres",
password="Georgetown16",
host="database-1.c5vispb5ezxg.us-east-1.rds.amazonaws.com",
port='5432')
def disconnectDB(conn):
# close the connection to the PostgreSQL server
conn.close()
# connect to "Dataset" DB
conn = connectDB("Dataset")
# extract everything from 'table_name' into a dataframe
df = pd_sql.read_sql('select * from public."FinalData"', con=conn)
pd.set_option('display.max_column',50)
df.head()
```
## Random Forest Model for Year 2013 STEM Class
```
## Create a temporary data frame for Year 2013 Term J and Term B STEM class
tempDf = df[['year','term','module_domain','code_module','region',
'Scotland','East Anglian Region','London Region','South Region','North Western Region','West Midlands Region',
'South West Region','East Midlands Region','South East Region','Wales','Yorkshire Region','North Region',
'gender','disability','b4_sum_clicks','half_sum_clicks','std_half_score','date_registration','age_band',
'module_presentation_length','num_of_prev_attempts','final_result','highest_education','imd_band','studied_credits']]
tempDf = tempDf.loc[(tempDf.year == 0)&(tempDf.module_domain==1)]
# Show first 20 observations of the dataset
tempDf.head(20)
# drop missing values
tempDf=tempDf.dropna()
tempDf.count()
X=tempDf[['region','gender','disability','b4_sum_clicks','half_sum_clicks','std_half_score',
'date_registration','age_band','module_presentation_length','num_of_prev_attempts',
'highest_education','imd_band','studied_credits']]
y=tempDf['final_result']
```
# Feature Selection using Random Forest
```
from sklearn.ensemble import RandomForestClassifier
from sklearn.feature_selection import SelectFromModel
# Split the dataset to train and test
from sklearn.model_selection import train_test_split
X_train,X_test,y_train,y_test = train_test_split(X,y,test_size=0.25,random_state=0)
sel = SelectFromModel(RandomForestClassifier(n_estimators = 100))
sel.fit(X_train, y_train)
## By default, SelectFromModel keeps the features whose importance is
## greater than or equal to the mean importance of all features
sel.get_support()
# It will return an array of boolean values.
# True for the features whose importance is greater than the mean importance and False for the rest.
# We can now make a list and count the selected features.
# It will return an Integer representing the number of features selected by the random forest.
selected_feat= X_train.columns[(sel.get_support())]
len(selected_feat)
# Get the name of the features selected
print(selected_feat)
# check and plot the distribution of importance.
pd.Series(sel.estimator_.feature_importances_.ravel()).hist()
# After dropping those missing values, we have 7786 observations in the dataset
# Define our predictors
X=tempDf[['b4_sum_clicks','half_sum_clicks','std_half_score']]
y=tempDf['final_result']
# Split the dataset to train and test
from sklearn.model_selection import train_test_split
X_train,X_test,y_train,y_test = train_test_split(X,y,test_size=0.25,random_state=0)
# Feature Scaling
from sklearn.preprocessing import StandardScaler
sc = StandardScaler()
X_train = sc.fit_transform(X_train)
X_test = sc.transform(X_test)
from sklearn.ensemble import RandomForestClassifier
regressor = RandomForestClassifier(n_estimators=100, random_state=0)
regressor.fit(X_train, y_train)
y_pred = regressor.predict(X_test)
print(X.columns,regressor.feature_importances_)
# Get a Confusion Matrix
import seaborn as sns
from sklearn import metrics
confusion_matrix = pd.crosstab(y_test, y_pred, rownames=['Actual'], colnames=['Predicted'])
sns.heatmap(confusion_matrix, annot=True)
# Print the Accuracy
from sklearn import metrics
print('Accuracy: ',metrics.accuracy_score(y_test,y_pred))
## Create a temporary data frame for Year 2014 Term J and Term B STEM class
tempDf2 = df[['year','term','module_domain','code_module','region','Scotland','East Anglian Region','London Region',
'South Region','North Western Region','West Midlands Region','South West Region','East Midlands Region',
'South East Region','Wales','Yorkshire Region','North Region','gender','disability',
'b4_sum_clicks','half_sum_clicks','std_half_score','date_registration','age_band',
'module_presentation_length','num_of_prev_attempts','final_result','highest_education',
'imd_band','studied_credits']]
tempDf2 = tempDf2.loc[(tempDf2.year ==1)&(tempDf2.module_domain==1)]
# Show first 5 observations of the dataset
tempDf2.head(5)
tempDf2=tempDf2.dropna()
# df2 = pd.DataFrame(tempDf2,columns= ['b4_sum_clicks','std_half_score','half_sum_clicks'])
df2 = pd.DataFrame(tempDf2,columns= ['b4_sum_clicks','half_sum_clicks','std_half_score'])
X=tempDf[['b4_sum_clicks','half_sum_clicks','std_half_score']]
# X=tempDf[['b4_sum_clicks','std_half_score','half_sum_clicks']]
y=tempDf['final_result']
# Split the dataset to train and test
from sklearn.model_selection import train_test_split
X_train,X_test,y_train,y_test = train_test_split(X,y,test_size=0.25,random_state=0)
# Feature Scaling
from sklearn.preprocessing import StandardScaler
sc = StandardScaler()
X_train = sc.fit_transform(X_train)
X_test = sc.transform(X_test)
# Start Random forest Modelling
from sklearn.ensemble import RandomForestClassifier
regressor = RandomForestClassifier(n_estimators=100, random_state=0).fit(X_train, y_train)
y_pred_new = regressor.predict(df2)
y_test_new=tempDf2['final_result']
# Print the Accuracy
from sklearn import metrics
print('Accuracy: ',metrics.accuracy_score(y_test_new, y_pred_new))
```
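As noted in the comments above, `SelectFromModel`'s default threshold is the mean feature importance. A minimal standalone sketch of that selection rule (the importance values here are made up for illustration):

```python
import numpy as np

# Hypothetical importances, as a fitted random forest would report them (sum to 1)
importances = np.array([0.05, 0.30, 0.02, 0.40, 0.08, 0.15])

# SelectFromModel keeps features whose importance is >= the mean by default
threshold = importances.mean()
support = importances >= threshold   # boolean mask, analogous to sel.get_support()

print(support)        # [False  True False  True False False]
print(support.sum())  # 2 features selected
```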
## Random Forest for 2013 Social Science Class
```
## Create a temporary data frame for Year 2013 Term J and Term B Social Science class
SStempDf = df[['year','term','module_domain','code_module','region',
'Scotland','East Anglian Region','London Region','South Region','North Western Region','West Midlands Region',
'South West Region','East Midlands Region','South East Region','Wales','Yorkshire Region','North Region',
'gender','disability','b4_sum_clicks','half_sum_clicks','std_half_score','date_registration','age_band',
'module_presentation_length','num_of_prev_attempts','final_result','highest_education','imd_band','studied_credits']]
SStempDf = SStempDf.loc[(SStempDf.year == 0)&(SStempDf.module_domain==0)]
# Show first 5 observations of the dataset
SStempDf.head(5)
X=SStempDf[['region','Scotland','East Anglian Region','London Region','South Region',
'North Western Region','West Midlands Region','South West Region','East Midlands Region',
'South East Region','Wales','Yorkshire Region','North Region','gender','disability','b4_sum_clicks',
'half_sum_clicks','std_half_score','date_registration','age_band','module_presentation_length',
'num_of_prev_attempts','highest_education','imd_band','studied_credits']]
y=SStempDf['final_result']
## Feature Selection
from sklearn.ensemble import RandomForestClassifier
from sklearn.feature_selection import SelectFromModel
# Split the dataset to train and test
from sklearn.model_selection import train_test_split
X_train,X_test,y_train,y_test = train_test_split(X,y,test_size=0.25,random_state=0)
sel = SelectFromModel(RandomForestClassifier(n_estimators = 100))
sel.fit(X_train, y_train)
sel.get_support()
selected_feat= X_train.columns[(sel.get_support())]
len(selected_feat)
print(selected_feat)
# check and plot the distribution of importance.
pd.Series(sel.estimator_.feature_importances_.ravel()).hist()
# Let's start our modeling
X=SStempDf[['b4_sum_clicks','half_sum_clicks','std_half_score','date_registration','imd_band']]
y=SStempDf['final_result']
# Split the dataset to train and test
from sklearn.model_selection import train_test_split
X_train,X_test,y_train,y_test = train_test_split(X,y,test_size=0.25,random_state=0)
# Feature Scaling
from sklearn.preprocessing import StandardScaler
sc = StandardScaler()
X_train = sc.fit_transform(X_train)
X_test = sc.transform(X_test)
from sklearn.ensemble import RandomForestClassifier
regressor = RandomForestClassifier(n_estimators=100, random_state=0)
regressor.fit(X_train, y_train)
y_pred = regressor.predict(X_test)
print(X.columns,regressor.feature_importances_)
# Get a Confusion Matrix
import seaborn as sns
from sklearn import metrics
confusion_matrix = pd.crosstab(y_test, y_pred, rownames=['Actual'], colnames=['Predicted'])
sns.heatmap(confusion_matrix, annot=True)
# Print the Accuracy
from sklearn import metrics
print('Accuracy: ',metrics.accuracy_score(y_test,y_pred))
## Let's predict 2014 Social Science Results
## Create a temporary data frame for Year 2014 Term J and Term B Social Science class
SStempDf2 = df[['year','term','module_domain','code_module','region','Scotland','East Anglian Region','London Region',
'South Region','North Western Region','West Midlands Region','South West Region','East Midlands Region',
'South East Region','Wales','Yorkshire Region','North Region','gender','disability',
'b4_sum_clicks','half_sum_clicks','std_half_score','date_registration','age_band',
'module_presentation_length','num_of_prev_attempts','final_result','highest_education',
'imd_band','studied_credits']]
SStempDf2 = SStempDf2.loc[(SStempDf2.year ==1)&(SStempDf2.module_domain==0)]
# Show first 5 observations of the dataset
SStempDf2.head(5)
SStempDf2=SStempDf2.dropna()
df2 = pd.DataFrame(SStempDf2,columns= ['b4_sum_clicks','half_sum_clicks','std_half_score','date_registration','imd_band'])
X=SStempDf[['b4_sum_clicks','half_sum_clicks','std_half_score','date_registration','imd_band']]
y=SStempDf['final_result']
# Split the dataset to train and test
from sklearn.model_selection import train_test_split
X_train,X_test,y_train,y_test = train_test_split(X,y,test_size=0.25,random_state=0)
# Feature Scaling
from sklearn.preprocessing import StandardScaler
sc = StandardScaler()
X_train = sc.fit_transform(X_train)
X_test = sc.transform(X_test)
# Start Random forest Modelling
from sklearn.ensemble import RandomForestClassifier
regressor = RandomForestClassifier(n_estimators=100, random_state=0).fit(X_train, y_train)
y_pred_new = regressor.predict(df2)
y_test_new=SStempDf2['final_result']
# Print the Accuracy
from sklearn import metrics
print('Accuracy: ',metrics.accuracy_score(y_test_new,y_pred_new))
```
## Combine STEM and Social Science Together
```
ComtempDf = df[['year','term','module_domain','code_module','region','Scotland','East Anglian Region','London Region',
'South Region','North Western Region','West Midlands Region','South West Region','East Midlands Region',
'South East Region','Wales','Yorkshire Region','North Region','gender','disability',
'b4_sum_clicks','half_sum_clicks','std_half_score','date_registration','age_band',
'module_presentation_length','num_of_prev_attempts','final_result','highest_education',
'imd_band','studied_credits']]
ComtempDf = ComtempDf.loc[(ComtempDf.year ==0)]
# Show first 5 observations of the dataset
ComtempDf.head(5)
ComtempDf.count()
X=ComtempDf[['term','region','gender','disability','b4_sum_clicks','half_sum_clicks','std_half_score',
'date_registration','age_band','module_presentation_length','num_of_prev_attempts',
'highest_education','imd_band','studied_credits']]
y=ComtempDf['final_result']
from sklearn.ensemble import RandomForestClassifier
from sklearn.feature_selection import SelectFromModel
# Split the dataset to train and test
from sklearn.model_selection import train_test_split
X_train,X_test,y_train,y_test = train_test_split(X,y,test_size=0.25,random_state=0)
sel = SelectFromModel(RandomForestClassifier(n_estimators = 100))
sel.fit(X_train, y_train)
sel.get_support()
# We can now make a list and count the selected features.
# It will return an Integer representing the number of features selected by the random forest.
selected_feat= X_train.columns[(sel.get_support())]
len(selected_feat)
print(selected_feat)
# Redefine our predictors
X=ComtempDf[['b4_sum_clicks','half_sum_clicks','std_half_score']]
y=ComtempDf['final_result']
from sklearn.ensemble import RandomForestClassifier
# Split the dataset to train and test
from sklearn.model_selection import train_test_split
X_train,X_test,y_train,y_test = train_test_split(X,y,test_size=0.25,random_state=0)
regressor = RandomForestClassifier(n_estimators=100, random_state=0)
regressor.fit(X_train, y_train)
y_pred = regressor.predict(X_test)
# Get a Confusion Matrix
import seaborn as sns
from sklearn import metrics
confusion_matrix = pd.crosstab(y_test, y_pred, rownames=['Actual'], colnames=['Predicted'])
sns.heatmap(confusion_matrix, annot=True)
from sklearn.metrics import classification_report
print(classification_report(y_test, y_pred))
# Let's predict 2014 results
## Create a temporary data frame for Year 2014 Term J and Term B classes (STEM and Social Science)
ComtempDf2 = df[['year','term','module_domain','code_module','region','Scotland','East Anglian Region','London Region',
'South Region','North Western Region','West Midlands Region','South West Region','East Midlands Region',
'South East Region','Wales','Yorkshire Region','North Region','gender','disability',
'b4_sum_clicks','half_sum_clicks','std_half_score','date_registration','age_band',
'module_presentation_length','num_of_prev_attempts','final_result','highest_education',
'imd_band','studied_credits']]
ComtempDf2 = ComtempDf2.loc[(ComtempDf2.year ==1)]
ComtempDf2.count()
com2014 = pd.DataFrame(ComtempDf2,columns= ['b4_sum_clicks','half_sum_clicks','std_half_score'])
y_pred_new = regressor.predict(com2014)
y_test_new=ComtempDf2['final_result']
from sklearn.metrics import classification_report
print(classification_report(y_test_new, y_pred_new))
import seaborn as sns
from sklearn import metrics
confusion_matrix = pd.crosstab(y_test_new, y_pred_new, rownames=['Actual'], colnames=['Predicted'])
sns.heatmap(confusion_matrix, annot=True)
```
```
!date
import numpy as np, pandas as pd, matplotlib.pyplot as plt, seaborn as sns
%matplotlib inline
sns.set_context('paper')
sns.set_style('darkgrid')
```
# Mixture Model in PyMC3
Original NB by Abe Flaxman, modified by Thomas Wiecki
```
import pymc3 as pm, theano.tensor as tt
# simulate data from a known mixture distribution
np.random.seed(12345) # set random seed for reproducibility
k = 3
ndata = 500
spread = 5
centers = np.array([-spread, 0, spread])
# simulate data from mixture distribution
v = np.random.randint(0, k, ndata)
data = centers[v] + np.random.randn(ndata)
plt.hist(data);
# setup model
model = pm.Model()
with model:
# cluster sizes
a = np.array([1., 1., 1.])
p = pm.Dirichlet('p', a=a, shape=k)
# ensure all clusters have some points
p_min_potential = pm.Potential('p_min_potential', tt.switch(tt.min(p) < .1, -np.inf, 0))
# cluster centers
means = pm.Normal('means', mu=[0, 0, 0], sd=15, shape=k)
# break symmetry
order_means_potential = pm.Potential('order_means_potential',
tt.switch(means[1]-means[0] < 0, -np.inf, 0)
+ tt.switch(means[2]-means[1] < 0, -np.inf, 0))
# measurement error
sd = pm.Uniform('sd', lower=0, upper=20)
# latent cluster of each observation
category = pm.Categorical('category',
p=p,
shape=ndata)
# likelihood for each observed value
points = pm.Normal('obs',
mu=means[category],
sd=sd,
observed=data)
# fit model
with model:
step1 = pm.Metropolis(vars=[p, sd, means])
step2 = pm.ElemwiseCategoricalStep(vars=[category], values=[0, 1, 2])
tr = pm.sample(10000, step=[step1, step2])
```
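The `order_means_potential` above breaks label-switching symmetry by assigning a log-probability of -inf to any state where the cluster means are out of order. The same gate, sketched in plain numpy:

```python
import numpy as np

def order_penalty(means):
    # 0 log-probability contribution if means are sorted ascending, else -inf
    return 0.0 if np.all(np.diff(means) >= 0) else -np.inf

print(order_penalty([-5.0, 0.0, 5.0]))  # 0.0
print(order_penalty([0.0, -5.0, 5.0]))  # -inf
```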
## Full trace
```
pm.plots.traceplot(tr, ['p', 'sd', 'means']);
```
## After convergence
```
# take a look at traceplot for some model parameters
# (with some burn-in and thinning)
pm.plots.traceplot(tr[5000::5], ['p', 'sd', 'means']);
# I prefer autocorrelation plots for serious confirmation of MCMC convergence
pm.autocorrplot(tr[5000::5], varnames=['sd'])
```
## Sampling of cluster for individual data point
```
i=0
plt.plot(tr['category'][5000::5, i], drawstyle='steps-mid')
plt.axis(ymin=-.1, ymax=2.1)
def cluster_posterior(i=0):
print('true cluster:', v[i])
print(' data value:', np.round(data[i],2))
plt.hist(tr['category'][5000::5,i], bins=[-.5,.5,1.5,2.5,], rwidth=.9)
plt.axis(xmin=-.5, xmax=2.5)
plt.xticks([0,1,2])
cluster_posterior(i)
```
# Handling Volume with Apache Spark for Image Classification
Use Apache Spark to perform image classification from preprocessed images.
## License
MIT License
Copyright (c) 2019 PT Bukalapak.com
Permission is hereby granted, free of charge, to any person obtaining a copy of this software and associated documentation files (the "Software"), to deal in the Software without restriction, including without limitation the rights to use, copy, modify, merge, publish, distribute, sublicense, and/or sell copies of the Software, and to permit persons to whom the Software is furnished to do so, subject to the following conditions:
The above copyright notice and this permission notice shall be included in all copies or substantial portions of the Software.
THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE.
## Software Version
```
import sys
print("Python %s" % sys.version)
import base64
import pickle
import shutil
import numpy as np
print("NumPy %s" % np.__version__)
import tensorflow as tf
print("TensorFlow %s" % tf.__version__)
import pyspark
print("PySpark %s" % pyspark.__version__)
from pyspark.sql import SparkSession, Row
```
## Perform Image Classification using Notebook (NB)
### Settings
```
print("Built With CUDA:", tf.test.is_built_with_cuda())
print("Keras FP Precision:", tf.keras.backend.floatx())
LOCAL_PROJECT_URL = '/home/jovyan/work/'
```
### Create Spark Context
Set config to run Spark locally with 2 threads.
```
APP_NAME = "bukalapak-core-ai.big-data-3v.volume-spark-img-clsf"
spark = SparkSession \
.builder \
.appName(APP_NAME) \
.config("spark.master", "local[2]") \
.getOrCreate()
sc = spark.sparkContext
sc
sc.getConf().getAll()
```
### Generate Preprocessed Images for Input
Generate preprocessed images and store them in an ORC file. Preprocessed images would normally come from a previous pipeline stage.
```
# ped: preprocessed image
def convert_image_url_to_ped(x):
import io, base64, pickle
from PIL import Image as pil_image
import tensorflow as tf
# Convert URL to PIL image
image_url = x[0]
image_pil = pil_image.open(image_url)
# Make Sure the Image is in RGB (not BW)
if image_pil.mode != 'RGB':
image_pil = image_pil.convert('RGB')
# Resize Image
target_size = (224, 224)
width_height_tuple = (target_size[1], target_size[0])
if image_pil.size != width_height_tuple:
image_pil = image_pil.resize(width_height_tuple, pil_image.NEAREST)
# Normalise Image
image_np = tf.keras.preprocessing.image.img_to_array(image_pil)
image_np = tf.keras.applications.vgg16.preprocess_input(image_np)
# Convert numpy array to string
image_ped = base64.b64encode(pickle.dumps(image_np)).decode('UTF-8')
return ["local_disk", image_url, "jeket", image_ped]
def convert_image_url_to_ped_orc(image_url_pathfilename, image_ped_orc_pathfilename):
image_url_file_rdd = sc.textFile(image_url_pathfilename)
print(" Number of Partitions:", image_url_file_rdd.getNumPartitions())
image_url_list_rdd = image_url_file_rdd.map(lambda x: x.split('\n'))
image_ped_list_rdd = image_url_list_rdd.map(lambda x: convert_image_url_to_ped(x))
image_ped_dict_rdd = image_ped_list_rdd.map(lambda x: Row(tid=x[0],
iid=x[1],
l=x[2],
i_ped=x[3]))
image_ped_dict_df = spark.createDataFrame(image_ped_dict_rdd)
image_ped_dict_df.write.save(image_ped_orc_pathfilename, format="orc")
# Input file path
image_url_pathfilename = "file:{0}data/image_paths_10.txt".format(LOCAL_PROJECT_URL)
# Output file path
image_ped_orc_directory = "{0}data/images_ped.orc".format(LOCAL_PROJECT_URL)
image_ped_orc_pathfilename = "file:%s" % image_ped_orc_directory
# Remove existing output directory
shutil.rmtree(image_ped_orc_directory, ignore_errors=True)
# Start preprocessing
convert_image_url_to_ped_orc(image_url_pathfilename, image_ped_orc_pathfilename)
```
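The string encoding used above (pickle, then base64) is what lets a numpy image array travel through a string column of the ORC file. A self-contained round trip of that scheme:

```python
import base64
import pickle

import numpy as np

arr = np.arange(6, dtype=np.float32).reshape(2, 3)

# Serialize: numpy array -> pickle bytes -> base64 bytes -> str
encoded = base64.b64encode(pickle.dumps(arr)).decode('UTF-8')

# Deserialize: str -> base64 bytes -> pickle bytes -> numpy array
decoded = pickle.loads(base64.b64decode(encoded.encode('UTF-8')))

assert np.array_equal(arr, decoded)
print(type(encoded).__name__)  # str
```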
### Making Inference
Make inferences using the VGG16 model [1], pretrained on ImageNet.
```
# input is the list of [tid, iid, l, i_ped] from preprocessed image table
# output is the list of [tid, iid, pred] for inference result table
# where:
# - tid: table ID
# - iid: image ID
# - l: label
# - i_ped: preprocessed image
# - pred: output of prediction layer (the inference result)
def make_inference(xs):
import base64, pickle
import numpy as np
import tensorflow as tf
# Extract input lists
inference_lists = []
images_array = []
for x in xs:
inference_lists.append([x.tid, x.iid])
images_array.append(pickle.loads(base64.b64decode(x.i_ped.encode('UTF-8'))))
images_np = np.array(images_array)
# Load VGG16 model
vgg = tf.keras.applications.vgg16.VGG16(weights='imagenet', include_top=True)
# Do inference
inference = vgg.predict(images_np)
# Add inference results to inference table lists
if len(inference_lists) != len(inference):
raise ValueError('The total of inference table lists is not ' +
'the same as the total of inference result')
for i in range(len(inference_lists)):
inference_lists[i].append(
base64.b64encode(pickle.dumps(inference[i])).decode('UTF-8')
)
return iter(inference_lists)
def convert_image_ped_orc_to_infr_orc(image_ped_orc_pathfilename,
image_infr_orc_pathfilename):
image_ped_dict_df = spark.read.orc(image_ped_orc_pathfilename)
image_ped_dict_rdd = image_ped_dict_df.rdd
print(" Number of Partitions:", image_ped_dict_rdd.getNumPartitions())
image_infr_list_rdd = image_ped_dict_rdd.mapPartitions(make_inference)
image_infr_dict_rdd = image_infr_list_rdd.map(lambda x: Row(tid=x[0],
iid=x[1],
pred=x[2]))
image_infr_dict_df = spark.createDataFrame(image_infr_dict_rdd)
image_infr_dict_df.write.save(image_infr_orc_pathfilename, format="orc")
# Input file path
image_ped_orc_directory = "{0}data/images_ped.orc".format(LOCAL_PROJECT_URL)
image_ped_orc_pathfilename = "file:{0}".format(image_ped_orc_directory)
# Output file path
image_infr_orc_directory = "{0}data/images_infr.orc".format(LOCAL_PROJECT_URL)
image_infr_orc_pathfilename = "file:{0}".format(image_infr_orc_directory)
# Remove existing output directory
shutil.rmtree(image_infr_orc_directory, ignore_errors=True)
# Start making inference
convert_image_ped_orc_to_infr_orc(image_ped_orc_pathfilename,
image_infr_orc_pathfilename)
```
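The reason `make_inference` is applied with `mapPartitions` rather than `map` is that it receives an iterator over all rows of a partition, so the expensive VGG16 load happens once per partition instead of once per row. A plain-Python sketch of that pattern (no Spark required; `load_model` is a hypothetical stand-in):

```python
def load_model():
    # Stand-in for expensive one-time setup such as loading VGG16 weights
    load_model.calls += 1
    return lambda x: x * 2

load_model.calls = 0

def per_partition(rows):
    model = load_model()          # executed once per partition
    return iter([model(r) for r in rows])

# Two "partitions" of data -> the model is loaded twice, not five times
out = list(per_partition([1, 2, 3])) + list(per_partition([4, 5]))
print(out)               # [2, 4, 6, 8, 10]
print(load_model.calls)  # 2
```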
### Check Result
```
# Read inference result
image_infr_orc_directory = "{0}data/images_infr.orc".format(LOCAL_PROJECT_URL)
image_infr_orc_pathfilename = "file:{0}".format(image_infr_orc_directory)
image_infr_dict_df = spark.read.orc(image_infr_orc_pathfilename)
image_infr_dict_rdd = image_infr_dict_df.rdd
inference_lists = image_infr_dict_rdd.take(1)[0]
inference_lists.tid
inference_lists.iid
inference_lists.pred[:300]
def convert_pbs_to_infr_tensor(pbs):
return np.array(
[list(pickle.loads(base64.b64decode(pbs.encode('UTF-8'))))])
tf.keras.applications.vgg16.decode_predictions(
convert_pbs_to_infr_tensor(inference_lists.pred))
```
Expected result:
```
[[('n04370456', 'sweatshirt', 0.971396),
('n04599235', 'wool', 0.015504221),
('n03595614', 'jersey', 0.005360415),
('n02963159', 'cardigan', 0.0013214782),
('n03594734', 'jean', 0.0011332377)]]
```
### Close Spark Context
```
sc.stop()
spark.stop()
```
## Perform Image Classification using Spark Submit (SS)
Use previously generated [preprocessed images](#Generate-Preprocessed-Images-for-Input) for the input.
### Making Inference
```
%%writefile bukalapak-core-ai.big-data-3v.volume-spark-img-clsf.py
# Copyright (c) 2019 PT Bukalapak.com
#
# Permission is hereby granted, free of charge, to any person obtaining a copy
# of this software and associated documentation files (the "Software"), to deal
# in the Software without restriction, including without limitation the rights
# to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
# copies of the Software, and to permit persons to whom the Software is
# furnished to do so, subject to the following conditions:
#
# The above copyright notice and this permission notice shall be included in all
# copies or substantial portions of the Software.
#
# THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
# IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
# FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
# AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
# LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
# OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
# SOFTWARE.
from pyspark.sql import SparkSession
APP_NAME = "bukalapak-core-ai.big-data-3v.volume-spark-img-clsf"
def make_inference(xs):
import base64, pickle
import numpy as np
import tensorflow as tf
# Extract input lists
inference_lists = []
images_array = []
for x in xs:
inference_lists.append([x.tid, x.iid])
images_array.append(pickle.loads(base64.b64decode(x.i_ped.encode('UTF-8'))))
images_np = np.array(images_array)
# Load VGG16 model
vgg = tf.keras.applications.vgg16.VGG16(weights='imagenet', include_top=True)
# Do inference
inference = vgg.predict(images_np)
# Add inference results to inference table lists
if len(inference_lists) != len(inference):
raise ValueError('The total of inference table lists is not ' +
'the same as the total of inference result')
for i in range(len(inference_lists)):
inference_lists[i].append(
base64.b64encode(pickle.dumps(inference[i])).decode('UTF-8')
)
return iter(inference_lists)
def main(spark):
from pyspark.sql import Row
# Input
image_ped_orc_pathfilename = \
"file:/home/jovyan/work/" + \
"data/images_ped.orc"
# Output
image_infr_orc_pathfilename = \
"file:/home/jovyan/work/" + \
"data/images_infr_ss.orc"
# Read input file
image_ped_dict_df = spark.read.orc(image_ped_orc_pathfilename)
image_ped_dict_rdd = image_ped_dict_df.rdd
print(" Number of Partitions:", image_ped_dict_rdd.getNumPartitions())
# Perform inference
image_infr_list_rdd = image_ped_dict_rdd.mapPartitions(make_inference)
# Write output file
image_infr_dict_rdd = image_infr_list_rdd.map(lambda x: Row(tid=x[0],
iid=x[1],
pred=x[2]))
image_infr_dict_df = spark.createDataFrame(image_infr_dict_rdd)
image_infr_dict_df.write.save(image_infr_orc_pathfilename, format="orc")
if __name__ == "__main__":
# Configure Spark
spark = SparkSession \
.builder \
.appName(APP_NAME) \
.getOrCreate()
main(spark)
spark.stop()
```
__Note__: Don't forget to delete any existing `images_infr_ss.orc` directory in `data/`. The following Spark implementation does not overwrite existing data; it throws an error instead.
__Note__: The initial run will take a long time because the Spark code needs to download roughly 500 MB of VGG16 weights.
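To clear the stale output before re-running spark-submit, the same `shutil.rmtree` call used earlier in the notebook works; a sketch (the temporary path here is a stand-in for `data/images_infr_ss.orc`):

```python
import os
import shutil
import tempfile

# Hypothetical stand-in for the notebook's data/images_infr_ss.orc directory
out_dir = os.path.join(tempfile.mkdtemp(), 'images_infr_ss.orc')
os.makedirs(out_dir)

# ignore_errors=True makes the call safe even when the directory is absent
shutil.rmtree(out_dir, ignore_errors=True)
print(os.path.exists(out_dir))  # False
```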
```
%%bash
/usr/local/spark/bin/spark-submit \
--executor-memory 2g --executor-cores 1 --num-executors 2 \
bukalapak-core-ai.big-data-3v.volume-spark-img-clsf.py
```
### Check Result
__Settings__
```
LOCAL_PROJECT_URL = '/home/jovyan/work/'
```
__Create Spark Context__
Set config to run Spark locally with 2 threads.
```
APP_NAME = "bukalapak-core-ai.big-data-3v.volume-spark-img-clsf"
spark = SparkSession \
.builder \
.appName(APP_NAME) \
.config("spark.master", "local[2]") \
.getOrCreate()
sc = spark.sparkContext
```
__Read Inference Result__
```
image_infr_orc_directory = "{0}data/images_infr_ss.orc".format(LOCAL_PROJECT_URL)
image_infr_orc_pathfilename = "file:{0}".format(image_infr_orc_directory)
image_infr_dict_df = spark.read.orc(image_infr_orc_pathfilename)
image_infr_dict_rdd = image_infr_dict_df.rdd
inference_lists = image_infr_dict_rdd.take(1)[0]
inference_lists.tid
inference_lists.iid
inference_lists.pred[:300]
def convert_pbs_to_infr_tensor(pbs):
return np.array(
[list(pickle.loads(base64.b64decode(pbs.encode('UTF-8'))))])
tf.keras.applications.vgg16.decode_predictions(
convert_pbs_to_infr_tensor(inference_lists.pred))
```
Expected result:
```
[[('n04370456', 'sweatshirt', 0.971396),
('n04599235', 'wool', 0.015504221),
('n03595614', 'jersey', 0.005360415),
('n02963159', 'cardigan', 0.0013214782),
('n03594734', 'jean', 0.0011332377)]]
```
__Close Spark Context__
```
sc.stop()
spark.stop()
```
## References
1. K Simonyan, A Zisserman. 2014. Very Deep Convolutional Networks for Large-Scale Image Recognition. CoRR, abs/1409.1556. http://arxiv.org/abs/1409.1556.
```
# -*- coding: utf-8 -*-
"""
Example script to serve as starting point for displaying the results of the brain reconstruction.
The current script reads results from the simulation and displays them.
Prerequisite:
You should have executed the following on your command prompt
./run_simulation_brain.sh
./run_reconstruction_brain.sh
Author: Kris Thielemans
"""
%matplotlib notebook
```
# Initial imports
```
import numpy
import matplotlib.pyplot as plt
import stir
from stirextra import *
import os
```
# go to directory with input files
```
# adapt this path to your situation (or start everything in the exercises directory)
os.chdir(os.getenv('STIR_exercises_PATH'))
```
# change directory to where the output files are
```
os.chdir('working_folder/single_slice')
```
# Read in images
```
OSEM240=to_numpy(stir.FloatVoxelsOnCartesianGrid.read_from_file('OSEM_240.hv'));
OSEMZ240=to_numpy(stir.FloatVoxelsOnCartesianGrid.read_from_file('OSEM_zoom_240.hv'));
OSEMZ10R240=to_numpy(stir.FloatVoxelsOnCartesianGrid.read_from_file('OSEM_zoom_10rays_240.hv'));
```
# find max and slice number for plots
```
maxforplot=OSEM240.max();
# pick central slice
slice=OSEM240.shape[0]//2;
```
# bitmap display of images OSEM vs OSEM zoomed
```
fig=plt.figure();
ax=plt.subplot(1,2,1);
plt.imshow(OSEM240[slice,:,:,]);
plt.clim(0,maxforplot)
plt.axis('off');
ax.set_title('OSEM240');
ax=plt.subplot(1,2,2);
plt.imshow(2*OSEMZ240[slice,:,:,]);
plt.clim(0,maxforplot)
plt.axis('off');
ax.set_title('OSEM z2');
fig.savefig('OSEM_vs_OSEMx2zoom.png')
```
# bitmap display of images OSEM vs OSEM with half voxel size in each direction
```
fig=plt.figure();
ax=plt.subplot(2,2,1);
plt.imshow(OSEM240[slice,:,:,]);
plt.clim(0,maxforplot)
plt.colorbar();
plt.axis('off');
ax.set_title('OSEM 240');
ax=plt.subplot(2,2,2);
plt.imshow(2*OSEMZ240[slice,:,:,]);
plt.clim(0,maxforplot)
plt.colorbar();
plt.axis('off');
ax.set_title('OSEMz2 240');
ax=plt.subplot(2,2,3);
plt.imshow(2*OSEMZ10R240[slice,:,:,]);
plt.clim(0,maxforplot)
plt.colorbar();
plt.axis('off');
ax.set_title('OSEMz2 R10 240');
diff=OSEMZ240-OSEMZ10R240;
ax=plt.subplot(2,2,4);
plt.imshow(diff[slice,:,:,]);
plt.clim(-maxforplot/50,maxforplot/50)
plt.colorbar();
plt.axis('off');
ax.set_title('diff');
fig.savefig('OSEM_vs_OSEMx2zoom_bitmaps.png')
```
# Display central horizontal profiles through the image
```
# pick line through tumor
row=84;
fig=plt.figure()
plt.plot(OSEM240[slice,row,0:185],'b');
#plt.hold(True)
plt.plot(2*OSEMZ240[slice,2*row-1,0:369:2],'c');
plt.plot(2*OSEMZ10R240[slice,2*row-1,0:369:2],'r');
plt.legend(('OSEM240','OSEMx2zoom240','OSEMx2zoom10rays240'));
fig.savefig('OSEM_vs_OSEM2zoom_profiles.png')
for line in open("OSEM.log"):
    if "Total CPU" in line:
        print(line)
for line in open("OSEM_more_voxels.log"):
    if "Total CPU" in line:
        print(line)
for line in open("OSEM_more_voxels_more_rays.log"):
    if "Total CPU" in line:
        print(line)
```
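The profile plot above aligns the zoom-2 images with the original grid by taking every other voxel (`0:369:2`) and doubling the values, mirroring the factor of 2 applied to the zoomed images throughout this script. A toy numpy version of that alignment:

```python
import numpy as np

# Hypothetical profile reconstructed on a 2x-finer grid (half-size voxels)
fine = np.arange(0.0, 5.0, 0.5)    # 10 samples

# Every other fine sample lines up with the coarse grid; the factor 2
# mirrors the scaling used for the zoomed images above
coarse_aligned = 2 * fine[0:10:2]
print(coarse_aligned)  # [0. 2. 4. 6. 8.]
```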
# example code for seeing evaluation over iterations with OSEM, OSEM with zoom, OSEM with more rays
```
# First read in all iterations
# Read all subiterations into dictionaries keyed by subiteration number
it = [24, 48, 72, 96, 120, 144, 168, 192, 216, 240];
OSEM = {i: to_numpy(stir.FloatVoxelsOnCartesianGrid.read_from_file('OSEM_%d.hv' % i)) for i in it};
OSEMZ10R = {i: to_numpy(stir.FloatVoxelsOnCartesianGrid.read_from_file('OSEM_zoom_10rays_%d.hv' % i)) for i in it};
#The location of the voxel is chosen to be in the lung lesion
col=59;
colz=col*2;
rowz=2*row-1;
OSEMvalues = [OSEM[i][slice, row, col] for i in it];
OSEMZ10Rvalues = [2 * OSEMZ10R[i][slice, rowz, colz] for i in it];
fig=plt.figure()
plt.plot(it, OSEMvalues, 'bo')
plt.plot(it, OSEMZ10Rvalues, 'ro')
plt.axis([0, 250, 40, 70])
plt.title('Image value over subiteration')
plt.legend(('OSEM','Zoomed OSEM'))
plt.show();
```
# close all plots
```
plt.close('all')
```
```
import os
import pandas as pd
import numpy as np
import MySQLdb
import matplotlib.pyplot as plt
from astropy.coordinates import SkyCoord
from astropy import units as u
db = MySQLdb.connect(host='104.154.94.28',db='loganp',\
read_default_file="~/.my.cnf",\
autocommit=True,\
local_infile = 1)
c=db.cursor()
def get_targets(ra,dec,ang_diam,project,name,dist=10000,con=db,database='master_gaia_database'):
"""
Given an RA/Dec array of any size, returns a Pandas dataframe of
the targets from the master database that fall within
a circle of specified angular diameter of the given RA/Decs
All angles in degrees
Args:
ra,dec (array, float [deg]): arrays of pointing coordinates in decimal degrees
ang_diam (float [deg]): angular size of the diameter of the beam you are simulating. 0.8 deg, for example,
is slightly smaller than the MeerKAT beam in L-band and provides a conservative estimate
project (string): project name that this pointing came from
name (string): name of the primary beam target
dist (float, [pc]): depth of the desired query in parsecs. Default is set larger than the largest distance
in the master database to return essentially no distance cut
con (MySQL connection object): the connection you would like to use for the retrieval
database (str): the database to query. Default is the master database of all Gaia targets in our program
Returns:
Pandas dataframe of every object in the database meeting criteria
"""
index = range(len(ra))
appended_data = [] #make a list to store dataframes
for r,d,i in zip(ra,dec,index):
string = 'SELECT * FROM '+str(database)+' \
WHERE POWER((ra-('+str(r)+')),2) + POWER((decl - ('+str(d)+')),2) < '+str((ang_diam/2.)**2)+' \
AND `dist.c` <= '+str(dist)+';'
dataframe = pd.read_sql(string, con=con)
dataframe['project']=project
dataframe['name']=name
# store DataFrame in list
appended_data.append(dataframe)
        print("I've done", i + 1, "of", len(ra), "total pointings")
targets = pd.concat(appended_data, axis=0)
return targets
clusters = ['M33','LMC','SMC']
```
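One caveat worth noting about `get_targets`: the SQL `POWER((ra-...),2) + POWER((decl-...),2)` cut treats RA/Dec as flat Cartesian coordinates. At the LMC's declination (about -70 deg) one degree of RA spans only ~0.35 deg on the sky, so the flat cut can exclude stars that are genuinely inside the beam. A hedged sketch of a post-filter using the true angular separation (pure standard library here; `SkyCoord.separation` from the imports above would serve equally well):

```python
import math

def angular_separation(ra1, dec1, ra2, dec2):
    """True on-sky separation in degrees (spherical law of cosines)."""
    r1, d1, r2, d2 = map(math.radians, (ra1, dec1, ra2, dec2))
    cos_sep = (math.sin(d1) * math.sin(d2)
               + math.cos(d1) * math.cos(d2) * math.cos(r1 - r2))
    # Clamp against floating-point drift before taking the arccosine
    return math.degrees(math.acos(min(1.0, max(-1.0, cos_sep))))

# A star offset 1 deg in RA from the LMC pointing centre (80.9, -69.75):
flat_dist = math.hypot(1.0, 0.0)  # what the SQL cut sees: 1.0 deg
true_sep = angular_separation(80.9, -69.75, 81.9, -69.75)  # ~0.35 deg on sky
```

With a 0.5 deg beam radius the SQL cut would reject this star (flat distance 1.0 > 0.5) even though its true separation is only about 0.35 deg; filtering the returned dataframe on `angular_separation` against the pointing centre would recover such targets.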
LMC angular size is 323' x 275' = 5.38 x 4.58 deg (Simbad).
The NED diameter is 41509.80" = 11.5305 deg -> radius = 5.77 deg.
The NED diameter is slightly larger, so I will use it to include more targets (conservative in the sense of not missing any).
```
h=41509.80*u.arcsec
h=h.to(u.degree)
lmc_targets = get_targets([80.9], [-69.75],h.value,'TRAPUM Nearby Galaxies','LMC')
```
SMC angular diameter is 19867.9" in NED = 5.5 deg
```
h=19867.90*u.arcsec
h=h.to(u.degree)
h
smc_targets = get_targets([13.186588], [-72.828599],h.value,'TRAPUM Nearby Galaxies','SMC')
```
M 33 NED ang diam: 4447.90" = 1.24 deg
```
h=4447.90*u.arcsec
h=h.to(u.degree)
h
m33_targets = get_targets([23.462042], [30.660222],h.value,'TRAPUM Nearby Galaxies','M33')
targets = pd.concat([lmc_targets,smc_targets,m33_targets],ignore_index=True)
targets
targets.to_csv('../Our_targets/trapum_nearbygalaxies.csv')
r,d = 80.9, -69.75
%matplotlib notebook
plt.figure(figsize=(6,6))
a = plt.subplot(111)
plt.scatter(lmc_targets['ra'],lmc_targets['decl'],alpha=0.5,label='Gaia source',marker='.')
plt.scatter(r,d,color='orange',s=50,label='Pointing')
#plt.scatter(m['ra'].values[0],m['dec'].values[0],color='orange',marker='*',s=200,label='Exoplanet Host')
plt.annotate("{0} Targets".format(lmc_targets['ra'].shape[0]),xy=(0.75,0.92),xycoords='axes fraction')
circle = plt.Circle((r,d), 0.45, color='r',fill=False)
a.add_artist(circle)
plt.xlabel('RA (deg)')
plt.ylabel('Dec (deg)')
#plt.xlim(r-0.5,r+0.5)
#plt.ylim(d-0.55,d+0.55)
plt.tight_layout()
plt.legend(fontsize=12,loc=4)
plt.show()
#plt.savefig('lmc_pointing.png',format='png')
```
## 2.3 Deutsch-Jozsa Algorithm
* [Q# exercise: Deutsch-Jozsa algorithm](./2-Quantum_Algorithms/3-Deutsch-Jozsa_Algorithm.ipynb#qex)
The Deutsch-Jozsa algorithm can be thought of as a generalization of Deutsch's algorithm to $n$ qubits. It was
proposed by David Deutsch and Richard Jozsa in 1992, and it inspired the later development of Simon's
algorithm and Shor's algorithm. Let's start with three qubits. We have a black box that takes in three bits
and outputs one bit:
<img src="img/3-dj001.png" style="width: 300px"/>
As in Deutsch's algorithm, we can define two kinds of black boxes like this – _Constant_ and _Balanced_.
The truth table for a black box that always returns 0 no matter the input:
<table style="width: 200px">
<tr>
<th>$x$ </th>
<th>$y = f(x)$ </th>
</tr>
<tr>
<td> $000$ </td>
<td> $0$ </td>
</tr>
<tr>
<td> $001$ </td>
<td> $0$ </td>
</tr>
<tr>
<td> $010$ </td>
<td> $0$ </td>
</tr>
<tr>
<td> $011$ </td>
<td> $0$ </td>
</tr>
<tr>
<td> $100$ </td>
<td> $0$ </td>
</tr>
<tr>
<td> $101$ </td>
<td> $0$ </td>
</tr>
<tr>
<td> $110$ </td>
<td> $0$ </td>
</tr>
<tr>
<td> $111$ </td>
<td> $0$ </td>
</tr>
</table>
Table 2.3.1 – Constant 0
The truth table for a black box that always returns 1 no matter the input:
<table style="width: 200px">
<tr>
<th>$x$ </th>
<th>$y = f(x)$ </th>
</tr>
<tr>
<td> $000$ </td>
<td> $1$ </td>
</tr>
<tr>
<td> $001$ </td>
<td> $1$ </td>
</tr>
<tr>
<td> $010$ </td>
<td> $1$ </td>
</tr>
<tr>
<td> $011$ </td>
<td> $1$ </td>
</tr>
<tr>
<td> $100$ </td>
<td> $1$ </td>
</tr>
<tr>
<td> $101$ </td>
<td> $1$ </td>
</tr>
<tr>
<td> $110$ </td>
<td> $1$ </td>
</tr>
<tr>
<td> $111$ </td>
<td> $1$ </td>
</tr>
</table>
Table 2.3.2 – Constant 1
The truth table for a black box that returns 0 for half of the inputs and returns 1 for the remaining half:
<table style="width: 200px">
<tr>
<th>$x$ </th>
<th>$y = f(x)$ </th>
</tr>
<tr>
<td> $000$ </td>
<td> $0$ </td>
</tr>
<tr>
<td> $001$ </td>
<td> $0$ </td>
</tr>
<tr>
<td> $010$ </td>
<td> $0$ </td>
</tr>
<tr>
<td> $011$ </td>
<td> $0$ </td>
</tr>
<tr>
<td> $100$ </td>
<td> $1$ </td>
</tr>
<tr>
<td> $101$ </td>
<td> $1$ </td>
</tr>
<tr>
<td> $110$ </td>
<td> $1$ </td>
</tr>
<tr>
<td> $111$ </td>
<td> $1$ </td>
</tr>
</table>
Table 2.3.3 – Balanced
The truth table for another black box that returns 0 for half of the inputs and returns 1 for the remaining
half:
<table style="width: 200px">
<tr>
<th>$x$ </th>
<th>$y = f(x)$ </th>
</tr>
<tr>
<td> $000$ </td>
<td> $0$ </td>
</tr>
<tr>
<td> $001$ </td>
<td> $1$ </td>
</tr>
<tr>
<td> $010$ </td>
<td> $0$ </td>
</tr>
<tr>
<td> $011$ </td>
<td> $1$ </td>
</tr>
<tr>
<td> $100$ </td>
<td> $0$ </td>
</tr>
<tr>
<td> $101$ </td>
<td> $1$ </td>
</tr>
<tr>
<td> $110$ </td>
<td> $0$ </td>
</tr>
<tr>
<td> $111$ </td>
<td> $1$ </td>
</tr>
</table>
Table 2.3.4 – Balanced
It is apparent there are exactly two _Constant_ black boxes ( _Constant_ 0 and _Constant_ 1), but many possible
_Balanced_ black boxes.
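To make "many" concrete: a _Balanced_ black box is fixed by choosing which half of the $2^n$ inputs map to 1. A quick count for three bits (illustrative Python, outside the course's Q# material):

```python
from math import comb

n = 3
num_inputs = 2 ** n                               # 8 possible 3-bit inputs
num_constant = 2                                  # f == 0 everywhere, or f == 1 everywhere
num_balanced = comb(num_inputs, num_inputs // 2)  # choose the half that maps to 1
```

So for three bits there are $\binom{8}{4} = 70$ _Balanced_ boxes against exactly two _Constant_ ones.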
If one is given such a black box with the guarantee that it is either _Constant_ or _Balanced_ (and nothing else), how many times must one execute it (obviously with different inputs on each run) to decide which of the two it is? As with Deutsch's algorithm, we only need to identify whether the black box is _Constant_ or _Balanced_, not which exact black box it is. Classically, for $n$-bit inputs, the worst case requires $\frac{2^n}{2} + 1 = 2^{n-1} + 1$ executions (one more than half of the number of possible inputs); the 3-bit example yields a count of $5 (= 4 + 1)$. If you are lucky, you may settle it in just two executions (for some _Balanced_ black boxes). Using the Deutsch-Jozsa
algorithm, however, a quantum computer can solve this problem with just one execution, no matter how large $n$ might be! Let's see how.
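The classical counting argument can be sketched as a decision procedure (a hedged illustration, with the black box modelled as a plain Python function `f`):

```python
def classify_classically(f, n):
    """Query the box on distinct inputs until two outputs differ (balanced),
    or until 2**(n-1) + 1 queries all agree, which rules out balanced
    (constant).  The worst case is therefore 2**(n-1) + 1 queries; a lucky
    early mismatch settles it in as few as two."""
    first = f(0)
    for x in range(1, 2 ** (n - 1) + 1):
        if f(x) != first:
            return "balanced"
    return "constant"

verdict_bal = classify_classically(lambda x: (x >> 2) & 1, 3)  # MSB: a balanced box
verdict_const = classify_classically(lambda x: 1, 3)           # constant 1
```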
Before proceeding with the algorithm, we need to create such black boxes (a.k.a. Quantum
Oracles) that work on a quantum computer (as discussed in the previous sessions).
Truth table for _Constant_ 0:
<table style="width: 500px">
<tr>
<th>$x$ </th>
<th>$|x_0 \rangle$ </th>
<th>$|x_1 \rangle$ </th>
<th>$|x_2 \rangle$ </th>
<th>$|y \rangle$ </th>
<th>$f(x)$ </th>
<th>$|y \oplus f(x) \rangle$ </th>
</tr> <tr>
<td> $0$ </td>
<td> $|0 \rangle$ </td>
<td> $|0 \rangle$ </td>
<td> $|0 \rangle$ </td>
<td> $|0 \rangle$ </td>
<td> $0$ </td>
<td> $|0 \rangle$ </td>
</tr>
<tr>
<td> $0$ </td>
<td> $|0 \rangle$ </td>
<td> $|0 \rangle$ </td>
<td> $|0 \rangle$ </td>
<td> $|1 \rangle$ </td>
<td> $0$ </td>
<td> $|1 \rangle$ </td>
</tr>
<tr>
<td> $1$ </td>
<td> $|0 \rangle$ </td>
<td> $|0 \rangle$ </td>
<td> $|1 \rangle$ </td>
<td> $|0 \rangle$ </td>
<td> $0$ </td>
<td> $|0 \rangle$ </td>
</tr>
<tr>
<td> $1$ </td>
<td> $|0 \rangle$ </td>
<td> $|0 \rangle$ </td>
<td> $|1 \rangle$ </td>
<td> $|1 \rangle$ </td>
<td> $0$ </td>
<td> $|1 \rangle$ </td>
</tr>
<tr>
<td> $2$ </td>
<td> $|0 \rangle$ </td>
<td> $|1 \rangle$ </td>
<td> $|0 \rangle$ </td>
<td> $|0 \rangle$ </td>
<td> $0$ </td>
<td> $|0 \rangle$ </td>
</tr>
<tr>
<td> $2$ </td>
<td> $|0 \rangle$ </td>
<td> $|1 \rangle$ </td>
<td> $|0 \rangle$ </td>
<td> $|1 \rangle$ </td>
<td> $0$ </td>
<td> $|1 \rangle$ </td>
</tr>
<tr>
<td> $3$ </td>
<td> $|0 \rangle$ </td>
<td> $|1 \rangle$ </td>
<td> $|1 \rangle$ </td>
<td> $|0 \rangle$ </td>
<td> $0$ </td>
<td> $|0 \rangle$ </td>
</tr>
<tr>
<td> $3$ </td>
<td> $|0 \rangle$ </td>
<td> $|1 \rangle$ </td>
<td> $|1 \rangle$ </td>
<td> $|1 \rangle$ </td>
<td> $0$ </td>
<td> $|1 \rangle$ </td>
</tr>
<tr>
<td> $4$ </td>
<td> $|1 \rangle$ </td>
<td> $|0 \rangle$ </td>
<td> $|0 \rangle$ </td>
<td> $|0 \rangle$ </td>
<td> $0$ </td>
<td> $|0 \rangle$ </td>
</tr>
<tr>
<td> $4$ </td>
<td> $|1 \rangle$ </td>
<td> $|0 \rangle$ </td>
<td> $|0 \rangle$ </td>
<td> $|1 \rangle$ </td>
<td> $0$ </td>
<td> $|1 \rangle$ </td>
</tr>
<tr>
<td> $5$ </td>
<td> $|1 \rangle$ </td>
<td> $|0 \rangle$ </td>
<td> $|1 \rangle$ </td>
<td> $|0 \rangle$ </td>
<td> $0$ </td>
<td> $|0 \rangle$ </td>
</tr>
<tr>
<td> $5$ </td>
<td> $|1 \rangle$ </td>
<td> $|0 \rangle$ </td>
<td> $|1 \rangle$ </td>
<td> $|1 \rangle$ </td>
<td> $0$ </td>
<td> $|1 \rangle$ </td>
</tr>
<tr>
<td> $6$ </td>
<td> $|1 \rangle$ </td>
<td> $|1 \rangle$ </td>
<td> $|0 \rangle$ </td>
<td> $|0 \rangle$ </td>
<td> $0$ </td>
<td> $|0 \rangle$ </td>
</tr>
<tr>
<td> $6$ </td>
<td> $|1 \rangle$ </td>
<td> $|1 \rangle$ </td>
<td> $|0 \rangle$ </td>
<td> $|1 \rangle$ </td>
<td> $0$ </td>
<td> $|1 \rangle$ </td>
</tr>
<tr>
<td> $7$ </td>
<td> $|1 \rangle$ </td>
<td> $|1 \rangle$ </td>
<td> $|1 \rangle$ </td>
<td> $|0 \rangle$ </td>
<td> $0$ </td>
<td> $|0 \rangle$ </td>
</tr>
<tr>
<td> $7$ </td>
<td> $|1 \rangle$ </td>
<td> $|1 \rangle$ </td>
<td> $|1 \rangle$ </td>
<td> $|1 \rangle$ </td>
<td> $0$ </td>
<td> $|1 \rangle$ </td>
</tr>
</table>
For every $x$ value, there are two possible states for $|y\rangle$ ($|0\rangle$ or $|1\rangle$). The circuit for the above truth table:
<img src="img/3-dj002.png" style="width: 300px"/>
The truth table for _Constant_ 1:
<table style="width: 500px">
<tr>
<th>$x$ </th>
<th>$|x_0 \rangle$ </th>
<th>$|x_1 \rangle$ </th>
<th>$|x_2 \rangle$ </th>
<th>$|y \rangle$ </th>
<th>$f(x)$ </th>
<th>$|y \oplus f(x) \rangle$ </th>
</tr> <tr>
<td> $0$ </td>
<td> $|0 \rangle$ </td>
<td> $|0 \rangle$ </td>
<td> $|0 \rangle$ </td>
<td> $|0 \rangle$ </td>
<td> $1$ </td>
<td> $|1 \rangle$ </td>
</tr>
<tr>
<td> $0$ </td>
<td> $|0 \rangle$ </td>
<td> $|0 \rangle$ </td>
<td> $|0 \rangle$ </td>
<td> $|1 \rangle$ </td>
<td> $1$ </td>
<td> $|0 \rangle$ </td>
</tr>
<tr>
<td> $1$ </td>
<td> $|0 \rangle$ </td>
<td> $|0 \rangle$ </td>
<td> $|1 \rangle$ </td>
<td> $|0 \rangle$ </td>
<td> $1$ </td>
<td> $|1 \rangle$ </td>
</tr>
<tr>
<td> $1$ </td>
<td> $|0 \rangle$ </td>
<td> $|0 \rangle$ </td>
<td> $|1 \rangle$ </td>
<td> $|1 \rangle$ </td>
<td> $1$ </td>
<td> $|0 \rangle$ </td>
</tr>
<tr>
<td> $2$ </td>
<td> $|0 \rangle$ </td>
<td> $|1 \rangle$ </td>
<td> $|0 \rangle$ </td>
<td> $|0 \rangle$ </td>
<td> $1$ </td>
<td> $|1 \rangle$ </td>
</tr>
<tr>
<td> $2$ </td>
<td> $|0 \rangle$ </td>
<td> $|1 \rangle$ </td>
<td> $|0 \rangle$ </td>
<td> $|1 \rangle$ </td>
<td> $1$ </td>
<td> $|0 \rangle$ </td>
</tr>
<tr>
<td> $3$ </td>
<td> $|0 \rangle$ </td>
<td> $|1 \rangle$ </td>
<td> $|1 \rangle$ </td>
<td> $|0 \rangle$ </td>
<td> $1$ </td>
<td> $|1 \rangle$ </td>
</tr>
<tr>
<td> $3$ </td>
<td> $|0 \rangle$ </td>
<td> $|1 \rangle$ </td>
<td> $|1 \rangle$ </td>
<td> $|1 \rangle$ </td>
<td> $1$ </td>
<td> $|0 \rangle$ </td>
</tr>
<tr>
<td> $4$ </td>
<td> $|1 \rangle$ </td>
<td> $|0 \rangle$ </td>
<td> $|0 \rangle$ </td>
<td> $|0 \rangle$ </td>
<td> $1$ </td>
<td> $|1 \rangle$ </td>
</tr>
<tr>
<td> $4$ </td>
<td> $|1 \rangle$ </td>
<td> $|0 \rangle$ </td>
<td> $|0 \rangle$ </td>
<td> $|1 \rangle$ </td>
<td> $1$ </td>
<td> $|0 \rangle$ </td>
</tr>
<tr>
<td> $5$ </td>
<td> $|1 \rangle$ </td>
<td> $|0 \rangle$ </td>
<td> $|1 \rangle$ </td>
<td> $|0 \rangle$ </td>
<td> $1$ </td>
<td> $|1 \rangle$ </td>
</tr>
<tr>
<td> $5$ </td>
<td> $|1 \rangle$ </td>
<td> $|0 \rangle$ </td>
<td> $|1 \rangle$ </td>
<td> $|1 \rangle$ </td>
<td> $1$ </td>
<td> $|0 \rangle$ </td>
</tr>
<tr>
<td> $6$ </td>
<td> $|1 \rangle$ </td>
<td> $|1 \rangle$ </td>
<td> $|0 \rangle$ </td>
<td> $|0 \rangle$ </td>
<td> $1$ </td>
<td> $|1 \rangle$ </td>
</tr>
<tr>
<td> $6$ </td>
<td> $|1 \rangle$ </td>
<td> $|1 \rangle$ </td>
<td> $|0 \rangle$ </td>
<td> $|1 \rangle$ </td>
<td> $1$ </td>
<td> $|0 \rangle$ </td>
</tr>
<tr>
<td> $7$ </td>
<td> $|1 \rangle$ </td>
<td> $|1 \rangle$ </td>
<td> $|1 \rangle$ </td>
<td> $|0 \rangle$ </td>
<td> $1$ </td>
<td> $|1 \rangle$ </td>
</tr>
<tr>
<td> $7$ </td>
<td> $|1 \rangle$ </td>
<td> $|1 \rangle$ </td>
<td> $|1 \rangle$ </td>
<td> $|1 \rangle$ </td>
<td> $1$ </td>
<td> $|0 \rangle$ </td>
</tr>
</table>
The circuit for the above truth table (same as _Constant_ 0, but with an X gate; compare the last columns in both truth tables):
<img src="img/3-dj003.png" style="width: 300px"/>
The truth table for one possible _Balanced_ black box:
<table style="width: 500px">
<tr>
<th>$x$ </th>
<th>$|x_0 \rangle$ </th>
<th>$|x_1 \rangle$ </th>
<th>$|x_2 \rangle$ </th>
<th>$|y \rangle$ </th>
<th>$f(x)$ </th>
<th>$|y \oplus f(x) \rangle$ </th>
</tr> <tr>
<td> $0$ </td>
<td> $|0 \rangle$ </td>
<td> $|0 \rangle$ </td>
<td> $|0 \rangle$ </td>
<td> $|0 \rangle$ </td>
<td> $0$ </td>
<td> $|0 \rangle$ </td>
</tr>
<tr>
<td> $0$ </td>
<td> $|0 \rangle$ </td>
<td> $|0 \rangle$ </td>
<td> $|0 \rangle$ </td>
<td> $|1 \rangle$ </td>
<td> $0$ </td>
<td> $|1 \rangle$ </td>
</tr>
<tr>
<td> $1$ </td>
<td> $|0 \rangle$ </td>
<td> $|0 \rangle$ </td>
<td> $|1 \rangle$ </td>
<td> $|0 \rangle$ </td>
<td> $0$ </td>
<td> $|0 \rangle$ </td>
</tr>
<tr>
<td> $1$ </td>
<td> $|0 \rangle$ </td>
<td> $|0 \rangle$ </td>
<td> $|1 \rangle$ </td>
<td> $|1 \rangle$ </td>
<td> $0$ </td>
<td> $|1 \rangle$ </td>
</tr>
<tr>
<td> $2$ </td>
<td> $|0 \rangle$ </td>
<td> $|1 \rangle$ </td>
<td> $|0 \rangle$ </td>
<td> $|0 \rangle$ </td>
<td> $0$ </td>
<td> $|0 \rangle$ </td>
</tr>
<tr>
<td> $2$ </td>
<td> $|0 \rangle$ </td>
<td> $|1 \rangle$ </td>
<td> $|0 \rangle$ </td>
<td> $|1 \rangle$ </td>
<td> $0$ </td>
<td> $|1 \rangle$ </td>
</tr>
<tr>
<td> $3$ </td>
<td> $|0 \rangle$ </td>
<td> $|1 \rangle$ </td>
<td> $|1 \rangle$ </td>
<td> $|0 \rangle$ </td>
<td> $0$ </td>
<td> $|0 \rangle$ </td>
</tr>
<tr>
<td> $3$ </td>
<td> $|0 \rangle$ </td>
<td> $|1 \rangle$ </td>
<td> $|1 \rangle$ </td>
<td> $|1 \rangle$ </td>
<td> $0$ </td>
<td> $|1 \rangle$ </td>
</tr>
<tr>
<td> $4$ </td>
<td> $|1 \rangle$ </td>
<td> $|0 \rangle$ </td>
<td> $|0 \rangle$ </td>
<td> $|0 \rangle$ </td>
<td> $1$ </td>
<td> $|1 \rangle$ </td>
</tr>
<tr>
<td> $4$ </td>
<td> $|1 \rangle$ </td>
<td> $|0 \rangle$ </td>
<td> $|0 \rangle$ </td>
<td> $|1 \rangle$ </td>
<td> $1$ </td>
<td> $|0 \rangle$ </td>
</tr>
<tr>
<td> $5$ </td>
<td> $|1 \rangle$ </td>
<td> $|0 \rangle$ </td>
<td> $|1 \rangle$ </td>
<td> $|0 \rangle$ </td>
<td> $1$ </td>
<td> $|1 \rangle$ </td>
</tr>
<tr>
<td> $5$ </td>
<td> $|1 \rangle$ </td>
<td> $|0 \rangle$ </td>
<td> $|1 \rangle$ </td>
<td> $|1 \rangle$ </td>
<td> $1$ </td>
<td> $|0 \rangle$ </td>
</tr>
<tr>
<td> $6$ </td>
<td> $|1 \rangle$ </td>
<td> $|1 \rangle$ </td>
<td> $|0 \rangle$ </td>
<td> $|0 \rangle$ </td>
<td> $1$ </td>
<td> $|1 \rangle$ </td>
</tr>
<tr>
<td> $6$ </td>
<td> $|1 \rangle$ </td>
<td> $|1 \rangle$ </td>
<td> $|0 \rangle$ </td>
<td> $|1 \rangle$ </td>
<td> $1$ </td>
<td> $|0 \rangle$ </td>
</tr>
<tr>
<td> $7$ </td>
<td> $|1 \rangle$ </td>
<td> $|1 \rangle$ </td>
<td> $|1 \rangle$ </td>
<td> $|0 \rangle$ </td>
<td> $1$ </td>
<td> $|1 \rangle$ </td>
</tr>
<tr>
<td> $7$ </td>
<td> $|1 \rangle$ </td>
<td> $|1 \rangle$ </td>
<td> $|1 \rangle$ </td>
<td> $|1 \rangle$ </td>
<td> $1$ </td>
<td> $|0 \rangle$ </td>
</tr>
</table>
Circuit for the above truth table (because the last column is the same as $|y \rangle$ except when $|x_0\rangle = |1 \rangle$, the
values are flipped):
<img src="img/3-dj004.png" style="width: 300px"/>
Truth table for another possible _Balanced_ black box:
<table style="width: 500px">
<tr>
<th>$x$ </th>
<th>$|x_0 \rangle$ </th>
<th>$|x_1 \rangle$ </th>
<th>$|x_2 \rangle$ </th>
<th>$|y \rangle$ </th>
<th>$f(x)$ </th>
<th>$|y \oplus f(x) \rangle$ </th>
</tr> <tr>
<td> $0$ </td>
<td> $|0 \rangle$ </td>
<td> $|0 \rangle$ </td>
<td> $|0 \rangle$ </td>
<td> $|0 \rangle$ </td>
<td> $0$ </td>
<td> $|0 \rangle$ </td>
</tr>
<tr>
<td> $0$ </td>
<td> $|0 \rangle$ </td>
<td> $|0 \rangle$ </td>
<td> $|0 \rangle$ </td>
<td> $|1 \rangle$ </td>
<td> $0$ </td>
<td> $|1 \rangle$ </td>
</tr>
<tr>
<td> $1$ </td>
<td> $|0 \rangle$ </td>
<td> $|0 \rangle$ </td>
<td> $|1 \rangle$ </td>
<td> $|0 \rangle$ </td>
<td> $1$ </td>
<td> $|1 \rangle$ </td>
</tr>
<tr>
<td> $1$ </td>
<td> $|0 \rangle$ </td>
<td> $|0 \rangle$ </td>
<td> $|1 \rangle$ </td>
<td> $|1 \rangle$ </td>
<td> $1$ </td>
<td> $|0 \rangle$ </td>
</tr>
<tr>
<td> $2$ </td>
<td> $|0 \rangle$ </td>
<td> $|1 \rangle$ </td>
<td> $|0 \rangle$ </td>
<td> $|0 \rangle$ </td>
<td> $0$ </td>
<td> $|0 \rangle$ </td>
</tr>
<tr>
<td> $2$ </td>
<td> $|0 \rangle$ </td>
<td> $|1 \rangle$ </td>
<td> $|0 \rangle$ </td>
<td> $|1 \rangle$ </td>
<td> $0$ </td>
<td> $|1 \rangle$ </td>
</tr>
<tr>
<td> $3$ </td>
<td> $|0 \rangle$ </td>
<td> $|1 \rangle$ </td>
<td> $|1 \rangle$ </td>
<td> $|0 \rangle$ </td>
<td> $1$ </td>
<td> $|1 \rangle$ </td>
</tr>
<tr>
<td> $3$ </td>
<td> $|0 \rangle$ </td>
<td> $|1 \rangle$ </td>
<td> $|1 \rangle$ </td>
<td> $|1 \rangle$ </td>
<td> $1$ </td>
<td> $|0 \rangle$ </td>
</tr>
<tr>
<td> $4$ </td>
<td> $|1 \rangle$ </td>
<td> $|0 \rangle$ </td>
<td> $|0 \rangle$ </td>
<td> $|0 \rangle$ </td>
<td> $0$ </td>
<td> $|0 \rangle$ </td>
</tr>
<tr>
<td> $4$ </td>
<td> $|1 \rangle$ </td>
<td> $|0 \rangle$ </td>
<td> $|0 \rangle$ </td>
<td> $|1 \rangle$ </td>
<td> $0$ </td>
<td> $|1 \rangle$ </td>
</tr>
<tr>
<td> $5$ </td>
<td> $|1 \rangle$ </td>
<td> $|0 \rangle$ </td>
<td> $|1 \rangle$ </td>
<td> $|0 \rangle$ </td>
<td> $1$ </td>
<td> $|1 \rangle$ </td>
</tr>
<tr>
<td> $5$ </td>
<td> $|1 \rangle$ </td>
<td> $|0 \rangle$ </td>
<td> $|1 \rangle$ </td>
<td> $|1 \rangle$ </td>
<td> $1$ </td>
<td> $|0 \rangle$ </td>
</tr>
<tr>
<td> $6$ </td>
<td> $|1 \rangle$ </td>
<td> $|1 \rangle$ </td>
<td> $|0 \rangle$ </td>
<td> $|0 \rangle$ </td>
<td> $0$ </td>
<td> $|0 \rangle$ </td>
</tr>
<tr>
<td> $6$ </td>
<td> $|1 \rangle$ </td>
<td> $|1 \rangle$ </td>
<td> $|0 \rangle$ </td>
<td> $|1 \rangle$ </td>
<td> $0$ </td>
<td> $|1 \rangle$ </td>
</tr>
<tr>
<td> $7$ </td>
<td> $|1 \rangle$ </td>
<td> $|1 \rangle$ </td>
<td> $|1 \rangle$ </td>
<td> $|0 \rangle$ </td>
<td> $1$ </td>
<td> $|1 \rangle$ </td>
</tr>
<tr>
<td> $7$ </td>
<td> $|1 \rangle$ </td>
<td> $|1 \rangle$ </td>
<td> $|1 \rangle$ </td>
<td> $|1 \rangle$ </td>
<td> $1$ </td>
<td> $|0 \rangle$ </td>
</tr>
</table>
Circuit for the above truth table (because the last column is the same as $|y \rangle$ but is flipped when $|x_2 \rangle = |1 \rangle$):
<img src="img/3-dj005.png" style="width: 300px" />
We can build several _Balanced_ black boxes in the same way.
Once any such unknown black box ( _U_ ) is given, we use it in the following Deutsch-Jozsa algorithm:
<img src="img/3-dj006.png" style="width: 300px" />
We follow these steps as shown in the circuit:
1. Start with |0⟩ for all four qubits;
2. Make the last one |1⟩ by applying an X gate;
3. Apply H gate to all the qubits;
4. Run the given black box;
5. Apply H gate again only to the input qubits – the first three qubits;
6. Measure only the input qubits.
After executing the circuit, if we measure "0" in all three input qubits, then we can conclude that the
given black box is a _Constant_ one ( _Constant_ 0 or _Constant_ 1). For any other combination, we can conclude
that the given black box is a _Balanced_ one (could be any of the many possible _Balanced_ black boxes).
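This measurement rule can be checked with a tiny statevector simulation (an illustration only, not the Q# implementation used in the exercises; it uses the phase-kickback shorthand, where after the oracle the input register carries a factor $(-1)^{f(x)}$ on each $|x\rangle$):

```python
from math import sqrt

def deutsch_jozsa_prob_all_zero(f, n):
    """Probability of reading 0 on all n input qubits.
    Steps 1-4 leave the input register in (1/sqrt(2^n)) * sum_x (-1)**f(x) |x>;
    step 5 is a Hadamard on every input qubit, applied here as a normalized
    fast Walsh-Hadamard transform; step 6 reads off |amplitude of |00...0>|^2."""
    N = 2 ** n
    state = [(-1) ** f(x) / sqrt(N) for x in range(N)]
    h = 1
    while h < N:  # one butterfly pass per qubit
        for i in range(0, N, 2 * h):
            for j in range(i, i + h):
                a, b = state[j], state[j + h]
                state[j], state[j + h] = (a + b) / sqrt(2), (a - b) / sqrt(2)
        h *= 2
    return state[0] ** 2

p_const = deutsch_jozsa_prob_all_zero(lambda x: 1, 3)           # constant 1
p_bal = deutsch_jozsa_prob_all_zero(lambda x: (x >> 2) & 1, 3)  # a balanced box
```

Running it, a constant box gives probability ~1 for the all-zero outcome and a balanced box gives ~0, matching the measurement rule above.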
Here's the proof. Let's take the generic case where we have $n$-qubits instead of three. The initial state of the system will be:
$$|\underbrace{00 \ldots 0}_{n \text{ qubits}}\rangle \, |0 \rangle$$
Applying X gate on the last qubit:
$$|\underbrace{00 \ldots 0}_{n \text{ qubits}}\rangle \, |1 \rangle$$
which can be rewritten as:
$$\underbrace{|0 \rangle \otimes |0 \rangle \otimes \cdots \otimes |0 \rangle}_{n \text{ qubits}} \otimes |1 \rangle$$
Applying H gate on all the qubits:
$$\underbrace{\left(\frac{|0 \rangle}{\sqrt 2} + \frac{|1 \rangle}{\sqrt 2} \right) \otimes \left(\frac{|0 \rangle}{\sqrt 2} + \frac{|1 \rangle}{\sqrt 2} \right) \otimes \cdots \otimes \left(\frac{|0 \rangle}{\sqrt 2} + \frac{|1 \rangle}{\sqrt 2} \right)}_{n \text{ times}} \otimes \left(\frac{|0 \rangle}{\sqrt 2} - \frac{|1 \rangle}{\sqrt 2} \right) ,$$
which can be rewritten as:
$$\frac{1}{\sqrt 2^n} \sum_{x = 0}^{2^n -1} |x \rangle \otimes \left(\frac{|0 \rangle}{\sqrt 2} - \frac{|1 \rangle}{\sqrt 2} \right)$$
>_Math insert – proof of above_ -----------------------------------------------------------------------------
>
>Using three qubits with H gates applied on them as an example, without loss of
>generality,
>
>$$\left(\frac{|0 \rangle}{\sqrt 2} + \frac{|1 \rangle}{\sqrt 2} \right) \otimes \left(\frac{|0 \rangle}{\sqrt 2} + \frac{|1 \rangle}{\sqrt 2} \right) \otimes \left(\frac{|0 \rangle}{\sqrt 2} + \frac{|1 \rangle}{\sqrt 2} \right)$$
>
>$$= \frac{1}{\sqrt {2^3}} \; \left( \; |000 \rangle + |001 \rangle + |010 \rangle + |011 \rangle + |100 \rangle + |101 \rangle + |110 \rangle + |111 \rangle \; \right)$$
>
>$$= \frac{1}{\sqrt {2^3}} \; \left( \; |0 \rangle + |1 \rangle + |2 \rangle + |3 \rangle + |4 \rangle + |5 \rangle + |6 \rangle + |7 \rangle \; \right)$$
>
>$$= \frac{1}{\sqrt {2^3}} \sum_{x = 0}^{2^3 -1} |x \rangle $$
>
Now apply the black box to this state; the new state becomes
$$\frac{1}{\sqrt {2^n}} \sum_{x = 0}^{2^n -1} |x \rangle \otimes
\left( \frac{|0 \oplus f(x) \rangle}{\sqrt 2} - \frac{|1 \oplus f(x)\rangle}{\sqrt 2} \right)$$
$$= \frac{1}{\sqrt {2^n}} \sum_{x = 0}^{2^n -1} |x \rangle \otimes
\left( \frac{|f(x) \rangle}{\sqrt 2} - \frac{|\overline{f(x)} \rangle}{\sqrt 2} \right).$$
We know that $f(x)$ has only two possible outputs, $0$ or $1$. Hence the above state can be written as
$$\frac{1}{\sqrt {2^n}} \sum_{x = 0}^{2^n -1} |x \rangle \otimes ( - 1)^{f(x)}
\left( \frac{|0 \rangle}{\sqrt 2} - \frac{|1 \rangle}{\sqrt 2} \right)$$
$$= \frac{1}{\sqrt {2^n}} \sum_{x = 0}^{2^n -1} ( - 1)^{f(x)} \,|x \rangle \otimes
\left( \frac{|0 \rangle}{\sqrt 2} - \frac{|1 \rangle}{\sqrt 2} \right)$$
>_Math insert – implementation of the above_ ----------------------------------------------------------
>
>To understand it more concretely, let's put $n = 3$:
>
$$\frac{1}{\sqrt {2^3}} \sum_{x = 0}^{2^3 -1} ( - 1)^{f(x)} \,|x \rangle \otimes
\left( \frac{|0 \rangle}{\sqrt 2} - \frac{|1 \rangle}{\sqrt 2} \right)$$
This becomes:
$$\frac{1}{\sqrt {2^3}} \left( ( - 1)^{f(0)} \, |000\rangle
+ ( - 1)^{f(1)} \, |001\rangle + ( - 1)^{f(2)} \, |010\rangle +
( - 1)^{f(3)} \, |011\rangle + ( - 1)^{f(4)} \, |100\rangle +
( - 1)^{f(5)} \, |101\rangle + ( - 1)^{f(6)} \, |110\rangle +
( - 1)^{f(7)} \, |111\rangle \right) \otimes \left( \frac{|0 \rangle}{\sqrt 2} - \frac{|1 \rangle}{\sqrt 2} \right)$$
Apply H gate on all the qubits except on the last one:
$$\frac{1}{\sqrt {2^3}} \left(
( - 1)^{f(0)} \,
\left( \frac{|0 \rangle}{\sqrt 2} + \frac{|1 \rangle}{\sqrt 2} \right) \,
\left( \frac{|0 \rangle}{\sqrt 2} + \frac{|1 \rangle}{\sqrt 2} \right) \, \left( \frac{|0 \rangle}{\sqrt 2} + \frac{|1 \rangle}{\sqrt 2} \right) \\
+ ( - 1)^{f(1)} \,
\left( \frac{|0 \rangle}{\sqrt 2} + \frac{|1 \rangle}{\sqrt 2} \right) \, \left( \frac{|0 \rangle}{\sqrt 2} + \frac{|1 \rangle}{\sqrt 2} \right) \, \left( \frac{|0 \rangle}{\sqrt 2} - \frac{|1 \rangle}{\sqrt 2} \right) \\
+ ( - 1)^{f(2)} \,
\left( \frac{|0 \rangle}{\sqrt 2} + \frac{|1 \rangle}{\sqrt 2} \right) \, \left( \frac{|0 \rangle}{\sqrt 2} - \frac{|1 \rangle}{\sqrt 2} \right) \, \left( \frac{|0 \rangle}{\sqrt 2} + \frac{|1 \rangle}{\sqrt 2} \right) \\
+ ( - 1)^{f(3)} \,
\left( \frac{|0 \rangle}{\sqrt 2} + \frac{|1 \rangle}{\sqrt 2} \right) \, \left( \frac{|0 \rangle}{\sqrt 2} - \frac{|1 \rangle}{\sqrt 2} \right) \, \left( \frac{|0 \rangle}{\sqrt 2} - \frac{|1 \rangle}{\sqrt 2} \right) \\
+ ( - 1)^{f(4)} \,
\left( \frac{|0 \rangle}{\sqrt 2} - \frac{|1 \rangle}{\sqrt 2} \right) \, \left( \frac{|0 \rangle}{\sqrt 2} + \frac{|1 \rangle}{\sqrt 2} \right) \, \left( \frac{|0 \rangle}{\sqrt 2} + \frac{|1 \rangle}{\sqrt 2} \right) \\
+ ( - 1)^{f(5)} \,
\left( \frac{|0 \rangle}{\sqrt 2} - \frac{|1 \rangle}{\sqrt 2} \right) \, \left( \frac{|0 \rangle}{\sqrt 2} + \frac{|1 \rangle}{\sqrt 2} \right) \, \left( \frac{|0 \rangle}{\sqrt 2} - \frac{|1 \rangle}{\sqrt 2} \right) \\
+ ( - 1)^{f(6)} \,
\left( \frac{|0 \rangle}{\sqrt 2} - \frac{|1 \rangle}{\sqrt 2} \right) \, \left( \frac{|0 \rangle}{\sqrt 2} - \frac{|1 \rangle}{\sqrt 2} \right) \, \left( \frac{|0 \rangle}{\sqrt 2} + \frac{|1 \rangle}{\sqrt 2} \right) \\
+ ( - 1)^{f(7)} \,
\left( \frac{|0 \rangle}{\sqrt 2} - \frac{|1 \rangle}{\sqrt 2} \right) \, \left( \frac{|0 \rangle}{\sqrt 2} - \frac{|1 \rangle}{\sqrt 2} \right) \, \left( \frac{|0 \rangle}{\sqrt 2} - \frac{|1 \rangle}{\sqrt 2} \right) \right)
\otimes \left( \frac{|0 \rangle}{\sqrt 2} - \frac{|1 \rangle}{\sqrt 2} \right)$$
$$= \frac{1}{\sqrt {2^3}} \times \\
\left( ( - 1)^{f(0)} \,
\left( \frac{1}{\sqrt {2^3}}
( |000 \rangle + |001 \rangle + |010 \rangle + |011 \rangle + |100 \rangle +
|101 \rangle + |110 \rangle + |111 \rangle \right) \\
+ ( - 1)^{f(1)} \,
\left( \frac{1}{\sqrt {2^3}}
( |000 \rangle - |001 \rangle + |010 \rangle - |011 \rangle + |100 \rangle -
|101 \rangle + |110 \rangle - |111 \rangle \right) \\
+ ( - 1)^{f(2)} \,
\left( \frac{1}{\sqrt {2^3}}
( |000 \rangle + |001 \rangle - |010 \rangle - |011 \rangle + |100 \rangle +
|101 \rangle - |110 \rangle - |111 \rangle \right) \\
+ ( - 1)^{f(3)} \,
\left( \frac{1}{\sqrt {2^3}}
( |000 \rangle - |001 \rangle - |010 \rangle + |011 \rangle + |100 \rangle -
|101 \rangle - |110 \rangle + |111 \rangle \right) \\
+ ( - 1)^{f(4)} \,
\left( \frac{1}{\sqrt {2^3}}
( |000 \rangle + |001 \rangle + |010 \rangle + |011 \rangle - |100 \rangle -
|101 \rangle - |110 \rangle - |111 \rangle \right) \\
+ ( - 1)^{f(5)} \,
\left( \frac{1}{\sqrt {2^3}}
( |000 \rangle - |001 \rangle + |010 \rangle - |011 \rangle - |100 \rangle +
|101 \rangle - |110 \rangle + |111 \rangle \right) \\
+ ( - 1)^{f(6)} \,
\left( \frac{1}{\sqrt {2^3}}
( |000 \rangle + |001 \rangle - |010 \rangle - |011 \rangle - |100 \rangle -
|101 \rangle + |110 \rangle + |111 \rangle \right) \\
+ ( - 1)^{f(7)} \,
\left( \frac{1}{\sqrt {2^3}}
( |000 \rangle - |001 \rangle - |010 \rangle + |011 \rangle - |100 \rangle +
|101 \rangle + |110 \rangle - |111 \rangle \right)
\right) \\
\otimes \,\left( \frac{|0 \rangle}{\sqrt 2} - \frac{|1 \rangle}{\sqrt 2} \right) $$
After refactoring (showing the coefficients of the first four kets; $|100\rangle$ through $|111\rangle$ follow the same pattern):
$$ \frac{1}{\sqrt {2^3}} \, \left( \\
\frac{1}{\sqrt {2^3}} \,
\left( \, (- 1)^{f(000)} + (- 1)^{f(001)} + (- 1)^{f(010)} + (- 1)^{f(011)} +
(- 1)^{f(100)} + (- 1)^{f(101)} + (- 1)^{f(110)} + (- 1)^{f(111)}
\right) \, |000 \rangle \\
+ \frac{1}{\sqrt {2^3}} \,
\left( \, (- 1)^{f(000)} - (- 1)^{f(001)} + (- 1)^{f(010)} - (- 1)^{f(011)} +
(- 1)^{f(100)} - (- 1)^{f(101)} + (- 1)^{f(110)} - (- 1)^{f(111)}
\right) \, |001 \rangle \\
+ \frac{1}{\sqrt {2^3}} \,
\left( \, (- 1)^{f(000)} + (- 1)^{f(001)} - (- 1)^{f(010)} - (- 1)^{f(011)} +
(- 1)^{f(100)} + (- 1)^{f(101)} - (- 1)^{f(110)} - (- 1)^{f(111)}
\right) \, |010 \rangle \\
+ \frac{1}{\sqrt {2^3}} \,
\left( \, (- 1)^{f(000)} - (- 1)^{f(001)} - (- 1)^{f(010)} + (- 1)^{f(011)} +
(- 1)^{f(100)} - (- 1)^{f(101)} - (- 1)^{f(110)} + (- 1)^{f(111)}
\right) \, |011 \rangle \\
+ \, \cdots \\
\right) \\
\otimes
\left( \frac{|0 \rangle}{\sqrt 2} - \frac{|1 \rangle}{\sqrt 2} \right)
$$
$$= \frac{1}{\sqrt {2^3}} \frac{1}{\sqrt {2^3}} \, \sum_{x'=0}^{2^3 - 1}
\,\left[ \, \sum_{x=0}^{2^3 - 1} ( - 1 )^{f(x)} \, ( - 1 )^{x \cdot x'} \right]
| x' \rangle \, \otimes \left( \frac{|0 \rangle}{\sqrt 2} - \frac{|1 \rangle}{\sqrt 2} \right) $$,
where $x \cdot x' = x_0 x'_0 \, \oplus x_1 x'_1 \, \oplus x_2 x'_2 $.
This becomes:
$$ \frac{1}{{2^3}} \, \sum_{x'=0}^{2^3 - 1}
\,\left[ \, \sum_{x=0}^{2^3 - 1} ( - 1 )^{f(x)} \, ( - 1 )^{x \cdot x'} \right]
| x' \rangle \, \otimes \left( \frac{|0 \rangle}{\sqrt 2} - \frac{|1 \rangle}{\sqrt 2} \right)$$
In this state of the system, what is the probability of getting $|000 \rangle$ when we measure the first three qubits?
We can obtain this by putting $x' = 0$ (i.e. $|000 \rangle$):
$$\left| \frac{1}{{2^3}} \, \sum_{x=0}^{2^3 - 1} ( - 1 )^{f(x)} \, ( - 1 )^{x \cdot 0} \right| ^{\, 2}$$
$$= \left| \frac{1}{{2^3}} \, \sum_{x=0}^{2^3 - 1} ( - 1 )^{f(x)} \right| ^{\, 2}.$$
If $f(x)$ turns out to be _Constant_ 0, then the probability becomes:
$$= \left| \frac{1}{{2^3}} \, \sum_{x=0}^{2^3 - 1} 1 \, \right| ^{\, 2}$$
$$= \left|\, \frac{2^3}{{2^3}} \, \right| ^{\, 2},$$
which is $1$.
If $f(x)$ turns out to be _Constant_ 1, then the probability becomes:
$$= \left| \frac{1}{{2^3}} \, \sum_{x=0}^{2^3 - 1} - 1 \, \right| ^{\, 2}$$
$$= \left|\, \frac{- 2^3}{{2^3}} \, \right| ^{\, 2},$$
which is $1$.
Hence, we have proven that, as long as the given black box is either _Constant_ 0 or _Constant_ 1,
we will always obtain $|000 \rangle$ when measuring the first three qubits, i.e. $Pr (|000 \rangle) = 1 $.
What happens when the black box is a _Balanced_ one?
$$ \left| \frac{1}{{2^3}} \, \sum_{x=0}^{2^3 - 1} ( - 1 )^{f(x)} \right| ^{\, 2}.$$
In this equation, if $f(x)$ is _Balanced_, half of the time $(-1)^{f(x)}$ evaluates to $1$ and the other half of the time it evaluates to $-1$, so the net sum is always $0$. That means $Pr (|000 \rangle) = 0$: we will never measure $|000\rangle$ if the given black box is a _Balanced_ one.
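For $n = 3$, this cancellation can be verified exhaustively over every one of the $\binom{8}{4} = 70$ possible _Balanced_ black boxes (illustrative Python, outside the course's Q# material):

```python
from itertools import combinations

N = 8  # 2**3 possible 3-bit inputs
# A balanced box is fixed by choosing which half of the inputs map to 1.
balanced_boxes = list(combinations(range(N), N // 2))
# For each box, sum_x (-1)**f(x) has four +1 terms and four -1 terms:
sums = [sum(-1 if x in ones else 1 for x in range(N)) for ones in balanced_boxes]
all_cancel = all(s == 0 for s in sums)  # so Pr(|000>) = |s/8|**2 = 0 for every box
```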
In summary, we have proven that after executing the Deutsch-Jozsa algorithm circuit and measuring the first three qubits (or $n$-qubits in the generic case), if we get $|000 \rangle$ , we conclude that the given black box is a _Constant_ one ( _Constant_ 0 or _Constant_ 1); if we get any other value, we conclude that the given black box is one of the _Balanced_ ones.
### Q# exercise: Deutsch-Jozsa algorithm
<a id='qex'></a>
1. Go to QuantumComputingViaQSharpSolution introduced in session 1.1.
2. Open 26_Demo Deutsch_Josza Algorithm Operation.qs in Visual Studio (Code).
3. Similar to Deutsch's algorithm, the black boxes are defined at the bottom of the script, starting
from line 56, just as how we constructed the circuit diagrams.

```
operation BlackBoxConstant0(inputQubits: Qubit[], outputQubits: Qubit[]) : ()
{
body
{
}
}
```

```
operation BlackBoxConstant1(inputQubits: Qubit[], outputQubits: Qubit[]) : ()
{
body
{
X(outputQubits[0]);
}
}
```

```
operation BlackBoxBalanced1(inputQubits: Qubit[], outputQubits: Qubit[]) : ()
{
body
{
CNOT(inputQubits[0], outputQubits[0]);
}
}
```

```
operation BlackBoxBalanced2(inputQubits: Qubit[], outputQubits: Qubit[]) : ()
{
body
{
CNOT(inputQubits[2], outputQubits[0]);
}
}
```
4. Lines 95 – 151 define other random _Balanced_ black boxes. Try substituting one into line 26 and run the script; you should get the expected outputs.
5. More Deutsch-Jozsa algorithm exercises can be found in the [Quantum Katas](https://github.com/Microsoft/QuantumKatas).
6. In Visual Studio (Code), open the folder “DeutschJoszaAlgorithm” and Task.qs, or use the Jupyter Notebook. Go to the Part II tasks.
7. As in many of the algorithms to come, to apply an H gate to each qubit of the input register, we can simply use
```
ApplyToEach(H, x);
```
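For intuition, applying H to each qubit of an $n$-qubit register corresponds to the $n$-fold Kronecker product $H \otimes \cdots \otimes H$. A minimal NumPy sketch (function names are illustrative):

```python
import numpy as np

H = np.array([[1.0, 1.0], [1.0, -1.0]]) / np.sqrt(2.0)

def h_on_each(n):
    # n-fold Kronecker product H (x) H (x) ... (x) H
    U = np.array([[1.0]])
    for _ in range(n):
        U = np.kron(U, H)
    return U

state = np.zeros(2 ** 3)
state[0] = 1.0               # |000>
print(h_on_each(3) @ state)  # uniform superposition: every amplitude is 1/sqrt(8)
```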
# ACA-Py & ACC-Py Basic Template
## Copy this template into the root folder of your notebook workspace to get started
### Imports
```
from aries_cloudcontroller import AriesAgentController
import os
from termcolor import colored
```
### Initialise the Agent Controller
```
api_key = os.getenv("ACAPY_ADMIN_API_KEY")
admin_url = os.getenv("ADMIN_URL")
print(f"Initialising a controller with admin api at {admin_url} and an api key of {api_key}")
agent_controller = AriesAgentController(admin_url,api_key)
```
### Start a Webhook Server
```
webhook_port = int(os.getenv("WEBHOOK_PORT"))
webhook_host = "0.0.0.0"
await agent_controller.init_webhook_server(webhook_host, webhook_port)
print(f"Listening for webhooks from agent at http://{webhook_host}:{webhook_port}")
```
## Register Agent Event Listeners
You can see some examples within the webhook_listeners recipe. Copy any relevant cells across and customise as needed.
```
listeners = []
# Receive connection messages
def connections_handler(payload):
state = payload['state']
connection_id = payload["connection_id"]
their_role = payload["their_role"]
routing_state = payload["routing_state"]
print("----------------------------------------------------------")
print("Connection Webhook Event Received")
print("Connection ID : ", connection_id)
print("State : ", state)
print("Routing State : ", routing_state)
print("Their Role : ", their_role)
print("----------------------------------------------------------")
if state == "invitation":
# Your business logic
print("invitation")
elif state == "request":
# Your business logic
print("request")
elif state == "response":
# Your business logic
print("response")
elif state == "active":
# Your business logic
print(colored("Connection ID: {0} is now active.".format(connection_id), "green", attrs=["bold"]))
connection_listener = {
"handler": connections_handler,
"topic": "connections"
}
listeners.append(connection_listener)
agent_controller.register_listeners(listeners)
```
## Create Invitation
Note: the arguments specified below are their default values.
```
# Alias for invited connection
alias = None
auto_accept = False
# Use public DID?
public = "false"
# Should this invitation be usable by multiple invitees?
multi_use = "false"
invitation_response = await agent_controller.connections.create_invitation(alias, auto_accept, public, multi_use)
# Equivalent to the call above; the arguments are optional
# invitation_response = await agent_controller.connections.create_invitation()
# You probably want to keep this somewhere so you can engage in other protocols with this connection.
connection_id = invitation_response["connection_id"]
```
## Share Invitation Object with External Agent
Typically, in this Jupyter notebook playground, that involves copying it across to another agent's business logic notebook where they are the invitee (see the invitee_template recipe).
```
invitation = invitation_response["invitation"]
## Copy this output
print(invitation)
```
## Display Invite as QR Code
This is useful if you wish to establish a connection with a mobile wallet.
```
import qrcode
# Link for connection invitation
invitation_url = invitation_response["invitation_url"]
# Creating an instance of qrcode
qr = qrcode.QRCode(
version=1,
box_size=5,
border=5)
qr.add_data(invitation_url)
qr.make(fit=True)
img = qr.make_image(fill='black', back_color='white')
img
```
## Accept Invitation Response
Note: You may not need to run this cell. It depends on whether this agent has the ACAPY_AUTO_ACCEPT_REQUESTS=true flag set.
```
# Endpoint you expect to receive messages at
my_endpoint = None
accept_request_response = await agent_controller.connections.accept_request(connection_id, my_endpoint)
```
## Send Trust Ping
Once the connection moves to the response state, one agent (either the inviter or the invitee) needs to send a trust ping.
Note: You may not need to run this cell. It depends on whether one of the agents has the ACAPY_AUTO_PING_CONNECTION=true flag set.
```
comment = "Some Optional Comment"
message = await agent_controller.messaging.trust_ping(connection_id, comment)
```
## Your Own Business Logic
Now that you have an established, active connection, you can write any custom logic to engage in other protocols over the connection.
```
## Custom Logic
```
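As a starting point for custom logic, a basic-message listener can follow the same pattern as the connections listener above. `basicmessages` is the ACA-Py webhook topic for DIDComm basic messages; the handler body here is an illustrative sketch, and the registration call is commented out so the snippet stands alone:

```python
def messages_handler(payload):
    # Payload fields follow the ACA-Py basicmessages webhook shape
    connection_id = payload["connection_id"]
    content = payload["content"]
    print("Basic message received on connection {0}: {1}".format(connection_id, content))

message_listener = {
    "handler": messages_handler,
    "topic": "basicmessages"
}
# agent_controller.register_listeners([message_listener])
```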
## Terminate Controller
Whenever you have finished with this notebook, be sure to terminate the controller. This is especially important if your business logic runs across multiple notebooks.
```
await agent_controller.terminate()
```
Copyright (c) Microsoft Corporation. All rights reserved.
Licensed under the MIT License.
# Bert Quantization with ONNX Runtime on CPU
In this tutorial, we will load a fine-tuned [HuggingFace BERT](https://huggingface.co/transformers/) model trained with [PyTorch](https://pytorch.org/) on the [Microsoft Research Paraphrase Corpus (MRPC)](https://www.microsoft.com/en-us/download/details.aspx?id=52398) task, convert the model to ONNX, and then quantize both the PyTorch and the ONNX model. Finally, we will compare the performance, accuracy, and model size of the quantized PyTorch and ONNX Runtime models on the [General Language Understanding Evaluation benchmark (GLUE)](https://gluebenchmark.com/).
## 0. Prerequisites ##
If you have Jupyter Notebook, you can run this notebook directly with it. You may need to install or upgrade [PyTorch](https://pytorch.org/), [OnnxRuntime](https://microsoft.github.io/onnxruntime/), [transformers](https://huggingface.co/transformers/) and other required packages.
Otherwise, you can setup a new environment. First, install [AnaConda](https://www.anaconda.com/distribution/). Then open an AnaConda prompt window and run the following commands:
```console
conda create -n cpu_env python=3.6
conda activate cpu_env
conda install jupyter
jupyter notebook
```
The last command will launch Jupyter Notebook and we can open this notebook in browser to continue.
### 0.1 Install packages
Let's install the necessary packages to start the tutorial. We will install PyTorch 1.6, OnnxRuntime 1.5.1, the latest ONNX, onnxruntime-tools, transformers, and sklearn.
```
# Install or upgrade PyTorch 1.6.0 and OnnxRuntime 1.5.1 for CPU-only.
import sys
!{sys.executable} -m pip install --upgrade torch==1.6.0+cpu torchvision==0.7.0+cpu -f https://download.pytorch.org/whl/torch_stable.html
!{sys.executable} -m pip install --upgrade onnxruntime==1.5.1
!{sys.executable} -m pip install --upgrade onnxruntime-tools
# Install other packages used in this notebook.
!{sys.executable} -m pip install --upgrade transformers
!{sys.executable} -m pip install onnx sklearn
```
### 0.2 Download GLUE data and fine-tune the BERT model for the MRPC task
HuggingFace's [text-classification examples](https://github.com/huggingface/transformers/tree/master/examples/text-classification) show how to fine-tune on the MRPC task with GLUE data.
#### First, let's download the GLUE data with the download_glue_data.py [script](https://github.com/huggingface/transformers/blob/master/utils/download_glue_data.py) and unpack it into the glue_data directory under the current directory.
```
!wget https://raw.githubusercontent.com/huggingface/transformers/f98ef14d161d7bcdc9808b5ec399981481411cc1/utils/download_glue_data.py
!python download_glue_data.py --data_dir='glue_data' --tasks='MRPC'
!ls glue_data/MRPC
```
#### Next, we can fine-tune the model based on the [MRPC example](https://github.com/huggingface/transformers/tree/master/examples/text-classification#mrpc) with command like:
```console
export GLUE_DIR=./glue_data
export TASK_NAME=MRPC
export OUT_DIR=./$TASK_NAME/
python ./run_glue.py \
    --model_type bert \
    --model_name_or_path bert-base-uncased \
    --task_name $TASK_NAME \
    --do_train \
    --do_eval \
    --do_lower_case \
    --data_dir $GLUE_DIR/$TASK_NAME \
    --max_seq_length 128 \
    --per_gpu_eval_batch_size=8 \
    --per_gpu_train_batch_size=8 \
    --learning_rate 2e-5 \
    --num_train_epochs 3.0 \
    --save_steps 100000 \
    --output_dir $OUT_DIR
```
In order to save time, we download the fine-tuned BERT model for the MRPC task (trained with PyTorch) from https://download.pytorch.org/tutorial/MRPC.zip.
```
!curl https://download.pytorch.org/tutorial/MRPC.zip --output MRPC.zip
!unzip -n MRPC.zip
```
## 1. Load and quantize model with PyTorch
In this section, we will load the fine-tuned model with PyTorch, quantize it and measure the performance. We reused the code from PyTorch's [BERT quantization blog](https://pytorch.org/tutorials/intermediate/dynamic_quantization_bert_tutorial.html) for this section.
### 1.1 Import modules and set global configurations
In this step, we import the necessary PyTorch, transformers, and other modules for the tutorial, and then set up the global configurations, such as the data and model folders, GLUE task settings, thread settings, and warning settings.
```
from __future__ import absolute_import, division, print_function
import logging
import numpy as np
import os
import random
import sys
import time
import torch
from argparse import Namespace
from torch.utils.data import (DataLoader, RandomSampler, SequentialSampler,
TensorDataset)
from tqdm import tqdm
from transformers import (BertConfig, BertForSequenceClassification, BertTokenizer,)
from transformers import glue_compute_metrics as compute_metrics
from transformers import glue_output_modes as output_modes
from transformers import glue_processors as processors
from transformers import glue_convert_examples_to_features as convert_examples_to_features
# Setup warnings
import warnings
warnings.filterwarnings(
action='ignore',
category=DeprecationWarning,
module=r'.*'
)
warnings.filterwarnings(
action='default',
module=r'torch.quantization'
)
# Setup logging level to WARN. Change it accordingly
logger = logging.getLogger(__name__)
logging.basicConfig(format = '%(asctime)s - %(levelname)s - %(name)s - %(message)s',
datefmt = '%m/%d/%Y %H:%M:%S',
level = logging.WARN)
#logging.getLogger("transformers.modeling_utils").setLevel(
# logging.WARN) # Reduce logging
print(torch.__version__)
configs = Namespace()
# The output directory for the fine-tuned model, $OUT_DIR.
configs.output_dir = "./MRPC/"
# The data directory for the MRPC task in the GLUE benchmark, $GLUE_DIR/$TASK_NAME.
configs.data_dir = "./glue_data/MRPC"
# The model name or path for the pre-trained model.
configs.model_name_or_path = "bert-base-uncased"
# The maximum length of an input sequence
configs.max_seq_length = 128
# Prepare GLUE task.
configs.task_name = "MRPC".lower()
configs.processor = processors[configs.task_name]()
configs.output_mode = output_modes[configs.task_name]
configs.label_list = configs.processor.get_labels()
configs.model_type = "bert".lower()
configs.do_lower_case = True
# Set the device, batch size, topology, and caching flags.
configs.device = "cpu"
configs.eval_batch_size = 1
configs.n_gpu = 0
configs.local_rank = -1
configs.overwrite_cache = False
# Set random seed for reproducibility.
def set_seed(seed):
random.seed(seed)
np.random.seed(seed)
torch.manual_seed(seed)
set_seed(42)
```
### 1.2 Load and quantize the fine-tuned BERT model with PyTorch
In this step, we load the fine-tuned BERT model and quantize it with PyTorch's dynamic quantization, then show the model size comparison between the full-precision and quantized models.
```
# load model
model = BertForSequenceClassification.from_pretrained(configs.output_dir)
model.to(configs.device)
# quantize model
quantized_model = torch.quantization.quantize_dynamic(
model, {torch.nn.Linear}, dtype=torch.qint8
)
#print(quantized_model)
def print_size_of_model(model):
torch.save(model.state_dict(), "temp.p")
print('Size (MB):', os.path.getsize("temp.p")/(1024*1024))
os.remove('temp.p')
print_size_of_model(model)
print_size_of_model(quantized_model)
```
### 1.3 Evaluate the accuracy and performance of PyTorch quantization
This section reuses the tokenization and evaluation functions from [Huggingface](https://github.com/huggingface/transformers/blob/45e26125de1b9fbae46837856b1f518a4b56eb65/examples/movement-pruning/masked_run_glue.py).
```
# coding=utf-8
# Copyright 2018 The Google AI Language Team Authors and The HuggingFace Inc. team.
# Copyright (c) 2018, NVIDIA CORPORATION. All rights reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
def evaluate(args, model, tokenizer, prefix=""):
# Loop to handle MNLI double evaluation (matched, mis-matched)
eval_task_names = ("mnli", "mnli-mm") if args.task_name == "mnli" else (args.task_name,)
eval_outputs_dirs = (args.output_dir, args.output_dir + '-MM') if args.task_name == "mnli" else (args.output_dir,)
results = {}
for eval_task, eval_output_dir in zip(eval_task_names, eval_outputs_dirs):
eval_dataset = load_and_cache_examples(args, eval_task, tokenizer, evaluate=True)
if not os.path.exists(eval_output_dir) and args.local_rank in [-1, 0]:
os.makedirs(eval_output_dir)
# Note that DistributedSampler samples randomly
eval_sampler = SequentialSampler(eval_dataset) if args.local_rank == -1 else DistributedSampler(eval_dataset)
eval_dataloader = DataLoader(eval_dataset, sampler=eval_sampler, batch_size=args.eval_batch_size)
# multi-gpu eval
if args.n_gpu > 1:
model = torch.nn.DataParallel(model)
# Eval!
logger.info("***** Running evaluation {} *****".format(prefix))
logger.info(" Num examples = %d", len(eval_dataset))
logger.info(" Batch size = %d", args.eval_batch_size)
eval_loss = 0.0
nb_eval_steps = 0
preds = None
out_label_ids = None
for batch in tqdm(eval_dataloader, desc="Evaluating"):
model.eval()
batch = tuple(t.to(args.device) for t in batch)
with torch.no_grad():
inputs = {'input_ids': batch[0],
'attention_mask': batch[1],
'labels': batch[3]}
if args.model_type != 'distilbert':
inputs['token_type_ids'] = batch[2] if args.model_type in ['bert', 'xlnet'] else None # XLM, DistilBERT and RoBERTa don't use segment_ids
outputs = model(**inputs)
tmp_eval_loss, logits = outputs[:2]
eval_loss += tmp_eval_loss.mean().item()
nb_eval_steps += 1
if preds is None:
preds = logits.detach().cpu().numpy()
out_label_ids = inputs['labels'].detach().cpu().numpy()
else:
preds = np.append(preds, logits.detach().cpu().numpy(), axis=0)
out_label_ids = np.append(out_label_ids, inputs['labels'].detach().cpu().numpy(), axis=0)
eval_loss = eval_loss / nb_eval_steps
if args.output_mode == "classification":
preds = np.argmax(preds, axis=1)
elif args.output_mode == "regression":
preds = np.squeeze(preds)
result = compute_metrics(eval_task, preds, out_label_ids)
results.update(result)
output_eval_file = os.path.join(eval_output_dir, prefix, "eval_results.txt")
with open(output_eval_file, "w") as writer:
logger.info("***** Eval results {} *****".format(prefix))
for key in sorted(result.keys()):
logger.info(" %s = %s", key, str(result[key]))
writer.write("%s = %s\n" % (key, str(result[key])))
return results
def load_and_cache_examples(args, task, tokenizer, evaluate=False):
if args.local_rank not in [-1, 0] and not evaluate:
torch.distributed.barrier() # Make sure only the first process in distributed training process the dataset, and the others will use the cache
processor = processors[task]()
output_mode = output_modes[task]
# Load data features from cache or dataset file
cached_features_file = os.path.join(args.data_dir, 'cached_{}_{}_{}_{}'.format(
'dev' if evaluate else 'train',
list(filter(None, args.model_name_or_path.split('/'))).pop(),
str(args.max_seq_length),
str(task)))
if os.path.exists(cached_features_file) and not args.overwrite_cache:
logger.info("Loading features from cached file %s", cached_features_file)
features = torch.load(cached_features_file)
else:
logger.info("Creating features from dataset file at %s", args.data_dir)
label_list = processor.get_labels()
if task in ['mnli', 'mnli-mm'] and args.model_type in ['roberta']:
# HACK(label indices are swapped in RoBERTa pretrained model)
label_list[1], label_list[2] = label_list[2], label_list[1]
examples = processor.get_dev_examples(args.data_dir) if evaluate else processor.get_train_examples(args.data_dir)
features = convert_examples_to_features(examples,
tokenizer,
label_list=label_list,
max_length=args.max_seq_length,
output_mode=output_mode,
)
if args.local_rank in [-1, 0]:
logger.info("Saving features into cached file %s", cached_features_file)
torch.save(features, cached_features_file)
if args.local_rank == 0 and not evaluate:
torch.distributed.barrier() # Make sure only the first process in distributed training process the dataset, and the others will use the cache
# Convert to Tensors and build dataset
all_input_ids = torch.tensor([f.input_ids for f in features], dtype=torch.long)
all_attention_mask = torch.tensor([f.attention_mask for f in features], dtype=torch.long)
all_token_type_ids = torch.tensor([f.token_type_ids for f in features], dtype=torch.long)
if output_mode == "classification":
all_labels = torch.tensor([f.label for f in features], dtype=torch.long)
elif output_mode == "regression":
all_labels = torch.tensor([f.label for f in features], dtype=torch.float)
dataset = TensorDataset(all_input_ids, all_attention_mask, all_token_type_ids, all_labels)
return dataset
def time_model_evaluation(model, configs, tokenizer):
eval_start_time = time.time()
result = evaluate(configs, model, tokenizer, prefix="")
eval_end_time = time.time()
eval_duration_time = eval_end_time - eval_start_time
print(result)
print("Evaluate total time (seconds): {0:.1f}".format(eval_duration_time))
# define the tokenizer
tokenizer = BertTokenizer.from_pretrained(
configs.output_dir, do_lower_case=configs.do_lower_case)
# Evaluate the original FP32 BERT model
print('Evaluating PyTorch full precision accuracy and performance:')
time_model_evaluation(model, configs, tokenizer)
# Evaluate the INT8 BERT model after the dynamic quantization
print('Evaluating PyTorch quantization accuracy and performance:')
time_model_evaluation(quantized_model, configs, tokenizer)
# Serialize the quantized model
quantized_output_dir = configs.output_dir + "quantized/"
if not os.path.exists(quantized_output_dir):
os.makedirs(quantized_output_dir)
quantized_model.save_pretrained(quantized_output_dir)
```
## 2. Quantization and Inference with ORT ##
In this section, we will demonstrate how to export the PyTorch model to ONNX, quantize the exported ONNX model, and run inference on the quantized model with ONNX Runtime.
### 2.1 Export to ONNX model and optimize with ONNXRuntime-tools
This step will export the PyTorch model to ONNX and then optimize the ONNX model with [ONNXRuntime-tools](https://github.com/microsoft/onnxruntime/tree/master/onnxruntime/python/tools/transformers), an offline optimization tool for transformer-based models.
```
import onnxruntime
def export_onnx_model(args, model, tokenizer, onnx_model_path):
with torch.no_grad():
inputs = {'input_ids': torch.ones(1,128, dtype=torch.int64),
'attention_mask': torch.ones(1,128, dtype=torch.int64),
'token_type_ids': torch.ones(1,128, dtype=torch.int64)}
outputs = model(**inputs)
symbolic_names = {0: 'batch_size', 1: 'max_seq_len'}
torch.onnx.export(model, # model being run
(inputs['input_ids'], # model input (or a tuple for multiple inputs)
inputs['attention_mask'],
inputs['token_type_ids']), # model input (or a tuple for multiple inputs)
onnx_model_path, # where to save the model (can be a file or file-like object)
opset_version=11, # the ONNX version to export the model to
do_constant_folding=True, # whether to execute constant folding for optimization
input_names=['input_ids', # the model's input names
'input_mask',
'segment_ids'],
output_names=['output'], # the model's output names
dynamic_axes={'input_ids': symbolic_names, # variable length axes
'input_mask' : symbolic_names,
'segment_ids' : symbolic_names})
logger.info("ONNX Model exported to {0}".format(onnx_model_path))
export_onnx_model(configs, model, tokenizer, "bert.onnx")
# optimize transformer-based models with onnxruntime-tools
from onnxruntime_tools import optimizer
from onnxruntime_tools.transformers.onnx_model_bert import BertOptimizationOptions
# disable embedding layer norm optimization for better model size reduction
opt_options = BertOptimizationOptions('bert')
opt_options.enable_embed_layer_norm = False
opt_model = optimizer.optimize_model(
'bert.onnx',
'bert',
num_heads=12,
hidden_size=768,
optimization_options=opt_options)
opt_model.save_model_to_file('bert.opt.onnx')
```
### 2.2 Quantize ONNX model
We will call [onnxruntime.quantization.quantize](https://github.com/microsoft/onnxruntime/blob/fe0b2b2abd494b7ff14c00c0f2c51e0ccf2a3094/onnxruntime/python/tools/quantization/README.md) to apply quantization to the HuggingFace BERT model. It supports dynamic quantization with IntegerOps and static quantization with QLinearOps. For activations, ONNX Runtime currently supports only the uint8 format; for weights, it supports both int8 and uint8.
We apply dynamic quantization to the BERT model and use int8 for the weights.
```
def quantize_onnx_model(onnx_model_path, quantized_model_path):
from onnxruntime.quantization import quantize_dynamic, QuantType
import onnx
onnx_opt_model = onnx.load(onnx_model_path)
quantize_dynamic(onnx_model_path,
quantized_model_path,
weight_type=QuantType.QInt8)
logger.info(f"quantized model saved to:{quantized_model_path}")
quantize_onnx_model('bert.opt.onnx', 'bert.opt.quant.onnx')
print('ONNX full precision model size (MB):', os.path.getsize("bert.opt.onnx")/(1024*1024))
print('ONNX quantized model size (MB):', os.path.getsize("bert.opt.quant.onnx")/(1024*1024))
```
### 2.3 Evaluate ONNX quantization performance and accuracy
In this step, we will evaluate ONNX Runtime quantization with the GLUE data set.
```
def evaluate_onnx(args, model_path, tokenizer, prefix=""):
sess_options = onnxruntime.SessionOptions()
sess_options.graph_optimization_level = onnxruntime.GraphOptimizationLevel.ORT_ENABLE_ALL
sess_options.intra_op_num_threads=1
session = onnxruntime.InferenceSession(model_path, sess_options)
# Loop to handle MNLI double evaluation (matched, mis-matched)
eval_task_names = ("mnli", "mnli-mm") if args.task_name == "mnli" else (args.task_name,)
eval_outputs_dirs = (args.output_dir, args.output_dir + '-MM') if args.task_name == "mnli" else (args.output_dir,)
results = {}
for eval_task, eval_output_dir in zip(eval_task_names, eval_outputs_dirs):
eval_dataset = load_and_cache_examples(args, eval_task, tokenizer, evaluate=True)
if not os.path.exists(eval_output_dir) and args.local_rank in [-1, 0]:
os.makedirs(eval_output_dir)
# Note that DistributedSampler samples randomly
eval_sampler = SequentialSampler(eval_dataset) if args.local_rank == -1 else DistributedSampler(eval_dataset)
eval_dataloader = DataLoader(eval_dataset, sampler=eval_sampler, batch_size=args.eval_batch_size)
# multi-gpu eval
if args.n_gpu > 1:
model = torch.nn.DataParallel(model)
# Eval!
logger.info("***** Running evaluation {} *****".format(prefix))
logger.info(" Num examples = %d", len(eval_dataset))
logger.info(" Batch size = %d", args.eval_batch_size)
#eval_loss = 0.0
#nb_eval_steps = 0
preds = None
out_label_ids = None
for batch in tqdm(eval_dataloader, desc="Evaluating"):
batch = tuple(t.detach().cpu().numpy() for t in batch)
ort_inputs = {
'input_ids': batch[0],
'input_mask': batch[1],
'segment_ids': batch[2]
}
logits = np.reshape(session.run(None, ort_inputs), (-1,2))
if preds is None:
preds = logits
#print(preds.shape)
out_label_ids = batch[3]
else:
preds = np.append(preds, logits, axis=0)
out_label_ids = np.append(out_label_ids, batch[3], axis=0)
#print(preds.shap)
#eval_loss = eval_loss / nb_eval_steps
if args.output_mode == "classification":
preds = np.argmax(preds, axis=1)
elif args.output_mode == "regression":
preds = np.squeeze(preds)
#print(preds)
#print(out_label_ids)
result = compute_metrics(eval_task, preds, out_label_ids)
results.update(result)
output_eval_file = os.path.join(eval_output_dir, prefix + "_eval_results.txt")
with open(output_eval_file, "w") as writer:
logger.info("***** Eval results {} *****".format(prefix))
for key in sorted(result.keys()):
logger.info(" %s = %s", key, str(result[key]))
writer.write("%s = %s\n" % (key, str(result[key])))
return results
def time_ort_model_evaluation(model_path, configs, tokenizer, prefix=""):
eval_start_time = time.time()
result = evaluate_onnx(configs, model_path, tokenizer, prefix=prefix)
eval_end_time = time.time()
eval_duration_time = eval_end_time - eval_start_time
print(result)
print("Evaluate total time (seconds): {0:.1f}".format(eval_duration_time))
print('Evaluating ONNXRuntime full precision accuracy and performance:')
time_ort_model_evaluation('bert.opt.onnx', configs, tokenizer, "onnx.opt")
print('Evaluating ONNXRuntime quantization accuracy and performance:')
time_ort_model_evaluation('bert.opt.quant.onnx', configs, tokenizer, "onnx.opt.quant")
```
## 3 Summary
In this tutorial, we demonstrated how to quantize a fine-tuned BERT model for the MRPC task on the GLUE data set. Let's summarize the main metrics of quantization.
### Model Size
PyTorch quantizes only torch.nn.Linear modules, reducing the model from 438 MB to 181 MB. ONNX Runtime quantizes not only Linear (MatMul) but also the embedding layer, achieving almost the ideal model size reduction from quantization.
| Engine | Full Precision(MB) | Quantized(MB) |
| --- | --- | --- |
| PyTorch 1.6 | 417.7 | 173.1 |
| ORT 1.5 | 417.7 | 106.5 |
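The compression ratios implied by the table work out as follows (a 4x reduction is the ideal for fp32 → int8):

```python
fp32_mb = 417.7          # full-precision model size from the table above
pytorch_int8_mb = 173.1  # PyTorch dynamic quantization (Linear layers only)
ort_int8_mb = 106.5      # ONNX Runtime quantization (Linear + embeddings)

print(round(fp32_mb / pytorch_int8_mb, 2))  # 2.41
print(round(fp32_mb / ort_int8_mb, 2))      # 3.92, close to the ideal 4x
```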
### Accuracy
ONNX Runtime achieves slightly better accuracy and F1 score, even though it has a smaller model size.
| Metrics | Full Precision | PyTorch 1.6 Quantization | ORT 1.5 Quantization |
| --- | --- | --- | --- |
| Accuracy | 0.86029 | 0.85784 | 0.85784 |
| F1 score | 0.90189 | 0.89931 | 0.90203 |
| Acc and F1 | 0.88109 | 0.87857 | 0.87994 |
### Performance
The evaluation data set has 408 samples. The table below shows the performance on my machine with an Intel(R) Xeon(R) E5-1650 v4 @ 3.60GHz CPU. Compared with PyTorch full precision, PyTorch quantization achieves a ~1.5x speedup and ORT quantization a ~1.73x speedup, while ORT quantization achieves a ~1.15x speedup over PyTorch quantization.
You can run the [benchmark.py](https://github.com/microsoft/onnxruntime/blob/master/onnxruntime/python/tools/transformers/benchmark.py) for comparison on more models.
|Engine | Full Precision Latency(s) | Quantized(s) |
| --- | --- | --- |
| PyTorch 1.6 | 33.8 | 22.5 |
| ORT 1.5 | 26.0 | 19.5 |
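The speedup ratios can be derived directly from the latency table above:

```python
pytorch_fp32 = 33.8  # seconds, PyTorch full precision
pytorch_int8 = 22.5  # seconds, PyTorch dynamic quantization
ort_int8 = 19.5      # seconds, ONNX Runtime quantization

print(round(pytorch_fp32 / pytorch_int8, 2))  # 1.5  (PyTorch int8 vs PyTorch fp32)
print(round(pytorch_fp32 / ort_int8, 2))      # 1.73 (ORT int8 vs PyTorch fp32)
print(round(pytorch_int8 / ort_int8, 2))      # 1.15 (ORT int8 vs PyTorch int8)
```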
```
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
import seaborn as sns
from scipy import stats
import warnings
warnings.filterwarnings('ignore')
path ="GlassdoorUSA.csv"
class DataFrame_Loader():
data = path
def __init__(self):
print("Loading DataFrame")
def read_csv(self,data):
self.df = pd.read_csv(data)
def load_csv(self):
return self.df
Data = DataFrame_Loader()
Data.read_csv(path)
df = Data.load_csv()
class DataFrame_Information():
def __init__(self):
print("Attribute Information object created")
def Feature_information(self,data):
data_info = pd.DataFrame(
columns=['No of observation',
'No of Variables',
'No of Numerical Variables',
'No of Factor Variables',
'No of Categorical Variables',
'No of Logical Variables',
'No of Date Variables',
'No of zero variance variables'])
data_info.loc[0,'No of observation'] = df.shape[0]
data_info.loc[0,'No of Variables'] = df.shape[1]
data_info.loc[0,'No of Numerical Variables'] = df._get_numeric_data().shape[1]
data_info.loc[0,'No of Factor Variables'] = df.select_dtypes(include='category').shape[1]
data_info.loc[0,'No of Logical Variables'] = df.select_dtypes(include='bool').shape[1]
data_info.loc[0,'No of Categorical Variables'] = df.select_dtypes(include='object').shape[1]
data_info.loc[0,'No of Date Variables'] = df.select_dtypes(include='datetime64').shape[1]
data_info.loc[0,'No of zero variance variables'] = df.loc[:,df.apply(pd.Series.nunique)==1].shape[1]
data_info =data_info.transpose()
data_info.columns=['value']
data_info['value'] = data_info['value'].astype(int)
return data_info
def Value_Counts_of_Data(self,data):
categorical_columns = [x for x in data.dtypes.index if data.dtypes[x]=='object']
for col in categorical_columns:
print('\nFrequency of Categories for variable %s'%col)
print(data[col].value_counts())
def DataFrame_Aggregation(self,data):
print("=" * 110)
print("Aggregation of Table")
print("=" * 110)
table = pd.DataFrame(data.dtypes,columns=['dtypes'])
table1 =pd.DataFrame(data.columns,columns=['Names'])
table = table.reset_index()
table= table.rename(columns={'index':'Name'})
table['No of Missing'] = data.isnull().sum().values
table['No of Uniques'] = data.nunique().values
table['Percent of Missing'] = ((data.isnull().sum().values)/ (data.shape[0])) *100
table['First Observation'] = data.loc[0].values
table['Second Observation'] = data.loc[1].values
table['Third Observation'] = data.loc[2].values
for name in table['Name'].value_counts().index:
table.loc[table['Name'] == name, 'Entropy'] = round(stats.entropy(data[name].value_counts(normalize=True), base=2),2)
return table
def __IQR(self,x):
return x.quantile(q=0.75) - x.quantile(q=0.25)
def __Outlier_Count(self,x):
upper_out = x.quantile(q=0.75) + 1.5 * self.__IQR(x)
lower_out = x.quantile(q=0.25) - 1.5 * self.__IQR(x)
return len(x[x > upper_out]) + len(x[x < lower_out])
def Numeric_Count_Summary(self,df):
df_num = df._get_numeric_data()
data_info_num = pd.DataFrame()
i=0
for c in df_num.columns:
data_info_num.loc[c,'Negative values count']= df_num[df_num[c]<0].shape[0]
data_info_num.loc[c,'Positive values count']= df_num[df_num[c]>0].shape[0]
data_info_num.loc[c,'Zero count']= df_num[df_num[c]==0].shape[0]
data_info_num.loc[c,'Unique count']= len(df_num[c].unique())
data_info_num.loc[c,'Negative Infinity count']= df_num[df_num[c]== -np.inf].shape[0]
data_info_num.loc[c,'Positive Infinity count']= df_num[df_num[c]== np.inf].shape[0]
data_info_num.loc[c,'Missing Percentage']= df_num[df_num[c].isnull()].shape[0]/ df_num.shape[0]
data_info_num.loc[c,'Count of outliers']= self.__Outlier_Count(df_num[c])
i = i+1
return data_info_num
def Statistical_Summary(self,df):
df_num = df._get_numeric_data()
data_stat_num = pd.DataFrame()
try:
data_stat_num = pd.concat([df_num.describe().transpose(),
pd.DataFrame(df_num.quantile(q=0.10)),
pd.DataFrame(df_num.quantile(q=0.90)),
pd.DataFrame(df_num.quantile(q=0.95))],axis=1)
data_stat_num.columns = ['count','mean','std','min','25%','50%','75%','max','10%','90%','95%']
except:
pass
return data_stat_num
Info = DataFrame_Information()
Info.Feature_information(df)
Info.Value_Counts_of_Data(df)
Info.DataFrame_Aggregation(df)
Info.Numeric_Count_Summary(df)
Info.Statistical_Summary(df)
class DataFrame_Preprocessor():
def __init__(self):
print("Preprocessor object created")
def Preprocessor(self,df):
df['Salary Estimate'] = df['Salary Estimate'].apply(lambda x: x.split('(')[0])
df['Salary Estimate'] = df['Salary Estimate'].apply(lambda x: x.replace("K","").replace("$",""))
df['Min Salary'] =df['Salary Estimate'].apply(lambda x:int(x.split("-")[0]))
df['Max Salary'] = df['Salary Estimate'].apply(lambda x:int(x.split("-")[1]))
df['Avg Salary'] = (df['Min Salary'] + df['Max Salary'])/2
df['Company Name'] = df['Company Name'].apply(lambda x: x.split("\n")[0])
df['Job State'] = df['Location'].apply(lambda x: x.split(',')[1])
df['Same State'] = df.apply(lambda x: 1 if x.Location == x.Headquarters else 0,axis =1)
df['Age'] = df['Founded'].apply(lambda x: x if x <1 else 2020-x)
df['Python'] = df['Job Description'].apply(lambda x: 1 if 'python' in x.lower() else 0)
df['R'] = df['Job Description'].apply(lambda x: 1 if 'r studio' in x.lower() or 'r-studio' in x.lower() or ',r' in x.lower() or ' r ' in x.lower() else 0)
df['AWS'] = df['Job Description'].apply(lambda x: 1 if 'aws' in x.lower() else 0)
df['Excel'] = df['Job Description'].apply(lambda x: 1 if 'excel' in x.lower() else 0)
df['Spark'] = df['Job Description'].apply(lambda x: 1 if 'spark' in x.lower() else 0)
df['Tableau'] = df['Job Description'].apply(lambda x: 1 if 'tableau' in x.lower() else 0)
df['SQL'] = df['Job Description'].apply(lambda x: 1 if 'sql' in x.lower() else 0)
df['TensorFlow'] = df['Job Description'].apply(lambda x: 1 if 'tensorflow' in x.lower() else 0)
df['PowerBI'] = df['Job Description'].apply(lambda x: 1 if 'powerbi' in x.lower() else 0)
df['SaS'] = df['Job Description'].apply(lambda x: 1 if 'sas' in x.lower() else 0)
df['Flask'] = df['Job Description'].apply(lambda x: 1 if 'flask' in x.lower() else 0)
df['Hadoop'] = df['Job Description'].apply(lambda x: 1 if 'hadoop' in x.lower() else 0)
df['Statistics']=df["Job Description"].apply(lambda x:1 if 'statistics' in x.lower() or 'statistical' in x.lower() else 0)
return df
preprocess = DataFrame_Preprocessor()
preprocess.Preprocessor(df)
df = df.drop("Unnamed: 0",axis = 1)
df.columns
df.to_csv("Scrapped_Glassdoor_Salary_cleaned.csv",index=False)
```
<a href="https://colab.research.google.com/github/rajesh-bhat/data-aisummit-2021-databricks-conversational-ai/blob/main/Pytorch_Intent_Classification_using_DistilBERT.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a>
### Install Libraries
```
!pip install pandas torch transformers tqdm
```
### Download Data
```
!gdown --id 1OlcvGWReJMuyYQuOZm149vHWwPtlboR6 --output train.csv
!gdown --id 1Oi5cRlTybuIF2Fl5Bfsr-KkqrXrdt77w --output valid.csv
!gdown --id 1ep9H6-HvhB4utJRLVcLzieWNUSG3P_uF --output test.csv
```
### Read Data
```
import pandas as pd
train = pd.concat([pd.read_csv(file) for file in ["train.csv","valid.csv"]])
train = train.groupby('intent').sample(frac=0.25)
test = pd.read_csv("test.csv")
print(train.shape)
print(test.shape)
train.head()
train.intent.value_counts()
intent_mapping = {x:idx for idx,x in enumerate(train.intent.unique().tolist())}
train['target'] = train['intent'].map(intent_mapping)
train.head()
```
### Load libraries
```
import random
import numpy as np
from tqdm import tqdm
import torch
from torch.utils.data import DataLoader, Dataset
from transformers import DistilBertForSequenceClassification, DistilBertTokenizer, AdamW
```
### Utilities
```
def set_seed(seed):
"""To make the training process reproducible"""
random.seed(seed)
np.random.seed(seed)
torch.manual_seed(seed)
torch.cuda.manual_seed(seed)
torch.backends.cudnn.deterministic = True
torch.backends.cudnn.benchmark = False
class MyDataset(Dataset):
def __init__(self, queries, intents, tokenizer, max_len):
self.queries = queries
self.intents = intents
self.tokenizer = tokenizer
self.max_len = max_len
def __len__(self) -> int:
return len(self.queries)
def __getitem__(self, index: int):
query = self.queries[index]
intent = self.intents[index]
# use encode plus of huggingface tokenizer to encode the sentence.
encoding = self.tokenizer.encode_plus(
query,
add_special_tokens=True,
max_length=self.max_len,
padding='max_length',
truncation=True,
return_token_type_ids=False,
return_tensors="pt",
)
return {
"query": query,
"intent": torch.tensor(intent, dtype=torch.long),
"input_ids": encoding["input_ids"].flatten(),
"attention_mask": encoding["attention_mask"].flatten(),
}
def dataset_loader(queries, intents, tokenizer, max_len, batch_size):
ds = MyDataset(
queries=queries.to_numpy(),
intents=intents.to_numpy(),
tokenizer=tokenizer,
max_len=max_len,
)
return DataLoader(ds, batch_size=batch_size, num_workers=4)
```
### Model Training
```
DEVICE = torch.device("cuda" if torch.cuda.is_available() else "cpu")
BATCH_SIZE = 64
MAX_LEN = 256
EPOCHS = 3
SEED = 42
set_seed(SEED)
tokenizer = DistilBertTokenizer.from_pretrained("distilbert-base-uncased")
train_dataloader = dataset_loader(queries=train['text'],
intents=train['target'],
tokenizer=tokenizer,
max_len=MAX_LEN,
batch_size=BATCH_SIZE)
model = DistilBertForSequenceClassification.from_pretrained("distilbert-base-uncased", num_labels=len(intent_mapping))
model.to(DEVICE)
model.train()
optimizer = AdamW(model.parameters(), lr=5e-5)
for epoch in range(EPOCHS):
for batch in tqdm(train_dataloader):
optimizer.zero_grad()
input_ids = batch['input_ids'].to(DEVICE)
attention_mask = batch['attention_mask'].to(DEVICE)
targets = batch['intent'].to(DEVICE)
outputs = model(input_ids=input_ids,
attention_mask=attention_mask,
labels=targets)
loss = outputs.loss
loss.backward()
optimizer.step()
print(f"Training loss in epoch {epoch+1}: {round(loss.item(),4)}")
```
### Scoring
```
import torch.nn.functional as F
def to_numpy(tensor):
if tensor.requires_grad:
return tensor.detach().cpu().numpy()
return tensor.cpu().numpy()
def score(model, tokenizer, intent_mapping, query):
model.eval()
encoding = tokenizer.encode_plus(
query,
add_special_tokens=True,
max_length=MAX_LEN,
padding='max_length',
truncation=True,
return_token_type_ids=False,
return_tensors="pt",
)
input_ids = encoding["input_ids"]
attention_mask = encoding["attention_mask"]
with torch.no_grad():
output = model(input_ids, attention_mask)
probs = F.softmax(output.logits, dim=1)
confidence, prediction = torch.max(probs, dim=1)
return {
"query": query,
"predicted_intent": intent_mapping.get(prediction[0].item()),
"confidence": round(confidence[0].item(), 4)
}
query = test['text'][3]
score(model=model.to('cpu'),
tokenizer=tokenizer,
intent_mapping={value: key for key, value in intent_mapping.items()},
query=query)
```
```
from PIL import Image
import requests as rq
from io import BytesIO
import pandas as pd
import numpy as np
from keras.preprocessing import image
from keras.preprocessing.image import img_to_array
from keras.layers import (LSTM, Embedding, Input, Dense, Dropout)
from keras.models import Model
from keras.optimizers import Adam
from keras.preprocessing.image import img_to_array, load_img
from keras.preprocessing.text import Tokenizer
from keras.applications.vgg16 import VGG16, preprocess_input
from keras.preprocessing.sequence import pad_sequences
from keras.utils import to_categorical
from keras.layers.merge import add
from random import randint, sample
import csv
from keras.callbacks import LambdaCallback
# Model to pre-process images
cnn_model = VGG16()
# re-structure the model: drop the final classification layer and keep the 4096-d fc2 features
# (popping from model.layers does not rewire the graph, so we build a new Model instead)
cnn_model = Model(inputs=cnn_model.inputs, outputs=cnn_model.layers[-2].output)
# Read in twitter data [cols: images, captions]
data = pd.read_csv("masterdata.csv")
l = len(data)
# List to hold the images' feature vectors
features = []
# List to hold the captions
captions = []
# List to hold the indices explored
indices = []
# Dimension of images
image_dim = 224
# Number of image-caption pairs to extract
num_images = 10000
# Maximum number of characters that a caption can be
longest_caption = 60
while len(features) < num_images:
try:
i = randint(0, l - 1) # randint is inclusive on both ends
if i in indices:
continue
elif len(data.caption[i]) > longest_caption:
continue
try:
# Get image from URL and run it through VGG16 to get feature vector
url = data.photo[i]
response = rq.get(url)
img = Image.open(BytesIO(response.content)).resize((image_dim,image_dim))
x = image.img_to_array(img)
x = x.reshape((1, x.shape[0], x.shape[1], x.shape[2]))
x = preprocess_input(x)
f = cnn_model.predict(x)
features.append(f)
# Append caption
captions.append(data.caption[i])
# Append index
indices.append(i)
except:
continue
except:
print(data.caption[i])
continue
# Print statement to check in on progress
if (len(features) % 100 == 0):
print(str(len(features)) + "\tpreprocessed")
# Replace all newline characters in captions
captions = [c.replace('\n', ' ') for c in captions]
# Tokenize captions: leave punctuation and upper-case letters, tokenize on a char level
tokenizer = Tokenizer(lower=False, char_level=True,filters='\t\n')
tokenizer.fit_on_texts(captions)
encoded_captions = tokenizer.texts_to_sequences(captions)
start = len(tokenizer.word_index) + 1
stop = start + 1
vocab_size = stop + 1
# Insert start and stop sequences to the encoded caption size
encoded_captions = [([start] + c) for c in encoded_captions]
encoded_captions = [(c + [stop]) for c in encoded_captions]
# Write extracted data to file (csv is already imported above)
with open("encoded.csv","w") as f:
wr = csv.writer(f)
wr.writerows(encoded_captions)
with open("captions.csv","w") as f:
wr = csv.writer(f)
wr.writerows([[c] for c in captions]) # wrap each caption in a list, or csv splits the string into characters
with open("features.csv","w") as f:
wr = csv.writer(f)
wr.writerows([feat.flatten() for feat in features]) # one 4096-d feature vector per row
# Create input-output vectors
# Input: image feature vector, first n characters in caption
# Output: n+1 character
max_cap = max(len(c) for c in encoded_captions)
X1 = []
X2 = []
y = []
for i in range(len(encoded_captions)):
c = encoded_captions[i]
for j in range(len(c)):
in_seq, out_seq = c[:j], c[j]
in_seq = pad_sequences([in_seq], max_cap)[0]
out_seq = to_categorical(out_seq, num_classes = vocab_size)
X1.append(features[i])
X2.append(in_seq)
y.append(out_seq)
X1 = np.reshape(X1,(np.shape(X1)[0], np.shape(X1)[2]))
# feature extractor model
inputs1 = Input(shape=(4096,))
fe1 = Dropout(0.5)(inputs1)
fe2 = Dense(256, activation='relu')(fe1)
# sequence model
inputs2 = Input(shape=(max_cap,))
se1 = Embedding(vocab_size, 256, mask_zero=True)(inputs2)
se2 = Dropout(0.5)(se1)
se3 = LSTM(256)(se2)
# decoder model
decoder1 = add([fe2, se3])
decoder2 = Dense(256, activation='relu')(decoder1)
outputs = Dense(vocab_size, activation='softmax')(decoder2)
# tie it together [image, seq] [word]
model = Model(inputs=[inputs1, inputs2], outputs=outputs)
model.compile(loss='categorical_crossentropy', optimizer='adam',
metrics = ['accuracy'])
# summarize model
print(model.summary())
# Function to convert a token to a letter
def to_letter(yhat):
for k, v in tokenizer.word_index.items():
if v == yhat:
return k
# Function to test the model at the end of each epoch
def on_epoch_end(epoch, _):
"""Function invoked at end of each epoch. Prints generated text."""
print()
print('Generating caption after epoch %d' % epoch)
idx = randint(0, num_images - 1) # separate name so the decoding loop cannot shadow it
generated = ''
print('-------- Real caption: "' + captions[idx] + '"')
pred_encoded = [start]
f = np.array(features[idx])
for _ in range(max_cap):
x_pred = np.array(pad_sequences([pred_encoded], max_cap))
preds = np.argmax(model.predict([f, x_pred], verbose=0))
if preds == start or preds == stop:
break
pred_encoded.append(preds)
generated += to_letter(preds)
print("--- Generated caption: \"" + generated + "\"")
print()
print_callback = LambdaCallback(on_epoch_end=on_epoch_end)
# Train on 80% of the data, test on 20%
amt_data = int(len(X1) * 4 / 5)
# Create training and testing vectors
X1train = np.array(X1[:amt_data])
X1test = np.array(X1[amt_data:])
X2train = np.array(X2[:amt_data])
X2test = np.array(X2[amt_data:])
ytrain = np.array(y[:amt_data])
ytest = np.array(y[amt_data:])
# Fit the model
model.fit([X1train, X2train], ytrain, epochs=15, verbose=1, validation_data=([X1test, X2test], ytest), callbacks=[print_callback])
# Test fitted model on 100 random images
for x in range(100):
i = randint(0, num_images - 1)
print(i)
print("Real:")
print("\t" + captions[i])
pred_encoded = [start]
f = np.array(features[i])
yhat = 0
pred_capt = ""
# Loop to continue feeding generated string in until we hit the stop sequence
while yhat != stop and len(pred_capt) < 100:
pred_pad = np.array(pad_sequences([pred_encoded], max_cap))
yhat = np.argmax(model.predict([f, pred_pad]))
if yhat == start or yhat == stop:
break
try:
pred_encoded.append(yhat)
pred_capt += to_letter(yhat)
except TypeError:
# to_letter returned None for a token outside the vocabulary
break
print("Predicted:")
print("\t" + pred_capt)
print()
```
# Neural machine translation with attention
This notebook trains a sequence-to-sequence (seq2seq) model that translates Marathi to English. This is an advanced example that assumes some knowledge of sequence-to-sequence models.
After training the model in this notebook, you will be able to input a Marathi sentence, such as *"आम्ही जिंकलो."*, and get back its English translation, *"We won."*
The translation quality is reasonable for a toy example, but the generated attention plot is perhaps more interesting: it shows which parts of the input sentence received the model's attention during translation.
<img src="https://tensorflow.google.cn/images/spanish-english.png" alt="spanish-english attention plot">
Note: this example takes about 10 minutes to run on a single P100 GPU.
```
import tensorflow as tf
import matplotlib.pyplot as plt
import matplotlib.ticker as ticker
from sklearn.model_selection import train_test_split
import unicodedata
import re
import numpy as np
import os
import io
import time
```
## Download and prepare the dataset
We will use a language dataset provided by http://www.manythings.org/anki/. This dataset contains language translation pairs in the format:
```
May I borrow this book? ¿Puedo tomar prestado este libro?
```
There are many languages to choose from in this dataset. We will use the English - Marathi dataset. For convenience, a copy of this dataset is hosted on Google Cloud, but you can also download your own copy. After downloading the dataset, here are the steps we will take to prepare the data:
1. Add a *start* and *end* token to each sentence.
2. Clean the sentences by removing special characters.
3. Create a word index and a reverse word index (dictionaries mapping from word → id and from id → word).
4. Pad each sentence to the maximum length.
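As a toy sketch of steps 1, 3 and 4 (step 2, the cleaning, is handled below by `preprocess_sentence`; all names here are illustrative only):

```python
# Toy sketch: start/end tokens, word index, inverse index, and padding.
sentences = ["we won .", "thank you very much ."]

# 1. Add a start and end token to each sentence.
marked = ["<start> " + s + " <end>" for s in sentences]

# 3. Build a word index and a reverse word index.
vocab = sorted({w for s in marked for w in s.split()})
word_index = {w: i + 1 for i, w in enumerate(vocab)}  # 0 is reserved for padding
index_word = {i: w for w, i in word_index.items()}

# 4. Encode each sentence and pad it to the maximum length.
encoded = [[word_index[w] for w in s.split()] for s in marked]
max_len = max(len(e) for e in encoded)
padded = [e + [0] * (max_len - len(e)) for e in encoded]
```

The real pipeline below does the same thing with `tf.keras.preprocessing.text.Tokenizer` and `pad_sequences`.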
```
'''
# Download the file
path_to_zip = tf.keras.utils.get_file(
'spa-eng.zip', origin='http://storage.googleapis.com/download.tensorflow.org/data/spa-eng.zip',
extract=True)
path_to_file = os.path.dirname(path_to_zip)+"/spa-eng/spa.txt"
'''
path_to_file = "./lan/mar.txt"
# Convert unicode to ASCII
def unicode_to_ascii(s):
return ''.join(c for c in unicodedata.normalize('NFD', s)
if unicodedata.category(c) != 'Mn')
def preprocess_sentence(w):
w = unicode_to_ascii(w.lower().strip())
    # insert a space between each word and the punctuation following it
    # e.g.: "he is a boy." => "he is a boy ."
    # reference: https://stackoverflow.com/questions/3645931/python-padding-punctuation-with-white-spaces-keeping-punctuation
w = re.sub(r"([?.!,¿])", r" \1 ", w)
w = re.sub(r'[" "]+', " ", w)
    # replace everything with a space except (a-z, A-Z, ".", "?", "!", ",")
w = re.sub(r"[^a-zA-Z?.!,¿]+", " ", w)
w = w.rstrip().strip()
    # add a start and an end token to the sentence
    # so the model knows when to start and stop predicting
w = '<start> ' + w + ' <end>'
return w
en_sentence = u"May I borrow this book?"
sp_sentence = u"¿Puedo tomar prestado este libro?"
print(preprocess_sentence(en_sentence))
print(preprocess_sentence(sp_sentence).encode('utf-8'))
# 1. Remove accents
# 2. Clean the sentences
# 3. Return word pairs in the format: [ENGLISH, MARATHI]
def create_dataset(path, num_examples):
lines = io.open(path, encoding='UTF-8').read().strip().split('\n')
word_pairs = [[preprocess_sentence(w) for w in l.split('\t')] for l in lines[:num_examples]]
return zip(*word_pairs)
en, sp = create_dataset(path_to_file, None)
print(en[-1])
print(sp[-1])
def max_length(tensor):
return max(len(t) for t in tensor)
def tokenize(lang):
lang_tokenizer = tf.keras.preprocessing.text.Tokenizer(
filters='')
lang_tokenizer.fit_on_texts(lang)
tensor = lang_tokenizer.texts_to_sequences(lang)
tensor = tf.keras.preprocessing.sequence.pad_sequences(tensor,
padding='post')
return tensor, lang_tokenizer
def load_dataset(path, num_examples=None):
    # create cleaned input-output pairs
targ_lang, inp_lang = create_dataset(path, num_examples)
input_tensor, inp_lang_tokenizer = tokenize(inp_lang)
target_tensor, targ_lang_tokenizer = tokenize(targ_lang)
return input_tensor, target_tensor, inp_lang_tokenizer, targ_lang_tokenizer
```
### Limit the size of the dataset for faster experimentation (optional)
Training on the complete dataset of more than 100,000 sentences takes a long time. To train faster, we can limit the dataset to 30,000 sentences (translation quality degrades with less data, of course):
```
# try experimenting with different dataset sizes
num_examples = 30000
input_tensor, target_tensor, inp_lang, targ_lang = load_dataset(path_to_file, num_examples)
# compute the maximum length (max_length) of the target tensors
max_length_targ, max_length_inp = max_length(target_tensor), max_length(input_tensor)
# split into training and validation sets using an 80-20 split
input_tensor_train, input_tensor_val, target_tensor_train, target_tensor_val = train_test_split(input_tensor, target_tensor, test_size=0.2)
# show the lengths
print(len(input_tensor_train), len(target_tensor_train), len(input_tensor_val), len(target_tensor_val))
def convert(lang, tensor):
for t in tensor:
if t!=0:
print ("%d ----> %s" % (t, lang.index_word[t]))
print ("Input Language; index to word mapping")
convert(inp_lang, input_tensor_train[0])
print ()
print ("Target Language; index to word mapping")
convert(targ_lang, target_tensor_train[0])
```
### Create a tf.data dataset
```
BUFFER_SIZE = len(input_tensor_train)
BATCH_SIZE = 64
steps_per_epoch = len(input_tensor_train)//BATCH_SIZE
embedding_dim = 256
units = 1024
vocab_inp_size = len(inp_lang.word_index)+1
vocab_tar_size = len(targ_lang.word_index)+1
dataset = tf.data.Dataset.from_tensor_slices((input_tensor_train, target_tensor_train)).shuffle(BUFFER_SIZE)
dataset = dataset.batch(BATCH_SIZE, drop_remainder=True)
example_input_batch, example_target_batch = next(iter(dataset))
example_input_batch.shape, example_target_batch.shape
```
## Write the encoder and decoder model
Implement an encoder-decoder model with attention. You can read about this model in TensorFlow's [Neural Machine Translation (seq2seq) tutorial](https://github.com/tensorflow/nmt). This example uses a more recent set of APIs and implements the [attention equations](https://github.com/tensorflow/nmt#background-on-the-attention-mechanism) from that tutorial. The diagram below shows that each input word is assigned a weight by the attention mechanism, which is then used by the decoder to predict the next word in the sentence. The picture and formulas below are an example of the attention mechanism from [Luong's paper](https://arxiv.org/abs/1508.04025v5).
<img src="https://tensorflow.google.cn/images/seq2seq/attention_mechanism.jpg" width="500" alt="attention mechanism">
The input is put through an encoder model which gives us the encoder output of shape *(batch_size, max_length, hidden_size)* and the encoder hidden state of shape *(batch_size, hidden_size)*.
Here are the equations that are implemented:
<img src="https://tensorflow.google.cn/images/seq2seq/attention_equation_0.jpg" alt="attention equation 0" width="800">
<img src="https://tensorflow.google.cn/images/seq2seq/attention_equation_1.jpg" alt="attention equation 1" width="800">
This tutorial uses [Bahdanau attention](https://arxiv.org/pdf/1409.0473.pdf) for the encoder. Let's decide on notation before writing the simplified form:
* FC = fully connected (dense) layer
* EO = encoder output
* H = hidden state
* X = input to the decoder
And the pseudocode:
* `score = FC(tanh(FC(EO) + FC(H)))`
* `attention weights = softmax(score, axis = 1)`. Softmax is applied to the last axis by default, but here we want to apply it to the *1st axis*, since the shape of the score is *(batch_size, max_length, hidden_size)*. `max_length` is the length of our input. Since we are trying to assign a weight to each input, softmax should be applied on that axis.
* `context vector = sum(attention weights * EO, axis = 1)`. Same reason as above for choosing axis 1.
* `embedding output` = the decoder input X is passed through an embedding layer.
* `merged vector = concat(embedding output, context vector)`
* This merged vector is then given to the GRU
The shapes of all the vectors at each step have been specified in the comments in the code:
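The pseudocode above can be checked shape by shape with a small NumPy sketch (random weights and toy dimensions, purely illustrative):

```python
import numpy as np

rng = np.random.default_rng(0)
batch, max_length, hidden, units = 2, 5, 8, 4

EO = rng.normal(size=(batch, max_length, hidden))   # encoder output
H = rng.normal(size=(batch, hidden))                # hidden state

# FC layers as plain weight matrices (no bias, for brevity)
W1 = rng.normal(size=(hidden, units))
W2 = rng.normal(size=(hidden, units))
V = rng.normal(size=(units, 1))

# score = FC(tanh(FC(EO) + FC(H))) -> (batch, max_length, 1)
score = np.tanh(EO @ W1 + (H @ W2)[:, None, :]) @ V

# attention weights = softmax(score, axis = 1), over the time axis
e = np.exp(score - score.max(axis=1, keepdims=True))
attention_weights = e / e.sum(axis=1, keepdims=True)

# context vector = sum(attention weights * EO, axis = 1) -> (batch, hidden)
context_vector = (attention_weights * EO).sum(axis=1)
```

With these toy dimensions the shapes come out as (2, 5, 1) for the score and weights and (2, 8) for the context vector, matching the shape comments in the implementation below.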
```
class Encoder(tf.keras.Model):
def __init__(self, vocab_size, embedding_dim, enc_units, batch_sz):
super(Encoder, self).__init__()
self.batch_sz = batch_sz
self.enc_units = enc_units
self.embedding = tf.keras.layers.Embedding(vocab_size, embedding_dim)
self.gru = tf.keras.layers.GRU(self.enc_units,
return_sequences=True,
return_state=True,
recurrent_initializer='glorot_uniform')
def call(self, x, hidden):
x = self.embedding(x)
output, state = self.gru(x, initial_state = hidden)
return output, state
def initialize_hidden_state(self):
return tf.zeros((self.batch_sz, self.enc_units))
encoder = Encoder(vocab_inp_size, embedding_dim, units, BATCH_SIZE)
# sample input
sample_hidden = encoder.initialize_hidden_state()
sample_output, sample_hidden = encoder(example_input_batch, sample_hidden)
print ('Encoder output shape: (batch size, sequence length, units) {}'.format(sample_output.shape))
print ('Encoder Hidden state shape: (batch size, units) {}'.format(sample_hidden.shape))
class BahdanauAttention(tf.keras.layers.Layer):
def __init__(self, units):
super(BahdanauAttention, self).__init__()
self.W1 = tf.keras.layers.Dense(units)
self.W2 = tf.keras.layers.Dense(units)
self.V = tf.keras.layers.Dense(1)
def call(self, query, values):
    # hidden shape == (batch_size, hidden_size)
    # hidden_with_time_axis shape == (batch_size, 1, hidden_size)
    # we do this to perform addition when computing the score
    hidden_with_time_axis = tf.expand_dims(query, 1)
    # score shape == (batch_size, max_length, 1)
    # we get 1 at the last axis because we apply the score to self.V
    # the shape of the tensor before applying self.V is (batch_size, max_length, units)
score = self.V(tf.nn.tanh(
self.W1(values) + self.W2(hidden_with_time_axis)))
    # attention_weights shape == (batch_size, max_length, 1)
attention_weights = tf.nn.softmax(score, axis=1)
    # context_vector shape after the sum == (batch_size, hidden_size)
context_vector = attention_weights * values
context_vector = tf.reduce_sum(context_vector, axis=1)
return context_vector, attention_weights
attention_layer = BahdanauAttention(10)
attention_result, attention_weights = attention_layer(sample_hidden, sample_output)
print("Attention result shape: (batch size, units) {}".format(attention_result.shape))
print("Attention weights shape: (batch_size, sequence_length, 1) {}".format(attention_weights.shape))
class Decoder(tf.keras.Model):
def __init__(self, vocab_size, embedding_dim, dec_units, batch_sz):
super(Decoder, self).__init__()
self.batch_sz = batch_sz
self.dec_units = dec_units
self.embedding = tf.keras.layers.Embedding(vocab_size, embedding_dim)
self.gru = tf.keras.layers.GRU(self.dec_units,
return_sequences=True,
return_state=True,
recurrent_initializer='glorot_uniform')
self.fc = tf.keras.layers.Dense(vocab_size)
    # used for attention
self.attention = BahdanauAttention(self.dec_units)
def call(self, x, hidden, enc_output):
    # enc_output shape == (batch_size, max_length, hidden_size)
context_vector, attention_weights = self.attention(hidden, enc_output)
    # x shape after passing through the embedding == (batch_size, 1, embedding_dim)
x = self.embedding(x)
    # x shape after concatenation == (batch_size, 1, embedding_dim + hidden_size)
x = tf.concat([tf.expand_dims(context_vector, 1), x], axis=-1)
    # passing the concatenated vector to the GRU
output, state = self.gru(x)
    # output shape == (batch_size * 1, hidden_size)
output = tf.reshape(output, (-1, output.shape[2]))
    # output shape == (batch_size, vocab)
x = self.fc(output)
return x, state, attention_weights
decoder = Decoder(vocab_tar_size, embedding_dim, units, BATCH_SIZE)
sample_decoder_output, _, _ = decoder(tf.random.uniform((64, 1)),
sample_hidden, sample_output)
print ('Decoder output shape: (batch_size, vocab size) {}'.format(sample_decoder_output.shape))
```
## Define the optimizer and the loss function
```
optimizer = tf.keras.optimizers.Adam()
loss_object = tf.keras.losses.SparseCategoricalCrossentropy(
from_logits=True, reduction='none')
def loss_function(real, pred):
mask = tf.math.logical_not(tf.math.equal(real, 0))
loss_ = loss_object(real, pred)
mask = tf.cast(mask, dtype=loss_.dtype)
loss_ *= mask
return tf.reduce_mean(loss_)
```
## Checkpoints (object-based saving)
```
checkpoint_dir = './training_checkpoints'
checkpoint_prefix = os.path.join(checkpoint_dir, "ckpt")
checkpoint = tf.train.Checkpoint(optimizer=optimizer,
encoder=encoder,
decoder=decoder)
```
## Training
1. Pass the *input* through the *encoder*, which returns the *encoder output* and the *encoder hidden state*.
2. The encoder output, encoder hidden state and the decoder input (which is the *start token*) are passed to the decoder.
3. The decoder returns the *predictions* and the *decoder hidden state*.
4. The decoder hidden state is then passed back into the model and the predictions are used to calculate the loss.
5. Use *teacher forcing* to decide the next input to the decoder.
6. *Teacher forcing* is the technique where the *target word* is passed as the *next input* to the decoder.
7. The final step is to calculate the gradients, apply them to the optimizer, and backpropagate.
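Step 6 can be illustrated without a model: under teacher forcing the decoder input at step t is the ground-truth token from step t-1, regardless of what the decoder predicted (toy token ids, illustrative only):

```python
import numpy as np

# Toy target batch of shape (batch_size, seq_len); 1 plays the role of the <start> token.
targ = np.array([[1, 7, 3, 2],
                 [1, 5, 9, 2]])

# Teacher forcing: the decoder input at step t is the target token from step t-1,
# while the loss at step t is computed against the target token at step t.
decoder_inputs = [targ[:, t - 1] for t in range(1, targ.shape[1])]
decoder_targets = [targ[:, t] for t in range(1, targ.shape[1])]
```

This is exactly the `dec_input = tf.expand_dims(targ[:, t], 1)` update in the training step below.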
```
@tf.function
def train_step(inp, targ, enc_hidden):
loss = 0
with tf.GradientTape() as tape:
enc_output, enc_hidden = encoder(inp, enc_hidden)
dec_hidden = enc_hidden
dec_input = tf.expand_dims([targ_lang.word_index['<start>']] * BATCH_SIZE, 1)
    # teacher forcing - feeding the target as the next input
for t in range(1, targ.shape[1]):
    # passing enc_output to the decoder
predictions, dec_hidden, _ = decoder(dec_input, dec_hidden, enc_output)
loss += loss_function(targ[:, t], predictions)
    # using teacher forcing
dec_input = tf.expand_dims(targ[:, t], 1)
batch_loss = (loss / int(targ.shape[1]))
variables = encoder.trainable_variables + decoder.trainable_variables
gradients = tape.gradient(loss, variables)
optimizer.apply_gradients(zip(gradients, variables))
return batch_loss
EPOCHS = 10
for epoch in range(EPOCHS):
start = time.time()
enc_hidden = encoder.initialize_hidden_state()
total_loss = 0
for (batch, (inp, targ)) in enumerate(dataset.take(steps_per_epoch)):
batch_loss = train_step(inp, targ, enc_hidden)
total_loss += batch_loss
if batch % 100 == 0:
print('Epoch {} Batch {} Loss {:.4f}'.format(epoch + 1,
batch,
batch_loss.numpy()))
# save (checkpoint) the model every 2 epochs
if (epoch + 1) % 2 == 0:
checkpoint.save(file_prefix = checkpoint_prefix)
print('Epoch {} Loss {:.4f}'.format(epoch + 1,
total_loss / steps_per_epoch))
print('Time taken for 1 epoch {} sec\n'.format(time.time() - start))
```
## Translate
* The evaluate function is similar to the training loop, except that here we do not use *teacher forcing*. The decoder input at each time step is its previous prediction together with the hidden state and the encoder output.
* Stop predicting when the model predicts the *end token*.
* And store the *attention weights for every time step*.
Note: the encoder output is computed only once for a given input.
```
def evaluate(sentence):
attention_plot = np.zeros((max_length_targ, max_length_inp))
sentence = preprocess_sentence(sentence)
inputs = [inp_lang.word_index[i] for i in sentence.split(' ')]
inputs = tf.keras.preprocessing.sequence.pad_sequences([inputs],
maxlen=max_length_inp,
padding='post')
inputs = tf.convert_to_tensor(inputs)
result = ''
hidden = [tf.zeros((1, units))]
enc_out, enc_hidden = encoder(inputs, hidden)
dec_hidden = enc_hidden
dec_input = tf.expand_dims([targ_lang.word_index['<start>']], 0)
for t in range(max_length_targ):
predictions, dec_hidden, attention_weights = decoder(dec_input,
dec_hidden,
enc_out)
    # store the attention weights to plot later
attention_weights = tf.reshape(attention_weights, (-1, ))
attention_plot[t] = attention_weights.numpy()
predicted_id = tf.argmax(predictions[0]).numpy()
result += targ_lang.index_word[predicted_id] + ' '
if targ_lang.index_word[predicted_id] == '<end>':
return result, sentence, attention_plot
    # the predicted ID is fed back into the model
dec_input = tf.expand_dims([predicted_id], 0)
return result, sentence, attention_plot
# function for plotting the attention weights
def plot_attention(attention, sentence, predicted_sentence):
fig = plt.figure(figsize=(10,10))
ax = fig.add_subplot(1, 1, 1)
ax.matshow(attention, cmap='viridis')
fontdict = {'fontsize': 14}
ax.set_xticklabels([''] + sentence, fontdict=fontdict, rotation=90)
ax.set_yticklabels([''] + predicted_sentence, fontdict=fontdict)
ax.xaxis.set_major_locator(ticker.MultipleLocator(1))
ax.yaxis.set_major_locator(ticker.MultipleLocator(1))
plt.show()
def translate(sentence):
result, sentence, attention_plot = evaluate(sentence)
print('Input: %s' % (sentence))
print('Predicted translation: {}'.format(result))
attention_plot = attention_plot[:len(result.split(' ')), :len(sentence.split(' '))]
plot_attention(attention_plot, sentence.split(' '), result.split(' '))
```
## Restore the latest checkpoint and test
```
# restore the latest checkpoint in checkpoint_dir
checkpoint.restore(tf.train.latest_checkpoint(checkpoint_dir))
# the sentence may only contain words that appear in the training vocabulary
translate(u'आम्ही जिंकलो.')
```
## Keras
Keras is a high-level wrapper around TensorFlow, and in the current releases of the tf library they live very close together.
When we build our networks we take ready-made layers from keras and add our own pieces when we need them.
But keras can also be used without touching TF explicitly, reducing the task to fit-predict.
### Sequential
The simplest thing we can do is stack layers one after another - so let's do exactly that!
```
from sklearn.model_selection import train_test_split
import tensorflow as tf
from tensorflow.keras import Sequential, layers as L # load the modules we need.
import tensorflow.keras as keras
# keras ships with several datasets. As an example we take fashion_mnist - like mnist, but with clothing items :)
fashion_mnist = tf.keras.datasets.fashion_mnist
(X_train, y_train), (X_test, y_test) = fashion_mnist.load_data()
X_train, X_val, y_train, y_val = train_test_split(X_train, y_train, test_size=10**4, random_state=42)
X_train = X_train/ 255.
X_val = X_val/ 255.
X_test = X_test/ 255.
class_names = ['T-shirt/top', 'Trouser', 'Pullover', 'Dress', 'Coat',
'Sandal', 'Shirt', 'Sneaker', 'Bag', 'Ankle boot']
# to train with cross_entropy we need to one-hot encode the targets. And keras has a function for that!
from tensorflow.keras.utils import to_categorical
y_train_ohe = to_categorical(y_train)
y_test_ohe = to_categorical(y_test)
y_val_ohe = to_categorical(y_val)
## Our first simple neural net
tf.random.set_seed(42) # fix the random seed
model = Sequential(name = 'first_try')
model.add(L.Input(shape = (28,28))) # the input layer. It can usually be omitted by passing the input
# shape straight to the first layer, but Dense layers cannot work with images, so we keep Input
model.add(L.Flatten()) # unroll the image into a vector
model.add(L.Dense(100, kernel_initializer='random_normal',name='First')) # layers can be named and later looked up by name
model.add(L.ReLU()) # add the activation
model.add(L.Dense(10, kernel_initializer = 'random_normal',name='Output'))
model.add(L.Softmax())
opt = keras.optimizers.Adam(learning_rate=1e-4) # we also need to pick an optimizer
model.compile(optimizer=opt,loss='categorical_crossentropy',
metrics=["categorical_accuracy"]) # and compile the model, specifying the metrics, loss and optimizer
history1 = model.fit(X_train,y_train_ohe,batch_size=500,epochs=2,validation_data = (X_val,y_val_ohe))
# and run training. We set the number of epochs, the batch size and the validation part of our data
#history1.params
#history1.history
# The same model can be written in slightly fewer lines of code
model = Sequential(name = 'first_try')
model.add(L.Input(shape = (28,28)))
model.add(L.Flatten())
model.add(L.Dense(100, kernel_initializer='random_normal',name='First',activation='relu')) # layers can be named and later looked up by name
model.add(L.Dense(10, kernel_initializer = 'random_normal',name='Output',activation='softmax'))
opt = keras.optimizers.Adam(learning_rate=1e-4)
model.compile(optimizer=opt,loss='categorical_crossentropy',
metrics=["categorical_accuracy"])
history1 = model.fit(X_train,y_train_ohe,batch_size=500,epochs=2,validation_data = (X_val,y_val_ohe))
# ?L.Dense
```
The nice part is that in this approach everything can be customized to your needs!
But for now let's move on to the next way of building models.
The Sequential class gives us no flexibility at all, allowing only a linear stack of layers.
So here is our new hero - Model.
It lets us build networks of almost any architecture
```
from tensorflow.keras import Model # load the modules we need.
init = 'uniform'
act = 'relu'
input_tensor = L.Input(shape=(28, 28)) # define the input
x = L.Flatten()(input_tensor)# apply a layer to the input
x = L.Dense(100, kernel_initializer=init, activation=act)(x) # repeat the whole logic as many times as we need
x = L.Dense(100, kernel_initializer=init, activation=act)(x)
output_tensor = L.Dense(10, kernel_initializer=init, activation='softmax')(x)
model = keras.Model(input_tensor, output_tensor) # under the hood Keras assembles the graph itself.
# If it can get from the input to the output, you are golden.
model.compile(optimizer=opt,loss='categorical_crossentropy',
metrics=["categorical_accuracy"])
history = model.fit(X_train,y_train_ohe,batch_size=500,epochs=2,validation_data = (X_val,y_val_ohe))
```
This approach lets us build networks of almost arbitrary flexibility. For example - a two-headed one!
```
input_1 = L.Input(shape=(28, 28))
input_2 = L.Input(shape=(28, 28))
x1 = L.Flatten()(input_1)
x1 = L.Dense(100, kernel_initializer=init, activation=act)(x1)
x1 = L.Dense(100, kernel_initializer=init, activation=act)(x1)
x2 = L.Flatten()(input_2)
x2 = L.Dense(100, kernel_initializer=init, activation=act)(x2)
x2 = L.Dense(100, kernel_initializer=init, activation=act)(x2)
x = L.concatenate([x1, x2]) # the magic word that lets us join several streams of our data
output = L.Dense(10, kernel_initializer=init, activation='softmax')(x)
model = keras.Model([input_1, input_2], output)
model.summary()
model.compile(optimizer=opt,loss='categorical_crossentropy',
metrics=["categorical_accuracy"])
history = model.fit([X_train,X_train],y_train_ohe,batch_size=500,epochs=2,validation_data = ([X_val,X_val],y_val_ohe))
# needed on Windows if it cannot find the path to graphviz
import os
os.environ["PATH"] += os.pathsep + 'C:/Program Files (x86)/Graphviz2.38/bin/'
from tensorflow.keras.utils import plot_model
plot_model(model) # the model can be drawn
```
## 4.2 Multiple outputs and loss functions
```
init = 'uniform'
act = 'relu'
input_tensor = L.Input(shape=(28, 28))
x = L.Flatten()(input_tensor)
x1 = L.Dense(100, kernel_initializer=init, activation=act)(x)
x2 = L.Dense(100, kernel_initializer=init, activation=act)(x)
x3 = L.Dense(100, kernel_initializer=init, activation=act)(x)
output_1 = L.Dense(1, kernel_initializer=init, activation='sigmoid',name='gender')(x1)
output_2 = L.Dense(10, kernel_initializer=init, activation='softmax',name='income')(x2)
output_3 = L.Dense(1, kernel_initializer=init,name='age')(x3)
model = keras.Model(input_tensor, [output_1, output_2, output_3])
model.summary()
# so the model doesn't overfit to the largest loss function,
# the losses can be weighted
model.compile(optimizer='adam', loss=['binary_crossentropy', 'categorical_crossentropy', 'mse'],
              loss_weights=[10., 1., 0.25])
# if you gave the outputs names, you can do it like this:
model.compile(optimizer='adam',
loss={'age': 'mse',
'income': 'categorical_crossentropy',
'gender': 'binary_crossentropy'},
loss_weights={'age': 0.25,
'income': 1.,
'gender': 10.})
```
Remember the article where people plotted loss functions? [Now there's a gallery!](https://losslandscape.com/gallery/) There is a technique called skip connections, and it changes the loss landscape substantially.

Such a model can't be assembled in the `Sequential` style, but it can be built in the functional style. Let's try to do exactly that, and at the same time see how much the training trajectory changes in our setting. (We'll compare a plain 6-layer perceptron against one that forwards information, e.g. from layer 2 to layer 5.) Along the way we'll also experiment with activation functions and batchnorm.
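Conceptually, a skip connection just adds a block's input back onto its output, so the layers in between learn a residual. A minimal NumPy forward-pass sketch (toy shapes and random weights, purely illustrative):

```python
import numpy as np

rng = np.random.default_rng(42)
x = rng.normal(size=(4,))            # block input
W1 = 0.1 * rng.normal(size=(4, 4))   # weights of the inner dense layers
W2 = 0.1 * rng.normal(size=(4, 4))

h = np.maximum(W1 @ x, 0.0)          # dense + ReLU
out = W2 @ h + x                     # skip connection: add the input back
print(out.shape)
```

In the Keras functional style the same idea is one line inside the graph: `x = L.add([x, skip])`.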
```
### ╰( ͡° ͜ʖ ͡° )つ──☆*:・゚ build the networks following the instructions below
tf.random.set_seed(42)
## Build a plain 6-layer perceptron
## Build a plain 6-layer perceptron with sigmoid activations
## Build one with batchnorm and ReLU
## And finally, one that forwards data across layers (skip connections)
# Helper function to conveniently plot everything
def plot_history(histories, key='loss', start=0):
plt.figure(figsize=(16,10))
for name, history in histories:
val = plt.plot(history.epoch[start:], history.history['val_'+key][start:],
#'--',
label=name.title()+' Val')
#plt.plot(history.epoch[start:], history.history[key][start:], color=val[0].get_color(),
# label=name.title()+' Train')
plt.xlabel('Epochs')
plt.ylabel(key.replace('_',' ').title())
plt.legend()
plt.xlim([start, max(history.epoch)])
plot_history([('simple', history_simple),
('simple_sigmoid', history_simple_sigmoid),
('simple_BN_and_init', history_simple_BN_and_init),
('complex!!!', history_complex)
],
start=0)
```
#### The more interesting part: customization
Keras already ships with several handy callbacks.
```
from tensorflow.keras.callbacks import EarlyStopping,LearningRateScheduler,ReduceLROnPlateau,ModelCheckpoint
EarlyStopping # stops training if our metric hasn't changed for n epochs
LearningRateScheduler # changes the learning rate on a schedule
ReduceLROnPlateau # lowers the LR when there is no improvement
ModelCheckpoint # saves our best model
early_stop = EarlyStopping(patience=3)
reduce_on_plateau = ReduceLROnPlateau(patience=3)
# filepath="checkpoint_path/weights-improvement-{epoch:02d}-{val_categorical_accuracy:.2f}.hdf5"
filepath="checkpoint_path/weights-improvement.hdf5"
model_checkpoing = ModelCheckpoint(filepath,
save_best_only=True,
save_weights_only=True)
def create_simple_model():
model = Sequential(name = 'simple_model')
model.add(L.Input(shape = (28,28)))
model.add(L.Flatten())
model.add(L.Dense(100, kernel_initializer='random_normal',name='First',activation='relu'))
model.add(L.Dense(100, kernel_initializer='random_normal',name='Second',activation='relu'))
model.add(L.Dense(10, kernel_initializer = 'random_normal',name='Output',activation='softmax'))
opt = keras.optimizers.Adam(learning_rate=1e-4)
model.compile(optimizer=opt,loss='categorical_crossentropy',
metrics=["categorical_accuracy"])
return model
model = create_simple_model()
history = model.fit(X_train,y_train_ohe,batch_size=500,epochs=20,
validation_data = (X_val,y_val_ohe),
callbacks = [early_stop,reduce_on_plateau,model_checkpoing],verbose=0)
model.load_weights('checkpoint_path/weights-improvement.hdf5')
filepath="checkpoint_path/full_model_improvement.hdf5"
model_checkpoing = ModelCheckpoint(filepath,
save_best_only=True,
save_weights_only=False)
simple_model = create_simple_model()
history = simple_model.fit(X_train,y_train_ohe,batch_size=500,epochs=20,
validation_data = (X_val,y_val_ohe),
callbacks = [early_stop,reduce_on_plateau,model_checkpoing],verbose=0)
simple_model = keras.models.load_model(filepath)
simple_model.evaluate(x=X_val,y=y_val_ohe)
from tensorflow.keras import callbacks
class My_Callback(callbacks.Callback): # My_Callback inherits from the Callback class
    def on_train_begin(self, logs={}): # runs at the start of training
        return
    def on_train_end(self, logs={}): # runs at the end of training
        return
    def on_epoch_begin(self, epoch, logs={}): # at the start of each epoch
        return
    def on_epoch_end(self, epoch, logs={}):
        # at the end of each epoch
        return
    def on_batch_begin(self, batch, logs={}): # at the start of each batch
        return
    def on_batch_end(self, batch, logs={}): # at the end of each batch
        return
from tensorflow.keras import callbacks
class Printlogs(callbacks.Callback):
def on_epoch_end(self, epoch, logs):
if epoch==10:
print(logs)
simple_model = create_simple_model()
our_callback = Printlogs()
history = simple_model.fit(X_train,y_train_ohe,batch_size=500,epochs=20,
validation_data = (X_val,y_val_ohe),
callbacks = [our_callback],
verbose=0)
import numpy as np
# Let's implement a learning-rate schedule
INIT_LR = 0.1
# Step-decay strategy for lowering the learning rate
def lr_scheduler(epoch):
    drop = 0.5
    epochs_drop = 1.0
    lrate = INIT_LR * drop ** np.floor(epoch / epochs_drop)
    return lrate
lrate = LearningRateScheduler(lr_scheduler)
# a class to keep an eye on what's happening
class Print_lr(callbacks.Callback):
    def on_epoch_end(self, epoch, logs):
        print(float(tf.keras.backend.get_value(self.model.optimizer.lr)))
# to set your own LR you would write
# LR_OUR = ....
# tf.keras.backend.set_value(self.model.optimizer.lr, LR_OUR)
simple_model = create_simple_model()
history = simple_model.fit(X_train,y_train_ohe,batch_size=500,epochs=20,
                           validation_data = (X_val,y_val_ohe),
                           callbacks = [lrate,Print_lr()],
                           verbose=0)
# As an exercise, let's make the model compute metrics every epoch,
# evaluating on X_val, y_val
```
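As a standalone illustration, the step-decay schedule above just halves the rate every `epochs_drop` epochs (plain Python, constants taken from the cell above):

```python
import math

INIT_LR = 0.1

def step_decay(epoch, drop=0.5, epochs_drop=1.0):
    # halve the learning rate every `epochs_drop` epochs
    return INIT_LR * drop ** math.floor(epoch / epochs_drop)

print([step_decay(e) for e in range(4)])  # [0.1, 0.05, 0.025, 0.0125]
```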
### Custom losses and metrics
What's already available:
https://keras.io/api/metrics/
https://keras.io/api/losses/
```
celsius = np.array([-40, -10, 0, 8, 15, 22, 38], dtype=float)
fahrenheit = np.array([-40, 14, 32, 46, 59, 72, 100], dtype=float)
model = Sequential()
model.add(L.Dense(1))
opt = tf.keras.optimizers.Adam( )
model.compile(loss='mse', optimizer=opt)
model.fit(celsius, fahrenheit, epochs=3, verbose=1)
def custom_loss_function(y_true, y_pred):
squared_difference = tf.square(y_true - y_pred)
return tf.reduce_mean(squared_difference, axis=-1)
%%time
model = Sequential()
model.add(L.Dense(1))
opt = tf.keras.optimizers.Adam( )
model.compile(loss=custom_loss_function, optimizer=opt)
model.fit(celsius, fahrenheit, epochs=3,verbose=1)
## the same can be done with metrics; you can also track several metrics at once, which is often useful.
## if we want really complex logic, we'll do it through a callback
```
In this notebook we'll play a bit with degrees Celsius and Fahrenheit! Once again we'll try to recover the formula
$$ f = c \times 1.8 + 32 $$
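A quick sanity check of the formula on the arrays above (note the source data is rounded to whole degrees):

```python
import numpy as np

celsius = np.array([-40, -10, 0, 8, 15, 22, 38], dtype=float)
fahrenheit_exact = celsius * 1.8 + 32   # f = c * 1.8 + 32
print(fahrenheit_exact)   # [-40.  14.  32.  46.4  59.  71.6 100.4]
```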
```
## let's take slices
tf.keras.backend.set_floatx('float64')
model = Sequential()
model.add(L.Dense(1,name='our_neural'))
opt = tf.keras.optimizers.Adam(0.1 )
model.compile(loss='mse', optimizer=opt)
model.fit(celsius, fahrenheit, epochs=600, verbose=0)
## an elementary task, but because the data isn't scaled it took forever to converge
our_layer = model.get_layer(name='our_neural')
our_layer.variables
our_layer(np.array(celsius).reshape((7,1))) ## get predictions from the layer
```
Do you see how to determine, in a callback, at which epoch we reached the correct weight values, and to check how our network behaves (given that we know the exact formula)?
And while we're at it, let's quickly test a hypothesis: will batchnorm help us in this situation?
### Writing a neural-net class with TF and Keras together
```
# reshape the samples into column vectors
x_train = celsius[:,None]
y_train = fahrenheit[:,None]
tf.keras.backend.set_floatx('float64')
class Super_puper_neural_net(Model):
def __init__(self, n_hidden_neurons):
super(Super_puper_neural_net, self).__init__()
self.fc1 = L.Dense(n_hidden_neurons, kernel_initializer='glorot_uniform',
activation='relu', trainable=True)
self.fc2 = L.Dense(n_hidden_neurons, kernel_initializer='glorot_uniform',
trainable=True)
def encode(self, x):
x = self.fc1(x)
x = self.fc2(x)
return x
model_super = Super_puper_neural_net(1)
model_super.encode(x_train)
# Loss for the model
def mean_square(y_pred, y_true):
    return tf.reduce_mean((y_pred-y_true)**2)
# optimizer
optimizer = tf.optimizers.SGD(learning_rate=0.001)
# one optimization step
def model_train(X, Y):
    # compute the loss and propagate the gradient
    with tf.GradientTape() as g:
        pred = model_super.encode(X)
        loss = mean_square(pred, Y)
    # compute the gradients
    gradients = g.gradient(loss, model_super.variables)
    # update the weights in one descent iteration
    optimizer.apply_gradients(zip(gradients, model_super.variables))
# Training
epochs = 1000 # number of epochs
for i in range(epochs):
    # take one gradient-descent step
    model_train(x_train, y_train)
    # every 100th iteration, look at what happened
    if i % 100 == 0:
        y_pred = model_super.encode(x_train)
        loss_val = mean_square(y_pred, y_train)
        print("step: %i, loss: %f" % (i, loss_val))
```
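The same train-step pattern can be written with hand-derived gradients in plain NumPy. A sketch for a linear model `y = w*x + b` on the Celsius/Fahrenheit data (names and learning rate are illustrative):

```python
import numpy as np

x = np.array([-40, -10, 0, 8, 15, 22, 38], dtype=float)
y = x * 1.8 + 32
w, b = 0.0, 0.0
lr = 1e-3
for _ in range(20000):
    pred = w * x + b
    err = pred - y
    # gradients of mean squared error w.r.t. w and b
    grad_w = 2 * np.mean(err * x)
    grad_b = 2 * np.mean(err)
    w -= lr * grad_w
    b -= lr * grad_b
print(round(w, 3), round(b, 3))  # converges to 1.8 and 32.0
```

Note how slowly the bias converges here: the unscaled inputs make the problem ill-conditioned, which is exactly the point the notebook makes above.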
# A custom TensorFlow layer for Keras
New layers can be written on top of the Keras `Layer` class. If you run `help(tf.keras.layers.Layer)`, you can read all about it. In short, you need to implement three parts:
* The constructor, where we describe the hyperparameters
* The `build` method, where we define all the variables
* The `call` method, which performs the forward pass
```
class MyLinear(L.Layer):
    # define the constructor
    def __init__(self, units=32):
        super(MyLinear, self).__init__() # so the parent methods are inherited correctly
        self.units = units # number of neurons
    def build(self, input_shape):
        # add_weight inside build is the same as Variable, but Keras-compatible
        self.w = self.add_weight(shape=(input_shape[-1], self.units),
                                 initializer='random_normal',
                                 trainable=True)
        self.b = self.add_weight(shape=(self.units,),
                                 initializer='random_normal',
                                 trainable=True)
    # apply the layer
    def call(self, inputs):
        # do both the linear transform and ReLU at once (why not)
        return tf.nn.relu(tf.matmul(inputs, self.w) + self.b)
model_custom = Sequential(name = 'simple_model')
model_custom.add(L.Input(shape = (28,28)))
model_custom.add(L.Flatten())
model_custom.add(L.Dense(100, kernel_initializer='random_normal',name='First',activation='relu'))
model_custom.add(MyLinear()) ### our gorgeous custom layer
model_custom.add(L.Dense(10, kernel_initializer = 'random_normal',name='Output',activation='softmax'))
opt = keras.optimizers.Adam(learning_rate=1e-4)
model_custom.compile(optimizer=opt,loss='categorical_crossentropy',
metrics=["categorical_accuracy"])
history = model_custom.fit(X_train,y_train_ohe,batch_size=500,epochs=20,
validation_data = (X_val,y_val_ohe),
callbacks = [early_stop,reduce_on_plateau],verbose=1)
```
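What `MyLinear` computes is just `relu(x @ W + b)`. A NumPy check of that forward pass with tiny hypothetical weights:

```python
import numpy as np

def my_linear_forward(x, w, b):
    # same math as MyLinear.call: linear transform followed by ReLU
    return np.maximum(x @ w + b, 0.0)

x = np.array([[1.0, -2.0]])
w = np.array([[1.0, 0.0], [0.0, 1.0]])
b = np.array([0.5, 0.5])
print(my_linear_forward(x, w, b))  # [[1.5 0. ]]
```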
We still owe you an example of how to take a slice of a model and look at the predictions in the middle.
```
model = create_simple_model()
history = model.fit(X_train,y_train_ohe,batch_size=500,epochs=20,
validation_data = (X_val,y_val_ohe),
callbacks = [early_stop,reduce_on_plateau],verbose=0)
# Extract the outputs of the top two layers
layer_outputs = [layer.output for layer in model.layers[1:3]]
# create a model that returns these outputs for a given input
activation_model = Model(inputs=model.input, outputs=layer_outputs)
prediction = activation_model.predict(X_val)
prediction
layer_outputs
```
What didn't make it in today: how to override gradients for your own layers (at the Keras level it's quite painful; if you're doing that, you're hardly writing in a high-level framework anyway).
Also how to work with ready-made pretrained models and fine-tune networks piece by piece. But that's for the next episodes :)
### ENVELOPE SPECTRUM - INNER RACE (Fault Diameter 0.007")
```
import scipy.io as sio
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
import lee_dataset_CWRU
from lee_dataset_CWRU import *
import envelope_spectrum
from envelope_spectrum import *
faultRates = [3.585, 5.415, 1] #[outer, inner, shaft]
Fs = 12000
DE_I1, FE_I1, t_DE_I1, t_FE_I1, RPM_I1, samples_s_DE_I1, samples_s_FE_I1 = lee_dataset('../DataCWRU/105.mat')
DE_I2, FE_I2, t_DE_I2, t_FE_I2, RPM_I2, samples_s_DE_I2, samples_s_FE_I2 = lee_dataset('../DataCWRU/106.mat')
DE_I3, FE_I3, t_DE_I3, t_FE_I3, RPM_I3, samples_s_DE_I3, samples_s_FE_I3 = lee_dataset('../DataCWRU/107.mat')
DE_I4, FE_I4, t_DE_I4, t_FE_I4, RPM_I4, samples_s_DE_I4, samples_s_FE_I4 = lee_dataset('../DataCWRU/108.mat')
fr_I1 = RPM_I1 / 60
BPFI_I1 = 5.4152 * fr_I1
BPFO_I1 = 3.5848 * fr_I1
fr_I2 = RPM_I2 / 60
BPFI_I2 = 5.4152 * fr_I2
BPFO_I2 = 3.5848 * fr_I2
fr_I3 = RPM_I3 / 60
BPFI_I3 = 5.4152 * fr_I3
BPFO_I3 = 3.5848 * fr_I3
fr_I4 = RPM_I4 / 60
BPFI_I4 = 5.4152 * fr_I4
BPFO_I4 = 3.5848 * fr_I4
fSpec_I1, xSpec_I1 = envelope_spectrum2(DE_I1, Fs)
fSpec_I2, xSpec_I2 = envelope_spectrum2(DE_I2, Fs)
fSpec_I3, xSpec_I3 = envelope_spectrum2(DE_I3, Fs)
fSpec_I4, xSpec_I4 = envelope_spectrum2(DE_I4, Fs)
fig, ((ax1, ax2), (ax3, ax4)) = plt.subplots(2, 2)
fig.set_size_inches(14, 10)
ax1.plot(fSpec_I1, xSpec_I1, label = 'Env. spectrum')
ax1.axvline(x = fr_I1, color = 'k', linestyle = '--', lw = 1.5, label = 'fr', alpha = 0.6)
ax1.axvline(x = BPFI_I1, color = 'r', linestyle = '--', lw = 1.5, label = 'BPFI', alpha = 0.6)
ax1.axvline(x = BPFO_I1, color = 'g', linestyle = '--', lw = 1.5, label = 'BPFO', alpha = 0.6)
ax1.set_xlim(0,200)
ax1.set_xlabel('Frequency')
ax1.set_ylabel('Env. spectrum')
ax1.set_title('Inner race. Fault Diameter 0.007", 1797 RPM')
ax1.legend(loc = 2)
ax2.plot(fSpec_I2, xSpec_I2, label = 'Env. spectrum')
ax2.axvline(x = fr_I2, color = 'k', linestyle = '--', lw = 1.5, label = 'fr', alpha = 0.6)
ax2.axvline(x = BPFI_I2, color = 'r', linestyle = '--', lw = 1.5, label = 'BPFI', alpha = 0.6)
ax2.axvline(x = BPFO_I2, color = 'g', linestyle = '--', lw = 1.5, label = 'BPFO', alpha = 0.6)
ax2.set_xlim(0,200)
ax2.legend(loc = 2)
ax2.set_xlabel('Frequency')
ax2.set_ylabel('Env. spectrum')
ax2.set_title('Inner race. Fault Diameter 0.007", 1772 RPM')
ax3.plot(fSpec_I3, xSpec_I3, label = 'Env. spectrum')
ax3.axvline(x = fr_I3, color = 'k', linestyle = '--', lw = 1.5, label = 'fr', alpha = 0.6)
ax3.axvline(x = BPFI_I3, color = 'r', linestyle = '--', lw = 1.5, label = 'BPFI', alpha = 0.6)
ax3.axvline(x = BPFO_I3, color = 'g', linestyle = '--', lw = 1.5, label = 'BPFO', alpha = 0.6)
ax3.set_xlim(0,200)
ax3.legend(loc = 2)
ax3.set_xlabel('Frequency')
ax3.set_ylabel('Env. spectrum')
ax3.set_title('Inner race. Fault Diameter 0.007", 1750 RPM')
ax4.plot(fSpec_I4, xSpec_I4, label = 'Env. spectrum')
ax4.axvline(x = fr_I4, color = 'k', linestyle = '--', lw = 1.5, label = 'fr', alpha = 0.6)
ax4.axvline(x = BPFI_I4, color = 'r', linestyle = '--', lw = 1.5, label = 'BPFI', alpha = 0.6)
ax4.axvline(x = BPFO_I4, color = 'g', linestyle = '--', lw = 1.5, label = 'BPFO', alpha = 0.6)
ax4.set_xlim(0,200)
ax4.legend(loc = 2)
ax4.set_xlabel('Frequency')
ax4.set_ylabel('Env. spectrum')
ax4.set_title('Inner race. Fault Diameter 0.007", 1730 RPM')
clasificacion_inner = pd.DataFrame({'Signal': ['105.mat', '106.mat', '107.mat', '108.mat'],
                                    'State': ['Inner race fault'] * 4,
                                    'Prediction': [clasificacion_envelope(fSpec_I1, xSpec_I1, fr_I1, BPFO_I1, BPFI_I1),
                                                   clasificacion_envelope(fSpec_I2, xSpec_I2, fr_I2, BPFO_I2, BPFI_I2),
                                                   clasificacion_envelope(fSpec_I3, xSpec_I3, fr_I3, BPFO_I3, BPFI_I3),
                                                   clasificacion_envelope(fSpec_I4, xSpec_I4, fr_I4, BPFO_I4, BPFI_I4)]})
clasificacion_inner
```
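The characteristic frequencies marked on the plots follow directly from the shaft speed: `fr = RPM / 60`, and the per-revolution factors 5.4152 (BPFI) and 3.5848 (BPFO) are the ones used in the cell above. A quick check for the 1797 RPM case:

```python
rpm = 1797
fr = rpm / 60          # shaft rotation frequency, Hz
bpfi = 5.4152 * fr     # ball-pass frequency, inner race
bpfo = 3.5848 * fr     # ball-pass frequency, outer race
print(round(fr, 2), round(bpfi, 2), round(bpfo, 2))
```

These are the frequencies at which the dashed BPFI/BPFO lines land in the first panel, which is why a peak near 162 Hz flags an inner-race fault.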
```
# -*- coding: utf-8 -*-
"""
This program trains an EV-GMM.
"""
# the __future__ module keeps the code compatible with Python 2 and 3
from __future__ import division, print_function
# basic modules
import os
import os.path
import time
# for ignoring warnings
import warnings
#warnings.filterwarnings('ignore')
# for file-system manipulation
from shutil import rmtree
import glob
import argparse
# for saving objects
import pickle
# for plotting
%matplotlib inline
import matplotlib.pyplot as plt
import seaborn as sns
sns.set_style("whitegrid")
plt.rcParams['figure.figsize'] = (16, 5)
import librosa.display
# for scientific computing
import numpy as np
from numpy.linalg import norm
from sklearn.decomposition import PCA
from sklearn.mixture import GMM # the GMM class cannot be used after sklearn 0.20.0
import sklearn.mixture
#from sklearn.mixture.gaussian_mixture import _compute_precision_cholesky
from sklearn.preprocessing import StandardScaler
import scipy.sparse
from scipy.signal import firwin, lfilter
# for displaying an audio player
from IPython.display import Audio
# for manipulating audio data
import soundfile as sf
import pyworld as pw
import pysptk
from dtw import dtw
from fastdtw import fastdtw
class WORLD(object):
"""
    WORLD-based speech analyzer and synthesizer.
Ref : https://github.com/k2kobayashi/sprocket/
"""
def __init__(self, fs=16000, fftl=1024, shiftms=5.0, minf0=40.0, maxf0=500.0):
"""
Parameters
----------
fs : int
Sampling frequency
fftl : int
FFT length
shiftms : float
Shift length [ms]
minf0 : float
Floor in F0 estimation
maxf0 : float
            Ceiling in F0 estimation
"""
self.fs = fs
self.fftl = fftl
self.shiftms = shiftms
self.minf0 = minf0
self.maxf0 = maxf0
def analyze(self, x):
"""
        Analyze acoustic features.
Parameters
----------
x : array, shape(`T`)
monoral speech signal in time domain
Returns
----------
f0 : array, shape(`T`)
F0 sequence
sp : array, shape(`T`, `fftl / 2 + 1`)
Spectral envelope sequence
ap : array, shape(`T`, `fftl / 2 + 1`)
aperiodicity sequence
"""
f0, time_axis = pw.harvest(x, self.fs, f0_floor=self.minf0,
f0_ceil=self.maxf0, frame_period=self.shiftms)
sp = pw.cheaptrick(x, f0, time_axis, self.fs, fft_size=self.fftl)
ap = pw.d4c(x, f0, time_axis, self.fs, fft_size=self.fftl)
assert sp.shape == ap.shape
return f0, sp, ap
def analyze_f0(self, x):
"""
Analyze f0.
Parameters
----------
x : array, shape(`T`)
monoral speech signal in time domain
Returns
----------
f0 : array, shape(`T`)
F0 sequence
"""
f0, time_axis = pw.harvest(x, self.fs, f0_floor=self.minf0,
f0_ceil=self.maxf0, frame_period=self.shiftms)
        assert f0.ndim == 1  # f0 has one value per analysis frame, not per sample
return f0
def synthesis(self, f0, sp, ap):
"""
        Re-synthesizes a speech waveform from acoustic features.
Parameters
----------
f0 : array, shape(`T`)
F0 sequence
sp : array, shape(`T`, `fftl / 2 + 1`)
Spectral envelope sequence
ap : array, shape(`T`, `fftl / 2 + 1`)
aperiodicity sequence
"""
return pw.synthesize(f0, sp, ap, self.fs, frame_period=self.shiftms)
class FeatureExtractor(object):
"""
Analyze acoustic features from a waveform.
    This class may use several types of estimator, such as WORLD or STRAIGHT.
Default type is WORLD.
Ref : https://github.com/k2kobayashi/sprocket/
"""
def __init__(self, analyzer='world', fs=16000, fftl=1024,
shiftms=5.0, minf0=50.0, maxf0=500.0):
"""
Parameters
----------
analyzer : str
Analyzer
fs : int
Sampling frequency
fftl : int
FFT length
shiftms : float
Shift length [ms]
minf0 : float
Floor in F0 estimation
maxf0 : float
            Ceiling in F0 estimation
"""
self.analyzer = analyzer
self.fs = fs
self.fftl = fftl
self.shiftms = shiftms
self.minf0 = minf0
self.maxf0 = maxf0
if self.analyzer == 'world':
self.analyzer = WORLD(fs=self.fs, fftl=self.fftl,
minf0=self.minf0, maxf0=self.maxf0, shiftms=self.shiftms)
else:
            raise ValueError('Analyzer Error: unsupported type, see the FeatureExtractor class.')
self._f0 = None
self._sp = None
self._ap = None
def analyze(self, x):
"""
        Analyze acoustic features.
Parameters
----------
x : array, shape(`T`)
monoral speech signal in time domain
Returns
----------
f0 : array, shape(`T`)
F0 sequence
sp : array, shape(`T`, `fftl / 2 + 1`)
Spectral envelope sequence
ap : array, shape(`T`, `fftl / 2 + 1`)
aperiodicity sequence
"""
        self.x = np.array(x, dtype=np.float64)
self._f0, self._sp, self._ap = self.analyzer.analyze(self.x)
# check f0 < 0
self._f0[self._f0 < 0] = 0
if np.sum(self._f0) == 0.0:
print("Warning : F0 values are all zero.")
return self._f0, self._sp, self._ap
def analyze_f0(self, x):
"""
Analyze f0.
Parameters
----------
x : array, shape(`T`)
monoral speech signal in time domain
Returns
----------
f0 : array, shape(`T`)
F0 sequence
"""
        self.x = np.array(x, dtype=np.float64)
self._f0 = self.analyzer.analyze_f0(self.x)
# check f0 < 0
self._f0[self._f0 < 0] = 0
if np.sum(self._f0) == 0.0:
print("Warning : F0 values are all zero.")
return self._f0
def mcep(self, dim=24, alpha=0.42):
"""
Convert mel-cepstrum sequence from spectral envelope.
Parameters
----------
dim : int
mel-cepstrum dimension
alpha : float
parameter of all-path filter
Returns
----------
mcep : array, shape(`T`, `dim + 1`)
mel-cepstrum sequence
"""
self._analyzed_check()
return pysptk.sp2mc(self._sp, dim, alpha)
def codeap(self):
"""
"""
self._analyzed_check()
return pw.code_aperiodicity(self._ap, self.fs)
def npow(self):
"""
Normalized power sequence from spectral envelope.
Returns
----------
npow : vector, shape(`T`, `1`)
Normalized power sequence of the given waveform
"""
self._analyzed_check()
npow = np.apply_along_axis(self._spvec2pow, 1, self._sp)
meanpow = np.mean(npow)
npow = 10.0 * np.log10(npow / meanpow)
return npow
def _spvec2pow(self, specvec):
"""
"""
fftl2 = len(specvec) - 1
fftl = fftl2 * 2
power = specvec[0] + specvec[fftl2]
for k in range(1, fftl2):
power += 2.0 * specvec[k]
power /= fftl
return power
def _analyzed_check(self):
if self._f0 is None and self._sp is None and self._ap is None:
            raise RuntimeError('Call FeatureExtractor.analyze() before this method.')
class Synthesizer(object):
"""
Synthesize a waveform from acoustic features.
Ref : https://github.com/k2kobayashi/sprocket/
"""
def __init__(self, fs=16000, fftl=1024, shiftms=5.0):
"""
Parameters
----------
fs : int
Sampling frequency
fftl : int
FFT length
shiftms : float
Shift length [ms]
"""
self.fs = fs
self.fftl = fftl
self.shiftms = shiftms
def synthesis(self, f0, mcep, ap, rmcep=None, alpha=0.42):
"""
        Re-synthesizes a speech waveform from acoustic features.
Parameters
----------
f0 : array, shape(`T`)
F0 sequence
mcep : array, shape(`T`, `dim`)
mel-cepstrum sequence
ap : array, shape(`T`, `fftl / 2 + 1`)
aperiodicity sequence
rmcep : array, shape(`T`, `dim`)
array of reference mel-cepstrum sequence
alpha : float
parameter of all-path filter
Returns
----------
wav : array,
            synthesized waveform
"""
if rmcep is not None:
# power modification
mcep = mod_power(mcep, rmcep, alpha=alpha)
sp = pysptk.mc2sp(mcep, alpha, self.fftl)
wav = pw.synthesize(f0, sp, ap, self.fs, frame_period=self.shiftms)
return wav
def synthesis_diff(self, x, diffmcep, rmcep=None, alpha=0.42):
"""
        Re-synthesizes a speech waveform from acoustic features,
filtering with a differential mel-cepstrum.
Parameters
----------
x : array, shape(`samples`)
array of waveform sequence
diffmcep : array, shape(`T`, `dim`)
array of differential mel-cepstrum sequence
rmcep : array, shape(`T`, `dim`)
array of reference mel-cepstrum sequence
alpha : float
parameter of all-path filter
Returns
----------
wav : array,
            synthesized waveform
"""
x = x.astype(np.float64)
dim = diffmcep.shape[1] - 1
shiftl = int(self.fs / 1000 * self.shiftms)
if rmcep is not None:
# power modification
diffmcep = mod_power(rmcep + diffmcep, rmcep, alpha=alpha) - rmcep
# mc2b = transform mel-cepstrum to MLSA digital filter coefficients.
b = np.apply_along_axis(pysptk.mc2b, 1, diffmcep, alpha)
mlsa_fil = pysptk.synthesis.Synthesizer(pysptk.synthesis.MLSADF(dim, alpha=alpha),
shiftl)
wav = mlsa_fil.synthesis(x, b)
return wav
def synthesis_sp(self, f0, sp, ap):
"""
        Re-synthesizes a speech waveform from acoustic features.
Parameters
----------
f0 : array, shape(`T`)
F0 sequence
spc : array, shape(`T`, `dim`)
mel-cepstrum sequence
ap : array, shape(`T`, `fftl / 2 + 1`)
aperiodicity sequence
Returns
----------
wav : array,
            synthesized waveform
"""
wav = pw.synthesize(f0, sp, ap, self.fs, frame_period=self.shiftms)
return wav
def mod_power(cvmcep, rmcep, alpha=0.42, irlen=256):
"""
    power modification based on impulse response
Parameters
----------
cvmcep : array, shape(`T`, `dim`)
array of converted mel-cepstrum
    rmcep : array, shape(`T`, `dim`)
array of reference mel-cepstrum
alpha : float
parameter of all-path filter
irlen : int
Length for IIR filter
Returns
----------
modified_cvmcep : array, shape(`T`, `dim`)
array of power modified converted mel-cepstrum
"""
    if rmcep.shape != cvmcep.shape:
        raise ValueError(
            "The shapes of the converted and reference mel-cepstrum differ: {} / {}".format(cvmcep.shape, rmcep.shape)
        )
# mc2e = Compute energy from mel-cepstrum. e-option
cv_e = pysptk.mc2e(cvmcep, alpha=alpha, irlen=irlen)
r_e = pysptk.mc2e(rmcep, alpha=alpha, irlen=irlen)
dpow = np.log(r_e / cv_e) / 2
modified_cvmcep = np.copy(cvmcep)
modified_cvmcep[:, 0] += dpow
return modified_cvmcep
# def util methods
def melcd(array1, array2):
"""
calculate mel-cepstrum distortion
Parameters
----------
array1, array2 : array, shape(`T`, `dim`) or shape(`dim`)
Array of original and target.
Returns
----------
    mcd : scalar, number > 0
        Scalar mel-cepstrum distortion
"""
    if array1.shape != array2.shape:
        raise ValueError(
            "The shapes of the two arrays differ: {} / {}".format(array1.shape, array2.shape)
        )
if array1.ndim == 2:
diff = array1 - array2
mcd = 10.0 / np.log(10) * np.mean(np.sqrt(2.0 * np.sum(diff ** 2, axis=1)))
elif array1.ndim == 1:
diff = array1 - array2
mcd = 10.0 / np.log(10) * np.sqrt(2.0 * np.sum(diff ** 2))
else:
raise ValueError("Dimension mismatch.")
return mcd
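# A quick sanity check (illustrative values) of the mel-cepstrum distortion
# formula used by melcd above: for 1-D inputs it is
# 10 / ln(10) * sqrt(2 * sum(diff^2)), so identical vectors give 0.
import numpy as np
_a = np.array([1.0, 2.0, 3.0])
_b = np.array([1.0, 2.0, 4.0])
_mcd = 10.0 / np.log(10) * np.sqrt(2.0 * np.sum((_a - _b) ** 2))
print(_mcd)  # about 6.14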
def delta(data, win=[-1.0, 1.0, 0]):
"""
calculate delta component
Parameters
----------
data : array, shape(`T`, `dim`)
Array of static matrix sequence.
win : array, shape(`3`)
The shape of window matrix.
Returns
----------
delta : array, shape(`T`, `dim`)
Array of delta matrix sequence.
"""
if data.ndim == 1:
# change vector into 1d-array
T = len(data)
dim = data.ndim
data = data.reshape(T, dim)
else:
T, dim = data.shape
win = np.array(win, dtype=np.float64)
delta = np.zeros((T, dim))
delta[0] = win[0] * data[0] + win[1] * data[1]
delta[-1] = win[0] * data[-2] + win[1] * data[-1]
    for i in range(len(win)):
        delta[1:T - 1] += win[i] * data[i:T - 2 + i]
return delta
def static_delta(data, win=[-1.0, 1.0, 0]):
"""
calculate static and delta component
Parameters
----------
data : array, shape(`T`, `dim`)
Array of static matrix sequence.
win : array, shape(`3`)
The shape of window matrix.
Returns
----------
sddata : array, shape(`T`, `dim * 2`)
Array of static and delta matrix sequence.
"""
sddata = np.c_[data, delta(data, win)]
assert sddata.shape[1] == data.shape[1] * 2
return sddata
def construct_static_and_delta_matrix(T, D, win=[-1.0, 1.0, 0]):
"""
calculate static and delta transformation matrix
Parameters
----------
    T : scalar, `T`
        Scalar time length
    D : scalar, `D`
        Scalar number of dimensions.
win : array, shape(`3`)
The shape of window matrix.
Returns
----------
W : array, shape(`2 * D * T`, `D * T`)
Array of static and delta transformation matrix.
"""
static = [0, 1, 0]
delta = win
assert len(static) == len(delta)
# generate full W
DT = D * T
ones = np.ones(DT)
row = np.arange(2 * DT).reshape(2 * T, D) # generate serial numbers
static_row = row[::2] # [1,2,3,4,5] => [1,3,5]
delta_row = row[1::2] # [1,2,3,4,5] => [2,4]
col = np.arange(DT)
data = np.array([ones * static[0], ones * static[1],
ones * static[2], ones * delta[0],
ones * delta[1], ones * delta[2]]).flatten()
row = np.array([[static_row] * 3, [delta_row] * 3]).flatten()
col = np.array([[col - D, col, col + D] * 2]).flatten()
# remove component at first and end frame
valid_idx = np.logical_not(np.logical_or(col < 0, col >= DT))
W = scipy.sparse.csr_matrix(
(data[valid_idx], (row[valid_idx], col[valid_idx])), shape=(2 * DT, DT))
W.eliminate_zeros()
return W
def extfrm(data, npow, power_threshold=-20):
"""
Extract frame over the power threshold
Parameters
----------
data : array, shape(`T`, `dim`)
array of input data
npow : array, shape(`T`)
vector of normalized power sequence
    power_threshold : scalar
        scalar power threshold [dB]
Returns
----------
data : array, shape(`T_ext`, `dim`)
remaining data after extracting frame
`T_ext` <= `T`
"""
T = data.shape[0]
if T != len(npow):
        raise ValueError("The lengths of the two vectors differ.")
valid_index = np.where(npow > power_threshold)
extdata = data[valid_index]
assert extdata.shape[0] <= T
return extdata
def estimate_twf(orgdata, tardata, distance='melcd', fast=True, otflag=None):
"""
time warping function estimator
Parameters
----------
orgdata : array, shape(`T_org`, `dim`)
array of source feature
tardata : array, shape(`T_tar`, `dim`)
array of target feature
distance : str
distance function
fast : bool
use fastdtw instead of dtw
otflag : str
Alignment into the length of specification
'org' : alignment into original length
'tar' : alignment into target length
Returns
----------
twf : array, shape(`2`, `T`)
time warping function between original and target
"""
if distance == 'melcd':
def distance_func(x, y): return melcd(x, y)
else:
raise ValueError('this distance method is not support.')
if fast:
_, path = fastdtw(orgdata, tardata, dist=distance_func)
twf = np.array(path).T
else:
_, _, _, twf = dtw(orgdata, tardata, distance_func)
if otflag is not None:
twf = modify_twf(twf, otflag=otflag)
return twf
def align_data(org_data, tar_data, twf):
"""
get aligned joint feature vector
Parameters
----------
org_data : array, shape(`T_org`, `dim_org`)
Acoustic feature vector of original speaker
tar_data : array, shape(`T_tar`, `dim_tar`)
Acoustic feature vector of target speaker
twf : array, shape(`2`, `T`)
time warping function between original and target
Returns
----------
jdata : array, shape(`T_new`, `dim_org + dim_tar`)
Joint feature vector between source and target
"""
jdata = np.c_[org_data[twf[0]], tar_data[twf[1]]]
return jdata
def modify_twf(twf, otflag=None):
"""
align specified length
Parameters
----------
twf : array, shape(`2`, `T`)
time warping function between original and target
otflag : str
Alignment into the length of specification
'org' : alignment into original length
'tar' : alignment into target length
Returns
----------
mod_twf : array, shape(`2`, `T_new`)
time warping function of modified alignment
"""
if otflag == 'org':
of, indice = np.unique(twf[0], return_index=True)
mod_twf = np.c_[of, twf[1][indice]].T
elif otflag == 'tar':
tf, indice = np.unique(twf[1], return_index=True)
mod_twf = np.c_[twf[0][indice], tf].T
return mod_twf
def low_cut_filter(x, fs, cutoff=70):
"""
low cut filter
Parameters
----------
x : array, shape('samples')
waveform sequence
fs : array, int
Sampling frequency
cutoff : float
cutoff frequency of low cut filter
Returns
----------
lct_x : array, shape('samples')
Low cut filtered waveform sequence
"""
nyquist = fs // 2
norm_cutoff = cutoff / nyquist
# low cut filter
fil = firwin(255, norm_cutoff, pass_zero=False)
lct_x = lfilter(fil, 1, x)
return lct_x
def extsddata(data, npow, power_threshold=-20):
"""
get power extract static and delta feature vector
Parameters
----------
data : array, shape(`T`, `dim`)
acoustic feature vector
npow : array, shape(`T`)
normalized power vector
power_threshold : float
power threshold
Returns
----------
extsddata : array, shape(`T_new`, `dim * 2`)
silence remove static and delta feature vector
"""
extsddata = extfrm(static_delta(data), npow, power_threshold=power_threshold)
return extsddata
def transform_jnt(array_list):
num_files = len(array_list)
for i in range(num_files):
if i == 0:
jnt = array_list[i]
else:
jnt = np.r_[jnt, array_list[i]]
return jnt
class F0statistics(object):
"""
Estimate F0 statistics and convert F0
"""
def __init__(self):
pass
def estimate(self, f0list):
"""
estimate F0 statistics from list of f0
Parameters
----------
f0list : list, shape(`f0num`)
List of several F0 sequence
Returns
----------
f0stats : array, shape(`[mean, std]`)
values of mean and standard deviation for log f0
"""
n_files = len(f0list)
for i in range(n_files):
f0 = f0list[i]
nonzero_indices = np.nonzero(f0)
if i == 0:
f0s = np.log(f0[nonzero_indices])
else:
f0s = np.r_[f0s, np.log(f0[nonzero_indices])]
f0stats = np.array([np.mean(f0s), np.std(f0s)])
return f0stats
def convert(self, f0, orgf0stats, tarf0stats):
"""
convert F0 based on F0 statistics
Parameters
----------
f0 : array, shape(`T`, `1`)
array of F0 sequence
orgf0stats : array, shape(`[mean, std]`)
vectors of mean and standard deviation of log f0 for original speaker
tarf0stats : array, shape(`[mean, std]`)
vectors of mean and standard deviation of log f0 for target speaker
Returns
----------
cvf0 : array, shape(`T`, `1`)
array of converted F0 sequence
"""
# get length and dimension
T = len(f0)
# perform f0 conversion
cvf0 = np.zeros(T)
nonzero_indices = f0 > 0
cvf0[nonzero_indices] = np.exp((tarf0stats[1] / orgf0stats[1]) * (np.log(f0[nonzero_indices]) - orgf0stats[0]) + tarf0stats[0])
return cvf0
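# ---- quick sanity check (sketch, toy numbers) ----------------------------
# The conversion above is linear in the log-F0 domain: a frame exactly at the
# source mean F0 maps exactly onto the target mean F0, and unvoiced frames
# (f0 == 0) stay zero. The statistics below are hypothetical, not estimated.
import math
_org_mean, _org_std = math.log(120.0), 0.2
_tar_mean, _tar_std = math.log(220.0), 0.3
_f0 = [0.0, 120.0]
_cvf0 = [0.0 if v <= 0 else
         math.exp((_tar_std / _org_std) * (math.log(v) - _org_mean) + _tar_mean)
         for v in _f0]
assert abs(_cvf0[1] - 220.0) < 1e-9 and _cvf0[0] == 0.0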
class GV(object):
"""
Estimate statistics and perform postfilter based on the GV statistics.
"""
def __init__(self):
pass
def estimate(self, datalist):
"""
estimate GV statistics from list of data
Parameters
----------
datalist : list, shape(`num_data`)
List of several data ([T, dim]) sequence
Returns
----------
gvstats : array, shape(`2`, `dim`)
array of mean and standard deviation for GV
"""
n_files = len(datalist)
var = []
for i in range(n_files):
data = datalist[i]
var.append(np.var(data, axis=0))
# calculate vm and vv
vm = np.mean(np.array(var), axis=0)
vv = np.var(np.array(var), axis=0)
gvstats = np.r_[vm, vv]
gvstats = gvstats.reshape(2, len(vm))
return gvstats
def postfilter(self, data, gvstats, cvgvstats=None, alpha=1.0, startdim=1):
"""
apply a postfilter based on GV statistics to data
Parameters
----------
data : array, shape(`T`, `dim`)
array of data sequence
gvstats : array, shape(`2`, `dim`)
array of mean and variance for target GV
cvgvstats : array, shape(`2`, `dim`)
array of mean and variance for converted GV
alpha : float
morphing coefficient between GV transformed data and data.
alpha * gvpf(data) + (1 - alpha) * data
startdim : int
start dimension to perform GV postfilter
Returns
----------
filtered_data : array, shape(`T`, `dim`)
array of GV postfiltered data sequence
"""
# get length and dimension
T, dim = data.shape
assert gvstats is not None
assert dim == gvstats.shape[1]
# calculate statistics of input data
datamean = np.mean(data, axis=0)
if cvgvstats is None:
# use variance of the given data
datavar = np.var(data, axis=0)
else:
# use variance of trained gv stats
datavar = cvgvstats[0]
# perform GV postfilter
filtered = np.sqrt(gvstats[0, startdim:] / datavar[startdim:]) * (data[:, startdim:] - datamean[startdim:]) + datamean[startdim:]
filtered_data = np.c_[data[:, :startdim], filtered]
return alpha * filtered_data + (1 - alpha) * data
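# ---- quick sanity check (sketch, toy data) -------------------------------
# With alpha=1 the postfilter above rescales a dimension so its variance
# matches the target GV mean while leaving its mean unchanged. The target GV
# value used here is hypothetical.
import math, random, statistics
random.seed(0)
_x = [random.gauss(0.0, 0.5) for _ in range(200)]   # one over-smoothed dimension
_m = statistics.fmean(_x)
_v = statistics.pvariance(_x)
_target_gv = 1.0                                    # hypothetical target GV mean
_pf = [math.sqrt(_target_gv / _v) * (xi - _m) + _m for xi in _x]
assert abs(statistics.pvariance(_pf) - _target_gv) < 1e-9
assert abs(statistics.fmean(_pf) - _m) < 1e-9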
# 0. config path
__versions = "pre-stored-jp"
__same_path = "./utterance/" + __versions + "/"
pre_stored_source_list = __same_path + 'pre-source/**/V01/T01/**/*.wav'
pre_stored_list = __same_path + "pre/**/V01/T01/**/*.wav"
output_path = __same_path + "output/"
# 1. estimate features
feat = FeatureExtractor()
synthesizer = Synthesizer()
org_f0list = None
org_splist = None
org_mceplist = None
org_aplist = None
org_npowlist = None
org_codeaplist = None
if os.path.exists(output_path + "_org_f0.pickle") \
and os.path.exists(output_path + "_org_sp.pickle") \
and os.path.exists(output_path + "_org_ap.pickle") \
and os.path.exists(output_path + "_org_mcep.pickle") \
and os.path.exists(output_path + "_org_npow.pickle") \
and os.path.exists(output_path + "_org_codeap.pickle"):
with open(output_path + "_org_f0.pickle", 'rb') as f:
org_f0list = pickle.load(f)
with open(output_path + "_org_sp.pickle", 'rb') as f:
org_splist = pickle.load(f)
with open(output_path + "_org_ap.pickle", 'rb') as f:
org_aplist = pickle.load(f)
with open(output_path + "_org_mcep.pickle", 'rb') as f:
org_mceplist = pickle.load(f)
with open(output_path + "_org_npow.pickle", 'rb') as f:
org_npowlist = pickle.load(f)
with open(output_path + "_org_codeap.pickle", 'rb') as f:
org_codeaplist = pickle.load(f)
else:
org_f0list = []
org_splist = []
org_mceplist = []
org_aplist = []
org_npowlist = []
org_codeaplist = []
for files in sorted(glob.iglob(pre_stored_source_list, recursive=True)):
wavf = files
x, fs = sf.read(wavf)
x = np.array(x, dtype=np.float64)
x = low_cut_filter(x, fs, cutoff=70)
assert fs == 16000
print("extract acoustic features: " + wavf)
f0, sp, ap = feat.analyze(x)
mcep = feat.mcep()
npow = feat.npow()
codeap = feat.codeap()
#name, ext = os.path.splitext(wavf)
#np.save(name + "_or_f0", f0)
#np.save(name + "_or_sp", sp)
#np.save(name + "_or_ap", ap)
#np.save(name + "_or_mcep", mcep)
#np.save(name + "_or_codeap", codeap)
org_f0list.append(f0)
org_splist.append(sp)
org_mceplist.append(mcep)
org_aplist.append(ap)
org_npowlist.append(npow)
org_codeaplist.append(codeap)
#wav = synthesizer.synthesis(f0, mcep, ap)
#wav = np.clip(wav, -32768, 32767)
#sf.write(name + "_ansys.wav", wav, fs)
with open(output_path + "_org_f0.pickle", 'wb') as f:
pickle.dump(org_f0list, f)
with open(output_path + "_org_sp.pickle", 'wb') as f:
pickle.dump(org_splist, f)
with open(output_path + "_org_npow.pickle", 'wb') as f:
pickle.dump(org_npowlist, f)
with open(output_path + "_org_ap.pickle", 'wb') as f:
pickle.dump(org_aplist, f)
with open(output_path + "_org_mcep.pickle", 'wb') as f:
pickle.dump(org_mceplist, f)
with open(output_path + "_org_codeap.pickle", 'wb') as f:
pickle.dump(org_codeaplist, f)
mid_f0list = None
mid_mceplist = None
mid_aplist = None
mid_npowlist = None
mid_splist = None
mid_codeaplist = None
if os.path.exists(output_path + "_mid_f0.pickle") \
and os.path.exists(output_path + "_mid_sp_0_.pickle") \
and os.path.exists(output_path + "_mid_ap_0_.pickle") \
and os.path.exists(output_path + "_mid_mcep.pickle") \
and os.path.exists(output_path + "_mid_npow.pickle") \
and os.path.exists(output_path + "_mid_codeap.pickle"):
with open(output_path + "_mid_f0.pickle", 'rb') as f:
mid_f0list = pickle.load(f)
for i in range(0, len(org_splist)*21, len(org_splist)):
with open(output_path + "_mid_sp_{}_.pickle".format(i), 'rb') as f:
temp_splist = pickle.load(f)
if mid_splist is None:
mid_splist = temp_splist
else:
mid_splist = mid_splist + temp_splist
for i in range(0, len(org_aplist)*21, len(org_aplist)):
with open(output_path + "_mid_ap_{}_.pickle".format(i), 'rb') as f:
temp_aplist = pickle.load(f)
if mid_aplist is None:
mid_aplist = temp_aplist
else:
mid_aplist = mid_aplist + temp_aplist
with open(output_path + "_mid_mcep.pickle", 'rb') as f:
mid_mceplist = pickle.load(f)
with open(output_path + "_mid_npow.pickle", 'rb') as f:
mid_npowlist = pickle.load(f)
with open(output_path + "_mid_codeap.pickle", 'rb') as f:
mid_codeaplist = pickle.load(f)
else:
mid_f0list = []
mid_mceplist = []
mid_aplist = []
mid_npowlist = []
mid_splist = []
mid_codeaplist = []
for files in sorted(glob.iglob(pre_stored_list, recursive=True)):
wavf = files
x, fs = sf.read(wavf)
x = np.array(x, dtype=np.float64)
x = low_cut_filter(x, fs, cutoff=70)
assert fs == 16000
print("extract acoustic features: " + wavf)
f0, sp, ap = feat.analyze(x)
mcep = feat.mcep()
npow = feat.npow()
codeap = feat.codeap()
name, ext = os.path.splitext(wavf)
#np.save(name + "_or_f0", f0)
#np.save(name + "_or_sp", sp)
#np.save(name + "_or_ap", ap)
#np.save(name + "_or_mcep", mcep)
#np.save(name + "_or_codeap", codeap)
mid_f0list.append(f0)
mid_splist.append(sp)
mid_mceplist.append(mcep)
mid_aplist.append(ap)
mid_npowlist.append(npow)
mid_codeaplist.append(codeap)
#wav = synthesizer.synthesis(f0, mcep, ap)
#wav = np.clip(wav, -32768, 32767)
#sf.write(name + "_ansys.wav", wav, fs)
with open(output_path + "_mid_f0.pickle", 'wb') as f:
print(f)
pickle.dump(mid_f0list, f)
with open(output_path + "_mid_npow.pickle", 'wb') as f:
print(f)
pickle.dump(mid_npowlist, f)
for i in range(0, len(mid_splist), len(org_splist)):
with open(output_path + "_mid_sp_{}_.pickle".format(i), 'wb') as f:
print(f)
pickle.dump(mid_splist[i:i+len(org_splist)], f)
for i in range(0, len(mid_aplist), len(org_aplist)):
with open(output_path + "_mid_ap_{}_.pickle".format(i), 'wb') as f:
print(f)
pickle.dump(mid_aplist[i:i+len(org_aplist)], f)
with open(output_path + "_mid_mcep.pickle", 'wb') as f:
print(f)
pickle.dump(mid_mceplist, f)
with open(output_path + "_mid_codeap.pickle", 'wb') as f:
print(f)
pickle.dump(mid_codeaplist, f)
class GMMTrainer(object):
"""
This class offers training of a GMM with several types of covariance matrix.
Parameters
----------
n_mix : int
the number of mixture components of the GMM
n_iter : int
the number of iterations of the EM algorithm
covtype : str
the type of covariance matrix of the GMM
'full': full-covariance matrix
Attributes
---------
param :
sklearn-based model parameters of the GMM
"""
def __init__(self, n_mix=64, n_iter=100, covtype='full', params='wmc'):
self.n_mix = n_mix
self.n_iter = n_iter
self.covtype = covtype
self.params = params
self.param = sklearn.mixture.GMM(n_components=self.n_mix,
covariance_type=self.covtype,
n_iter=self.n_iter, params=self.params)
def train(self, jnt):
"""
fit GMM parameters from the given joint feature vectors
Parameters
---------
jnt : array, shape(`T`, `jnt_dim`)
joint feature vectors of original and target, consisting of static and delta components
"""
if self.covtype == 'full':
self.param.fit(jnt)
return
class GMMConvertor(object):
"""
This class offers several conversion techniques such as Maximum Likelihood Parameter Generation (MLPG)
and Minimum Mean Square Error (MMSE).
Parameters
---------
n_mix : int
the number of mixture components of the GMM
covtype : str
the type of covariance matrix of the GMM
'full': full-covariance matrix
gmmmode : str
the type of the GMM for opening
`None` : Normal Joint Density - GMM (JD-GMM)
Attributes
---------
param :
sklearn-based model parameters of the GMM
w : shape(`n_mix`)
vector of mixture component weight of the GMM
jmean : shape(`n_mix`, `jnt.shape[1]`)
array of joint mean vectors of the GMM
jcov : shape(`n_mix`, `jnt.shape[1]`, `jnt.shape[1]`)
array of joint covariance matrices of the GMM
"""
def __init__(self, n_mix=64, covtype='full', gmmmode=None):
self.n_mix = n_mix
self.covtype = covtype
self.gmmmode = gmmmode
def open_from_param(self, param):
"""
open GMM from GMMTrainer
Parameters
----------
param : GMMTrainer
GMMTrainer class
"""
self.param = param
self._deploy_parameters()
return
def convert(self, data, cvtype='mlpg'):
"""
convert data based on conditional probability density function
Parameters
---------
data : array, shape(`T`, `dim`)
original data will be converted
cvtype : str
type of conversion technique
`mlpg` : maximum likelihood parameter generation
Returns
----------
odata : array, shape(`T`, `dim`)
converted data
"""
# estimate parameter sequence
cseq, wseq, mseq, covseq = self._gmmmap(data)
if cvtype == 'mlpg':
odata = self._mlpg(mseq, covseq)
else:
raise ValueError('please choose conversion mode in `mlpg`.')
return odata
def _gmmmap(self, sddata):
# parameters for sequential data
T, sddim = sddata.shape
# estimate posterior sequence
wseq = self.pX.predict_proba(sddata)
# estimate mixture sequence
cseq = np.argmax(wseq, axis=1)
mseq = np.zeros((T, sddim))
covseq = np.zeros((T, sddim, sddim))
for t in range(T):
# read maximum likelihood mixture component in frame t
m = cseq[t]
# conditional mean vector sequence
mseq[t] = self.meanY[m] + self.A[m] @ (sddata[t] - self.meanX[m])
# conditional covariance sequence
covseq[t] = self.cond_cov_inv[m]
return cseq, wseq, mseq, covseq
def _mlpg(self, mseq, covseq):
# parameters for sequential data
T, sddim = mseq.shape
# prepare W
W = construct_static_and_delta_matrix(T, sddim // 2)
# prepare D
D = get_diagonal_precision_matrix(T, sddim, covseq)
# calculate W'D
WD = W.T @ D
# W'DW
WDW = WD @ W
# W'Dm
WDM = WD @ mseq.flatten()
# estimate y = (W'DW)^-1 * W'Dm
odata = scipy.sparse.linalg.spsolve(WDW, WDM, use_umfpack=False).reshape(T, sddim // 2)
return odata
def _deploy_parameters(self):
# read JD-GMM parameters from self.param
self.W = self.param.weights_
self.jmean = self.param.means_
self.jcov = self.param.covars_
# divide GMM parameters into source and target parameters
sddim = self.jmean.shape[1] // 2
self.meanX = self.jmean[:, 0:sddim]
self.meanY = self.jmean[:, sddim:]
self.covXX = self.jcov[:, :sddim, :sddim]
self.covXY = self.jcov[:, :sddim, sddim:]
self.covYX = self.jcov[:, sddim:, :sddim]
self.covYY = self.jcov[:, sddim:, sddim:]
# change model parameter of GMM into that of gmmmode
if self.gmmmode is None:
pass
else:
raise ValueError('please choose GMM mode in [None]')
# estimate parameters for conversion
self._set_Ab()
self._set_pX()
return
def _set_Ab(self):
# calculate A and b from self.jmean, self.jcov
sddim = self.jmean.shape[1] // 2
# calculate inverse covariance for covariance XX in each mixture
self.covXXinv = np.zeros((self.n_mix, sddim, sddim))
for m in range(self.n_mix):
self.covXXinv[m] = np.linalg.inv(self.covXX[m])
# calculate A, b, and conditional covariance given X
self.A = np.zeros((self.n_mix, sddim, sddim))
self.b = np.zeros((self.n_mix, sddim))
self.cond_cov_inv = np.zeros((self.n_mix, sddim, sddim))
for m in range(self.n_mix):
# calculate A (A = yxcov_m * xxcov_m^-1)
self.A[m] = self.covYX[m] @ self.covXXinv[m]
# calculate b (b = mean^Y - A * mean^X)
self.b[m] = self.meanY[m] - self.A[m] @ self.meanX[m]
# calculate conditional covariance (cov^(Y|X)^-1 = (yycov - A * xycov)^-1)
self.cond_cov_inv[m] = np.linalg.inv(self.covYY[m] - self.A[m] @ self.covXY[m])
return
def _set_pX(self):
# probability density function of X
self.pX = sklearn.mixture.GMM(n_components=self.n_mix, covariance_type=self.covtype)
self.pX.weights_ = self.W
self.pX.means_ = self.meanX
self.pX.covariances_ = self.covXX
# the following is required to estimate the posterior
# p(x | \lambda^(X))
self.pX.precisions_cholesky_ = _compute_precision_cholesky(self.covXX, self.covtype)
return
def get_diagonal_precision_matrix(T, D, covseq):
return scipy.sparse.block_diag(covseq, format='csr')
def get_alignment(odata, onpow, tdata, tnpow, opow=-20, tpow=-20, sd=0, cvdata=None, given_twf=None, otflag=None, distance='melcd'):
"""
get alignment between original and target.
Parameters
----------
odata : array, shape(`T`, `dim`)
acoustic feature vector of original
onpow : array, shape(`T`)
Normalized power vector of original
tdata : array, shape(`T`, `dim`)
acoustic feature vector of target
tnpow : array, shape(`T`)
Normalized power vector of target
opow : float
power threshold of original
tpow : float
power threshold of target
sd : int
start dimension to be used for alignment
cvdata : array, shape(`T`, `dim`)
converted original data
given_twf : array, shape(`T_new`, `dim * 2`)
Alignment given twf
otflag : str
side whose length the alignment is mapped onto
'org' : align to the original length
'tar' : align to the target length
distance : str
Distance function to be used
Returns
----------
jdata : array, shape(`T_new`, `dim * 2`)
joint static and delta feature vector
twf : array, shape(`2`, `T_new`)
time warping function
mcd : float
Mel-cepstrum distortion between arrays
"""
oexdata = extsddata(odata[:, sd:], onpow, power_threshold=opow)
texdata = extsddata(tdata[:, sd:], tnpow, power_threshold=tpow)
if cvdata is None:
align_odata = oexdata
else:
cvexdata = extsddata(cvdata, onpow, power_threshold=opow)
align_odata = cvexdata
if given_twf is None:
twf = estimate_twf(align_odata, texdata, distance=distance, otflag=otflag)
else:
twf = given_twf
jdata = align_data(oexdata, texdata, twf)
mcd = melcd(align_odata[twf[0]], texdata[twf[1]])
return jdata, twf, mcd
def align_feature_vectors(odata, onpows, tdata, tnpows, opow=-100, tpow=-100, itnum=3, sd=0, given_twfs=None, otflag=None):
"""
get alignment to create joint feature vector
Parameters
----------
odata : list, (`num_files`)
List of original feature vectors
onpows : list, (`num_files`)
List of original npows
tdata : list, (`num_files`)
List of target feature vectors
tnpows : list, (`num_files`)
List of target npows
opow : float
power threshold of original
tpow : float
power threshold of target
itnum : int
the number of iteration
sd : int
start dimension of feature vector to be used for alignment
given_twfs : list, (`num_files`)
use given alignments during the 1st iteration
otflag : str
side whose length the alignment is mapped onto
'org' : align to the original length
'tar' : align to the target length
Returns
----------
jfvs : list, (`num_files`)
list of joint static and delta feature vectors
twfs : list, (`num_files`)
list of time warping functions
"""
it = 1
num_files = len(odata)
cvgmm, cvdata = None, None
for it in range(1, itnum+1):
print('{}-th joint feature extraction starts.'.format(it))
# alignment
twfs, jfvs = [], []
for i in range(num_files):
if it == 1 and given_twfs is not None:
gtwf = given_twfs[i]
else:
gtwf = None
if it > 1:
cvdata = cvgmm.convert(static_delta(odata[i][:, sd:]))
jdata, twf, mcd = get_alignment(odata[i], onpows[i], tdata[i], tnpows[i], opow=opow, tpow=tpow,
sd=sd, cvdata=cvdata, given_twf=gtwf, otflag=otflag)
twfs.append(twf)
jfvs.append(jdata)
print('distortion [dB] for {}-th file: {}'.format(i+1, mcd))
jnt_data = transform_jnt(jfvs)
if it != itnum:
# train GMM, if not final iteration
datagmm = GMMTrainer()
datagmm.train(jnt_data)
cvgmm = GMMConvertor()
cvgmm.open_from_param(datagmm.param)
it += 1
return jfvs, twfs
# 2. estimate twf and jnt
if os.path.exists(output_path + "_jnt_mcep_0_.pickle") \
and os.path.exists(output_path + "_jnt_codeap_0_.pickle"):
pass
else:
for i in range(0, len(mid_mceplist), len(org_mceplist)):
org_mceps = org_mceplist
org_npows = org_npowlist
mid_mceps = mid_mceplist[i:i+len(org_mceps)]
mid_npows = mid_npowlist[i:i+len(org_npows)]
assert len(org_mceps) == len(mid_mceps)
assert len(org_npows) == len(mid_npows)
assert len(org_mceps) == len(org_npows)
# DTW between original and target, excluding the 0-th (power) dimension and silence frames
print("## alignment mcep 0-th and silence ##")
jmceps, twfs = align_feature_vectors(org_mceps, org_npows, mid_mceps, mid_npows, opow=-100, tpow=-100, sd=1)
jnt_mcep = transform_jnt(jmceps)
# save joint feature vectors
with open(output_path + "_jnt_mcep_{}_.pickle".format(i), 'wb') as f:
print(f)
pickle.dump(jnt_mcep, f)
# 3. make EV-GMM
initgmm = None
if os.path.exists(output_path + "initgmm.pickle"):
with open(output_path + "initgmm.pickle", 'rb') as f:
print(f)
initgmm = pickle.load(f)
else:
jnt, jnt_codeap = None, []
for i in range(0, len(mid_mceplist), len(org_mceplist)):
with open(output_path + "_jnt_mcep_{}_.pickle".format(i), 'rb') as f:
temp_jnt = pickle.load(f)
if jnt is None:
jnt = temp_jnt
else:
jnt = np.r_[jnt, temp_jnt]
# train initial gmm
initgmm = GMMTrainer()
initgmm.train(jnt)
with open(output_path + "initgmm.pickle", 'wb') as f:
print(f)
pickle.dump(initgmm, f)
# get initial gmm params
init_W = initgmm.param.weights_
init_jmean = initgmm.param.means_
init_jcov = initgmm.param.covars_
sddim = init_jmean.shape[1] // 2
init_meanX = init_jmean[:, :sddim]
init_meanY = init_jmean[:, sddim:]
init_covXX = init_jcov[:, :sddim, :sddim]
init_covXY = init_jcov[:, :sddim, sddim:]
init_covYX = init_jcov[:, sddim:, :sddim]
init_covYY = init_jcov[:, sddim:, sddim:]
fitted_source = init_meanX
fitted_target = init_meanY
sv = None
if os.path.exists(output_path + "_sv.npy"):
sv = np.load(output_path + '_sv.npy')
else:
depengmm, depenjnt = None, None
sv = []
for i in range(0, len(mid_mceplist), len(org_mceplist)):
with open(output_path + "_jnt_mcep_{}_.pickle".format(i), 'rb') as f:
depenjnt = pickle.load(f)
depengmm = GMMTrainer(params='m')
depengmm.param.weights_ = init_W
depengmm.param.means_ = init_jmean
depengmm.param.covars_ = init_jcov
depengmm.train(depenjnt)
sv.append(depengmm.param.means_)
sv = np.array(sv)
np.save(output_path + "_sv", sv)
n_mix = 64
S = int(len(mid_mceplist) / len(org_mceplist))
assert S == 22
source_pca = sklearn.decomposition.PCA()
source_pca.fit(sv[:,:,:sddim].reshape((S, n_mix*sddim)))
target_pca = sklearn.decomposition.PCA()
target_pca.fit(sv[:,:,sddim:].reshape((S, n_mix*sddim)))
eigenvectors = source_pca.components_.reshape((n_mix, sddim, S)), target_pca.components_.reshape((n_mix, sddim, S))
biasvectors = source_pca.mean_.reshape((n_mix, sddim)), target_pca.mean_.reshape((n_mix, sddim))
# estimate statistic features
for_convert_source = __same_path + 'input/EJM10/V01/T01/ATR503/A/*.wav'
for_convert_target = __same_path + 'adaptation/EJF01/V01/T01/ATR503/A/*.wav'
src_f0list = []
src_splist = []
src_mceplist = []
src_aplist = []
src_npowlist = []
src_codeaplist = []
for files in sorted(glob.iglob(for_convert_source, recursive=True)):
wavf = files
x, fs = sf.read(wavf)
x = np.array(x, dtype=np.float64)
x = low_cut_filter(x, fs, cutoff=70)
assert fs == 16000
print("extract acoustic features: " + wavf)
f0, sp, ap = feat.analyze(x)
mcep = feat.mcep()
npow = feat.npow()
codeap = feat.codeap()
src_f0list.append(f0)
src_splist.append(sp)
src_mceplist.append(mcep)
src_aplist.append(ap)
src_npowlist.append(npow)
src_codeaplist.append(codeap)
tar_f0list = []
tar_mceplist = []
tar_aplist = []
tar_npowlist = []
tar_splist = []
tar_codeaplist = []
for files in sorted(glob.iglob(for_convert_target, recursive=True)):
wavf = files
x, fs = sf.read(wavf)
x = np.array(x, dtype=np.float64)
x = low_cut_filter(x, fs, cutoff=70)
assert fs == 16000
print("extract acoustic features: " + wavf)
f0, sp, ap = feat.analyze(x)
mcep = feat.mcep()
npow = feat.npow()
codeap = feat.codeap()
name, ext = os.path.splitext(wavf)
tar_f0list.append(f0)
tar_splist.append(sp)
tar_mceplist.append(mcep)
tar_aplist.append(ap)
tar_npowlist.append(npow)
tar_codeaplist.append(codeap)
f0stats = F0statistics()
srcf0stats = f0stats.estimate(org_f0list)
tarf0stats = f0stats.estimate(tar_f0list)
gv = GV()
srcgvstats = gv.estimate(org_mceplist)
targvstats = gv.estimate(tar_mceplist)
# 5. fitting target
epoch = 100
fitgmm = sklearn.mixture.GMM(n_components=n_mix,
covariance_type='full',
n_iter=100)
fitgmm.weights_ = init_W
fitgmm.means_ = init_meanY
fitgmm.covars_ = init_covYY
for i in range(len(tar_mceplist)):
print("adapt: ", i+1, "/", len(tar_mceplist))
target = tar_mceplist[i]
target_pow = target[:, 0]
target = target[:, 1:]
for x in range(epoch):
print("epoch = ", x)
predict = fitgmm.predict_proba(np.atleast_2d(static_delta(target)))
y = np.sum([predict[:, k:k+1] * (static_delta(target) - biasvectors[1][k]) for k in range(n_mix)], axis=1)
gamma = np.sum(predict, axis=0)
left = np.sum([gamma[k] * np.dot(eigenvectors[1][k].T,
np.linalg.solve(fitgmm.covars_, eigenvectors[1])[k])
for k in range(n_mix)], axis=0)
right = np.sum([np.dot(eigenvectors[1][k].T,
np.linalg.solve(fitgmm.covars_, y)[k])
for k in range(n_mix)], axis=0)
weight = np.linalg.solve(left, right)
fitted_target = np.dot(eigenvectors[1], weight) + biasvectors[1]
fitgmm.means_ = fitted_target
def mcepconvert(source, weights, jmean, meanX, covarXX, covarXY, covarYX, covarYY,
fitted_source, fitted_target):
M = 64
# set pX
px = sklearn.mixture.GMM(n_components=M, covariance_type='full', n_iter=100)
px.weights_ = weights
px.means_ = meanX
px.covars_ = covarXX
# set Ab
sddim = jmean.shape[1] // 2
covXXinv = np.zeros((M, sddim, sddim))
for m in range(M):
covXXinv[m] = np.linalg.inv(covarXX[m])
A = np.zeros((M, sddim, sddim))
b = np.zeros((M, sddim))
cond_cov_inv = np.zeros((M, sddim, sddim))
for m in range(M):
A[m] = covarYX[m] @ covXXinv[m]
b[m] = fitted_target[m] - A[m] @ meanX[m]
cond_cov_inv[m] = np.linalg.inv(covarYY[m] - A[m] @ covarXY[m])
# _gmmmap
T, sddim = source.shape
wseq = px.predict_proba(source)
cseq = np.argmax(wseq, axis=1)
mseq = np.zeros((T, sddim))
covseq = np.zeros((T, sddim, sddim))
for t in range(T):
m = cseq[t]
mseq[t] = fitted_target[m] + A[m] @ (source[t] - meanX[m])
covseq[t] = cond_cov_inv[m]
# _mlpg
T, sddim = mseq.shape
W = construct_static_and_delta_matrix(T, sddim // 2)
D = get_diagonal_precision_matrix(T, sddim, covseq)
WD = W.T @ D
WDW = WD @ W
WDM = WD @ mseq.flatten()
output = scipy.sparse.linalg.spsolve(WDW, WDM, use_umfpack=False).reshape(T, sddim // 2)
return output
# learn cvgvstats
cv_mceps = []
for i in range(len(src_mceplist)):
temp_mcep = src_mceplist[i]
temp_mcep_0th = temp_mcep[:, 0]
temp_mcep = temp_mcep[:, 1:]
sta_mcep = static_delta(temp_mcep)
cvmcep_wopow = np.array(mcepconvert(sta_mcep, init_W, init_jmean, init_meanX,
init_covXX, init_covXY, init_covYX, init_covYY,
fitted_source, fitted_target))
cvmcep = np.c_[temp_mcep_0th, cvmcep_wopow]
cv_mceps.append(cvmcep)
cvgvstats = gv.estimate(cv_mceps)
for i in range(len(src_mceplist)):
cvmcep_wGV = gv.postfilter(cv_mceps[i], targvstats, cvgvstats=cvgvstats)
cvf0 = f0stats.convert(src_f0list[i], srcf0stats, tarf0stats)
wav = synthesizer.synthesis(cvf0, cvmcep_wGV, src_aplist[i], rmcep=src_mceplist[i])
sf.write(output_path + "cv_{}_.wav".format(i), wav, 16000)
for i in range(len(src_mceplist)):
wav = synthesizer.synthesis(src_f0list[i], src_mceplist[i], src_aplist[i])
sf.write(output_path + "mcep_{}_.wav".format(i), wav, 16000)
wav = synthesizer.synthesis(src_f0list[i], src_splist[i], src_aplist[i])
sf.write(output_path + "ansys_{}_.wav".format(i), wav, 16000)
cvf0 = f0stats.convert(src_f0list[0], srcf0stats, tarf0stats)
plt.plot(cvf0)
#plt.plot(src_f0list[0])
cvmcep_wGV = gv.postfilter(cv_mceps[0], srcgvstats, cvgvstats=cvgvstats)
cvf0 = f0stats.convert(src_f0list[0], srcf0stats, tarf0stats)
wav = synthesizer.synthesis(cvf0, cvmcep_wGV, src_aplist[0], rmcep=src_mceplist[0])
sf.write(output_path + "te.wav", wav, 16000)
```
<a href="https://colab.research.google.com/github/jsedoc/ConceptorDebias/blob/master/WEAT/WEAT_(Final).ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a>
```
import numpy as np
from itertools import combinations, filterfalse
from sklearn.metrics.pairwise import cosine_similarity
from gensim.models.keyedvectors import KeyedVectors
import pandas as pd
import random
import sys
import os
import pickle
```
# WEAT Algorithm
The Word Embedding Association Test (WEAT), proposed by Caliskan et al., is a statistical test analogous to the Implicit Association Test (IAT) that helps quantify human biases in textual data. WEAT uses the cosine similarity between word embeddings, which plays the role of the reaction time when subjects are asked to pair two concepts they find similar in the IAT. WEAT considers two sets of target words and two sets of attribute words of equal size. The null hypothesis is that the two sets of target words are equally similar to the two sets of attribute words, where similarity is measured as the mean cosine similarity between the embeddings. For example, take the target sets to be words representing *Career* and *Family*, and let the two attribute sets be *Male* and *Female* words, in that order. The null hypothesis then states that *Career* and *Family* words are equally similar (in terms of mean cosine similarity between the word representations) to the words in the *Male* and *Female* lists.
REF: https://gist.github.com/SandyRogers/e5c2e938502a75dcae25216e4fae2da5
## Test Statistic
The WEAT test statistic measures the differential association of the two sets of target words with the attribute.
To ground this, we cast WEAT in our formulation where $\mathcal{X}$ and $\mathcal{Y}$ are two sets of target
words (concretely, $\mathcal{X}$ might be *Career* words and $\mathcal{Y}$ *Family* words) and $\mathcal{A}$, $\mathcal{B}$ are two sets of attribute words ($\mathcal{A}$ might be "female" names and $\mathcal{B}$ "male" names) assumed to associate with the bias concept(s). WEAT is then
\begin{align*}
s(\mathcal{X}, &\mathcal{Y}, \mathcal{A}, \mathcal{B}) \\ &= \frac{1}{|\mathcal{X}|}\Bigg[\sum_{x \in \mathcal{X}}{\Big[\sum_{a\in \mathcal{A}}{s(x,a)} - \sum_{b\in \mathcal{B}}{s(x,b)}\Big]} \\ &\quad - \sum_{y \in \mathcal{Y}}{\Big[\sum_{a\in \mathcal{A}}{s(y,a)} - \sum_{b\in \mathcal{B}}{s(y,b)}\Big]}\Bigg],
\end{align*}
where $s(x,y) = \cos(\hbox{vec}(x), \hbox{vec}(y))$ and $\hbox{vec}(x) \in \mathbb{R}^k$ is the $k$-dimensional word embedding for word $x$. We assume that there is no overlap between any of the sets $\mathcal{X}$, $\mathcal{Y}$, $\mathcal{A}$, and $\mathcal{B}$.
Note that for this definition of WEAT, the cardinality of the sets must be equal, so $|\mathcal{A}|=|\mathcal{B}|$ and $|\mathcal{X}|=|\mathcal{Y}|$. Our conceptor formulation given below relaxes this assumption.
```
def swAB(W, A, B):
"""Calculates differential cosine-similarity between word vectors in W, A and W, B
Arguments
W, A, B : n x d matrix of word embeddings stored row wise
"""
WA = cosine_similarity(W,A)
WB = cosine_similarity(W,B)
#Take mean along columns
WAmean = np.mean(WA, axis = 1)
WBmean = np.mean(WB, axis = 1)
return (WAmean - WBmean)
def test_statistic(X, Y, A, B):
"""Calculates the test statistic between the target word sets and attribute word sets
Arguments
X, Y, A, B : n x d matrix of word embeddings stored row wise
Returns
Test Statistic
"""
return (sum(swAB(X, A, B)) - sum(swAB(Y, A, B)))
```
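As a quick sanity check of these helpers, here is a toy run with hypothetical 2-D "embeddings" (pure-Python cosine similarity, no real word vectors): the $\mathcal{X}$ vectors point toward $\mathcal{A}$ and the $\mathcal{Y}$ vectors toward $\mathcal{B}$, so the differential association comes out positive.

```python
import math

# Hypothetical 2-D "embeddings": X aligns with attribute set A, Y with B.
def cos_sim(u, v):
    return sum(a * b for a, b in zip(u, v)) / (math.hypot(*u) * math.hypot(*v))

def s_wAB(w, A, B):
    # mean similarity to A minus mean similarity to B
    return (sum(cos_sim(w, a) for a in A) / len(A)
            - sum(cos_sim(w, b) for b in B) / len(B))

X = [(1.0, 0.0), (0.9, 0.1)]   # toy target set 1
Y = [(0.0, 1.0), (0.1, 0.9)]   # toy target set 2
A = [(1.0, 0.05)]              # toy attribute set 1
B = [(0.05, 1.0)]              # toy attribute set 2

stat = sum(s_wAB(x, A, B) for x in X) - sum(s_wAB(y, A, B) for y in Y)
assert stat > 0                # X associates with A, Y with B
```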
## Effect Size
The "effect size" is a normalized measure of how separated the two distributions are.
```
def weat_effect_size(X, Y, A, B, embd):
"""Computes the effect size for the given target and attribute word lists
Arguments
X, Y : List of target words
A, B : List of attribute words
embd : Dictionary of word-to-embedding for all words
Returns
Effect Size
"""
Xmat = np.array([embd[w.lower()] for w in X if w.lower() in embd])
Ymat = np.array([embd[w.lower()] for w in Y if w.lower() in embd])
Amat = np.array([embd[w.lower()] for w in A if w.lower() in embd])
Bmat = np.array([embd[w.lower()] for w in B if w.lower() in embd])
XuY = list(set(X).union(Y))
XuYmat = []
for w in XuY:
if w.lower() in embd:
XuYmat.append(embd[w.lower()])
XuYmat = np.array(XuYmat)
d = (np.mean(swAB(Xmat,Amat,Bmat)) - np.mean(swAB(Ymat,Amat,Bmat)))/np.std(swAB(XuYmat, Amat, Bmat))
return d
```
## P-Value
The one-sided P value measures the likelihood that a random permutation of the target words would produce at least the observed test statistic.
```
def random_permutation(iterable, r=None):
"""Returns a random permutation for any iterable object"""
pool = tuple(iterable)
r = len(pool) if r is None else r
return tuple(random.sample(pool, r))
def weat_p_value(X, Y, A, B, embd, sample = 1000):
"""Computes the one-sided P value for the given target and attribute word lists
Arguments
X, Y : List of target words
A, B : List of attribute words
embd : Dictionary of word-to-embedding for all words
sample : Number of random permutations used
Returns
One-sided P value
"""
size_of_permutation = min(len(X), len(Y))
X_Y = X + Y
test_stats_over_permutation = []
Xmat = np.array([embd[w.lower()] for w in X if w.lower() in embd])
Ymat = np.array([embd[w.lower()] for w in Y if w.lower() in embd])
Amat = np.array([embd[w.lower()] for w in A if w.lower() in embd])
Bmat = np.array([embd[w.lower()] for w in B if w.lower() in embd])
if not sample:
permutations = combinations(X_Y, size_of_permutation)
else:
permutations = [random_permutation(X_Y, size_of_permutation) for s in range(sample)]
for Xi in permutations:
Yi = filterfalse(lambda w:w in Xi, X_Y)
Ximat = np.array([embd[w.lower()] for w in Xi if w.lower() in embd])
Yimat = np.array([embd[w.lower()] for w in Yi if w.lower() in embd])
test_stats_over_permutation.append(test_statistic(Ximat, Yimat, Amat, Bmat))
unperturbed = test_statistic(Xmat, Ymat, Amat, Bmat)
is_over = np.array([o > unperturbed for o in test_stats_over_permutation])
return is_over.sum() / is_over.size
```
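The permutation logic can be seen on a toy case with hypothetical scalar association scores in place of embedding similarities: when the observed split of the pooled target words is the most extreme one possible, no permutation exceeds it and the estimated P value is (near) zero.

```python
import random

random.seed(0)
X = [5.0, 6.0, 5.5]            # toy association scores for one target set
Y = [1.0, 0.5, 1.5]            # toy scores for the other target set
observed = sum(X) - sum(Y)     # maximally extreme split of the pool
pool = X + Y
trials = 2000
count = 0
for _ in range(trials):
    perm = random.sample(pool, len(pool))
    if sum(perm[:3]) - sum(perm[3:]) > observed:
        count += 1
p_value = count / trials
assert p_value < 0.05          # no relabeling beats the observed statistic
```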
## Test on sample input
```
"""Target word lists (Instruments, Weapons) and attribute word lists (Pleasant, Unpleasant) for the sample test"""
# Instruments
X = ["bagpipe", "cello", "guitar", "lute", "trombone", "banjo", "clarinet", "harmonica", "mandolin", "trumpet", "bassoon", "drum", "harp", "oboe", "tuba", "bell", "fiddle", "harpsichord", "piano", "viola", "bongo",
"flute", "horn", "saxophone", "violin"]
# Weapons
Y = ["arrow", "club", "gun", "missile", "spear", "axe", "dagger", "harpoon", "pistol", "sword", "blade", "dynamite", "hatchet", "rifle", "tank", "bomb", "firearm", "knife", "shotgun", "teargas", "cannon", "grenade",
"mace", "slingshot", "whip"]
# Pleasant
A = ["caress", "freedom", "health", "love", "peace", "cheer", "friend", "heaven", "loyal", "pleasure", "diamond", "gentle", "honest", "lucky", "rainbow", "diploma", "gift", "honor", "miracle", "sunrise", "family",
"happy", "laughter", "paradise", "vacation"]
# Unpleasant
B = ["abuse", "crash", "filth", "murder", "sickness", "accident", "death", "grief", "poison", "stink", "assault", "disaster", "hatred", "pollute", "tragedy", "divorce", "jail", "poverty", "ugly", "cancer", "kill", "rotten",
"vomit", "agony", "prison"]
"""Download the 'Glove' embeddings if not downloaded"""
!if [ ! -f /content/gensim_glove.840B.300d.txt.bin ]; then gdown https://drive.google.com/uc?id=1Ty2exMyi-XOufY-v81RJfiPvnintHuy2; fi
"""Load the embeddings to a gensim object"""
from gensim.models import KeyedVectors
resourceFile = ''
glove = KeyedVectors.load_word2vec_format(resourceFile + 'gensim_glove.840B.300d.txt.bin', binary=True)
print('The glove embedding has been loaded!')
"""Compute the effect-size and P value"""
print('WEAT d = ', weat_effect_size(X, Y, A, B, glove))
print('WEAT p = ', weat_p_value(X, Y, A, B, glove, 1000))
```
# Load all vectors and compute the conceptor
A conceptor matrix, $C$, is a regularized identity map (in our case, from the original word embeddings to their debiased versions) that minimizes
\begin{equation}
\|Z - CZ\|_F^2 + \alpha^{-2}\|C\|_{F}^2.
\end{equation}
where $\alpha$ is a scalar parameter known as the aperture.
Since many readers may be unfamiliar with conceptors, we briefly reintroduce them here.
$C$ has a closed-form solution, where $k$ is the number of word vectors (the columns of $Z$):
\begin{equation}
C = \frac{1}{k} Z Z^{\top} (\frac{1}{k} Z Z^{\top}+\alpha^{-2} I)^{-1}.
\end{equation}
Intuitively, $C$ is a soft projection matrix on the linear subspace where the word embeddings $Z$ have the highest variance. Once $C$ has been learned, it can be 'negated' by subtracting it from the identity matrix and then applied to any word embeddings (e.g., those defined by the lists, $\mathcal{X}$ and $\mathcal{Y}$) to remove the biased subspace.
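As a sanity check on the closed form, the following self-contained NumPy sketch (with arbitrary toy values for $d$, $k$ and $\alpha$, and a random matrix standing in for the word embeddings $Z$) computes $C$ and verifies that it behaves as a soft projection:

```python
import numpy as np

rng = np.random.default_rng(1)
d, k, alpha = 10, 500, 2.0
Z = rng.normal(size=(d, k))                          # one embedding per column

R = Z @ Z.T / k                                      # (1/k) Z Z^T
C = R @ np.linalg.inv(R + alpha**-2 * np.eye(d))     # closed-form conceptor

# C is a soft projection: symmetric, with eigenvalues in (0, 1)
eig = np.linalg.eigvalsh(C)
print(eig.min(), eig.max())

negC = np.eye(d) - C                                 # the debiasing conceptor NOT C
print(np.linalg.norm(negC @ Z), np.linalg.norm(Z))   # negation shrinks the data
```

Each eigenvalue of $C$ is $s/(s + \alpha^{-2})$ for an eigenvalue $s$ of $\frac{1}{k}ZZ^{\top}$, so high-variance directions are kept almost intact while low-variance directions are damped.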
Conceptors can represent laws of Boolean logic, such as NOT $\neg$, AND $\wedge$ and OR $\vee$. For two conceptors $C$ and $B$, we define the following operations:
\begin{align*}
\neg C:=& I-C, \\
C\wedge B:=& (C^{-1} + B^{-1} - I)^{-1} \\
C \vee B:=&\neg(\neg C \wedge \neg B)
\end{align*}
Given that the conceptor, $C$, represents the subspace of maximum bias, we apply the negated conceptor, NOT $C$, to an embedding space to remove its bias. We call NOT $C$ the *debiasing conceptor*. More generally, if we have $K$ conceptors, $C_i$, derived from $K$ different word lists, we call NOT $(C_1 \vee ... \vee C_K)$ a debiasing conceptor.
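The Boolean operations can be sketched directly from the definitions above. This is a toy re-implementation (the notebook imports `NOT` and `AND` from `Conceptors.conceptor_fxns`); it assumes the conceptors are invertible, whereas library code may handle rank-deficient cases differently.

```python
import numpy as np

def conceptor(Z, alpha=2.0):
    """Closed-form conceptor for a d x k data matrix (one sample per column)."""
    d, k = Z.shape
    R = Z @ Z.T / k
    return R @ np.linalg.inv(R + alpha**-2 * np.eye(d))

def NOT(C):
    return np.eye(C.shape[0]) - C

def AND(C, B):
    # assumes C and B are invertible
    return np.linalg.inv(np.linalg.inv(C) + np.linalg.inv(B) - np.eye(C.shape[0]))

def OR(C, B):
    return NOT(AND(NOT(C), NOT(B)))

rng = np.random.default_rng(2)
C1 = conceptor(rng.normal(size=(6, 200)))
C2 = conceptor(rng.normal(size=(6, 200)))

union = OR(C1, C2)        # combined bias subspace, C1 v C2
debiasing = NOT(union)    # multi-list debiasing conceptor NOT (C1 v C2)
ev = np.linalg.eigvalsh(union)
print(ev.min(), ev.max())  # still a valid conceptor: eigenvalues in (0, 1)
```

The union of two conceptors is again symmetric with eigenvalues in $(0,1)$, so it can be negated and applied to embeddings exactly like a single-list conceptor.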
Clone our repository, which contains the code for computing conceptors as well as the word lists representing the different subspaces.
```
# our code for debiasing -- also includes word lists
!rm -r ConceptorDebias
!git clone https://github.com/jsedoc/ConceptorDebias
import sys
sys.path.append('/content/ConceptorDebias')
from Conceptors.conceptor_fxns import *
```
## Compute the conceptor matrix
```
def process_cn_matrix(subspace, alpha = 2):
"""Returns the conceptor negation matrix
Arguments
subspace : d x n matrix of word vectors (one column per word) from a particular subspace
alpha : Tunable parameter
"""
# Compute the conceptor matrix
C,_ = train_Conceptor(subspace, alpha)
# Calculate the negation of the conceptor matrix
negC = NOT(C)
return negC
def apply_conceptor(x, C):
"""Returns the conceptored embeddings
Arguments
x : d x n matrix (one column per word) of all words to be conceptored
C : d x d conceptor matrix
"""
# Post-process the vocab matrix
newX = (C @ x).T
return newX
```
## Load embeddings for all words in the reference word list
```
def load_all_vectors(embd, wikiWordsPath):
"""Loads all word vectors for all words in the list of words as a matrix
Arguments
embd : Dictionary of word-to-embedding for all words
wikiWordsPath : Path to the file listing the reference vocabulary
Returns
all_words_index : Dictionary of words to the row-number of the corresponding word in the matrix
all_words_mat : Matrix of word vectors stored row-wise
"""
all_words_index = {}
all_words_mat = []
with open(wikiWordsPath, "r+") as f_in:
ind = 0
for line in f_in:
word = line.split(' ')[0]
if word in embd:
all_words_index[word] = ind
all_words_mat.append(embd[word])
ind = ind+1
return all_words_index, all_words_mat
def load_subspace_vectors(embd, subspace_words):
"""Loads all word vectors for the particular subspace in the list of words as a matrix
Arguments
embd : Dictionary of word-to-embedding for all words
subspace_words : List of words representing a particular subspace
Returns
subspace_embd_mat : Matrix of word vectors stored row-wise
"""
subspace_embd_mat = []
ind = 0
for word in subspace_words:
if word in embd:
subspace_embd_mat.append(embd[word])
ind = ind+1
return subspace_embd_mat
```
## Load the word lists for the reference subspaces
```
# General word list
!wget https://raw.githubusercontent.com/IlyaSemenov/wikipedia-word-frequency/master/results/enwiki-20190320-words-frequency.txt
!git clone https://github.com/PrincetonML/SIF
# Gender word lists
!git clone https://github.com/uclanlp/gn_glove
!git clone https://github.com/uclanlp/corefBias
!wget https://www.cs.cmu.edu/Groups/AI/areas/nlp/corpora/names/female.txt
!wget https://www.cs.cmu.edu/Groups/AI/areas/nlp/corpora/names/male.txt
from lists.load_word_lists import *
"""Load list of pronouns representing the 'Pronoun' subspace for gender debiasing"""
gender_list_pronouns = WEATLists.W_7_Male_terms + WEATLists.W_7_Female_terms + WEATLists.W_8_Male_terms + WEATLists.W_8_Female_terms
gender_list_pronouns = list(set(gender_list_pronouns))
"""Load an extended list of words representing the gender subspace for gender debiasing"""
gender_list_extended = male_vino_extra + female_vino_extra + male_gnGlove + female_gnGlove
gender_list_extended = list(set(gender_list_extended))
"""Load list of proper nouns representing the 'Proper Noun' subspace for gender debiasing"""
gender_list_propernouns = male_cmu + female_cmu
gender_list_propernouns = list(set(gender_list_propernouns))
"""Load the combined list of all words representing the gender subspace for gender debiasing"""
gender_list_all = gender_list_pronouns + gender_list_extended + gender_list_propernouns
gender_list_all = list(set(gender_list_all))
"""Load lists of common European-American and African-American names for racial debiasing"""
race_list = WEATLists.W_3_Unused_full_list_European_American_names + WEATLists.W_3_European_American_names + WEATLists.W_3_Unused_full_list_African_American_names + WEATLists.W_3_African_American_names + WEATLists.W_4_Unused_full_list_European_American_names + WEATLists.W_4_European_American_names + WEATLists.W_4_Unused_full_list_African_American_names + WEATLists.W_4_African_American_names + WEATLists.W_5_Unused_full_list_European_American_names + WEATLists.W_5_European_American_names + WEATLists.W_5_Unused_full_list_African_American_names + WEATLists.W_5_African_American_names
race_list = list(set(race_list))
```
## Load different embeddings
**Glove**
```
"""Download the 'Glove' embeddings if not downloaded"""
!if [ ! -f /content/gensim_glove.840B.300d.txt.bin ]; then gdown https://drive.google.com/uc?id=1Ty2exMyi-XOufY-v81RJfiPvnintHuy2; fi
"""Load the embeddings to a gensim object"""
resourceFile = ''
if 'glove' not in dir():
glove = KeyedVectors.load_word2vec_format(resourceFile + 'gensim_glove.840B.300d.txt.bin', binary=True)
print('The glove embedding has been loaded!')
"""Sample output of the glove embeddings"""
X = WEATLists.W_5_Unused_full_list_European_American_names
print(X)
a = [glove[w] for w in X if w.lower() in glove]
print(np.array(a).shape)
glove['Brad']
#glove['brad']
```
**Word2Vec**
```
"""Download the 'Word2Vec' embeddings if not downloaded"""
!if test -e /content/GoogleNews-vectors-negative300.bin.gz || test -e /content/GoogleNews-vectors-negative300.bin; then echo 'file already downloaded'; else echo 'starting download'; gdown https://drive.google.com/uc?id=0B7XkCwpI5KDYNlNUTTlSS21pQmM; fi
!if [ ! -f /content/GoogleNews-vectors-negative300.bin ]; then gunzip GoogleNews-vectors-negative300.bin.gz; fi
"""Load the embeddings to a gensim object"""
resourceFile = ''
if 'word2vec' not in dir():
word2vec = KeyedVectors.load_word2vec_format(resourceFile + 'GoogleNews-vectors-negative300.bin', binary=True)
print('The word2vec embedding has been loaded!')
```
**FastText**
```
"""Download the 'Fasttext' embeddings if not downloaded"""
!if [ ! -f /content/fasttext.bin ]; then gdown https://drive.google.com/uc?id=1Zl6a75Ybf8do9uupmrJWKQMnvqqme4fh; fi
"""Load the embeddings to a gensim object"""
resourceFile = ''
if 'fasttext' not in dir():
fasttext = KeyedVectors.load_word2vec_format(resourceFile + 'fasttext.bin', binary=True)
print('The fasttext embedding has been loaded!')
```
**ELMo**
```
"""Download the 'ELMo' embeddings if not downloaded"""
!if [ ! -f /content/elmo_embeddings_emma_brown.pkl ]; then gdown https://drive.google.com/uc?id=17TK2h3cz7amgm2mCY4QCYy1yh23ZFWDU; fi
"""Load the embeddings to a dictionary"""
data = pickle.load(open("elmo_embeddings_emma_brown.pkl", "rb"))
def pick_embeddings(corpus, sent_embs):
X = []
labels = {}
sents = []
ind = 0
for i, s in enumerate(corpus):
for j, w in enumerate(s):
X.append(sent_embs[i][j])
if w.lower() in labels:
labels[w.lower()].append(ind)
else:
labels[w.lower()] = [ind]
sents.append(s)
ind = ind + 1
return (X, labels, sents)
def get_word_list(path):
word_list = []
with open(path, "r+") as f_in:
for line in f_in:
word = line.split(' ')[0]
word_list.append(word.lower())
return word_list
def load_subspace_vectors_contextual(all_mat, all_index, subspace_list):
subspace_mat = []
for w in subspace_list:
if w.lower() in all_index:
for i in all_index[w.lower()]:
#print(type(i))
subspace_mat.append(all_mat[i])
#subspace_mat = [all_mat[i,:] for i in all_index[w.lower()] for w in subspace_list if w.lower() in all_index]
print("Subspace: ", np.array(subspace_mat).shape)
return subspace_mat
import nltk
from nltk.corpus import brown
nltk.download('brown')
brown_corpus = brown.sents()
elmo = data['brown_embs']
```
**BERT**
```
def load_bert(all_dict, subspace):
"""Loads all embeddings in a matrix and a dictionary of words to row numbers"""
all_mat = None
words = []
for name in all_dict:
mat = all_dict[name]['type_embedings']
all_mat = mat if all_mat is None else np.concatenate((all_mat, mat))
words += all_dict[name]['words']
words = [w.lower() for w in words]
all_words_index = {}
for i,a in enumerate(words):
all_words_index[a] = i
return all_words_index, all_mat
def load_bert_conceptor(all_dict, subspace):
"""Loads the precomputed BERT conceptor matrix for the given subspace"""
return all_dict['big_bert_' + subspace + '.pkl']['GnegC']
"""Load all bert embeddings into a dictionary"""
bert_dir = '/home/saketk/bert'
all_dict = {}
for filename in os.listdir(bert_dir):
all_dict[filename] = pickle.load(open(os.path.join(bert_dir, filename), "rb"))
```
**Custom embeddings (From text file)**
```
"""Download your custom embeddings (text file)"""
"""Convert to word2vec format (if in GloVe format)"""
from gensim.scripts.glove2word2vec import glove2word2vec
input_file = 'glove.txt'
output_file = 'word2vec.txt'
glove2word2vec(input_file, output_file)
"""Load embeddings as gensim object"""
custom = KeyedVectors.load_word2vec_format(output_file, binary=False)
```
# WEAT on Conceptored embeddings
## Initialize variables
```
resourceFile = ''
wikiWordsPath = resourceFile + 'SIF/auxiliary_data/enwiki_vocab_min200.txt' # https://github.com/PrincetonML/SIF/blob/master/auxiliary_data/enwiki_vocab_min200.txt
"""Set the embedding to be used"""
embd = 'glove'
"""Set the subspace to be tested on"""
subspace = 'gender_list_all'
"""Load association and target word pairs"""
X = WEATLists.W_8_Science
Y = WEATLists.W_8_Arts
A = WEATLists.W_8_Male_terms
B = WEATLists.W_8_Female_terms
```
## Load the vectors as a matrix
```
curr_embd = eval(embd)
"""Load all embeddings in a matrix of all words in the wordlist"""
if embd == 'elmo':
all_words_mat, all_words_index, _ = pick_embeddings(brown_corpus, curr_embd)
elif embd == 'bert':
all_words_index, all_words_mat = load_bert(all_dict, subspace)
else:
all_words_index, all_words_mat = load_all_vectors(curr_embd, wikiWordsPath)
```
## Compute the conceptor
```
"""Load the vectors for the words representing the subspace as a matrix and compute the respective conceptor matrix"""
if subspace != 'without_conceptor':
subspace_words_list = eval(subspace)
if subspace == 'gender_list_and':
if embd == 'elmo':
subspace_words_mat1 = load_subspace_vectors_contextual(all_words_mat, all_words_index, gender_list_pronouns)
cn1 = process_cn_matrix(np.array(subspace_words_mat1).T, alpha = 8)
subspace_words_mat2 = load_subspace_vectors_contextual(all_words_mat, all_words_index, gender_list_extended)
cn2 = process_cn_matrix(np.array(subspace_words_mat2).T, alpha = 3)
subspace_words_mat3 = load_subspace_vectors_contextual(all_words_mat, all_words_index, gender_list_propernouns)
cn3 = process_cn_matrix(np.array(subspace_words_mat3).T, alpha = 10)
cn = AND(cn1, AND(cn2, cn3))
elif embd == 'bert':
cn1 = load_bert_conceptor(all_dict, 'gender_list_pronouns')
cn2 = load_bert_conceptor(all_dict, 'gender_list_extended')
cn3 = load_bert_conceptor(all_dict, 'gender_list_propernouns')
cn = AND(cn1, AND(cn2, cn3))
else:
subspace_words_mat1 = load_subspace_vectors(curr_embd, gender_list_pronouns)
cn1 = process_cn_matrix(np.array(subspace_words_mat1).T)
subspace_words_mat2 = load_subspace_vectors(curr_embd, gender_list_extended)
cn2 = process_cn_matrix(np.array(subspace_words_mat2).T)
subspace_words_mat3 = load_subspace_vectors(curr_embd, gender_list_propernouns)
cn3 = process_cn_matrix(np.array(subspace_words_mat3).T)
cn = AND(cn1, AND(cn2, cn3))
else:
if embd == 'elmo':
subspace_words_mat = load_subspace_vectors_contextual(all_words_mat, all_words_index, subspace_words_list)
cn = process_cn_matrix(np.array(subspace_words_mat).T, alpha = 6)
elif embd == 'bert':
cn = load_bert_conceptor(all_dict, subspace)
else:
subspace_words_mat = load_subspace_vectors(curr_embd, subspace_words_list)
cn = process_cn_matrix(np.array(subspace_words_mat).T)
```
## Compute conceptored embeddings
```
"""Conceptor all embeddings"""
all_words_cn = apply_conceptor(np.array(all_words_mat).T, np.array(cn))
"""Store all conceptored words in a dictionary"""
all_words = {}
for word, index in all_words_index.items():
if embd == 'elmo':
all_words[word] = np.mean([all_words_cn[i,:] for i in index], axis = 0)
else:
all_words[word] = all_words_cn[index,:]
```
## Calculate WEAT scores
```
d = weat_effect_size(X, Y, A, B, all_words)
p = weat_p_value(X, Y, A, B, all_words, 1000)
print('WEAT d = ', d)
print('WEAT p = ', p)
```
# Hard Debiasing
## Mu et al. Hard Debiasing
```
from sklearn.decomposition import PCA
def hard_debias(all_words, subspace):
"""Project off the first principal component (of the subspace) from all word vectors
Arguments
all_words : Matrix of word vectors of all words stored row-wise
subspace : Matrix of words representing a particular subspace stored row-wise
Returns
ret : Matrix of debiased word vectors stored row-wise
"""
all_words = np.array(all_words)
subspace = np.array(subspace)
# Compute the first principal component of the subspace matrix
pca = PCA(n_components = 1)
pca.fit(subspace)
pc1 = np.array(pca.components_)
# Project off the first PC from all word vectors
temp = (pc1.T @ (pc1 @ all_words.T)).T
ret = all_words - temp
return ret
```
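The following self-contained sketch checks the projection step on synthetic data, using an SVD in place of sklearn's `PCA` (the two agree because `PCA` centers the data before extracting components). The bias direction and noise scales are arbitrary illustrative values.

```python
import numpy as np

rng = np.random.default_rng(3)
d = 50
bias_dir = rng.normal(size=d)
bias_dir /= np.linalg.norm(bias_dir)

# Synthetic "subspace" words dominated by one shared direction, plus noise
subspace = np.outer(5.0 * rng.normal(size=300), bias_dir) + 0.1 * rng.normal(size=(300, d))
all_words = rng.normal(size=(200, d))

# First principal component of the centered subspace via SVD
# (equivalent to PCA(n_components=1).components_)
centered = subspace - subspace.mean(axis=0)
pc1 = np.linalg.svd(centered, full_matrices=False)[2][:1]   # shape (1, d)

# Same projection as in hard_debias above
debiased = all_words - (pc1.T @ (pc1 @ all_words.T)).T

# Every debiased vector is now orthogonal to the dominant direction
print(np.abs(debiased @ pc1.T).max())
```

After the projection, no word vector retains any component along the recovered bias direction, which is exactly the property the WEAT test probes.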
### Initialize variables
```
resourceFile = ''
wikiWordsPath = resourceFile + 'SIF/auxiliary_data/enwiki_vocab_min200.txt' # https://github.com/PrincetonML/SIF/blob/master/auxiliary_data/enwiki_vocab_min200.txt
"""Set the embedding to be used"""
embd = 'glove'
"""Set the subspace to be tested on"""
subspace = 'gender_list_all'
"""Load association and target word pairs"""
X = WEATLists.W_8_Science
Y = WEATLists.W_8_Arts
A = WEATLists.W_8_Male_terms
B = WEATLists.W_8_Female_terms
```
### Load the vectors as a matrix
```
curr_embd = eval(embd)
"""Load all embeddings in a matrix of all words in the wordlist"""
if embd == 'elmo':
all_words_mat, all_words_index, _ = pick_embeddings(brown_corpus, curr_embd)
elif embd == 'bert':
all_words_index, all_words_mat = load_bert(all_dict, subspace)
else:
all_words_index, all_words_mat = load_all_vectors(curr_embd, wikiWordsPath)
```
### Debias the embeddings
```
"""Load the vectors for the words representing the subspace as a matrix and debias the embeddings"""
if subspace != 'without_conceptor' and subspace != 'gender_list_and':
subspace_words_list = eval(subspace)
if subspace != 'without_debiasing':
if embd == 'elmo' or embd == 'bert':
subspace_words_mat = load_subspace_vectors_contextual(all_words_mat, all_words_index, subspace_words_list)
all_words_cn = hard_debias(all_words_mat, subspace_words_mat)
else:
subspace_words_mat = load_subspace_vectors(curr_embd, subspace_words_list)
all_words_cn = hard_debias(all_words_mat, subspace_words_mat)
else:
all_words_cn = all_words_mat
all_words_cn = np.array(all_words_cn)
# Store all debiased words in a dictionary
all_words = {}
for word, index in all_words_index.items():
if embd == 'elmo' or embd == 'bert':
all_words[word] = np.mean([all_words_cn[i,:] for i in index], axis = 0)
else:
all_words[word] = all_words_cn[index,:]
```
### Calculate WEAT scores
```
d = weat_effect_size(X, Y, A, B, all_words)
p = weat_p_value(X, Y, A, B, all_words, 1000)
print('WEAT d = ', d)
print('WEAT p = ', p)
```
## Bolukbasi hard debiasing
```
"""Helper methods to debias word embeddings as proposed by the original authors"""
def doPCA(pairs, mat, index, num_components = 5):
matrix = []
for a, b in pairs:
center = (mat[index[a.lower()]] + mat[index[b.lower()]])/2
matrix.append(mat[index[a.lower()]] - center)
matrix.append(mat[index[b.lower()]] - center)
matrix = np.array(matrix)
pca = PCA(n_components = num_components)
pca.fit(matrix)
# bar(range(num_components), pca.explained_variance_ratio_)
return pca
def drop(u, v):
return u - v * u.dot(v) / v.dot(v)
def normalize(all_words_mat):
all_words_mat /= np.linalg.norm(all_words_mat, axis=1)[:, np.newaxis]
return all_words_mat
def debias(all_words_mat, all_words_index, gender_specific_words, definitional, equalize):
"""Debiases the word vectors as proposed by the original authors
Arguments
all_words_mat : Matrix of word vectors of all words stored row-wise
all_words_index : Dictionary of words to row number in the matrix
gender_specific_words : List of words defining the subspace
definitional : List of definitional words
equalize : List of tuples defined as the set of equalize pairs (downloaded)
Returns
all_words_mat : Matrix of debiased word vectors stored row-wise
"""
gender_direction = doPCA(definitional, all_words_mat, all_words_index).components_[0]
specific_set = set(gender_specific_words)
for w in list(all_words_index.keys()):
if w not in specific_set:
all_words_mat[all_words_index[w.lower()]] = drop(all_words_mat[all_words_index[w.lower()]], gender_direction)
all_words_mat = normalize(all_words_mat)
candidates = {x for e1, e2 in equalize for x in [(e1.lower(), e2.lower()),
(e1.title(), e2.title()),
(e1.upper(), e2.upper())]}
print(candidates)
for (a, b) in candidates:
if (a.lower() in all_words_index and b.lower() in all_words_index):
y = drop((all_words_mat[all_words_index[a.lower()]] + all_words_mat[all_words_index[b.lower()]]) / 2, gender_direction)
z = np.sqrt(1 - np.linalg.norm(y)**2)
if (all_words_mat[all_words_index[a.lower()]] - all_words_mat[all_words_index[b.lower()]]).dot(gender_direction) < 0:
z = -z
all_words_mat[all_words_index[a.lower()]] = z * gender_direction + y
all_words_mat[all_words_index[b.lower()]] = -z * gender_direction + y
all_words_mat = normalize(all_words_mat)
return all_words_mat
```
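To see the geometry of `drop` and the equalize step, here is a toy sketch on random stand-in vectors (the 0.9 norm chosen for the shared neutral part is an arbitrary illustrative value):

```python
import numpy as np

def drop(u, v):
    """Remove from u its component along v (same as drop() above)."""
    return u - v * u.dot(v) / v.dot(v)

rng = np.random.default_rng(4)
g = rng.normal(size=300)   # stand-in for the learned gender direction
w = rng.normal(size=300)   # stand-in for a word vector

w_neutral = drop(w, g)
print(abs(w_neutral.dot(g)))   # ~0: no component along g remains

# Equalize step: place a word pair symmetrically about their neutral midpoint
a, b = rng.normal(size=300), rng.normal(size=300)
g_hat = g / np.linalg.norm(g)
y = drop((a + b) / 2, g_hat)
y *= 0.9 / np.linalg.norm(y)        # hypothetical shared neutral part, norm 0.9 < 1
z = np.sqrt(1 - np.linalg.norm(y) ** 2)
a_new, b_new = z * g_hat + y, -z * g_hat + y
print(np.linalg.norm(a_new), np.linalg.norm(b_new))   # both unit norm
```

Because `y` is orthogonal to the gender direction, the equalized pair differs only along that direction and both members end up with unit norm, which is what `debias` above enforces for each equalize pair.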
### Initialize Variables
```
resourceFile = ''
wikiWordsPath = resourceFile + 'SIF/auxiliary_data/enwiki_vocab_min200.txt' # https://github.com/PrincetonML/SIF/blob/master/auxiliary_data/enwiki_vocab_min200.txt
!git clone https://github.com/tolga-b/debiaswe.git
"""Load definitional and equalize lists"""
%cd debiaswe/
# Let's load some gender related word lists to help us with debiasing
with open('./data/definitional_pairs.json', "r") as f:
defs = json.load(f) # gender definitional words
defs_list = []
for pair in defs:
defs_list.append(pair[0])
defs_list.append(pair[1])
with open('./data/equalize_pairs.json', "r") as f:
equalize_pairs = json.load(f)
%cd ../
!ls
"""Set the embedding to be used"""
embd = 'glove'
"""Set the subspace to be tested on"""
subspace = 'gender_list_all'
"""Load association and target word pairs"""
X = WEATLists.W_8_Science
Y = WEATLists.W_8_Arts
A = WEATLists.W_8_Male_terms
B = WEATLists.W_8_Female_terms
```
### Load the vectors as a matrix
```
curr_embd = eval(embd)
"""Load all embeddings in a matrix of all words in the wordlist"""
all_words_index, all_words_mat = load_all_vectors(curr_embd, wikiWordsPath)
```
### Debias the embeddings
```
"""Load the vectors for the words representing the subspace and debias the embeddings"""
if subspace != 'without_conceptor' and subspace != 'gender_list_and':
subspace_words_list = eval(subspace)
if subspace != 'without_debiasing':
all_words_cn = debias(all_words_mat, all_words_index, subspace_words_list, defs, equalize_pairs)
else:
all_words_cn = all_words_mat
all_words_cn = np.array(all_words_cn)
# Store all debiased words in a dictionary
all_words = {}
for word, index in all_words_index.items():
all_words[word] = all_words_cn[index,:]
```
### Calculate WEAT scores
```
d = weat_effect_size(X, Y, A, B, all_words)
p = weat_p_value(X, Y, A, B, all_words, 1000)
print('WEAT d = ', d)
print('WEAT p = ', p)
```