# Create and Populate PostgreSQL Output Database for FVS Simulations
We're using a PostgreSQL database to collect outputs from the Forest Vegetation Simulator (FVS) growth-and-yield model to allow for running multiple FVS simulations in parallel. PostgreSQL allows multiple simultaneous connections to the database, so FVS won't have to wait for other running instances of FVS to finish reading/writing.
```
import psycopg2 # for interacting with PostgreSQL database
from FVSoutput_SQL_createDBtables import * # SQL query strings for creating FVS output tables
```
## Getting Started
First, you'll need to create a blank output database for FVS to direct outputs to. Don't forget to add the output database information to your list of ODBC named data sources so that FVS knows how to find and access it. The rest of this notebook will still work and your output tables will get created, but FVS won't write to your output database if it doesn't find it as a named data source (use a 32-bit ODBC driver; we suggest the PostgreSQL Unicode driver).
## Tables you can create in the output database
The following tables are available to use. All should be prepended with "fvs_":
**Base FVS (incl. DB Extension):**
atrtrlist, cases, compute, cutlist, strclass, summary, treelist
**Fire & Fuels Extension (FFE):**
*General:* canprofile, snagdet, down_wood_cov, down_wood_vol, mortality, snagsum
*Fire/Fuels:* burnreport, consumption, potfire, fuels
*Carbon:* carbon, hrv_carbon
**ECON Extension:**
econharvestvalue, econsummary
**Mountain Pine Beetle (MPB) Impact Model:**
bm_bkp, bm_main, bm_tree, bm_vol
**Climate-FVS:**
climate
**Dwarf Mistletoe:**
dm_spp_sum, dm_stnd_sum
**Western Root Disease:**
rd_sum, rd_det, rd_beetle
```
# We'll use the following helper function to write tables into the FVS Output database.
def create_tables(conn_str, table_SQLs, verbose=False):
'''
Creates tables in a PostgreSQL database to hold FVS Outputs.
===========
Arguments:
conn_str = A string for the database connection.
    table_SQLs = A list of SQL query strings that create FVS output tables (imported from FVSoutput_SQL_createDBtables.py).
verbose = Will print out the SQL query strings used if set to True (defaults to False).
'''
    with psycopg2.connect(conn_str) as conn:
with conn.cursor() as cur:
for SQL in table_SQLs:
if verbose:
print(SQL)
cur.execute(SQL)
print('Created', SQL.split(' ')[2], end='... ')
    conn.close()  # psycopg2's with-block commits the transaction but does not close the connection
print('Done.')
```
### Specify the tables you want to be created in the output database
```
my_tables = [fvs_cases, fvs_summary, fvs_treelist]
```
### Specify how to connect to your output database
Specify the dbname, username/owner, and host for your database. If you don't have password authentication handled via other means, you'd also need to provide it here.
```
mydb = "PNWFIADB_FVSOut"
myusername = 'postgres'
myhost = 'localhost'
conn_string = "dbname={dbname} user={user} host={host}".format(dbname=mydb, user=myusername, host=myhost)
```
### Execute the `create_tables` function, then go check out your database!
```
create_tables(conn_string, my_tables)
```
<img src="images/charmander.png" alt="Beginner" width="200">
<img src="images/coffee_machine.jpeg" width="300" style="float: right; margin-top: 30px">
## Introduction to programming
Welcome!
Our goal in this module will be to let you have a taste of what programming is like.
I'm going to assume that until now, all your interactions with a computer have been through a graphical user interface ([GUI](https://en.wikipedia.org/wiki/History_of_the_graphical_user_interface)).
These interfaces completely revolutionized how we interact with computers by simplifying what is an enormously complex machine into an appliance that has only a few functions.
Each function can be executed by pressing a button.
Like it's a coffee machine.
But the computer you are talking to right now is not a coffee machine.
Once you drop the GUI mask and start interacting with the computer itself, its full potential becomes available to you!
We're going to have to start from scratch. So settle in. You've got a lot to learn about programming before we can get to data visualization.
## Your first ever program
Programming code is written in a programming language.
You'll have to learn this language in order to talk with the computer.
Don't worry though, programming languages are designed to be very easy to learn.
For the purposes of this hands-on session, you only need to know a little of the vocabulary and grammar of a programming language called [Python](https://docs.python.org/3/tutorial/).
The text you are reading now is inside a "cell". This notebook contains both text cells like this one, and "code cells" in which you can write programming code and have the computer "execute" it.
The first thing I'm going to teach you is how to use the computer as a calculator. In Python, arithmetic works just as you would expect it to work. Try typing `1 + 1` in the cell below and press `Ctrl` + `Enter` (both keys on your keyboard at the same time) to execute the code.
<div style="border: 3px solid #aaccff; margin: 10px 100px; padding: 10px">
<b>Note about colors</b>
As you type in programming code in the cell below, you will notice the computer will automatically color parts of the text. These colors are purely a visual aid for you, the programmer, and have no meaning in the programming language.
</div>
Did it work? Hopefully the computer answered back with "2". Phenomenal computer power at work ladies and gentlemen! In the Python language, `+`, `-`, `*`, `/` all do what you would expect them to do. For example, to compute the average of 5, 3, 7 and 10, we would write:
```python
(5 + 3 + 7 + 10) / 4
```
just like you would write it in math. Try it in the code cell above if you like.
## Variables
The computer ran your little program, reported the answer back to you, and by now has forgotten all about it.
If we wish for the computer to remember the result of a computation, so we may use it in another computation, we need to give it a name by using the `=` character. Try running the program below and you'll see what I mean:
```
x = 1
y = 2
z = x + y
z * 3
```
In good mathematical tradition, I've named various things `x`, `y` and `z` and re-used them in various lines of the program. You are free to choose whatever names you like, but there are some [rules](https://docs.python.org/3/reference/lexical_analysis.html#identifiers). For example, you cannot name a result `+`, since that would be very confusing. Another important restriction is that names can not have spaces. For example, `the answer` is not a valid name, but `the_answer` is (notice I used an underscore "_" character instead of a space, which is a common thing that programmers do).
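To make these rules concrete, here are a few valid names in action (the names themselves are just examples I made up):

```
# Valid names use letters, digits (not as the first character) and underscores
the_answer = 40 + 2
lemonade_bottles2 = 7

# Names are case sensitive: `x` and `X` are two different variables
x = 1
X = 10
print(the_answer, lemonade_bottles2, x, X)
```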
Naming things is terribly important in programming, for it bestows meaning. When you get down to it, a computer is just [a pile of carefully arranged sand](https://xkcd.com/1349) with [a rectangle of little lights](https://www.xkcd.com/722). It takes a human to bestow meaning to the patterns of light and currents of electricity flowing through the machine. Do the numbers mean sheep to barter? Nuclear missiles to fire? Thoughts in a person's brain?
Your turn, mighty human. Write an elegant program to solve the following math problem:
> Jan has 5 coins and Mary has 3 coins.
> How many coins do Jan and Mary have together?
## Functions and modules
For our next program, let's do something more difficult and compute the sine of 2 (it should be around 0.9). We can't easily compute that with just `+`, `-`, `*`, and `/`.
Programming languages are inspired a lot by the language of mathematics. In math, we would write the sine of 2 as:
$$\sin 2$$
where "$\sin$" is a placeholder for "use whatever method you want to compute the sine". More formally, $\sin$ is called a function and 2 is an argument to this function.
Functions are central to programming. They are the verbs of the programming language. Once you've learned a few verbs you can say lots of things!
If numbers are your raw material, functions are the tools you use to manipulate them.
There are **tons** of functions available to you.
So many, that to keep track of them all, they are organized in different "modules".
You can think of a module as a toolbox that has a nice label on it so we know what is inside.
The `sin` function we want to use is kept inside the `math` module.
<img src="images/toolbox.png" alt="Functions are like tools. Modules are like toolboxes">
In order for us to use the `sin` function, we must first "import" it from the `math` module.
In our metaphor, importing a function is like taking a tool from a toolbox and placing it on our workbench.
In the grammar of our programming language, you say:
```python
from math import sin
```
From that point on, the `sin` function is available for you to use. Computing the sine of two is then `sin(2)`.
<img src="images/function.png" alt="sin(2)" width="400">
I've filled in the program for you below. Try running it.
```
from math import sin
sin(2)
```
Your turn. Write a program that computes the cosine of 5. The function to compute the cosine is called `cos` and can also be found inside the `math` module (remember to `import` it first!). You can use the cell below to write your program in:
We can combine functions with variables. Unfortunately, the answer to the cosine of 5 you have so cleverly computed above is already gone with the wind. You can "assign" the result of a function to a variable, just as we did above with numbers.
For example: here is how to assign the sine of 2 to a variable called `my_result`:
```
my_result = sin(2)
```
Try running the above code cell. It doesn't seem to work??!! Nothing happened!
Actually, the program did work, but the computer is keeping silent. As we have seen, the computer will tell you the result that was produced by the last line in your program, but only if you did **not** assign that result to a variable. It was a choice made by programmers who felt their computers were being too chatty.
<img src="images/printer.jpeg" width="300" style="float: right; margin-top: 30px">
## Explicitly telling the computer to display something
There is a function that is not part of any module and is always available to you. This function will make the computer display things. It is called `print` and it writes text to the screen. Its name is a leftover from the times when computer monitors did not exist yet and the computer could only print things on paper. Funny how etymology applies to programming languages too.
Anyway, here is an example of the `print` function in action:
```
my_result = sin(2)
print(my_result)
```
Ok, your turn. Write a program that assigns the cosine of 2 to the variable `my_result`. Use the `print` function to display the variable to check that it has changed.
## Working with text
Here is a little program that makes the computer say hello:
```
print('hello')
```
In the Python programming language, literal text needs to always be surrounded by `'` quotation marks. Without quotation marks, the program above would try to display the contents of a variable named `hello`. Try running this example and you'll see what I mean:
```
hello = 'Hello, world!'
print(hello)
```
The `print` function can write multiple things at the same time by giving it more than one argument. To give multiple arguments, put a comma between them, like this:
```
print('The man said:', hello, 'How are you?')
```
As a child, I would write little programs like this:
```
name = input('What is your name?')
age = input('What is your age?')
color = input('What is your favorite color?')
pet = input('What kind of pet would you like?')
job = input('What do you want to be when you grow up?')
print('')
print('There once was a', pet, 'named', name)
print('Every day,', name, 'bought', age, 'bottles of lemonade.')
print('Until one day it turned', color, 'from drinking too much.')
print('It was rushed to the hospital by a', job)
```
To familiarize yourself with manipulating text in a programming language, try modifying the program above to tell a story of your own.
## Loading some EEG data
Finally! We got all the ingredients we need to use [MNE-Python](https://martinos.org/mne), which is a Python module full of functions for analyzing EEG and MEG data.
You now know how to import a module, use its functions, and save the results to variables so you can re-use them.
These concepts allow us to do pretty powerful things.
We're going to instruct the computer to "load" some EEG data.
The EEG data is currently stored as a file on the hard drive, called:
`data/magic-trick-raw.bdf`
The above "file path" may look weird to you if you are used to Windows.
The computer you are talking to is a linux machine.
Paths look a little different.
To "load" a file means to make it available as a variable so we can use it in our programming code.
MNE-Python has a function that we can use to do this for us.
It's called `read_raw_bdf` and it lives in the `mne.io` module.
Executing the cell below will `import` the function for you, so you can use it.
Importing a function will make it available "from now on", also in future cells.
```
from mne.io import read_raw_bdf
```
Now, everything you've learned so far needs to come together. I'm going to leave it up to you to use the function correctly.
Make the computer load the EEG data and put it inside a variable called `raw`, by writing **a single line** of code.
Here are some pointers to help you along:
1. The function you want to call is named: `read_raw_bdf`. It has already been imported.
2. Call it with a single argument: the name of the file to load. Remember the example `sin(2)`
3. The name of the file to load is `data/magic-trick-raw.bdf`
4. The name of the file is text. Remember what you know about text and quotation `'` marks!
5. Assign the result to a variable with the name `raw` using the `=` symbol.
Write your single line of code in the cell below:
If your code above was correct, executing the cell below will display some information about the data you've just loaded:
```
print(raw)
```
## Visualizing the EEG data
Now that the EEG data has been loaded into memory, we can look at it. The data is in its "raw" form, meaning it has come straight out of the EEG system and nothing has been done to it yet. MNE-Python has a visualization tool for data in its raw form, called `plot_raw`. In programmer lingo, visualizing data points is referred to as "[plotting](https://en.wikipedia.org/wiki/Plot_(graphics))".
Again, I'll write the code to `import` the `plot_raw` function for you. This time, I'm also going to add some housekeeping code that will instruct the computer to send the graphics to your browser.
```
%matplotlib notebook
print('From now on, all graphics will be sent to your browser.')
from mne.viz import plot_raw
```
It's up to you to write the code to call the function and give the `raw` variable as argument to this function. Ready? go!
If you wrote the code correctly, you should be looking at a little interface that shows the data collected on all the EEG sensors. Try using the arrow keys and clicking in the figure to explore the data. You can press the `-` key to zoom out, which I recommend doing to get a better look.
## Continue onward to the next level!
Great job making it this far. You've taken your first steps along the path towards becoming a data analyst. If you are keen to learn more, I'd like to invite you to move on to the next level: frequency filtering and cutting epochs.
<center>
<a href="adept.ipynb" alt="To adept level"><img src="images/charmeleon.png" width="200"></a>
<a href="adept.ipynb">Move on to the next level</a>
</center>
```
import numpy as np
import pprint
import sys
if "../" not in sys.path:
sys.path.append("../")
from lib.envs.gridworld import GridworldEnv
pp = pprint.PrettyPrinter(indent=2)
env = GridworldEnv()
```
### Helper function used from Policy Eval/Iter
```
def one_step_lookahead(state, V, discount_factor=1.0):
"""
    Helper function to calculate the value for all actions in a given state.
Args:
state: The state to consider (int)
        V: The value to use as an estimator, Vector of length env.nS
        discount_factor: Gamma discount factor.
Returns:
A vector of length env.nA containing the expected value of each action.
"""
A = np.zeros(env.nA)
for a in range(env.nA):
for prob, next_state, reward, done in env.P[state][a]:
A[a] += prob * (reward + discount_factor * V[next_state])
return A
```
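As a quick sanity check, here is the same lookahead computed by hand on a hypothetical two-state, two-action toy environment (a stand-in for `GridworldEnv` — `env.P[s][a]` is just a list of `(prob, next_state, reward, done)` tuples):

```
import numpy as np

class ToyEnv:
    """Tiny stand-in for an OpenAI-style env exposing P, nS and nA."""
    nS, nA = 2, 2
    # P[s][a] = list of (prob, next_state, reward, done) transition tuples
    P = {
        0: {0: [(1.0, 0, -1.0, False)], 1: [(1.0, 1, 0.0, True)]},
        1: {0: [(1.0, 1, 0.0, True)], 1: [(1.0, 1, 0.0, True)]},
    }

env = ToyEnv()
V = np.zeros(env.nS)

# Same computation as one_step_lookahead, inlined for state 0
A = np.zeros(env.nA)
for a in range(env.nA):
    for prob, next_state, reward, done in env.P[0][a]:
        A[a] += prob * (reward + 1.0 * V[next_state])

print(A)  # action 1 (terminating with reward 0) beats action 0 (reward -1)
```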
### My take
```
def value_iteration(env, theta=0.0001, discount_factor=1.0):
"""
Value Iteration Algorithm.
Args:
env: OpenAI env. env.P represents the transition probabilities of the environment.
env.P[s][a] is a list of transition tuples (prob, next_state, reward, done).
env.nS is a number of states in the environment.
env.nA is a number of actions in the environment.
theta: We stop evaluation once our value function change is less than theta for all states.
discount_factor: Gamma discount factor.
Returns:
A tuple (policy, V) of the optimal policy and the optimal value function.
"""
V = np.zeros(env.nS)
policy = np.zeros([env.nS, env.nA])
while True:
delta = 0
# Update each state...
for s in range(env.nS):
v = 0
# Perform a single lookahead to find best action
action_values = one_step_lookahead(s, V)
best_a = np.argmax(action_values)
policy[s][best_a] = 1
# Policy eval but only with best_a
for prob, next_state, reward, done in env.P[s][best_a]:
v += prob * (reward + discount_factor * V[next_state])
# How much our value function changed (across any states)
delta = max(delta, np.abs(v - V[s]))
V[s] = v
print(np.max(action_values), v)
# Stop evaluating once our value function change is below a threshold
if delta < theta:
break
return policy, V
```
### Solution
```
def value_iteration_sol(env, theta=0.0001, discount_factor=1.0):
"""
Value Iteration Algorithm.
Args:
env: OpenAI env. env.P represents the transition probabilities of the environment.
env.P[s][a] is a list of transition tuples (prob, next_state, reward, done).
env.nS is a number of states in the environment.
env.nA is a number of actions in the environment.
theta: We stop evaluation once our value function change is less than theta for all states.
discount_factor: Gamma discount factor.
Returns:
A tuple (policy, V) of the optimal policy and the optimal value function.
"""
V = np.zeros(env.nS)
while True:
# Stopping condition
delta = 0
# Update each state...
for s in range(env.nS):
# Do a one-step lookahead to find the best action
A = one_step_lookahead(s, V)
best_action_value = np.max(A)
# Calculate delta across all states seen so far
delta = max(delta, np.abs(best_action_value - V[s]))
# Update the value function. Ref: Sutton book eq. 4.10.
V[s] = best_action_value
# Check if we can stop
if delta < theta:
break
# Create a deterministic policy using the optimal value function
policy = np.zeros([env.nS, env.nA])
for s in range(env.nS):
# One step lookahead to find the best action for this state
A = one_step_lookahead(s, V)
best_action = np.argmax(A)
# Always take the best action
policy[s, best_action] = 1.0
return policy, V
```
#### Key Takeaways:
- Main difference between the two approaches:
    - Mine: computed `best_a = np.argmax(action_values)` (the index of the max value)
    - Solution: computed `best_action_value = np.max(A)` (the max value itself)
- It turns out that doing a manual policy evaluation with the best action in my approach yields the same result as simply assigning `best_action_value = np.max(A)` in the solution.
- I also forgot to update the policy in my approach, which I added on line 28
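The `np.argmax` / `np.max` distinction is easy to see on a small array of action values:

```
import numpy as np

# Action values from a one-step lookahead for a single state
A = np.array([-2.0, -1.0, -3.0, -1.5])

best_action = np.argmax(A)       # index of the best action
best_action_value = np.max(A)    # value of the best action

print(best_action, best_action_value)  # 1 -1.0

# Backing up V[s] through the greedy action's expected return is the same
# as assigning max_a Q(s, a) directly, which is why both versions agree.
assert A[best_action] == best_action_value
```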
### Testing
```
policy, v = value_iteration(env)
print("Policy Probability Distribution:")
print(policy)
print("")
print("Reshaped Grid Policy (0=up, 1=right, 2=down, 3=left):")
print(np.reshape(np.argmax(policy, axis=1), env.shape))
print("")
print("Value Function:")
print(v)
print("")
print("Reshaped Grid Value Function:")
print(v.reshape(env.shape))
print("")
# Test the value function
expected_v = np.array([ 0, -1, -2, -3, -1, -2, -3, -2, -2, -3, -2, -1, -3, -2, -1, 0])
np.testing.assert_array_almost_equal(v, expected_v, decimal=2)
```
```
# Copyright 2021 NVIDIA Corporation. All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
# =======
```
# ETL with NVTabular
In this notebook we are going to generate synthetic data and then create sequential features with [NVTabular](https://github.com/NVIDIA-Merlin/NVTabular). Such data will be used in the next notebook to train a session-based recommendation model.
NVTabular is a feature engineering and preprocessing library for tabular data designed to quickly and easily manipulate terabyte scale datasets used to train deep learning based recommender systems. It provides a high level abstraction to simplify code and accelerates computation on the GPU using the RAPIDS cuDF library.
### Import required libraries
```
import os
import glob
import numpy as np
import pandas as pd
import cudf
import cupy as cp
import nvtabular as nvt
```
### Define Input/Output Path
```
INPUT_DATA_DIR = os.environ.get("INPUT_DATA_DIR", "/workspace/data/")
```
## Create a Synthetic Input Data
```
NUM_ROWS = 100000
long_tailed_item_distribution = np.clip(np.random.lognormal(3., 1., NUM_ROWS).astype(np.int32), 1, 50000)
# generate random item interaction features
df = pd.DataFrame(np.random.randint(70000, 80000, NUM_ROWS), columns=['session_id'])
df['item_id'] = long_tailed_item_distribution
# generate category mapping for each item-id
df['category'] = pd.cut(df['item_id'], bins=334, labels=np.arange(1, 335)).astype(np.int32)
df['timestamp/age_days'] = np.random.uniform(0, 1, NUM_ROWS)
df['timestamp/weekday/sin']= np.random.uniform(0, 1, NUM_ROWS)
# generate day mapping for each session
map_day = dict(zip(df.session_id.unique(), np.random.randint(1, 10, size=(df.session_id.nunique()))))
df['day'] = df.session_id.map(map_day)
```
- Visualize a couple of rows of the synthetic dataset
```
df.head()
```
## Feature Engineering with NVTabular
Deep Learning models require dense input features. Categorical features are sparse, and need to be represented by dense embeddings in the model. To allow for that, categorical features first need to be encoded as contiguous integers `(0, ..., |C|-1)`, where `|C|` is the feature cardinality (number of unique values), so that their embeddings can be efficiently stored in embedding layers. We will use NVTabular to preprocess the categorical features, so that all categorical columns are encoded as contiguous integers. Note that in the `Categorify` op we set `start_index=1` because we want the encoded null values to start from `1` instead of `0`: we reserve `0` for padding the sequence features.
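As a rough illustration of that encoding scheme in plain pandas (a sketch of the idea, not NVTabular's actual implementation):

```
import numpy as np
import pandas as pd

# A toy categorical column with one missing value
items = pd.Series(["shirt", "shoe", "shirt", None, "hat"])

# factorize() assigns contiguous integer codes starting at 0,
# with -1 for missing values
codes, uniques = pd.factorize(items)

# Reserve 0 for padding and map nulls to 1, so real categories start at 2
encoded = np.where(codes == -1, 1, codes + 2)
print(encoded)  # [2 3 2 1 4]
```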
Here our goal is to create sequential features. In this cell, we are creating temporal features and grouping them together at the session level, sorting the interactions by time. Note that we also trim each feature sequence in a session to a certain length. Here, we use the NVTabular library so that we can easily preprocess and create features on GPU with a few lines.
```
# Categorify categorical features
categ_feats = ['session_id', 'item_id', 'category'] >> nvt.ops.Categorify(start_index=1)
# Define Groupby Workflow
groupby_feats = categ_feats + ['day', 'timestamp/age_days', 'timestamp/weekday/sin']
# Groups interaction features by session and sorted by timestamp
groupby_features = groupby_feats >> nvt.ops.Groupby(
groupby_cols=["session_id"],
aggs={
"item_id": ["list", "count"],
"category": ["list"],
"day": ["first"],
"timestamp/age_days": ["list"],
'timestamp/weekday/sin': ["list"],
},
name_sep="-")
# Select and truncate the sequential features
sequence_features_truncated = (groupby_features['category-list', 'item_id-list', 'timestamp/age_days-list', 'timestamp/weekday/sin-list']) >>nvt.ops.ListSlice(0,20) >> nvt.ops.Rename(postfix = '_trim')
# Filter out sessions with length 1 (not valid for next-item prediction training and evaluation)
MINIMUM_SESSION_LENGTH = 2
selected_features = groupby_features['item_id-count', 'day-first', 'session_id'] + sequence_features_truncated
filtered_sessions = selected_features >> nvt.ops.Filter(f=lambda df: df["item_id-count"] >= MINIMUM_SESSION_LENGTH)
workflow = nvt.Workflow(filtered_sessions)
dataset = nvt.Dataset(df, cpu=False)
# Generating statistics for the features
workflow.fit(dataset)
# Applying the preprocessing and returning an NVTabular dataset
sessions_ds = workflow.transform(dataset)
# Converting the NVTabular dataset to a Dask cuDF dataframe (`to_ddf()`) and then to cuDF dataframe (`.compute()`)
sessions_gdf = sessions_ds.to_ddf().compute()
sessions_gdf.head(3)
```
It is possible to save the preprocessing workflow. That is useful to apply the same preprocessing to other data (with the same schema) and also to deploy the session-based recommendation pipeline to Triton Inference Server.
```
workflow.save('workflow_etl')
```
## Export pre-processed data by day
In this example we are going to split the preprocessed parquet files by days, to allow for temporal training and evaluation. There will be a folder for each day and three parquet files within each day folder: train.parquet, validation.parquet and test.parquet
```
OUTPUT_FOLDER = os.environ.get("OUTPUT_FOLDER",os.path.join(INPUT_DATA_DIR, "sessions_by_day"))
!mkdir -p $OUTPUT_FOLDER
from transformers4rec.data.preprocessing import save_time_based_splits
save_time_based_splits(data=nvt.Dataset(sessions_gdf),
output_dir= OUTPUT_FOLDER,
partition_col='day-first',
timestamp_col='session_id',
)
```
## Checking the preprocessed outputs
```
TRAIN_PATHS = sorted(glob.glob(os.path.join(OUTPUT_FOLDER, "1", "train.parquet")))
gdf = cudf.read_parquet(TRAIN_PATHS[0])
gdf.head()
```
You have just created session-level features to train a session-based recommendation model using NVTabular. Now you can move on to the next notebook, `02-session-based-XLNet-with-PyT.ipynb`, to train a session-based recommendation model using [XLNet](https://arxiv.org/abs/1906.08237), one of the state-of-the-art NLP models.
<a href="https://colab.research.google.com/github/chavgova/My-AI/blob/master/emotion_recognition_08.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a>
IMPORT
```
# This is a copy of another project; I'll make changes to see how I can make it better
import librosa
import librosa.display
import numpy as np
import matplotlib.pyplot as plt
import tensorflow as tf
from matplotlib.pyplot import specgram
from matplotlib.axis import Axis
import keras
from keras.preprocessing import sequence
from keras.models import Sequential
from keras.layers import Dense, Embedding
from keras.layers import LSTM
from keras.preprocessing.text import Tokenizer
from keras.preprocessing.sequence import pad_sequences
from keras.utils import to_categorical
from keras.layers import Input, Flatten, Dropout, Activation
from keras.layers import Conv1D, MaxPooling1D, AveragePooling1D
from keras.models import Model
from keras.callbacks import ModelCheckpoint
from sklearn.metrics import confusion_matrix
from keras import regularizers
import os
import pandas as pd
from google.colab import drive
import os
path = '/content/drive/My Drive/My_AI/RawData'
mylist = []
mylist = os.listdir(path)
#print(mylist)
print(len(mylist))
```
LABELS
```
import re
feeling_list=[]
dataset = ''
for item in mylist:
file_label = item[6:-16]
try:
file_label = int(file_label)
dataset = 'RAVDESS'
except:
if (item[:1] == 'Y') or (item[:1] == 'O'):
file_label = re.split('_|\.', item)[2]
dataset = 'TESS'
else: dataset = 'SAVEE'
if dataset == 'RAVDESS':
if int(item[18:-4])%2==0: #female
if file_label == 1:
feeling_list.append('female_neutral')
elif file_label == 2:
feeling_list.append('female_calm')
elif file_label == 3:
feeling_list.append('female_happy')
elif file_label == 4:
feeling_list.append('female_sad')
elif file_label == 5:
feeling_list.append('female_angry')
elif file_label == 6:
feeling_list.append('female_fearful')
elif file_label == 7:
feeling_list.append('female_disgust')
elif file_label == 8:
feeling_list.append('female_surprised')
else:
if file_label== 1:
feeling_list.append('male_neutral')
elif file_label == 2:
feeling_list.append('male_calm')
elif file_label == 3:
feeling_list.append('male_happy')
elif file_label == 4:
feeling_list.append('male_sad')
elif file_label == 5:
feeling_list.append('male_angry')
elif file_label == 6:
feeling_list.append('male_fearful')
elif file_label == 7:
feeling_list.append('male_disgust')
elif file_label == 8:
feeling_list.append('male_surprised')
elif dataset == 'TESS':
if file_label == 'neutral': feeling_list.append('female_neutral')
elif file_label == 'angry': feeling_list.append('female_angry')
elif file_label == 'disgust': feeling_list.append('female_disgust')
elif file_label == 'ps': feeling_list.append('female_surprised')
elif file_label == 'happy': feeling_list.append('female_happy')
elif file_label == 'sad': feeling_list.append('female_sad')
elif file_label == 'fear': feeling_list.append('female_fearful')
elif dataset == 'SAVEE':
if item[:1]=='a':
feeling_list.append('male_angry')
elif item[:1]=='f':
feeling_list.append('male_fearful')
elif item[:1]=='h':
feeling_list.append('male_happy')
elif item[:1]=='n':
feeling_list.append('male_neutral')
elif item[:2]=='sa':
feeling_list.append('male_sad')
elif item[:2]=='su':
feeling_list.append('male_surprised')
elif item[:1]=='d':
feeling_list.append('male_disgust')
import pandas as pd
labels = pd.DataFrame(feeling_list)
labels #[1600:1660] #print
```
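For reference, the RAVDESS slices above pull fixed positions out of filenames such as `03-01-08-01-01-02-01.wav` (the one used in the commented-out test below): `item[6:-16]` is the third dash-separated field (the emotion code) and `item[18:-4]` is the last field (the actor number, odd for male and even for female speakers). A quick check:

```
item = '03-01-08-01-01-02-01.wav'  # example RAVDESS filename

emotion_code = item[6:-16]   # third dash-separated field
actor_number = item[18:-4]   # last dash-separated field

print(emotion_code, actor_number)  # 08 01

assert emotion_code == '08'
assert int(actor_number) % 2 == 1  # odd actor number -> male speaker
```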
Getting the features of audio files using librosa
```
import librosa
import numpy as np
def extract_feature(my_file, **kwargs):
mfcc = kwargs.get("mfcc")
chroma = kwargs.get("chroma")
mel = kwargs.get("mel")
contrast = kwargs.get("contrast")
tonnetz = kwargs.get("tonnetz")
X, sample_rate = librosa.core.load(my_file)
if chroma or contrast:
stft = np.abs(librosa.stft(X))
result = np.array([])
if mfcc:
mfccs = np.mean(librosa.feature.mfcc(y=X, sr=sample_rate, n_mfcc=40).T, axis=0)
result = np.hstack((result, mfccs)) # 40 values
if chroma:
chroma = np.mean(librosa.feature.chroma_stft(S=stft, sr=sample_rate).T,axis=0)
result = np.hstack((result, chroma)) # 12 values
if mel:
        mel = np.mean(librosa.feature.melspectrogram(y=X, sr=sample_rate).T,axis=0)
result = np.hstack((result, mel)) # 128 values
if contrast:
contrast = np.mean(librosa.feature.spectral_contrast(S=stft, sr=sample_rate).T,axis=0)
result = np.hstack((result, contrast)) # 7 values
if tonnetz:
tonnetz = np.mean(librosa.feature.tonnetz(y=librosa.effects.harmonic(X), sr=sample_rate).T,axis=0)
result = np.hstack((result, tonnetz)) # 6 values
return result
#f = os.fspath('/content/drive/My Drive/My_AI/RawData/03-01-08-01-01-02-01.wav')
#a = extract_feature(f, mel=True, mfcc=True, contrast=True, chroma=True, tonnetz=True)
#print(a, a.shape)
data_frame = pd.DataFrame(columns=['all_features'])
bookmark=0
#mylist = mylist[:100]
for index,y in enumerate(mylist):
all_features_ndarray = extract_feature('/content/drive/My Drive/My_AI/RawData/'+ y, mel=True, mfcc=True, contrast=True, chroma=True, tonnetz=True)
data_frame.loc[bookmark] = [all_features_ndarray]
bookmark=bookmark+1
#df[:5] #print
data_frame
data_frame = pd.DataFrame(data_frame['all_features'].values.tolist())
data_frame[:10]
data_frame_labels = pd.concat([data_frame,labels], axis=1)
data_frame_labels = data_frame_labels.rename(index=str, columns={"0": "label"})
data_frame_labels #print
from sklearn.utils import shuffle
data_frame_labels = shuffle(data_frame_labels)
data_frame_labels
#print
```
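As a sanity check on the vector `extract_feature` returns with every flag enabled, the per-feature lengths noted in the comments should sum to 193 (a standalone sketch, no librosa call needed):

```python
# Per-feature vector lengths produced by extract_feature when all flags are on
feature_dims = {"mfcc": 40, "chroma": 12, "mel": 128, "contrast": 7, "tonnetz": 6}
total = sum(feature_dims.values())
print(total)  # this must match the Conv1D input_shape=(193, 1) used later
```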
SAVE DATASET FEATURES AND LABELS
```
import pickle
with open('/content/drive/My Drive/My_AI/datasets_RAVDESS-TESS-SAVEE_features&labels.pkl', 'wb') as f:
pickle.dump(data_frame_labels, f)
```
LOAD DATASET FEATURES AND LABELS
```
import pickle
with open('/content/drive/My Drive/My_AI/datasets_RAVDESS-TESS-SAVEE_features&labels.pkl', 'rb') as f:
data_frame_labels = pickle.load(f)
```
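The save/load pair above is a plain pickle round-trip; a minimal in-memory sketch of the same pattern (using `io.BytesIO` in place of the Drive path):

```python
import io
import pickle

obj = {"features": [0.1, 0.2, 0.3], "label": "female_happy"}  # stand-in for the DataFrame
buf = io.BytesIO()
pickle.dump(obj, buf)          # what the 'wb' cell does against the Drive file
buf.seek(0)
restored = pickle.load(buf)    # what the 'rb' cell does
print(restored == obj)
```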
Dividing the data into test and train
```
# Name the 193 feature columns 0..192 and the final column 'labels'
data_frame_labels.columns = list(range(193)) + ['labels']
print(data_frame_labels)
# Keep only the female samples by dropping every male_* label
male_labels = ['male_neutral', 'male_calm', 'male_fearful', 'male_surprised',
               'male_happy', 'male_sad', 'male_angry', 'male_disgust']
data_frame_labels = data_frame_labels[~data_frame_labels['labels'].isin(male_labels)]
print(data_frame_labels)
data_frame_labels_set = np.random.rand(len(data_frame_labels)) < 0.8
train = data_frame_labels[data_frame_labels_set]
test = data_frame_labels[~data_frame_labels_set]
test[0:20]
tess_count = 0
savee_count = 0
ravdess_count = 0
for i in range(len(test)):
    index = int(test.iloc[i].name)  # original row index; useful for per-dataset accuracy breakdowns
    if index < 1400: tess_count += 1
    elif index < 1550: savee_count += 1
    elif index < 2950: tess_count += 1
    elif index < 3331: savee_count += 1
    elif index < 4643: ravdess_count += 1
    else: print('index out of expected range:', index)
print("tess")
print(tess_count)
print('ravdess')
print(ravdess_count)
print('savee')
print(savee_count)
trainfeatures = train.iloc[:, :-1]
trainlabel = train.iloc[:, -1:]
testfeatures = test.iloc[:, :-1]
testlabel = test.iloc[:, -1:]
testlabel
from tensorflow.keras.utils import to_categorical  # np_utils was removed from standalone keras
from sklearn.preprocessing import LabelEncoder
X_train = np.array(trainfeatures)
y_train = np.array(trainlabel)
X_test = np.array(testfeatures)
y_test = np.array(testlabel)
lb = LabelEncoder()
y_train = to_categorical(lb.fit_transform(y_train))
y_test = to_categorical(lb.transform(y_test))  # transform only: reuse the encoding fitted on the training labels
y_test
X_test
```
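The `np.random.rand(len(...)) < 0.8` mask above gives an approximate 80/20 split and changes on every run; a seeded sketch of the same idea (stand-in data, hypothetical sizes) shows how to make it reproducible:

```python
import numpy as np

rng = np.random.default_rng(42)        # seeded, so the split is the same on every run
n = 100
mask = rng.random(n) < 0.8             # same idea as the rand() < 0.8 mask above
X = np.zeros((n, 193))                 # stand-in for the 193 extracted features
train, test = X[mask], X[~mask]
print(train.shape[0] + test.shape[0])  # every row lands in exactly one split
```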
Changing dimension for CNN model
```
x_traincnn =np.expand_dims(X_train, axis=2)
x_testcnn= np.expand_dims(X_test, axis=2)
print(x_testcnn)
import tensorflow as tf
import matplotlib.pyplot as plt
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Conv1D, Activation, Dropout, Flatten, Dense, MaxPooling1D
model = Sequential()
model.add(Conv1D(256, 5,padding='same', input_shape=(193,1)))
model.add(Activation('relu'))
model.add(Conv1D(128, 5,padding='same'))
model.add(Activation('relu'))
model.add(Conv1D(128, 5,padding='same',))
model.add(Activation('relu'))
#model.add(MaxPooling1D(pool_size=(4)))
model.add(Dropout(0.1))
model.add(Conv1D(128, 5,padding='same',))
model.add(Activation('relu'))
model.add(Conv1D(128, 5,padding='same',))
model.add(Activation('relu'))
model.add(Dropout(0.2))
model.add(Conv1D(128, 5,padding='same',))
model.add(Activation('relu'))
model.add(Flatten())
model.add(Dense(8))
model.add(Activation('softmax'))
opt = tf.keras.optimizers.Adam(learning_rate=0.0001) ###
model.summary()
model.compile(loss= 'categorical_crossentropy', optimizer = opt, metrics=['accuracy'])
```
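Because every Conv1D layer uses `padding='same'` and the pooling layer is commented out, the temporal length stays 193 throughout the stack; a quick sketch of the shape bookkeeping that feeds `Flatten()` and `Dense(8)`:

```python
length, channels = 193, 1              # model input_shape=(193, 1)
for filters in (256, 128, 128, 128, 128, 128):
    channels = filters                 # 'same' padding keeps length at 193
flattened = length * channels          # size of the Flatten() output
print(flattened)                       # number of units feeding Dense(8)
```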
Training the model (the long per-epoch training log was removed to keep the notebook readable)
```
cnnhistory = model.fit(x_traincnn, y_train, batch_size = 32, epochs = 50, validation_data = (x_testcnn, y_test))
plt.figure()
plt.plot(cnnhistory.history['loss'])
plt.plot(cnnhistory.history['val_loss'])
plt.title('model loss')
plt.ylabel('loss')
plt.xlabel('epoch')
plt.grid(True)
plt.legend(['loss', 'val loss'], loc='upper left')
plt.show()
plt.figure(figsize=(10,5))
plt.plot(cnnhistory.history['loss'], 'm', linewidth=3)
plt.plot(cnnhistory.history['val_loss'], 'y', linewidth=3)
plt.legend(['Loss', 'Validation Loss'], fontsize=13)
plt.xlabel('epochs')
plt.ylabel('loss', fontsize=12)
plt.grid(True)
plt.show()
plt.figure(figsize=(10,6), frameon=True)
plt.plot(cnnhistory.history['accuracy'], 'g', linewidth=3)
plt.plot(cnnhistory.history['val_accuracy'], 'r', linewidth=3)
plt.title('Model Accuracy')
plt.xlabel('Epochs')
plt.ylabel('Accuracy', fontsize=12)
plt.legend(['Accuracy', 'Validation Accuracy'], loc = 'upper left', fontsize=13)
plt.grid(True)
plt.show()
tf.keras.utils.plot_model(
model,
to_file="img_model.png",
show_shapes=False,
show_layer_names=True,
rankdir="TB",
expand_nested=False,
dpi=96,
)
dot_img_file = '/content/drive/My Drive/My_AI/img_model_08_FEMALE.png'
tf.keras.utils.plot_model(model, to_file = dot_img_file, show_shapes=True)
```
SAVING THE MODEL
```
model_name = 'Emotion_Voice_Detection_CNN_model_08_FEMALE.h5'
path = '/content/drive/My Drive/My_AI/MY MODELS/'
model_path = os.path.join(path, model_name)
model.save(model_path)
print('Saved trained model at %s ' % model_path)
import json
model_json = model.to_json()
with open("/content/drive/My Drive/My_AI/MY MODELS/model_08_FEMALE.json", "w") as json_file:
json_file.write(model_json)
```
LOADING THE MODEL
```
# loading json and creating model
from keras.models import model_from_json
json_file = open('/content/drive/My Drive/My_AI/MY MODELS/model_08_FEMALE.json', 'r')
loaded_model_json = json_file.read()
json_file.close()
loaded_model = model_from_json(loaded_model_json)
# load weights into new model
loaded_model.load_weights("/content/drive/My Drive/My_AI/MY MODELS/Emotion_Voice_Detection_CNN_model_08_FEMALE.h5")
print("Loaded model from disk")
# evaluate loaded model on test data
opt = tf.keras.optimizers.Adam(learning_rate=0.0001) ###
loaded_model.compile(loss='categorical_crossentropy', optimizer=opt, metrics=['accuracy'])
score = loaded_model.evaluate(x_testcnn, y_test, verbose=0)
print("%s: %.2f%%" % (loaded_model.metrics_names[1], score[1]*100))
```
Predicting emotions on the test data
```
import pandas as pd
preds = loaded_model.predict(x_testcnn, batch_size=32, verbose=1)
preds1=preds.argmax(axis=1)
abc = preds1.astype(int).flatten()
predictions = (lb.inverse_transform((abc)))
preddf = pd.DataFrame({'predictedvalues': predictions})
actual=y_test.argmax(axis=1)
abc123 = actual.astype(int).flatten()
actualvalues = (lb.inverse_transform((abc123)))
actualdf = pd.DataFrame({'actualvalues': actualvalues})
finaldf = actualdf.join(preddf)
finaldf[10:70]
finaldf.groupby('actualvalues').count()
finaldf.groupby('predictedvalues').count()
finaldf.to_csv('Predictions_08_FEMALE.csv', index=False)
from sklearn.metrics import confusion_matrix, accuracy_score, classification_report
classes = finaldf.actualvalues.unique()
classes.sort()
print(classification_report(finaldf.actualvalues, finaldf.predictedvalues, target_names=classes))
import seaborn as sns
def print_confusion_matrix(confusion_matrix, class_names, figsize = (10,7), fontsize=14):
df_cm = pd.DataFrame(
confusion_matrix, index=class_names, columns=class_names,
)
fig = plt.figure(figsize=figsize)
try:
heatmap = sns.heatmap(df_cm, annot=True, fmt="d")
except ValueError:
raise ValueError("Confusion matrix values must be integers.")
heatmap.yaxis.set_ticklabels(heatmap.yaxis.get_ticklabels(), rotation=0, ha='right', fontsize=fontsize)
heatmap.xaxis.set_ticklabels(heatmap.xaxis.get_ticklabels(), rotation=45, ha='right', fontsize=fontsize)
plt.ylabel('True label')
plt.xlabel('Predicted label')
# Gender recode function: map any female_* emotion label to 'female'
def gender(row):
    female_labels = {'female_disgust', 'female_fearful', 'female_happy', 'female_sad',
                     'female_surprised', 'female_neutral', 'female_angry', 'female_calm'}
    if row in female_labels:
        return 'female'
finaldf = pd.read_csv("Predictions_08_FEMALE.csv")
classes = finaldf.actualvalues.unique()
classes.sort()
# Confusion matrix
c = confusion_matrix(finaldf.actualvalues, finaldf.predictedvalues)
#print(accuracy_score(finaldf.actualvalues, finaldf.predictedvalues))
print_confusion_matrix(c, class_names = classes)
import os
import glob
import librosa
import librosa.display
import pandas as pd
%matplotlib inline
data, sampling_rate = librosa.load('/content/drive/My Drive/My_AI/Real Voice samples/GC-angry-H (1).wav')
plt.figure(figsize=(15, 5))
librosa.display.waveshow(data, sr=sampling_rate)  # waveplot was replaced by waveshow in librosa >= 0.10
X, sample_rate = librosa.load('/content/drive/My Drive/My_AI/Real Voice samples/GC-angry_.wav', res_type='kaiser_fast',duration=2.5,sr=22050*2,offset=0.5)
sample_rate = np.array(sample_rate)
demo_file = os.fspath('/content/drive/My Drive/My_AI/Real Voice samples/GC-angry_.wav')
features_live = extract_feature(demo_file, mel=True, mfcc=True, contrast=True, chroma=True, tonnetz=True)
features_live = pd.DataFrame(data = features_live)
features_live = features_live.stack().to_frame().T
features_live_2d = np.expand_dims(features_live, axis=2)
live_preds = loaded_model.predict(features_live_2d, batch_size=32, verbose = 1)
print(live_preds)
top_all = np.argsort(-live_preds, axis=1)[:, :8]  # all 8 classes, most to least likely (avoid shadowing builtin all)
for i in top_all:
    print(lb.inverse_transform(i))
print()
best_n = np.argsort(-live_preds)[:, :3]
print(best_n)
print()
print()
for n in best_n:
print(live_preds[0][n])
for i in best_n:
print((lb.inverse_transform((i))))
"""
print('PROBABILITY:')
layer = tf.keras.layers.Softmax()
print(layer(live_preds).numpy())
live_preds = live_preds.argmax(axis = 1)
print(live_preds)
live_preds = live_preds.astype(int).flatten()
live_preds = (lb.inverse_transform((live_preds)))
live_preds
"""
# Angry
#
# Disgust
# Fearful
#
# Neutral
# Sad
# Surprised
```
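A minimal, self-contained sketch of the `confusion_matrix`/`accuracy_score` calls used above, on toy labels (the real notebook runs them on `finaldf`):

```python
from sklearn.metrics import confusion_matrix, accuracy_score

actual    = ["angry", "sad", "angry", "happy"]
predicted = ["angry", "happy", "angry", "happy"]
c = confusion_matrix(actual, predicted, labels=["angry", "happy", "sad"])
print(c)                                   # rows = true class, columns = predicted class
print(accuracy_score(actual, predicted))
```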
<a href="https://colab.research.google.com/github/justin-bennington/somewhere-ml/blob/main/S2_VQGAN%2BCLIP_Classic.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a>
# Generate images from text phrases with VQGAN and CLIP (z+quantize method with augmentations)
[How to use VQGAN+CLIP](https://docs.google.com/document/d/1Lu7XPRKlNhBQjcKr8k8qRzUzbBW7kzxb5Vu72GMRn2E/edit)
The original idea behind CLIP came from this article:
https://medium.com/@blaisea/physiognomys-new-clothes-f2d4b59fdd6a
The most recent version of this notebook was authored by Katherine Crowson (https://github.com/crowsonkb, https://twitter.com/RiversHaveWings). The original BigGAN+CLIP method was demonstrated by https://twitter.com/advadnoun. Eleiber#8347 translated the notebook (to Spanish) and added explanations and modifications, and the friendly interface was made thanks to Abulafia#3734.
For a detailed tutorial on how to use it, I recommend [visiting this article](https://tuscriaturas.miraheze.org/w/index.php?title=Ayuda:Generar_imágenes_con_VQGAN%2BCLIP/English), made by Jakeukalane # 2767 and Avengium (Ángel) # 3715.
```
# @title Licensed under the MIT License
# Copyright (c) 2021 Katherine Crowson
# Permission is hereby granted, free of charge, to any person obtaining a copy
# of this software and associated documentation files (the "Software"), to deal
# in the Software without restriction, including without limitation the rights
# to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
# copies of the Software, and to permit persons to whom the Software is
# furnished to do so, subject to the following conditions:
# The above copyright notice and this permission notice shall be included in
# all copies or substantial portions of the Software.
# THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
# IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
# FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
# AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
# LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
# OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN
# THE SOFTWARE.
#@title Google Drive Integration (optional)
#@markdown To connect Google Drive, set `root_path` to the relative drive folder path you want outputs to be saved to if you already made a directory, then execute this cell. Leaving the field blank or just not running this will have outputs save to the runtime temp storage.
import os
root_path = "" #@param {type: "string"}
abs_root_path = "/content"
if len(root_path) > 0:
abs_root_path = abs_root_path + "/drive/MyDrive/" + root_path
from google.colab import drive
drive.mount('/content/drive')
def ensureProperRootPath():
if len(abs_root_path) > 0:
os.chdir(abs_root_path) # Changes directory to absolute root path
print("Root path check: ")
!pwd
ensureProperRootPath()
#@title Make a new folder & set root path to that folder (optional)
#@markdown Saves a step if you don't have a folder in your Google Drive for this. Makes one, sets the root_path to that new folder. You can name it whatever you'd like:
folder_name = "FolderName" #@param {type: "string"}
abs_root_path = "/content"
if len(folder_name) > 0:
path_tmp = abs_root_path + "/drive/MyDrive/" + folder_name
if not os.path.exists(path_tmp):
os.mkdir(path_tmp)
abs_root_path = path_tmp
print("Created folder & set root path to: " + abs_root_path)
#@markdown Make & assign path to a project subfolder (optional)
project_name = "ProjectName" #@param {type: "string"}
if len(project_name) > 0:
path_tmp = abs_root_path + "/" + project_name
if not os.path.exists(path_tmp):
os.mkdir(path_tmp)
abs_root_path = path_tmp
print("Created project subfolder & set root path to: " + abs_root_path)
ensureProperRootPath()
# @title Setup, Installing Libraries
# @markdown This cell might take some time due to installing several libraries.
!nvidia-smi
print("Downloading CLIP...")
!git clone https://github.com/openai/CLIP &> /dev/null
print("Installing Python Libraries for AI")
!git clone https://github.com/CompVis/taming-transformers &> /dev/null
!pip install ftfy regex tqdm omegaconf pytorch-lightning &> /dev/null
!pip install kornia &> /dev/null
!pip install einops &> /dev/null
print("Installing transformers library...")
!pip install transformers &> /dev/null
print("Installing taming.models...")
!pip install taming.models &> /dev/null
print("Installing libraries for managing metadata...")
!pip install stegano &> /dev/null
!apt install exempi &> /dev/null
!pip install python-xmp-toolkit &> /dev/null
!pip install imgtag &> /dev/null
!pip install pillow==7.1.2 &> /dev/null
%reload_ext autoreload
%autoreload &> /dev/null
print("Installing ffmpeg for creating videos...")
!pip install imageio-ffmpeg &> /dev/null
!mkdir steps
print("Installation finished.")
#@title Selection of models to download
#@markdown By default, the notebook downloads Model 16384 from ImageNet. Others, such as ImageNet 1024, COCO-Stuff, WikiArt 1024, WikiArt 16384, FacesHQ and S-FLCKR, are not downloaded by default, to avoid pointless downloads if you are not going to use them; if you want to use any of them, simply select them below.
#@markdown WARNING:
#@markdown Not all datasets are licensed for commercial use (i.e. selling your artwork as an NFT).
#@markdown Datasets you can use for non-commercial purposes:
imagenet_1024 = False #@param {type:"boolean"}
imagenet_16384 = True #@param {type:"boolean"}
coco = False #@param {type:"boolean"}
wikiart_16384 = False #@param {type:"boolean"}
#@markdown Datasets you can use for commercial purposes:
faceshq = False #@param {type:"boolean"}
sflckr = False #@param {type:"boolean"}
if imagenet_1024:
# !curl -L -o vqgan_imagenet_f16_1024.yaml -C - 'http://mirror.io.community/blob/vqgan/vqgan_imagenet_f16_1024.yaml' #ImageNet 1024
# !curl -L -o vqgan_imagenet_f16_1024.ckpt -C - 'http://mirror.io.community/blob/vqgan/vqgan_imagenet_f16_1024.ckpt' #ImageNet 1024
!curl -L -o vqgan_imagenet_f16_1024.ckpt -C - 'https://heibox.uni-heidelberg.de/f/140747ba53464f49b476/?dl=1'
!curl -L -o vqgan_imagenet_f16_1024.yaml -C - 'https://heibox.uni-heidelberg.de/f/6ecf2af6c658432c8298/?dl=1'
if imagenet_16384:
# !curl -L -o vqgan_imagenet_f16_16384.yaml -C - 'http://mirror.io.community/blob/vqgan/vqgan_imagenet_f16_16384.yaml' #ImageNet 16384
# !curl -L -o vqgan_imagenet_f16_16384.ckpt -C - 'http://mirror.io.community/blob/vqgan/vqgan_imagenet_f16_16384.ckpt' #ImageNet 16384
!curl -L -o vqgan_imagenet_f16_16384.ckpt -C - 'https://heibox.uni-heidelberg.de/f/867b05fc8c4841768640/?dl=1'
!curl -L -o vqgan_imagenet_f16_16384.yaml -C - 'https://heibox.uni-heidelberg.de/f/274fb24ed38341bfa753/?dl=1'
if coco:
!curl -L -o coco.yaml -C - 'https://dl.nmkd.de/ai/clip/coco/coco.yaml' #COCO
!curl -L -o coco.ckpt -C - 'https://dl.nmkd.de/ai/clip/coco/coco.ckpt' #COCO
if faceshq:
!curl -L -o faceshq.yaml -C - 'https://drive.google.com/uc?export=download&id=1fHwGx_hnBtC8nsq7hesJvs-Klv-P0gzT' #FacesHQ
!curl -L -o faceshq.ckpt -C - 'https://app.koofr.net/content/links/a04deec9-0c59-4673-8b37-3d696fe63a5d/files/get/last.ckpt?path=%2F2020-11-13T21-41-45_faceshq_transformer%2Fcheckpoints%2Flast.ckpt' #FacesHQ
#if wikiart_1024:
#!curl -L -o wikiart_1024.yaml -C - 'http://mirror.io.community/blob/vqgan/wikiart.yaml' #WikiArt 1024
#!curl -L -o wikiart_1024.ckpt -C - 'http://mirror.io.community/blob/vqgan/wikiart.ckpt' #WikiArt 1024
if wikiart_16384:
# !curl -L -o wikiart_16384.yaml -C - 'http://mirror.io.community/blob/vqgan/wikiart_16384.yaml' #WikiArt 16384
# !curl -L -o wikiart_16384.ckpt -C - 'http://mirror.io.community/blob/vqgan/wikiart_16384.ckpt' #WikiArt 16384
!curl -L -o wikiart_16384.ckpt -C - 'http://eaidata.bmk.sh/data/Wikiart_16384/wikiart_f16_16384_8145600.ckpt'
!curl -L -o wikiart_16384.yaml -C - 'http://eaidata.bmk.sh/data/Wikiart_16384/wikiart_f16_16384_8145600.yaml'
if sflckr:
!curl -L -o sflckr.yaml -C - 'https://heibox.uni-heidelberg.de/d/73487ab6e5314cb5adba/files/?p=%2Fconfigs%2F2020-11-09T13-31-51-project.yaml&dl=1' #S-FLCKR
!curl -L -o sflckr.ckpt -C - 'https://heibox.uni-heidelberg.de/d/73487ab6e5314cb5adba/files/?p=%2Fcheckpoints%2Flast.ckpt&dl=1' #S-FLCKR
# @title Loading libraries and definitions
import argparse
import math
from pathlib import Path
import sys
sys.path.append('./taming-transformers')
from IPython import display
from base64 import b64encode
from omegaconf import OmegaConf
from PIL import Image
from taming.models import cond_transformer, vqgan
import torch
from torch import nn, optim
from torch.nn import functional as F
from torchvision import transforms
from torchvision.transforms import functional as TF
from tqdm.notebook import tqdm
from CLIP import clip
import kornia.augmentation as K
import numpy as np
import imageio
from PIL import ImageFile, Image
from imgtag import ImgTag # metadata
from libxmp import * # metadata
import libxmp # metadata
from stegano import lsb
import json
ImageFile.LOAD_TRUNCATED_IMAGES = True
def sinc(x):
return torch.where(x != 0, torch.sin(math.pi * x) / (math.pi * x), x.new_ones([]))
def lanczos(x, a):
cond = torch.logical_and(-a < x, x < a)
out = torch.where(cond, sinc(x) * sinc(x/a), x.new_zeros([]))
return out / out.sum()
def ramp(ratio, width):
n = math.ceil(width / ratio + 1)
out = torch.empty([n])
cur = 0
for i in range(out.shape[0]):
out[i] = cur
cur += ratio
return torch.cat([-out[1:].flip([0]), out])[1:-1]
def resample(input, size, align_corners=True):
n, c, h, w = input.shape
dh, dw = size
input = input.view([n * c, 1, h, w])
if dh < h:
kernel_h = lanczos(ramp(dh / h, 2), 2).to(input.device, input.dtype)
pad_h = (kernel_h.shape[0] - 1) // 2
input = F.pad(input, (0, 0, pad_h, pad_h), 'reflect')
input = F.conv2d(input, kernel_h[None, None, :, None])
if dw < w:
kernel_w = lanczos(ramp(dw / w, 2), 2).to(input.device, input.dtype)
pad_w = (kernel_w.shape[0] - 1) // 2
input = F.pad(input, (pad_w, pad_w, 0, 0), 'reflect')
input = F.conv2d(input, kernel_w[None, None, None, :])
input = input.view([n, c, h, w])
return F.interpolate(input, size, mode='bicubic', align_corners=align_corners)
class ReplaceGrad(torch.autograd.Function):
@staticmethod
def forward(ctx, x_forward, x_backward):
ctx.shape = x_backward.shape
return x_forward
@staticmethod
def backward(ctx, grad_in):
return None, grad_in.sum_to_size(ctx.shape)
replace_grad = ReplaceGrad.apply
class ClampWithGrad(torch.autograd.Function):
@staticmethod
def forward(ctx, input, min, max):
ctx.min = min
ctx.max = max
ctx.save_for_backward(input)
return input.clamp(min, max)
@staticmethod
def backward(ctx, grad_in):
input, = ctx.saved_tensors
return grad_in * (grad_in * (input - input.clamp(ctx.min, ctx.max)) >= 0), None, None
clamp_with_grad = ClampWithGrad.apply
def vector_quantize(x, codebook):
d = x.pow(2).sum(dim=-1, keepdim=True) + codebook.pow(2).sum(dim=1) - 2 * x @ codebook.T
indices = d.argmin(-1)
x_q = F.one_hot(indices, codebook.shape[0]).to(d.dtype) @ codebook
return replace_grad(x_q, x)
class Prompt(nn.Module):
def __init__(self, embed, weight=1., stop=float('-inf')):
super().__init__()
self.register_buffer('embed', embed)
self.register_buffer('weight', torch.as_tensor(weight))
self.register_buffer('stop', torch.as_tensor(stop))
def forward(self, input):
input_normed = F.normalize(input.unsqueeze(1), dim=2)
embed_normed = F.normalize(self.embed.unsqueeze(0), dim=2)
dists = input_normed.sub(embed_normed).norm(dim=2).div(2).arcsin().pow(2).mul(2)
dists = dists * self.weight.sign()
return self.weight.abs() * replace_grad(dists, torch.maximum(dists, self.stop)).mean()
def parse_prompt(prompt):
vals = prompt.rsplit(':', 2)
vals = vals + ['', '1', '-inf'][len(vals):]
return vals[0], float(vals[1]), float(vals[2])
class MakeCutouts(nn.Module):
def __init__(self, cut_size, cutn, cut_pow=1.):
super().__init__()
self.cut_size = cut_size
self.cutn = cutn
self.cut_pow = cut_pow
self.augs = nn.Sequential(
K.RandomHorizontalFlip(p=0.5),
# K.RandomSolarize(0.01, 0.01, p=0.7),
K.RandomSharpness(0.3,p=0.4),
K.RandomAffine(degrees=30, translate=0.1, p=0.8, padding_mode='border'),
K.RandomPerspective(0.2,p=0.4),
K.ColorJitter(hue=0.01, saturation=0.01, p=0.7))
self.noise_fac = 0.1
def forward(self, input):
sideY, sideX = input.shape[2:4]
max_size = min(sideX, sideY)
min_size = min(sideX, sideY, self.cut_size)
cutouts = []
for _ in range(self.cutn):
size = int(torch.rand([])**self.cut_pow * (max_size - min_size) + min_size)
offsetx = torch.randint(0, sideX - size + 1, ())
offsety = torch.randint(0, sideY - size + 1, ())
cutout = input[:, :, offsety:offsety + size, offsetx:offsetx + size]
cutouts.append(resample(cutout, (self.cut_size, self.cut_size)))
batch = self.augs(torch.cat(cutouts, dim=0))
if self.noise_fac:
facs = batch.new_empty([self.cutn, 1, 1, 1]).uniform_(0, self.noise_fac)
batch = batch + facs * torch.randn_like(batch)
return batch
def load_vqgan_model(config_path, checkpoint_path):
config = OmegaConf.load(config_path)
if config.model.target == 'taming.models.vqgan.VQModel':
model = vqgan.VQModel(**config.model.params)
model.eval().requires_grad_(False)
model.init_from_ckpt(checkpoint_path)
elif config.model.target == 'taming.models.cond_transformer.Net2NetTransformer':
parent_model = cond_transformer.Net2NetTransformer(**config.model.params)
parent_model.eval().requires_grad_(False)
parent_model.init_from_ckpt(checkpoint_path)
model = parent_model.first_stage_model
else:
raise ValueError(f'unknown model type: {config.model.target}')
del model.loss
return model
def resize_image(image, out_size):
ratio = image.size[0] / image.size[1]
area = min(image.size[0] * image.size[1], out_size[0] * out_size[1])
size = round((area * ratio)**0.5), round((area / ratio)**0.5)
return image.resize(size, Image.LANCZOS)
```
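The `resize_image` helper defined above preserves aspect ratio while capping the total pixel area; its arithmetic in isolation, for a hypothetical 1024×768 source and a 512×512 output request:

```python
# Aspect-preserving resize math from resize_image (standalone arithmetic only)
src_w, src_h = 1024, 768
out_w, out_h = 512, 512
ratio = src_w / src_h
area = min(src_w * src_h, out_w * out_h)     # never exceed the requested pixel budget
size = round((area * ratio) ** 0.5), round((area / ratio) ** 0.5)
print(size)                                  # same aspect ratio, ~512*512 total pixels
```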
## Implementation tools
Mainly, what you will modify is `prompts`: there you place the text(s) you want to generate, separated with `|`. It is a list because you can enter more than one text; the AI then tries to 'mix' the images, giving the same priority to each text.
To start the model from an initial image, upload a file to the Colab environment (in the section on the left), then set `initial_image:` to the exact file name. Example: `sample.png`
You can also change the model by editing the `model:` line. Currently ImageNet 1024, ImageNet 16384, WikiArt, S-FLCKR and COCO-Stuff are available. You must have downloaded a model first before you can select it.
You can also use `target_images`: one or more images that the AI will take as a "target", fulfilling the same function as a text prompt. To supply more than one, use `|` as a separator.
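Each prompt string may also carry an optional weight and stop value separated by colons; a standalone copy of the `parse_prompt` logic from the cell above shows the defaults:

```python
def parse_prompt(prompt):
    # "text:weight:stop" with weight defaulting to 1 and stop to -inf
    vals = prompt.rsplit(':', 2)
    vals = vals + ['', '1', '-inf'][len(vals):]
    return vals[0], float(vals[1]), float(vals[2])

print(parse_prompt("a sunset over the sea"))    # default weight 1.0, stop -inf
print(parse_prompt("a sunset over the sea:2"))  # explicit weight 2.0
```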
```
#@title Parameters
prompts = "prompt" #@param {type:"string"}
width = 512#@param {type:"number"}
height = 512#@param {type:"number"}
model = "vqgan_imagenet_f16_16384" #@param ["vqgan_imagenet_f16_16384", "vqgan_imagenet_f16_1024", "wikiart_16384", "coco", "faceshq", "sflckr"]
display_frequency = 50#@param {type:"number"}
initial_image = "None"#@param {type:"string"}
target_images = "None"#@param {type:"string"}
seed = -1#@param {type:"number"}
max_iterations = -1#@param {type:"number"}
input_images = ""
model_names={"vqgan_imagenet_f16_16384": 'ImageNet 16384',"vqgan_imagenet_f16_1024":"ImageNet 1024",
"wikiart_16384":"WikiArt 16384", "coco":"COCO-Stuff", "faceshq":"FacesHQ", "sflckr":"S-FLCKR"}
model_name = model_names[model]
if seed == -1:
seed = None
if initial_image == "None":
initial_image = None
if target_images == "None" or not target_images:
target_images = []
else:
target_images = target_images.split("|")
target_images = [image.strip() for image in target_images]
if initial_image or target_images != []:
input_images = True
prompts = [frase.strip() for frase in prompts.split("|")]
if prompts == ['']:
prompts = []
args = argparse.Namespace(
prompts=prompts,
image_prompts=target_images,
noise_prompt_seeds=[],
noise_prompt_weights=[],
size=[width, height],
init_image=initial_image,
init_weight=0.,
clip_model='ViT-B/32',
vqgan_config=f'{model}.yaml',
vqgan_checkpoint=f'{model}.ckpt',
step_size=0.1,
cutn=64,
cut_pow=1.,
display_freq=display_frequency,
seed=seed,
)
#@title Execution
device = torch.device('cuda:0' if torch.cuda.is_available() else 'cpu')
print('Using device:', device)
if prompts:
print('Using text prompt:', prompts)
if target_images:
print('Using image prompts:', target_images)
if args.seed is None:
seed = torch.seed()
else:
seed = args.seed
torch.manual_seed(seed)
print('Using seed:', seed)
model = load_vqgan_model(args.vqgan_config, args.vqgan_checkpoint).to(device)
perceptor = clip.load(args.clip_model, jit=False)[0].eval().requires_grad_(False).to(device)
cut_size = perceptor.visual.input_resolution
e_dim = model.quantize.e_dim
f = 2**(model.decoder.num_resolutions - 1)
make_cutouts = MakeCutouts(cut_size, args.cutn, cut_pow=args.cut_pow)
n_toks = model.quantize.n_e
toksX, toksY = args.size[0] // f, args.size[1] // f
sideX, sideY = toksX * f, toksY * f
z_min = model.quantize.embedding.weight.min(dim=0).values[None, :, None, None]
z_max = model.quantize.embedding.weight.max(dim=0).values[None, :, None, None]
if args.init_image:
pil_image = Image.open(args.init_image).convert('RGB')
pil_image = pil_image.resize((sideX, sideY), Image.LANCZOS)
z, *_ = model.encode(TF.to_tensor(pil_image).to(device).unsqueeze(0) * 2 - 1)
else:
one_hot = F.one_hot(torch.randint(n_toks, [toksY * toksX], device=device), n_toks).float()
z = one_hot @ model.quantize.embedding.weight
z = z.view([-1, toksY, toksX, e_dim]).permute(0, 3, 1, 2)
z_orig = z.clone()
z.requires_grad_(True)
opt = optim.Adam([z], lr=args.step_size)
normalize = transforms.Normalize(mean=[0.48145466, 0.4578275, 0.40821073],
std=[0.26862954, 0.26130258, 0.27577711])
pMs = []
for prompt in args.prompts:
txt, weight, stop = parse_prompt(prompt)
embed = perceptor.encode_text(clip.tokenize(txt).to(device)).float()
pMs.append(Prompt(embed, weight, stop).to(device))
for prompt in args.image_prompts:
path, weight, stop = parse_prompt(prompt)
img = resize_image(Image.open(path).convert('RGB'), (sideX, sideY))
batch = make_cutouts(TF.to_tensor(img).unsqueeze(0).to(device))
embed = perceptor.encode_image(normalize(batch)).float()
pMs.append(Prompt(embed, weight, stop).to(device))
for seed, weight in zip(args.noise_prompt_seeds, args.noise_prompt_weights):
gen = torch.Generator().manual_seed(seed)
embed = torch.empty([1, perceptor.visual.output_dim]).normal_(generator=gen)
pMs.append(Prompt(embed, weight).to(device))
def synth(z):
z_q = vector_quantize(z.movedim(1, 3), model.quantize.embedding.weight).movedim(3, 1)
return clamp_with_grad(model.decode(z_q).add(1).div(2), 0, 1)
def add_xmp_data(nombrefichero):
image = ImgTag(filename=nombrefichero)
image.xmp.append_array_item(libxmp.consts.XMP_NS_DC, 'creator', 'VQGAN+CLIP', {"prop_array_is_ordered":True, "prop_value_is_array":True})
if args.prompts:
image.xmp.append_array_item(libxmp.consts.XMP_NS_DC, 'title', " | ".join(args.prompts), {"prop_array_is_ordered":True, "prop_value_is_array":True})
else:
image.xmp.append_array_item(libxmp.consts.XMP_NS_DC, 'title', 'None', {"prop_array_is_ordered":True, "prop_value_is_array":True})
image.xmp.append_array_item(libxmp.consts.XMP_NS_DC, 'i', str(i), {"prop_array_is_ordered":True, "prop_value_is_array":True})
image.xmp.append_array_item(libxmp.consts.XMP_NS_DC, 'model', model_name, {"prop_array_is_ordered":True, "prop_value_is_array":True})
image.xmp.append_array_item(libxmp.consts.XMP_NS_DC, 'seed',str(seed) , {"prop_array_is_ordered":True, "prop_value_is_array":True})
image.xmp.append_array_item(libxmp.consts.XMP_NS_DC, 'input_images',str(input_images) , {"prop_array_is_ordered":True, "prop_value_is_array":True})
#for frases in args.prompts:
# image.xmp.append_array_item(libxmp.consts.XMP_NS_DC, 'Prompt' ,frases, {"prop_array_is_ordered":True, "prop_value_is_array":True})
image.close()
def add_stegano_data(filename):
data = {
"title": " | ".join(args.prompts) if args.prompts else None,
"notebook": "VQGAN+CLIP",
"i": i,
"model": model_name,
"seed": str(seed),
"input_images": input_images
}
lsb.hide(filename, json.dumps(data)).save(filename)
@torch.no_grad()
def checkin(i, losses):
losses_str = ', '.join(f'{loss.item():g}' for loss in losses)
tqdm.write(f'i: {i}, loss: {sum(losses).item():g}, losses: {losses_str}')
out = synth(z)
TF.to_pil_image(out[0].cpu()).save('progress.png')
add_stegano_data('progress.png')
add_xmp_data('progress.png')
display.display(display.Image('progress.png'))
def ascend_txt():
global i
out = synth(z)
iii = perceptor.encode_image(normalize(make_cutouts(out))).float()
result = []
if args.init_weight:
result.append(F.mse_loss(z, z_orig) * args.init_weight / 2)
for prompt in pMs:
result.append(prompt(iii))
img = np.array(out.mul(255).clamp(0, 255)[0].cpu().detach().numpy().astype(np.uint8))[:,:,:]
img = np.transpose(img, (1, 2, 0))
filename = f"steps/{i:04}.png"
imageio.imwrite(filename, np.array(img))
add_stegano_data(filename)
add_xmp_data(filename)
return result
def train(i):
opt.zero_grad()
lossAll = ascend_txt()
if i % args.display_freq == 0:
checkin(i, lossAll)
loss = sum(lossAll)
loss.backward()
opt.step()
with torch.no_grad():
z.copy_(z.maximum(z_min).minimum(z_max))
i = 0
try:
with tqdm() as pbar:
while True:
train(i)
if i == max_iterations:
break
i += 1
pbar.update()
except KeyboardInterrupt:
pass
```
## Generate a video with the results
If you want to generate a video with the images as frames, just click below. You can modify the number of FPS, the initial frame, the last frame, etc.
```
#@title Generate video using ffmpeg
init_frame = 1 # This is the frame where the video will start
last_frame = i # You can change i to the number of the last frame you want to generate. It will raise an error if that number of frames does not exist.
min_fps = 10
max_fps = 30
total_frames = last_frame-init_frame
length = 15 # Desired video runtime in seconds
frames = []
tqdm.write('Generating video...')
for i in range(init_frame,last_frame): #
filename = f"steps/{i:04}.png"
frames.append(Image.open(filename))
#fps = last_frame/10
fps = np.clip(total_frames/length,min_fps,max_fps)
# Names the video after the prompt if there is one, if not, defaults to video.mp4
def listToString(s):
    # concatenate the list of prompt strings into a single string
    return "".join(s)
video_filename = "video"
if len(prompts) > 0:
video_filename = listToString(prompts).replace(" ","_")
video_filename = video_filename[:120]
print("Video filename: "+ video_filename)
video_filename = video_filename + ".mp4"
from subprocess import Popen, PIPE
p = Popen(['ffmpeg', '-y', '-f', 'image2pipe', '-vcodec', 'png', '-r', str(fps), '-i', '-', '-vcodec', 'libx264', '-r', str(fps), '-pix_fmt', 'yuv420p', '-crf', '17', '-preset', 'veryslow', video_filename], stdin=PIPE)
for im in tqdm(frames):
im.save(p.stdin, 'PNG')
p.stdin.close()
print("Compressing video...")
p.wait()
print("Video ready")
# @title View video in browser
# @markdown This process may take a little longer. If you don't want to wait, download it by executing the next cell instead of using this cell.
mp4 = open(video_filename,'rb').read()
data_url = "data:video/mp4;base64," + b64encode(mp4).decode()
display.HTML("""
<video width=400 controls>
<source src="%s" type="video/mp4">
</video>
""" % data_url)
# @title Download video
from google.colab import files
files.download(video_filename)
```
### Pulling in NFLWeather.com Data
```
import pandas as pd
import urllib.request
from bs4 import BeautifulSoup
import numpy as np
from time import sleep
from random import randint
column_names = []
urls_list = []
for x, y in [(x,y) for x in ['2018','2019'] for y in range(1,18)]:
column_names.append('_' + x + '_' + str(y))
urls_list.append('http://www.nflweather.com/en/week/' + x + '/week-' + str(y) + '/')
# Scratch estimates (note: `data` is never defined here, so the second line raised a NameError):
# 128*5
# len(data)/5
for i in range(0,4):
column_names[i] = []
url = urls_list[i]
source = urllib.request.urlopen(url).read()
sleep(randint(10,20))
soup = BeautifulSoup(source, 'html.parser')
my_table2 = soup.find_all('td', class_ = 'text-center')
for tag in my_table2:
column_names[i].append(tag.text.strip())
# _2018_2  -- the column names are plain strings, not variables, so evaluating this raised a NameError
# Define the ingestor before it is called
def weatherIngestor(title, url):
    title = []  # note: this rebinds the parameter, so the caller's list is never filled
    source = urllib.request.urlopen(url).read()
    soup = BeautifulSoup(source, 'html.parser')
    list_data = soup.find_all('td', class_ = 'text-center')
    for row in list_data:
        title.append(row.text)

for (title, url) in zip(column_names, urls_list):
    weatherIngestor(title, url)
for (title, url) in zip(column_names, urls_list):
title = []
source = urllib.request.urlopen(url).read()
soup = BeautifulSoup(source, 'html.parser')
list_data = soup.find_all('td', class_ = 'text-center')
for row in list_data:
title.append(row.text)
first_par = soup.find_all('td', class_ = 'text-center')
info = []
for row in first_par:
info.append(row.text)
url = 'http://www.nflweather.com/en/week/2019/week-1/'
df_list = pd.read_html(url)
source = urllib.request.urlopen('http://www.nflweather.com/en/week/2019/week-1/').read()
soup = BeautifulSoup(source, 'html.parser')
#th is the column header
soup.find_all("th")[1]
th_tags = soup.find_all('th')
columns = []
for row in th_tags:
columns.append(row.text)
columns
len(new_info)
```
The five `text-center` cells for each game are, in order:

1. away
2. home
3. score (drop the 'Final:' prefix)
4. forecast
5. wind
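The five repeating fields above suggest a simple parsing strategy: walk the flat list of scraped `text-center` cell strings in chunks of five. Here is a minimal sketch with a hypothetical helper (`cells_to_games` is not part of the notebook, and the sample strings are made up):

```python
def cells_to_games(cells, fields=("away", "home", "score", "forecast", "wind")):
    """Group a flat list of scraped cell strings into per-game dicts."""
    games = []
    step = len(fields)
    for start in range(0, len(cells) - len(cells) % step, step):
        row = dict(zip(fields, cells[start:start + step]))
        # drop the 'Final:' prefix that NFLWeather puts before the score
        row["score"] = row["score"].replace("Final:", "").strip()
        games.append(row)
    return games

sample = ["GB", "CHI", "Final:10-3", "Clear", "5mi SW"]
print(cells_to_games(sample))
```

Any trailing partial group of cells is simply dropped, which keeps the helper robust to a truncated page.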
```
del new_info[4-1::4]
new_info[1::1]
print(new_info[::1])
del new_info[8-1::8]
new_info[0:50]
info
import re  # required for re.sub below
new_info = []
for n in info:
    new_info.append(re.sub(r"[\n\t\s]*", "", n))
df = pd.DataFrame(new_info)
for n in df_list:
print(n)
df
new_info
myString = "I want to Remove all white \t spaces, new lines \n and tabs \t"
myString = re.sub(r"[\n\t\s]*", "", myString)
print(myString)
first_par.next_sibling
try_this = explode(df_list)
testing_weather_df = pd.DataFrame(df_list)
import urllib.request
source = urllib.request.urlopen('http://www.nflweather.com/en/week/2019/week-1/').read()
soup = BeautifulSoup(source, 'lxml')
word_table = soup.find('table', attrs = {'class':'footable table table-condensed toggle-arrow-alt table-hover footable-loaded default breakpoint'})
table_rows = word_table.find_all('tr')
l = []
for tr in table_rows:
td = tr.find_all('td')
row = [tr.text for tr in td]
l.append(row)
l_df = pd.DataFrame(l)
table
url = 'http://www.nflweather.com/en/week/2019/week-1/'
web = urllib.request.urlopen(url)
source = BeautifulSoup(web, 'html.parser')
table = source.find('table', class_='footable table table-condensed toggle-arrow-alt table-hover footable-loaded default breakpoint')
abbs = table.find_all('tr')
abbs_list = [i.get_text().strip() for i in abbs]
print(abbs_list)
import requests
response = requests.get(url)
if response.status_code == 200:
print(response)
from bs4 import BeautifulSoup
from tabulate import tabulate
r = requests.get(url)
soup = BeautifulSoup(r.content,'lxml')
table = soup.find_all('table')[0]
new_df = pd.read_html(str(table))
print(tabulate(new_df[0], headers='keys', tablefmt='psql'))
new_df
r = requests.get(url)
#files = r.json()
print(r.text)
weather_scrape_df = pd.json_normalize(files, max_level=1)
```
```
%matplotlib inline
```
Reinforcement Learning (DQN) Tutorial
=====================================
**Author**: `Adam Paszke <https://github.com/apaszke>`_
This tutorial shows how to use PyTorch to train a Deep Q Learning (DQN) agent
on the CartPole-v0 task from the `OpenAI Gym <https://gym.openai.com/>`__.
**Task**
The agent has to decide between two actions - moving the cart left or
right - so that the pole attached to it stays upright. You can find an
official leaderboard with various algorithms and visualizations at the
`Gym website <https://gym.openai.com/envs/CartPole-v0>`__.
.. figure:: /_static/img/cartpole.gif
:alt: cartpole
cartpole
As the agent observes the current state of the environment and chooses
an action, the environment *transitions* to a new state, and also
returns a reward that indicates the consequences of the action. In this
task, the environment terminates if the pole falls over too far.
The CartPole task is designed so that the inputs to the agent are 4 real
values representing the environment state (position, velocity, etc.).
However, neural networks can solve the task purely by looking at the
scene, so we'll use a patch of the screen centered on the cart as an
input. Because of this, our results aren't directly comparable to the
ones from the official leaderboard - our task is much harder.
Unfortunately this does slow down the training, because we have to
render all the frames.
Strictly speaking, we will present the state as the difference between
the current screen patch and the previous one. This will allow the agent
to take the velocity of the pole into account from one image.
**Packages**
First, let's import needed packages. Firstly, we need
`gym <https://gym.openai.com/docs>`__ for the environment
(Install using `pip install gym`).
We'll also use the following from PyTorch:
- neural networks (``torch.nn``)
- optimization (``torch.optim``)
- automatic differentiation (``torch.autograd``)
- utilities for vision tasks (``torchvision`` - `a separate
package <https://github.com/pytorch/vision>`__).
```
import gym
import math
import random
import numpy as np
import matplotlib
import matplotlib.pyplot as plt
from collections import namedtuple
from itertools import count
from PIL import Image
import torch
import torch.nn as nn
import torch.optim as optim
import torch.nn.functional as F
import torchvision.transforms as T
env = gym.make('CartPole-v0').unwrapped
# set up matplotlib
is_ipython = 'inline' in matplotlib.get_backend()
if is_ipython:
from IPython import display
plt.ion()
# if gpu is to be used
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
```
Replay Memory
-------------
We'll be using experience replay memory for training our DQN. It stores
the transitions that the agent observes, allowing us to reuse this data
later. By sampling from it randomly, the transitions that build up a
batch are decorrelated. It has been shown that this greatly stabilizes
and improves the DQN training procedure.
For this, we're going to need two classes:
- ``Transition`` - a named tuple representing a single transition in
our environment
- ``ReplayMemory`` - a cyclic buffer of bounded size that holds the
transitions observed recently. It also implements a ``.sample()``
method for selecting a random batch of transitions for training.
```
Transition = namedtuple('Transition',
('state', 'action', 'next_state', 'reward'))
class ReplayMemory(object):
def __init__(self, capacity):
self.capacity = capacity
self.memory = []
self.position = 0
def push(self, *args):
"""Saves a transition."""
if len(self.memory) < self.capacity:
self.memory.append(None)
self.memory[self.position] = Transition(*args)
self.position = (self.position + 1) % self.capacity
def sample(self, batch_size):
return random.sample(self.memory, batch_size)
def __len__(self):
return len(self.memory)
```
Now, let's define our model. But first, let's quickly recap what a DQN is.
DQN algorithm
-------------
Our environment is deterministic, so all equations presented here are
also formulated deterministically for the sake of simplicity. In the
reinforcement learning literature, they would also contain expectations
over stochastic transitions in the environment.
Our aim will be to train a policy that tries to maximize the discounted,
cumulative reward
$R_{t_0} = \sum_{t=t_0}^{\infty} \gamma^{t - t_0} r_t$, where
$R_{t_0}$ is also known as the *return*. The discount,
$\gamma$, should be a constant between $0$ and $1$
that ensures the sum converges. It makes rewards from the uncertain far
future less important for our agent than the ones in the near future
that it can be fairly confident about.
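To make the role of $\gamma$ concrete, here is a small standalone sketch (the reward values and discount are arbitrary, chosen only for illustration):

```python
def discounted_return(rewards, gamma):
    # R_{t0} = sum over t of gamma^(t - t0) * r_t, for a finite reward list
    return sum(gamma ** t * r for t, r in enumerate(rewards))

# With gamma = 0.5, the second and third rewards count half and a quarter as much:
print(discounted_return([1.0, 1.0, 1.0], 0.5))  # 1 + 0.5 + 0.25 = 1.75
```

With $\gamma = 1$ the same call would simply sum the rewards, which is why $\gamma < 1$ is needed for the infinite-horizon sum to converge.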
The main idea behind Q-learning is that if we had a function
$Q^*: State \times Action \rightarrow \mathbb{R}$, that could tell
us what our return would be, if we were to take an action in a given
state, then we could easily construct a policy that maximizes our
rewards:
\begin{align}\pi^*(s) = \arg\!\max_a \ Q^*(s, a)\end{align}
However, we don't know everything about the world, so we don't have
access to $Q^*$. But, since neural networks are universal function
approximators, we can simply create one and train it to resemble
$Q^*$.
For our training update rule, we'll use a fact that every $Q$
function for some policy obeys the Bellman equation:
\begin{align}Q^{\pi}(s, a) = r + \gamma Q^{\pi}(s', \pi(s'))\end{align}
The difference between the two sides of the equality is known as the
temporal difference error, $\delta$:
\begin{align}\delta = Q(s, a) - (r + \gamma \max_a Q(s', a))\end{align}
To minimise this error, we will use the `Huber
loss <https://en.wikipedia.org/wiki/Huber_loss>`__. The Huber loss acts
like the mean squared error when the error is small, but like the mean
absolute error when the error is large - this makes it more robust to
outliers when the estimates of $Q$ are very noisy. We calculate
this over a batch of transitions, $B$, sampled from the replay
memory:
\begin{align}\mathcal{L} = \frac{1}{|B|}\sum_{(s, a, s', r) \ \in \ B} \mathcal{L}(\delta)\end{align}
\begin{align}\text{where} \quad \mathcal{L}(\delta) = \begin{cases}
\frac{1}{2}{\delta^2} & \text{for } |\delta| \le 1, \\
|\delta| - \frac{1}{2} & \text{otherwise.}
\end{cases}\end{align}
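The piecewise definition above is easy to write out directly. This is only an illustrative plain-Python version of the formula; the training code later relies on PyTorch's built-in `F.smooth_l1_loss` instead:

```python
def huber(delta):
    # quadratic for |delta| <= 1, linear beyond: robust to outlier errors
    a = abs(delta)
    return 0.5 * a * a if a <= 1 else a - 0.5

# Small errors behave like (half of) a squared error; large errors grow only linearly:
print(huber(0.5))  # 0.125
print(huber(4.0))  # 3.5
```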
Q-network
^^^^^^^^^
Our model will be a convolutional neural network that takes in the
difference between the current and previous screen patches. It has two
outputs, representing $Q(s, \mathrm{left})$ and
$Q(s, \mathrm{right})$ (where $s$ is the input to the
network). In effect, the network is trying to predict the *quality* of
taking each action given the current input.
```
class DQN(nn.Module):
def __init__(self):
super(DQN, self).__init__()
self.conv1 = nn.Conv2d(3, 16, kernel_size=5, stride=2)
self.bn1 = nn.BatchNorm2d(16)
self.conv2 = nn.Conv2d(16, 32, kernel_size=5, stride=2)
self.bn2 = nn.BatchNorm2d(32)
self.conv3 = nn.Conv2d(32, 32, kernel_size=5, stride=2)
self.bn3 = nn.BatchNorm2d(32)
self.head = nn.Linear(448, 2)
def forward(self, x):
x = F.relu(self.bn1(self.conv1(x)))
x = F.relu(self.bn2(self.conv2(x)))
x = F.relu(self.bn3(self.conv3(x)))
return self.head(x.view(x.size(0), -1))
```
Input extraction
^^^^^^^^^^^^^^^^
The code below provides utilities for extracting and processing rendered
images from the environment. It uses the ``torchvision`` package, which
makes it easy to compose image transforms. Once you run the cell it will
display an example patch that it extracted.
```
resize = T.Compose([T.ToPILImage(),
T.Resize(40, interpolation=Image.CUBIC),
T.ToTensor()])
# This is based on the code from gym.
screen_width = 600
def get_cart_location():
world_width = env.x_threshold * 2
scale = screen_width / world_width
return int(env.state[0] * scale + screen_width / 2.0) # MIDDLE OF CART
def get_screen():
screen = env.render(mode='rgb_array').transpose(
(2, 0, 1)) # transpose into torch order (CHW)
# Strip off the top and bottom of the screen
screen = screen[:, 160:320]
view_width = 320
cart_location = get_cart_location()
if cart_location < view_width // 2:
slice_range = slice(view_width)
elif cart_location > (screen_width - view_width // 2):
slice_range = slice(-view_width, None)
else:
slice_range = slice(cart_location - view_width // 2,
cart_location + view_width // 2)
# Strip off the edges, so that we have a square image centered on a cart
screen = screen[:, :, slice_range]
    # Convert to float, rescale, convert to torch tensor
# (this doesn't require a copy)
screen = np.ascontiguousarray(screen, dtype=np.float32) / 255
screen = torch.from_numpy(screen)
# Resize, and add a batch dimension (BCHW)
return resize(screen).unsqueeze(0).to(device)
env.reset()
plt.figure()
plt.imshow(get_screen().cpu().squeeze(0).permute(1, 2, 0).numpy(),
interpolation='none')
plt.title('Example extracted screen')
plt.show()
```
Training
--------
Hyperparameters and utilities
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
This cell instantiates our model and its optimizer, and defines some
utilities:
- ``select_action`` - will select an action according to an epsilon
greedy policy. Simply put, we'll sometimes use our model for choosing
the action, and sometimes we'll just sample one uniformly. The
probability of choosing a random action will start at ``EPS_START``
and will decay exponentially towards ``EPS_END``. ``EPS_DECAY``
controls the rate of the decay.
- ``plot_durations`` - a helper for plotting the durations of episodes,
along with an average over the last 100 episodes (the measure used in
the official evaluations). The plot will be underneath the cell
containing the main training loop, and will update after every
episode.
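The exponential decay of the exploration probability can be seen in isolation. A standalone sketch, using the same ``EPS_*`` constants that the next cell defines:

```python
import math

EPS_START, EPS_END, EPS_DECAY = 0.9, 0.05, 200

def eps_threshold(steps_done):
    # probability of choosing a random action after `steps_done` steps
    return EPS_END + (EPS_START - EPS_END) * math.exp(-steps_done / EPS_DECAY)

print(eps_threshold(0))      # starts at EPS_START
print(eps_threshold(10000))  # has essentially decayed to EPS_END
```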
```
BATCH_SIZE = 128
GAMMA = 0.999
EPS_START = 0.9
EPS_END = 0.05
EPS_DECAY = 200
TARGET_UPDATE = 10
policy_net = DQN().to(device)
target_net = DQN().to(device)
target_net.load_state_dict(policy_net.state_dict())
target_net.eval()
optimizer = optim.RMSprop(policy_net.parameters())
memory = ReplayMemory(5000)
steps_done = 0
def select_action(state):
global steps_done
sample = random.random()
eps_threshold = EPS_END + (EPS_START - EPS_END) * \
math.exp(-1. * steps_done / EPS_DECAY)
steps_done += 1
if sample > eps_threshold:
with torch.no_grad():
return policy_net(state).max(1)[1].view(1, 1)
else:
return torch.tensor([[random.randrange(2)]], device=device, dtype=torch.long)
episode_durations = []
def plot_durations():
plt.figure(2)
plt.clf()
durations_t = torch.tensor(episode_durations, dtype=torch.float)
plt.title('Training...')
plt.xlabel('Episode')
plt.ylabel('Duration')
plt.plot(durations_t.numpy())
# Take 100 episode averages and plot them too
if len(durations_t) >= 100:
means = durations_t.unfold(0, 100, 1).mean(1).view(-1)
means = torch.cat((torch.zeros(99), means))
plt.plot(means.numpy())
plt.pause(0.001) # pause a bit so that plots are updated
if is_ipython:
display.clear_output(wait=True)
display.display(plt.gcf())
```
Training loop
^^^^^^^^^^^^^
Finally, the code for training our model.
Here, you can find an ``optimize_model`` function that performs a
single step of the optimization. It first samples a batch, concatenates
all the tensors into a single one, computes $Q(s_t, a_t)$ and
$V(s_{t+1}) = \max_a Q(s_{t+1}, a)$, and combines them into our
loss. By definition we set $V(s) = 0$ if $s$ is a terminal
state. We also use a target network to compute $V(s_{t+1})$ for
added stability. The target network has its weights kept frozen most of
the time, but is updated with the policy network's weights every so often.
This is usually a set number of steps but we shall use episodes for
simplicity.
```
def optimize_model():
if len(memory) < BATCH_SIZE:
return
transitions = memory.sample(BATCH_SIZE)
# Transpose the batch (see http://stackoverflow.com/a/19343/3343043 for
# detailed explanation).
batch = Transition(*zip(*transitions))
# Compute a mask of non-final states and concatenate the batch elements
non_final_mask = torch.tensor(tuple(map(lambda s: s is not None,
batch.next_state)), device=device, dtype=torch.uint8)
non_final_next_states = torch.cat([s for s in batch.next_state
if s is not None])
state_batch = torch.cat(batch.state)
action_batch = torch.cat(batch.action)
reward_batch = torch.cat(batch.reward)
# Compute Q(s_t, a) - the model computes Q(s_t), then we select the
# columns of actions taken
state_action_values = policy_net(state_batch).gather(1, action_batch)
# Compute V(s_{t+1}) for all next states.
next_state_values = torch.zeros(BATCH_SIZE, device=device)
next_state_values[non_final_mask] = target_net(non_final_next_states).max(1)[0].detach()
# Compute the expected Q values
expected_state_action_values = (next_state_values * GAMMA) + reward_batch
# Compute Huber loss
loss = F.smooth_l1_loss(state_action_values, expected_state_action_values.unsqueeze(1))
# Optimize the model
optimizer.zero_grad()
loss.backward()
for param in policy_net.parameters():
param.grad.data.clamp_(-1, 1)
optimizer.step()
```
Below, you can find the main training loop. At the beginning we reset
the environment and initialize the ``state`` Tensor. Then, we sample
an action, execute it, observe the next screen and the reward (always
1), and optimize our model once. When the episode ends (our model
fails), we restart the loop.
Below, `num_episodes` is set small. You should download
the notebook and run a lot more episodes.
```
num_episodes = 10
for i_episode in range(num_episodes):
# Initialize the environment and state
env.reset()
last_screen = get_screen()
current_screen = get_screen()
state = current_screen - last_screen
for t in count():
# Select and perform an action
action = select_action(state)
_, reward, done, _ = env.step(action.item())
reward = torch.tensor([reward], device=device)
# Observe new state
last_screen = current_screen
current_screen = get_screen()
if not done:
next_state = current_screen - last_screen
else:
next_state = None
# Store the transition in memory
memory.push(state, action, next_state, reward)
# Move to the next state
state = next_state
# Perform one step of the optimization (on the target network)
optimize_model()
if done:
episode_durations.append(t + 1)
plot_durations()
break
# Update the target network
if i_episode % TARGET_UPDATE == 0:
target_net.load_state_dict(policy_net.state_dict())
print('Complete')
env.render()
env.close()
# plt.ioff()
plt.show()
```
##### Copyright 2019 The TensorFlow Authors.
```
#@title Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
#@title MIT License
#
# Copyright (c) 2017 François Chollet
#
# Permission is hereby granted, free of charge, to any person obtaining a
# copy of this software and associated documentation files (the "Software"),
# to deal in the Software without restriction, including without limitation
# the rights to use, copy, modify, merge, publish, distribute, sublicense,
# and/or sell copies of the Software, and to permit persons to whom the
# Software is furnished to do so, subject to the following conditions:
#
# The above copyright notice and this permission notice shall be included in
# all copies or substantial portions of the Software.
#
# THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
# IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
# FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL
# THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
# LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING
# FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER
# DEALINGS IN THE SOFTWARE.
```
# Movie review text classification with Keras and TensorFlow Hub
<table class="tfo-notebook-buttons" align="left">
<td>
    <a target="_blank" href="https://www.tensorflow.org/tutorials/keras/text_classification_with_hub"><img src="https://www.tensorflow.org/images/tf_logo_32px.png" />View on TensorFlow.org</a>
</td>
<td>
    <a target="_blank" href="https://colab.research.google.com/github/tensorflow/docs-l10n/blob/master/site/ko/tutorials/keras/text_classification_with_hub.ipynb"><img src="https://www.tensorflow.org/images/colab_logo_32px.png" />Run in Google Colab</a>
</td>
<td>
    <a target="_blank" href="https://github.com/tensorflow/docs-l10n/blob/master/site/ko/tutorials/keras/text_classification_with_hub.ipynb"><img src="https://www.tensorflow.org/images/GitHub-Mark-32px.png" />View source on GitHub</a>
</td>
<td>
<a href="https://storage.googleapis.com/tensorflow_docs/docs-l10n/site/ko/tutorials/keras/text_classification_with_hub.ipynb"><img src="https://www.tensorflow.org/images/download_logo_32px.png" />Download notebook</a>
</td>
</table>
Note: This document was translated by the TensorFlow community. Because community translations are best-effort, there is no guarantee that they exactly reflect the latest [official English documentation](https://www.tensorflow.org/?hl=en). If you can improve this translation, please send a pull request to the [tensorflow/docs-l10n](https://github.com/tensorflow/docs-l10n/) GitHub repository. To join the translation or review effort, email [docs-ko@tensorflow.org](https://groups.google.com/a/tensorflow.org/forum/#!forum/docs-ko).
This notebook classifies movie review text as *positive* or *negative*. This is an example of *binary* (two-class) classification, an important and widely applicable kind of machine learning problem.

This tutorial demonstrates a basic application of transfer learning with TensorFlow Hub and Keras.

It uses the [IMDB dataset](https://www.tensorflow.org/api_docs/python/tf/keras/datasets/imdb), which contains the text of 50,000 movie reviews collected from the [Internet Movie Database](https://www.imdb.com/). These are split into 25,000 reviews for training and 25,000 for testing. The training and testing sets are *balanced*, meaning they contain an equal number of positive and negative reviews.

This notebook uses [tf.keras](https://www.tensorflow.org/guide/keras), a high-level Python API for building and training models in TensorFlow, and [TensorFlow Hub](https://www.tensorflow.org/hub), a library and platform for transfer learning. For a more advanced text classification tutorial using `tf.keras`, see the [MLCC Text Classification Guide](https://developers.google.com/machine-learning/guides/text-classification/).
```
import numpy as np
import tensorflow as tf
!pip install tensorflow-hub
!pip install tfds-nightly
import tensorflow_hub as hub
import tensorflow_datasets as tfds
print("버전: ", tf.__version__)
print("즉시 실행 모드: ", tf.executing_eagerly())
print("허브 버전: ", hub.__version__)
print("GPU", "사용 가능" if tf.config.experimental.list_physical_devices("GPU") else "사용 불가능")
```
## Download the IMDB dataset

The IMDB dataset is available in [imdb reviews](https://www.tensorflow.org/datasets/catalog/imdb_reviews) on [TensorFlow Datasets](https://www.tensorflow.org/datasets). The following code downloads the IMDB dataset to your machine (or the Colab runtime):
```
# Split the training set 60/40.
# This ends up with 15,000 examples for training, 10,000 for validation and 25,000 for testing.
train_data, validation_data, test_data = tfds.load(
name="imdb_reviews",
split=('train[:60%]', 'train[60%:]', 'test'),
as_supervised=True)
```
## Explore the data

Let's take a moment to look at the format of the data. Each example is a sentence representing a movie review, together with a corresponding label. The label is the integer 0 or 1, where 0 means a negative review and 1 a positive review.

Let's print the first 10 examples.
```
train_examples_batch, train_labels_batch = next(iter(train_data.batch(10)))
train_examples_batch
```
Let's also print the first 10 labels.
```
train_labels_batch
```
## Build the model

The neural network is created by stacking layers. This requires three main architectural decisions:

* How to represent the text?
* How many layers to use in the model?
* How many *hidden units* to use for each layer?

In this example, the input data consists of sentences. The labels to predict are either 0 or 1.

One way to represent the text is to convert sentences into embedding vectors. You can then use a pre-trained text embedding as the first layer, which has three advantages:

* You don't have to worry about text preprocessing.
* You benefit from transfer learning.
* The embedding has a fixed size, so it's simpler to process.

For this example you will use a **pre-trained text embedding model** from [TensorFlow Hub](https://www.tensorflow.org/hub) called [google/tf2-preview/gnews-swivel-20dim/1](https://tfhub.dev/google/tf2-preview/gnews-swivel-20dim/1).

There are three other pre-trained models you can test:

* [google/tf2-preview/gnews-swivel-20dim-with-oov/1](https://tfhub.dev/google/tf2-preview/gnews-swivel-20dim-with-oov/1) - the same as [google/tf2-preview/gnews-swivel-20dim/1](https://tfhub.dev/google/tf2-preview/gnews-swivel-20dim/1), but with 2.5% of the vocabulary converted to OOV buckets. This can help if the vocabulary of the task and the vocabulary of the model don't fully overlap.
* [google/tf2-preview/nnlm-en-dim50/1](https://tfhub.dev/google/tf2-preview/nnlm-en-dim50/1) - a larger model with an embedding dimension of 50 and a vocabulary of up to about 1 million words.
* [google/tf2-preview/nnlm-en-dim128/1](https://tfhub.dev/google/tf2-preview/nnlm-en-dim128/1) - an even larger model with an embedding dimension of 128 and a vocabulary of up to about 1 million words.

Let's first create a Keras layer that uses a TensorFlow Hub model to embed sentences, and try it out on a few examples. Note that no matter the length of the input text, the output shape of the embeddings is `(num_examples, embedding_dimension)`.
```
embedding = "https://tfhub.dev/google/tf2-preview/gnews-swivel-20dim/1"
hub_layer = hub.KerasLayer(embedding, input_shape=[],
dtype=tf.string, trainable=True)
hub_layer(train_examples_batch[:3])
```
Now let's build the full model:
```
model = tf.keras.Sequential()
model.add(hub_layer)
model.add(tf.keras.layers.Dense(16, activation='relu'))
model.add(tf.keras.layers.Dense(1))
model.summary()
```
The layers are stacked sequentially to build the classifier:

1. The first layer is a TensorFlow Hub layer. This layer uses a pre-trained model to map a sentence into its embedding vector. The pre-trained text embedding model used here ([google/tf2-preview/gnews-swivel-20dim/1](https://tfhub.dev/google/tf2-preview/gnews-swivel-20dim/1)) splits the sentence into tokens, embeds each token, and then combines the embeddings. The resulting dimensions are `(num_examples, embedding_dimension)`.
2. This fixed-length output vector is piped through a fully-connected (`Dense`) layer with 16 hidden units.
3. The last layer is a fully-connected layer with a single output node. It outputs a raw score (a logit); applying a `sigmoid` to it gives a float between 0 and 1, representing a probability or confidence level.

Now, let's compile the model.

### Loss function and optimizer

A model needs a loss function and an optimizer for training. Since this is a binary classification problem and the model outputs a probability-like score (a single-unit final layer, whose raw logit output is handled by `from_logits=True` below), you'll use the `binary_crossentropy` loss function.

This isn't the only possible choice of loss function; you could, for instance, choose `mean_squared_error`. But, generally, `binary_crossentropy` is better for dealing with probabilities: it measures the "distance" between probability distributions, or in our case, between the ground-truth distribution and the predictions.

Later, when exploring regression problems (say, predicting the price of a house), you'll see how to use another loss function called mean squared error.
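To make the "distance between distributions" idea concrete, here is a tiny plain-Python sketch of binary cross-entropy for a single label/probability pair (illustration only; Keras computes this for you via `tf.keras.losses.BinaryCrossentropy`):

```python
import math

def binary_crossentropy(y_true, p):
    # y_true is the 0/1 label, p is the predicted probability of class 1
    return -(y_true * math.log(p) + (1 - y_true) * math.log(1 - p))

# A confident correct prediction has low loss; a confident wrong one, high loss:
print(binary_crossentropy(1, 0.9))  # about 0.105
print(binary_crossentropy(1, 0.1))  # about 2.303
```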
Now, configure the model to use an optimizer and a loss function:
```
model.compile(optimizer='adam',
loss=tf.keras.losses.BinaryCrossentropy(from_logits=True),
metrics=['accuracy'])
```
## Train the model

Train the model for 20 epochs in mini-batches of 512 samples, i.e. 20 iterations over all samples in the training data. While training, monitor the model's loss and accuracy on the 10,000 samples from the validation set:
```
history = model.fit(train_data.shuffle(10000).batch(512),
epochs=20,
validation_data=validation_data.batch(512),
verbose=1)
```
## Evaluate the model

Let's see how the model performs. Two values are returned: loss (a number representing the error; lower is better) and accuracy.
```
results = model.evaluate(test_data.batch(512), verbose=2)
for name, value in zip(model.metrics_names, results):
print("%s: %.3f" % (name, value))
```
This fairly naive approach achieves an accuracy of about 87%. With more advanced approaches, the model should get closer to 95%.

## Further reading

For a more general way to work with string inputs, and for a more detailed analysis of the progress of accuracy and loss during training, see [this tutorial](https://www.tensorflow.org/tutorials/keras/basic_text_classification).
# Quantum Gates in Qiskit
Start with some typical setup and the definition of useful helper functions, which you are encouraged to look at.
Then, head to the [exercises start](#Exercises-Start-Here) to start coding!
```
from qiskit import QuantumCircuit, ClassicalRegister, QuantumRegister
from qiskit import execute
# Choose the drawer you like best:
from qiskit.tools.visualization import matplotlib_circuit_drawer as draw
#from qiskit.tools.visualization import circuit_drawer as draw
from qiskit import IBMQ
IBMQ.load_accounts() # make sure you have setup your token locally to use this
%matplotlib inline
```
## Utils for visualizing experimental results
```
import matplotlib.pyplot as plt
def show_results(D):
# D is a dictionary with classical bits as keys and count as value
# example: D = {'000': 497, '001': 527}
plt.bar(range(len(D)), list(D.values()), align='center')
plt.xticks(range(len(D)), list(D.keys()))
plt.show()
```
## Utils for executing circuits
```
from qiskit import Aer
# See a list of available local simulators
print("Aer backends: ", Aer.backends())
# see a list of available remote backends (these are freely given by IBM)
print("IBMQ Backends: ", IBMQ.backends())
```
### Execute locally
```
# execute a circuit and either draw it or display a histogram of the results
def execute_locally(qc, draw_circuit=False):
# Compile and run the Quantum circuit on a simulator backend
backend_sim = Aer.get_backend('qasm_simulator')
job_sim = execute(qc, backend_sim)
result_sim = job_sim.result()
result_counts = result_sim.get_counts(qc)
# Print the results
print("simulation: ", result_sim, result_counts)
if draw_circuit: # draw the circuit
draw(qc)
else: # or show the results
show_results(result_counts)
```
### Execute remotely
```
from qiskit.backends.ibmq import least_busy
import time
# Compile and run on a real device backend
def execute_remotely(qc, draw_circuit=False):
if draw_circuit: # draw the circuit
draw(qc)
try:
# select least busy available device and execute.
least_busy_device = least_busy(IBMQ.backends(simulator=False))
print("Running on current least busy device: ", least_busy_device)
# running the job
job_exp = execute(qc, backend=least_busy_device, shots=1024, max_credits=10)
lapse, interval = 0, 10
while job_exp.status().name != 'DONE':
print('Status @ {} seconds'.format(interval * lapse))
print(job_exp.status())
time.sleep(interval)
lapse += 1
print(job_exp.status())
exp_result = job_exp.result()
result_counts = exp_result.get_counts(qc)
# Show the results
print("experiment: ", exp_result, result_counts)
if not draw_circuit: # show the results
show_results(result_counts)
except:
print("All devices are currently unavailable.")
```
## Building the circuit
```
def new_circuit(size):
# Create a Quantum Register with size qubits
qr = QuantumRegister(size)
# Create a Classical Register with size bits
cr = ClassicalRegister(size)
# Create a Quantum Circuit acting on the qr and cr register
return qr, cr, QuantumCircuit(qr, cr)
```
---
<h1 align="center">Exercises Start Here</h1>
Make sure you ran all the above cells in order, as the following exercises use functions defined and imported above.
## Adding Gates
### Hadamard
This gate is required to make superpositions.
**TASK:** Create a new circuit with 2 qubits using `new_circuit` (very useful to reconstruct your circuit in Jupyter)
**TASK:** Add a Hadamard on the _least important_ qubit
```
# H gate on qubit 0
```
**TASK:** Perform a measurement on that qubit to the first bit in the register
```
# measure the specific qubit
```
**TASK:** check the result using `execute_locally`; test both `True` and `False` for the `draw_circuit` option
The result should be something like `COMPLETED {'00': 516, '01': 508}`.
**TASK:** What does this mean?
> That we got our superposition as expected, approximately 50% of the experiments yielded 0 and the other 50% yielded 1.
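As a quick sanity check outside Qiskit, the 50/50 statistics can be reproduced with plain NumPy linear algebra (a sketch of the math, not part of the exercise):

```python
import numpy as np

# Hadamard matrix: H = 1/sqrt(2) * [[1, 1], [1, -1]]
H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)

ket0 = np.array([1.0, 0.0])  # the |0> state

state = H @ ket0             # amplitudes after the Hadamard
probs = np.abs(state) ** 2   # measurement probabilities

print(probs)  # -> [0.5 0.5]
```

The squared amplitudes match the roughly equal counts seen in the simulation.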
---
### X Gate (Pauli-X)
This gate is also referred to as a bit-flip.
**TASK:** Create a new circuit with 2 qubits using `new_circuit` (very useful to reconstruct your circuit in Jupyter)
**TASK:** Add an X gate on the _most important_ qubit
```
# X gate on qubit 1
```
**TASK:** Perform a measurement on that qubit to the first bit in the register
```
# measure the specific qubit
```
**TASK:** check the result using `execute_locally`; test both `True` and `False` for the `draw_circuit` option
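Unlike the Hadamard, the X gate acts deterministically on the basis states. A minimal NumPy sketch (outside Qiskit) of the bit-flip:

```python
import numpy as np

# Pauli-X (bit-flip) matrix
X = np.array([[0, 1], [1, 0]])

ket0 = np.array([1, 0])  # |0>
ket1 = np.array([0, 1])  # |1>

print(X @ ket0)  # -> [0 1], i.e. |1>
print(X @ ket1)  # -> [1 0], i.e. |0>
```

So the circuit above should yield `1` on the measured qubit in (nearly) every shot.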
## Free flow
At this stage you are encouraged to repeat (and tweak as you wish) the above tasks for the Hadamard and X gates, especially on single-qubit gates.
---
### CNOT (Controlled NOT, Controlled X gate)
This gate uses a control qubit and a target qubit: it flips the target qubit whenever the control qubit is in state `1`.
**TASK:** Create a new circuit with 2 qubits using `new_circuit` (very useful to reconstruct your circuit in Jupyter)
**TASK:** Add a CNOT gate with the _least important_ qubit as the control and the other as the target
```
# CNOT gate
```
**TASK:** Perform a measurement on the qubits
```
# measure the specific qubit
```
**TASK:** check the result using `execute_locally`; test both `True` and `False` for the `draw_circuit` option
**TASK:** Since a single CNOT does not seem very powerful, go ahead and add a Hadamard gate to the two qubits (before the CNOT gate) and redo the experiment (you can also try this with a Hadamard on only one of the qubits).
```
# H gate on 2 qubits
# CNOT gate
# measure
```
## Free flow: Changing the direction of a CNOT gate
Check this [application of the CNOT](https://github.com/Qiskit/ibmqx-user-guides/blob/master/rst/full-user-guide/004-Quantum_Algorithms/061-Basic_Circuit_Identities_and_Larger_Circuits.rst#changing-the-direction-of-a-cnot-gate) and try to replicate it using Qiskit!
Try to replicate it using the unitary transformations as well, pen and paper is better suited for this.

A CNOT equals Hadamards on both qubits, a CNOT in the opposite direction, and Hadamards on both qubits again!
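The identity can also be checked numerically. A NumPy sketch (using the little-endian `|q1 q0>` basis ordering; this verifies the math rather than being Qiskit code):

```python
import numpy as np

H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)
HH = np.kron(H, H)  # a Hadamard on each of the two qubits

# CNOT with qubit 0 as control and qubit 1 as target (basis order |q1 q0>)
cnot_01 = np.array([[1, 0, 0, 0],
                    [0, 0, 0, 1],
                    [0, 0, 1, 0],
                    [0, 1, 0, 0]])

# CNOT with qubit 1 as control and qubit 0 as target
cnot_10 = np.array([[1, 0, 0, 0],
                    [0, 1, 0, 0],
                    [0, 0, 0, 1],
                    [0, 0, 1, 0]])

# Sandwiching a CNOT between Hadamards reverses its direction
print(np.allclose(HH @ cnot_01 @ HH, cnot_10))  # -> True
```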
## Free flow: Swapping the states of qubits with a CNOT gate
Check this [application of the CNOT](https://github.com/Qiskit/ibmqx-user-guides/blob/master/rst/full-user-guide/004-Quantum_Algorithms/061-Basic_Circuit_Identities_and_Larger_Circuits.rst#swapping-the-states-of-qubits) and try to replicate it using Qiskit!
Try to replicate it using the unitary transformations as well, pen and paper is better suited for this.

Three CNOT gates allow 2 qubits to swap their original values. Can you do this with 2 classical bits?
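Before wiring it up in Qiskit, the three-CNOT identity can be verified with the same matrix conventions (a sketch, basis order `|q1 q0>`):

```python
import numpy as np

# CNOTs in both directions, basis order |q1 q0>
cnot_01 = np.array([[1, 0, 0, 0], [0, 0, 0, 1], [0, 0, 1, 0], [0, 1, 0, 0]])
cnot_10 = np.array([[1, 0, 0, 0], [0, 1, 0, 0], [0, 0, 0, 1], [0, 0, 1, 0]])

# SWAP exchanges |01> and |10>
swap = np.array([[1, 0, 0, 0], [0, 0, 1, 0], [0, 1, 0, 0], [0, 0, 0, 1]])

# CNOT, reversed CNOT, CNOT composes to a SWAP
print(np.allclose(cnot_01 @ cnot_10 @ cnot_01, swap))  # -> True
```

Classically this is the XOR-swap trick (`a ^= b; b ^= a; a ^= b`), which is exactly what the three CNOTs implement.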
## Executing on a remote device
If you do this, you may have to wait for some time (usually a few minutes), depending on the current demand on the devices.
**TASK:** Create a circuit that simply measures 5 qubits and run it on a remote device using `execute_remotely`!
```
# measure
# execute_remotely(circuit)
```
**TASK:** Comment on the results
>
**Important:** Once you get the results, you may see that, in fact, most of the iterations resulted in `00000`, but you will also see a few hits on other bit configurations (typically mostly composed of `0`s, like `00001` or `00010`). This is due to **experimental error** on the quantum device and is a concern to take into account when deploying to real devices!
This script:
- Filters only ependymoma samples
- Adds a disease group based on primary site to ependymoma samples
- Matches up DNA and RNA BSIDs from the pbta-histologies file
- Creates a notebook that can be used in the next script
```
# Importing modules
import argparse
import pandas as pd
# Reading in the input pbta-histologies file
pbta_histologies = pd.read_csv("/Users/kogantit/Documents/OpenPBTA/OpenPBTA-analysis/data/pbta-histologies.tsv", sep="\t")
```
This function takes in the primary site and categorizes the sample into "supratentorial" or "infratentorial".
Two primary sites are categorized as "None":
1) Ventricles
2) Other locations NOS
```
def group_disease(primary_site):
    infratentorial_sites = ["Posterior Fossa", "Optic", "Spinal", "Tectum", "Spine"]
    supratentorial_sites = ["Frontal Lobe", "Parietal Lobe", "Occipital Lobe", "Temporal Lobe"]
    if any(site in primary_site for site in infratentorial_sites):
        return "infratentorial"
    elif any(site in primary_site for site in supratentorial_sites):
        return "supratentorial"
    else:
        return "None"
# Reading the pbta-histologies table and filtering for ependymoma samples
EP = pbta_histologies[pbta_histologies["integrated_diagnosis"]=="Ependymoma"]
# Filtering only RNA samples and retrieving only the BSID, primary site, PTID, sample ID and experimental strategy columns
EP_rnaseq_samples = EP[EP["experimental_strategy"] == "RNA-Seq"][["Kids_First_Biospecimen_ID", "primary_site",
"Kids_First_Participant_ID", "sample_id", "experimental_strategy"]]
EP_rnaseq_samples.head()
# Adding a disease group column to the above DF based on the group_disease function
EP_rnaseq_samples["disease_group"] = [group_disease(primary) for primary in EP_rnaseq_samples["primary_site"]]
EP_rnaseq_samples.head()
# List with only RNA sample PTIDs
EP_rnasamplenames_PTIDs = list(EP_rnaseq_samples["Kids_First_Participant_ID"])
# Filtering for DNA samples from EP(dataframe from pbta-histologies and only ependymoma samples)
all_WGS = EP[EP["experimental_strategy"]=="WGS"]
# Only retrieving DNA samples PTID, if it is listed in RNA PTID's (basically filtering for ependymoma DNA samples)
WGSPT = all_WGS[all_WGS["Kids_First_Participant_ID"].isin(EP_rnasamplenames_PTIDs)]
# Create a new DF for DNA samples with BSID, PTID and sample_ID
WGS_dnaseqsamples = WGSPT[["Kids_First_Biospecimen_ID", "Kids_First_Participant_ID", "sample_id"]]
WGS_dnaseqsamples.head()
# Renaming the columns so they don't conflict in the merge step
EP_rnaseq_samples = EP_rnaseq_samples.rename(columns={"Kids_First_Biospecimen_ID":"Kids_First_Biospecimen_ID_RNA"})
WGS_dnaseqsamples = WGS_dnaseqsamples.rename(columns={"Kids_First_Biospecimen_ID":"Kids_First_Biospecimen_ID_DNA"})
```
- Merging both dataframes on "sample_id" (I found this was the only unique ID shared between DNA and RNA samples; participant IDs are repeated across multiple RNA and DNA samples. Example: two RNA BSIDs can have the same participant ID)
- Every RNA and DNA BSID combo with the same sample_id also has the same participant ID, so I am renaming the column from "Kids_First_Participant_ID_x" to "Kids_First_Participant_ID" for simplicity
- Some RNA samples have missing corresponding DNA BSIDs (which is the reason for the left join based on the RNA table)
```
EP_rnaseq_WGS = EP_rnaseq_samples.merge(WGS_dnaseqsamples, on = "sample_id", how = "left")
EP_rnaseq_WGS = EP_rnaseq_WGS.rename(columns={"Kids_First_Participant_ID_x":"Kids_First_Participant_ID"})
EP_rnaseq_WGS.fillna('NA', inplace=True)
EP_rnaseq_WGS.head()
```
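To see which RNA samples end up without a DNA partner after the left merge, pandas' `indicator` flag is handy. A toy sketch with made-up IDs (not from the real histologies file):

```python
import pandas as pd

# Hypothetical stand-in data
rna = pd.DataFrame({"Kids_First_Biospecimen_ID_RNA": ["BS_R1", "BS_R2", "BS_R3"],
                    "sample_id": ["S1", "S2", "S3"]})
dna = pd.DataFrame({"Kids_First_Biospecimen_ID_DNA": ["BS_D1", "BS_D2"],
                    "sample_id": ["S1", "S2"]})

# Left join keeps every RNA row; the _merge column flags rows with no DNA match
merged = rna.merge(dna, on="sample_id", how="left", indicator=True)
missing_dna = merged[merged["_merge"] == "left_only"]["sample_id"].tolist()
print(missing_dna)  # -> ['S3']
```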
# HydroTrend
* Link to this notebook: https://github.com/csdms/pymt/blob/master/notebooks/hydrotrend.ipynb
* Package installation command: `$ conda install notebook pymt_hydrotrend`
* Command to download a local copy:
`$ curl -O https://raw.githubusercontent.com/csdms/pymt/master/notebooks/hydrotrend.ipynb`
HydroTrend is a 2D hydrological water balance and transport model that simulates water discharge and sediment load at a river outlet. You can read more about the model, find references or download the C source code at: https://csdms.colorado.edu/wiki/Model:HydroTrend.
This notebook has been created by Irina Overeem, September 18, 2019.
### River Sediment Supply Modeling
This notebook is meant to give you a better understanding of what HydroTrend is capable of. In this example we are using a theoretical river basin of ~1990 km<sup>2</sup>, with 1200m of relief and a river length of
~100 km. All parameters that are shown by default once the HydroTrend Model is loaded are based
on a present-day, temperate climate. Whereas these runs are not meant to be specific, we are
using parameters that are realistic for the [Waiapaoa River][map_of_waiapaoa] in New Zealand. The Waiapaoa River
is located on North Island and receives high rain and has erodible soils, so the river sediment
loads are exceptionally high. It has been called the *"dirtiest small river in the world"*.
A more detailed description of applying HydroTrend to the Waipaoa basin, New Zealand has been published in WRR: [hydrotrend_waipaoa_paper].
[map_of_waiapaoa]: https://www.google.com/maps/place/Waipaoa+River/@-38.5099042,177.7668002,71814m/data=!3m1!1e3!4m5!3m4!1s0x6d65def908624859:0x2a00ef6165e1dfa0!8m2!3d-38.5392405!4d177.8843782
[hydrotrend_presentation]: https://csdms.colorado.edu/wiki/File:SedimentSupplyModeling02_2013.ppt
[hydrotrend_waipaoa_paper]: http://dx.doi.org/10.1029/2006WR005570
### Run HydroTrend Simulations with pymt
Now we will be using the capability of the Python Modeling Tool, pymt. Pymt is a Python toolkit for running and coupling Earth surface models.
https://csdms.colorado.edu/wiki/PyMT
```
# To start, import numpy and matplotlib.
import matplotlib.pyplot as plt
import numpy as np
# Then we import the package
import pymt.models
hydrotrend = pymt.models.Hydrotrend()
import pymt
pymt.__version__
```
## Learn about the Model Input
<br>
HydroTrend will now be activated in PyMT. You can find information on the model, the developer, the papers that describe the model in more detail, etc.
Importantly, you can scroll down a bit to the Parameters list, which shows what parameters the model uses to control the simulations. The list is alphabetical and uses precisely specified 'Standard Names'.
Note that every parameter has a 'default' value, so that when you do not list it in the configure command, you will run with these values.
```
# Get basic information about the HydroTrend model
help(hydrotrend)
```
### Exercise 1: Explore the Hydrotrend base-case river simulation
For this case study, first we will create a subdirectory in which the basecase (BC) simulation will be implemented.
Then we specify for how long we will run a simulation: for 100 years at daily time-step.
This means you run Hydrotrend for 36,500 days total.
This is also the line of code where you would add other input parameters with their values.
```
# Set up Hydrotrend model by indicating the number of years to run
config_file, config_folder = hydrotrend.setup("_hydrotrendBC", run_duration=100)
```
With the cat command you can print character by character one of the two input files that HydroTrend uses.
HYDRO0.HYPS: This first file specifies the river basin hypsometry - the surface area per elevation zone. The hypsometry captures the geometric characteristics of the river basin: how high the relief is, how much upland there is versus lowland, where the snowfall elevation line would be, etcetera. <br>
HYDRO.IN: This other file specifies the basin and climate input data.
```
cat _hydrotrendBC/HYDRO0.HYPS
cat _hydrotrendBC/HYDRO.IN
#In pymt one can always find out what output a model generates by using the .output_var_names method.
hydrotrend.output_var_names
# Now we initialize the model with the configure file and in the configure folder
hydrotrend.initialize(config_file, config_folder)
# this line of code lists the time parameters: when does the simulation start, how long does it run, and at what timestep?
hydrotrend.start_time, hydrotrend.time, hydrotrend.end_time, hydrotrend.time_step, hydrotrend.time_units
# this code declares numpy arrays for several important parameters we want to save
n_days = int(hydrotrend.end_time)
q = np.empty(n_days)   # river discharge at the outlet
qs = np.empty(n_days)  # sediment load at the outlet
cs = np.empty(n_days)  # suspended sediment concentration for different grainsize classes at the outlet
qb = np.empty(n_days)  # bedload at the outlet
# here we have coded up the time loop using i as the index
# we update the model one timestep at a time, until we reach the end time
# for each time step we also get the values for the output parameters we wish to save
for i in range(n_days):
hydrotrend.update()
q[i] = hydrotrend.get_value("channel_exit_water__volume_flow_rate")
qs[i] = hydrotrend.get_value("channel_exit_water_sediment~suspended__mass_flow_rate")
cs[i] = hydrotrend.get_value("channel_exit_water_sediment~suspended__mass_concentration")
qb[i] = hydrotrend.get_value("channel_exit_water_sediment~bedload__mass_flow_rate")
# We can plot the simulated output timeseries of Hydrotrend, for example the river discharge
plt.plot(q)
plt.title('HydroTrend simulation of 100 year river discharge, Waiapaoa River')
plt.ylabel('river discharge in m3/sec')
plt.show()
# Or you can plot a subset of the simulated daily timeseries using the index
#for example the first year
plt.plot(q[0:365], 'black')
# compare with the last year
plt.plot(q[-366:-1],'grey')
plt.title('HydroTrend simulation of first and last year discharge, Waiapaoa River')
plt.show()
# Of course, it is important to calculate statistical properties of the simulated parameters
print(q.mean())
hydrotrend.get_var_units("channel_exit_water__volume_flow_rate")
```
## <font color = green> Assignment 1 </font>
Calculate mean water discharge Q, mean suspended load Qs, mean sediment concentration Cs, and mean bedload Qb for this 100 year simulation of the river dynamics of the Waiapaoa River.
Note all values are reported as daily averages. What are the units?
```
# your code goes here
```
## <font color = green> Assignment 2 </font>
Identify the highest flood event for this simulation. Is this the 100-year flood? Please list a definition of a 100 year flood, and discuss whether the modeled extreme event fits this definition.
Plot the year of Q-data which includes the flood.
```
# here you can calculate the maximum river discharge.
# your code to determine which day and which year encompass the maximum discharge go here
# Hint: you will want to determine the index of this day first; look into numpy.argmax and numpy.argmin
# as a sanity check you can see whether the plot y-axis seems to go up to the maximum you had calculated in the previous step
# as a sanity check you can look in the plot of all the years to see whether the timing your code predicts is correct
# type your explanation about the 100 year flood here.
```
## <font color = green> Assignment 3 </font>
Calculate the mean annual sediment load for this river system.
Then compare the annual load of the Waiapaoa River to the Mississippi River. <br>
To compare the mean annual load to other river systems you will need to calculate its sediment yield.
Sediment Yield is defined as sediment load normalized for the river drainage area;
so it can be reported in T/km2/yr.
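As a worked example of the yield arithmetic with assumed numbers (a 150 kg/s mean load and the ~1990 km² basin area from the introduction; your simulated values will differ):

```python
# Illustrative inputs, not simulation output
mean_qs_kg_per_s = 150.0      # assumed mean suspended load
drainage_area_km2 = 1990.0    # basin area from the notebook introduction

seconds_per_year = 365 * 24 * 3600
annual_load_tonnes = mean_qs_kg_per_s * seconds_per_year / 1000.0  # kg -> tonnes
sediment_yield = annual_load_tonnes / drainage_area_km2            # T/km2/yr
print(round(sediment_yield))  # -> 2377
```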
```
# your code goes here
# you will have to sum all days of the individual years, to get the annual loads, then calculate the mean over the 100 years.
# one possible trick is to use the .reshape() method
# plot a graph of the 100 years timeseries of the total annual loads
# take the mean over the 100 years
#your evaluation of the sediment load of the Waiapaoa River and its comparison to the Mississippi River goes here.
#Hint: use the following paper to read about the Mississippi sediment load (Blum, M, Roberts, H., 2009. Drowning of the Mississippi Delta due to insufficient sediment supply and global sea-level rise, Nature Geoscience).
```
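The `.reshape()` hint works like this on a toy series (HydroTrend's daily record here is exactly 100 × 365 days, so the reshape is exact):

```python
import numpy as np

# Toy daily series: 100 years x 365 days of a constant 1 unit/day load
daily = np.ones(100 * 365)

# reshape groups the days by year; summing each row gives annual totals
annual_totals = daily.reshape(100, 365).sum(axis=1)
print(annual_totals.shape, annual_totals.mean())  # -> (100,) 365.0
```

The same pattern applied to `qs` gives the 100 annual sediment loads to average and plot.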
### HydroTrend Exercise 2: How does a river system respond to climate change; two simple scenarios for the coming century.
Now we will look at changing climatic conditions in a small river basin. We'll change temperature and precipitation regimes and compare discharge and sediment load characteristics to the original basecase. And we will look at the potential implications of changes in the peak events.
Modify the mean annual temperature T, the mean annual precipitation P. You can specify trends over time, by modifying the parameter ‘change in mean annual temperature’ or ‘change in mean annual precipitation’. HydroTrend runs at daily timestep, and thus can deal with seasonal variations in temperature and precipitation for a basin. The model ingests monthly mean input values for these two climate parameters and their monthly standard deviations, ideally the values would be derived from analysis of a longterm record of daily climate data. You can adapt seasonal trends by using the monthly values.
## <font color = green> Assignment 4 </font>
What happens to river discharge, suspended load and bedload if the mean annual temperature in this specific river basin increases by 4 °C over the next 50 years? In this assignment we set up a new simulation for a warming climate.
```
# Set up a new run of the Hydrotrend model
# Create a new config file a different folder for input and output files, indicating the number of years to run, and specify the change in mean annual temparture parameter
hydrotrendHT = pymt.models.Hydrotrend()
config_file, config_folder = hydrotrendHT.setup("_hydrotrendhighT", run_duration=50, change_in_mean_annual_temperature=0.08)
# intialize the new simulation
hydrotrendHT.initialize(config_file, config_folder)
# the code for the timeloop goes here
# I use the abbreviation HT for the 'High Temperature' scenario
n_days = int(hydrotrendHT.end_time)
q_HT = np.empty(n_days)   # river discharge at the outlet
qs_HT = np.empty(n_days)  # sediment load at the outlet
cs_HT = np.empty(n_days)  # suspended sediment concentration for different grainsize classes at the outlet
qb_HT = np.empty(n_days)  # bedload at the outlet
for i in range(n_days):
hydrotrendHT.update()
q_HT[i] = hydrotrendHT.get_value("channel_exit_water__volume_flow_rate")
qs_HT[i] = hydrotrendHT.get_value("channel_exit_water_sediment~suspended__mass_flow_rate")
cs_HT[i] = hydrotrendHT.get_value("channel_exit_water_sediment~suspended__mass_concentration")
qb_HT[i] = hydrotrendHT.get_value("channel_exit_water_sediment~bedload__mass_flow_rate")
# your code that prints out the mean river discharge, the mean sediment load and the mean bedload goes here
# print out these same parameters for the basecase for comparison
```
## <font color = green> Assignment 5 </font>
So what is the effect of a warming basin temperature?
How much increase or decrease of river discharge do you see after 50 years? <br>
How is the mean suspended load affected? <br>
How does the mean bedload change? <br>
What happens to the peak event; look at the maximum sediment load event of the last 5 years of the simulation?
```
# type your answers here
```
## <font color = green> Assignment 6 </font>
What happens to river discharge, suspended load and bedload if the mean annual precipitation would increase by 50% in this specific river basin over the next 50 years? Create a new simulation folder, High Precipitation, HP, and set up a run with a trend in future precipitation.
```
# Set up a new run of the Hydrotrend model
# Create a new config file indicating the number of years to run, and specify the change in mean annual precipitation parameter
# initialize the new simulation
# your code for the timeloop goes here
## your code that prints out the mean river discharge, the mean sediment load and the mean bedload goes here
```
## <font color = green> Assignment 7 </font>
In addition, climate model predictions indicate that perhaps precipitation intensity and variability could increase. How would you possibly model this? Discuss how you would modify your input settings for precipitation.
```
#type your answer here
```
### Exercise 3: How do humans affect river sediment loads?
Here we will look at the effect of humans in a river basin. Humans can accelerate erosion
processes, or reduce the sediment loads traveling through a river system. Both concepts can
be simulated; first run 3 simulations systematically increasing the anthropogenic factor (0.5-8.0 is the range).
## <font color = green> Assignment 8 </font>
Describe in your own words the meaning of the human-induced erosion factor, (Eh). This factor is parametrized as the “Antropogenic” factor in HydroTrend. Read more about this in: Syvitski & Milliman, 2007, Geology, Geography, and Humans Battle for Dominance over the Delivery of Fluvial Sediment to the Coastal Ocean. 2007, 115, p. 1–19.
```
# your explanation goes here, can you list two reasons why this factor would be unsuitable or it would fall short?
```
## <font color = green> Bonus Assignment 9 </font>
Model a scenario of a drinking water supply reservoir to be planned in the coastal area of the basin. The reservoir would have 800 km² of contributing drainage area and be 3 km long, 200 m wide and 100 m deep. Set up a simulation with these parameters.
```
# Set up a new 50-year run of the Hydrotrend model
# Create a new directory, and a config file indicating the number of years to run, and specify different reservoir parameters
# initialize the new simulation
# your code for the timeloop and update loop goes here
# plot a bar graph comparing Q mean, Qs mean, Qmax, Qs Max, Qb mean and Qbmax for the basecase run and the reservoir run
# Describe how such a reservoir affects the water and sediment load at the coast (i.e. downstream of the reservoir)?
```
## <font color = green> Bonus Assignment 10 </font>
Set up a simulation for a different river basin.
This means you would need to change the HYDRO0.HYPS file and change some climatic parameters.
There are several hypsometric files packaged with HydroTrend, you can use one of those, but are welcome to do something different!
```
# write a short motivation and description of your scenario
# make a 2 panel plot using the subplot functionality of matplotlib
# One panel would show the hypsometry of the Waiapaoa and the other panel the hypsometry of your selected river basin
# Set up a new 50-year run of the Hydrotrend model
# Create a new directory for this different basin
# initialize the new simulation
# your code for the timeloop and update loop goes here
# plot a line graph comparing Q mean, Qs mean, for the basecase run and the new river basin run
```
## <font color = green> ALL DONE! </font>
```
import os
import seaborn as sns
from scipy import stats
from tqdm import tqdm
import numpy as np
import pandas as pd
from sklearn.feature_extraction.text import TfidfVectorizer, CountVectorizer
from xgboost import XGBClassifier
import nltk
nltk.download('stopwords')
from nltk import word_tokenize
nltk.download('punkt')
import re
import string
from nltk.corpus import stopwords
stop_words = stopwords.words('english')
import lightgbm as lgb
import matplotlib.pyplot as plt
%matplotlib inline
pd.options.display.max_columns = 100
from plotly import tools
import plotly.graph_objs as go
from plotly.offline import init_notebook_mode, iplot
import warnings
warnings.filterwarnings("ignore")
train_df = pd.read_excel('Data_Train_food.xlsx', encoding='latin-1')
test_df = pd.read_excel('Data_Test_food.xlsx', encoding='latin-1')
train_df.head(10)
test_df.head(10)
train_df.shape
train_df.columns
train_df.isnull().sum()
train_df.describe(include='all')
train_df.info()
```
# For Locations and Cuisines
```
# A function to find the maximum number of features in a single cell
def max_features_in_single_row(train_series, test_series, delimiter):
    max_info = 0
    item_lis = list(pd.concat([train_series, test_series]))
    for i in item_lis:
        if len(i.split(delimiter)) > max_info:
            max_info = len(i.split(delimiter))
    print("\n", "-"*35)
    print("Max_Features in One Observation = ", max_info)
    return max_info
# This function splits a column into n features, where n is the maximum number of features in a single cell
def feature_splitter(feat, name, delimiter, max_info):
    item_lis = list(feat)
    extracted_features = {}
    for i in range(max_info):
        extracted_features['{}_Feature_{}'.format(name, i+1)] = []
    print("-"*35)
    print("Features Dictionary : ", extracted_features)
    # tqdm is a progress-bar module that lets us watch the loop's progress
    for i in tqdm(range(len(item_lis))):
        for j in range(max_info):
            try:
                extracted_features['{}_Feature_{}'.format(name, j+1)].append(item_lis[i].split(delimiter)[j].lower().strip())
            except:
                extracted_features['{}_Feature_{}'.format(name, j+1)].append(np.nan)
    return extracted_features
max_features_in_single_row(train_df['Location'], test_df['Location'],',')
max_features_in_single_row(train_df['Cuisines'], test_df['Cuisines'],',')
```
# Rating, Votes and Reviews
```
#A function to find all the non numeric values
def non_numerals(series):
non_numerals = []
for i in series.unique():
try :
i = float(i)
except:
non_numerals.append(i)
return non_numerals
non_numerals(train_df['Rating'])
non_numerals(test_df['Rating'])
non_numerals(train_df['Votes'])
non_numerals(test_df['Votes'])
non_numerals(train_df['Reviews'])
non_numerals(test_df['Reviews'])
# A function to replace the non-numeric values
def replace_nn_with(series, type_, fill_with = None, method = 'mean'):
nn = non_numerals(series)
print('-'*30)
print('-'*30)
print("Non Numerals in column ",series.name," : ",nn)
series = series.replace(nn, np.nan, inplace = False)
nulls = series.isnull().sum()
if fill_with:
series.fillna(fill_with, inplace = True)
print("Filling Non Numerals with {}".format(fill_with))
else:
series = series.replace(nn, np.nan, inplace = False)
if method == 'mean' :
rep = series.astype(float).mean()
print("Filling Non Numerals with MEAN = ", rep)
elif method =='median' :
rep = series.astype(float).median()
print("Filling Non Numerals with MEDIAN = ", rep)
elif method =='min' :
rep = series.astype(float).min()
print("Filling Non Numerals with MINIMUM = ", rep)
else:
print('Please pass a valid method as a string -- ("mean" or "median" or "min")')
return 0
series.fillna(rep, inplace = True)
try:
series = series.astype(type_)
print(nulls, ": observations replaced")
return series
except:
series = series.astype(float)
print(nulls, ": observations replaced")
series = series.astype(type_)
return series
replace_nn_with(train_df['Rating'], float, fill_with = 3.611078, method = 'mean')
replace_nn_with(train_df['Votes'], float, fill_with = '244', method = 'mean')
replace_nn_with(train_df['Reviews'], float, fill_with = 123, method = 'mean')
train_df['Rating'] = train_df['Rating'].apply(lambda x: x.replace('-', '3.6'))
train_df['Rating'] = train_df['Rating'].apply(lambda x: x.replace('NEW', '3.6'))
train_df['Rating'] = train_df['Rating'].apply(lambda x: x.replace('Opening Soon', '3.6'))
train_df['Rating'] = train_df['Rating'].apply(lambda x: x.replace('Temporarily Closed', '3.6'))
train_df['Votes'] = train_df['Votes'].apply(lambda x: x.replace('-', '244'))
train_df['Reviews'] = train_df['Reviews'].apply(lambda x: x.replace('-', '123'))
replace_nn_with(test_df['Rating'], float, fill_with = None, method = 'mean')
replace_nn_with(test_df['Votes'], float, fill_with =None, method = 'mean')
replace_nn_with(test_df['Reviews'], float, fill_with =None, method = 'mean')
test_df['Rating'] = test_df['Rating'].apply(lambda x: x.replace('-', '3.6'))
test_df['Rating'] = test_df['Rating'].apply(lambda x: x.replace('NEW', '3.6'))
test_df['Rating'] = test_df['Rating'].apply(lambda x: x.replace('Opening Soon', '3.6'))
test_df['Votes'] = test_df['Votes'].apply(lambda x: x.replace('-', '226'))
test_df['Reviews'] = test_df['Reviews'].apply(lambda x: x.replace('-', '111'))
train_df.head()
train_df['Average_Cost'] = train_df['Average_Cost'].apply(lambda x: x.replace('₹', ''))
test_df['Average_Cost'] = test_df['Average_Cost'].apply(lambda x: x.replace('₹', ''))
import re
trim_function = lambda x : re.findall("^\s*(.*?)\s*$",str(x))[0]
remove_commas = lambda x: re.sub("[^\d]", "", str(x))
train_df['Average_Cost']= train_df['Average_Cost'].apply(trim_function).apply(remove_commas)
train_df['Average_Cost']=train_df['Average_Cost'].replace(r'^\s*$', np.nan, regex=True)
train_df['Average_Cost']= train_df['Average_Cost'].fillna(train_df['Average_Cost'].value_counts().idxmax())
train_df['Average_Cost']= train_df['Average_Cost'].astype(int)
test_df['Average_Cost']= test_df['Average_Cost'].apply(trim_function).apply(remove_commas).astype(int)
train_df['Minimum_Order'] = train_df['Minimum_Order'].apply(lambda x: x.replace('₹', ''))
test_df['Minimum_Order'] = test_df['Minimum_Order'].apply(lambda x: x.replace('₹', ''))
import re
trim_function = lambda x : re.findall("^\s*(.*?)\s*$",str(x))[0]
remove_commas = lambda x: re.sub("[^\d]", "", str(x))
train_df['Minimum_Order']= train_df['Minimum_Order'].apply(trim_function).apply(remove_commas).astype(int)
test_df['Minimum_Order']= test_df['Minimum_Order'].apply(trim_function).apply(remove_commas).astype(int)
train_df['Rating']= train_df['Rating'].astype(float)
train_df['Votes']= train_df['Votes'].astype(float)
train_df['Reviews']= train_df['Reviews'].astype(float)
test_df['Rating']= test_df['Rating'].astype(float)
test_df['Votes']= test_df['Votes'].astype(float)
test_df['Reviews']= test_df['Reviews'].astype(float)
#Number of words in Location
train_df["sync_num_words"]=train_df['Location'].apply(lambda x :len(str(x).split()))
test_df["sync_num_words"]=test_df['Location'].apply(lambda x :len(str(x).split()))
#Number of unique words in Location
train_df["Syn_num_unique_words"] = train_df['Location'].apply(lambda x: len(set(str(x).split())))
test_df["Syn_num_unique_words"] = test_df['Location'].apply(lambda x: len(set(str(x).split())))
#Number of characters in Location
train_df["Syn_num_chars"] = train_df['Location'].apply(lambda x: len(str(x)))
test_df["Syn_num_chars"] = test_df['Location'].apply(lambda x: len(str(x)))
## Number of stopwords in the Location ##
train_df["Syn_num_stopwords"] = train_df['Location'].apply(lambda x: len([w for w in str(x).lower().split() if w in stop_words]))
test_df["Syn_num_stopwords"] = test_df['Location'].apply(lambda x: len([w for w in str(x).lower().split() if w in stop_words]))
## Number of punctuations in the Location ##
train_df["Syn_num_punctuations"] =train_df['Location'].apply(lambda x: len([c for c in str(x) if c in string.punctuation]) )
test_df["Syn_num_punctuations"] =test_df['Location'].apply(lambda x: len([c for c in str(x) if c in string.punctuation]) )
## Number of title case words in the Location ##
train_df["Syn_num_words_upper"] = train_df['Location'].apply(lambda x: len([w for w in str(x).split() if w.isupper()]))
test_df["Syn_num_words_upper"] = test_df['Location'].apply(lambda x: len([w for w in str(x).split() if w.isupper()]))
## Number of title case words in the Location ##
train_df["Syn_num_words_title"] = train_df['Location'].apply(lambda x: len([w for w in str(x).split() if w.istitle()]))
test_df["Syn_num_words_title"] = test_df['Location'].apply(lambda x: len([w for w in str(x).split() if w.istitle()]))
## Average length of the words in the Location ##
train_df["mean_word_len"] = train_df['Location'].apply(lambda x: np.mean([len(w) for w in str(x).split()]))
test_df["mean_word_len"] = test_df['Location'].apply(lambda x: np.mean([len(w) for w in str(x).split()]))
def clean_text(text):
text=text.lower()
text = re.sub(r'@[a-zA-Z0-9_]+', '', text)
text = re.sub(r'https?://[A-Za-z0-9./]+', '', text)
text = re.sub(r'www.[^ ]+', '', text)
text = re.sub(r'[a-zA-Z0-9]*www[a-zA-Z0-9]*com[a-zA-Z0-9]*', '', text)
text = re.sub(r'[^a-zA-Z]', ' ', text)
text = [token for token in text.split() if len(token) > 2]
text = ' '.join(text)
return text
train_df['Location'] = train_df['Location'].apply(clean_text)
test_df['Location'] = test_df['Location'].apply(clean_text)
train_df['Cuisines'] = train_df['Cuisines'].apply(clean_text)
test_df['Cuisines'] = test_df['Cuisines'].apply(clean_text)
## Number of words in the Cuisines ##
train_df["Cuisines_num_words"] = train_df["Cuisines"].apply(lambda x: len(str(x).split()))
test_df["Cuisines_num_words"] = test_df["Cuisines"].apply(lambda x: len(str(x).split()))
## Number of unique words in the Cuisines ##
train_df["Cuisines_num_unique_words"] = train_df["Cuisines"].apply(lambda x: len(set(str(x).split())))
test_df["Cuisines_num_unique_words"] = test_df["Cuisines"].apply(lambda x: len(set(str(x).split())))
## Number of characters in the Cuisines ##
train_df["Cuisines_num_chars"] = train_df["Cuisines"].apply(lambda x: len(str(x)))
test_df["Cuisines_num_chars"] = test_df["Cuisines"].apply(lambda x: len(str(x)))
## Number of stopwords in the Cuisines ##
train_df["Cuisines_num_stopwords"] = train_df["Cuisines"].apply(lambda x: len([w for w in str(x).lower().split() if w in stop_words]))
test_df["Cuisines_num_stopwords"] = test_df["Cuisines"].apply(lambda x: len([w for w in str(x).lower().split() if w in stop_words]))
## Number of punctuation characters in the Cuisines ##
train_df["Cuisines_num_punctuations"] = train_df['Cuisines'].apply(lambda x: len([c for c in str(x) if c in string.punctuation]))
test_df["Cuisines_num_punctuations"] = test_df['Cuisines'].apply(lambda x: len([c for c in str(x) if c in string.punctuation]))
## Number of uppercase words in the Cuisines ##
train_df["Cuisines_num_words_upper"] = train_df["Cuisines"].apply(lambda x: len([w for w in str(x).split() if w.isupper()]))
test_df["Cuisines_num_words_upper"] = test_df["Cuisines"].apply(lambda x: len([w for w in str(x).split() if w.isupper()]))
## Number of title case words in the Cuisines ##
train_df["Cuisines_num_words_title"] = train_df["Cuisines"].apply(lambda x: len([w for w in str(x).split() if w.istitle()]))
test_df["Cuisines_num_words_title"] = test_df["Cuisines"].apply(lambda x: len([w for w in str(x).split() if w.istitle()]))
## Average length of the words in the Cuisines ##
train_df["mean_word_len_Cuisines"] = train_df["Cuisines"].apply(lambda x: np.mean([len(w) for w in str(x).split()]))
test_df["mean_word_len_Cuisines"] = test_df["Cuisines"].apply(lambda x: np.mean([len(w) for w in str(x).split()]))
train_df.head(10)
test_df.head(10)
```
# Data Preprocessing
```
from sklearn.feature_extraction.text import TfidfVectorizer
tf = TfidfVectorizer(ngram_range=(1, 1), lowercase=False)
train_route = tf.fit_transform(train_df['Cuisines'])
test_route = tf.transform(test_df['Cuisines'])
train_route = pd.DataFrame(data=train_route.toarray(), columns=tf.get_feature_names())
test_route = pd.DataFrame(data=test_route.toarray(), columns=tf.get_feature_names())
train_route.head()
train_df = pd.concat([train_df, train_route], axis=1)
train_df.drop('Cuisines', axis=1, inplace=True)
test_df = pd.concat([test_df, test_route], axis=1)
test_df.drop('Cuisines', axis=1, inplace=True)
train_df.head()
from sklearn.feature_extraction.text import TfidfVectorizer
tf = TfidfVectorizer(ngram_range=(1, 1), lowercase=False)
train_route = tf.fit_transform(train_df['Location'])
test_route = tf.transform(test_df['Location'])
train_route = pd.DataFrame(data=train_route.toarray(), columns=tf.get_feature_names())
test_route = pd.DataFrame(data=test_route.toarray(), columns=tf.get_feature_names())
train_route.head()
train_df = pd.concat([train_df, train_route], axis=1)
train_df.drop('Location', axis=1, inplace=True)
test_df = pd.concat([test_df, test_route], axis=1)
test_df.drop('Location', axis=1, inplace=True)
train_df.head()
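# TF-IDF itself is simple enough to sketch by hand: each term's weight is its
# in-document frequency scaled by an inverse-document-frequency factor.
# (Illustrative only -- sklearn's smoothing and normalization differ slightly,
# and the toy corpus below is made up, not drawn from the dataset.)
import math
from collections import Counter

def toy_tfidf(corpus):
    docs = [doc.split() for doc in corpus]
    n = len(docs)
    df = Counter(t for d in docs for t in set(d))  # document frequency per term
    return [{t: c * (math.log(n / df[t]) + 1) for t, c in Counter(d).items()}
            for d in docs]

toy_weights = toy_tfidf(["north indian chinese", "chinese fast food", "north indian"])
# "fast" occurs in a single document, so it outweighs the common term "chinese"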
train_df.drop('Restaurant', axis=1, inplace=True)
test_df.drop('Restaurant', axis=1, inplace=True)
duplicate_columns = test_df.columns[test_df.columns.duplicated()]
duplicate_columns_1 = train_df.columns[train_df.columns.duplicated()]
duplicate_columns_1
train_df.drop('north', axis=1, inplace=True)
test_df.drop('north', axis=1, inplace=True)
X = train_df.drop(labels=['Delivery_Time'], axis=1)
y = train_df['Delivery_Time'].values
from sklearn.model_selection import train_test_split
X_train, X_cv, y_train, y_cv = train_test_split(X, y, test_size=0.25, random_state=1)
X_test = test_df
from xgboost import XGBClassifier
xgb = XGBClassifier(base_score=0.5, booster='gbtree', colsample_bylevel=1,
colsample_bynode=1, colsample_bytree=1, gamma=0,
learning_rate=0.1, max_delta_step=0, max_depth=3,
min_child_weight=1, missing=None, n_estimators=100, n_jobs=1,
nthread=None, objective='multi:softprob', random_state=0,
reg_alpha=0, reg_lambda=1, scale_pos_weight=1, seed=None,
silent=None, subsample=1, verbosity=1)
#Training the classifier
xgb.fit(X_train,y_train)
#Evaluating the score on validation set
xgb.score(X_cv,y_cv)
#Predicting for test set
Predictions = xgb.predict(X_test)
df_sub = pd.DataFrame(data=Predictions, columns=['Delivery_Time'])
writer = pd.ExcelWriter('Delivery_Time_1.xlsx', engine='xlsxwriter')
df_sub.to_excel(writer,sheet_name='Sheet1', index=False)
writer.save()
```
# Computer Vision
## Outline
Digital imaging has enabled a global culture of visual communications. The ubiquitous smartphones of the current era enable image capture, editing, and instant global transmission for literally billions of people. That technology can also be brought to bear in the laboratory.
**Need a Diagram**: Staging/Lighting --> Image Formation (Lenses) --> Image Capture (Sensors) --> Image Processing and Segmentation (Computer Vision) --> Machine Learning
## Light, Perception, and Color
Light, color, and human perception.
## Lenses and Image Formation
Introduction to imaging and lenses
* https://coinimaging.com/photo_articles.html
Finite versus Infinite Conjugation
Fourier Optics
## Image Sensors
Raspberry Pi Cameras
* [Raspberry Pi Camera Module 2](https://www.raspberrypi.com/products/camera-module-v2/) A fixed-focus color camera based on the 8 MP [Sony IMX219 sensor](https://github.com/rellimmot/Sony-IMX219-Raspberry-Pi-V2-CMOS). Ships with fixed focus lens and CSI cable.
* [Raspberry Pi HQ Camera]()
[Arducam](https://www.arducam.com/) produces a range of camera modules, lenses, and accessories for Raspberry Pi and other single board computers and microcontrollers.
* [Arducam High-Resolution Autofocus Camera](https://www.arducam.com/16mp-autofocus-camera-for-raspberry-pi/) is a 16 MP camera based on the Sony IMX519 sensor, mounted in the small format of the Raspberry Pi Camera V2, and equipped with an autofocus lens. The IMX519 is a back-side illuminated, stacked sensor, used in recent generation smartphones.
Other Third Party Devices
* [Weewoday OV5647 1080p](https://www.amazon.com/Pieces-Megapixels-Sensor-Compatible-Raspberry/dp/B08M9RY3DQ/) Similar to Raspberry Pi Version 1 camera.
Thermal Imaging
* [Grove - Thermal Imaging Camera - MLX90641 BCB 16x12 IR Array with 55° FOV](https://www.seeedstudio.com/Grove-Thermal-Imaging-Camera-MLX90641-BCB-16x12-IR-Array-with-55-FOV-p-5265.html)
## Macro/Micro Photography
Concepts
* Magnification
Conventional Macro
* Triplet achromats. Infinite conjugation achromatic triplets are commercially available at low cost from Raynox.
* [Mitakon Zhongyi 20mm f/2 4.5x Super Macro Lens](https://www.bhphotovideo.com/c/product/1307521-REG/mitakon_zhongyi_mtk20mf2ai_20mm_f_2_4_5x_super.html)
* [Venus Optics Laowa 25mm f/2.8 2.5-5X Ultra Macro Lens](https://www.bhphotovideo.com/c/product/1526143-REG/venus_optics_ve2528nz_laowa_25mm_f_2_8_2_5_5x.html)
* [Yasuhara Nanoha Macro Lens 5:1](https://www.bhphotovideo.com/c/product/852859-REG/YASUHARA_YA24_NAN5M_Nanoha_Macro_Lens_5_1.html)
* [Venus Optics Laowa 24mm f/14 Probe Lens](https://www.bhphotovideo.com/c/product/1430596-REG/venus_optics_laowa_24mm_f_14_probe.html)
Microscope Objectives
* [Edmund Scientific. Understanding Microscopes and Objectives](https://www.edmundoptics.com/knowledge-center/application-notes/microscopy/understanding-microscopes-and-objectives/)
* [MJKZZ. Using Microscope Objectives For Beginners](https://www.mjkzz.com/single-post/2016/08/10/Using-Microscope-Objectives-For-Beginners)
* MJKZZ Microscope to Filter Thread adapters (https://www.mjkzz.com/product-page/4x-plan-infinite-microscope-objective-with-rms-adapter)
* [Hayear Lenses and Accessories](https://www.amazon.com/stores/page/B8FE1052-BA43-4FA2-99E8-3AC4A9A05A29)
```
# write your code here
from sklearn.externals.joblib import Memory  # removed in modern scikit-learn; use `from joblib import Memory`
from sklearn.datasets import load_svmlight_file
import numpy as np
import random
# Load the dataset
mem=Memory('./mycache1')
@mem.cache
def get_data(filename):
data=load_svmlight_file(filename, n_features=123)
return data[0],data[1]
# Split into train and test sets
x_train, y_train = get_data('../data/a9a')
x_test, y_test = get_data('../data/a9a.t')
x_train = x_train.toarray()
x_test = x_test.toarray()
# Parameter initialization
w = np.random.rand(123) # initialize weights
b = random.uniform(0, 1) # initialize bias to a float in (0, 1)
vw = np.zeros(123) # momentum
vb = 0.
yitaw = 0.1 # learning rate (weight applied to the gradient)
yitab = 0.1
mu = 0.1 # momentum coefficient
# Compute a stochastic gradient over `times` random samples
def grad(times, w, b):
gradw = np.zeros(123)
gradb = 0.
for i in range(0, times):
index = random.randint(0, x_train.shape[0] - 1)
if y_train[index] * ( np.dot( x_train[index], w ) + b ) < 1:
gradw += -1 * ( y_train[index] * x_train[index] )
gradb += -1 * y_train[index]
gradw /= times
gradb /= times
    return w + gradw, gradb # return the (regularized) gradient
# Compute accuracy
accuracy=[] # fraction of test samples classified correctly
def assess():
right=0.
for i in range(0, x_test.shape[0]):
if np.dot( x_test[i] , w ) + b> 0:
if ( y_test[i] == +1 ):
right+=1.
else:
if ( y_test[i] == -1 ):
right+=1.
accuracy.append(right / x_test.shape[0])
# Stochastic gradient descent loop
ranges = range(0, 100)
loss_nag = []
for e in ranges:
# NAG
gradw, gradb = grad(2**4, w - mu * vw, b - mu * vb)
    # update w
#vw_prev = vw
#vw = mu * vw - yitaw * gradw
#w += -1 * mu * vw_prev + (1+mu) * vw
vw = mu * vw + yitaw * gradw
w -= vw
    # update b
#vb_prev = vb
#vb = mu * vb - yitab * gradb
#b += -1 * mu * vb_prev + (1+mu) * vb
vb = mu * vb + yitab * gradb
b -= vb
    # compute the hinge loss on the test set
label_validate=[]
for i in range(0, x_test.shape[0]):
if np.dot( x_test[i], w ) + b >= 0:
label_validate.append(1.)
else:
label_validate.append(-1.)
cur=0.
for i in range(0, x_test.shape[0]):
cur += max(0, 1.-y_test[i]*label_validate[i])
loss_nag.append(cur / x_test.shape[0])
assess()
# Plot the results
import matplotlib.pyplot as plt
figure1,=plt.plot(ranges, loss_nag)
plt.xlabel('Iteration')
plt.ylabel('Hinge Loss')
plt.xticks(np.linspace(0, 99, 10))
plt.legend(handles=[figure1], labels=['NAG LOSS'], loc='right')
plt.show()
from sklearn.externals.joblib import Memory  # removed in modern scikit-learn; use `from joblib import Memory`
from sklearn.datasets import load_svmlight_file
import numpy as np
import random
# Load the dataset
mem=Memory('./mycache1')
@mem.cache
def get_data(filename):
data=load_svmlight_file(filename, n_features=123)
return data[0], data[1]
# Split into train and test sets
x_train, y_train = get_data('../data/a9a')
x_test, y_test = get_data('../data/a9a.t')
x_train = x_train.toarray()
x_test = x_test.toarray()
x_train=np.c_[ x_train, np.ones(x_train.shape[0]) ] # append a column of ones so the bias b is folded into w
x_test=np.c_[ x_test, np.ones(x_test.shape[0]) ]
# Parameter initialization
mu = 0.9 # momentum coefficient
# NAG initialization
w_nag = np.random.rand(124) # initialize weights
v = np.zeros(124) # momentum
yita_nag = 0.01 # learning rate (weight applied to the gradient)
# RMSProp initialization
w_rmsp = np.random.rand(124)
yita_rmsp = 0.01
capital_g_rmsp = np.zeros(124)
epsilon = 1e-8
# AdaDelta initialization
w_delta = np.random.rand(124)
capital_g_delta = np.random.rand(124) # must not be initialized to all zeros
gama_delta = 0.95
delta_t = np.random.rand(124)
delta_w = np.zeros(124)
# Adam initialization
w_adam = np.random.rand(124)
yita_adam = 0.1
beta = 0.9
gama_adam = 0.999
moment = np.random.rand(124)
capital_g_adam = np.random.rand(124)
# Compute a stochastic gradient over `times` random samples
def grad(times, w):
gradw = np.zeros(124)
for i in range(0, times):
index = random.randint(0, x_train.shape[0] - 1)
if y_train[index] * ( np.dot( x_train[index], w ) ) < 1:
gradw += -1 * ( y_train[index] * x_train[index] )
    return ( w + gradw ) / times # return the gradient
# Compute accuracy
def assess(w):
right=0.
for i in range(0, x_test.shape[0]):
if np.dot( x_test[i] , w ) > 0:
if ( y_test[i] == +1 ):
right+=1.
else:
if ( y_test[i] == -1 ):
right+=1.
return right / x_test.shape[0]
# Compute the hinge loss on the test set
def getLoss(w):
label_validate=[]
for i in range(0, x_test.shape[0]):
if np.dot( x_test[i], w ) >= 0:
label_validate.append(1.)
else:
label_validate.append(-1.)
cur=0.
for i in range(0, x_test.shape[0]):
cur += max(0, 1.-y_test[i]*label_validate[i])
return cur / x_test.shape[0]
# Stochastic gradient descent loop
ranges = range(0, 200)
loss_nag = []
loss_rmsp = []
loss_delta = []
loss_adam = []
accuracy_nag = []
accuracy_rmsp = []
accuracy_delta = []
accuracy_adam = []
precision_nag = [] # precision: fraction of predicted positives that are truly positive
recall_nag = [] # recall: fraction of actual positives that are predicted positive
f1_nag = [] # F1: harmonic mean of precision and recall
precision_rmsp = []
recall_rmsp = []
f1_rmsp = []
for e in ranges:
# NAG
gradw = grad(2**4, w_nag - mu * v)
    # update w
v = mu * v + yita_nag * gradw
w_nag -= v
    # compute loss
loss = getLoss(w_nag)
loss_nag.append(loss)
    # compute accuracy
accuracy = assess(w_nag)
accuracy_nag.append(accuracy)
# RMSP
gradw = grad(2**4, w_rmsp)
capital_g_rmsp = mu * capital_g_rmsp + (1-mu) * ( gradw * gradw)
w_rmsp -= ( yita_rmsp / np.sqrt( capital_g_rmsp + epsilon ) ) * gradw
loss = getLoss(w_rmsp)
loss_rmsp.append(loss)
accuracy = assess(w_rmsp)
accuracy_rmsp.append(accuracy)
# AdaDelta
    gradw = grad(2**10, w_delta) # use a larger sample to reduce oscillation
capital_g_delta = gama_delta * capital_g_delta + (1-gama_delta) * gradw * gradw
delta_w = -1 * ( np.sqrt(delta_t + epsilon) / np.sqrt(capital_g_delta + epsilon) ) * gradw
w_delta += delta_w
delta_t = gama_delta * delta_t + (1-gama_delta) * delta_w * delta_w
loss = getLoss(w_delta)
loss_delta.append(loss)
accuracy = assess(w_delta)
accuracy_delta.append(accuracy)
# Adam
gradw = grad(2**4, w_adam)
moment = beta * moment + (1-beta) * gradw
capital_g_adam = gama_adam * capital_g_adam + (1-gama_adam) * gradw * gradw
alpha = yita_adam * ( np.sqrt(1-gama_adam**(e+1) ) / (1-beta**(e+1)) )
w_adam -= alpha * moment / np.sqrt(capital_g_adam + epsilon)
loss = getLoss(w_adam)
loss_adam.append(loss)
accuracy = assess(w_adam)
accuracy_adam.append(accuracy)
# Plot hinge loss
import matplotlib.pyplot as plt
figure1,=plt.plot(ranges, loss_nag)
figure2,=plt.plot(ranges, loss_rmsp)
figure3,=plt.plot(ranges, loss_delta)
figure4,=plt.plot(ranges, loss_adam)
plt.xlabel('Iteration')
plt.ylabel('Hinge Loss')
plt.legend(handles=[figure1, figure2, figure3, figure4], labels=['NAG', 'RMSProp', 'AdaDelta', 'Adam'], loc='right')
plt.show()
# Plot accuracy
import matplotlib.pyplot as plt
figure1,=plt.plot(ranges,accuracy_nag)
figure2,=plt.plot(ranges,accuracy_rmsp)
figure3,=plt.plot(ranges,accuracy_delta)
figure4,=plt.plot(ranges,accuracy_adam)
plt.xlabel('Iteration')
plt.ylabel('Evaluation')
plt.legend(handles=[figure1, figure2, figure3, figure4], labels=['NAG', 'RMSProp', 'AdaDelta','Adam'], loc='best')
plt.show()
print(accuracy_nag[199])
print(accuracy_rmsp[199])
print(accuracy_delta[199])
print(accuracy_adam[199])
```
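As a quick self-contained check of the update rules above, they can be run on the 1-D quadratic f(w) = w², whose gradient is 2w; every optimizer should drive w toward 0. The hyperparameter values below are illustrative, not the ones used in the experiments, and `momentum` is the plain-momentum form of the NAG update used above.

```
import math

def run(update, w=5.0, steps=200):
    state = {}
    for t in range(1, steps + 1):
        w = update(w, 2 * w, state, t)  # gradient of f(w) = w**2 is 2*w
    return w

def momentum(w, g, s, t, lr=0.05, mu=0.9):
    s['v'] = mu * s.get('v', 0.0) + lr * g
    return w - s['v']

def rmsprop(w, g, s, t, lr=0.05, rho=0.9, eps=1e-8):
    s['G'] = rho * s.get('G', 0.0) + (1 - rho) * g * g
    return w - lr * g / math.sqrt(s['G'] + eps)

def adam(w, g, s, t, lr=0.05, b1=0.9, b2=0.999, eps=1e-8):
    s['m'] = b1 * s.get('m', 0.0) + (1 - b1) * g
    s['v'] = b2 * s.get('v', 0.0) + (1 - b2) * g * g
    m_hat = s['m'] / (1 - b1 ** t)  # bias correction
    v_hat = s['v'] / (1 - b2 ** t)
    return w - lr * m_hat / (math.sqrt(v_hat) + eps)

for f in (momentum, rmsprop, adam):
    print(f.__name__, run(f))  # all end up close to 0
```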
##### Copyright 2020 The TensorFlow Authors.
```
#@title Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
```
# The Sequential model
## Setup
```
import tensorflow as tf
from tensorflow import keras
from tensorflow.keras import layers
```
## When to use a Sequential model
A `Sequential` model is appropriate for **a plain stack of layers**
where each layer has **exactly one input tensor and one output tensor**.
Schematically, the following `Sequential` model:
```
# Define Sequential model with 3 layers
model = keras.Sequential(
[
layers.Dense(2, activation="relu", name="layer1"),
layers.Dense(3, activation="relu", name="layer2"),
layers.Dense(4, name="layer3"),
]
)
# Call model on a test input
x = tf.ones((3, 3))
y = model(x)
```
is equivalent to this function:
```
# Create 3 layers
layer1 = layers.Dense(2, activation="relu", name="layer1")
layer2 = layers.Dense(3, activation="relu", name="layer2")
layer3 = layers.Dense(4, name="layer3")
# Call layers on a test input
x = tf.ones((3, 3))
y = layer3(layer2(layer1(x)))
```
A Sequential model is **not appropriate** when:
- Your model has multiple inputs or multiple outputs
- Any of your layers has multiple inputs or multiple outputs
- You need to do layer sharing
- You want non-linear topology (e.g. a residual connection, a multi-branch
model)
## Creating a Sequential model
You can create a Sequential model by passing a list of layers to the Sequential
constructor:
```
model = keras.Sequential(
[
layers.Dense(2, activation="relu"),
layers.Dense(3, activation="relu"),
layers.Dense(4),
]
)
```
Its layers are accessible via the `layers` attribute:
```
model.layers
```
You can also create a Sequential model incrementally via the `add()` method:
```
model = keras.Sequential()
model.add(layers.Dense(2, activation="relu"))
model.add(layers.Dense(3, activation="relu"))
model.add(layers.Dense(4))
```
Note that there's also a corresponding `pop()` method to remove layers:
a Sequential model behaves very much like a list of layers.
```
model.pop()
print(len(model.layers)) # 2
```
Also note that the Sequential constructor accepts a `name` argument, just like
any layer or model in Keras. This is useful to annotate TensorBoard graphs
with semantically meaningful names.
```
model = keras.Sequential(name="my_sequential")
model.add(layers.Dense(2, activation="relu", name="layer1"))
model.add(layers.Dense(3, activation="relu", name="layer2"))
model.add(layers.Dense(4, name="layer3"))
```
## Specifying the input shape in advance
Generally, all layers in Keras need to know the shape of their inputs
in order to be able to create their weights. So when you create a layer like
this, initially, it has no weights:
```
layer = layers.Dense(3)
layer.weights # Empty
```
It creates its weights the first time it is called on an input, since the shape
of the weights depends on the shape of the inputs:
```
# Call layer on a test input
x = tf.ones((1, 4))
y = layer(x)
layer.weights # Now it has weights, of shape (4, 3) and (3,)
```
Naturally, this also applies to Sequential models. When you instantiate a
Sequential model without an input shape, it isn't "built": it has no weights
(and calling
`model.weights` results in an error stating just this). The weights are created
when the model first sees some input data:
```
model = keras.Sequential(
[
layers.Dense(2, activation="relu"),
layers.Dense(3, activation="relu"),
layers.Dense(4),
]
) # No weights at this stage!
# At this point, you can't do this:
# model.weights
# You also can't do this:
# model.summary()
# Call the model on a test input
x = tf.ones((1, 4))
y = model(x)
print("Number of weights after calling the model:", len(model.weights)) # 6
```
Once a model is "built", you can call its `summary()` method to display its
contents:
```
model.summary()
```
However, it can be very useful when building a Sequential model incrementally
to be able to display the summary of the model so far, including the current
output shape. In this case, you should start your model by passing an `Input`
object to your model, so that it knows its input shape from the start:
```
model = keras.Sequential()
model.add(keras.Input(shape=(4,)))
model.add(layers.Dense(2, activation="relu"))
model.summary()
```
Note that the `Input` object is not displayed as part of `model.layers`, since
it isn't a layer:
```
model.layers
```
A simple alternative is to just pass an `input_shape` argument to your first
layer:
```
model = keras.Sequential()
model.add(layers.Dense(2, activation="relu", input_shape=(4,)))
model.summary()
```
Models built with a predefined input shape like this always have weights (even
before seeing any data) and always have a defined output shape.
In general, it's a recommended best practice to always specify the input shape
of a Sequential model in advance if you know what it is.
## A common debugging workflow: `add()` + `summary()`
When building a new Sequential architecture, it's useful to incrementally stack
layers with `add()` and frequently print model summaries. For instance, this
enables you to monitor how a stack of `Conv2D` and `MaxPooling2D` layers is
downsampling image feature maps:
```
model = keras.Sequential()
model.add(keras.Input(shape=(250, 250, 3))) # 250x250 RGB images
model.add(layers.Conv2D(32, 5, strides=2, activation="relu"))
model.add(layers.Conv2D(32, 3, activation="relu"))
model.add(layers.MaxPooling2D(3))
# Can you guess what the current output shape is at this point? Probably not.
# Let's just print it:
model.summary()
# The answer was: (40, 40, 32), so we can keep downsampling...
model.add(layers.Conv2D(32, 3, activation="relu"))
model.add(layers.Conv2D(32, 3, activation="relu"))
model.add(layers.MaxPooling2D(3))
model.add(layers.Conv2D(32, 3, activation="relu"))
model.add(layers.Conv2D(32, 3, activation="relu"))
model.add(layers.MaxPooling2D(2))
# And now?
model.summary()
# Now that we have 4x4 feature maps, time to apply global max pooling.
model.add(layers.GlobalMaxPooling2D())
# Finally, we add a classification layer.
model.add(layers.Dense(10))
```
Very practical, right?
## What to do once you have a model
Once your model architecture is ready, you will want to:
- Train your model, evaluate it, and run inference. See our
[guide to training & evaluation with the built-in loops](
https://www.tensorflow.org/guide/keras/train_and_evaluate/)
- Save your model to disk and restore it. See our
[guide to serialization & saving](https://www.tensorflow.org/guide/keras/save_and_serialize/).
- Speed up model training by leveraging multiple GPUs. See our
[guide to multi-GPU and distributed training](/guides/distributed_training).
## Feature extraction with a Sequential model
Once a Sequential model has been built, it behaves like a [Functional API
model](https://www.tensorflow.org/guide/keras/functional/). This means that every layer has an `input`
and `output` attribute. These attributes can be used to do neat things, like
quickly
creating a model that extracts the outputs of all intermediate layers in a
Sequential model:
```
initial_model = keras.Sequential(
[
keras.Input(shape=(250, 250, 3)),
layers.Conv2D(32, 5, strides=2, activation="relu"),
layers.Conv2D(32, 3, activation="relu"),
layers.Conv2D(32, 3, activation="relu"),
]
)
feature_extractor = keras.Model(
inputs=initial_model.inputs,
outputs=[layer.output for layer in initial_model.layers],
)
# Call feature extractor on test input.
x = tf.ones((1, 250, 250, 3))
features = feature_extractor(x)
```
Here's a similar example that only extracts features from one layer:
```
initial_model = keras.Sequential(
[
keras.Input(shape=(250, 250, 3)),
layers.Conv2D(32, 5, strides=2, activation="relu"),
layers.Conv2D(32, 3, activation="relu", name="my_intermediate_layer"),
layers.Conv2D(32, 3, activation="relu"),
]
)
feature_extractor = keras.Model(
inputs=initial_model.inputs,
outputs=initial_model.get_layer(name="my_intermediate_layer").output,
)
# Call feature extractor on test input.
x = tf.ones((1, 250, 250, 3))
features = feature_extractor(x)
```
## Transfer learning with a Sequential model
Transfer learning consists of freezing the bottom layers in a model and only training
the top layers. If you aren't familiar with it, make sure to read our [guide
to transfer learning](https://www.tensorflow.org/guide/keras/transfer_learning/).
Here are two common transfer learning blueprints involving Sequential models.
First, let's say that you have a Sequential model, and you want to freeze all
layers except the last one. In this case, you would simply iterate over
`model.layers` and set `layer.trainable = False` on each layer, except the
last one. Like this:
```python
model = keras.Sequential([
    keras.Input(shape=(784,)),
layers.Dense(32, activation='relu'),
layers.Dense(32, activation='relu'),
layers.Dense(32, activation='relu'),
layers.Dense(10),
])
# Presumably you would want to first load pre-trained weights.
model.load_weights(...)
# Freeze all layers except the last one.
for layer in model.layers[:-1]:
layer.trainable = False
# Recompile and train (this will only update the weights of the last layer).
model.compile(...)
model.fit(...)
```
Another common blueprint is to use a Sequential model to stack a pre-trained
model and some freshly initialized classification layers. Like this:
```python
# Load a convolutional base with pre-trained weights
base_model = keras.applications.Xception(
weights='imagenet',
include_top=False,
pooling='avg')
# Freeze the base model
base_model.trainable = False
# Use a Sequential model to add a trainable classifier on top
model = keras.Sequential([
base_model,
layers.Dense(1000),
])
# Compile & train
model.compile(...)
model.fit(...)
```
If you do transfer learning, you will probably find yourself frequently using
these two patterns.
That's about all you need to know about Sequential models!
To find out more about building models in Keras, see:
- [Guide to the Functional API](https://www.tensorflow.org/guide/keras/functional/)
- [Guide to making new Layers & Models via subclassing](
https://www.tensorflow.org/guide/keras/custom_layers_and_models/)
# Mean Shift
Let’s implement Mean Shift from scratch. First, we’ll have to define a Mean Shift object.
```
import numpy as np
from utils.kernels import RBF
from utils.distances import euclidean
class MeanShift:
def __init__(self, bandwidth=1, tol=1E-7):
self.bandwidth = bandwidth
self.tol = 1 - tol
self.kernel = RBF(gamma=self.bandwidth)
```
The bandwidth parameter is there to parameterize the Radial Basis Function kernel. Now, let’s assumed that we have a trained model. This means that we have centers representing our clusters. Assigning a new point to a cluster comes down to assigning the point to its closest cluster. In this case we will use the Euclidean distance.
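The `RBF` kernel and `euclidean` helper imported above come from local utility modules that are not shown; minimal stand-ins might look like this (a sketch, not the actual `utils` implementation):

```
import numpy as np

class RBF:
    def __init__(self, gamma=1.0):
        self.gamma = gamma

    def __call__(self, x, y):
        # Gaussian similarity: 1.0 when x == y, decaying with squared distance
        diff = np.asarray(x, dtype=float) - np.asarray(y, dtype=float)
        return np.exp(-self.gamma * np.dot(diff, diff))

def euclidean(x, y):
    diff = np.asarray(x, dtype=float) - np.asarray(y, dtype=float)
    return float(np.sqrt(np.dot(diff, diff)))
```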
```
def _compute_labels(self, X, centers):
labels = []
for x in X:
distances = np.array([euclidean(x, center) for center in centers])
label = np.argmin(distances)
labels.append(label)
_, labels = np.unique(labels, return_inverse=True)
        return np.array(labels, dtype=int)  # np.int was removed in NumPy 1.24
def predict(self, X):
labels = self._compute_labels(X, self.cluster_centers_)
return labels
```
Now, let’s look at how we can train our model. Given some data, we first start by creating a center for each point in the data. Then, until convergence, we shift and merge centers.
```
def fit(self, X):
for labels, centers in self._fit(X):
self.labels_ = labels
self.cluster_centers_ = centers
return self
def _fit(self, X):
old_centers = np.array([])
new_centers = X
labels = -np.ones(len(X)) # -1 represents an "orphan"
while not self._has_converged(old_centers, new_centers):
yield labels, new_centers
old_centers = new_centers
new_centers = []
for center in old_centers:
shifted_center = self._shift(center, X)
new_centers.append(shifted_center)
new_centers = self._merge_centers(new_centers)
labels = self._compute_labels(X, new_centers)
```
An important function is the `_shift` function. To shift a center, we calculate the density values between the center and all points. Then, the new center is created by taking a weighted average of the data points. The difference in position between the old and new center is what is referred to as the __shift__.
```
def _shift(self, x, X):
densities = [self.kernel(x, x_) for x_ in X]
shifted_center = np.average(X, weights=densities, axis=0)
return shifted_center
```
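A single shift step can be traced numerically. With a dense pair of points near 0 and one far outlier, the weighted average barely feels the outlier (the numbers below are toy values):

```
import numpy as np

def rbf(x, y, gamma=1.0):
    diff = x - y
    return np.exp(-gamma * np.dot(diff, diff))

X = np.array([[0.0], [0.1], [5.0]])
center = np.array([0.0])
densities = [rbf(center, x) for x in X]
shifted = np.average(X, weights=densities, axis=0)
print(shifted)  # close to 0.05: the outlier at 5 has kernel weight ~e**-25
```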
Since all centers will eventually converge, some centers might need to be merged to speed up computation. Also, because of computer arithmetic, centers will rarely be exactly at the same position. Therefore, we redefine each center as the average of all centers that are within a certain high-density region around it. This way, we end up with identical centers, which we merge.
```
def _merge_centers(self, centers):
centers = np.unique(centers, axis=0)
new_centers = []
for c in centers:
distances = np.array([self.kernel(c, c_) for c_ in centers])
new_centers.append(np.mean(centers[distances > self.tol], axis=0))
centers = np.unique(new_centers, axis=0)
return centers
```
In our case, we define convergence as the moment where the shifted centers are “close enough” to the old centers.
```
def _has_converged(self, old, new):
if len(old) == len(new):
for i in range(len(new)):
                if self.kernel(old[i], new[i]) < self.tol:
return False
return True
else:
return False
```
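Putting the pieces together, the shift-until-convergence loop can be exercised end to end on toy data. This is a standalone sketch of the same idea, not the class above:

```
import numpy as np

def rbf(x, y, gamma=1.0):
    diff = x - y
    return np.exp(-gamma * np.dot(diff, diff))

rng = np.random.default_rng(0)
# Two well-separated blobs around (0, 0) and (5, 5)
X = np.vstack([rng.normal(0, 0.3, (20, 2)), rng.normal(5, 0.3, (20, 2))])

centers = X.copy()
for _ in range(30):
    centers = np.array([np.average(X, axis=0, weights=[rbf(c, x) for x in X])
                        for c in centers])

# All centers end up near one of the two cluster modes
```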
# Minimum Eigenvalue for a 4x4 Matrix
```
from qiskit import QuantumRegister, ClassicalRegister
from qiskit import QuantumCircuit, execute, Aer
import pennylane as qml
import numpy as np
from numpy import pi
simulator = Aer.get_backend('qasm_simulator')
from scipy.linalg import expm, sinm, cosm
```
# Matrix and Decomposition
```
M= np.array(
[[3+4.7, 3., 6., 2.],
[ 3., 1+4.7, -4., 3.],
[ 6., -4., 5+4.7, -2.],
[ 2.,3., -2., 8+4.7]])
M
#M= np.matrix('1 0 0 0 ;0 0 -1 0;0 -1 0 0;0 0 0 1')
coeffs, obs_list = qml.utils.decompose_hamiltonian(M)
H = qml.Hamiltonian(coeffs, obs_list)
print(H)
def coef(H):
coeff = [0]*16
h = str(H)
for i in range (len(h)):
p = ""
if h[i] =="(":
j = i+1
while h[j]!=")" :
p=p+h[j]
j+=1
if h[j+3] == "I" and h[j+6] == "I":
coeff[0] = float(p)
if h[j+3] == "Z" and h[j+6] == "Z":
coeff[1] = float(p)
if h[j+3] == "X" and h[j+6] == "X":
coeff[2] = float(p)
if h[j+3] == "Y" and h[j+6] == "Y":
coeff[3] = float(p)
if h[j+3] == "I" and h[j+6] == "X":
coeff[4] = float(p)
if h[j+3] == "X" and h[j+6] == "I":
coeff[5] = float(p)
if h[j+3] == "I" and h[j+6] == "Y":
coeff[6] = float(p)
if h[j+3] == "Y" and h[j+6] == "I":
coeff[7] = float(p)
if h[j+3] == "I" and h[j+6] == "Z":
coeff[8] = float(p)
if h[j+3] == "Z" and h[j+6] == "I":
coeff[9] = float(p)
if h[j+3] == "X" and h[j+6] == "Y":
coeff[10] = float(p)
            if h[j+3] == "Y" and h[j+6] == "X":
                coeff[11] = float(p)
if h[j+3] == "X" and h[j+6] == "Z":
coeff[12] = float(p)
if h[j+3] == "Z" and h[j+6] == "X":
coeff[13] = float(p)
if h[j+3] == "Y" and h[j+6] == "Z":
coeff[14] = float(p)
if h[j+3] == "Z" and h[j+6] == "Y":
coeff[15] = float(p)
return(coeff)
```
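Parsing the printed Hamiltonian string, as `coef` does above, is fragile; for a 4x4 Hermitian matrix the same coefficients can be read off directly with the trace inner product, c(P⊗Q) = Tr[(P⊗Q)M]/4. A PennyLane-free sketch (function and dictionary names here are illustrative):

```
import numpy as np
from itertools import product

PAULIS = {'I': np.eye(2),
          'X': np.array([[0, 1], [1, 0]]),
          'Y': np.array([[0, -1j], [1j, 0]]),
          'Z': np.array([[1, 0], [0, -1]])}

def pauli_coeffs(M):
    # For Hermitian 4x4 M, the coefficient of P ⊗ Q is Tr[(P ⊗ Q) M] / 4
    return {a + b: np.trace(np.kron(PAULIS[a], PAULIS[b]) @ M).real / 4
            for a, b in product('IXYZ', repeat=2)}
```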
# Ansatz
```
def Operational_circuit(params):
qc = QuantumCircuit(2)
#qc.h(range(2))
qc.rx(params[0],0)
qc.rx(params[1],1)
qc.ry(params[2],0)
qc.ry(params[3],1)
qc.crx(params[8],0,1)
qc.ry(params[4],0)
qc.ry(params[5],1)
qc.rx(params[6],0)
qc.rx(params[7],1)
return qc
Operational_circuit_try = Operational_circuit([1,2,3,4,5,6,7,8,9])
Operational_circuit_try.draw()
```
# Measurements for the different components of H
# 1. ZZ
```
def measure_zz_circuit(given_circuit):
zz_meas = given_circuit.copy()
zz_meas.measure_all()
return zz_meas
zz_meas = measure_zz_circuit(Operational_circuit_try)
zz_meas.draw()
def measure_zz(given_circuit, num_shots = 10000):
zz_meas = measure_zz_circuit(given_circuit)
result = execute(zz_meas, backend = simulator, shots = num_shots).result()
counts = result.get_counts(zz_meas)
if '00' not in counts:
counts['00'] = 0
if '01' not in counts:
counts['01'] = 0
if '10' not in counts:
counts['10'] = 0
if '11' not in counts:
counts['11'] = 0
total_counts = counts['00'] + counts['11'] + counts['01'] + counts['10']
zz = counts['00'] + counts['11'] - counts['01'] - counts['10']
zz = zz / total_counts
return zz
zz = measure_zz(Operational_circuit_try)
print("<ZZ> =", str(zz))
```
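The fourteen measurement cells that follow all repeat one pattern: rotate each non-identity qubit into the Z basis (H for X, Rz(-π/2)+H for Y), measure, then sum counts with a sign given by the bit parity over the measured qubits. As a sketch, the sign bookkeeping can be centralized in one helper (`expectation_from_counts` is illustrative and assumes a Qiskit-style little-endian counts dict):

```python
def expectation_from_counts(counts, pauli):
    """Expectation of a 2-qubit Pauli string from Z-basis counts.

    Assumes the circuit already rotated each measured qubit into the
    Z basis. pauli[k] acts on qubit k; Qiskit bitstrings are
    little-endian, so bits[-1 - k] is qubit k's outcome. Qubits
    carrying 'I' do not enter the parity.
    """
    total = sum(counts.values())
    ev = 0
    for bits, n in counts.items():
        parity = sum(int(b) for op, b in zip(pauli, reversed(bits)) if op != "I")
        ev += n if parity % 2 == 0 else -n
    return ev / total

# Example with hypothetical counts: same sign pattern as measure_zz above
counts = {'00': 4000, '01': 1000, '10': 1000, '11': 4000}
print("<ZZ> =", expectation_from_counts(counts, 'ZZ'))  # (4000+4000-1000-1000)/10000
```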
# 2. XX
```
def measure_xx_circuit(given_circuit):
xx_meas = given_circuit.copy()
xx_meas.h(0)
xx_meas.h(1)
xx_meas.measure_all()
return xx_meas
xx_meas = measure_xx_circuit(Operational_circuit_try)
xx_meas.draw()
def measure_xx(given_circuit, num_shots = 10000):
xx_meas = measure_xx_circuit(given_circuit)
result = execute(xx_meas, backend = simulator, shots = num_shots).result()
counts = result.get_counts(xx_meas)
if '00' not in counts:
counts['00'] = 0
if '01' not in counts:
counts['01'] = 0
if '10' not in counts:
counts['10'] = 0
if '11' not in counts:
counts['11'] = 0
total_counts = counts['00'] + counts['11'] + counts['01'] + counts['10']
xx = counts['00'] + counts['11'] - counts['01'] - counts['10']
xx = xx / total_counts
return xx
xx = measure_xx(Operational_circuit_try)
print("<XX> =", str(xx))
```
# 3. YY
```
def measure_yy_circuit(given_circuit):
yy_meas = given_circuit.copy()
yy_meas.rz(-pi/2,0)
yy_meas.rz(-pi/2,1)
yy_meas.h(0)
yy_meas.h(1)
yy_meas.measure_all()
return yy_meas
yy_meas = measure_yy_circuit(Operational_circuit_try)
yy_meas.draw()
def measure_yy(given_circuit, num_shots = 10000):
yy_meas = measure_yy_circuit(given_circuit)
result = execute(yy_meas, backend = simulator, shots = num_shots).result()
counts = result.get_counts(yy_meas)
if '00' not in counts:
counts['00'] = 0
if '01' not in counts:
counts['01'] = 0
if '10' not in counts:
counts['10'] = 0
if '11' not in counts:
counts['11'] = 0
total_counts = counts['00'] + counts['11'] + counts['01'] + counts['10']
yy = counts['00'] + counts['11'] - counts['01'] - counts['10']
yy = yy / total_counts
return yy
yy = measure_yy(Operational_circuit_try)
print("<YY> =", str(yy))
```
# 4. IX
```
def measure_ix_circuit(given_circuit):
ix_meas = given_circuit.copy()
ix_meas.h(1)
ix_meas.measure_all()
return ix_meas
def measure_ix(given_circuit, num_shots = 10000):
ix_meas = measure_ix_circuit(given_circuit)
result = execute(ix_meas, backend = simulator, shots = num_shots).result()
counts = result.get_counts(ix_meas)
if '00' not in counts:
counts['00'] = 0
if '01' not in counts:
counts['01'] = 0
if '10' not in counts:
counts['10'] = 0
if '11' not in counts:
counts['11'] = 0
total_counts = counts['00'] + counts['11'] + counts['01'] + counts['10']
ix = counts['00'] - counts['11'] + counts['01'] - counts['10']
ix = ix / total_counts
return ix
ix_meas = measure_ix_circuit(Operational_circuit_try)
ix_meas.draw()
ix = measure_ix(Operational_circuit_try)
print("<IX> =", str(ix))
```
# 5. XI
```
def measure_xi_circuit(given_circuit):
xi_meas = given_circuit.copy()
xi_meas.h(0)
xi_meas.measure_all()
return xi_meas
def measure_xi(given_circuit, num_shots = 10000):
xi_meas = measure_xi_circuit(given_circuit)
result = execute(xi_meas, backend = simulator, shots = num_shots).result()
counts = result.get_counts(xi_meas)
if '00' not in counts:
counts['00'] = 0
if '01' not in counts:
counts['01'] = 0
if '10' not in counts:
counts['10'] = 0
if '11' not in counts:
counts['11'] = 0
total_counts = counts['00'] + counts['11'] + counts['01'] + counts['10']
xi = counts['00'] - counts['11'] - counts['01'] + counts['10']
xi = xi / total_counts
return xi
xi = measure_xi(Operational_circuit_try)
print("<XI> =", str(xi))
A = measure_xi_circuit(Operational_circuit_try)
A.draw()
```
# 6. IY
```
def measure_iy_circuit(given_circuit):
iy_meas = given_circuit.copy()
iy_meas.rz(-pi/2,1)
iy_meas.h(1)
iy_meas.measure_all()
return iy_meas
def measure_iy(given_circuit, num_shots = 10000):
iy_meas = measure_iy_circuit(given_circuit)
result = execute(iy_meas, backend = simulator, shots = num_shots).result()
counts = result.get_counts(iy_meas)
if '00' not in counts:
counts['00'] = 0
if '01' not in counts:
counts['01'] = 0
if '10' not in counts:
counts['10'] = 0
if '11' not in counts:
counts['11'] = 0
total_counts = counts['00'] + counts['11'] + counts['01'] + counts['10']
iy = counts['00'] - counts['11'] + counts['01'] - counts['10']
iy = iy / total_counts
return iy
iy = measure_iy(Operational_circuit_try)
print("<IY> =", str(iy))
A = measure_iy_circuit(Operational_circuit_try)
A.draw()
```
# 7. YI
```
def measure_yi_circuit(given_circuit):
yi_meas = given_circuit.copy()
yi_meas.rz(-pi/2,0)
yi_meas.h(0)
yi_meas.measure_all()
return yi_meas
def measure_yi(given_circuit, num_shots = 10000):
yi_meas = measure_yi_circuit(given_circuit)
result = execute(yi_meas, backend = simulator, shots = num_shots).result()
counts = result.get_counts(yi_meas)
if '00' not in counts:
counts['00'] = 0
if '01' not in counts:
counts['01'] = 0
if '10' not in counts:
counts['10'] = 0
if '11' not in counts:
counts['11'] = 0
total_counts = counts['00'] + counts['11'] + counts['01'] + counts['10']
yi = counts['00'] - counts['11'] - counts['01'] + counts['10']
yi = yi / total_counts
return yi
yi = measure_yi(Operational_circuit_try)
print("<YI> =", str(yi))
A = measure_yi_circuit(Operational_circuit_try)
A.draw()
```
# 8. IZ
```
def measure_iz(given_circuit, num_shots = 10000):
zz_meas = measure_zz_circuit(given_circuit)
result = execute(zz_meas, backend = simulator, shots = num_shots).result()
counts = result.get_counts(zz_meas)
if '00' not in counts:
counts['00'] = 0
if '01' not in counts:
counts['01'] = 0
if '10' not in counts:
counts['10'] = 0
if '11' not in counts:
counts['11'] = 0
total_counts = counts['00'] + counts['11'] + counts['01'] + counts['10']
zz = counts['00'] - counts['11']+ counts['01'] - counts['10']
zz = zz / total_counts
return zz
iz = measure_iz(Operational_circuit_try)
print("<IZ> =", str(iz))
```
# 9. ZI
```
def measure_zi(given_circuit, num_shots = 10000):
zz_meas = measure_zz_circuit(given_circuit)
result = execute(zz_meas, backend = simulator, shots = num_shots).result()
counts = result.get_counts(zz_meas)
if '00' not in counts:
counts['00'] = 0
if '01' not in counts:
counts['01'] = 0
if '10' not in counts:
counts['10'] = 0
if '11' not in counts:
counts['11'] = 0
total_counts = counts['00'] + counts['11'] + counts['01'] + counts['10']
zz = counts['00'] - counts['11']- counts['01'] + counts['10']
zz = zz / total_counts
return zz
zi = measure_zi(Operational_circuit_try)
print("<ZI> =", str(zi))
```
# 10. XY
```
def measure_xy_circuit(given_circuit):
xy_meas = given_circuit.copy()
xy_meas.rz(-pi/2,1)
xy_meas.h(1)
xy_meas.h(0)
xy_meas.measure_all()
return xy_meas
def measure_xy(given_circuit, num_shots = 10000):
zz_meas = measure_xy_circuit(given_circuit)
result = execute(zz_meas, backend = simulator, shots = num_shots).result()
counts = result.get_counts(zz_meas)
if '00' not in counts:
counts['00'] = 0
if '01' not in counts:
counts['01'] = 0
if '10' not in counts:
counts['10'] = 0
if '11' not in counts:
counts['11'] = 0
total_counts = counts['00'] + counts['11'] + counts['01'] + counts['10']
zz = counts['00'] + counts['11'] - counts['01'] - counts['10']
zz = zz / total_counts
return zz
xy = measure_xy(Operational_circuit_try)
print("<XY> =", str(xy))
A = measure_xy_circuit(Operational_circuit_try)
A.draw()
```
# 11. YX
```
def measure_yx_circuit(given_circuit):
xy_meas = given_circuit.copy()
xy_meas.rz(-pi/2,0)
xy_meas.h(0)
xy_meas.h(1)
xy_meas.measure_all()
return xy_meas
def measure_yx(given_circuit, num_shots = 10000):
zz_meas = measure_yx_circuit(given_circuit)
result = execute(zz_meas, backend = simulator, shots = num_shots).result()
counts = result.get_counts(zz_meas)
if '00' not in counts:
counts['00'] = 0
if '01' not in counts:
counts['01'] = 0
if '10' not in counts:
counts['10'] = 0
if '11' not in counts:
counts['11'] = 0
total_counts = counts['00'] + counts['11'] + counts['01'] + counts['10']
zz = counts['00'] + counts['11'] - counts['01'] - counts['10']
zz = zz / total_counts
return zz
yx = measure_yx(Operational_circuit_try)
print("<YX> =", str(yx))
A = measure_yx_circuit(Operational_circuit_try)
A.draw()
```
# 12. XZ
```
def measure_xz_circuit(given_circuit):
xy_meas = given_circuit.copy()
xy_meas.h(0)
xy_meas.measure_all()
return xy_meas
def measure_xz(given_circuit, num_shots = 10000):
zz_meas = measure_xz_circuit(given_circuit)
result = execute(zz_meas, backend = simulator, shots = num_shots).result()
counts = result.get_counts(zz_meas)
if '00' not in counts:
counts['00'] = 0
if '01' not in counts:
counts['01'] = 0
if '10' not in counts:
counts['10'] = 0
if '11' not in counts:
counts['11'] = 0
total_counts = counts['00'] + counts['11'] + counts['01'] + counts['10']
zz = counts['00'] + counts['11'] - counts['01'] - counts['10']
zz = zz / total_counts
return zz
xz = measure_xz(Operational_circuit_try)
print("<XZ> =", str(xz))
A = measure_xz_circuit(Operational_circuit_try)
A.draw()
```
# 13. ZX
```
def measure_zx_circuit(given_circuit):
xy_meas = given_circuit.copy()
xy_meas.h(1)
xy_meas.measure_all()
return xy_meas
def measure_zx(given_circuit, num_shots = 10000):
zz_meas = measure_zx_circuit(given_circuit)
result = execute(zz_meas, backend = simulator, shots = num_shots).result()
counts = result.get_counts(zz_meas)
if '00' not in counts:
counts['00'] = 0
if '01' not in counts:
counts['01'] = 0
if '10' not in counts:
counts['10'] = 0
if '11' not in counts:
counts['11'] = 0
total_counts = counts['00'] + counts['11'] + counts['01'] + counts['10']
zz = counts['00'] + counts['11'] - counts['01'] - counts['10']
zz = zz / total_counts
return zz
zx = measure_zx(Operational_circuit_try)
print("<ZX> =", str(zx))
A = measure_zx_circuit(Operational_circuit_try)
A.draw()
```
# 14. YZ
```
def measure_yz_circuit(given_circuit):
xy_meas = given_circuit.copy()
xy_meas.rz(-pi/2,0)
xy_meas.h(0)
xy_meas.measure_all()
return xy_meas
def measure_yz(given_circuit, num_shots = 10000):
zz_meas = measure_yz_circuit(given_circuit)
result = execute(zz_meas, backend = simulator, shots = num_shots).result()
counts = result.get_counts(zz_meas)
if '00' not in counts:
counts['00'] = 0
if '01' not in counts:
counts['01'] = 0
if '10' not in counts:
counts['10'] = 0
if '11' not in counts:
counts['11'] = 0
total_counts = counts['00'] + counts['11'] + counts['01'] + counts['10']
zz = counts['00'] + counts['11'] - counts['01'] - counts['10']
zz = zz / total_counts
return zz
yz = measure_yz(Operational_circuit_try)
print("<YZ> =", str(yz))
A = measure_yz_circuit(Operational_circuit_try)
A.draw()
```
# 15. ZY
```
def measure_zy_circuit(given_circuit):
xy_meas = given_circuit.copy()
xy_meas.rz(-pi/2,1)
xy_meas.h(1)
xy_meas.measure_all()
return xy_meas
def measure_zy(given_circuit, num_shots = 10000):
zz_meas = measure_zy_circuit(given_circuit)
result = execute(zz_meas, backend = simulator, shots = num_shots).result()
counts = result.get_counts(zz_meas)
if '00' not in counts:
counts['00'] = 0
if '01' not in counts:
counts['01'] = 0
if '10' not in counts:
counts['10'] = 0
if '11' not in counts:
counts['11'] = 0
total_counts = counts['00'] + counts['11'] + counts['01'] + counts['10']
zz = counts['00'] + counts['11'] - counts['01'] - counts['10']
zz = zz / total_counts
return zz
zy = measure_zy(Operational_circuit_try)
print("<ZY> =", str(zy))
A = measure_zy_circuit(Operational_circuit_try)
A.draw()
```
# Cost Function
```
def eigen_value(params):
num_shots = 100000
zz = measure_zz(Operational_circuit(params), num_shots = num_shots)
xx = measure_xx(Operational_circuit(params), num_shots = num_shots)
yy = measure_yy(Operational_circuit(params), num_shots = num_shots)
ix = measure_ix(Operational_circuit(params), num_shots = num_shots)
xi = measure_xi(Operational_circuit(params), num_shots = num_shots)
iy = measure_iy(Operational_circuit(params), num_shots = num_shots)
yi = measure_yi(Operational_circuit(params), num_shots = num_shots)
iz = measure_iz(Operational_circuit(params), num_shots = num_shots)
zi = measure_zi(Operational_circuit(params), num_shots = num_shots)
xy = measure_xy(Operational_circuit(params), num_shots = num_shots)
yx = measure_yx(Operational_circuit(params), num_shots = num_shots)
xz = measure_xz(Operational_circuit(params), num_shots = num_shots)
zx = measure_zx(Operational_circuit(params), num_shots = num_shots)
yz = measure_yz(Operational_circuit(params), num_shots = num_shots)
zy = measure_zy(Operational_circuit(params), num_shots = num_shots)
coeff = coef(H)
ev = (coeff[0]*1 + coeff[1]*zz + coeff[2]*xx+coeff[3]*yy+coeff[4]*ix+coeff[5]*xi+coeff[6]*iy+coeff[7]*yi+coeff[8]*iz+
coeff[9]*zi+coeff[10]*xy+coeff[11]*yx+coeff[12]*xz+coeff[13]*zx+coeff[14]*yz+coeff[15]*zy)
return ev
params = [0,0,0,0,0,0,0,0,0]
ev = eigen_value(params) # trial energy with all nine ansatz parameters set to zero
print("The eigenvalue of the trial state is", str(ev))
```
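For a 4x4 Hermitian matrix the exact spectrum is cheap to compute classically, so the optimizer's result can be sanity-checked against it. A minimal sketch with NumPy (re-declaring M so the cell is self-contained):

```python
import numpy as np

M = np.array([[7.7, 3., 6., 2.],
              [3., 5.7, -4., 3.],
              [6., -4., 9.7, -2.],
              [2., 3., -2., 12.7]])

# eigvalsh returns the eigenvalues of a Hermitian matrix in ascending order
eigenvalues = np.linalg.eigvalsh(M)
print("exact spectrum:", eigenvalues)
print("minimum eigenvalue (VQE target):", eigenvalues[0])
```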
# Finding the Lowest Eigenvalue
```
from qiskit.aqua.components.optimizers import COBYLA,SLSQP,SPSA
optimizer = COBYLA(maxiter = 500, tol = 0.0001)
ret = optimizer.optimize(num_vars=9, objective_function=eigen_value, initial_point=params)
eigen_value(ret[0])
ret[1]
ret
```
# Eigenstate
```
backend_state = Aer.get_backend('statevector_simulator')
result = execute(Operational_circuit(ret[0]),backend_state).result()
out_state = result.get_statevector()
out_state
```
# Initial State
```
expm(1j*M)
had = [0.5+0.j, 0.5+0.j, 0.5+0.j, 0.5+0.j]
np.dot(expm(1j*M),had)
```
```
# Import libraries and modules
import sys
import tensorflow as tf
import numpy as np
import shutil
print(tf.__version__)
print(np.__version__)
np.set_printoptions(threshold = np.inf)
# tf.enable_eager_execution()
tf.executing_eagerly()
```
# Create data
## Create data generator
```
simple_data_genertator = False
percent_sequence_before_anomaly = 70.0
percent_sequence_after_anomaly = 0.0
def create_time_series_normal_parameters():
normal_frequency_noise_scale = 1.0
normal_frequence_noise_shift = 1.0
normal_amplitude_noise_scale = 1.0
normal_amplitude_noise_shift = 1.0
normal_noise_noise_scale = 1.0
if simple_data_genertator == True:
normal_freq = 1.0
normal_ampl = 1.0
else:
normal_freq = (np.random.random() * normal_frequency_noise_scale) + normal_frequence_noise_shift
normal_ampl = np.random.random() * normal_amplitude_noise_scale + normal_amplitude_noise_shift
return {"normal_freq": normal_freq, "normal_ampl": normal_ampl, "normal_noise_noise_scale": normal_noise_noise_scale}
def create_time_series_normal(number_of_sequences, sequence_length, normal_freq, normal_ampl, normal_noise_noise_scale):
# Normal parameters
if simple_data_genertator == True:
sequence = np.stack(arrays = [np.sin(np.arange(0, sequence_length) * normal_freq) * normal_ampl for _ in range(number_of_sequences)], axis = 0)
else:
sequence = np.stack(arrays = [np.sin(np.arange(0, sequence_length) * normal_freq) * normal_ampl + [np.random.random() * normal_noise_noise_scale for i in range(sequence_length)] for _ in range(number_of_sequences)], axis = 0)
return sequence
def create_time_series_with_anomaly(number_of_sequences, sequence_length, percent_sequence_before_anomaly, percent_sequence_after_anomaly, normal_freq, normal_ampl, normal_noise_noise_scale):
sequence_length_before_anomaly = int(sequence_length * percent_sequence_before_anomaly / 100.0)
sequence_length_after_anomaly = int(sequence_length * percent_sequence_after_anomaly / 100.0)
sequence_length_anomaly = sequence_length - sequence_length_before_anomaly - sequence_length_after_anomaly
# Anomalous parameters
anomalous_amplitude_multipler_min = 8.0
anomalous_amplitude_multipler_max = 20.0
if simple_data_genertator == True:
sequence_with_anomaly = np.stack(arrays = [np.sin(np.arange(0, sequence_length) * normal_freq) * normal_ampl for _ in range(number_of_sequences)], axis = 0)
else:
sequence_with_anomaly = create_time_series_normal(number_of_sequences, sequence_length, normal_freq, normal_ampl, normal_noise_noise_scale)
sequence_with_anomaly[:, sequence_length_before_anomaly:sequence_length_before_anomaly + sequence_length_anomaly] *= ((anomalous_amplitude_multipler_max - anomalous_amplitude_multipler_min) * np.random.random_sample([number_of_sequences, sequence_length_anomaly]) + anomalous_amplitude_multipler_min) * (np.random.randint(2, size = [number_of_sequences, sequence_length_anomaly]) * -2 + 1)
return sequence_with_anomaly
test_normal_parameters = create_time_series_normal_parameters()
import seaborn as sns
flatui = ["#9b59b6", "#3498db", "#95a5a6", "#e74c3c", "#34495e", "#2ecc71"]
for i in range(0, 5):
sns.tsplot(create_time_series_normal(1, 10, test_normal_parameters["normal_freq"], test_normal_parameters["normal_ampl"], test_normal_parameters["normal_noise_noise_scale"]).reshape(-1), color=flatui[i%len(flatui)] )
import seaborn as sns
flatui = ["#9b59b6", "#3498db", "#95a5a6", "#e74c3c", "#34495e", "#2ecc71"]
for i in range(0, 10):
sns.tsplot(create_time_series_with_anomaly(1, 10, percent_sequence_before_anomaly, percent_sequence_after_anomaly, test_normal_parameters["normal_freq"], test_normal_parameters["normal_ampl"], test_normal_parameters["normal_noise_noise_scale"]).reshape(-1), color=flatui[i%len(flatui)] )
```
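The anomaly injection above scales a window of the sequence by a factor drawn from [8, 20) with a random sign. The same idea in isolation, as a sketch with a hypothetical `inject_anomaly` helper:

```python
import numpy as np

def inject_anomaly(seq, start, stop, lo=8.0, hi=20.0, rng=None):
    """Scale seq[start:stop] elementwise by a random factor in [lo, hi)
    with a random sign, leaving the rest of the sequence untouched."""
    rng = np.random.default_rng() if rng is None else rng
    out = seq.copy()
    n = stop - start
    magnitude = (hi - lo) * rng.random(n) + lo        # in [lo, hi)
    sign = rng.integers(0, 2, size=n) * -2 + 1        # -1 or +1
    out[start:stop] *= magnitude * sign
    return out

rng = np.random.default_rng(0)
normal = np.sin(np.arange(10))
anomalous = inject_anomaly(normal, start=7, stop=10, rng=rng)
```

The 70%/0% split constants above correspond to `start = int(0.7 * len(seq))` and `stop = len(seq)`.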
## Create training and evaluation data
```
number_of_training_normal_sequences = 6400
number_of_validation_normal_1_sequences = 640
number_of_validation_normal_2_sequences = 640
number_of_validation_anomalous_sequences = 640
number_of_test_normal_sequences = 640
number_of_test_anomalous_sequences = 640
sequence_length = 10
number_of_tags = 5
tag_columns = ["tag_{0}".format(tag) for tag in range(0, number_of_tags)]
tag_data_list = [create_time_series_normal_parameters() for tag in range(0, number_of_tags)]
tag_data_list
# Create training set using normal sequences
training_normal_sequences_list = [create_time_series_normal(number_of_training_normal_sequences, sequence_length, tag["normal_freq"], tag["normal_ampl"], tag["normal_noise_noise_scale"]) for tag in tag_data_list]
training_normal_sequences_array = np.stack(arrays = list(map(lambda i: np.stack(arrays = list(map(lambda j: np.array2string(a = training_normal_sequences_list[i][j], separator = ',').replace('[', '').replace(']', '').replace(' ', '').replace('\n', ''), np.arange(0, number_of_training_normal_sequences))), axis = 0), np.arange(0, number_of_tags))), axis = 1)
np.random.shuffle(training_normal_sequences_array)
print("training_normal_sequences_array.shape = \n{}".format(training_normal_sequences_array.shape))
# Create validation sets
# Create set vn1 of normal sequences which will be used for early stopping during training as well as using the error vectors to learn mu and sigma for mahalanobis distance
validation_normal_1_sequences_list = [create_time_series_normal(number_of_validation_normal_1_sequences, sequence_length, tag["normal_freq"], tag["normal_ampl"], tag["normal_noise_noise_scale"]) for tag in tag_data_list]
validation_normal_1_sequences_array = np.stack(arrays = list(map(lambda i: np.stack(arrays = list(map(lambda j: np.array2string(a = validation_normal_1_sequences_list[i][j], separator = ',').replace('[', '').replace(']', '').replace(' ', '').replace('\n', ''), np.arange(0, number_of_validation_normal_1_sequences))), axis = 0), np.arange(0, number_of_tags))), axis = 1)
print("validation_normal_1_sequences_array.shape = \n{}".format(validation_normal_1_sequences_array.shape))
# Create set vn2 of normal sequences which will be used for tuning the anomaly thresholds
validation_normal_2_sequences_list = [create_time_series_normal(number_of_validation_normal_2_sequences, sequence_length, tag["normal_freq"], tag["normal_ampl"], tag["normal_noise_noise_scale"]) for tag in tag_data_list]
validation_normal_2_sequences_array = np.stack(arrays = list(map(lambda i: np.stack(arrays = list(map(lambda j: np.array2string(a = validation_normal_2_sequences_list[i][j], separator = ',').replace('[', '').replace(']', '').replace(' ', '').replace('\n', ''), np.arange(0, number_of_validation_normal_2_sequences))), axis = 0), np.arange(0, number_of_tags))), axis = 1)
print("validation_normal_2_sequences_array.shape = \n{}".format(validation_normal_2_sequences_array.shape))
# Create set va of anomalous sequences which will be used for tuning the anomaly thresholds
validation_anomalous_sequences_list = [create_time_series_with_anomaly(number_of_validation_anomalous_sequences, sequence_length, percent_sequence_before_anomaly, percent_sequence_after_anomaly, tag["normal_freq"], tag["normal_ampl"], tag["normal_noise_noise_scale"]) for tag in tag_data_list]
validation_anomalous_sequences_array = np.stack(arrays = list(map(lambda i: np.stack(arrays = list(map(lambda j: np.array2string(a = validation_anomalous_sequences_list[i][j], separator = ',').replace('[', '').replace(']', '').replace(' ', '').replace('\n', ''), np.arange(0, number_of_validation_anomalous_sequences))), axis = 0), np.arange(0, number_of_tags))), axis = 1)
print("validation_anomalous_sequences_array.shape = \n{}".format(validation_anomalous_sequences_array.shape))
# Create test sets
# Create set tn of normal sequences which will be used for testing model
test_normal_sequences_list = [create_time_series_normal(number_of_test_normal_sequences, sequence_length, tag["normal_freq"], tag["normal_ampl"], tag["normal_noise_noise_scale"]) for tag in tag_data_list]
test_normal_sequences_array = np.stack(arrays = list(map(lambda i: np.stack(arrays = list(map(lambda j: np.array2string(a = test_normal_sequences_list[i][j], separator = ',').replace('[', '').replace(']', '').replace(' ', '').replace('\n', ''), np.arange(0, number_of_test_normal_sequences))), axis = 0), np.arange(0, number_of_tags))), axis = 1)
print("test_normal_sequences_array.shape = \n{}".format(test_normal_sequences_array.shape))
# Create set ta of anomalous sequences which will be used for testing model
test_anomalous_sequences_list = [create_time_series_with_anomaly(number_of_test_anomalous_sequences, sequence_length, percent_sequence_before_anomaly, percent_sequence_after_anomaly, tag["normal_freq"], tag["normal_ampl"], tag["normal_noise_noise_scale"]) for tag in tag_data_list]
test_anomalous_sequences_array = np.stack(arrays = list(map(lambda i: np.stack(arrays = list(map(lambda j: np.array2string(a = test_anomalous_sequences_list[i][j], separator = ',').replace('[', '').replace(']', '').replace(' ', '').replace('\n', ''), np.arange(0, number_of_test_anomalous_sequences))), axis = 0), np.arange(0, number_of_tags))), axis = 1)
print("test_anomalous_sequences_array.shape = \n{}".format(test_anomalous_sequences_array.shape))
# Combine vn2 and va sets for tuning anomaly thresholds
labeled_validation_normal_2_sequences_array = np.concatenate([validation_normal_2_sequences_array, np.zeros(shape = [validation_normal_2_sequences_array.shape[0], 1], dtype = np.int64)], axis = 1)
labeled_validation_anomalous_sequences_array = np.concatenate([validation_anomalous_sequences_array, np.ones(shape = [validation_anomalous_sequences_array.shape[0], 1], dtype = np.int64)], axis = 1)
labeled_validation_mixed_sequences_array = np.concatenate([labeled_validation_normal_2_sequences_array, labeled_validation_anomalous_sequences_array], axis = 0)
np.random.shuffle(labeled_validation_mixed_sequences_array)
print("labeled_validation_mixed_sequences_array.shape = \n{}".format(labeled_validation_mixed_sequences_array.shape))
# Combine tn and ta sets for testing model
labeled_test_normal_sequences_array = np.concatenate([test_normal_sequences_array, np.zeros(shape = [test_normal_sequences_array.shape[0], 1], dtype = np.int64)], axis = 1)
labeled_test_anomalous_sequences_array = np.concatenate([test_anomalous_sequences_array, np.ones(shape = [test_anomalous_sequences_array.shape[0], 1], dtype = np.int64)], axis = 1)
labeled_test_mixed_sequences_array = np.concatenate([labeled_test_normal_sequences_array, labeled_test_anomalous_sequences_array], axis = 0)
np.random.shuffle(labeled_test_mixed_sequences_array)
print("labeled_test_mixed_sequences_array.shape = \n{}".format(labeled_test_mixed_sequences_array.shape))
np.savetxt(fname = "data/training_normal_sequences.csv", X = training_normal_sequences_array, fmt = '%s', delimiter = ";")
np.savetxt(fname = "data/validation_normal_1_sequences.csv", X = validation_normal_1_sequences_array, fmt = '%s', delimiter = ";")
np.savetxt(fname = "data/labeled_validation_mixed_sequences.csv", X = labeled_validation_mixed_sequences_array, fmt = '%s', delimiter = ";")
np.savetxt(fname = "data/labeled_test_mixed_sequences.csv", X = labeled_test_mixed_sequences_array, fmt = '%s', delimiter = ";")
!head -3 data/training_normal_sequences.csv
!head -3 data/validation_normal_1_sequences.csv
!head -3 data/labeled_validation_mixed_sequences.csv
!head -3 data/labeled_test_mixed_sequences.csv
```
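Each CSV row stores one example: per-tag sequences serialized as comma-joined floats, tags separated by `;`, with an optional trailing 0/1 label. A minimal round-trip sketch of that row format (illustrative helpers, not the `np.array2string` pipeline above):

```python
import numpy as np

def serialize_row(sequences, label=None):
    """One example -> one ';'-delimited row; each tag sequence is
    a comma-joined list of floats, the label (if any) goes last."""
    fields = [",".join(repr(float(v)) for v in seq) for seq in sequences]
    if label is not None:
        fields.append(str(label))
    return ";".join(fields)

def parse_row(row, labeled=False):
    """Inverse of serialize_row: split on ';', pop the label, split each
    remaining field on ',' back into a float array."""
    fields = row.split(";")
    label = int(fields.pop()) if labeled else None
    sequences = [np.array([float(v) for v in f.split(",")]) for f in fields]
    return sequences, label

row = serialize_row([np.array([0.1, 0.2]), np.array([0.3, 0.4])], label=1)
seqs, label = parse_row(row, labeled=True)
```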
# Local Development
```
# Set logging to be level of INFO
tf.logging.set_verbosity(tf.logging.INFO)
# Determine CSV and label columns
UNLABELED_CSV_COLUMNS = tag_columns
LABEL_COLUMN = "anomalous_sequence_flag"
LABELED_CSV_COLUMNS = UNLABELED_CSV_COLUMNS + [LABEL_COLUMN]
# Set default values for each CSV column
UNLABELED_DEFAULTS = [[""] for _ in UNLABELED_CSV_COLUMNS]
LABELED_DEFAULTS = UNLABELED_DEFAULTS + [[0.0]]
# Create an input function reading a file using the Dataset API
# Then provide the results to the Estimator API
def read_dataset(filename, mode, batch_size, params):
def _input_fn():
def decode_csv(value_column, sequence_length):
def convert_sequences_from_strings_to_floats(features, column_list):
def split_and_convert_string(string_tensor):
# Split string tensor into a sparse tensor based on delimiter
split_string = tf.string_split(source = tf.expand_dims(input = string_tensor, axis = 0), delimiter = ",")
# Converts the values of the sparse tensor to floats
converted_tensor = tf.string_to_number(split_string.values, out_type = tf.float64)
# Create a new sparse tensor with the new converted values, because the original sparse tensor values are immutable
new_sparse_tensor = tf.SparseTensor(indices = split_string.indices, values = converted_tensor, dense_shape = split_string.dense_shape)
# Create a dense tensor of the float values that were converted from text csv
dense_floats = tf.sparse_tensor_to_dense(sp_input = new_sparse_tensor, default_value = 0.0)
dense_floats_vector = tf.squeeze(input = dense_floats, axis = 0)
return dense_floats_vector
for column in column_list:
features[column] = split_and_convert_string(features[column])
features[column].set_shape([sequence_length])
return features
if mode == tf.estimator.ModeKeys.TRAIN or (mode == tf.estimator.ModeKeys.EVAL and params["evaluation_mode"] != "tune_anomaly_thresholds"):
columns = tf.decode_csv(records = value_column, record_defaults = UNLABELED_DEFAULTS, field_delim = ";")
features = dict(zip(UNLABELED_CSV_COLUMNS, columns))
features = convert_sequences_from_strings_to_floats(features, UNLABELED_CSV_COLUMNS)
return features
else:
columns = tf.decode_csv(records = value_column, record_defaults = LABELED_DEFAULTS, field_delim = ";")
features = dict(zip(LABELED_CSV_COLUMNS, columns))
labels = tf.cast(x = features.pop(LABEL_COLUMN), dtype = tf.float64)
features = convert_sequences_from_strings_to_floats(features, LABELED_CSV_COLUMNS[0:-1])
return features, labels
# Create list of files that match pattern
file_list = tf.gfile.Glob(filename = filename)
# Create dataset from file list
dataset = tf.data.TextLineDataset(filenames = file_list) # Read text file
# Decode the CSV file into a features dictionary of tensors
dataset = dataset.map(map_func = lambda x: decode_csv(x, params["sequence_length"]))
# Determine amount of times to repeat file based on if we are training or evaluating
if mode == tf.estimator.ModeKeys.TRAIN:
num_epochs = None # indefinitely
else:
num_epochs = 1 # end-of-input after this
# Repeat files num_epoch times
dataset = dataset.repeat(count = num_epochs)
# Group the data into batches
dataset = dataset.batch(batch_size = batch_size)
# Determine if we should shuffle based on if we are training or evaluating
if mode == tf.estimator.ModeKeys.TRAIN:
dataset = dataset.shuffle(buffer_size = 10 * batch_size)
# Create a iterator and then pull the next batch of features from the example queue
batched_dataset = dataset.make_one_shot_iterator().get_next()
return batched_dataset
return _input_fn
def try_out_input_function():
with tf.Session() as sess:
fn = read_dataset(
filename = "data/labeled_validation_mixed_sequences.csv",
mode = tf.estimator.ModeKeys.EVAL,
batch_size = 8,
params = {"sequence_length": sequence_length,
"evaluation_mode": "tune_anomaly_thresholds"})
features = sess.run(fn())
print("try_out_input_function: features = \n{}".format(features))
# print("try_out_input_function: features[tag_0].shape = {}".format(features["tag_0"].shape))
# try_out_input_function()
# This function updates the count of records used
def update_count(count_a, count_b):
return count_a + count_b
# This function updates the variables when the number_of_rows equals 1
def singleton_batch_variable_updating(inner_size, reshaped, count_variable, mean_variable, covariance_matrix_variable):
# This function updates the mean vector incrementally
def update_mean_incremental(count_a, mean_a, value_b):
return (mean_a * tf.cast(x = count_a, dtype = tf.float64) + tf.squeeze(input = value_b, axis = 0)) / tf.cast(x = count_a + 1, dtype = tf.float64)
# This function updates the covariance matrix incrementally
def update_covariance_incremental(count_a, mean_a, cov_a, value_b, mean_ab, sample_covariance):
if sample_covariance == True:
cov_ab = (cov_a * tf.cast(x = count_a - 1, dtype = tf.float64) + tf.matmul(a = value_b - mean_a, b = value_b - mean_ab, transpose_a = True)) / tf.cast(x = count_a, dtype = tf.float64)
else:
cov_ab = (cov_a * tf.cast(x = count_a, dtype = tf.float64) + tf.matmul(a = value_b - mean_a, b = value_b - mean_ab, transpose_a = True)) / tf.cast(x = count_a + 1, dtype = tf.float64)
return cov_ab
# Calculate new combined mean to use for incremental covariance matrix calculation
mean_ab = update_mean_incremental(count_a = count_variable,
mean_a = mean_variable,
value_b = reshaped) # time_shape = (number_of_features,), features_shape = (sequence_length,)
# Update running variables from single example
count_tensor = update_count(count_a = count_variable, count_b = 1) # time_shape = (), features_shape = ()
mean_tensor = mean_ab # time_shape = (number_of_features,), features_shape = (sequence_length,)
if inner_size == 1:
covariance_matrix_tensor = tf.zeros_like(tensor = covariance_matrix_variable, dtype = tf.float64)
else:
covariance_matrix_tensor = update_covariance_incremental(count_a = count_variable,
mean_a = mean_variable,
cov_a = covariance_matrix_variable,
value_b = reshaped,
mean_ab = mean_ab,
sample_covariance = True) # time_shape = (number_of_features, number_of_features), features_shape = (sequence_length, sequence_length)
# Assign values to variables, use control dependencies around return to enforce the mahalanobis variables to be assigned, the control order matters, hence the separate contexts
with tf.control_dependencies(control_inputs = [tf.assign(ref = covariance_matrix_variable, value = covariance_matrix_tensor)]):
with tf.control_dependencies(control_inputs = [tf.assign(ref = mean_variable, value = mean_tensor)]):
with tf.control_dependencies(control_inputs = [tf.assign(ref = count_variable, value = count_tensor)]):
return tf.identity(input = covariance_matrix_variable), tf.identity(input = mean_variable), tf.identity(input = count_variable)
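# Cross-check (a sketch, not part of the TF graph): the incremental formulas
# above are mean_{n+1} = (n*mean_n + x)/(n+1) and, for the sample covariance,
# cov_{n+1} = ((n-1)*cov_n + outer(x - mean_n, x - mean_{n+1})) / n.
# inc_mean/inc_cov below are illustrative NumPy mirrors of
# update_mean_incremental/update_covariance_incremental (sample_covariance = True),
# verified against batch statistics.
import numpy as np

def inc_mean(count, mean, x):
    # mean of count + 1 samples from the mean of count samples and new row x
    return (mean * count + x) / (count + 1)

def inc_cov(count, mean, cov, x, new_mean):
    # sample covariance of count + 1 rows from that of count rows (count >= 1)
    return (cov * (count - 1) + np.outer(x - mean, x - new_mean)) / count

demo_rng = np.random.RandomState(42)
demo_data = demo_rng.normal(size=(50, 3))
demo_count, demo_mean, demo_cov = 1, demo_data[0].copy(), np.zeros((3, 3))
for demo_x in demo_data[1:]:
    demo_new_mean = inc_mean(demo_count, demo_mean, demo_x)
    demo_cov = inc_cov(demo_count, demo_mean, demo_cov, demo_x, demo_new_mean)
    demo_mean, demo_count = demo_new_mean, demo_count + 1
# demo_mean / demo_cov now match demo_data.mean(axis=0) / np.cov(demo_data, rowvar=False)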
# This function updates the variables when the number_of_rows does NOT equal 1
def non_singleton_batch_variable_updating(current_batch_size, inner_size, reshaped, count_variable, mean_variable, covariance_matrix_variable):
# This function updates the mean vector using a batch of data
def update_mean_batch(count_a, mean_a, count_b, mean_b):
return (mean_a * tf.cast(x = count_a, dtype = tf.float64) + mean_b * tf.cast(x = count_b, dtype = tf.float64)) / tf.cast(x = count_a + count_b, dtype = tf.float64)
# This function updates the covariance matrix using a batch of data
def update_covariance_batch(count_a, mean_a, cov_a, count_b, mean_b, cov_b, sample_covariance):
mean_diff = tf.expand_dims(input = mean_a - mean_b, axis = 0)
if sample_covariance == True:
cov_ab = (cov_a * tf.cast(x = count_a - 1, dtype = tf.float64) + cov_b * tf.cast(x = count_b - 1, dtype = tf.float64) + tf.matmul(a = mean_diff, b = mean_diff, transpose_a = True) * tf.cast(x = count_a * count_b, dtype = tf.float64) / tf.cast(x = count_a + count_b, dtype = tf.float64)) / tf.cast(x = count_a + count_b - 1, dtype = tf.float64)
else:
cov_ab = (cov_a * tf.cast(x = count_a, dtype = tf.float64) + cov_b * tf.cast(x = count_b, dtype = tf.float64) + tf.matmul(a = mean_diff, b = mean_diff, transpose_a = True) * tf.cast(x = count_a * count_b, dtype = tf.float64) / tf.cast(x = count_a + count_b, dtype = tf.float64)) / tf.cast(x = count_a + count_b, dtype = tf.float64)
return cov_ab
# Find statistics of batch
number_of_rows = current_batch_size * inner_size
reshaped_mean = tf.reduce_mean(input_tensor = reshaped, axis = 0) # time_shape = (number_of_features,), features_shape = (sequence_length,)
reshaped_centered = reshaped - reshaped_mean # time_shape = (current_batch_size * sequence_length, number_of_features), features_shape = (current_batch_size * number_of_features, sequence_length)
if inner_size > 1:
reshaped_covariance_matrix = tf.matmul(a = reshaped_centered, # time_shape = (number_of_features, number_of_features), features_shape = (sequence_length, sequence_length)
b = reshaped_centered,
transpose_a = True) / tf.cast(x = number_of_rows - 1, dtype = tf.float64)
# Update running variables from batch statistics
count_tensor = update_count(count_a = count_variable, count_b = number_of_rows) # time_shape = (), features_shape = ()
mean_tensor = update_mean_batch(count_a = count_variable,
mean_a = mean_variable,
count_b = number_of_rows,
mean_b = reshaped_mean) # time_shape = (number_of_features,), features_shape = (sequence_length,)
if inner_size == 1:
covariance_matrix_tensor = tf.zeros_like(tensor = covariance_matrix_variable, dtype = tf.float64)
else:
covariance_matrix_tensor = update_covariance_batch(count_a = count_variable,
mean_a = mean_variable,
cov_a = covariance_matrix_variable,
count_b = number_of_rows,
mean_b = reshaped_mean,
cov_b = reshaped_covariance_matrix,
sample_covariance = True) # time_shape = (number_of_features, number_of_features), features_shape = (sequence_length, sequence_length)
# Assign the updated statistics to the variables. Control dependencies around the return enforce that all three assignments happen; the order matters, hence the separately nested contexts
with tf.control_dependencies(control_inputs = [tf.assign(ref = covariance_matrix_variable, value = covariance_matrix_tensor)]):
with tf.control_dependencies(control_inputs = [tf.assign(ref = mean_variable, value = mean_tensor)]):
with tf.control_dependencies(control_inputs = [tf.assign(ref = count_variable, value = count_tensor)]):
return tf.identity(input = covariance_matrix_variable), tf.identity(input = mean_variable), tf.identity(input = count_variable)
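The batch path merges two sets of sufficient statistics (count, mean, covariance) via the standard pooled-covariance identity, so an entire batch can be folded into the running variables in one step. A small NumPy check of the same merge formulas (shapes and values are illustrative, not part of the graph):

```python
import numpy as np

rng = np.random.default_rng(1)
a = rng.normal(size=(8, 4))  # data already accumulated in the running variables
b = rng.normal(size=(5, 4))  # incoming batch
count_a, count_b = a.shape[0], b.shape[0]
mean_a, mean_b = a.mean(axis=0), b.mean(axis=0)
cov_a = np.cov(a, rowvar=False, ddof=1)
cov_b = np.cov(b, rowvar=False, ddof=1)

# Merge, matching update_mean_batch / update_covariance_batch (sample_covariance=True)
mean_ab = (mean_a * count_a + mean_b * count_b) / (count_a + count_b)
mean_diff = (mean_a - mean_b)[None, :]  # (1, d), like tf.expand_dims(..., axis=0)
cov_ab = (cov_a * (count_a - 1)
          + cov_b * (count_b - 1)
          + mean_diff.T @ mean_diff * count_a * count_b / (count_a + count_b)
          ) / (count_a + count_b - 1)

# The merged statistics equal those of the concatenated data
full = np.vstack([a, b])
assert np.allclose(mean_ab, full.mean(axis=0))
assert np.allclose(cov_ab, np.cov(full, rowvar=False, ddof=1))
```

Because the merge is exact, accumulating batch by batch gives the same mean and covariance as a single pass over all the data, which is what makes the running `count`/`mean`/`covariance` variables in the model trustworthy across training steps.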
# Create our model function to be used in our custom estimator
def pca_anomaly_detection(features, labels, mode, params):
print("\npca_anomaly_detection: features = \n{}".format(features))
print("pca_anomaly_detection: labels = \n{}".format(labels))
print("pca_anomaly_detection: mode = \n{}".format(mode))
print("pca_anomaly_detection: params = \n{}".format(params))
# 0. Get input sequence tensor into correct shape
# Get dynamic batch size in case there was a partially filled batch
current_batch_size = tf.shape(input = features[UNLABELED_CSV_COLUMNS[0]], out_type = tf.int64)[0]
# Get the number of features
number_of_features = len(UNLABELED_CSV_COLUMNS)
# Stack all of the features into a 3-D tensor
X = tf.stack(values = [features[key] for key in UNLABELED_CSV_COLUMNS], axis = 2) # shape = (current_batch_size, sequence_length, number_of_features)
# Reshape into 2-D tensors
# Time based
X_time = tf.reshape(tensor = X, shape = [current_batch_size * params["sequence_length"], number_of_features]) # shape = (current_batch_size * sequence_length, number_of_features)
# Features based
X_transposed = tf.transpose(a = X, perm = [0, 2, 1]) # shape = (current_batch_size, number_of_features, sequence_length)
X_features = tf.reshape(tensor = X_transposed, shape = [current_batch_size * number_of_features, params["sequence_length"]]) # shape = (current_batch_size * number_of_features, sequence_length)
################################################################################
# Variables for calculating running PCA statistics (count, mean, covariance, eigendecomposition)
with tf.variable_scope(name_or_scope = "pca_variables", reuse = tf.AUTO_REUSE):
# Time based
pca_time_count_variable = tf.get_variable(name = "pca_time_count_variable", # shape = ()
dtype = tf.int64,
initializer = tf.zeros(shape = [],
dtype = tf.int64),
trainable = False)
pca_time_mean_variable = tf.get_variable(name = "pca_time_mean_variable", # shape = (number_of_features,)
dtype = tf.float64,
initializer = tf.zeros(shape = [number_of_features],
dtype = tf.float64),
trainable = False)
pca_time_covariance_matrix_variable = tf.get_variable(name = "pca_time_covariance_matrix_variable", # shape = (number_of_features, number_of_features)
dtype = tf.float64,
initializer = tf.zeros(shape = [number_of_features, number_of_features],
dtype = tf.float64),
trainable = False)
pca_time_eigenvalues_variable = tf.get_variable(name = "pca_time_eigenvalues_variable", # shape = (number_of_features,)
dtype = tf.float64,
initializer = tf.zeros(shape = [number_of_features],
dtype = tf.float64),
trainable = False)
pca_time_eigenvectors_variable = tf.get_variable(name = "pca_time_eigenvectors_variable", # shape = (number_of_features, number_of_features)
dtype = tf.float64,
initializer = tf.zeros(shape = [number_of_features, number_of_features],
dtype = tf.float64),
trainable = False)
# Features based
pca_features_count_variable = tf.get_variable(name = "pca_features_count_variable", # shape = ()
dtype = tf.int64,
initializer = tf.zeros(shape = [],
dtype = tf.int64),
trainable = False)
pca_features_mean_variable = tf.get_variable(name = "pca_features_mean_variable", # shape = (sequence_length,)
dtype = tf.float64,
initializer = tf.zeros(shape = [params["sequence_length"]],
dtype = tf.float64),
trainable = False)
pca_features_covariance_matrix_variable = tf.get_variable(name = "pca_features_covariance_matrix_variable", # shape = (sequence_length, sequence_length)
dtype = tf.float64,
initializer = tf.zeros(shape = [params["sequence_length"], params["sequence_length"]],
dtype = tf.float64),
trainable = False)
pca_features_eigenvalues_variable = tf.get_variable(name = "pca_features_eigenvalues_variable", # shape = (sequence_length,)
dtype = tf.float64,
initializer = tf.zeros(shape = [params["sequence_length"]],
dtype = tf.float64),
trainable = False)
pca_features_eigenvectors_variable = tf.get_variable(name = "pca_features_eigenvectors_variable", # shape = (sequence_length, sequence_length)
dtype = tf.float64,
initializer = tf.zeros(shape = [params["sequence_length"], params["sequence_length"]],
dtype = tf.float64),
trainable = False)
# Variables for calculating error distribution statistics
with tf.variable_scope(name_or_scope = "mahalanobis_distance_variables", reuse = tf.AUTO_REUSE):
# Time based
absolute_error_count_batch_time_variable = tf.get_variable(name = "absolute_error_count_batch_time_variable", # shape = ()
dtype = tf.int64,
initializer = tf.zeros(shape = [],
dtype = tf.int64),
trainable = False)
absolute_error_mean_batch_time_variable = tf.get_variable(name = "absolute_error_mean_batch_time_variable", # shape = (number_of_features,)
dtype = tf.float64,
initializer = tf.zeros(shape = [number_of_features],
dtype = tf.float64),
trainable = False)
absolute_error_covariance_matrix_batch_time_variable = tf.get_variable(name = "absolute_error_covariance_matrix_batch_time_variable", # shape = (number_of_features, number_of_features)
dtype = tf.float64,
initializer = tf.zeros(shape = [number_of_features, number_of_features],
dtype = tf.float64),
trainable = False)
absolute_error_inverse_covariance_matrix_batch_time_variable = tf.get_variable(name = "absolute_error_inverse_covariance_matrix_batch_time_variable", # shape = (number_of_features, number_of_features)
dtype = tf.float64,
initializer = tf.zeros(shape = [number_of_features, number_of_features],
dtype = tf.float64),
trainable = False)
# Features based
absolute_error_count_batch_features_variable = tf.get_variable(name = "absolute_error_count_batch_features_variable", # shape = ()
dtype = tf.int64,
initializer = tf.zeros(shape = [],
dtype = tf.int64),
trainable = False)
absolute_error_mean_batch_features_variable = tf.get_variable(name = "absolute_error_mean_batch_features_variable", # shape = (sequence_length,)
dtype = tf.float64,
initializer = tf.zeros(shape = [params["sequence_length"]],
dtype = tf.float64),
trainable = False)
absolute_error_covariance_matrix_batch_features_variable = tf.get_variable(name = "absolute_error_covariance_matrix_batch_features_variable", # shape = (sequence_length, sequence_length)
dtype = tf.float64,
initializer = tf.zeros(shape = [params["sequence_length"], params["sequence_length"]],
dtype = tf.float64),
trainable = False)
absolute_error_inverse_covariance_matrix_batch_features_variable = tf.get_variable(name = "absolute_error_inverse_covariance_matrix_batch_features_variable", # shape = (sequence_length, sequence_length)
dtype = tf.float64,
initializer = tf.zeros(shape = [params["sequence_length"], params["sequence_length"]],
dtype = tf.float64),
trainable = False)
# Variables for automatically tuning anomaly thresholds
with tf.variable_scope(name_or_scope = "mahalanobis_distance_threshold_variables", reuse = tf.AUTO_REUSE):
# Time based
true_positives_at_thresholds_time_variable = tf.get_variable(name = "true_positives_at_thresholds_time_variable", # shape = (number_of_batch_time_anomaly_thresholds,)
dtype = tf.int64,
initializer = tf.zeros(shape = [params["number_of_batch_time_anomaly_thresholds"]],
dtype = tf.int64),
trainable = False)
false_negatives_at_thresholds_time_variable = tf.get_variable(name = "false_negatives_at_thresholds_time_variable", # shape = (number_of_batch_time_anomaly_thresholds,)
dtype = tf.int64,
initializer = tf.zeros(shape = [params["number_of_batch_time_anomaly_thresholds"]],
dtype = tf.int64),
trainable = False)
false_positives_at_thresholds_time_variable = tf.get_variable(name = "false_positives_at_thresholds_time_variable", # shape = (number_of_batch_time_anomaly_thresholds,)
dtype = tf.int64,
initializer = tf.zeros(shape = [params["number_of_batch_time_anomaly_thresholds"]],
dtype = tf.int64),
trainable = False)
true_negatives_at_thresholds_time_variable = tf.get_variable(name = "true_negatives_at_thresholds_time_variable", # shape = (number_of_batch_time_anomaly_thresholds,)
dtype = tf.int64,
initializer = tf.zeros(shape = [params["number_of_batch_time_anomaly_thresholds"]],
dtype = tf.int64),
trainable = False)
time_anomaly_threshold_variable = tf.get_variable(name = "time_anomaly_threshold_variable", # shape = ()
dtype = tf.float64,
initializer = tf.zeros(shape = [],
dtype = tf.float64),
trainable = False)
# Features based
true_positives_at_thresholds_features_variable = tf.get_variable(name = "true_positives_at_thresholds_features_variable", # shape = (number_of_batch_features_anomaly_thresholds,)
dtype = tf.int64,
initializer = tf.zeros(shape = [params["number_of_batch_features_anomaly_thresholds"]],
dtype = tf.int64),
trainable = False)
false_negatives_at_thresholds_features_variable = tf.get_variable(name = "false_negatives_at_thresholds_features_variable", # shape = (number_of_batch_features_anomaly_thresholds,)
dtype = tf.int64,
initializer = tf.zeros(shape = [params["number_of_batch_features_anomaly_thresholds"]],
dtype = tf.int64),
trainable = False)
false_positives_at_thresholds_features_variable = tf.get_variable(name = "false_positives_at_thresholds_features_variable", # shape = (number_of_batch_features_anomaly_thresholds,)
dtype = tf.int64,
initializer = tf.zeros(shape = [params["number_of_batch_features_anomaly_thresholds"]],
dtype = tf.int64),
trainable = False)
true_negatives_at_thresholds_features_variable = tf.get_variable(name = "true_negatives_at_thresholds_features_variable", # shape = (number_of_batch_features_anomaly_thresholds,)
dtype = tf.int64,
initializer = tf.zeros(shape = [params["number_of_batch_features_anomaly_thresholds"]],
dtype = tf.int64),
trainable = False)
features_anomaly_threshold_variable = tf.get_variable(name = "features_anomaly_threshold_variable", # shape = ()
dtype = tf.float64,
initializer = tf.zeros(shape = [],
dtype = tf.float64),
trainable = False)
# Variables for evaluating performance at the tuned anomaly thresholds
with tf.variable_scope(name_or_scope = "anomaly_threshold_eval_variables", reuse = tf.AUTO_REUSE):
# Time based
true_positives_at_threshold_eval_time_variable = tf.get_variable(name = "true_positives_at_threshold_eval_time_variable", # shape = ()
dtype = tf.int64,
initializer = tf.zeros(shape = [],
dtype = tf.int64),
trainable = False)
false_negatives_at_threshold_eval_time_variable = tf.get_variable(name = "false_negatives_at_threshold_eval_time_variable", # shape = ()
dtype = tf.int64,
initializer = tf.zeros(shape = [],
dtype = tf.int64),
trainable = False)
false_positives_at_threshold_eval_time_variable = tf.get_variable(name = "false_positives_at_threshold_eval_time_variable", # shape = ()
dtype = tf.int64,
initializer = tf.zeros(shape = [],
dtype = tf.int64),
trainable = False)
true_negatives_at_threshold_eval_time_variable = tf.get_variable(name = "true_negatives_at_threshold_eval_time_variable", # shape = ()
dtype = tf.int64,
initializer = tf.zeros(shape = [],
dtype = tf.int64),
trainable = False)
accuracy_at_threshold_eval_time_variable = tf.get_variable(name = "accuracy_at_threshold_eval_time_variable", # shape = ()
dtype = tf.float64,
initializer = tf.zeros(shape = [],
dtype = tf.float64),
trainable = False)
precision_at_threshold_eval_time_variable = tf.get_variable(name = "precision_at_threshold_eval_time_variable", # shape = ()
dtype = tf.float64,
initializer = tf.zeros(shape = [],
dtype = tf.float64),
trainable = False)
recall_at_threshold_eval_time_variable = tf.get_variable(name = "recall_at_threshold_eval_time_variable", # shape = ()
dtype = tf.float64,
initializer = tf.zeros(shape = [],
dtype = tf.float64),
trainable = False)
f_beta_score_at_threshold_eval_time_variable = tf.get_variable(name = "f_beta_score_at_threshold_eval_time_variable", # shape = ()
dtype = tf.float64,
initializer = tf.zeros(shape = [],
dtype = tf.float64),
trainable = False)
# Features based
true_positives_at_threshold_eval_features_variable = tf.get_variable(name = "true_positives_at_threshold_eval_features_variable", # shape = ()
dtype = tf.int64,
initializer = tf.zeros(shape = [],
dtype = tf.int64),
trainable = False)
false_negatives_at_threshold_eval_features_variable = tf.get_variable(name = "false_negatives_at_threshold_eval_features_variable", # shape = ()
dtype = tf.int64,
initializer = tf.zeros(shape = [],
dtype = tf.int64),
trainable = False)
false_positives_at_threshold_eval_features_variable = tf.get_variable(name = "false_positives_at_threshold_eval_features_variable", # shape = ()
dtype = tf.int64,
initializer = tf.zeros(shape = [],
dtype = tf.int64),
trainable = False)
true_negatives_at_threshold_eval_features_variable = tf.get_variable(name = "true_negatives_at_threshold_eval_features_variable", # shape = ()
dtype = tf.int64,
initializer = tf.zeros(shape = [],
dtype = tf.int64),
trainable = False)
accuracy_at_threshold_eval_features_variable = tf.get_variable(name = "accuracy_at_threshold_eval_features_variable", # shape = ()
dtype = tf.float64,
initializer = tf.zeros(shape = [],
dtype = tf.float64),
trainable = False)
precision_at_threshold_eval_features_variable = tf.get_variable(name = "precision_at_threshold_eval_features_variable", # shape = ()
dtype = tf.float64,
initializer = tf.zeros(shape = [],
dtype = tf.float64),
trainable = False)
recall_at_threshold_eval_features_variable = tf.get_variable(name = "recall_at_threshold_eval_features_variable", # shape = ()
dtype = tf.float64,
initializer = tf.zeros(shape = [],
dtype = tf.float64),
trainable = False)
f_beta_score_at_threshold_eval_features_variable = tf.get_variable(name = "f_beta_score_at_threshold_eval_features_variable", # shape = ()
dtype = tf.float64,
initializer = tf.zeros(shape = [],
dtype = tf.float64),
trainable = False)
dummy_variable = tf.get_variable(name = "dummy_variable", # shape = ()
dtype = tf.float64,
initializer = tf.zeros(shape = [], dtype = tf.float64),
trainable = True)
# Now branch off based on which mode we are in
predictions_dict = None
loss = None
train_op = None
eval_metric_ops = None
export_outputs = None
# 3. Loss function, training/eval ops
if mode == tf.estimator.ModeKeys.TRAIN and params["evaluation_mode"] == "reconstruction":
with tf.variable_scope(name_or_scope = "pca_variables", reuse = tf.AUTO_REUSE):
# Check whether the batch is a singleton, since that requires different covariance math
# Time based ########################################
singleton_condition = tf.equal(x = current_batch_size * params["sequence_length"], y = 1) # shape = ()
pca_time_covariance_matrix_variable, pca_time_mean_variable, pca_time_count_variable = \
tf.cond(pred = singleton_condition,
true_fn = lambda: singleton_batch_variable_updating(params["sequence_length"], X_time, pca_time_count_variable, pca_time_mean_variable, pca_time_covariance_matrix_variable),
false_fn = lambda: non_singleton_batch_variable_updating(current_batch_size, params["sequence_length"], X_time, pca_time_count_variable, pca_time_mean_variable, pca_time_covariance_matrix_variable))
pca_time_eigenvalues_tensor, pca_time_eigenvectors_tensor = tf.linalg.eigh(tensor = pca_time_covariance_matrix_variable) # shape = (number_of_features,) & (number_of_features, number_of_features)
# Features based ########################################
singleton_batch_features_condition = tf.equal(x = current_batch_size * number_of_features, y = 1) # shape = ()
pca_features_covariance_matrix_variable, pca_features_mean_variable, pca_features_count_variable = \
tf.cond(pred = singleton_batch_features_condition,
true_fn = lambda: singleton_batch_variable_updating(number_of_features, X_features, pca_features_count_variable, pca_features_mean_variable, pca_features_covariance_matrix_variable),
false_fn = lambda: non_singleton_batch_variable_updating(current_batch_size, number_of_features, X_features, pca_features_count_variable, pca_features_mean_variable, pca_features_covariance_matrix_variable))
pca_features_eigenvalues_tensor, pca_features_eigenvectors_tensor = tf.linalg.eigh(tensor = pca_features_covariance_matrix_variable) # shape = (sequence_length,) & (sequence_length, sequence_length)
# Lastly, use control dependencies around the loss to enforce that the PCA variables are assigned; the control order matters, hence the separate contexts
with tf.control_dependencies(control_inputs = [pca_time_covariance_matrix_variable, pca_features_covariance_matrix_variable]):
with tf.control_dependencies(control_inputs = [pca_time_mean_variable, pca_features_mean_variable]):
with tf.control_dependencies(control_inputs = [pca_time_count_variable, pca_features_count_variable]):
with tf.control_dependencies(control_inputs = [tf.assign(ref = pca_time_eigenvalues_variable, value = pca_time_eigenvalues_tensor),
tf.assign(ref = pca_time_eigenvectors_variable, value = pca_time_eigenvectors_tensor),
tf.assign(ref = pca_features_eigenvalues_variable, value = pca_features_eigenvalues_tensor),
tf.assign(ref = pca_features_eigenvectors_variable, value = pca_features_eigenvectors_tensor)]):
loss = tf.reduce_sum(input_tensor = tf.zeros(shape = (), dtype = tf.float64) * dummy_variable) # zero-valued dummy loss: the real work happens in the control-dependency assignments above
train_op = tf.contrib.layers.optimize_loss(
loss = loss,
global_step = tf.train.get_global_step(),
learning_rate = params["learning_rate"],
optimizer = "SGD")
else:
# Time based
X_time_centered = X_time - pca_time_mean_variable # shape = (current_batch_size * sequence_length, number_of_features)
X_time_projected = tf.matmul(a = X_time_centered, b = pca_time_eigenvectors_variable[:, -params["k_principal_components"]:]) # shape = (current_batch_size * sequence_length, params["k_principal_components"])
X_time_reconstructed = tf.matmul(a = X_time_projected, b = pca_time_eigenvectors_variable[:, -params["k_principal_components"]:], transpose_b = True) # shape = (current_batch_size * sequence_length, number_of_features)
X_time_abs_reconstruction_error = tf.abs(x = X_time_centered - X_time_reconstructed) # shape = (current_batch_size * sequence_length, number_of_features)
# Features based
X_features_centered = X_features - pca_features_mean_variable # shape = (current_batch_size * number_of_features, sequence_length)
X_features_projected = tf.matmul(a = X_features_centered, b = pca_features_eigenvectors_variable[:, -params["k_principal_components"]:]) # shape = (current_batch_size * number_of_features, params["k_principal_components"])
X_features_reconstructed = tf.matmul(a = X_features_projected, b = pca_features_eigenvectors_variable[:, -params["k_principal_components"]:], transpose_b = True) # shape = (current_batch_size * number_of_features, sequence_length)
X_features_abs_reconstruction_error = tf.abs(x = X_features_centered - X_features_reconstructed) # shape = (current_batch_size * number_of_features, sequence_length)
if mode == tf.estimator.ModeKeys.TRAIN and params["evaluation_mode"] == "calculate_error_distribution_statistics":
################################################################################
with tf.variable_scope(name_or_scope = "mahalanobis_distance_variables", reuse = tf.AUTO_REUSE):
# Time based ########################################
singleton_batch_time_condition = tf.equal(x = current_batch_size * params["sequence_length"], y = 1) # shape = ()
covariance_batch_time_variable, mean_batch_time_variable, count_batch_time_variable = \
tf.cond(pred = singleton_batch_time_condition,
true_fn = lambda: singleton_batch_variable_updating(params["sequence_length"], X_time_abs_reconstruction_error, absolute_error_count_batch_time_variable, absolute_error_mean_batch_time_variable, absolute_error_covariance_matrix_batch_time_variable),
false_fn = lambda: non_singleton_batch_variable_updating(current_batch_size, params["sequence_length"], X_time_abs_reconstruction_error, absolute_error_count_batch_time_variable, absolute_error_mean_batch_time_variable, absolute_error_covariance_matrix_batch_time_variable))
# Features based ########################################
singleton_batch_features_condition = tf.equal(x = current_batch_size * number_of_features, y = 1) # shape = ()
covariance_batch_features_variable, mean_batch_features_variable, count_batch_features_variable = \
tf.cond(pred = singleton_batch_features_condition,
true_fn = lambda: singleton_batch_variable_updating(number_of_features, X_features_abs_reconstruction_error, absolute_error_count_batch_features_variable, absolute_error_mean_batch_features_variable, absolute_error_covariance_matrix_batch_features_variable),
false_fn = lambda: non_singleton_batch_variable_updating(current_batch_size, number_of_features, X_features_abs_reconstruction_error, absolute_error_count_batch_features_variable, absolute_error_mean_batch_features_variable, absolute_error_covariance_matrix_batch_features_variable))
# Lastly, use control dependencies around the loss to enforce that the mahalanobis variables are assigned; the control order matters, hence the separate contexts
with tf.control_dependencies(control_inputs = [covariance_batch_time_variable, covariance_batch_features_variable]):
with tf.control_dependencies(control_inputs = [mean_batch_time_variable, mean_batch_features_variable]):
with tf.control_dependencies(control_inputs = [count_batch_time_variable, count_batch_features_variable]):
# Time based
absolute_error_inverse_covariance_matrix_batch_time_tensor = \
tf.matrix_inverse(input = covariance_batch_time_variable + \
tf.eye(num_rows = tf.shape(input = covariance_batch_time_variable)[0],
dtype = tf.float64) * params["eps"]) # shape = (number_of_features, number_of_features)
# Features based
absolute_error_inverse_covariance_matrix_batch_features_tensor = \
tf.matrix_inverse(input = covariance_batch_features_variable + \
tf.eye(num_rows = tf.shape(input = covariance_batch_features_variable)[0],
dtype = tf.float64) * params["eps"]) # shape = (sequence_length, sequence_length)
with tf.control_dependencies(control_inputs = [tf.assign(ref = absolute_error_inverse_covariance_matrix_batch_time_variable, value = absolute_error_inverse_covariance_matrix_batch_time_tensor),
tf.assign(ref = absolute_error_inverse_covariance_matrix_batch_features_variable, value = absolute_error_inverse_covariance_matrix_batch_features_tensor)]):
loss = tf.reduce_sum(input_tensor = tf.zeros(shape = (), dtype = tf.float64) * dummy_variable) # zero-valued dummy loss: the real work happens in the control-dependency assignments above
train_op = tf.contrib.layers.optimize_loss(
loss = loss,
global_step = tf.train.get_global_step(),
learning_rate = params["learning_rate"],
optimizer = "SGD")
elif mode == tf.estimator.ModeKeys.EVAL and params["evaluation_mode"] != "tune_anomaly_thresholds":
# Reconstruction loss on evaluation set
loss = tf.losses.mean_squared_error(labels = X_time_centered, predictions = X_time_reconstructed)
if params["evaluation_mode"] == "reconstruction": # if reconstruction during train_and_evaluate
# Reconstruction eval metrics
eval_metric_ops = {
"rmse": tf.metrics.root_mean_squared_error(labels = X_time_centered, predictions = X_time_reconstructed),
"mae": tf.metrics.mean_absolute_error(labels = X_time_centered, predictions = X_time_reconstructed)
}
elif mode == tf.estimator.ModeKeys.PREDICT or ((mode == tf.estimator.ModeKeys.TRAIN or mode == tf.estimator.ModeKeys.EVAL) and params["evaluation_mode"] == "tune_anomaly_thresholds"):
def mahalanobis_distance(error_vectors_reshaped, mean_vector, inverse_covariance_matrix, final_shape):
error_vectors_reshaped_centered = error_vectors_reshaped - mean_vector # time_shape = (current_batch_size * sequence_length, number_of_features), features_shape = (current_batch_size * number_of_features, sequence_length)
mahalanobis_right_matrix_product = tf.matmul(a = inverse_covariance_matrix, # time_shape = (number_of_features, current_batch_size * sequence_length), features_shape = (sequence_length, current_batch_size * number_of_features)
b = error_vectors_reshaped_centered,
transpose_b = True)
mahalanobis_distance_vectorized = tf.matmul(a = error_vectors_reshaped_centered, # time_shape = (current_batch_size * sequence_length, current_batch_size * sequence_length), features_shape = (current_batch_size * number_of_features, current_batch_size * number_of_features)
b = mahalanobis_right_matrix_product)
mahalanobis_distance_flat = tf.diag_part(input = mahalanobis_distance_vectorized) # time_shape = (current_batch_size * sequence_length,), features_shape = (current_batch_size * number_of_features,)
mahalanobis_distance_final_shaped = tf.reshape(tensor = mahalanobis_distance_flat, shape = [-1, final_shape]) # time_shape = (current_batch_size, sequence_length), features_shape = (current_batch_size, number_of_features)
mahalanobis_distance_final_shaped_abs = tf.abs(x = mahalanobis_distance_final_shaped) # time_shape = (current_batch_size, sequence_length), features_shape = (current_batch_size, number_of_features)
return mahalanobis_distance_final_shaped_abs
with tf.variable_scope(name_or_scope = "mahalanobis_distance_variables", reuse = tf.AUTO_REUSE):
# Time based
mahalanobis_distance_batch_time = mahalanobis_distance(error_vectors_reshaped = X_time_abs_reconstruction_error, # shape = (current_batch_size, sequence_length)
mean_vector = absolute_error_mean_batch_time_variable,
inverse_covariance_matrix = absolute_error_inverse_covariance_matrix_batch_time_variable,
final_shape = params["sequence_length"])
# Features based
mahalanobis_distance_batch_features = mahalanobis_distance(error_vectors_reshaped = X_features_abs_reconstruction_error, # shape = (current_batch_size, number_of_features)
mean_vector = absolute_error_mean_batch_features_variable,
inverse_covariance_matrix = absolute_error_inverse_covariance_matrix_batch_features_variable,
final_shape = number_of_features)
if mode != tf.estimator.ModeKeys.PREDICT:
labels_normal_mask = tf.equal(x = labels, y = 0)
labels_anomalous_mask = tf.equal(x = labels, y = 1)
if mode == tf.estimator.ModeKeys.TRAIN:
def update_anomaly_threshold_variables(labels_normal_mask, labels_anomalous_mask, number_of_thresholds, anomaly_thresholds, mahalanobis_distance, true_positives_at_thresholds_variable, false_negatives_at_thresholds_variable, false_positives_at_thresholds_variable, true_negatives_at_thresholds_variable):
mahalanobis_distance_over_thresholds = tf.map_fn(fn = lambda anomaly_threshold: mahalanobis_distance > anomaly_threshold,
elems = anomaly_thresholds,
dtype = tf.bool) # time_shape = (number_of_batch_time_anomaly_thresholds, current_batch_size, sequence_length), features_shape = (number_of_batch_features_anomaly_thresholds, current_batch_size, number_of_features)
mahalanobis_distance_any_over_thresholds = tf.reduce_any(input_tensor = mahalanobis_distance_over_thresholds, axis = 2) # time_shape = (number_of_batch_time_anomaly_thresholds, current_batch_size), features_shape = (number_of_batch_features_anomaly_thresholds, current_batch_size)
predicted_normals = tf.equal(x = mahalanobis_distance_any_over_thresholds, y = False) # time_shape = (number_of_batch_time_anomaly_thresholds, current_batch_size), features_shape = (number_of_batch_features_anomaly_thresholds, current_batch_size)
predicted_anomalies = tf.equal(x = mahalanobis_distance_any_over_thresholds, y = True) # time_shape = (number_of_batch_time_anomaly_thresholds, current_batch_size), features_shape = (number_of_batch_features_anomaly_thresholds, current_batch_size)
# Calculate confusion matrix of current batch
true_positives = tf.reduce_sum(input_tensor = tf.cast(x = tf.map_fn(fn = lambda threshold: tf.logical_and(x = labels_anomalous_mask, y = predicted_anomalies[threshold, :]),
elems = tf.range(start = 0, limit = number_of_thresholds, dtype = tf.int64),
dtype = tf.bool),
dtype = tf.int64),
axis = 1) # time_shape = (number_of_batch_time_anomaly_thresholds,), features_shape = (number_of_batch_features_anomaly_thresholds,)
false_negatives = tf.reduce_sum(input_tensor = tf.cast(x = tf.map_fn(fn = lambda threshold: tf.logical_and(x = labels_anomalous_mask, y = predicted_normals[threshold, :]),
elems = tf.range(start = 0, limit = number_of_thresholds, dtype = tf.int64),
dtype = tf.bool),
dtype = tf.int64),
axis = 1) # time_shape = (number_of_batch_time_anomaly_thresholds,), features_shape = (number_of_batch_features_anomaly_thresholds,)
false_positives = tf.reduce_sum(input_tensor = tf.cast(x = tf.map_fn(fn = lambda threshold: tf.logical_and(x = labels_normal_mask, y = predicted_anomalies[threshold, :]),
elems = tf.range(start = 0, limit = number_of_thresholds, dtype = tf.int64),
dtype = tf.bool),
dtype = tf.int64),
axis = 1) # time_shape = (number_of_batch_time_anomaly_thresholds,), features_shape = (number_of_batch_features_anomaly_thresholds,)
true_negatives = tf.reduce_sum(input_tensor = tf.cast(x = tf.map_fn(fn = lambda threshold: tf.logical_and(x = labels_normal_mask, y = predicted_normals[threshold, :]),
elems = tf.range(start = 0, limit = number_of_thresholds, dtype = tf.int64),
dtype = tf.bool),
dtype = tf.int64),
axis = 1) # time_shape = (number_of_batch_time_anomaly_thresholds,), features_shape = (number_of_batch_features_anomaly_thresholds,)
with tf.control_dependencies(control_inputs = [tf.assign_add(ref = true_positives_at_thresholds_variable, value = true_positives), tf.assign_add(ref = false_negatives_at_thresholds_variable, value = false_negatives), tf.assign_add(ref = false_positives_at_thresholds_variable, value = false_positives), tf.assign_add(ref = true_negatives_at_thresholds_variable, value = true_negatives)]):
return tf.identity(input = true_positives_at_thresholds_variable), tf.identity(input = false_negatives_at_thresholds_variable), tf.identity(input = false_positives_at_thresholds_variable), tf.identity(input = true_negatives_at_thresholds_variable)
with tf.variable_scope(name_or_scope = "mahalanobis_distance_variables", reuse = tf.AUTO_REUSE):
# Time based
time_anomaly_thresholds = tf.linspace(start = tf.constant(value = params["min_batch_time_anomaly_threshold"], dtype = tf.float64), # shape = (number_of_batch_time_anomaly_thresholds,)
stop = tf.constant(value = params["max_batch_time_anomaly_threshold"], dtype = tf.float64),
num = params["number_of_batch_time_anomaly_thresholds"])
true_positives_time_variable, false_negatives_time_variable, false_positives_time_variable, true_negatives_time_variable = \
update_anomaly_threshold_variables(labels_normal_mask, labels_anomalous_mask, params["number_of_batch_time_anomaly_thresholds"], time_anomaly_thresholds, mahalanobis_distance_batch_time, true_positives_at_thresholds_time_variable, false_negatives_at_thresholds_time_variable, false_positives_at_thresholds_time_variable, true_negatives_at_thresholds_time_variable)
# Features based
features_anomaly_thresholds = tf.linspace(start = tf.constant(value = params["min_batch_features_anomaly_threshold"], dtype = tf.float64), # shape = (number_of_batch_features_anomaly_thresholds,)
stop = tf.constant(value = params["max_batch_features_anomaly_threshold"], dtype = tf.float64),
num = params["number_of_batch_features_anomaly_thresholds"])
true_positives_features_variable, false_negatives_features_variable, false_positives_features_variable, true_negatives_features_variable = \
update_anomaly_threshold_variables(labels_normal_mask, labels_anomalous_mask, params["number_of_batch_features_anomaly_thresholds"], features_anomaly_thresholds, mahalanobis_distance_batch_features, true_positives_at_thresholds_features_variable, false_negatives_at_thresholds_features_variable, false_positives_at_thresholds_features_variable, true_negatives_at_thresholds_features_variable)
# Reconstruction loss on evaluation set
with tf.control_dependencies(control_inputs = [true_positives_time_variable, false_negatives_time_variable, false_positives_time_variable, true_negatives_time_variable, true_positives_features_variable, false_negatives_features_variable, false_positives_features_variable, true_negatives_features_variable]):
def calculate_composite_classification_metrics(anomaly_thresholds, true_positives, false_negatives, false_positives, true_negatives):
accuracy = tf.cast(x = true_positives + true_negatives, dtype = tf.float64) / tf.cast(x = true_positives + false_negatives + false_positives + true_negatives, dtype = tf.float64) # time_shape = (number_of_batch_time_anomaly_thresholds,), features_shape = (number_of_batch_features_anomaly_thresholds,)
precision = tf.cast(x = true_positives, dtype = tf.float64) / tf.cast(x = true_positives + false_positives, dtype = tf.float64) # time_shape = (number_of_batch_time_anomaly_thresholds,), features_shape = (number_of_batch_features_anomaly_thresholds,)
recall = tf.cast(x = true_positives, dtype = tf.float64) / tf.cast(x = true_positives + false_negatives, dtype = tf.float64) # time_shape = (number_of_batch_time_anomaly_thresholds,), features_shape = (number_of_batch_features_anomaly_thresholds,)
f_beta_score = (1 + params["f_score_beta"] ** 2) * (precision * recall) / (params["f_score_beta"] ** 2 * precision + recall) # time_shape = (number_of_batch_time_anomaly_thresholds,), features_shape = (number_of_batch_features_anomaly_thresholds,)
return accuracy, precision, recall, f_beta_score
# Time based
accuracy_time, precision_time, recall_time, f_beta_score_time = \
calculate_composite_classification_metrics(time_anomaly_thresholds, true_positives_time_variable, false_negatives_time_variable, false_positives_time_variable, true_negatives_time_variable)
# Features based
accuracy_features, precision_features, recall_features, f_beta_score_features = \
calculate_composite_classification_metrics(features_anomaly_thresholds, true_positives_features_variable, false_negatives_features_variable, false_positives_features_variable, true_negatives_features_variable)
with tf.control_dependencies(control_inputs = [precision_time, precision_features]):
with tf.control_dependencies(control_inputs = [recall_time, recall_features]):
with tf.control_dependencies(control_inputs = [f_beta_score_time, f_beta_score_features]):
def find_best_anomaly_threshold(anomaly_thresholds, f_beta_score, user_passed_anomaly_threshold, anomaly_threshold_variable):
          if user_passed_anomaly_threshold is None:
best_anomaly_threshold = tf.gather(params = anomaly_thresholds, indices = tf.argmax(input = f_beta_score, axis = 0)) # shape = ()
else:
best_anomaly_threshold = user_passed_anomaly_threshold # shape = ()
with tf.control_dependencies(control_inputs = [tf.assign(ref = anomaly_threshold_variable, value = best_anomaly_threshold)]):
return tf.identity(input = anomaly_threshold_variable)
# Time based
best_anomaly_threshold_time = find_best_anomaly_threshold(time_anomaly_thresholds, f_beta_score_time, params["time_anomaly_threshold"], time_anomaly_threshold_variable)
# Features based
best_anomaly_threshold_features = find_best_anomaly_threshold(features_anomaly_thresholds, f_beta_score_features, params["features_anomaly_threshold"], features_anomaly_threshold_variable)
with tf.control_dependencies(control_inputs = [tf.assign(ref = time_anomaly_threshold_variable, value = best_anomaly_threshold_time),
tf.assign(ref = features_anomaly_threshold_variable, value = best_anomaly_threshold_features)]):
loss = tf.reduce_sum(input_tensor = tf.zeros(shape = (), dtype = tf.float64) * dummy_variable)
train_op = tf.contrib.layers.optimize_loss(
loss = loss,
global_step = tf.train.get_global_step(),
learning_rate = params["learning_rate"],
optimizer = "SGD")
elif mode == tf.estimator.ModeKeys.EVAL:
def update_anomaly_threshold_variables(labels_normal_mask, labels_anomalous_mask, anomaly_threshold, mahalanobis_distance, true_positives_at_thresholds_variable, false_negatives_at_thresholds_variable, false_positives_at_thresholds_variable, true_negatives_at_thresholds_variable):
mahalanobis_distance_over_threshold = mahalanobis_distance > anomaly_threshold # time_shape = (current_batch_size, sequence_length), features_shape = (current_batch_size, number_of_features)
mahalanobis_distance_any_over_threshold = tf.reduce_any(input_tensor = mahalanobis_distance_over_threshold, axis = -1) # time_shape = (current_batch_size,), features_shape = (current_batch_size,)
predicted_normals = tf.equal(x = mahalanobis_distance_any_over_threshold, y = False) # time_shape = (current_batch_size,), features_shape = (current_batch_size,)
predicted_anomalies = tf.equal(x = mahalanobis_distance_any_over_threshold, y = True) # time_shape = (current_batch_size,), features_shape = (current_batch_size,)
# Calculate confusion matrix of current batch
true_positives = tf.reduce_sum(input_tensor = tf.cast(x = tf.logical_and(x = labels_anomalous_mask, y = predicted_anomalies), dtype = tf.int64),
axis = -1) # time_shape = (), features_shape = ()
false_negatives = tf.reduce_sum(input_tensor = tf.cast(x = tf.logical_and(x = labels_anomalous_mask, y = predicted_normals), dtype = tf.int64),
axis = -1) # time_shape = (), features_shape = ()
false_positives = tf.reduce_sum(input_tensor = tf.cast(x = tf.logical_and(x = labels_normal_mask, y = predicted_anomalies), dtype = tf.int64),
axis = -1) # time_shape = (), features_shape = ()
true_negatives = tf.reduce_sum(input_tensor = tf.cast(x = tf.logical_and(x = labels_normal_mask, y = predicted_normals), dtype = tf.int64),
axis = -1) # time_shape = (), features_shape = ()
with tf.control_dependencies(control_inputs = [tf.assign_add(ref = true_positives_at_thresholds_variable, value = true_positives), tf.assign_add(ref = false_negatives_at_thresholds_variable, value = false_negatives), tf.assign_add(ref = false_positives_at_thresholds_variable, value = false_positives), tf.assign_add(ref = true_negatives_at_thresholds_variable, value = true_negatives)]):
return tf.identity(input = true_positives_at_thresholds_variable), tf.identity(input = false_negatives_at_thresholds_variable), tf.identity(input = false_positives_at_thresholds_variable), tf.identity(input = true_negatives_at_thresholds_variable)
with tf.variable_scope(name_or_scope = "anomaly_threshold_eval_variables", reuse = tf.AUTO_REUSE):
# Time based
true_positives_time_variable, false_negatives_time_variable, false_positives_time_variable, true_negatives_time_variable = \
update_anomaly_threshold_variables(labels_normal_mask, labels_anomalous_mask, time_anomaly_threshold_variable, mahalanobis_distance_batch_time, true_positives_at_threshold_eval_time_variable, false_negatives_at_threshold_eval_time_variable, false_positives_at_threshold_eval_time_variable, true_negatives_at_threshold_eval_time_variable)
# Features based
true_positives_features_variable, false_negatives_features_variable, false_positives_features_variable, true_negatives_features_variable = \
update_anomaly_threshold_variables(labels_normal_mask, labels_anomalous_mask, features_anomaly_threshold_variable, mahalanobis_distance_batch_features, true_positives_at_threshold_eval_features_variable, false_negatives_at_threshold_eval_features_variable, false_positives_at_threshold_eval_features_variable, true_negatives_at_threshold_eval_features_variable)
with tf.control_dependencies(control_inputs = [true_positives_time_variable, false_negatives_time_variable, false_positives_time_variable, true_negatives_time_variable, true_positives_features_variable, false_negatives_features_variable, false_positives_features_variable, true_negatives_features_variable]):
def calculate_composite_classification_metrics(anomaly_thresholds, true_positives, false_negatives, false_positives, true_negatives, accuracy_at_threshold_variable, precision_at_threshold_variable, recall_at_threshold_variable, f_beta_score_at_threshold_variable):
accuracy = tf.cast(x = true_positives + true_negatives, dtype = tf.float64) / tf.cast(x = true_positives + false_negatives + false_positives + true_negatives, dtype = tf.float64) # shape = ()
precision = tf.cast(x = true_positives, dtype = tf.float64) / tf.cast(x = true_positives + false_positives, dtype = tf.float64) # shape = ()
recall = tf.cast(x = true_positives, dtype = tf.float64) / tf.cast(x = true_positives + false_negatives, dtype = tf.float64) # shape = ()
f_beta_score = (1 + params["f_score_beta"] ** 2) * (precision * recall) / (params["f_score_beta"] ** 2 * precision + recall) # shape = ()
with tf.control_dependencies(control_inputs = [tf.assign(ref = accuracy_at_threshold_variable, value = accuracy), tf.assign(ref = precision_at_threshold_variable, value = precision), tf.assign(ref = recall_at_threshold_variable, value = recall)]):
with tf.control_dependencies(control_inputs = [tf.assign(ref = f_beta_score_at_threshold_variable, value = f_beta_score)]):
return tf.identity(input = accuracy_at_threshold_variable), tf.identity(input = precision_at_threshold_variable), tf.identity(input = recall_at_threshold_variable), tf.identity(input = f_beta_score_at_threshold_variable)
with tf.variable_scope(name_or_scope = "anomaly_threshold_eval_variables", reuse = tf.AUTO_REUSE):
# Time based
accuracy_time, precision_time, recall_time, f_beta_score_time = \
calculate_composite_classification_metrics(time_anomaly_threshold_variable, true_positives_time_variable, false_negatives_time_variable, false_positives_time_variable, true_negatives_time_variable, accuracy_at_threshold_eval_time_variable, precision_at_threshold_eval_time_variable, recall_at_threshold_eval_time_variable, f_beta_score_at_threshold_eval_time_variable)
# Features based
accuracy_features, precision_features, recall_features, f_beta_score_features = \
calculate_composite_classification_metrics(features_anomaly_threshold_variable, true_positives_features_variable, false_negatives_features_variable, false_positives_features_variable, true_negatives_features_variable, accuracy_at_threshold_eval_features_variable, precision_at_threshold_eval_features_variable, recall_at_threshold_eval_features_variable, f_beta_score_at_threshold_eval_features_variable)
with tf.control_dependencies(control_inputs = [accuracy_time, precision_time, recall_time, f_beta_score_time, accuracy_features, precision_features, recall_features, f_beta_score_features]):
loss = tf.losses.mean_squared_error(labels = X_time_centered, predictions = X_time_reconstructed)
# Anomaly detection eval metrics
eval_metric_ops = {
# Time based
"time_anomaly_true_positives": tuple([true_positives_at_threshold_eval_time_variable, tf.zeros(shape = [], dtype = tf.int64)]),
"time_anomaly_false_negatives": tuple([false_negatives_at_threshold_eval_time_variable, tf.zeros(shape = [], dtype = tf.int64)]),
"time_anomaly_false_positives": tuple([false_positives_at_threshold_eval_time_variable, tf.zeros(shape = [], dtype = tf.int64)]),
"time_anomaly_true_negatives": tuple([true_negatives_at_threshold_eval_time_variable, tf.zeros(shape = [], dtype = tf.int64)]),
"time_anomaly_accuracy": tuple([accuracy_at_threshold_eval_time_variable, tf.zeros(shape = [], dtype = tf.float64)]),
"time_anomaly_precision": tuple([precision_at_threshold_eval_time_variable, tf.zeros(shape = [], dtype = tf.float64)]),
"time_anomaly_recall": tuple([recall_at_threshold_eval_time_variable, tf.zeros(shape = [], dtype = tf.float64)]),
"time_anomaly_f_beta_score": tuple([f_beta_score_at_threshold_eval_time_variable, tf.zeros(shape = [], dtype = tf.float64)]),
# Features based
"features_anomaly_true_positives": tuple([true_positives_at_threshold_eval_features_variable, tf.zeros(shape = [], dtype = tf.int64)]),
"features_anomaly_false_negatives": tuple([false_negatives_at_threshold_eval_features_variable, tf.zeros(shape = [], dtype = tf.int64)]),
"features_anomaly_false_positives": tuple([false_positives_at_threshold_eval_features_variable, tf.zeros(shape = [], dtype = tf.int64)]),
"features_anomaly_true_negatives": tuple([true_negatives_at_threshold_eval_features_variable, tf.zeros(shape = [], dtype = tf.int64)]),
"features_anomaly_accuracy": tuple([accuracy_at_threshold_eval_features_variable, tf.zeros(shape = [], dtype = tf.float64)]),
"features_anomaly_precision": tuple([precision_at_threshold_eval_features_variable, tf.zeros(shape = [], dtype = tf.float64)]),
"features_anomaly_recall": tuple([recall_at_threshold_eval_features_variable, tf.zeros(shape = [], dtype = tf.float64)]),
"features_anomaly_f_beta_score": tuple([f_beta_score_at_threshold_eval_features_variable, tf.zeros(shape = [], dtype = tf.float64)])
}
else: # mode == tf.estimator.ModeKeys.PREDICT
# Flag predictions as either normal or anomalous
batch_time_anomaly_flags = tf.where(condition = tf.reduce_any(input_tensor = tf.greater(x = tf.abs(x = mahalanobis_distance_batch_time), # shape = (current_batch_size,)
y = time_anomaly_threshold_variable),
axis = 1),
x = tf.ones(shape = [current_batch_size], dtype = tf.int64),
y = tf.zeros(shape = [current_batch_size], dtype = tf.int64))
batch_features_anomaly_flags = tf.where(condition = tf.reduce_any(input_tensor = tf.greater(x = tf.abs(x = mahalanobis_distance_batch_features), # shape = (current_batch_size,)
y = features_anomaly_threshold_variable),
axis = 1),
x = tf.ones(shape = [current_batch_size], dtype = tf.int64),
y = tf.zeros(shape = [current_batch_size], dtype = tf.int64))
# Create predictions dictionary
predictions_dict = {"X_time_abs_reconstruction_error": tf.reshape(tensor = X_time_abs_reconstruction_error, shape = [current_batch_size, params["sequence_length"], number_of_features]),
"X_features_abs_reconstruction_error": tf.transpose(a = tf.reshape(tensor = X_features_abs_reconstruction_error, shape = [current_batch_size, number_of_features, params["sequence_length"]]), perm = [0, 2, 1]),
"mahalanobis_distance_batch_time": mahalanobis_distance_batch_time,
"mahalanobis_distance_batch_features": mahalanobis_distance_batch_features,
"batch_time_anomaly_flags": batch_time_anomaly_flags,
"batch_features_anomaly_flags": batch_features_anomaly_flags}
# Create export outputs
export_outputs = {"predict_export_outputs": tf.estimator.export.PredictOutput(outputs = predictions_dict)}
# Return EstimatorSpec
return tf.estimator.EstimatorSpec(
mode = mode,
predictions = predictions_dict,
loss = loss,
train_op = train_op,
eval_metric_ops = eval_metric_ops,
export_outputs = export_outputs)
# Create our serving input function to accept data at serving time and convert it into the format our custom estimator expects
def serving_input_fn(sequence_length):
# This function fixes the shape and type of our input strings
def fix_shape_and_type_for_serving(placeholder):
current_batch_size = tf.shape(input = placeholder, out_type = tf.int64)[0]
# String split each string in the batch and output the values from the resulting SparseTensors
split_string = tf.stack(values = tf.map_fn( # shape = (batch_size, sequence_length)
fn = lambda x: tf.string_split(source = [placeholder[x]], delimiter = ',').values,
elems = tf.range(start = 0, limit = current_batch_size, dtype = tf.int64),
dtype = tf.string), axis = 0)
# Convert each string in the split tensor to float
feature_tensor = tf.string_to_number(string_tensor = split_string, out_type = tf.float64) # shape = (batch_size, sequence_length)
return feature_tensor
  # This function fixes the dynamic shape ambiguity of the last dimension so that we can use it in our DNN (tf.layers.dense requires the last dimension to be known)
def get_shape_and_set_modified_shape_2D(tensor, additional_dimension_sizes):
# Get static shape for tensor and convert it to list
shape = tensor.get_shape().as_list()
# Set outer shape to additional_dimension_sizes[0] since we know that this is the correct size
shape[1] = additional_dimension_sizes[0]
# Set the shape of tensor to our modified shape
tensor.set_shape(shape = shape) # shape = (batch_size, additional_dimension_sizes[0])
return tensor
# Create placeholders to accept the data sent to the model at serving time
  feature_placeholders = { # all features arrive as a batch of strings, shape = (batch_size,), since the arrays are passed to online ml-engine prediction as CSV strings
feature: tf.placeholder(dtype = tf.string, shape = [None]) for feature in UNLABELED_CSV_COLUMNS
}
# Create feature tensors
features = {key: fix_shape_and_type_for_serving(placeholder = tensor) for key, tensor in feature_placeholders.items()}
# Fix dynamic shape ambiguity of feature tensors for our DNN
features = {key: get_shape_and_set_modified_shape_2D(tensor = tensor, additional_dimension_sizes = [sequence_length]) for key, tensor in features.items()}
return tf.estimator.export.ServingInputReceiver(features = features, receiver_tensors = feature_placeholders)
# Create estimator to train and evaluate
def train_and_evaluate(args):
# Create our custom estimator using our model function
estimator = tf.estimator.Estimator(
model_fn = pca_anomaly_detection,
model_dir = args["output_dir"],
params = {
"sequence_length": args["sequence_length"],
"learning_rate": args["learning_rate"],
"evaluation_mode": args["evaluation_mode"],
"k_principal_components": args["k_principal_components"],
"number_of_batch_time_anomaly_thresholds": args["number_of_batch_time_anomaly_thresholds"],
"number_of_batch_features_anomaly_thresholds": args["number_of_batch_features_anomaly_thresholds"],
"min_batch_time_anomaly_threshold": args["min_batch_time_anomaly_threshold"],
"max_batch_time_anomaly_threshold": args["max_batch_time_anomaly_threshold"],
"min_batch_features_anomaly_threshold": args["min_batch_features_anomaly_threshold"],
"max_batch_features_anomaly_threshold": args["max_batch_features_anomaly_threshold"],
"time_anomaly_threshold": args["time_anomaly_threshold"],
"features_anomaly_threshold": args["features_anomaly_threshold"],
"eps": args["eps"],
"f_score_beta": args["f_score_beta"]})
if args["evaluation_mode"] == "reconstruction":
estimator.train(
input_fn = read_dataset(
filename = args["train_file_pattern"],
mode = tf.estimator.ModeKeys.EVAL,
batch_size = args["train_batch_size"],
params = args),
steps = None)
else:
if args["evaluation_mode"] == "calculate_error_distribution_statistics":
# Get final mahalanobis statistics over the entire validation_1 dataset
estimator.train(
input_fn = read_dataset(
filename = args["train_file_pattern"],
mode = tf.estimator.ModeKeys.EVAL,
batch_size = args["train_batch_size"],
params = args),
steps = None)
elif args["evaluation_mode"] == "tune_anomaly_thresholds":
      # Tune anomaly thresholds using validation_2 and validation_anomaly datasets
estimator.train(
input_fn = read_dataset(
filename = args["train_file_pattern"],
mode = tf.estimator.ModeKeys.EVAL,
batch_size = args["train_batch_size"],
params = args),
steps = None)
estimator.evaluate(
input_fn = read_dataset(
filename = args["eval_file_pattern"],
mode = tf.estimator.ModeKeys.EVAL,
batch_size = args["eval_batch_size"],
params = args),
steps = None)
  # Export SavedModel with learned error distribution statistics to be used for inference
estimator.export_savedmodel(
export_dir_base = args['output_dir'] + "/export/exporter",
serving_input_receiver_fn = lambda: serving_input_fn(args["sequence_length"]))
return estimator
arguments = {}
# File arguments
arguments["train_file_pattern"] = "data/training_normal_sequences.csv"
arguments["eval_file_pattern"] = "data/validation_normal_1_sequences.csv"
arguments["output_dir"] = "trained_model"
# Sequence shape hyperparameters
arguments["sequence_length"] = sequence_length
# Training parameters
arguments["train_batch_size"] = 32
arguments["eval_batch_size"] = 32
arguments["train_steps"] = 2000
arguments["learning_rate"] = 0.01
arguments["start_delay_secs"] = 60
arguments["throttle_secs"] = 120
# Anomaly detection
arguments["evaluation_mode"] = "reconstruction"
arguments["k_principal_components"] = min(number_of_tags, 3)
arguments["number_of_batch_time_anomaly_thresholds"] = 300
arguments["number_of_batch_features_anomaly_thresholds"] = 300
arguments["min_batch_time_anomaly_threshold"] = 1
arguments["max_batch_time_anomaly_threshold"] = 100
arguments["min_batch_features_anomaly_threshold"] = 1
arguments["max_batch_features_anomaly_threshold"] = 100
arguments["time_anomaly_threshold"] = None
arguments["features_anomaly_threshold"] = None
arguments["eps"] = 10**-12
arguments["f_score_beta"] = 0.05
```
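The threshold-tuning branch above picks the anomaly threshold whose F-beta score is highest (with a small beta, precision is weighted much more heavily than recall). A minimal NumPy sketch of that selection, using made-up confusion-matrix counts for three candidate thresholds:

```python
import numpy as np

# Hypothetical confusion-matrix counts accumulated at 3 candidate thresholds
true_positives = np.array([90.0, 70.0, 40.0])
false_negatives = np.array([10.0, 30.0, 60.0])
false_positives = np.array([50.0, 10.0, 2.0])

precision = true_positives / (true_positives + false_positives)
recall = true_positives / (true_positives + false_negatives)

# Same formula as the model code; beta = 0.05 mirrors arguments["f_score_beta"]
beta = 0.05
f_beta = (1 + beta ** 2) * precision * recall / (beta ** 2 * precision + recall)

# Mirror of tf.gather(anomaly_thresholds, tf.argmax(f_beta_score)):
best_index = int(np.argmax(f_beta))
```

With beta this small, the F-beta score tracks precision almost exactly, so the highest-precision threshold (the third one here) wins.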
## Train reconstruction variables
```
# Train the model
shutil.rmtree(path = arguments["output_dir"], ignore_errors = True) # start fresh each time
estimator = train_and_evaluate(arguments)
```
## Look at PCA variable values
```
estimator.get_variable_names()
arr_training_normal_sequences = np.genfromtxt(fname = "data/training_normal_sequences.csv", delimiter = ';', dtype = str)
print("arr_training_normal_sequences.shape = {}".format(arr_training_normal_sequences.shape))
if number_of_tags == 1:
arr_training_normal_sequences = np.expand_dims(a = arr_training_normal_sequences, axis = -1)
arr_training_normal_sequences_features = np.stack(
arrays = [np.stack(
        arrays = [np.array(arr_training_normal_sequences[example_index, tag_index].split(',')).astype(np.float64)
for tag_index in range(number_of_tags)], axis = 1)
for example_index in range(len(arr_training_normal_sequences))], axis = 0)
print("arr_training_normal_sequences_features.shape = {}".format(arr_training_normal_sequences_features.shape))
```
### Time based
```
X_time = arr_training_normal_sequences_features.reshape(arr_training_normal_sequences_features.shape[0] * arr_training_normal_sequences_features.shape[1], number_of_tags)
X_time.shape
```
#### Count
```
estimator.get_variable_value(name = "pca_variables/pca_time_count_variable")
time_count = X_time.shape[0]
time_count
```
#### Mean
```
estimator.get_variable_value(name = "pca_variables/pca_time_mean_variable")
time_mean = np.mean(X_time, axis = 0)
time_mean
estimator.get_variable_value(name = "pca_variables/pca_time_mean_variable") / time_mean
```
#### Covariance
```
if estimator.get_variable_value(name = "pca_variables/pca_time_covariance_matrix_variable").shape[0] <= 10:
print(estimator.get_variable_value(name = "pca_variables/pca_time_covariance_matrix_variable"))
else:
print(estimator.get_variable_value(name = "pca_variables/pca_time_covariance_matrix_variable").shape)
if arguments["sequence_length"] == 1:
time_cov = np.zeros(shape = [number_of_tags, number_of_tags])
else:
time_cov = np.cov(np.transpose(X_time))
if time_cov.shape[0] <= 10:
print(time_cov)
else:
print(time_cov.shape)
estimator.get_variable_value(name = "pca_variables/pca_time_covariance_matrix_variable") / time_cov
```
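If the element-wise ratio above comes out near, but not exactly, 1, one possible explanation (an assumption, since the TF-side computation is defined elsewhere) is a normalization mismatch: `np.cov` defaults to the unbiased estimate (dividing by N - 1), while a streaming computation may divide by N. A quick sketch of the discrepancy:

```python
import numpy as np

# Made-up data: 50 samples of 3 features
rng = np.random.RandomState(42)
X = rng.randn(50, 3)

cov_unbiased = np.cov(np.transpose(X))               # divides by N - 1
cov_population = np.cov(np.transpose(X), bias=True)  # divides by N

# Every entry of the ratio equals N / (N - 1)
ratio = cov_unbiased / cov_population
```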
#### Eigenvalues
```
if estimator.get_variable_value(name = "pca_variables/pca_time_eigenvalues_variable").shape[0] <= 10:
print(estimator.get_variable_value(name = "pca_variables/pca_time_eigenvalues_variable"))
else:
print(estimator.get_variable_value(name = "pca_variables/pca_time_eigenvalues_variable").shape)
time_eigenvalues, time_eigenvectors = np.linalg.eigh(a = time_cov)
if time_eigenvalues.shape[0] <= 10:
print(time_eigenvalues)
else:
print(time_eigenvalues.shape)
estimator.get_variable_value(name = "pca_variables/pca_time_eigenvalues_variable") / time_eigenvalues
```
#### Eigenvectors
```
if estimator.get_variable_value(name = "pca_variables/pca_time_eigenvectors_variable").shape[0] <= 10:
print(estimator.get_variable_value(name = "pca_variables/pca_time_eigenvectors_variable"))
else:
print(estimator.get_variable_value(name = "pca_variables/pca_time_eigenvectors_variable").shape)
if time_eigenvectors.shape[0] <= 10:
print(time_eigenvectors)
else:
print(time_eigenvectors.shape)
estimator.get_variable_value(name = "pca_variables/pca_time_eigenvectors_variable") / time_eigenvectors
```
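Note that the element-wise eigenvector ratio printed above can legitimately be -1 in some columns: eigenvectors are only defined up to sign, so `np.linalg.eigh` and the TF-side decomposition may pick opposite signs for the same eigenvector. A small illustration with a made-up symmetric matrix:

```python
import numpy as np

cov = np.array([[2.0, 0.5],
                [0.5, 1.0]])
eigenvalues, eigenvectors = np.linalg.eigh(cov)

# Flipping every column's sign gives an equally valid eigendecomposition
flipped = -eigenvectors
assert np.allclose(cov @ eigenvectors, eigenvectors * eigenvalues)
assert np.allclose(cov @ flipped, flipped * eigenvalues)

# The element-wise ratio between the two valid answers is exactly -1
ratio = eigenvectors / flipped
```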
### Features based
```
X_features = np.transpose(arr_training_normal_sequences_features, [0, 2, 1]).reshape(arr_training_normal_sequences_features.shape[0] * number_of_tags, arr_training_normal_sequences_features.shape[1])
X_features.shape
```
#### Count
```
estimator.get_variable_value(name = "pca_variables/pca_features_count_variable")
feat_count = X_features.shape[0]
feat_count
```
#### Mean
```
estimator.get_variable_value(name = "pca_variables/pca_features_mean_variable")
feat_mean = np.mean(X_features, axis = 0)
feat_mean
estimator.get_variable_value(name = "pca_variables/pca_features_mean_variable") / feat_mean
```
#### Covariance
```
if estimator.get_variable_value(name = "pca_variables/pca_features_covariance_matrix_variable").shape[0] <= 10:
print(estimator.get_variable_value(name = "pca_variables/pca_features_covariance_matrix_variable"))
else:
print(estimator.get_variable_value(name = "pca_variables/pca_features_covariance_matrix_variable").shape)
if number_of_tags == 1:
feat_cov = np.zeros(shape = [arguments["sequence_length"], arguments["sequence_length"]])
else:
feat_cov = np.cov(np.transpose(X_features))
if feat_cov.shape[0] <= 10:
print(feat_cov)
else:
print(feat_cov.shape)
estimator.get_variable_value(name = "pca_variables/pca_features_covariance_matrix_variable") / feat_cov
```
#### Eigenvalues
```
if estimator.get_variable_value(name = "pca_variables/pca_features_eigenvalues_variable").shape[0] <= 10:
print(estimator.get_variable_value(name = "pca_variables/pca_features_eigenvalues_variable"))
else:
print(estimator.get_variable_value(name = "pca_variables/pca_features_eigenvalues_variable").shape)
feat_eigenvalues, feat_eigenvectors = np.linalg.eigh(a = feat_cov)
if feat_eigenvalues.shape[0] <= 10:
print(feat_eigenvalues)
else:
print(feat_eigenvalues.shape)
estimator.get_variable_value(name = "pca_variables/pca_features_eigenvalues_variable") / feat_eigenvalues
```
#### Eigenvectors
```
if estimator.get_variable_value(name = "pca_variables/pca_features_eigenvectors_variable").shape[0] <= 10:
print(estimator.get_variable_value(name = "pca_variables/pca_features_eigenvectors_variable"))
else:
print(estimator.get_variable_value(name = "pca_variables/pca_features_eigenvectors_variable").shape)
if feat_eigenvectors.shape[0] <= 10:
print(feat_eigenvectors)
else:
print(feat_eigenvectors.shape)
estimator.get_variable_value(name = "pca_variables/pca_features_eigenvectors_variable") / feat_eigenvectors
```
## Train error distribution statistics variables
```
arguments["evaluation_mode"] = "calculate_error_distribution_statistics"
arguments["train_file_pattern"] = "data/validation_normal_1_sequences.csv"
arguments["eval_file_pattern"] = "data/validation_normal_1_sequences.csv"
arguments["train_batch_size"] = 32
arguments["eval_batch_size"] = 32
estimator = train_and_evaluate(arguments)
```
## Look at variable values
```
estimator.get_variable_names()
arr_validation_normal_1_sequences = np.genfromtxt(fname = "data/validation_normal_1_sequences.csv", delimiter = ';', dtype = str)
print("arr_validation_normal_1_sequences.shape = {}".format(arr_validation_normal_1_sequences.shape))
if number_of_tags == 1:
arr_validation_normal_1_sequences = np.expand_dims(a = arr_validation_normal_1_sequences, axis = -1)
arr_validation_normal_1_sequences_features = np.stack(
arrays = [np.stack(
        arrays = [np.array(arr_validation_normal_1_sequences[example_index, tag_index].split(',')).astype(np.float64)
for tag_index in range(number_of_tags)], axis = 1)
for example_index in range(len(arr_validation_normal_1_sequences))], axis = 0)
dict_validation_normal_1_sequences_features = {tag: arr_validation_normal_1_sequences_features[:, :, index] for index, tag in enumerate(UNLABELED_CSV_COLUMNS)}
validation_normal_1_predictions_list = [prediction for prediction in estimator.predict(
input_fn = tf.estimator.inputs.numpy_input_fn(
x = dict_validation_normal_1_sequences_features,
y = None,
batch_size = 32,
num_epochs = 1,
shuffle = False,
queue_capacity = 1000))]
```
### Time based
```
validation_normal_1_time_absolute_error = np.stack(arrays = [prediction["X_time_abs_reconstruction_error"] for prediction in validation_normal_1_predictions_list], axis = 0)
time_abs_err = validation_normal_1_time_absolute_error.reshape(validation_normal_1_time_absolute_error.shape[0] * validation_normal_1_time_absolute_error.shape[1], number_of_tags)
time_abs_err.shape
```
#### Count
```
estimator.get_variable_value(name = "mahalanobis_distance_variables/absolute_error_count_batch_time_variable")
time_count = time_abs_err.shape[0]
time_count
```
#### Mean
```
estimator.get_variable_value(name = "mahalanobis_distance_variables/absolute_error_mean_batch_time_variable")
time_mean = np.mean(time_abs_err, axis = 0)
time_mean
estimator.get_variable_value(name = "mahalanobis_distance_variables/absolute_error_mean_batch_time_variable") / time_mean
```
#### Covariance
```
if estimator.get_variable_value(name = "mahalanobis_distance_variables/absolute_error_covariance_matrix_batch_time_variable").shape[0] <= 10:
print(estimator.get_variable_value(name = "mahalanobis_distance_variables/absolute_error_covariance_matrix_batch_time_variable"))
else:
print(estimator.get_variable_value(name = "mahalanobis_distance_variables/absolute_error_covariance_matrix_batch_time_variable").shape)
if arguments["sequence_length"] == 1:
time_cov = np.zeros(shape = [number_of_tags, number_of_tags])
else:
time_cov = np.cov(np.transpose(time_abs_err))
if time_cov.shape[0] <= 10:
print(time_cov)
else:
print(time_cov.shape)
estimator.get_variable_value(name = "mahalanobis_distance_variables/absolute_error_covariance_matrix_batch_time_variable") / time_cov
```
#### Inverse Covariance
```
if estimator.get_variable_value(name = "mahalanobis_distance_variables/absolute_error_inverse_covariance_matrix_batch_time_variable").shape[0] <= 10:
print(estimator.get_variable_value(name = "mahalanobis_distance_variables/absolute_error_inverse_covariance_matrix_batch_time_variable"))
else:
print(estimator.get_variable_value(name = "mahalanobis_distance_variables/absolute_error_inverse_covariance_matrix_batch_time_variable").shape)
time_inv = np.linalg.inv(time_cov + np.eye(number_of_tags) * arguments["eps"])
if time_inv.shape[0] <= 10:
print(time_inv)
else:
print(time_inv.shape)
estimator.get_variable_value(name = "mahalanobis_distance_variables/absolute_error_inverse_covariance_matrix_batch_time_variable") / time_inv
```
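The stored inverse covariance feeds the Mahalanobis distances that appear in the model's predictions. As a sanity check on what that computation involves, here is a minimal NumPy sketch with small hypothetical arrays (the values and the `1e-6` regularizer are illustrative, not the notebook's):

```python
import numpy as np

# Hypothetical reconstruction errors: 5 examples x 3 tags (illustrative values)
err = np.array([[0.1, 0.2, 0.0],
                [0.0, 0.1, 0.1],
                [0.2, 0.0, 0.3],
                [0.1, 0.1, 0.1],
                [0.3, 0.2, 0.2]])

mean = np.mean(err, axis=0)                   # per-tag mean error
cov = np.cov(np.transpose(err))               # 3x3 covariance of the errors
inv = np.linalg.inv(cov + np.eye(3) * 1e-6)   # regularized inverse, as above

# Squared Mahalanobis distance of each example's error from the mean error
centered = err - mean
dist_sq = np.einsum("ij,jk,ik->i", centered, inv, centered)
print(dist_sq.shape)  # (5,)
```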
### Features based
```
validation_normal_1_features_absolute_error = np.stack(arrays = [prediction["X_features_abs_reconstruction_error"] for prediction in validation_normal_1_predictions_list], axis = 0)
feat_abs_err = np.transpose(validation_normal_1_features_absolute_error, [0, 2, 1]).reshape(validation_normal_1_features_absolute_error.shape[0] * number_of_tags, validation_normal_1_features_absolute_error.shape[1])
feat_abs_err.shape
```
#### Count
```
estimator.get_variable_value(name = "mahalanobis_distance_variables/absolute_error_count_batch_features_variable")
feat_count = feat_abs_err.shape[0]
feat_count
```
#### Mean
```
estimator.get_variable_value(name = "mahalanobis_distance_variables/absolute_error_mean_batch_features_variable")
feat_mean = np.mean(feat_abs_err, axis = 0)
feat_mean
estimator.get_variable_value(name = "mahalanobis_distance_variables/absolute_error_mean_batch_features_variable") / feat_mean
```
#### Covariance
```
if estimator.get_variable_value(name = "mahalanobis_distance_variables/absolute_error_covariance_matrix_batch_features_variable").shape[0] <= 10:
print(estimator.get_variable_value(name = "mahalanobis_distance_variables/absolute_error_covariance_matrix_batch_features_variable"))
else:
print(estimator.get_variable_value(name = "mahalanobis_distance_variables/absolute_error_covariance_matrix_batch_features_variable").shape)
if number_of_tags == 1:
feat_cov = np.zeros(shape = [arguments["sequence_length"], arguments["sequence_length"]])
else:
feat_cov = np.cov(np.transpose(feat_abs_err))
if feat_cov.shape[0] <= 10:
print(feat_cov)
else:
print(feat_cov.shape)
estimator.get_variable_value(name = "mahalanobis_distance_variables/absolute_error_covariance_matrix_batch_features_variable") / feat_cov
```
#### Inverse Covariance
```
if estimator.get_variable_value(name = "mahalanobis_distance_variables/absolute_error_inverse_covariance_matrix_batch_features_variable").shape[0] <= 10:
print(estimator.get_variable_value(name = "mahalanobis_distance_variables/absolute_error_inverse_covariance_matrix_batch_features_variable"))
else:
print(estimator.get_variable_value(name = "mahalanobis_distance_variables/absolute_error_inverse_covariance_matrix_batch_features_variable").shape)
feat_inv = np.linalg.inv(feat_cov + np.eye(arguments["sequence_length"]) * arguments["eps"])
if feat_inv.shape[0] <= 10:
print(feat_inv)
else:
print(feat_inv.shape)
estimator.get_variable_value(name = "mahalanobis_distance_variables/absolute_error_inverse_covariance_matrix_batch_features_variable") / feat_inv
```
## Tune anomaly thresholds
```
arguments["evaluation_mode"] = "tune_anomaly_thresholds"
arguments["train_file_pattern"] = "data/labeled_validation_mixed_sequences.csv"
arguments["eval_file_pattern"] = "data/labeled_validation_mixed_sequences.csv"
arguments["train_batch_size"] = 64
arguments["eval_batch_size"] = 64
estimator = train_and_evaluate(arguments)
```
#### Time based
```
estimator.get_variable_value(name = "mahalanobis_distance_threshold_variables/true_positives_at_thresholds_time_variable")
estimator.get_variable_value(name = "mahalanobis_distance_threshold_variables/false_negatives_at_thresholds_time_variable")
estimator.get_variable_value(name = "mahalanobis_distance_threshold_variables/false_positives_at_thresholds_time_variable")
estimator.get_variable_value(name = "mahalanobis_distance_threshold_variables/true_negatives_at_thresholds_time_variable")
estimator.get_variable_value(name = "mahalanobis_distance_threshold_variables/time_anomaly_threshold_variable")
```
#### Features based
```
estimator.get_variable_value(name = "mahalanobis_distance_threshold_variables/true_positives_at_thresholds_features_variable")
estimator.get_variable_value(name = "mahalanobis_distance_threshold_variables/false_negatives_at_thresholds_features_variable")
estimator.get_variable_value(name = "mahalanobis_distance_threshold_variables/false_positives_at_thresholds_features_variable")
estimator.get_variable_value(name = "mahalanobis_distance_threshold_variables/true_negatives_at_thresholds_features_variable")
estimator.get_variable_value(name = "mahalanobis_distance_threshold_variables/features_anomaly_threshold_variable")
```
#### Numpy
```
arr_validation_mixed_sequences = np.genfromtxt(fname = "data/labeled_validation_mixed_sequences.csv", delimiter = ';', dtype = str)
print("arr_validation_mixed_sequences.shape = {}".format(arr_validation_mixed_sequences.shape))
arr_validation_mixed_sequences_features = np.stack(
arrays = [np.stack(
arrays = [np.array(arr_validation_mixed_sequences[example_index, tag_index].split(',')).astype(np.float)
for tag_index in range(number_of_tags)], axis = 1)
for example_index in range(len(arr_validation_mixed_sequences))], axis = 0)
print("arr_validation_mixed_sequences_features.shape = {}".format(arr_validation_mixed_sequences_features.shape))
dict_validation_mixed_sequences_features = {tag: arr_validation_mixed_sequences_features[:, :, index] for index, tag in enumerate(UNLABELED_CSV_COLUMNS)}
validation_mixed_predictions_list = [prediction for prediction in estimator.predict(
input_fn = tf.estimator.inputs.numpy_input_fn(
x = dict_validation_mixed_sequences_features,
y = None,
batch_size = 128,
num_epochs = 1,
shuffle = False,
queue_capacity = 1000))]
arr_validation_mixed_sequences_labels = arr_validation_mixed_sequences[:, -1].astype(np.float64)
print("arr_validation_mixed_sequences_labels.shape = {}".format(arr_validation_mixed_sequences_labels.shape))
labels_normal_mask = arr_validation_mixed_sequences_labels.astype(np.float64) == 0
labels_anomalous_mask = arr_validation_mixed_sequences_labels.astype(np.float64) == 1
```
### Time based
```
arr_validation_mixed_predictions_mahalanobis_distance_batch_time = np.stack(arrays = [prediction["mahalanobis_distance_batch_time"]
for prediction in validation_mixed_predictions_list], axis = 0)
print("arr_validation_mixed_predictions_mahalanobis_distance_batch_time.shape = {}".format(arr_validation_mixed_predictions_mahalanobis_distance_batch_time.shape))
min_normal_mahalanobis_distance_batch_time = np.min(np.max(arr_validation_mixed_predictions_mahalanobis_distance_batch_time[labels_normal_mask, :], axis = -1))
max_normal_mahalanobis_distance_batch_time = np.max(np.max(arr_validation_mixed_predictions_mahalanobis_distance_batch_time[labels_normal_mask, :], axis = -1))
print("min_normal_mahalanobis_distance_batch_time = {} & max_normal_mahalanobis_distance_batch_time = {}".format(min_normal_mahalanobis_distance_batch_time, max_normal_mahalanobis_distance_batch_time))
min_anomalous_mahalanobis_distance_batch_time = np.min(np.max(arr_validation_mixed_predictions_mahalanobis_distance_batch_time[labels_anomalous_mask, :], axis = -1))
max_anomalous_mahalanobis_distance_batch_time = np.max(np.max(arr_validation_mixed_predictions_mahalanobis_distance_batch_time[labels_anomalous_mask, :], axis = -1))
print("min_anomalous_mahalanobis_distance_batch_time = {} & max_anomalous_mahalanobis_distance_batch_time = {}".format(min_anomalous_mahalanobis_distance_batch_time, max_anomalous_mahalanobis_distance_batch_time))
number_of_batch_time_anomaly_thresholds = arguments["number_of_batch_time_anomaly_thresholds"];
batch_time_anomaly_thresholds = np.linspace(start = arguments["min_batch_time_anomaly_threshold"],
stop = arguments["max_batch_time_anomaly_threshold"],
num = number_of_batch_time_anomaly_thresholds)
print("batch_time_anomaly_thresholds.shape = {}".format(batch_time_anomaly_thresholds.shape))
arr_validation_mixed_predictions_mahalanobis_distance_batch_time_anomalies_multi_thresholds = np.stack(arrays = [arr_validation_mixed_predictions_mahalanobis_distance_batch_time > batch_time_anomaly_threshold
for batch_time_anomaly_threshold in batch_time_anomaly_thresholds], axis = -1)
print("arr_validation_mixed_predictions_mahalanobis_distance_batch_time_anomalies_multi_thresholds.shape = {}".format(arr_validation_mixed_predictions_mahalanobis_distance_batch_time_anomalies_multi_thresholds.shape))
arr_validation_mixed_predictions_mahalanobis_distance_batch_time_anomalies_multi_thresholds = np.any(a = arr_validation_mixed_predictions_mahalanobis_distance_batch_time_anomalies_multi_thresholds, axis = 1)
print("arr_validation_mixed_predictions_mahalanobis_distance_batch_time_anomalies_multi_thresholds.shape = {}".format(arr_validation_mixed_predictions_mahalanobis_distance_batch_time_anomalies_multi_thresholds.shape))
predicted_normals = arr_validation_mixed_predictions_mahalanobis_distance_batch_time_anomalies_multi_thresholds == 0
print("predicted_normals.shape = {}".format(predicted_normals.shape))
predicted_anomalies = arr_validation_mixed_predictions_mahalanobis_distance_batch_time_anomalies_multi_thresholds == 1
print("predicted_anomalies.shape = {}".format(predicted_anomalies.shape))
true_positives = np.sum(a = np.stack(arrays = [np.logical_and(labels_anomalous_mask, predicted_anomalies[:, threshold])
for threshold in range(number_of_batch_time_anomaly_thresholds)],
axis = -1),
axis = 0)
false_negatives = np.sum(a = np.stack(arrays = [np.logical_and(labels_anomalous_mask, predicted_normals[:, threshold])
for threshold in range(number_of_batch_time_anomaly_thresholds)],
axis = -1),
axis = 0)
false_positives = np.sum(a = np.stack(arrays = [np.logical_and(labels_normal_mask, predicted_anomalies[:, threshold])
for threshold in range(number_of_batch_time_anomaly_thresholds)],
axis = -1),
axis = 0)
true_negatives = np.sum(a = np.stack(arrays = [np.logical_and(labels_normal_mask, predicted_normals[:, threshold])
for threshold in range(number_of_batch_time_anomaly_thresholds)],
axis = -1),
axis = 0)
print("true_positives.shape = {}, false_negatives.shape = {}, false_positives.shape = {}, true_negatives.shape = {}".format(true_positives.shape, false_negatives.shape, false_positives.shape, true_negatives.shape))
print("true_positives = \n{}".format(true_positives))
print("false_negatives = \n{}".format(false_negatives))
print("false_positives = \n{}".format(false_positives))
print("true_negatives = \n{}".format(true_negatives))
print("true_positives + false_negatives + false_positives + true_negatives = \n{}".format(true_positives + false_negatives + false_positives + true_negatives))
accuracy = (true_positives + true_negatives) / (true_positives + false_negatives + false_positives + true_negatives)
accuracy
precision = true_positives / (true_positives + false_positives)
precision
recall = true_positives / (true_positives + false_negatives)
recall
f_beta_score = (1 + arguments["f_score_beta"] ** 2) * (precision * recall) / (arguments["f_score_beta"] ** 2 * precision + recall)
f_beta_score
batch_time_anomaly_thresholds[np.argmax(f_beta_score)]
```
### Features based
```
arr_validation_mixed_predictions_mahalanobis_distance_batch_features = np.stack(arrays = [prediction["mahalanobis_distance_batch_features"]
for prediction in validation_mixed_predictions_list], axis = 0)
print("arr_validation_mixed_predictions_mahalanobis_distance_batch_features.shape = {}".format(arr_validation_mixed_predictions_mahalanobis_distance_batch_features.shape))
min_normal_mahalanobis_distance_batch_features = np.min(np.max(arr_validation_mixed_predictions_mahalanobis_distance_batch_features[labels_normal_mask, :], axis = -1))
max_normal_mahalanobis_distance_batch_features = np.max(np.max(arr_validation_mixed_predictions_mahalanobis_distance_batch_features[labels_normal_mask, :], axis = -1))
print("min_normal_mahalanobis_distance_batch_features = {} & max_normal_mahalanobis_distance_batch_features = {}".format(min_normal_mahalanobis_distance_batch_features, max_normal_mahalanobis_distance_batch_features))
min_anomalous_mahalanobis_distance_batch_features = np.min(np.max(arr_validation_mixed_predictions_mahalanobis_distance_batch_features[labels_anomalous_mask, :], axis = -1))
max_anomalous_mahalanobis_distance_batch_features = np.max(np.max(arr_validation_mixed_predictions_mahalanobis_distance_batch_features[labels_anomalous_mask, :], axis = -1))
print("min_anomalous_mahalanobis_distance_batch_features = {} & max_anomalous_mahalanobis_distance_batch_features = {}".format(min_anomalous_mahalanobis_distance_batch_features, max_anomalous_mahalanobis_distance_batch_features))
number_of_batch_features_anomaly_thresholds = arguments["number_of_batch_features_anomaly_thresholds"];
batch_features_anomaly_thresholds = np.linspace(start = arguments["min_batch_features_anomaly_threshold"],
stop = arguments["max_batch_features_anomaly_threshold"],
num = number_of_batch_features_anomaly_thresholds)
print("batch_features_anomaly_thresholds.shape = {}".format(batch_features_anomaly_thresholds.shape))
arr_validation_mixed_predictions_mahalanobis_distance_batch_features_anomalies_multi_thresholds = np.stack(arrays = [arr_validation_mixed_predictions_mahalanobis_distance_batch_features > batch_features_anomaly_threshold
for batch_features_anomaly_threshold in batch_features_anomaly_thresholds], axis = -1)
print("arr_validation_mixed_predictions_mahalanobis_distance_batch_features_anomalies_multi_thresholds.shape = {}".format(arr_validation_mixed_predictions_mahalanobis_distance_batch_features_anomalies_multi_thresholds.shape))
arr_validation_mixed_predictions_mahalanobis_distance_batch_features_anomalies_multi_thresholds = np.any(a = arr_validation_mixed_predictions_mahalanobis_distance_batch_features_anomalies_multi_thresholds, axis = 1)
print("arr_validation_mixed_predictions_mahalanobis_distance_batch_features_anomalies_multi_thresholds.shape = {}".format(arr_validation_mixed_predictions_mahalanobis_distance_batch_features_anomalies_multi_thresholds.shape))
predicted_normals = arr_validation_mixed_predictions_mahalanobis_distance_batch_features_anomalies_multi_thresholds == 0
print("predicted_normals.shape = {}".format(predicted_normals.shape))
predicted_anomalies = arr_validation_mixed_predictions_mahalanobis_distance_batch_features_anomalies_multi_thresholds == 1
print("predicted_anomalies.shape = {}".format(predicted_anomalies.shape))
true_positives = np.sum(a = np.stack(arrays = [np.logical_and(labels_anomalous_mask, predicted_anomalies[:, threshold])
for threshold in range(number_of_batch_features_anomaly_thresholds)],
axis = -1),
axis = 0)
false_negatives = np.sum(a = np.stack(arrays = [np.logical_and(labels_anomalous_mask, predicted_normals[:, threshold])
for threshold in range(number_of_batch_features_anomaly_thresholds)],
axis = -1),
axis = 0)
false_positives = np.sum(a = np.stack(arrays = [np.logical_and(labels_normal_mask, predicted_anomalies[:, threshold])
for threshold in range(number_of_batch_features_anomaly_thresholds)],
axis = -1),
axis = 0)
true_negatives = np.sum(a = np.stack(arrays = [np.logical_and(labels_normal_mask, predicted_normals[:, threshold])
for threshold in range(number_of_batch_features_anomaly_thresholds)],
axis = -1),
axis = 0)
print("true_positives.shape = {}, false_negatives.shape = {}, false_positives.shape = {}, true_negatives.shape = {}".format(true_positives.shape, false_negatives.shape, false_positives.shape, true_negatives.shape))
print("true_positives = \n{}".format(true_positives))
print("false_negatives = \n{}".format(false_negatives))
print("false_positives = \n{}".format(false_positives))
print("true_negatives = \n{}".format(true_negatives))
print("true_positives + false_negatives + false_positives + true_negatives = \n{}".format(true_positives + false_negatives + false_positives + true_negatives))
accuracy = (true_positives + true_negatives) / (true_positives + false_negatives + false_positives + true_negatives)
accuracy
precision = true_positives / (true_positives + false_positives)
precision
recall = true_positives / (true_positives + false_negatives)
recall
f_beta_score = (1 + arguments["f_score_beta"] ** 2) * (precision * recall) / (arguments["f_score_beta"] ** 2 * precision + recall)
f_beta_score
batch_features_anomaly_thresholds[np.argmax(f_beta_score)]
```
# Local Prediction
```
arr_labeled_test_mixed_sequences = np.genfromtxt(fname = "data/labeled_test_mixed_sequences.csv", delimiter = ';', dtype = str)
arr_labeled_test_mixed_sequences_features = np.stack(
arrays = [np.stack(
arrays = [np.array(arr_labeled_test_mixed_sequences[example_index, tag_index].split(',')).astype(np.float)
for tag_index in range(number_of_tags)], axis = 1)
for example_index in range(len(arr_labeled_test_mixed_sequences))], axis = 0)
dict_labeled_test_mixed_sequences_features = {tag: arr_labeled_test_mixed_sequences_features[:, :, index] for index, tag in enumerate(UNLABELED_CSV_COLUMNS)}
arr_test_labels = arr_labeled_test_mixed_sequences[:, -1]
predictions_list = [prediction for prediction in estimator.predict(
input_fn = tf.estimator.inputs.numpy_input_fn(
x = dict_labeled_test_mixed_sequences_features,
y = None,
batch_size = 128,
num_epochs = 1,
shuffle = False,
queue_capacity = 1000))]
arr_test_labels[0:10]
```
### Normal example
```
normal_test_example_index = np.argmax(arr_test_labels == '0')
predictions_list[normal_test_example_index]
flatui = ["#9b59b6", "#3498db", "#95a5a6", "#e74c3c", "#34495e", "#2ecc71"]
for i, arr in enumerate(np.split(ary = predictions_list[normal_test_example_index]["X_time_abs_reconstruction_error"].flatten(), indices_or_sections = len(UNLABELED_CSV_COLUMNS), axis = 0)):
sns.tsplot(arr, color = flatui[i%len(flatui)] )
flatui = ["#9b59b6", "#3498db", "#95a5a6", "#e74c3c", "#34495e", "#2ecc71"]
for i, arr in enumerate(np.split(ary = predictions_list[normal_test_example_index]["X_features_abs_reconstruction_error"].flatten(), indices_or_sections = len(UNLABELED_CSV_COLUMNS), axis = 0)):
sns.tsplot(arr, color = flatui[i%len(flatui)] )
```
### Anomalous Example
```
anomalous_test_example_index = np.argmax(arr_test_labels == '1')
predictions_list[anomalous_test_example_index]
flatui = ["#9b59b6", "#3498db", "#95a5a6", "#e74c3c", "#34495e", "#2ecc71"]
for i, arr in enumerate(np.split(ary = predictions_list[anomalous_test_example_index]["X_time_abs_reconstruction_error"].flatten(), indices_or_sections = len(UNLABELED_CSV_COLUMNS), axis = 0)):
sns.tsplot(arr, color = flatui[i%len(flatui)] )
flatui = ["#9b59b6", "#3498db", "#95a5a6", "#e74c3c", "#34495e", "#2ecc71"]
for i, arr in enumerate(np.split(ary = predictions_list[anomalous_test_example_index]["X_features_abs_reconstruction_error"].flatten(), indices_or_sections = len(UNLABELED_CSV_COLUMNS), axis = 0)):
sns.tsplot(arr, color = flatui[i%len(flatui)] )
```
# Intro
$\DeclareMathOperator*{\argmin}{argmin}
\newcommand{\ffrac}{\displaystyle \frac}
\newcommand{\Tran}[1]{{#1}^{\mathrm{T}}}
\newcommand{\d}[1]{\displaystyle{#1}}
\newcommand{\EE}[2][\,\!]{\mathbb{E}_{#1}\left[#2\right]}
\newcommand{\dd}{\mathrm{d}}
\newcommand{\Var}[2][\,\!]{\mathrm{Var}_{#1}\left[#2\right]}
\newcommand{\using}[1]{\stackrel{\mathrm{#1}}{=}}
\newcommand{\I}[1]{\mathrm{I}\left( #1 \right)}
\newcommand{\N}[1]{\mathrm{N} \left( #1 \right)}
\newcommand{\space}{\text{ }}
\newcommand{\QQQ}{\boxed{?\:}}$
# Sample Space and Events
Consider an experiment whose outcome is not predictable in advance; only the set of all possible outcomes is known. We call this set the ***sample space***, $S$, of the experiment.
And an ***Event*** is any subset of this sample space, denoted as $E$. The event $E$ **occurs** when the outcome lies in $E$. And following that we can define the Union, Intersection, and the ***null event***, $\varnothing$. We call two sets are ***mutually exclusive*** when their intersection is a null event.
Finally we define the ***complement***. For any event $E$ we define the new event $E^c$, referred to as the complement of $E$, to *consist of all outcomes in the sample space $S$ that are not in $E$*.
# Probability Defined on Events
Consider an experiment with sample space $S$. For each event $E$, we assume that a **number** $P(E)$ is defined and satisfies the following conditions:
- $0 \leq P(E) \leq 1$
- $P(S) = 1$
- For **any sequence** of events $E_1, E_2, \dots$ that are mutually exclusive, that is events for which $E_nE_m = E_n \cap E_m = \varnothing$ when $n \neq m$, then
$\d{P\left(\bigcup_{n=1}^{\infty}E_n\right) = \sum_{n=1}^{\infty} P(E_n)} $
$\QQQ$Can this **any sequence** be limited?
We refer to $P(E)$ as the ***probability*** of the event $E$.
$Remark$
>These probabilities have a nice intuitive property, that if our experiment is repeated over and over again $wp1$, the proportion of time that event $E$ occurs will just be $P(E)$.
Other conclusions
- $1 = P(S) = P(E \cup E^c) = P(E) + P(E^c)$ or $P(E^c) = 1 - P(E)$
- $P(\varnothing) = 1 - P(S) = 0$
- $P(E) + P(F) = P(E \cup F) + P(EF)$, and when they're mutually exclusive, we have $P(E) + P(F) = P(E \cup F)$
- $(EG \cup FG ) = \cdots = (E \cup F)G $
- $P(E \cup F \cup G) = P((E\cup F)\cup G) = P(E \cup F) + P(G) - P((E\cup F)G)\\
\Rightarrow P(E \cup F \cup G) = P(E) + P(F) + P(G) - P(EF) - P(EG) - P(FG) + P(EFG)$
- and by **induction** that for any $\mathbf{n}$ events $E_1,E_2, \dots, E_n$, we have:$\\[0.5em]$
$$\begin{align}
P(E_1 \cup E_2 \cup \cdots \cup E_n) &= \sum_{i}P(E_i) - \sum_{i < j}P(E_iE_j) + \sum_{i<j<k}P(E_iE_jE_k) \\
&\;\;\; - \sum_{i<j<k<l}P(E_iE_jE_kE_l) + \dots \\
&\;\;\; + (-1)^{n+1} P(E_1E_2E_3\cdots E_n)
\end{align}$$
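This inclusion–exclusion identity can be verified by brute force on a small finite sample space; a quick Python sketch (the sample space and the three events here are made up for illustration):

```python
from itertools import combinations

S = set(range(6))                        # sample space: 6 equally likely outcomes
events = [{0, 1, 2}, {1, 3}, {2, 3, 4}]  # hypothetical events E1, E2, E3
P = lambda A: len(A) / len(S)            # uniform probability

# Right-hand side: alternating sum over all non-empty subsets of the events
rhs = 0.0
for r in range(1, len(events) + 1):
    for combo in combinations(events, r):
        rhs += (-1) ** (r + 1) * P(set.intersection(*combo))

lhs = P(set.union(*events))              # P(E1 ∪ E2 ∪ E3)
print(lhs, rhs)                          # both equal 5/6 here
```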
# Conditional Probabilities
***Conditional Probability*** for event $E$ given that event $F$ has occurred is denoted by $P(E\mid F)$, and the formula for this is
$$P(E \mid F) = \ffrac{P(EF)} {P(F)}$$
$Remark$
This is well defined when $P(F)>0$, hence $P(E \mid F)$ is only defined when $P(F)>0$.
# Independent Events
Events $E$ and $F$ are ***independent*** if $P(EF) = P(E)P(F)$; otherwise they are ***dependent***. A direct consequence (when $P(F) > 0$) is $P(E \mid F) = P(E)$; since the two conditions are then equivalent, verifying either one establishes independence.
An extended definition: $n$ events $E_1, \dots, E_n$ are independent if for every subset $E_1', \dots, E_r'$, with $r \leq n$, of these events, we have $P(E_1' \cdots E_r') = P(E_1') \cdots P(E_r')$.
$Remark$
***Pairwise independence*** CANNOT ensure independence. To distinguish the two, we call the stronger notion ***joint independence***.
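A standard counterexample: with two fair coin tosses, the events "first toss is heads", "second toss is heads", and "both tosses agree" are pairwise independent yet not jointly independent. A quick Python check (all probabilities are exact binary fractions, so equality tests are safe):

```python
from itertools import product

# Sample space: two fair coin tosses, all 4 outcomes equally likely
S = list(product("HT", repeat=2))
P = lambda A: len(A) / len(S)

A = [w for w in S if w[0] == "H"]    # first toss heads
B = [w for w in S if w[1] == "H"]    # second toss heads
C = [w for w in S if w[0] == w[1]]   # both tosses agree

inter = lambda X, Y: [w for w in X if w in Y]

# Each pair factors: pairwise independent
assert P(inter(A, B)) == P(A) * P(B)
assert P(inter(A, C)) == P(A) * P(C)
assert P(inter(B, C)) == P(B) * P(C)

# But the triple intersection does not factor: not jointly independent
assert P(inter(inter(A, B), C)) != P(A) * P(B) * P(C)   # 1/4 != 1/8
```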
**e.g.**
$r$ players each initially having $n_i$ units, $n_i > 0$, $i = 1, \dots, r$. For each stage two players are chosen to play, with the winner win $1$ unit from the loser. Any player whose fortune drops to $0$ is eliminated and game continues until one single player has all $n \equiv \sum n_i$ units. Assuming that the results of successive games are independent and that each game is equally likely to be won by either of its two players. Find the probability that player $i$ is the victor!
> First suppose there are $n$ players in total, each initially having only $1$ unit, with everything else as before.
>
>Now because this is the same for all players, it follows that each player has the same chance of being the victor. Consequently, each player has probability $1/n$ of being the victor.
>
>Now suppose these players are divided into $r$ teams, with team $i$ containing $n_i$ players. Then it's easy to see that the probability that the victor is a member of team $i$ is $n_i/n$. And this is the same as the original problem.
***
Suppose that a sequence of experiments, each of which results in either a $1$ or a $0$, is to be performed. Let $E_i$, $i \geq 1$ denote the event that the $i\text{-th}$ experiment results in a success. We call this sequence of experiments consists of **independent trials**, if for all $i_1, i_2, \dots, i_n$, this holds
$\;\;\;\;\d{P(E_{i_1} E_{i_2} \cdots E_{i_n}) = \prod_{j=1}^{n}P(E_{i_j})}$
# Bayes' Formula
Let $E$ and $F$ be events. We may express $E$ as $E = EF \cup EF^c$ because in order for a point to be in $E$, it must either in both $E$ and $F$ or it must be in $E$ and not in $F$.
And since $EF$ and $EF^c$ are mutually exclusive, we have that
$$\begin{align}
P(E) &= P(EF) + P(EF^c) \\
&= P(E \mid F) P(F) + P(E\mid F^c) P(F^c) \\
&= P(E \mid F) P(F) + P(E\mid F^c) (1-P(F))
\end{align}$$
which implies that $P(E)$ can be seen as a **weighted average** of
- the conditional probability of $E$ given that $F$ has occurred
- the conditional probability of $E$ given that $F$ has not occurred.
and each is given as much weight as the probability of the event on which it is conditioned.
**e.g.**
First toss with results $H$ or $T$, each with probability $1/2$. Then for result $H$, get $W$ with probability $2/9$ and $B$ with probability $7/9$ and for result $T$, get $W$ with probability $5/11$ and $B$ with probability $6/11$. Now given $W$, what's the conditional probability of $H$?
>$\d{P(H\mid W) = \frac{P(HW)} {P(W)} = \frac{P(HW)} {P(W\mid H)P(H) + P(W \mid T)P(T)}} = \frac{\frac{2} {9} \frac{1} {2}} {\frac{2} {9} \frac{1} {2} + \frac{5} {11}\frac{1} {2}} = \frac{22} {67}$
***
Here's the generalized method. Suppose that $F_1, F_2, \dots, F_n$ are mutually exclusive events such that $\bigcup_{i=1}^{n}F_i = S$. In other words, exactly one of the events $F_1, F_2, \dots, F_n$ will occur. Then we write:
$$E = \bigcup_{i=1}^{n}EF_i$$
and using the fact that events $EF_i$ for $i = 1, 2, \dots, n$ are also mutually exclusive, we obtain that:
$$
\begin{align}
P(E) &= \sum_{i=1}^{n} P(EF_i) \\
&= \sum_{i=1}^{n} P(E \mid F_i) P(F_i)
\end{align}
$$
Then we have the formula, the ***Bayes' formula***:
$$\boxed{
P(F_j\mid E) = \ffrac{P(EF_j)} {P(E)} = \frac{P(E \mid F_j)P(F_j)} {\sum_{i=1}^{n} P(E \mid F_i) P(F_i)}}
$$
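As a numeric check, Bayes' formula reproduces the answer of the coin example above (exact fractions avoid any rounding):

```python
from fractions import Fraction

# Priors and likelihoods from the coin example above
P_H, P_T = Fraction(1, 2), Fraction(1, 2)
P_W_given_H = Fraction(2, 9)
P_W_given_T = Fraction(5, 11)

# Law of total probability, then Bayes' formula for P(H | W)
P_W = P_W_given_H * P_H + P_W_given_T * P_T
P_H_given_W = P_W_given_H * P_H / P_W
print(P_H_given_W)  # 22/67
```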
# Exercises
**32**
Suppose $n$ men throw their hats in the center of the room. Each man then randomly selects a hat. Show that the probability that **none** of these $n$ men selects his own hat is
$$\frac{1} {2!} - \frac{1} {3!} + \frac{1} {4!} - \cdots + \frac{(-1)^n} {n!}$$
>Let $E_i$ denote the event that person $i$ selects his own hat; then the final probability equals
>
>$$\begin{align}
P &= 1 - P(E_1 \cup E_2 \cup \cdots \cup E_n) \\
&= 1 - \left[ \sum_{i} P(E_{i}) - \sum_{i < j}P(E_iE_j) + \cdots + (-1)^{n+1}P(E_1E_2\cdots E_n) \right] \\
\end{align}$$
>
>After applying the formula above, the remaining question is the value of $P(E_{i_1}\cdots E_{i_{h}})$. Here is how we calculate it:
>
>$$\begin{align}
\sum_{i_1 < \cdots < i_h} P(E_{i_1}\cdots E_{i_{h}}) &= \sum_{i_1 < \cdots < i_h} \frac{(n-h)!} {n!} \\
&= \binom{n} {h} \frac{(n-h)!} {n!} \\
&= \frac{1} {h!}
\end{align}$$
>
>The following steps are obvious
>
>However, I would like to show this purely through the theory of number series.
>
>We denote the probability for $n$ men as $P_n$, and it's easy to find that
>
>$$\begin{align}
P_n &= \frac{n-1} {n} \frac{1} {n-1} P_{n-2} \\
&\quad + \frac{n-1} {n} \frac{n-2} {n-1} \frac{1} {n-2} P_{n-3}\\
&\quad + \cdots \\
&\quad + \frac{n-1} {n} \cdots \frac{1} {2} P_{1}\\
&= \frac{1} {n} \sum_{i=1}^{n-2}P_i
\end{align}$$
>
>$$\begin{align}
nP_n &= P_1 + P_2 + \cdots + P_{n-2} \\
nP_n &= (n-1)P_{n-1} + P_{n-2} \\
n(P_n - P_{n-1}) &= -(P_{n-1} - P_{n-2})
\end{align}$$
>
>Almost there.
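Independently of the series argument, the target formula can be checked directly by enumerating permutations for small $n$ — a quick sketch:

```python
from itertools import permutations
from math import factorial

def p_no_fixed_point(n):
    """Exact probability that a random assignment of n hats has no fixed point."""
    count = sum(1 for p in permutations(range(n))
                if all(p[i] != i for i in range(n)))
    return count / factorial(n)

def formula(n):
    """1/2! - 1/3! + ... + (-1)^n / n!"""
    return sum((-1) ** k / factorial(k) for k in range(2, n + 1))

for n in range(2, 8):
    assert abs(p_no_fixed_point(n) - formula(n)) < 1e-12
```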
# Imports
```
import pandas as pd
import spacy
from spacy.tokenizer import Tokenizer
from sklearn.feature_extraction.text import CountVectorizer, TfidfVectorizer
from sklearn.neighbors import NearestNeighbors
from sklearn.decomposition import PCA
import matplotlib.pyplot as plt
import seaborn as sns
import pickle
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import GridSearchCV
from sklearn.pipeline import Pipeline
from xgboost import XGBClassifier
from sklearn.decomposition import TruncatedSVD
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split
```
# Loading data from urls
- I'm going to combine two datasets together.
- The two datasets should contain many of the same strains.
- I hope to merge their descriptions to create a more detailed description of each strain.
```
url = "https://raw.githubusercontent.com/kushyapp/cannabis-dataset/master/Dataset/Strains/strains-kushy_api.2017-11-14.csv"
url2 = "https://raw.githubusercontent.com/med-cabinet-5/data-science/master/cannabis.csv"
df = pd.read_csv(url)
df2 = pd.read_csv(url2)
ls = [df, df2]
for x in ls:
x.info()
print("\n")
```
# Helper Functions
**get_lemmas** - return lemmas from a desired string
**get_word_vectors** - return word vectors from spacy
**pred** - make a prediction that returns a nested dictionary of the 5 closest neighbors and summary stats
**get_words** - create a word embedding visualization
**cleaner** - clean and merge the datasets
```
def get_lemmas(text):
"""Return the Lemmas"""
lemmas = []
doc = nlp(text)
for token in doc:
if ((token.is_stop == False) and (token.is_punct == False)) and (token.pos_!= 'PRON'):
lemmas.append(token.lemma_)
return lemmas
def get_word_vectors(words):
# converts a list of words into their word vectors
return [nlp(word).vector for word in words]
def pred(x):
"""Make prediction and return nested dictionary
x = string
"""
# Load model file and perform prediction
model = pickle.load(open("mvp.sav", "rb"))
x = [x]
trans = tfidf.transform(x)
pred = model.kneighbors(trans.todense())[1][0]
#create empty dictionary
pred_dict = {}
#summary statistics of 5 closest neighbors
for x in pred:
print("\nStrain Name: ", df_test["Strain"][x])
print("\nType: ", df_test["Type"][x])
print("\nDescription: ", df_test["Description"][x])
print("\nFlavor: ", df_test["Flavor"][x])
print("\nEffects: ", df_test["Effects"][x])
print("\nAilments: ", df_test["Ailment"][x])
print("\n--------------------------------------------------------------------------------------------------------------------------------")
# add new dictionary to pred_dict containing predictions
preds_dict ={(1+len(pred_dict)): {"strain":df_test["Strain"][x],
"type": df_test["Type"][x],
"description": df_test["Description"][x],
"flavor": df_test["Flavor"][x],
"effects": df_test["Effects"][x],
"ailments": df_test["Ailment"][x]}}
pred_dict.update(preds_dict)
return pred, pred_dict
def get_words(x):
"""Get word vectors froma column of strings and plot"""
#create a new column with lemmas
df_test["lemmas"] = df_test[x].apply(get_lemmas)
#create a list of lists from the lemmas column
ls_lemmas = []
for _ in df_test["lemmas"]:
ls_lemmas.append(_)
# create a list of items from the sublist items
ls_lemmas_new = [item for sublist in ls_lemmas for item in sublist]
# the flattened items are already lemmatized, so use them directly
lemmas = ls_lemmas_new
# create new list with lowered words
lemmas_1 = []
for _ in lemmas:
_= _.lower()
lemmas_1.append(_)
# Get rid of duplicates
lemmas = list(set(lemmas_1))
# initialise pca model and tell it to project data down onto 2 dimensions
pca = PCA(n_components=2)
# fit the pca model to our spacy 300-dim data; this works out the projection
# down to 2D that best maintains the relative distances between data points,
# and stores the instructions for transforming the data.
pca.fit(get_word_vectors(lemmas))
# Tell our (fitted) pca model to transform our 300D data down onto 2D using the
# instructions it learnt during the fit phase.
word_vecs_2d = pca.transform(get_word_vectors(lemmas))
# create a nice big plot
plt.figure(figsize=(40,30))
# plot the scatter plot of where the words will be
    sns.scatterplot(x=word_vecs_2d[:, 0], y=word_vecs_2d[:, 1])
# for each word and coordinate pair: draw the text on the plot
for word, coord in zip(lemmas, word_vecs_2d):
x, y = coord
plt.text(x, y, word, size= 30)
# show the plot
plt.show()
return lemmas
def cleaner(df1, df2):
"""Clean Dataframes and concat together & keep as much text information as possible"""
print(f"Shape of df1: {df1.shape}")
    # Fill NaN with empty strings to make concat easier
df1 = df1.fillna("")
# Concat all df1 text columns into a single column containing row corpus
df1['alltext'] = df1['description'].str.cat(df1["type"], sep=" ")
df1['alltext'] = df1['alltext'].str.cat(df1["effects"], sep=" ")
df1['alltext'] = df1["alltext"].str.cat(df1["ailment"], sep=" ")
df1['alltext'] = df1["alltext"].str.cat(df1["flavor"], sep=" ")
df1['alltext'] = df1["alltext"].str.cat(df1["location"], sep=" ")
df1['alltext'] = df1["alltext"].str.cat(df1["terpenes"], sep=" ")
# Rename columns to match DF to concat too
df1 = df1[["name","description", "alltext", "type", "effects", "ailment", "flavor"]]
df1 = df1.rename(columns={"name":"Strain",
"type":"Type",
"effects":"Effects",
"flavor":"Flavor",
"description": "Description"
})
print(f"Shape of df2: {df2.shape}")
    # Fill NaN with empty strings to make concat easier
df2 = df2.fillna("")
# Add 'ailment' column to df2 to make concat easier
df2["ailment"] = ""
# Concat all df2 text columns into a single column containing row corpus
df2['alltext'] = df2['Effects'].str.cat(df2["Flavor"], sep=" ")
df2['alltext'] = df2['alltext'].str.cat(df2["Description"], sep=" ")
df2['alltext'] = df2["alltext"].str.cat(df2["Type"], sep=" ")
# Concat df1 & df2
df_cat = pd.concat([df1,df2], sort=False)
print(f"Shape after concat: {df_cat.shape}")
# Create column that shows the length of alltext to identify low word count rows
df_cat["len_length"] = df_cat["alltext"].apply(lambda x: len(x))
    # Keep only rows with more than 100 chars to filter undescriptive rows out of the df
condition = df_cat['len_length'] > 100
df_cat = df_cat[condition]
# Create "lemmas" column to clean and groupby on cleaned strain names
df_cat['lemmas'] = df_cat['Strain'].apply(get_lemmas)
# Combine lemmas lists to create cleaned strain names with hyphens removed and text lowered
df_cat["strain_clean"] = df_cat["lemmas"].apply(lambda x: " ".join(x))
df_cat["strain_clean"] = df_cat["strain_clean"].apply(lambda x: x.replace("-"," "))
df_cat["strain_clean"] = df_cat["strain_clean"].apply(lambda x: x.lower())
# Groupby "strain_clean" and agg using join then reset index
df_cat = df_cat.groupby("strain_clean").agg(" ".join).reset_index()
# Only keep needed columns
keep_cols = ["strain_clean", "Type", "Effects", "ailment", "Flavor", "Description", "alltext"]
df_cat = df_cat[keep_cols]
# Rename columns to keep similar name structure
df_cat = df_cat.rename(columns = {"strain_clean":"Strain",
"ailment":"Ailment"})
# Add in single strain name
df_cat["Strain"][0] = "one to one"
# Title the strain names
df_cat["Strain"] = df_cat["Strain"].apply(lambda x: x.title())
# Remove duplicates and make text presentable
ls_dupe = ["Effects", "Flavor", "Type", "Ailment", "alltext"]
for x in ls_dupe:
df_cat[x] = df_cat[x].apply(get_lemmas)
df_cat[x] = df_cat[x].map(lambda x: list(set(map(str.lower, x))))
df_cat[x] = df_cat[x].str.join(", ")
df_cat[x] = df_cat[x].apply(lambda x: x.title())
#Summary
print(f"Final Shape: {df_cat.shape}")
return df_cat
```
# Clean Data
```
# Create the nlp object
nlp = spacy.load("en_core_web_lg")
# Create tokenizer object
tokenizer = Tokenizer(nlp.vocab)
# Clean
df_test = cleaner(df, df2)
df_test.head(3)
```
# Model MVP: Nearest Neighbors
```
text = df_test["alltext"]
# Instantiate vectorizer object
tfidf = TfidfVectorizer(tokenizer=get_lemmas, min_df=0.025, max_df=.98, ngram_range=(1,3))
# Create a vocabulary and get word counts per document
dtm = tfidf.fit_transform(text) # Similiar to fit_predict
# Get feature names to use as dataframe column headers
dtm = pd.DataFrame(dtm.todense(), columns=tfidf.get_feature_names())
# View Feature Matrix as DataFrame
print(dtm.shape)
dtm.head()
# Fit on TF-IDF Vectors
nn = NearestNeighbors(n_neighbors=5, algorithm='kd_tree', radius=.5)
nn.fit(dtm)
review = [""" I cant sleep and have back spasms
"""]
new = tfidf.transform(review)
preds = nn.kneighbors(new.todense())
preds
df_test["Description"][1345]
df_test.loc[1345]
```
# Save Model
```
filename = "mvp.sav"
pickle.dump(nn, open(filename, "wb"))
```
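One caveat worth noting: pickling `nn` alone isn't enough to serve predictions from a fresh session, since `kneighbors` expects text already transformed by the same fitted `tfidf` vectorizer. A minimal self-contained sketch of bundling both together (the toy corpus and file name here are hypothetical stand-ins):

```python
import pickle

from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.neighbors import NearestNeighbors

# Hypothetical mini-corpus standing in for df_test["alltext"].
docs = [
    "relaxing sleepy pain relief",
    "energetic citrus uplifting",
    "sleepy calm body high",
]
vectorizer = TfidfVectorizer()
dtm = vectorizer.fit_transform(docs)
model = NearestNeighbors(n_neighbors=1).fit(dtm.toarray())

# Persist the fitted vectorizer and the model together, so a fresh
# session can transform new text before querying neighbors.
with open("mvp_bundle.sav", "wb") as f:
    pickle.dump({"vectorizer": vectorizer, "model": model}, f)

with open("mvp_bundle.sav", "rb") as f:
    loaded = pickle.load(f)

query = loaded["vectorizer"].transform(["trouble sleeping and pain"])
_, idx = loaded["model"].kneighbors(query.toarray())
print(docs[idx[0][0]])  # the "pain relief" strain is the nearest match
```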
# Load Model
```
model = pickle.load(open("mvp.sav", "rb"))
review = ["pain"]
trans = tfidf.transform(review)
model.kneighbors(trans.todense())[1]
```
# Results
```
df_test["Description"][1241]
df_test["Effects"][1241]
```
# Demo
```
preds2, pred_dict = pred("II have trouble sleeping and pain")
preds2
# return nested dictionary
pred_dict
```
# Model Stretch:
- Create labels from type column
- Apply XGBoost and grid search to identify what type of cannabis a patient wants based on their description
- Provide end user with a probability of type (indica, sativa, hybrid, indica-hybrid, sativa-hybrid) of cannabis they are looking for
```
df_test.head(1)
df_test["Type"].value_counts()
# Create column of labels
# dictionary to map over Type column and create labels
label_dict = {"Hybrid": 5,
"Indica": 4,
"Sativa": 3,
"Hybrid, Indica": 2,
"Sativa, Hybrid": 1}
df_test['labels'] = df_test["Type"].map(label_dict)
df_test["labels"].value_counts()
train, test = train_test_split(df_test, random_state=42, stratify=df_test["labels"])
training, validation = train_test_split(train, random_state=42, stratify=train['labels'])
trainX = training.drop(columns=["labels", "lemmas"])
trainy = training.pop("labels")
valX = validation.drop(columns=["labels", "lemmas"])
valy = validation.pop("labels")
testX = test.drop(columns=["labels", "lemmas"])
testy = test.pop("labels")
trainX_cv = df_test.pop("alltext")
trainy_cv = df_test.pop("labels")
split_ls = [trainX, trainy, valX, valy, testX, testy, trainX_cv, trainy_cv]
for x in split_ls:
print(f"Shape: {x.shape}")
svd = TruncatedSVD(n_components=300,
algorithm='randomized',
n_iter=20)
vect = TfidfVectorizer(stop_words='english',
ngram_range=(1,2))
clf = XGBClassifier()
lsi = Pipeline([('vect', vect), ('svd', svd)])
pipe = Pipeline([('lsi', lsi), ('clf', clf)])
parameters = {
'lsi__svd__n_components': (10,55),
'lsi__vect__max_df': (0.75, 1.0),
'clf__max_depth': (5,10,15),
'clf__n_estimators': (200, 500, 1000),
'lsi__vect__min_df': (0.025, 0.05)
}
grid_search = GridSearchCV(pipe, parameters, cv=10, n_jobs=-1, verbose=10)
grid_search.fit(trainX_cv, trainy_cv)
grid_search.best_score_
# Predictions on a test
pred = grid_search.predict(["I have trouble sleeping at night"])
pred
# Probabilities to return
grid_search.predict_proba(["I have trouble sleeping"])
# Save
filename = "stretch.sav"
pickle.dump(grid_search, open(filename, "wb"))
# Load
model = pickle.load(open("stretch.sav", "rb"))
review = ["pain"]
model.predict(review)
model.predict_proba(["I have trouble sleeping at night"])
```
# Data Viz
```
# Word embeddings for cannabis effects
get_words("Effects")
# Word embeddings for cannabis flavor
get_words("Flavor")
get_words("Ailment")
```
# Analyze articles from mega journals
## Import packages
First, let us import all the packages, which are needed for this notebook:
```
import csv
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
from crossref.restful import Works, Journals
from wordcloud import WordCloud
```
If any of these packages is missing, you can install it directly here with something like the following (don't forget to restart the kernel afterwards):
```
!pip install crossrefapi wordcloud
```
## Choose the mega journal to analyze
For each mega journal a specific search within the CrossRef API is defined (see below).
You can choose the mega journal from this list and then run the analysis for it.
The resulting figures and data are saved with the specified prefix.
```
# You can change this to one of the values:
choice = "academia-letters"
#choice = "plos-one"
#choice = "peerj"
#choice = "peerj-cs"
#choice = "sage-open"
#choice = "open-research-europe"
#choice = "f1000-research"
prefix = choice + "-"
```
## Analyze the papers
This step relies on the previously created CSV files in which the data from Crossref is saved.
If you want to fetch fresh data first, jump to the section below.
Otherwise you can continue here to analyze the same data that was used originally.
```
# reading csv file as pandas dataframe
data = pd.read_csv('./data/' + prefix + 'papers.csv')
data['issued'] = pd.to_datetime(data['issued'])
data['year'] = data['issued'].dt.year
data['month'] = data['issued'].dt.month
data['weekday'] = data['issued'].dt.weekday
# showing the first entries
data.head()
# Some consistency and sanity checks
print(data['prefix'].value_counts(), "\n")
print(data['language'].value_counts(), "\n")
print(data['container-title'].value_counts(), "\n")
print(data['ISSN'].value_counts(), "\n")
print(data['publisher'].value_counts(), "\n")
print(data['type'].value_counts(), "\n")
print(data['member'].value_counts(), "\n")
```
### Publications per month
```
data['issued'].value_counts()
data.value_counts(subset=['year', 'month'], sort=False)
fig = plt.figure()
ax = fig.add_subplot(111)
data.value_counts(subset=['year', 'month'], sort=False).plot(kind='bar', legend=False).set(xlabel=None)
# the following needs matplotlib 3.4.1 or higher
ax.bar_label(ax.containers[0], size=8)
plt.savefig('./figures/' + prefix + 'published-articles.png', dpi=600, bbox_inches="tight")
```
### Publications per weekday
```
data['weekday'].value_counts().sort_index()
fig = plt.figure()
ax = fig.add_subplot(111)
data['weekday'].value_counts().sort_index().rename(index={0: "Mon", 1: "Tue", 2: "Wed", 3: "Thu", 4: "Fri", 5: "Sat", 6: "Sun"}).plot(kind='bar')
plt.xticks(rotation=0)
# the following needs matplotlib 3.4.1 or higher
ax.bar_label(ax.containers[0])
plt.savefig('./figures/' + prefix + 'publications-weekday.png', dpi=600)
```
If articles are published on Sunday, this may indicate a (fully) automated publishing workflow.
### Word cloud from the title words
First, we concatenate the title of all published articles into one variable.
```
wordsFromTitles = ""
with open('./data/' + prefix + 'papers.csv', 'r', newline='') as papersfile:
reader = csv.DictReader(papersfile)
for row in reader:
wordsFromTitles += row['title'] + " | "
# print(wordsFromTitles)
from wordcloud import WordCloud
#wordcloud = WordCloud().generate(wordsFromTitles)
wordcloud = WordCloud(max_font_size=50, collocation_threshold=5, background_color="white", colormap="Dark2", width=800, height=400).generate(wordsFromTitles)
# save it to a file
wordcloud.to_file('./figures/' + prefix + 'wordcloud.png')
# preview the cloud here directly
plt.figure()
plt.imshow(wordcloud, interpolation="bilinear")
plt.axis("off")
plt.show()
```
Some other results from the exploration phase are saved at the end of this notebook.
## Refetch and update the data (optional)
The data from CrossRef's API was fetched along these lines. You can refetch or update the data here.
All the data here will be saved as CSV files, such that afterwards you can re-analyze them by the code above.
```
works = Works()
journals = Journals()
if choice == "academia-letters": articles = works.filter(container_title="Academia Letters")
if choice == "plos-one": articles = journals.works('1932-6203').filter(from_issued_date='2021')
if choice == "peerj": articles = journals.works('2167-8359').filter(from_issued_date='2021')
if choice == "peerj-cs": articles = journals.works('2376-5992').filter(from_issued_date='2021')
if choice == "sage-open": articles = journals.works('2158-2440').filter(from_issued_date='2021')
if choice == "open-research-europe": articles = journals.works('2732-5121').filter(from_issued_date='2021')
if choice == "f1000-research": articles = journals.works('2046-1402').filter(from_issued_date='2021')
print(articles.count(), articles.url)
def parse(key, obj):
if key in obj:
value = obj[key]
else:
return ''
if key in ['issued', 'published', 'published-online', 'indexed', 'created']:
# e.g. {"date-parts":[[2021,8,17]],"date-time":"2021-08-17T20:02:44Z","timestamp":1629230564000}
if "date-parts" in value:
return ", ".join(["-".join(map(str, date)) for date in value['date-parts']])
else:
print("WARNING: unexpected date format", value)
if key in ['title', 'container-title']:
if len(value) == 1:
return(value[0])
else:
print("WARNING: unexpected title format", value)
# ISSN is an array and we simply concatenate multiple values
if key in ['ISSN']:
return ", ".join(value)
if key in ['affiliation']:
return " + ".join([aff['name'] for aff in value])
return value
fields = ['DOI', 'prefix', 'title', 'language', 'container-title', 'ISSN', 'publisher', 'URL', 'issued', 'published', 'published-online', 'indexed', 'created', 'reference-count', 'is-referenced-by-count', 'type', 'member']
with open('./data/' + prefix + 'papers.csv', 'w', newline='') as papersfile:
paperswriter = csv.writer(papersfile)
paperswriter.writerow(fields)
author_fields = ['given', 'family', 'sequence', 'affiliation']
with open('./data/' + prefix + 'authors.csv', 'w', newline='') as authorsfile:
authorswriter = csv.writer(authorsfile)
authorswriter.writerow(author_fields + ['DOI'])
i=0
for entry in articles:
i += 1
line = [parse(field, entry) for field in fields]
paperswriter.writerow(line)
if 'author' in entry:
for author in entry['author']:
author_line = [parse(field, author) for field in author_fields]
author_line.append(entry['DOI'])
authorswriter.writerow(author_line)
else:
print("WARNING: no author info for", entry['DOI'], entry['title'])
# For testing
#if i > 3: break
```
## Other things from the exploration phase
### Citation distribution
```
# print(data['reference-count'].value_counts())
citations = data['is-referenced-by-count'].value_counts().sort_index()
print(data['is-referenced-by-count'].value_counts(normalize=True).sort_index())
citations.plot(kind='bar')
hmax = max(data['is-referenced-by-count'].value_counts()[1:])*2
plt.axis(ymax=hmax)
plt.xlabel("Citations")
plt.ylabel("Number of Articles")
#plt.bar_label()
plt.savefig('./figures/' + prefix + 'citations-distribution.png')
```
### Number of publications on specific dates
```
print(data['issued'].value_counts())
```
For Academia Letters three of the top values from above are from July 2021, i.e. on some of these days a lot of articles were published. Let us have a closer look:
```
data[(data['year'] == 2021) & (data['month'] == 7)]['issued'].value_counts()
```
The top three days alone account for more than 500 articles published within 3 days... that is a lot!
### Analyze authors and affiliations
```
# reading csv file as pandas dataframe
authors_data = pd.read_csv('./data/' + prefix + 'authors.csv')
authors_data['full-name'] = authors_data['given'] + " " + authors_data['family']
data.head()
authors_data['full-name'].value_counts()
authors_data['affiliation'].value_counts()
affiliation_data = authors_data.dropna()
affiliation_data[affiliation_data['affiliation'].str.contains('Mannheim')]
```
# The Curse of Dimensionality
An increase in the number of dimensions of a dataset means that there are more entries in the vector of features that represents each observation in the corresponding Euclidean space. We measure the distance in a vector space using Euclidean distance, also known as L2 norm, which we applied to the vector of linear regression coefficients to train a regularized Ridge Regression model.
The Euclidean distance between two n-dimensional vectors with Cartesian coordinates p = (p1, p2, ..., pn) and q = (q1, q2, ..., qn) is computed using the familiar formula developed by Pythagoras:
$$d(p, q)=\sqrt{\sum_{i=1}^n(p_i−q_i)^2}$$
Hence, each new dimension adds a non-negative term to the sum, so that the distance increases with the number of dimensions for distinct vectors. In other words, as the number of features grows for a given number of observations, the feature space becomes increasingly sparse, i.e., less dense or emptier. On the flip side, the lower data density requires more observations to keep the average distance between data points the same.
## Imports and Settings
```
%matplotlib inline
import pandas as pd
import numpy as np
from numpy import clip, full, fill_diagonal
from numpy.random import uniform, multivariate_normal, seed
from scipy.spatial.distance import pdist, squareform
import matplotlib.pyplot as plt
import seaborn as sns
seed(42)
sns.set_style('white')
```
## Simulate pairwise distances of points in $\mathbb{R}^n$ (while $n$ increases)
The simulation draws features in the range [0, 1] from uncorrelated uniform or correlated normal distributions and gradually increases the number of features to 2,500.
The average distance between data points increases to over 11 times the feature range for features drawn from the normal distribution, and to over 20 times in the (extreme) case of the uncorrelated uniform distribution.
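For the uncorrelated uniform case, this scaling can be anticipated in closed form: two points with iid U(0, 1) coordinates satisfy $E[(p_i - q_i)^2] = 1/6$, so the expected squared distance is $n/6$ and the typical distance grows like $\sqrt{n/6} \approx 20.4$ at $n = 2500$, consistent with the simulated figure. A quick check:

```python
import numpy as np

rng = np.random.default_rng(42)
n = 2500  # matches the top end of the simulation below

# For iid U(0, 1) coordinates, E[(p_i - q_i)^2] = 1/6, so the typical
# distance between two random points grows like sqrt(n / 6).
p, q = rng.uniform(size=(2, n))
empirical = np.linalg.norm(p - q)
theoretical = np.sqrt(n / 6)
print(round(empirical, 1), round(theoretical, 1))  # both close to 20.4
```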
### Compute Distance Metrics
```
def get_distance_metrics(points):
"""Calculate mean of pairwise distances and
mean of min pairwise distances"""
pairwise_dist = squareform(pdist(points))
fill_diagonal(pairwise_dist, np.nanmean(pairwise_dist, axis=1))
avg_distance = np.mean(np.nanmean(pairwise_dist, axis=1))
fill_diagonal(pairwise_dist, np.nanmax(pairwise_dist, axis=1))
avg_min_distance = np.mean(np.nanmin(pairwise_dist, axis=1))
return avg_distance, avg_min_distance
```
### Simulate Distances
```
def simulate_distances(m, n, mean, var, corr):
"""Draw m random vectors of dimension n
from uniform and normal distributions
and return pairwise distance metrics"""
uni_dist = get_distance_metrics(uniform(size=(m, n)))
cov = full(shape=(n, n), fill_value=var * corr)
fill_diagonal(cov, var)
normal_points = multivariate_normal(
full(shape=(n,), fill_value=mean), cov, m)
normal_points = clip(normal_points, a_min=0, a_max=1)
norm_dist = get_distance_metrics(normal_points)
return uni_dist, norm_dist
```
### Sampling Parameters
```
n_points = 1000
min_dim, max_dim, step = 1, 2502, 100 # from 1 - 2501
dimensions = range(min_dim, max_dim, step)
```
### Normal Distribution Params
```
mean = 0.5
var = (mean/3)**2 # 99% of sample in [0, 1]
corr = 0.25
```
### Run Simulation
```
col_names = ['Avg. Uniform', 'Min. Uniform', 'Avg. Normal', 'Min. Normal']
avg_dist = []
for dim in dimensions:
uni_dist, norm_dist = simulate_distances(n_points,
dim,
mean,
var,
corr)
avg_dist.append([*uni_dist,
*norm_dist])
distances = pd.DataFrame(data=avg_dist,
columns=col_names,
index=dimensions)
```
## Plot Average and Minimum Distance of Points in Unit Hypercube
```
title = 'Distance of {:,.0f} Data Points in a Unit Hypercube'.format(n_points)
fig, axes = plt.subplots(nrows=2, ncols=1, figsize=(14, 8))
distances[[ 'Avg. Uniform', 'Avg. Normal']].plot.bar(title='Average ' + title, ax=axes[0], rot=0)
distances[[ 'Min. Uniform', 'Min. Normal']].plot.bar(title='Minimum ' + title, ax=axes[1], rot=0)
for ax in axes:
ax.grid(axis='y', lw=1, ls='--')
for p in ax.patches:
ax.annotate('{:.1f}'.format(p.get_height()), (p.get_x() + .005, p.get_height() + .25), fontsize=10)
ax.set_ylabel('Distance')
axes[1].set_xlabel('Dimensionality')
sns.despine()
fig.tight_layout();
```
# Prevalence of Personal Attacks
In this notebook, we do some basic investigation into the frequency of personal attacks on Wikipedia. We will attempt to provide some insight into the following questions:
- What fraction of comments are personal attacks?
- What fraction of users have made a personal attack?
- What fraction of users have been attacked on their user page?
- Are there any temporal trends in the frequency of attacks?
We have 2 separate types of data at our disposal. First, we have a random sample of roughly 100k human-labeled comments. Each comment was labeled by 10 separate people as to whether the comment is a personal attack. We can average these labels to get a single score. Second, we have the full history of comments with attack probabilities generated by a machine learning model. The machine learning model is imperfect: it is more conservative than human annotators about making "extreme" predictions. In particular, it is less likely to give non-attacks a score of 0 and less likely to give attacks a score greater than 0.5, compared to humans (for more details, see the "Model Checking" notebook). For all questions above apart from the first, we will need to use the model scores.
```
%load_ext autoreload
%autoreload 2
%matplotlib inline
import warnings
warnings.filterwarnings('ignore')
import matplotlib.pyplot as plt
import seaborn as sns
import numpy as np
import pandas as pd
from load_utils import *
from analysis_utils import compare_groups
d = load_diffs()
df_events, df_blocked_user_text = load_block_events_and_users()
```
### Q: What fraction of comments are personal attacks?
#### Methodology 1:
Compute fraction of comments predicted to be attacks for different classification thresholds. We can rely on just the human labeled data for this. However, we can also see what results we would get if we used the model predictions.
```
def prevalence_m1(d, sample, score, ts = np.arange(0.5, 0.91, 0.1)):
d_a = pd.concat([pd.DataFrame({'threshold': t, 'attack': d[sample].query("ns=='article'")[score] >= t }) for t in ts], axis = 0)
d_a['condition'] = 'article talk'
d_u = pd.concat([pd.DataFrame({'threshold': t, 'attack': d[sample].query("ns=='user'")[score] >= t }) for t in ts], axis = 0)
d_u['condition'] = 'user talk'
df = pd.concat([d_u, d_a])
df['attack'] = df['attack'] * 100
sns.pointplot(x="threshold", y="attack", hue="condition", data=df)
plt.ylabel('percent of comments that are attacks')
plt.ylim(-0.1, plt.ylim()[1])
plt.savefig('../../paper/figs/threshold_prevalence_%s_%s' % (sample, score))
# human annotations
prevalence_m1(d, 'annotated', 'recipient_score')
```
A higher proportion of user talk comments are attacks. At a conservative threshold of 0.8, 1 in 400 user talk comments is an attack.
```
# model predictions on human annotated data
prevalence_m1(d, 'annotated', 'pred_recipient_score')
```
This is the same plot as above, except that we used the model scores instead of the mean human scores. The model gives fewer comments a score above 0.5. Hence, it suggests that a smaller proportion of comments is personal attacks. This is just for comparison. We can simply use the human scores to answer this question.
```
#prevalence_m1(d, 'sample', 'pred_recipient_score')
```
#### Methodology 2
A potential downside of measuring personal attacks as we did above is that, when we pick a threshold, we don't count any of the attacking comments with a score below the threshold. If we interpret the score as the probability that a human would consider the comment an attack, then we can use the scores to compute the expected fraction of attacks in a corpus. To get a measure of uncertainty, we can use the scores as probabilities in a simulation as follows: for each comment, label it as a personal attack with the probability assigned by the model/annotators. Count the fraction of comments labeled as personal attacks. Repeat to get a distribution and take 95% interval.
```
def prevalence_m2(d, sample, score, n = 100):
def compute_ci(a, n = 10):
m = a.shape[0]
v = a.values.reshape((m,1))
fs = np.sum(np.random.rand(m, n) < v, axis = 0) / m
print("Percent of comments labeled as attacks: (%.3f, %.3f)" % ( 100*np.percentile(fs, 2.5), 100*np.percentile(fs, 97.5)))
print('\nUser:')
compute_ci(d[sample].query("ns=='user'")[score])
print('Article:')
compute_ci(d[sample].query("ns=='article'")[score])
# human annotations
prevalence_m2(d, 'annotated', 'recipient_score')
```
This method gives that roughly 2.3% of user talk comments are personal attacks. I would consider this method less reliable than the one above, since here we are relying on every annotator being high quality. Just having one trigger-happy annotator for each comment already gives an expected attack fraction of 10%.
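The 10% figure follows directly from this simulation: a benign comment whose ten labels include one spurious "attack" gets a mean score of 0.1, so it is counted as an attack in 10% of the simulation draws. A quick sanity check of that arithmetic (the comment count here is arbitrary):

```python
import numpy as np

rng = np.random.default_rng(0)
m = 100_000                      # benign comments
scores = np.full(m, 1 / 10)      # one of ten annotators always flags "attack"
labeled_attack = rng.random(m) < scores
print(round(labeled_attack.mean(), 3))  # close to 0.1
```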
```
# model predictions on annotated data
prevalence_m2(d, 'annotated', 'pred_recipient_score')
```
The model over-predicts relative to the annotators for method M2, since the model's score distribution is skewed right in the [0, 0.2] interval and that is where most of the data is. Again, we don't need the model scores to answer this question, I am just including it for comparison.
```
#prevalence_m2(d, 'sample', 'pred_recipient_score')
```
### Q: What fraction of users have made a personal attack?
#### Methodology 1:
Take unsampled data from 2015. Compute the fraction of people who authored at least one comment above the threshold, for different thresholds. Note: this requires using the model scores.
```
ts = np.arange(0.5, 0.91, 0.1)
dfs = []
for t in ts:
dfs.append (\
d['2015'].assign(attack= lambda x: 100 * (x.pred_recipient_score > t))\
.groupby(['user_text', 'author_anon'], as_index = False)['attack'].max()\
.assign(threshold = t)
)
sns.pointplot(x='threshold', y = 'attack', hue = 'author_anon', data = pd.concat(dfs))
plt.ylabel('percent of attacking users')
```
- 1.6% of registered users have written at least one comment with a score above 0.5
- 0.5% of registered users have written at least one comment with a score above 0.8.
#### Methodology 2:
Take unsampled data. For each comment, let it be an attack with probability equal to the model prediction. Count the number of users that have made at least 1 attack. Repeat.
```
def simulate_num_attacks(df, group_col = 'user_text'):
n = df.assign( uniform = np.random.rand(df.shape[0], 1))\
.assign(is_attack = lambda x: (x.pred_recipient_score >= x.uniform).astype(int))\
.groupby(group_col)['is_attack']\
.max()\
.sum()
return n
n_attacks = [simulate_num_attacks(d['2015']) for i in range(100)]
100 * (np.percentile(n_attacks, [2.5, 97.5]) / len(d['2015'].user_text.unique()))
# ignore anon users
d_temp = d['2015'].query('not author_anon and not recipient_anon')
n_attacks = [simulate_num_attacks(d_temp) for i in range(100)]
100 * (np.percentile(n_attacks, [2.5, 97.5]) / len(d_temp.user_text.unique()))
```
These results are inflated due to the model's right skew. This means that users with a lot of comments are very likely to be assigned at least one attacking comment in the simulation. I would not recommend using this method.
### Q: What fraction of users have been attacked on their user page?
#### Methodology:
Take unsampled data from 2015. Compute the fraction of people who received at least one comment above the threshold, for different thresholds.
```
dfs = []
for t in ts:
dfs.append (\
d['2015'].query("not own_page and ns=='user'")\
.assign(attack = lambda x: 100 * (x.pred_recipient_score >= t))\
.groupby(['page_title', 'recipient_anon'], as_index = False)['attack'].max()\
.assign(threshold = t)
)
sns.pointplot(x='threshold', y = 'attack', hue = 'recipient_anon', data = pd.concat(dfs))
plt.ylabel('percent of attacked users')
```
#### Methodology 2:
Take unsampled data. For each comment, let it be an attack with probability equal to the model prediction. Count the number of users that have received at least 1 attack. Repeat.
```
n_attacks = [simulate_num_attacks(d['2015'].query("ns=='user'"), group_col = 'page_title') for i in range(10)]
100 * (np.percentile(n_attacks, [2.5, 97.5]) / len(d['2015'].page_title.unique()))
# ignore anon users
d_temp = d['2015'].query("not author_anon and not recipient_anon and ns=='user'")
n_attacks = [simulate_num_attacks(d_temp, group_col = 'page_title') for i in range(10)]
100 * (np.percentile(n_attacks, [2.5, 97.5]) / len(d_temp.page_title.unique()))
```
Just as above, these results are inflated due to the model's right skew.
### Q: Has the proportion of attacks changed year over year?
```
df_span = d['sample'].query('year > 2003 & year < 2016')
df_span['is_attack'] = (df_span['pred_recipient_score'] > 0.5).astype(int) * 100
x = 'year'
o = range(2004, 2016)
sns.pointplot(x=x, y = 'is_attack', data = df_span, order = o)
plt.ylabel('Percent of comments that are attacks')
```
There is a strong yearly pattern. The fraction of attacks peaked in 2008, which is when participation peaked as well. Since 2013, we have seen an increase in the fraction of attacks.
### Q: Is there a seasonal effect?
```
x = 'month'
o = range(1, 13)
sns.pointplot(x=x, y = 'is_attack', data = df_span, order = o)
plt.ylabel('Percent of comments that are attacks')
```
Not much of a pattern, large intervals relative to the trend.
### Q: Is there an hour of day effect?
```
x = 'hour'
o = range(1, 24)
sns.pointplot(x=x, y = 'is_attack', data = df_span, order = o)
plt.ylabel('Percent of comments that are attacks')
```
This is imperfect because timestamps are in UTC and not adjusted for time zone of the editor (the IPs are private).
#### Loading Libraries
```
%matplotlib inline
import glob
import matplotlib.pyplot as plt
import seaborn as sns
import networkx as nx
from nxviz import MatrixPlot, CircosPlot, ArcPlot
import pandas as pd
import tqdm
import sys
sys.path.append('../')
import termite as trmt
```
#### Load Dataset
The Experiment class in the *termite* module provides tools for preprocessing a tracking result dataset; we'll assume from now on that the data was generated using this process.
Let's proceed to load the data then...
```
nest = trmt.Experiment('/media/dmrib/tdata/Syntermes/N11HHS2018-8W4S/N11HHS2018-8W4S-1/Expanded/')
```
This gives us an object containing both the nest's termite trails and their metadata, so we can further investigate the underlying structure of termite self-organization.
#### Encounters Graphs
Let's build a graph representing the encounters in a given frame...
```
def build_frame_encounters_graph(nest, frame_number):
G = nx.Graph()
color_map = []
for termite in nest.termites:
G.add_node(termite.label)
if termite.caste == 'S':
color_map.append('gray')
else:
color_map.append('black')
for termite in nest.termites:
for other in nest.termites:
if termite != other:
if termite.trail.loc[frame_number, f'encountering_{other.label}']:
G.add_edge(termite.label, other.label)
return G, color_map
```
And here's how to visualize it:
```
frame_encounters, c_map = build_frame_encounters_graph(nest, 1000)
nx.draw_circular(frame_encounters, node_color=c_map, node_size=1000, alpha=0.9, with_labels=True, edge_color='red', linewidths=3, width=5, font_color='white')
```
And we can also create the graph for the entire experiment...
```
def build_encounters_graph(nest):
G = nx.Graph()
color_map = []
for termite in nest.termites:
G.add_node(termite.label)
if termite.caste == 'S':
color_map.append('gray')
else:
color_map.append('black')
for termite in nest.termites:
for other in nest.termites:
if termite != other:
if termite.trail[f'encountering_{other.label}'].sum() > 0:
G.add_edge(termite.label, other.label)
return G
```
And see what it looks like:
```
all_encounters = build_encounters_graph(nest)
nx.draw_circular(all_encounters, node_color=c_map, node_size=1000, alpha=0.9, with_labels=True, edge_color='red', linewidths=3, width=5, font_color='white')
```
This one is better visualized with a matrix plot!
```
plot = MatrixPlot(all_encounters)
plot.draw()
```
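Beyond visualization, the encounters graph also supports quantitative questions, such as which individual is the most socially connected. A sketch using degree centrality on a toy graph (the real `all_encounters` graph built above could be passed in the same way):

```python
import networkx as nx

# Toy encounters graph standing in for `all_encounters`: nodes are termite
# labels, edges mean the pair was seen encountering at least once.
G = nx.Graph()
G.add_edges_from([(1, 2), (1, 3), (1, 4), (2, 3)])

# Degree centrality: the fraction of other individuals each one has met.
centrality = nx.degree_centrality(G)
most_connected = max(centrality, key=centrality.get)
print(most_connected, centrality[most_connected])  # 1 1.0
```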
```
import webbrowser, os
import json
import boto3
import io
from io import BytesIO
import sys
from pprint import pprint
import pdf2image
import os
def extract_jpg(file_name,csv_output_file):
table_csv = get_table_csv_results(file_name)
with open(csv_output_file, "w") as fout:
fout.write(table_csv)
print('CSV OUTPUT FILE: ',csv_output_file)
#extract_jpg('out.jpg')
def extract_pdf(file_name,csv_output_file):
from pdf2image import convert_from_path
pages = convert_from_path(file_name, 500)
table_csv=""
for page in pages:
page.save('out.jpeg', 'JPEG')
table_csv+= get_table_csv_results('out.jpeg')
os.remove("out.jpeg")
with open(csv_output_file, "w") as fout:
fout.write(table_csv)
print('CSV OUTPUT FILE: ',csv_output_file)
#extract_pdf(file_name)
def get_rows_columns_map(table_result, blocks_map):
rows = {}
for relationship in table_result['Relationships']:
if relationship['Type'] == 'CHILD':
for child_id in relationship['Ids']:
cell = blocks_map[child_id]
if cell['BlockType'] == 'CELL':
row_index = cell['RowIndex']
col_index = cell['ColumnIndex']
if row_index not in rows:
# create new row
rows[row_index] = {}
# get the text value
rows[row_index][col_index] = get_text(cell, blocks_map)
return rows
def get_text(result, blocks_map):
text = ''
if 'Relationships' in result:
for relationship in result['Relationships']:
if relationship['Type'] == 'CHILD':
for child_id in relationship['Ids']:
word = blocks_map[child_id]
if word['BlockType'] == 'WORD':
text += word['Text'] + ' '
if word['BlockType'] == 'SELECTION_ELEMENT':
if word['SelectionStatus'] =='SELECTED':
text += 'X '
return text
def get_table_csv_results(file_name):
with open(file_name, 'rb') as file:
img_test = file.read()
bytes_test = bytearray(img_test)
print('Image loaded', file_name)
# process using image bytes
# get the results
client = boto3.client('textract')
response = client.analyze_document(Document={'Bytes': bytes_test}, FeatureTypes=['TABLES'])
# Get the text blocks
blocks=response['Blocks']
pprint(blocks)
blocks_map = {}
table_blocks = []
for block in blocks:
blocks_map[block['Id']] = block
if block['BlockType'] == "TABLE":
table_blocks.append(block)
if len(table_blocks) <= 0:
return "<b> NO Table FOUND </b>"
csv = ''
for index, table in enumerate(table_blocks):
csv += generate_table_csv(table, blocks_map, index +1)
csv += '\n\n'
return csv
def generate_table_csv(table_result, blocks_map, table_index):
rows = get_rows_columns_map(table_result, blocks_map)
table_id = 'Table_' + str(table_index)
# get cells.
csv = 'Table: {0}\n\n'.format(table_id)
for row_index, cols in rows.items():
for col_index, text in cols.items():
csv += '{}'.format(text) + ","
csv += '\n'
csv += '\n\n\n'
return csv
file_name = 'sample form for grant of site clearance.pdf'
# strip the extension to build the output file name
output_file_name = os.path.splitext(file_name)[0]
csv_output_file = output_file_name + '.csv'
if file_name.lower().endswith(".pdf"):
    extract_pdf(file_name, csv_output_file)
else:
    extract_jpg(file_name, csv_output_file)
```
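The serialization step in `generate_table_csv` walks a `{row_index: {col_index: text}}` mapping, so it can be exercised without calling Textract at all; the sample mapping below is made up:

```python
# A tiny stand-in for the output of get_rows_columns_map().
rows = {1: {1: "Name ", 2: "Qty "}, 2: {1: "Bolt ", 2: "40 "}}

# Same row/column walk as generate_table_csv, minus the table header.
csv_text = ""
for row_index in sorted(rows):
    for col_index in sorted(rows[row_index]):
        csv_text += "{},".format(rows[row_index][col_index])
    csv_text += "\n"
print(csv_text)
```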
# EXTRACTING KEY VALUE PAIRS/JSON FILE OUT OF CSV FILE
```
import csv
file_name= csv_output_file
rows=[]
with open(file_name, 'r',encoding="utf8") as csvfile:
csvreader = csv.reader(csvfile)
# extracting field names through first row
fields = next(csvreader)
# extracting each data row one by one
for row in csvreader:
rows.append(row)
print(rows)
#removing empty lines
master_list = []
for i in range(len(rows)):
if(len(rows[i])>=2):
master_list.append(rows[i])
print(master_list)
master_dict = {}
for i in master_list:
    master_dict[i[0]] = i[1:]
print(master_dict.keys())
import json
json_object = json.dumps(master_dict, indent = 4)
print(json_object)
json_output_file = output_file_name+'.json'
with open(json_output_file, 'w') as outfile:
    outfile.write(json_object)  # json_object is already a JSON string
```
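The CSV-to-dictionary-to-JSON pipeline above can be checked on a small in-memory sample before pointing it at real Textract output (the field names here are invented):

```python
import csv
import io
import json

sample = "Field,Value\nApplicant,ACME Ltd\nSite,Plot 7\n"

reader = csv.reader(io.StringIO(sample))
fields = next(reader)  # header row, consumed exactly as in the code above
data = {row[0]: row[1:] for row in reader if len(row) >= 2}

payload = json.dumps(data, indent=4)
print(payload)
```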
```
import sys
sys.path.append(r'C:\Users\moallemie\EMAworkbench-master')
sys.path.append(r'C:\Users\moallemie\EM_analysis')
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
import seaborn as sns
from ema_workbench import load_results, ema_logging
from ema_workbench.em_framework.salib_samplers import get_SALib_problem
from SALib.analyze import morris
# Set up number of scenarios, outcome of interest.
sc = 500 # Specify the number of scenarios at which convergence of the SA indices occurs
t = 2100
top_factor = 5
outcome_var = 'Coal Production Indicator' # Specify the outcome of interest for SA ranking verification
```
## Loading experiments
```
ema_logging.log_to_stderr(ema_logging.INFO)
from Model_init import vensimModel
from ema_workbench import (TimeSeriesOutcome,
perform_experiments,
RealParameter,
CategoricalParameter,
ema_logging,
save_results,
load_results)
directory = 'C:/Users/moallemie/EM_analysis/Model/'
df_unc = pd.read_excel(directory+'ScenarioFramework.xlsx', sheet_name='Uncertainties')
df_unc['Min'] = df_unc['Min'] + df_unc['Reference'] * 0.75
df_unc['Max'] = df_unc['Max'] + df_unc['Reference'] * 1.25
# From the Scenario Framework (all uncertainties), filter only those top 20 sensitive uncertainties under each outcome
sa_dir='C:/Users/moallemie/EM_analysis/Data/'
mu_df = pd.read_csv(sa_dir+"MorrisIndices_{}_sc5000_t{}.csv".format(outcome_var, t))
mu_df.rename(columns={'Unnamed: 0': 'Uncertainty'}, inplace=True)
mu_df.sort_values(by=['mu_star'], ascending=False, inplace=True)
mu_df = mu_df.head(20)
mu_unc = mu_df['Uncertainty']
mu_unc_df = mu_unc.to_frame()
# Remove the rest of insensitive uncertainties from the Scenario Framework and update df_unc
keys = list(mu_unc_df.columns.values)
i1 = df_unc.set_index(keys).index
i2 = mu_unc_df.set_index(keys).index
df_unc2 = df_unc[i1.isin(i2)]
vensimModel.uncertainties = [RealParameter(row['Uncertainty'], row['Min'], row['Max']) for index, row in df_unc2.iterrows()]
df_out = pd.read_excel(directory+'ScenarioFramework.xlsx', sheet_name='Outcomes')
vensimModel.outcomes = [TimeSeriesOutcome(out) for out in df_out['Outcome']]
r_dir = 'D:/moallemie/EM_analysis/Data/'
results = load_results(r_dir+'SDG_experiments_ranking_verification_{}_sc{}.tar.gz'.format(outcome_var, sc))
experiments, outcomes = results
```
## Calculating SA (Morris) metrics
```
# Morris index calculation as a function of the number of scenarios and time
def make_morris_df(scores, problem, outcome_var, sc, t):
scores_filtered = {k:scores[k] for k in ['mu_star','mu_star_conf','mu','sigma']}
Si_df = pd.DataFrame(scores_filtered, index=problem['names'])
sa_dir='C:/Users/moallemie/EM_analysis/Data/'
Si_df.to_csv(sa_dir+"MorrisIndices_verification_{}_sc{}_t{}.csv".format(outcome_var, sc, t))
Si_df.sort_values(by=['mu_star'], ascending=False, inplace=True)
Si_df = Si_df.head(20)
Si_df = Si_df.iloc[::-1]
indices = Si_df[['mu_star','mu']]
errors = Si_df[['mu_star_conf','sigma']]
return indices, errors
# Set the outcome variable, number of scenarios generated, and the timeslice you're interested in for SA
problem = get_SALib_problem(vensimModel.uncertainties)
X = experiments.iloc[:, :-3].values
Y = outcomes[outcome_var][:,-1]
scores = morris.analyze(problem, X, Y, print_to_console=False)
inds, errs = make_morris_df(scores, problem, outcome_var, sc, t)
```
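For intuition, the `mu_star` and `sigma` values reported by `morris.analyze` reduce to simple statistics over each factor's elementary effects. A pure-Python sketch with made-up numbers (not model output):

```python
import statistics

# Hypothetical elementary effects for one uncertain factor across trajectories.
elementary_effects = [0.8, -1.2, 0.9, -1.1, 1.0]

mu = statistics.mean(elementary_effects)                       # signed mean; effects can cancel
mu_star = statistics.mean(abs(e) for e in elementary_effects)  # mean(|EE|): overall importance
sigma = statistics.stdev(elementary_effects)                   # spread: nonlinearity/interactions
print(mu, mu_star, sigma)
```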
## Plotting SA results
```
# define the plotting function
def plot_scores(inds, errs, outcome_var, sc):
sns.set_style('white')
fig, ax = plt.subplots(nrows=1, ncols=1, figsize=(3, 6))
ind = inds.iloc[:,0]
err = errs.iloc[:,0]
ind.plot.barh(xerr=err.values.T,ax=ax, color = ['#FCF6F5']*(20-top_factor)+['#C6B5ED']*top_factor,
ecolor='dimgray', capsize=2, width=.9)
ax.set_ylabel('')
ax.legend().set_visible(False)
ax.set_xlabel('mu_star index', fontsize=12)
ylabels = ax.get_yticklabels()
ylabels = [item.get_text()[:-10] for item in ylabels]
ax.set_yticklabels(ylabels, fontsize=12)
ax.set_title("Number of experiments (N): "+str(sc*21), fontsize=12)
plt.suptitle("{} in 2100".format(outcome_var), y=0.94, fontsize=12)
plt.rcParams["figure.figsize"] = [7.08,7.3]
plt.savefig('{}/Morris_verification_set_{}_sc{}.png'.format(r'C:/Users/moallemie/EM_analysis/Fig/sa_ranking', outcome_var, sc), dpi=600, bbox_inches='tight')
return fig
plot_scores(inds, errs, outcome_var, sc)
plt.close()
```
# API Gallery
_This notebook is part of a tutorial series on [txtai](https://github.com/neuml/txtai), an AI-powered semantic search platform._
The txtai API is a web-based service backed by [FastAPI](https://fastapi.tiangolo.com/). All txtai functionality including similarity search, extractive QA and zero-shot labeling is available via the API.
This notebook installs the txtai API and shows an example using each of the supported language bindings for txtai.
# Install dependencies
Install `txtai` and all dependencies. Since this notebook uses the API, we need to install the api extras package.
```
%%capture
!pip install git+https://github.com/neuml/txtai#egg=txtai[api]
```
# Python
The first method we'll try is direct access via Python. We'll use zero-shot labeling for all the examples here. See [this notebook](https://colab.research.google.com/github/neuml/txtai/blob/master/examples/07_Apply_labels_with_zero_shot_classification.ipynb) for more details on zero-shot classification.
## Configure Labels instance
```
%%capture
import os
from IPython.core.display import display, HTML
from txtai.pipeline import Labels
def table(rows):
html = """
<style type='text/css'>
@import url('https://fonts.googleapis.com/css?family=Oswald&display=swap');
table {
border-collapse: collapse;
width: 900px;
}
th, td {
border: 1px solid #9e9e9e;
padding: 10px;
font: 20px Oswald;
}
</style>
"""
html += "<table><thead><tr><th>Text</th><th>Label</th></tr></thead>"
for text, label in rows:
html += "<tr><td>%s</td><td>%s</td></tr>" % (text, label)
html += "</table>"
display(HTML(html))
# Create labels model
labels = Labels()
```
## Apply labels to text
```
data = ["Wears a red suit and says ho ho",
"Pulls a flying sleigh",
"This is cut down and decorated",
"Santa puts these under the tree",
"Best way to spend the holidays"]
# List of labels
tags = ["🎅 Santa Clause", "🦌 Reindeer", "🍪 Cookies", "🎄 Christmas Tree", "🎁 Gifts", "👪 Family"]
# Render output to table
table([(text, tags[labels(text, tags)[0][0]]) for text in data])
```
Once again we see the power of zero-shot labeling. The model wasn't trained on any data specific to this example. It's still amazing how much knowledge is stored in large NLP models.
# Start an API instance
Now we'll start an API instance to run the remaining examples. The API needs a configuration file to run. The example below is simplified to only include labeling. See [this link](https://github.com/neuml/txtai#api) for a more detailed configuration example.
The API instance is started in the background.
```
%%writefile index.yml
# Labels settings
labels:
```
```
!CONFIG=index.yml nohup uvicorn "txtai.api:app" &> api.log &
!sleep 90
```
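The config above is the bare minimum needed for labeling. For reference, a slightly fuller sketch; note that every key except `labels:` is an assumption drawn from the general txtai documentation, not from this notebook:

```yaml
# Path where index data would be stored (assumed key)
path: /tmp/index

# Allow write operations through the API (assumed key)
writable: True

# Zero-shot labeling pipeline with the default model
labels:
```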
# JavaScript
txtai.js is available via NPM and can be installed as follows.
```bash
npm install txtai
```
For this example, we'll clone the txtai.js project to import the example build configuration.
```
%%capture
!git clone https://github.com/neuml/txtai.js
```
## Create labels.js
The following file is a JavaScript version of the labels example.
```
%%writefile txtai.js/examples/node/src/labels.js
import {Labels} from "txtai";
import {sprintf} from "sprintf-js";
const run = async () => {
try {
let labels = new Labels("http://localhost:8000");
let data = ["Wears a red suit and says ho ho",
"Pulls a flying sleigh",
"This is cut down and decorated",
"Santa puts these under the tree",
"Best way to spend the holidays"];
// List of labels
let tags = ["🎅 Santa Clause", "🦌 Reindeer", "🍪 Cookies", "🎄 Christmas Tree", "🎁 Gifts", "👪 Family"];
console.log(sprintf("%-40s %s", "Text", "Label"));
console.log("-".repeat(75))
for (let text of data) {
let label = await labels.label(text, tags);
label = tags[label[0].id];
console.log(sprintf("%-40s %s", text, label));
}
}
catch (e) {
console.trace(e);
}
};
run();
```
## Build and run labels example
```
%%capture
os.chdir("txtai.js/examples/node")
!npm install
!npm run build
!node dist/labels.js
```
The JavaScript program is showing the same results as when natively running through Python!
# Java
txtai.java integrates with standard Java build tools (Gradle, Maven, SBT). The following shows how to add txtai as a dependency to Gradle.
```gradle
implementation 'com.github.neuml:txtai.java:v4.0.0'
```
For this example, we'll clone the txtai.java project to import the example build configuration.
```
%%capture
os.chdir("/content")
!git clone https://github.com/neuml/txtai.java
```
## Create LabelsDemo.java
The following file is a Java version of the labels example.
```
%%writefile txtai.java/examples/src/main/java/LabelsDemo.java
import java.util.Arrays;
import java.util.ArrayList;
import java.util.List;
import txtai.API.IndexResult;
import txtai.Labels;
public class LabelsDemo {
public static void main(String[] args) {
try {
Labels labels = new Labels("http://localhost:8000");
List <String> data =
Arrays.asList("Wears a red suit and says ho ho",
"Pulls a flying sleigh",
"This is cut down and decorated",
"Santa puts these under the tree",
"Best way to spend the holidays");
// List of labels
List<String> tags = Arrays.asList("🎅 Santa Clause", "🦌 Reindeer", "🍪 Cookies", "🎄 Christmas Tree", "🎁 Gifts", "👪 Family");
System.out.printf("%-40s %s%n", "Text", "Label");
System.out.println(new String(new char[75]).replace("\0", "-"));
for (String text: data) {
List<IndexResult> label = labels.label(text, tags);
System.out.printf("%-40s %s%n", text, tags.get(label.get(0).id));
}
}
catch (Exception ex) {
ex.printStackTrace();
}
}
}
```
## Build and run labels example
```
os.chdir("txtai.java/examples")
!../gradlew -q --console=plain labels 2> /dev/null
```
The Java program is showing the same results as when natively running through Python!
# Rust
txtai.rs is available via crates.io and can be installed by adding the following to your cargo.toml file.
```toml
[dependencies]
txtai = { version = "4.0" }
tokio = { version = "0.2", features = ["full"] }
```
For this example, we'll clone the txtai.rs project to import the example build configuration. First we need to install Rust.
```
%%capture
os.chdir("/content")
!apt-get install rustc
!git clone https://github.com/neuml/txtai.rs
```
## Create labels.rs
The following file is a Rust version of the labels example.
```
%%writefile txtai.rs/examples/demo/src/labels.rs
use std::error::Error;
use txtai::labels::Labels;
pub async fn labels() -> Result<(), Box<dyn Error>> {
let labels = Labels::new("http://localhost:8000");
let data = ["Wears a red suit and says ho ho",
"Pulls a flying sleigh",
"This is cut down and decorated",
"Santa puts these under the tree",
"Best way to spend the holidays"];
println!("{:<40} {}", "Text", "Label");
println!("{}", "-".repeat(75));
for text in data.iter() {
let tags = vec!["🎅 Santa Clause", "🦌 Reindeer", "🍪 Cookies", "🎄 Christmas Tree", "🎁 Gifts", "👪 Family"];
let label = labels.label(text, &tags).await?[0].id;
println!("{:<40} {}", text, tags[label]);
}
Ok(())
}
```
## Build and run labels example
```
%%capture
os.chdir("txtai.rs/examples/demo")
!cargo build
!cargo run labels
```
The Rust program is showing the same results as when natively running through Python!
# Go
txtai.go can be installed by adding the following import statement. When using modules, txtai.go will automatically be installed. Otherwise use `go get`.
```golang
import "github.com/neuml/txtai.go"
```
For this example, we'll create a standalone process for labeling. First we need to install Go.
```
%%capture
os.chdir("/content")
!apt install golang-go
!go get "github.com/neuml/txtai.go"
```
## Create labels.go
The following file is a Go version of the labels example.
```
%%writefile labels.go
package main
import (
"fmt"
"strings"
"github.com/neuml/txtai.go"
)
func main() {
labels := txtai.Labels("http://localhost:8000")
data := []string{"Wears a red suit and says ho ho",
"Pulls a flying sleigh",
"This is cut down and decorated",
"Santa puts these under the tree",
"Best way to spend the holidays"}
// List of labels
tags := []string{"🎅 Santa Clause", "🦌 Reindeer", "🍪 Cookies", "🎄 Christmas Tree", "🎁 Gifts", "👪 Family"}
fmt.Printf("%-40s %s\n", "Text", "Label")
fmt.Println(strings.Repeat("-", 75))
for _, text := range data {
label := labels.Label(text, tags)
fmt.Printf("%-40s %s\n", text, tags[label[0].Id])
}
}
```
## Build and run labels example
```
!go run labels.go
```
The Go program is showing the same results as when natively running through Python!
# Dataset Statistics for SynthDet
This example notebook shows how to use datasetinsights to load synthetic datasets generated from the [SynthDet](https://github.com/Unity-Technologies/synthdet) example project and visualize dataset statistics.
In addition to the object bounding boxes, SynthDet produces a mix of built-in and project-specific metrics. Statistics for the `RenderedObjectInfo` metrics built into the Perception package can be calculated using `datasetinsights.data.datasets.statistics.RenderedObjectInfo`. SynthDet-specific statistics are loaded via `datasetinsights.data.simulation.Metrics` and are calculated directly in this notebook.
## Setup dataset
If the dataset was generated locally, point `data_root` below to the path of the dataset. The `GUID` folder suffix should be changed accordingly.
```
data_root = "/data/<GUID>"
```
### Unity Simulation [Optional]
If the dataset was generated on Unity Simulation, the following cells can be used to download the metrics needed for dataset statistics.
Provide the `run-execution-id` which generated the dataset and a valid `auth_token` in the following cell. The `auth-token` can be generated using the Unity Simulation [CLI](https://github.com/Unity-Technologies/Unity-Simulation-Docs/blob/master/doc/cli.md#usim-inspect-auth).
```
# import os
# from datasetinsights.data.simulation.download import download_manifest, Downloader
# data_volume = "/data" # directory where datasets should be downloaded to and loaded from
# run_execution_id = "ojEawoj" # Unity Simulation run-execution-id
# auth_token = "xxxx" # Unity Simulation auth token
# project_id = "xxxx" # Unity Project ID
# data_root = os.path.join(data_volume, run_execution_id)
```
Before loading the dataset metadata for statistics we first download the relevant files from Unity Simulation. For downloading files, Unity Simulation provides a manifest file providing file paths and signed urls for each file. `download_manifest()` will download the manifest file to disk. `Download` can then be used to download the metrics and metric definitions.
```
# manifest_file = os.path.join(data_volume, f"{run_execution_id}.csv")
# download_manifest(run_execution_id, manifest_file, auth_token, project_id)
# dl = Downloader(manifest_file, data_root, use_cache=True)
# dl.download_references()
# dl.download_metrics()
```
## Load dataset metadata
Once the dataset metadata is downloaded, it can be loaded for statistics using `datasetinsights.data.simulation`. Annotation and metric definitions are loaded into pandas dataframes using `AnnotationDefinitions` and `MetricDefinitions` respectively.
```
import datasetinsights.data.simulation as sim
ann_def = sim.AnnotationDefinitions(data_root)
ann_def.table
metric_def = sim.MetricDefinitions(data_root)
metric_def.table
```
## Built-in Statistics
The following tables and charts are supplied by `datasetinsights.data.datasets.statistics.RenderedObjectInfo` on datasets that include the "rendered object info" metric.
```
import datasetinsights.data.datasets.statistics as stat
from datasetinsights.visualization.plots import bar_plot, histogram_plot, rotation_plot
max_samples = 10000 # maximum number of samples points used in histogram plots
rendered_object_info_definition_id = "659c6e36-f9f8-4dd6-9651-4a80e51eabc4"
roinfo = stat.RenderedObjectInfo(data_root=data_root, def_id=rendered_object_info_definition_id)
```
### Descriptive Statistics
```
roinfo.num_captures()
roinfo.raw_table.head(3)
```
### Total Object Count
```
total_count = roinfo.total_counts()
total_count
bar_plot(
total_count,
x="label_id",
y="count",
x_title="Label Name",
y_title="Count",
title="Total Object Count in Dataset",
hover_name="label_name"
)
```
### Per Capture Object Count
```
per_capture_count = roinfo.per_capture_counts()
per_capture_count.head(10)
histogram_plot(
per_capture_count,
x="count",
x_title="Object Counts Per Capture",
y_title="Frequency",
title="Distribution of Object Counts Per Capture",
max_samples=max_samples
)
```
### Object Visible Pixels
```
histogram_plot(
roinfo.raw_table,
x="visible_pixels",
x_title="Visible Pixels Per Object",
y_title="Frequency",
title="Distribution of Visible Pixels Per Object",
max_samples=max_samples
)
```
## SynthDet Statistics
Metrics specific to the simulation can be loaded using `datasetinsights.data.simulation.Metrics`.
```
metrics = sim.Metrics(data_root=data_root)
```
### Foreground placement info
```
import pandas as pd
foreground_placement_info_definition_id = "061e08cc-4428-4926-9933-a6732524b52b"
columns = ("x_rot", "y_rot", "z_rot")
def read_foreground_placement_info(metrics):
filtered_metrics = metrics.filter_metrics(foreground_placement_info_definition_id)
combined = pd.DataFrame(filtered_metrics["rotation"].to_list(), columns=columns)
return combined
orientation = read_foreground_placement_info(metrics)
orientation.head(10)
rotation_plot(
orientation,
x="x_rot",
y="y_rot",
z="z_rot",
title="Object orientations",
max_samples=max_samples
)
histogram_plot(
orientation,
x="x_rot",
x_title="Object Rotation (Degree)",
y_title="Frequency",
title="Distribution of Object Rotations along X direction",
max_samples=max_samples
)
histogram_plot(
orientation,
x="y_rot",
x_title="Object Rotation (Degree)",
y_title="Frequency",
title="Distribution of Object Rotations along Y direction",
max_samples=max_samples
)
histogram_plot(
orientation,
x="z_rot",
x_title="Object Rotation (Degree)",
y_title="Frequency",
title="Distribution of Object Rotations along Z direction",
max_samples=max_samples
)
```
### Lighting info
```
lighting_info_definition_id = "939248ee-668a-4e98-8e79-e7909f034a47"
x_y_columns = ["x_rotation", "y_rotation"]
color_columns = ["color.r", "color.g", "color.b", "color.a"]
def read_lighting_info(metrics):
filtered_metrics = metrics.filter_metrics(lighting_info_definition_id)
colors = pd.json_normalize(filtered_metrics["color"])
colors.columns = color_columns
combined = pd.concat([filtered_metrics[x_y_columns], colors], axis=1, join='inner')
return combined
lighting = read_lighting_info(metrics)
lighting.head(5)
rotation_plot(
lighting,
x="x_rotation",
y="y_rotation",
title="Light orientations",
max_samples=max_samples
)
histogram_plot(
lighting,
x="x_rotation",
x_title="Lighting Rotation (Degree)",
y_title="Frequency",
title="Distribution of Lighting Rotations along X direction",
max_samples=max_samples
)
histogram_plot(
lighting,
x="y_rotation",
x_title="Lighting Rotation (Degree)",
y_title="Frequency",
title="Distribution of Lighting Rotations along Y direction",
max_samples=max_samples
)
histogram_plot(
lighting,
x="color.r",
x_title="Lighting Color",
y_title="Frequency",
title="Distribution of Lighting Color Redness",
max_samples=max_samples
)
histogram_plot(
lighting,
x="color.g",
x_title="Lighting Color",
y_title="Frequency",
title="Distribution of Lighting Color Greeness",
max_samples=max_samples
)
histogram_plot(
lighting,
x="color.b",
x_title="Lighting Color",
y_title="Frequency",
title="Distribution of Lighting Color Blueness",
max_samples=max_samples
)
```
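`pd.json_normalize` above flattens the nested `color` record into dotted column names; the same one-level flattening can be sketched with plain dictionaries (the sample record is made up):

```python
# One metric record shaped like the lighting info rows above.
record = {"x_rotation": 30.0, "y_rotation": 45.0,
          "color": {"r": 0.9, "g": 0.8, "b": 0.7, "a": 1.0}}

# Flatten one level of nesting into "parent.child" keys, like json_normalize.
flat = {}
for key, value in record.items():
    if isinstance(value, dict):
        for subkey, subvalue in value.items():
            flat["{}.{}".format(key, subkey)] = subvalue
    else:
        flat[key] = value
print(sorted(flat))
```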
## Images With 2D Bounding Boxes
In this section, we provide sample code to render 2d bounding boxes on top of the captured images.
### Unity Simulation [Optional]
If the dataset was generated on Unity Simulation, the following cells can be used to download the images, captures and annotations in the dataset. Make sure you have enough disk space to store all files. For example, a dataset with 100K captures requires roughly 300GiB storage.
```
# dl.download_captures()
# dl.download_binary_files()
```
### Captures
```
bounding_box_definition_id = "c31620e3-55ff-4af6-ae86-884aa0daa9b2"
cap = sim.Captures(data_root)
captures = cap.filter(def_id=bounding_box_definition_id)
captures.head(3)
```
### Visualize
```
import os
import random
from PIL import Image
from datasetinsights.visual import plot_bboxes
from datasetinsights.data.datasets.synthetic import read_bounding_box_2d
line_width = 5
resize_scale = 2
def draw_bounding_boxes(filename, annotation):
filepath = os.path.join(data_root, filename)
image = Image.open(filepath)
boxes = read_bounding_box_2d(annotation)
img_with_boxes = plot_bboxes(image, boxes, box_line_width=line_width)
    print(f"Image: {filepath}")
new_size = (img_with_boxes.width // resize_scale, img_with_boxes.height // resize_scale)
return img_with_boxes.resize(new_size)
# pick an index at random
index = random.randrange(captures.shape[0])
filename = captures.loc[index, "filename"]
annotation = captures.loc[index, "annotation.values"]
draw_bounding_boxes(filename, annotation)
```
# Workshop 1 – programming basics<a id=top></a>
### <a href='#top'>Table of contents</a>
<ul>
<li><a href='#Hello-world'><span>Hello world</span></a></li>
<li><a href='#Calculator'><span>Calculator</span></a></li>
<li><a href='#Variables'><span>Variables</span></a></li>
<li><a href='#Comments'><span>Comments</span></a></li>
<li><a href='#Instrukcja-import'><span>The import statement</span></a></li>
</ul>
Before we begin our tour of the basic building blocks of <b>Python</b>, it is worth looking at the good programming practices collected in the form of a poem and hidden inside the language itself.
To reach this poetry, we need to import a special library.
```
import this
```
At this point the poem may not make much sense to you, but it is worth coming back to it after finishing the whole workshop.
Python is considered a programming language with a very easy-to-understand syntax and, at the same time, very powerful capabilities (which attracts crowds of people to learn this particular language).
This was summed up quite amusingly by the famous <b>xkcd</b> comic, which has also been hidden inside Python as a joke. You can bring it up by running the command below:
```
import antigravity
```
## Hello world
Traditionally, learning any programming language starts with showing the newcomer how to print the message "Hello world" to the console. Keeping with this tradition, we will see in a moment how to do it in Python.
First, however, let us recall another joke hidden in the language itself, which has a dedicated function for printing exactly this message.
```
import __hello__
```
Interestingly, this works only once; to get the effect again, you have to restart the Python engine (the so-called kernel).<br><br>
Below we can see how to print any text to the console (including, of course, "Hello world").
```
print("Hello world")
```
Note that after pressing the double-quote key " Jupyter adds a matching closing character and places the cursor between them. The same happens for the apostrophe ' and the brackets ( { \[ . Thanks to this, a single keystroke is enough to produce a correctly formatted pair.
If we happen to press the key again at the end (even though the character is already in the text), Jupyter replaces the old character with the new one, so the document always contains the right number of quotes and other special characters.
<a href='#top' style='float: right; font-size: 13px;'>Back to top</a>
## Calculator
The simplest task we can perform with the Python interpreter is evaluating mathematical expressions entered directly into the console.<br>
Python behaves exactly like an ordinary calculator. It also respects the order of operations (following international mathematical conventions), so remember to use parentheses to indicate any order of computation that does not follow from the operations themselves.
```
2+2
4563-6744
346-(3452+5463)-345+3666
```
All the basic mathematical operations are available (more advanced ones require additional libraries):<br><br>
| Operation | Operator |
| :---: | :---: |
| Addition | + |
| Subtraction | - |
| Multiplication | * |
| Division | / |
| Modulo (remainder of division) | % |
| Exponentiation | \** |
| Floor division | // |
```
21+22
78-35
43*1
387/9
```
Modulo means exactly the remainder of division, which appears in the division theorem described, for example, <a href="https://pl.wikipedia.org/wiki/Twierdzenie_o_dzieleniu_z_reszt%C4%85#Twierdzenie">here</a>.
```
435%56
```
Floor division performs ordinary division but returns only the integer part of the result.<br>
So regardless of whether the result of a division is 2, 2.5 or 2.95, floor division will always return 2.
```
395//9
```
This lets us see that modulo and floor division are related to each other.<br>
a // b = c<br>
a % b = d<br>
therefore:<br>
a = b * c + d
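We can verify this relationship directly, using the same numbers as in the examples above:

```python
a, b = 435, 56
c = a // b   # floor division
d = a % b    # modulo (remainder)
print(c, d)            # 7 43
print(a == b * c + d)  # True
```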
<a href='#top' style='float: right; font-size: 13px;'>Back to top</a>
### Exercises:
#### Compute the area of a triangle with base $a=14$ and height $h=10$. Triangle area formula: $S=\frac{a \cdot h}{2}$
#### Determine whether the number 123456 is divisible by 24.
#### Calculate how many full pints of beer can be poured from a 150-litre keg. (hint: 1 pint = 568 ml)
## Variables
Variables are the basic unit from which every programming language is built.
A variable's job is to store a specific piece of information in such a way that it is easily accessible by typing the variable's name into the console.
The general syntax rule is:<br>
<center>**variable = value of the variable (information)**</center>
A variable can store different kinds of information:
Letters, words and sentences are stored in text-type variables. In Python this type is called a string.
```
a = "text stored in variable a" #a text-type variable (string)
a
```
A text variable can be created with either the double quote " or the apostrophe ' . There is no restriction on which one to use, but it is worth sticking to a single style to keep the code clean and readable.
```
a2 = 'text stored in variable a2'
a2
```
Every programming language makes heavy use of numeric variables and the ability to perform all kinds of calculations on them.<br>
Python has several numeric types. The most basic one is the integer type, which stores whole numbers (both positive and negative).
```
b = 43 #a numeric variable (integer)
b
```
We do not have to assign a variable by typing in a value directly; a variable can be assigned the result of any operation on data.
Python first evaluates the right-hand side of this "equation" and only then assigns the result to the variable.
```
c = (11**2-10)*6
c
```
Here is an example of assigning to a variable the result of an operation on two other variables:
```
d=5
e=3
f=d/e
f
```
The name 'variable' points to the possibility of modifying the information assigned to it during the program.<br>
For example, a very popular pattern is to create a variable that acts as a counter, whose job is to count, say, the number of files analyzed. To do this, the variable is first declared with the value 0 and then incremented after each file is processed.
```
licznik = 0
licznik = licznik + 1 #each time this cell is run, the value of licznik ("counter") increases by 1
licznik += 2
```
Thanks to the interpretation order mentioned earlier (the right-hand side of the equation first, then the left), it is possible to take the current value of the counter, add 1 to it, and then assign the new value back to the variable.<br>
Constructions like the one above are so common that many programming languages provide shortened versions of them. Python is no exception.
| Full version | Short version |
| :-: | :-: |
| licznik = licznik + 2 | licznik += 2 |
| licznik = licznik - 2 | licznik -= 2 |
| licznik = licznik * 2 | licznik \*= 2 |
| licznik = licznik / 2 | licznik /= 2 |
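A quick check that the shorthand forms behave exactly like the full versions:

```python
licznik = 10
licznik += 2    # same as: licznik = licznik + 2
licznik *= 3    # same as: licznik = licznik * 3
print(licznik)  # 36
```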
Po pewnym czasie nie sposób zauważyć, że modyfikacja zmiennej, tak naprawdę polega na wykonaniu jakiejś operacji na starej zmiennej i przypisaniu nowej wartości pod tę samą zmienną. Dlatego modyfikacja zmiennej jest raczej skrótem myślowym.
Jest to istotne przede wszystkim w momencie posiadania zmiennych o podobnej nazwie. Jeśli bowiem wywołamy zmienną po raz drugi, tylko z inną wartością, pierwsza wartość zostanie utracona - bardzo łatwo jest w ten sposób nadpisać sobie zmienną i spowodować niewłaściwe działanie programu.
```
moja_zmienna = "I want a string here"
moja_zmienna = 42
print(moja_zmienna)
```
We can see that no trace remains of the string originally assigned to the variable. That is why variable names must be chosen very carefully. In larger programs, local and global variables offer some help here; we will look at them closely in the workshop devoted to functions.
#### Allowed names
Python allows almost unlimited freedom in naming any object available in the language. There are, however, a few minor restrictions.<br>
First, every name must begin with a letter or an underscore _ (so digits and special characters are out).<br>
Second, there is a group of names tied to the construction of the language itself; as such, they are reserved. Their list is shown below.
```
import keyword
import builtins
print(dir(builtins))
print(keyword.kwlist)
```
An extension of the second point is avoiding names that may be elements of modules we use. For example, if we use the **math** library, it is worth avoiding names typical for mathematical operations, e.g. sin, cos or tan. Otherwise a conflict may occur and the program may behave incorrectly.
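A small illustration of such a conflict (a sketch, not from the workshop materials): rebinding the name `sin` hides the imported function, so a later call fails.

```python
from math import sin

sin = 5  # the name now points to an int; math's sin is no longer reachable as `sin`
try:
    sin(1.0)
except TypeError as err:
    print("Conflict:", err)  # an int is not callable
```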
<a href='#top' style='float: right; font-size: 13px;'>Back to top</a>
## Comments
In several places in this workshop, fragments of text appeared that were set apart from the rest of the program and triggered no action from Python.
These are comments: fragments of code meant to be ignored by Python, serving solely to document the contents of the program and to tell other people (e.g. a fellow programmer) what a given piece of code is for.
Python provides two basic ways of writing comments.<br>
Single-line comments are created with the <b>#</b> character, which tells Python to ignore everything in that line of code after the character.
```
zmienna = 'this part of the code is executed' #while everything after # is ignored (Jupyter even changes the colour of such code)
```
The second option is multi-line comments, created by enclosing the comment inside triple quotes.
```
zmienna = 'assigning some information'
'''Here goes all the information
you would like to pass on to future generations of programmers
who dig up this code after the great nuclear apocalypse'''
print(zmienna)
```
At the very beginning it is hard to see the necessity of commenting your code, but it should become clearer once we discuss functions and objects and start writing longer, more complicated programs.<br>
It also happens very often that, returning to our own program after some time, we need quite a while to recall its details. Good documentation saves a great deal of time and frustration while programming.
<b>Fun fact:</b> such extended comments can be used for multi-line printing to the console.
```
print('''Here goes all the information
you would like to pass on to future generations of programmers
who dig up this code after the great nuclear apocalypse''')
```
<a href='#top' style='float: right; font-size: 13px;'>Back to top</a>
## The <b>import</b> statement
At the very beginning of this workshop we imported a few libraries, so to finish let's discuss why this mechanism exists in Python.<br>
Like many (if not all) modern programming languages, Python's code can be divided into a <b>core</b> and <b>additional elements</b> (modules, libraries). When "bare" Python starts, the basic functions and objects of the language are loaded and can be used by the programmer right away.<br>
The print function, arithmetic operations, the basic variable types: these are just a few of the many core elements of Python whose use requires no additional code. The strength of this language is that larger and more complicated structures can be built from these basic elements.<br>
To make creating, using, managing and sharing them easier, modularity was introduced into Python, which allows you to create your own libraries that enrich the available repertoire of functions, objects and ways of transforming data.
The Anaconda installer, besides installing Python and Jupyter (the interactive shell), also placed on the computer a number of additional libraries and modules chosen for their usefulness to scientists.<br>
We will now look at various ways of importing libraries and using the functions they contain.
### Importing a whole library
The simplest way is simply to include the whole library with the command:
```python
import <library name>
```
```
import numpy
print(numpy.sqrt(16))
print(numpy.square(7))
```
We can then use all the functions and objects contained in such a module, but we must prefix their names with the name of the module they come from. In our case, we add the name 'numpy' before the square-root and squaring functions.
Often, however, modules will have long or unmemorable names. We can replace them with a name of our own. In the case of numpy, it is customary to shorten the module name to 'np'.
```
import numpy as np
print(np.sqrt(16))
print(np.square(7))
```
### Importing selected elements
It is also possible to load individual functions or classes from a whole module. This is especially useful when we intend to use only one or a few elements: it shortens the library's loading time and limits the number of elements in Python's workspace.
```
from numpy import sqrt, square
print(sqrt(16))
print(square(7))
```
A second benefit of this approach is being able to use functions without prefixing them with the module name. Stay alert, though, because a name conflict may occur (e.g. we previously created our own function with the same name), which would make the program behave incorrectly.<br>
For this reason, loading entire modules this way is not recommended (although it is of course possible, using the syntax below).
```
from numpy import *
print(sqrt(16))
```
We are not limited to the names chosen by a module's authors, either. Even when loading a single function, we can give it a name of our own.
```
from numpy import sqrt as pierwiastek
print(pierwiastek(4))
```
<a href='#top' style='float: right; font-size: 13px;'>Back to top</a>
### Exercises:
#### Assign the expressions below to variables you name yourself:
<ul style="list-style-type: square;">
<li><span>The number of weeks in a year if we democratically voted Monday out as a day of the week.</span></li>
<li><span>The result of dividing your year of birth by the difference between your day and month of birth.</span></li>
<li><span>The number of marathon runners needed to cover the distance from the Earth to the Sun (let's assume 150 million km).</span></li>
<li><span>The result of $2^{3^4}$</span></li>
<li><span>An equation that evaluates to exactly 117, using only the digits 2, 6 and 9.</span></li>
</ul>
#### Using a search engine, find the function in the numpy module that computes the base-10 logarithm of a given number. Import it as 'logartym10' and use it on the variables from the previous exercise.
# A Working Example
This example shows:
1) how to run the program for some video
2) how to set some important parameters for the detection, fixing and post processing
3) how to show the results
## The detection part
The detection part starts by loading a pretrained YOLOv4 network. The training should already be done before running this tracking program.
The library that loads the YOLO structure is part of the project (https://github.com/Tianxiaomo/pytorch-YOLOv4).
Using another detection network is possible; you just have to wrap it in the class `YoloDetector` and edit its methods accordingly.
To load all the libraries shown below, you should be at the root of the project folder. If you are running this in the `docs` directory, then you can type
```
%cd ../..
from detection import YoloDetector
```
This class takes several input parameters: the YOLO configuration file, the model file (in PyTorch format) and, lastly, a flag indicating whether to use the GPU or the CPU.
```
detector = YoloDetector('yolov4-obj.cfg','Yolov4_epoch300.pth',use_cuda=False)
```
Detection is only performed every *N*th frame, where *N* is set in the `config.py` file via the variable `detect_every_N`. Bigger values are better but slower to run.
Other parameters worth mentioning are:
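The frame-skipping logic can be sketched as follows (illustrative only; `should_detect` is not part of the project's API, and the value of `detect_every_N` is assumed):

```python
detect_every_N = 5  # assumed value; in the project it comes from config.py

def should_detect(frame_idx, n=detect_every_N):
    """Run YOLO detection only on every N-th frame; track in between."""
    return frame_idx % n == 0

frames_with_detection = [i for i in range(12) if should_detect(i)]
print(frames_with_detection)  # → [0, 5, 10]
```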
```python
### Detection parameters
detect_thresh = 0.35 # Yolo detection threshold
# distance to the nearest match between detection and tracking
# output, in pixels
dist_thresh = 35
```
The `detect_thresh` value puts a threshold on the YOLO probability output for each detected class. The lower this value, the more detections we get, but with less accuracy.
The `dist_thresh` value is for matching each detection with an already-tracked object; it shouldn't be too big.
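Conceptually, the matching looks like this (a hedged sketch; `nearest_match` and the point-based representation are illustrative, not the project's actual code):

```python
import math

dist_thresh = 35  # pixels, mirroring the config value above

def nearest_match(detection, tracked, thresh=dist_thresh):
    """Return the index of the closest tracked point within thresh, else None."""
    best_i, best_d = None, float("inf")
    for i, (tx, ty) in enumerate(tracked):
        d = math.hypot(detection[0] - tx, detection[1] - ty)
        if d < best_d:
            best_i, best_d = i, d
    return best_i if best_d <= thresh else None

tracked = [(100, 100), (300, 250)]
print(nearest_match((110, 95), tracked))   # → 0 (distance ≈ 11.2 px)
print(nearest_match((500, 500), tracked))  # → None (no object within 35 px)
```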
## How to run
To run the full program, run the `main.py` script from the base directory of the project, with the `-v` flag set to the path of the video.
Try to run the following command to see the result on the sample video:
`python main.py -v docs\sample.mp4`
## Fixing and smoothing
In the `config.py` file, the following are the parameters for the view-fixing part. The first boolean sets whether to do the fixing at all, mainly because it is slow.
```python
### fix view parameters
do_fix = False
fixing_dilation = 13
min_matches = 15
```
The part for doing the smoothing is also configured in the same file, as follows. The first boolean turns smoothing on or off; the other two relate to the Savitzky-Golay filter from the `scipy` library.
```python
### Smoothing for post processing
do_smooth = True
window_size = 7
polydegree = 3
```
Another step (besides smoothing) is included in the `postprocess.py` file, namely the orientation calculation. It is computed from the trajectory itself, where the next point helps determine the current point's heading.
## How to show result
After running the program on your video, a txt file will be saved in the **outputs** directory with the same name as the provided video.
*Note:* if your videos have similar names (for example numbers), you should copy the last result to another folder before starting to track a new video, because any file with the same name will be overwritten.
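One hedged way to avoid losing results is to archive each output file before starting the next run (a sketch; the helper name and the `.bak` naming scheme are illustrative, not part of the project):

```python
import os
import shutil
import time

def archive_result(path):
    """Move an existing result file to a timestamped .bak copy; return the new path."""
    if not os.path.exists(path):
        return None
    stamped = f"{path}.{int(time.time())}.bak"
    shutil.move(path, stamped)
    return stamped
```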
If you want to show the result with angles and smoothing for the sample video above, you can run the command:
`python show_results.py -v docs\sample.mp4`
The path after the -v flag is the same one from which the video was processed.
<a href="https://colab.research.google.com/github/https-deeplearning-ai/tensorflow-3-public/blob/main/Course%201%20-%20Custom%20Models%2C%20Layers%20and%20Loss%20Functions/Week%205%20-%20Callbacks/C1_W5_Lab_1_exploring-callbacks.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a>
# Ungraded Lab: Introduction to Keras callbacks
In Keras, `Callback` is a Python class meant to be subclassed to provide specific functionality, with a set of methods called at various stages of training (including batch/epoch starts and ends), testing, and predicting. Callbacks are useful for getting a view of the model's internal states and statistics during training. Keras ships with built-in [callbacks](https://keras.io/api/callbacks/), and we'll show how to use them in the following sections. Please click the **Open in Colab** badge above to complete this exercise in Colab; this lets you take advantage of the free GPU runtime (for faster training) and compatibility with all the packages needed in this notebook.
## Model methods that take callbacks
Users can supply a list of callbacks to the following `tf.keras.Model` methods:
* [`fit()`](https://www.tensorflow.org/versions/r2.0/api_docs/python/tf/keras/Model#fit), [`fit_generator()`](https://www.tensorflow.org/versions/r2.0/api_docs/python/tf/keras/Model#fit_generator)
Trains the model for a fixed number of epochs (iterations over a dataset, or data yielded batch-by-batch by a Python generator).
* [`evaluate()`](https://www.tensorflow.org/versions/r2.0/api_docs/python/tf/keras/Model#evaluate), [`evaluate_generator()`](https://www.tensorflow.org/versions/r2.0/api_docs/python/tf/keras/Model#evaluate_generator)
Evaluates the model for given data or data generator. Outputs the loss and metric values from the evaluation.
* [`predict()`](https://www.tensorflow.org/versions/r2.0/api_docs/python/tf/keras/Model#predict), [`predict_generator()`](https://www.tensorflow.org/versions/r2.0/api_docs/python/tf/keras/Model#predict_generator)
Generates output predictions for the input data or data generator.
## Imports
```
from __future__ import absolute_import, division, print_function, unicode_literals
try:
# %tensorflow_version only exists in Colab.
%tensorflow_version 2.x
except Exception:
pass
import tensorflow as tf
import tensorflow_datasets as tfds
import matplotlib.pyplot as plt
import io
from PIL import Image
from tensorflow.keras.callbacks import TensorBoard, EarlyStopping, LearningRateScheduler, ModelCheckpoint, CSVLogger, ReduceLROnPlateau
%load_ext tensorboard
import os
import matplotlib.pylab as plt
import numpy as np
import math
import datetime
import pandas as pd
print("Version: ", tf.__version__)
tf.get_logger().setLevel('INFO')
```
# Examples of Keras callback applications
The following section will guide you through creating simple [Callback](https://keras.io/api/callbacks/) applications.
```
# Download and prepare the horses or humans dataset
splits, info = tfds.load('horses_or_humans', as_supervised=True, with_info=True, split=['train[:80%]', 'train[80%:]', 'test'])
(train_examples, validation_examples, test_examples) = splits
num_examples = info.splits['train'].num_examples
num_classes = info.features['label'].num_classes
SIZE = 150 #@param {type:"slider", min:64, max:300, step:1}
IMAGE_SIZE = (SIZE, SIZE)
def format_image(image, label):
image = tf.image.resize(image, IMAGE_SIZE) / 255.0
return image, label
BATCH_SIZE = 32 #@param {type:"integer"}
train_batches = train_examples.shuffle(num_examples // 4).map(format_image).batch(BATCH_SIZE).prefetch(1)
validation_batches = validation_examples.map(format_image).batch(BATCH_SIZE).prefetch(1)
test_batches = test_examples.map(format_image).batch(1)
for image_batch, label_batch in train_batches.take(1):
pass
image_batch.shape
def build_model(dense_units, input_shape=IMAGE_SIZE + (3,)):
model = tf.keras.models.Sequential([
tf.keras.layers.Conv2D(16, (3, 3), activation='relu', input_shape=input_shape),
tf.keras.layers.MaxPooling2D(2, 2),
tf.keras.layers.Conv2D(32, (3, 3), activation='relu'),
tf.keras.layers.MaxPooling2D(2, 2),
tf.keras.layers.Conv2D(64, (3, 3), activation='relu'),
tf.keras.layers.MaxPooling2D(2, 2),
tf.keras.layers.Flatten(),
tf.keras.layers.Dense(dense_units, activation='relu'),
tf.keras.layers.Dense(2, activation='softmax')
])
return model
```
## [TensorBoard](https://keras.io/api/callbacks/tensorboard/)
Enable visualizations for TensorBoard.
```
!rm -rf logs
model = build_model(dense_units=256)
model.compile(
optimizer='sgd',
loss='sparse_categorical_crossentropy',
metrics=['accuracy'])
logdir = os.path.join("logs", datetime.datetime.now().strftime("%Y%m%d-%H%M%S"))
tensorboard_callback = tf.keras.callbacks.TensorBoard(logdir)
model.fit(train_batches,
epochs=10,
validation_data=validation_batches,
callbacks=[tensorboard_callback])
%tensorboard --logdir logs
```
## [Model Checkpoint](https://keras.io/api/callbacks/model_checkpoint/)
Callback to save the Keras model or model weights at some frequency.
```
model = build_model(dense_units=256)
model.compile(
optimizer='sgd',
loss='sparse_categorical_crossentropy',
metrics=['accuracy'])
model.fit(train_batches,
epochs=5,
validation_data=validation_batches,
verbose=2,
callbacks=[ModelCheckpoint('weights.{epoch:02d}-{val_loss:.2f}.h5', verbose=1),
])
model = build_model(dense_units=256)
model.compile(
optimizer='sgd',
loss='sparse_categorical_crossentropy',
metrics=['accuracy'])
model.fit(train_batches,
epochs=1,
validation_data=validation_batches,
verbose=2,
callbacks=[ModelCheckpoint('saved_model', verbose=1)
])
model = build_model(dense_units=256)
model.compile(
optimizer='sgd',
loss='sparse_categorical_crossentropy',
metrics=['accuracy'])
model.fit(train_batches,
epochs=2,
validation_data=validation_batches,
verbose=2,
callbacks=[ModelCheckpoint('model.h5', verbose=1)
])
```
## [Early stopping](https://keras.io/api/callbacks/early_stopping/)
Stop training when a monitored metric has stopped improving.
```
model = build_model(dense_units=256)
model.compile(
optimizer='sgd',
loss='sparse_categorical_crossentropy',
metrics=['accuracy'])
model.fit(train_batches,
epochs=50,
validation_data=validation_batches,
verbose=2,
callbacks=[EarlyStopping(
patience=3,
min_delta=0.05,
baseline=0.8,
mode='min',
monitor='val_loss',
restore_best_weights=True,
verbose=1)
])
```
## [CSV Logger](https://keras.io/api/callbacks/csv_logger/)
Callback that streams epoch results to a CSV file.
```
model = build_model(dense_units=256)
model.compile(
optimizer='sgd',
loss='sparse_categorical_crossentropy',
metrics=['accuracy'])
csv_file = 'training.csv'
model.fit(train_batches,
epochs=5,
validation_data=validation_batches,
callbacks=[CSVLogger(csv_file)
])
pd.read_csv(csv_file).head()
```
## [Learning Rate Scheduler](https://keras.io/api/callbacks/learning_rate_scheduler/)
Updates the learning rate during training.
```
model = build_model(dense_units=256)
model.compile(
optimizer='sgd',
loss='sparse_categorical_crossentropy',
metrics=['accuracy'])
def step_decay(epoch):
initial_lr = 0.01
drop = 0.5
epochs_drop = 1
lr = initial_lr * math.pow(drop, math.floor((1+epoch)/epochs_drop))
return lr
model.fit(train_batches,
epochs=5,
validation_data=validation_batches,
callbacks=[LearningRateScheduler(step_decay, verbose=1),
TensorBoard(log_dir='./log_dir')])
%tensorboard --logdir log_dir
```
## [ReduceLROnPlateau](https://keras.io/api/callbacks/reduce_lr_on_plateau/)
Reduce learning rate when a metric has stopped improving.
```
model = build_model(dense_units=256)
model.compile(
optimizer='sgd',
loss='sparse_categorical_crossentropy',
metrics=['accuracy'])
model.fit(train_batches,
epochs=50,
validation_data=validation_batches,
callbacks=[ReduceLROnPlateau(monitor='val_loss',
factor=0.2, verbose=1,
patience=1, min_lr=0.001),
TensorBoard(log_dir='./log_dir')])
%tensorboard --logdir log_dir
```
# Evaluating the response time to traffic incidents
We worked with the dataset "" containing the Mexico City traffic-incident reports made to the C4 between 2016 and 2021. This dataset was preprocessed during the previous module.
```
# Import libraries
import pandas as pd
import matplotlib.pyplot as plt
import seaborn as sns
import numpy as np
import datetime as dt
from scipy import stats as sts
df = pd.read_csv('data/incidentes-viales-c5-limpio.csv', sep="$", index_col=0)
df.head()
df.shape
diccionario_de_conversion = {
'codigo_cierre': 'category',
'fecha_hora_creacion': 'datetime64[ns]',
'fecha_hora_cierre': 'datetime64[ns]',
'delegacion_inicio': 'category',
'incidente_c4': 'category',
'clas_con_f_alarma': 'category',
'tipo_entrada': 'category',
'delegacion_cierre':'category',
'mes':'category'
}
df = df.astype(diccionario_de_conversion)
df.describe()
```
# 1. Response time
The column "tiempo_atencion" was created; it contains the time in hours (decimal) elapsed between the creation of a report and its closure.
```
df['tiempo_atencion'] = (df['fecha_hora_cierre'] - df['fecha_hora_creacion']) / dt.timedelta(hours=1)
df.head()
```
## 1.1. Cleaning
Negative response-time values were found. This is clearly an error, since it would mean an incident report was closed before it was created.
```
df_backup = df.copy()
print(f"Hay {df[df['tiempo_atencion'] < 0].shape[0]} datos con tiempo de atención negativo")
wrong_data = df.copy()
wrong_data = wrong_data[wrong_data['tiempo_atencion']<0]
wrong_data[['fecha_hora_cierre', 'fecha_hora_creacion', 'tiempo_atencion']]
```
These values appear because, for certain values of the "fecha_hora_cierre" and "fecha_hora_creacion" columns, the "YYYY-MM-DD" format is not respected, causing ambiguity in the reported dates. For example, the report with index 1558
claims to have been opened on June 30, 2017, but its closing date is January 7, 2017 (an earlier date); however, swapping the month and day of the closing date gives July 1, 2017, which makes more sense.
To resolve this, the badly written dates in the "fecha_hora_creacion" column are corrected:
```
# Condition 1: day and month of creation swapped in fecha_hora_creacion
c1 = (wrong_data.fecha_hora_cierre.dt.day >= 12) & (wrong_data.fecha_hora_cierre.dt.month == wrong_data.fecha_hora_creacion.dt.day)
wd_c1 = wrong_data[c1]
wd_c1[['fecha_hora_cierre', 'fecha_hora_creacion', 'tiempo_atencion']]
# 982 of 1922 rows
wd_c1['fecha_hora_creacion'] = wd_c1['fecha_hora_creacion'].apply(lambda x: dt.datetime.strftime(x, '%Y-%d-%m %H:%M:%S'))
wd_c1['fecha_hora_creacion'] = pd.to_datetime(wd_c1['fecha_hora_creacion'])
wd_c1['tiempo_atencion'] = (wd_c1['fecha_hora_cierre'] - wd_c1['fecha_hora_creacion']) / dt.timedelta(hours=1)
df.loc[wd_c1.index] = wd_c1
df.loc[wd_c1.index][['fecha_hora_cierre', 'fecha_hora_creacion', 'tiempo_atencion']]
```
Next, the badly written closing dates for reports opened in one month and closed in the next:
```
# Correct 'fecha_hora_cierre' at the upper boundary of the month
c3 = (wrong_data.fecha_hora_creacion.dt.day >= 12) & (wrong_data.fecha_hora_creacion.dt.month == wrong_data.fecha_hora_cierre.dt.day-1)
wd_c3 = wrong_data[c3]
wd_c3[['fecha_hora_cierre', 'fecha_hora_creacion', 'tiempo_atencion']]
# 933 of 1922
wd_c3['fecha_hora_cierre'] = wd_c3['fecha_hora_cierre'].apply(lambda x: dt.datetime.strftime(x, '%Y-%d-%m %H:%M:%S'))
wd_c3['fecha_hora_cierre'] = pd.to_datetime(wd_c3['fecha_hora_cierre'])
wd_c3['tiempo_atencion'] = (wd_c3['fecha_hora_cierre'] - wd_c3['fecha_hora_creacion']) / dt.timedelta(hours=1)
df.loc[wd_c3.index] = wd_c3
df.loc[wd_c3.index][['fecha_hora_cierre', 'fecha_hora_creacion', 'tiempo_atencion']]
```
Next, the badly written creation dates for reports opened in one month and closed in the next:
```
c4 = (wrong_data.fecha_hora_cierre.dt.day >= 12) & (wrong_data.fecha_hora_cierre.dt.month == wrong_data.fecha_hora_creacion.dt.day+1)
wd_c4 = wrong_data[c4]
wd_c4[['fecha_hora_cierre', 'fecha_hora_creacion', 'tiempo_atencion']]
wd_c4['fecha_hora_creacion'] = wd_c4['fecha_hora_creacion'].apply(lambda x: dt.datetime.strftime(x, '%Y-%d-%m %H:%M:%S'))
wd_c4['fecha_hora_creacion'] = pd.to_datetime(wd_c4['fecha_hora_creacion'])
wd_c4['tiempo_atencion'] = (wd_c4['fecha_hora_cierre'] - wd_c4['fecha_hora_creacion']) / dt.timedelta(hours=1)
df.loc[wd_c4.index] = wd_c4
df.loc[wd_c4.index][['fecha_hora_cierre', 'fecha_hora_creacion', 'tiempo_atencion']]
```
The remaining negative values are ambiguous: both dates could be written correctly, so we cannot determine with certainty which one to fix. These rows are therefore discarded.
```
df[df['tiempo_atencion'] < 0][['fecha_hora_cierre', 'fecha_hora_creacion', 'tiempo_atencion']]
df_clean = df[df['tiempo_atencion'] > 0]
df_clean_backup = df_clean.copy()
df_clean.head()
```
## 1.2 Estimates of location and variability
```
tiempo_atencion_mean = df_clean.tiempo_atencion.mean()
tiempo_atencion_std = df_clean.tiempo_atencion.std()
tiempo_atencion_median = df_clean.tiempo_atencion.median()
tiempo_atencion_trimmed_mean = sts.trim_mean(df_clean['tiempo_atencion'], 0.01)
print(f'The mean response time is {round(tiempo_atencion_mean,2)} hours')
print(f'The standard deviation of the response time is {round(tiempo_atencion_std,2)} hours')
print(f'The median response time is {round(tiempo_atencion_median,2)} hours')
print(f'The 1% trimmed mean of the response time is {round(tiempo_atencion_trimmed_mean,2)} hours')
```
Since the mean and median are very similar, there are probably no outliers. This is reinforced by the fact that the 1% trimmed mean barely changes compared to the mean. Nevertheless, we will check the order statistics to confirm.
## 1.3 Order statistics
```
tiempo_atencion_min = df_clean['tiempo_atencion'].min()
tiempo_atencion_percentile_10 = df_clean['tiempo_atencion'].quantile(0.1)
tiempo_atencion_percentile_25 = df_clean['tiempo_atencion'].quantile(0.25)
tiempo_atencion_percentile_50 = df_clean['tiempo_atencion'].quantile(0.5)
tiempo_atencion_percentile_75 = df_clean['tiempo_atencion'].quantile(0.75)
tiempo_atencion_percentile_90 = df_clean['tiempo_atencion'].quantile(0.9)
tiempo_atencion_max = df_clean['tiempo_atencion'].max()
print(f'The minimum response time is {round(tiempo_atencion_min,2)} hours')
print(f'The 10th percentile of the response time is {round(tiempo_atencion_percentile_10,2)} hours')
print(f'The 25th percentile of the response time is {round(tiempo_atencion_percentile_25,2)} hours')
print(f'The 50th percentile of the response time is {round(tiempo_atencion_percentile_50,2)} hours')
print(f'The 75th percentile of the response time is {round(tiempo_atencion_percentile_75,2)} hours')
print(f'The 90th percentile of the response time is {round(tiempo_atencion_percentile_90,2)} hours')
print(f'The maximum response time is {round(tiempo_atencion_max,2)} hours')
```
The large difference between the maximum response time and the 90th percentile shows that there are indeed outliers to be found.
```
tiempo_atencion_range = tiempo_atencion_max - tiempo_atencion_min
tiempo_atencion_iq_range = tiempo_atencion_percentile_75 - tiempo_atencion_percentile_25
print(f'The range of the response time is {round(tiempo_atencion_range,2)} hours')
print(f'The interquartile range of the response time is {round(tiempo_atencion_iq_range,2)} hours')
df_clean.describe()
sns.set(style="ticks")
ta = df_clean['tiempo_atencion']
fig, axs = plt.subplots(1, 2, sharey=False, sharex=False, figsize=(12,4))
fig.suptitle('Distribución del tiempo de atención a incidentes vehiculares en CDMX')
# Boxplot
sns.boxplot(x=ta, ax=axs[0]);
axs[0].set(xlabel='Tiempo atención (Horas)')
# Density plot
sns.distplot(ta, hist=False, ax=axs[1]);
axs[1].set(xlabel='Tiempo atención (Horas)', ylabel='Densidad')
```
## 1.4 Filtering outliers
The data were filtered using the interquartile range as the criterion for identifying outliers.
```
ll = ta > tiempo_atencion_percentile_25 - (tiempo_atencion_iq_range * 1.5)
ul = ta < tiempo_atencion_percentile_75 + (tiempo_atencion_iq_range * 1.5)
df_filtered = df_clean[ll & ul]
ta_filtered = df_filtered['tiempo_atencion']
# Percentage of rows removed as outliers
(1 - (ta_filtered.shape[0]/ta.shape[0]))*100
fig, axs = plt.subplots(1, 2, sharey=False, sharex=False, figsize=(12,4))
fig.suptitle('Distribución del tiempo de atención a incidentes vehiculares en CDMX')
# Boxplot
sns.boxplot(x=ta_filtered, ax=axs[0]);
axs[0].set(xlabel='Tiempo (horas)');
# Density plot
sns.distplot(ta_filtered, kde=False, norm_hist=False, bins=50, ax=axs[1]);
axs[1].set(xlabel='Tiempo (horas)');
axs[1].set(ylabel='Frecuencia');
axs[1].axvline(ta_filtered.median(), color='r')
plt.savefig('img/ta_distribucion.png')
df_filtered.tiempo_atencion.median()
df_filtered.describe()
```
This new distribution is more concentrated, considering only the response times that represent the bulk of the data.
# 2. Exploratory analysis of the response time
```
from scipy.stats import skew, kurtosis
print(f'Kurtosis: {kurtosis(ta_filtered)}')
print(f'Skewness: {skew(ta_filtered)}')
```
As seen in the density plot, the response time to traffic incidents follows an asymmetric distribution with its mass concentrated on the left (a long right tail) and a skewness coefficient above 1.2. The kurtosis turned out to be above 0.84, which implies more dispersion in the data.
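As a sanity check on the sign convention (synthetic numbers, not the incident data): a distribution whose mass sits on the left, with a long right tail, yields a positive skewness coefficient.

```python
def skewness(xs):
    """Population skewness: third central moment over the std deviation cubed."""
    n = len(xs)
    m = sum(xs) / n
    s2 = sum((x - m) ** 2 for x in xs) / n
    s3 = sum((x - m) ** 3 for x in xs) / n
    return s3 / s2 ** 1.5

right_tailed = [1, 1, 1, 2, 2, 3, 10]  # most values small, one far to the right
print(skewness(right_tailed) > 0)  # → True
```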
## 2.3. Distribution by category
### Borough
An exploratory analysis of the response-time distribution by borough was carried out, to see whether there are boroughs where reported incidents are closed more promptly.
```
by_delegacion = df_filtered.groupby('folio')[['tiempo_atencion', 'latitud', 'longitud']].last()
delegacion = df.groupby('folio')['delegacion_inicio'].last()
merged = by_delegacion.merge(delegacion, left_index=True, right_index=True)
by_delegacion.head()
plt.figure(figsize=(15, 5))
sns.boxplot(data=merged, x='delegacion_inicio', y='tiempo_atencion');
plt.suptitle('Distribución de tiempo de atención por delegación');
plt.xticks(rotation=45);
plt.xlabel('Delegación');
plt.ylabel('Tiempo de atención (Horas)');
plt.axhline(ta_filtered.median(), color='#000000');
plt.savefig('img/ta_distribucion_delegacion.png')
```
The medians of the response time by borough were observed to oscillate around the overall sample median and not to differ much from one another.
### Incident class
The C4 classifies reports into 4 classes depending on the severity of the situation:
* Emergencies
* Medical urgencies
* Crimes
* False alarms
The exploratory analysis by class was performed to see whether this degree of urgency directly influences the response time.
```
by_clase = df_filtered.groupby('folio')[['tiempo_atencion', 'latitud', 'longitud']].last()
clase = df.groupby('folio')['clas_con_f_alarma'].last()
merged = by_clase.merge(clase, left_index=True, right_index=True)
plt.figure(figsize=(8, 4))
sns.boxplot(data=merged, x='clas_con_f_alarma', y='tiempo_atencion');
plt.suptitle('Distribución de tiempo de atención por tipo de incidente');
plt.xlabel('Clase de incidente');
plt.ylabel('Tiempo de atención');
plt.axhline(ta_filtered.median(), color='#000000');
plt.savefig('img/ta_distribucion_clase.png')
```
The response times for emergencies, urgencies and crimes were observed to be quite close to the overall sample median. False alarms, however, are closed much more promptly. This makes sense, since false alarms are normally detected even before units are dispatched to the site, or are quickly identified once there.
### Incident characteristics
The incident characteristics classify events by particular codes that identify the magnitude of the accident and the condition of those involved.
```
by_incidente = df_filtered.groupby('folio')[['tiempo_atencion', 'latitud', 'longitud']].last()
clase = df.groupby('folio')['incidente_c4'].last()
merged = by_clase.merge(clase, left_index=True, right_index=True)
plt.figure(figsize=(15, 4))
sns.boxplot(data=merged, x='incidente_c4', y='tiempo_atencion');
plt.suptitle('Distribución de tiempo de atención por tipo de accidente');
plt.xticks(rotation=90);
plt.xlabel('Accidente');
plt.ylabel('Tiempo de atención');
plt.axhline(ta_filtered.median(), color='#000000');
plt.savefig('img/ta_distribucion_tipo.png')
```
Greater variability in response time was found across the different incident types, along with large dispersion within them. Most notably, incidents involving fatalities (from running over a pedestrian or collisions between vehicles) tend to take much longer to resolve than the overall median, while incidents involving injured people tend to show greater variability and more outliers.
### Report channel
The C4 currently uses 8 input channels through which civilians or traffic authorities can file incident reports:
* Radio
* Panic button
* Camera
* 911 call
* Social media
* 911 app
* Zello app
* Other applications
```
by_entrada = df_filtered.groupby('folio')[['tiempo_atencion', 'latitud', 'longitud']].last()
entrada = df.groupby('folio')['tipo_entrada'].last()
merged = by_entrada.merge(entrada, left_index=True, right_index=True)
plt.figure(figsize=(12, 4))
sns.boxplot(data=merged, x='tipo_entrada', y='tiempo_atencion');
plt.suptitle('Distribución de tiempo de atención por entrada de reporte');
plt.xlabel('Entrada de reporte');
plt.ylabel('Tiempo de atención');
plt.axhline(ta_filtered.median(), color='#000000');
plt.savefig('img/ta_distribucion_entrada.png')
```
# 3. Questions
This exploratory analysis raised several interesting questions that could be answered:
* **Does response time really differ across boroughs, incident types, or reporting channels?**
* **Does the use of mobile apps have an impact on report response time?**
* **If it does, is app-based reporting more efficient than traditional methods such as a phone call?**
* **Is there a relationship among the factors above that influences response time?**
Answering these questions would justify building a model to predict the response time of a given incident, or the probability that a given call is a false alarm. Such models could be useful for evaluating how effectively the responsible authorities resolve these incidents, as well as the impact of new measures adopted to reduce report response times.
To address these questions properly, a second cleaning pass was applied: the dataset was reworked so that it makes no distinction between the app types used (Zello, APP911, Aplicativos).
```
df_filtered_bckup = df_filtered.copy()
df_filtered.tipo_entrada = df_filtered.tipo_entrada.replace({'ZELLO': 'APLICATIVOS', 'LLAMADA APP911': 'APLICATIVOS'})
```
# 4. Cross-tabulated visualization
### Response time by borough and by class
```
by_delegacion_clase = df_filtered.groupby(['delegacion_inicio','clas_con_f_alarma'])[['tiempo_atencion']].median()
by_delegacion_clase = by_delegacion_clase.unstack(1)
fig = plt.figure(figsize=(10,5))
ax = fig.add_subplot()
plt1 = ax.bar(by_delegacion_clase.index, by_delegacion_clase['tiempo_atencion']['DELITO'],
#yerr= tiempo_atencion_iq_range,
label='DELITO')
#color=["#7788AA","#4E638E","#2E4372","#152A55"])
plt2 = ax.bar(by_delegacion_clase.index, by_delegacion_clase['tiempo_atencion']['URGENCIAS MEDICAS'],
bottom=by_delegacion_clase['tiempo_atencion']['DELITO']
)
#color=["#7788AA","#4E638E","#2E4372","#152A55"])
plt3 = ax.bar(by_delegacion_clase.index, by_delegacion_clase['tiempo_atencion']['EMERGENCIA'],
bottom=by_delegacion_clase['tiempo_atencion']['DELITO']+by_delegacion_clase['tiempo_atencion']['URGENCIAS MEDICAS'],
label='EMERGENCIA')
#color=["#FFD0AA", "#D4996A", "#AA6B39", "#804415"])
plt4 = ax.bar(by_delegacion_clase.index, by_delegacion_clase['tiempo_atencion']['FALSA ALARMA'],
bottom=by_delegacion_clase['tiempo_atencion']['EMERGENCIA']+by_delegacion_clase['tiempo_atencion']['DELITO']+by_delegacion_clase['tiempo_atencion']['URGENCIAS MEDICAS'])
#color=["#7788AA","#4E638E","#2E4372","#152A55"])
ax.set_ylabel('Tiempo de atencion', fontsize=15);
ax.set_xlabel('');
plt.xticks(rotation=70);
plt.legend((plt1[0], plt2[0], plt3[0], plt4[0]), ('Delito', 'Urgencias médicas', 'Emergencia', 'Falsa alarma'))
ax.set_title('Tiempo de atencion por delegacion', fontsize=20, pad=10);
plt.savefig('img/ta_delegacion_clase.png')
```
### Response time by borough and entry channel
```
by_delegacion_entrada = df_filtered.groupby(['delegacion_inicio','tipo_entrada'])[['tiempo_atencion']].median()
by_delegacion_entrada = by_delegacion_entrada.unstack(1)
fig = plt.figure(figsize=(10,5))
ax = fig.add_subplot()
plt1 = ax.bar(by_delegacion_entrada.index, by_delegacion_entrada['tiempo_atencion']['CÁMARA'],
label='CÁMARA')
plt2 = ax.bar(by_delegacion_entrada.index, by_delegacion_entrada['tiempo_atencion']['RADIO'],
bottom=by_delegacion_entrada['tiempo_atencion']['CÁMARA']
)
plt3 = ax.bar(by_delegacion_entrada.index, by_delegacion_entrada['tiempo_atencion']['BOTÓN DE AUXILIO'],
bottom=by_delegacion_entrada['tiempo_atencion']['CÁMARA']+by_delegacion_entrada['tiempo_atencion']['RADIO']
)
plt4 = ax.bar(by_delegacion_entrada.index, by_delegacion_entrada['tiempo_atencion']['LLAMADA DEL 911'],
bottom=by_delegacion_entrada['tiempo_atencion']['CÁMARA']+by_delegacion_entrada['tiempo_atencion']['BOTÓN DE AUXILIO']+by_delegacion_entrada['tiempo_atencion']['RADIO']
)
plt5 = ax.bar(by_delegacion_entrada.index, by_delegacion_entrada['tiempo_atencion']['REDES'],
bottom=by_delegacion_entrada['tiempo_atencion']['CÁMARA']+by_delegacion_entrada['tiempo_atencion']['BOTÓN DE AUXILIO']+by_delegacion_entrada['tiempo_atencion']['LLAMADA DEL 911']+by_delegacion_entrada['tiempo_atencion']['RADIO']
)
plt6 = ax.bar(by_delegacion_entrada.index, by_delegacion_entrada['tiempo_atencion']['APLICATIVOS'],
bottom=by_delegacion_entrada['tiempo_atencion']['CÁMARA']+by_delegacion_entrada['tiempo_atencion']['BOTÓN DE AUXILIO']+by_delegacion_entrada['tiempo_atencion']['LLAMADA DEL 911']+by_delegacion_entrada['tiempo_atencion']['RADIO']+by_delegacion_entrada['tiempo_atencion']['REDES']
)
ax.set_ylabel('Tiempo de atencion', fontsize=15);
ax.set_ylim(0, 11)
ax.set_xlabel('');
plt.xticks(rotation=70);
plt.legend((plt1[0], plt2[0], plt3[0], plt4[0], plt5[0], plt6[0]), ('Cámara', 'Radio', 'Botón auxilio', 'Llamada 911', 'Redes', 'Aplicaciones'))
ax.set_title('Tiempo de atencion por delegacion', fontsize=20, pad=10);
plt.savefig('img/ta_delegacion_entrada.png')
```
# 5. A/B test on the use of mobile apps to report traffic incidents
Reporting through apps is a measure implemented only as recently as 2017.
```
df_filtered[df_filtered.tipo_entrada == 'APLICATIVOS']['fecha_hora_creacion'].min().year
```
We evaluated whether mobile apps improve incident response time relative to conventional channels (call, camera, radio, or panic button), using records from 2017 onward. Reports filed through social media were deliberately excluded, since they belong to neither category of interest.
```
df_2017 = df_filtered[df_filtered.fecha_hora_creacion.dt.year > 2016][['delegacion_inicio', 'tipo_entrada', 'tiempo_atencion']]
df_2017 = df_2017[df_2017.tipo_entrada != 'REDES']
df_2017.head()
```
For this test, the two reporting channels are compared through the median response time recorded for each. Group A holds the reports filed through apps (the treatment), while group B holds the reports filed through conventional channels (the control); the null hypothesis is that there is no difference between the two groups.
```
df_ab = df_2017.copy()
df_ab['tipo_entrada_ab'] = df_ab.apply(lambda x: 'APLICATIVOS' if x['tipo_entrada'] == 'APLICATIVOS' else 'OTROS', axis=1)
df_test = df_ab.groupby('tipo_entrada_ab')['tiempo_atencion'].median()
df_test = pd.DataFrame(df_test)
df_test['velocidad_atencion'] = 1 / df_test['tiempo_atencion']
df_test
```
The table above suggests that the median response time is lower when the report was filed through apps than through traditional channels; that is, group A resolves incidents faster than group B.
```
diferencia = (1 - (df_test.loc['OTROS']['velocidad_atencion'] / df_test.loc['APLICATIVOS']['velocidad_atencion']))*100
print(f'Response time in test A (apps) was {round(diferencia,2)}% faster than the control')
```
A permutation test is applied to check whether this result could be due to chance.
```
df_all = df_ab[['delegacion_inicio', 'tiempo_atencion']]  # drop the entry-type column
value_counts = df_ab['tipo_entrada_ab'].value_counts()
ta_a = []
ta_b = []
for _ in range(1000):
    a = df_ab.sample(value_counts.loc['APLICATIVOS'], replace=False)
    ta_a.append(a['tiempo_atencion'].median())
    b = df_ab.loc[~df_ab.index.isin(a.index)]
    ta_b.append(b['tiempo_atencion'].median())
perm_results = pd.DataFrame({
'ta_APLICATIVOS': ta_a,
'ta_OTROS': ta_b
})
perm_results['va_APLICATIVOS'] = 1 / perm_results['ta_APLICATIVOS']
perm_results['va_OTROS'] = 1 / perm_results['ta_OTROS']
perm_results['diff'] = 1 - (perm_results.va_OTROS / perm_results.va_APLICATIVOS)
perm_results
P = (perm_results['diff'] >= diferencia/100).sum() / perm_results.shape[0]
sns.histplot(perm_results['diff']);
plt.axvline(diferencia/100);
props = dict(boxstyle='round', facecolor='wheat', alpha=0.5);
plt.text(0, 100, f'P value = {P}', fontsize=14, verticalalignment='top', bbox=props);
plt.xlabel('Diferencia entre las medianas de la velocidad de atención');
plt.ylabel('Frecuencia')
plt.savefig('img/ta_test_ab.png')
```
None of the randomly generated differences is more extreme than the one originally computed. In other words, the value obtained from the original data is too atypical to be attributed to chance, and is therefore statistically significant: with p < 0.1 we reject the null hypothesis.
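The same permutation logic, condensed into a self-contained sketch on synthetic data (group sizes and distributions are fabricated), makes the p-value computation explicit: pool the observations, shuffle away any real group structure, re-split into groups of the original sizes, and count how often the shuffled difference is at least as extreme as the observed one.

```python
import numpy as np

rng = np.random.default_rng(0)

# Fabricated response times (hours): group A (apps) genuinely faster than group B
group_a = rng.normal(loc=1.0, scale=0.2, size=200)
group_b = rng.normal(loc=1.5, scale=0.2, size=800)

observed = np.median(group_b) - np.median(group_a)  # observed difference in medians

pooled = np.concatenate([group_a, group_b])
n_a = len(group_a)
diffs = []
for _ in range(1000):
    rng.shuffle(pooled)                       # destroy any real group structure
    perm_a, perm_b = pooled[:n_a], pooled[n_a:]
    diffs.append(np.median(perm_b) - np.median(perm_a))

# one-sided p-value: share of shuffled differences at least as extreme as observed
p_value = np.mean(np.array(diffs) >= observed)
print(observed, p_value)
```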
```
import matplotlib.pyplot as plt
from bubblekicker.bubblekicker import (BubbleKicker, batchbubblekicker, bubble_properties_calculate,
_bubble_properties_filter, bubble_properties_plot)
from bubblekicker.pipelines import CannyPipeline, AdaptiveThresholdPipeline
%matplotlib inline
img = '3411017m_0004.jpg'
```
### Pipelines testing
#### Canny canonical method
```
bubbler = CannyPipeline(img, channel='red') #setup the pipeline by loading the file
result = bubbler.run([20, 80], 3, 3, 1, 1) # executing the pipeline with custom parameters
marker_image, props = bubble_properties_calculate(result) # extract the properties
#filtered_bubbles = bubble_properties_filter(props) # filter based on the default filter rules
bubbler.plot();
# the filtered bubbles
plt.imshow(marker_image>0, cmap='gray');
fig, axs = bubble_properties_plot(props, "equivalent_diameter") # make a plot
bubbler.what_have_i_done()
#plt.savefig('BSDhist.png')
```
#### Adaptive threshold method
```
#setup the pipeline by loading the file
bubbler = AdaptiveThresholdPipeline(img, channel='red')
result = bubbler.run(191, 15, 2, 3, 1, 1) # executing the pipeline with custom parameters
bubbler.plot() # plot detected bubbles
marker_image, props = bubble_properties_calculate(result) # extract the properties
## nbubbles, to include
# the filtered bubbles
plt.imshow(marker_image>0, cmap='gray');
#plt.imshow(marker_image, cmap='Greys', interpolation='nearest')
fig, axs = bubble_properties_plot(props, "equivalent_diameter") # make a plot
bubbler.what_have_i_done()
```
### Custom sequence
#### Canny
```
bubbler = BubbleKicker(img, channel='red')
bubbler.edge_detect_canny_opencv([20, 80]) # Canny edge detection, given the two hysteresis threshold parameters
bubbler.dilate_opencv(3) # dilate using opencv function
bubbler.plot();
bubbler.what_have_i_done()
```
We can see that too many edges are detected:
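Raising the Canny thresholds helps because an edge pixel survives only if its gradient magnitude clears the hysteresis thresholds, so higher values discard weak, noisy responses. A numpy-only sketch of that thresholding step on a synthetic image (the actual pipeline delegates to OpenCV's `Canny` inside `edge_detect_canny_opencv`):

```python
import numpy as np

rng = np.random.default_rng(42)

# Synthetic image: a bright square on a noisy background
img = rng.normal(0, 5, size=(64, 64))
img[16:48, 16:48] += 100

# Gradient magnitude -- the quantity the Canny thresholds act on
gy, gx = np.gradient(img)
magnitude = np.hypot(gx, gy)

weak_edges = np.count_nonzero(magnitude > 10)    # low threshold: noise survives
strong_edges = np.count_nonzero(magnitude > 40)  # high threshold: mostly the square outline
print(weak_edges, strong_edges)
```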
```
bubbler.reset_to_raw()
bubbler.edge_detect_canny_opencv([120, 20])
bubbler.dilate_opencv(3)
bubbler.clear_border_skimage(3, 1)
bubbler.plot();
bubbler.what_have_i_done()
plt.savefig('cannyPipeline.jpg')
```
#### Adaptive method
```
bubbler.reset_to_raw() # here we are running things in default mode
bubbler.adaptive_threshold_opencv(401, 10)
bubbler.clear_border_skimage()
bubbler.plot()
bubbler.what_have_i_done()
```
### Now we might be ready to go for a batch of images
```
res = batchbubblekicker('sample_images', 'red',
AdaptiveThresholdPipeline,
401, 10, 3, 1, 1)
```
### Bubble properties
#### Bubble properties can be returned as a table
```
bubbler = CannyPipeline(img, channel='red')
result = bubbler.run([120, 180], 3, 3, 1, 1)
marker_image, props = bubble_properties_calculate(result) #output do add nbubbles,
#print(nbubbles)
fig, axs = bubble_properties_plot(props, "equivalent_diameter")
props.head()
```
##### ...for ease of object selection and further filtering, based on default parameters
```
# derive and PLOT the bubble properties as a table with no filter
bubbler = CannyPipeline(img, channel='red')
result = bubbler.run([120, 180], 3, 3, 1, 1)
id_image, props = bubble_properties_calculate(result, rules={})
fig, axs = bubble_properties_plot(props, "equivalent_diameter")
#fig.savefig("examples/output_eq_diameter.png")
fig, axs = bubble_properties_plot(props, "area")
#fig.savefig("examples/output_area.png")
```
##### filter based on custom parameters
```
# filter bubble properties based on CUSTOM filter ruleset
custom_filter = {'circularity_reciprocal': {'min': 0.2, 'max': 1.6},
'convexity': {'min': 1.92}}
bubbler = CannyPipeline('0325097m_0305.tif', channel='red')
result = bubbler.run([120, 180], 3, 3, 1, 1)
id_image, props = bubble_properties_calculate(result, rules=custom_filter)
print(props.head())
fig, axs = bubble_properties_plot(props, "equivalent_diameter")
plt.show()
```
# Modeling and Simulation in Python
Chapter 20
Copyright 2017 Allen Downey
License: [Creative Commons Attribution 4.0 International](https://creativecommons.org/licenses/by/4.0)
```
# Configure Jupyter so figures appear in the notebook
%matplotlib inline
# Configure Jupyter to display the assigned value after an assignment
%config InteractiveShell.ast_node_interactivity='last_expr_or_assign'
# import functions from the modsim.py module
from modsim import *
```
### Dropping pennies
I'll start by getting the units we need from Pint.
```
m = UNITS.meter
s = UNITS.second
```
And defining the initial state.
```
init = State(y=381 * m,
v=0 * m/s)
```
Acceleration due to gravity is about 9.8 m / s$^2$.
```
g = 9.8 * m/s**2
```
I'll start with a duration of 10 seconds and step size 0.1 second.
```
t_end = 10 * s
dt = 0.1 * s
```
Now we make a `System` object.
```
system = System(init=init, g=g, t_end=t_end, dt=dt)
```
And define the slope function.
```
def slope_func(state, t, system):
"""Compute derivatives of the state.
state: position, velocity
t: time
system: System object containing `g`
returns: derivatives of y and v
"""
y, v = state
g = system.g
dydt = v
dvdt = -g
return dydt, dvdt
```
It's always a good idea to test the slope function with the initial conditions.
```
dydt, dvdt = slope_func(system.init, 0, system)
print(dydt)
print(dvdt)
```
Now we're ready to call `run_ode_solver`
```
results, details = run_ode_solver(system, slope_func)
details
```
Here are the results:
```
results.head()
results.tail()
```
And here's position as a function of time:
```
def plot_position(results):
plot(results.y, label='y')
decorate(xlabel='Time (s)',
ylabel='Position (m)')
plot_position(results)
savefig('figs/chap20-fig01.pdf')
```
### Onto the sidewalk
To figure out when the penny hit the sidewalk, we can use `crossings`, which finds the times where a `Series` passes through a given value.
```
t_crossings = crossings(results.y, 0)
```
For this example there should be just one crossing, the time when the penny hits the sidewalk.
```
t_sidewalk = t_crossings[0] * s
```
We can compare that to the exact result. Without air resistance, we have
$v = -g t$
and
$y = 381 - g t^2 / 2$
Setting $y=0$ and solving for $t$ yields
$t = \sqrt{\frac{2 y_{init}}{g}}$
```
sqrt(2 * init.y / g)
```
The estimate is accurate to about 9 decimal places.
## Events
Instead of running the simulation until the penny goes through the sidewalk, it would be better to detect the point where the penny hits the sidewalk and stop. `run_ode_solver` provides exactly the tool we need, **event functions**.
Here's an event function that returns the height of the penny above the sidewalk:
```
def event_func(state, t, system):
"""Return the height of the penny above the sidewalk.
"""
y, v = state
return y
```
And here's how we pass it to `run_ode_solver`. The solver should run until the event function returns 0, and then terminate.
```
results, details = run_ode_solver(system, slope_func, events=event_func)
details
```
The message from the solver indicates the solver stopped because the event we wanted to detect happened.
Here are the results:
```
results.tail()
```
With the `events` option, the solver returns the actual time steps it computed, which are not necessarily equally spaced.
The last time step is when the event occurred:
```
t_sidewalk = get_last_label(results) * s
```
The result is accurate to about 4 decimal places.
We can also check the velocity of the penny when it hits the sidewalk:
```
v_sidewalk = get_last_value(results.v)
```
And convert to kilometers per hour.
```
km = UNITS.kilometer
h = UNITS.hour
v_sidewalk.to(km / h)
```
If there were no air resistance, the penny would hit the sidewalk (or someone's head) at more than 300 km/h.
So it's a good thing there is air resistance.
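The 300 km/h figure can be checked analytically: without drag, energy conservation gives $v = \sqrt{2 g y_0}$ for a fall from height $y_0$. A quick sketch with plain floats (no Pint units):

```python
import math

g = 9.8     # m/s^2, gravitational acceleration used throughout the chapter
y0 = 381    # m, initial height of the penny

v_impact = math.sqrt(2 * g * y0)  # impact speed without air resistance, m/s
v_kmh = v_impact * 3.6            # same speed in km/h

print(round(v_impact, 1), round(v_kmh, 1))  # ≈ 86.4 m/s ≈ 311.1 km/h
```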
## Under the hood
Here is the source code for `crossings` so you can see what's happening under the hood:
```
source_code(crossings)
```
The [documentation of InterpolatedUnivariateSpline is here](https://docs.scipy.org/doc/scipy/reference/generated/scipy.interpolate.InterpolatedUnivariateSpline.html).
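As the source shows, `crossings` wraps scipy's `InterpolatedUnivariateSpline`. A minimal stand-alone sketch of the same idea (assuming only numpy and scipy) finds where the drag-free penny's sampled height crosses zero, matching the analytic $t = \sqrt{2 y_0 / g} \approx 8.82$ s:

```python
import numpy as np
from scipy.interpolate import InterpolatedUnivariateSpline

g, y0 = 9.8, 381
ts = np.linspace(0, 10, 101)      # coarse, evenly spaced time grid, like solver output
ys = y0 - 0.5 * g * ts**2         # drag-free position samples

spline = InterpolatedUnivariateSpline(ts, ys)
t_cross = spline.roots()          # times where the interpolated series hits zero
print(t_cross)
```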
### Exercises
**Exercise:** Here's a question from the web site [Ask an Astronomer](http://curious.astro.cornell.edu/about-us/39-our-solar-system/the-earth/other-catastrophes/57-how-long-would-it-take-the-earth-to-fall-into-the-sun-intermediate):
"If the Earth suddenly stopped orbiting the Sun, I know eventually it would be pulled in by the Sun's gravity and hit it. How long would it take the Earth to hit the Sun? I imagine it would go slowly at first and then pick up speed."
Use `run_ode_solver` to answer this question.
Here are some suggestions about how to proceed:
1. Look up the Law of Universal Gravitation and any constants you need. I suggest you work entirely in SI units: meters, kilograms, and Newtons.
2. When the distance between the Earth and the Sun gets small, this system behaves badly, so you should use an event function to stop when the surface of Earth reaches the surface of the Sun.
3. Express your answer in days, and plot the results as millions of kilometers versus days.
If you read the reply by Dave Rothstein, you will see other ways to solve the problem, and a good discussion of the modeling decisions behind them.
You might also be interested to know that [it's actually not that easy to get to the Sun](https://www.theatlantic.com/science/archive/2018/08/parker-solar-probe-launch-nasa/567197/).
```
# Solution
N = UNITS.newton
kg = UNITS.kilogram
m = UNITS.meter
AU = UNITS.astronomical_unit
# Solution
r_0 = (1 * AU).to_base_units()
v_0 = 0 * m / s
init = State(r=r_0,
v=v_0)
# Solution
radius_earth = 6.371e6 * m
radius_sun = 695.508e6 * m
t_end = 1e7 * s
dt = t_end / 100
system = System(init=init,
G=6.674e-11 * N / kg**2 * m**2,
m1=1.989e30 * kg,
r_final=radius_sun + radius_earth,
m2=5.972e24 * kg,
t_end=t_end,
dt=dt)
# Solution
def universal_gravitation(state, system):
"""Computes gravitational force.
state: State object with distance r
system: System object with m1, m2, and G
"""
r, v = state
G, m1, m2 = system.G, system.m1, system.m2
force = G * m1 * m2 / r**2
return force
# Solution
universal_gravitation(init, system)
# Solution
def slope_func(state, t, system):
"""Compute derivatives of the state.
state: position, velocity
t: time
system: System object containing `m2`
returns: derivatives of y and v
"""
y, v = state
m2 = system.m2
force = universal_gravitation(state, system)
dydt = v
dvdt = -force / m2
return dydt, dvdt
# Solution
slope_func(system.init, 0, system)
# Solution
def event_func(state, t, system):
r, v = state
return r - system.r_final
# Solution
event_func(init, 0, system)
# Solution
results, details = run_ode_solver(system, slope_func, events=event_func)
details
# Solution
t_event = get_last_label(results)
# Solution
seconds = t_event * s
days = seconds.to(UNITS.day)
# Solution
results.index /= 60 * 60 * 24
# Solution
results.r /= 1e9
# Solution
plot(results.r, label='r')
decorate(xlabel='Time (day)',
ylabel='Distance from sun (million km)')
```
```
from collections import Counter
import math, random
def random_kid():
return random.choice(["boy", "girl"])
kid_test_list = [random_kid() for i in range(10)]
kid_test_list  # random_kid randomly picks one of the two values, "boy" or "girl"
both_girls = 0
older_girl = 0
either_girl = 0
random.seed(0)
for _ in range(10000):
younger = random_kid()
older = random_kid()
    if older == "girl":  # +1 if the older child is a girl
        older_girl += 1
    if older == "girl" and younger == "girl":  # +1 if both children are girls
        both_girls += 1
    if older == "girl" or younger == "girl":  # +1 if at least one child is a girl
        either_girl += 1
print("P(both | older):", both_girls / older_girl)     # 0.514 ~ 1/2: P(both girls | the older child is a girl)
print("P(both | either): ", both_girls / either_girl)  # 0.342 ~ 1/3: P(both girls | at least one child is a girl)
def uniform_pdf(x):
return 1 if x >= 0 and x < 1 else 0
def uniform_cdf(x):
"returns the probability that a uniform random variable is less than x"
if x < 0:
return 0 # uniform random is never less than 0
elif x < 1:
return x # e.g. P(X < 0.4) = 0.4
else:
return 1 # uniform random is always less than 1
import numpy as np
x = np.arange(-1.0, 2.0, 0.1)
result_array = np.vectorize(uniform_cdf, otypes=[float])(x)
import matplotlib.pyplot as plt
%pylab inline
plt.plot(x, result_array)
plt.axis([-1, 2, -1, 1.5])
plt.show()
def normal_pdf(x, mu=0, sigma=1):
sqrt_two_pi = math.sqrt(2 * math.pi)
return (math.exp(-(x-mu) ** 2 / 2 / sigma ** 2) / (sqrt_two_pi * sigma))
for sigma_value in [1,2,0.5,1]:
x = np.arange(-6.0, 6.0, 0.1)
    result_array = np.vectorize(normal_pdf, otypes=[float])(x, sigma=sigma_value)
# plt.plot(x, result_array, "ro")
plt.plot(x, result_array)
plt.axis([-6, 6, 0, 1])
plt.show()
def plot_normal_pdfs(plt):
xs = [x / 10.0 for x in range(-50, 50)]
plt.plot(xs,[normal_pdf(x,sigma=1) for x in xs],'-',label='mu=0,sigma=1')
plt.plot(xs,[normal_pdf(x,sigma=0.5) for x in xs],':',label='mu=0,sigma=0.5')
plt.plot(xs,[normal_pdf(x,mu=-1) for x in xs],'-.',label='mu=-1,sigma=1')
plt.legend()
plt.show()
import matplotlib.pyplot as plt
plot_normal_pdfs(plt)
def normal_cdf(x, mu=0,sigma=1):
return (1 + math.erf((x - mu) / math.sqrt(2) / sigma)) / 2
def plot_normal_cdfs(plt):
xs = [x / 10.0 for x in range(-50, 50)]
plt.plot(xs,[normal_cdf(x,sigma=1) for x in xs],'-',label='mu=0,sigma=1')
plt.plot(xs,[normal_cdf(x,sigma=2) for x in xs],'--',label='mu=0,sigma=2')
plt.plot(xs,[normal_cdf(x,sigma=0.5) for x in xs],':',label='mu=0,sigma=0.5')
plt.plot(xs,[normal_cdf(x,mu=-1) for x in xs],'-.',label='mu=-1,sigma=1')
plt.legend(loc=4) # bottom right
plt.show()
import matplotlib.pyplot as plt
plot_normal_cdfs(plt)
def inverse_normal_cdf(p, mu=0, sigma=1, tolerance=0.00001):
"""find approximate inverse using binary search"""
# if not standard, compute standard and rescale
if mu != 0 or sigma != 1:
return mu + sigma * inverse_normal_cdf(p, tolerance=tolerance)
low_z, low_p = -10.0, 0 # normal_cdf(-10) is (very close to) 0
hi_z, hi_p = 10.0, 1 # normal_cdf(10) is (very close to) 1
while hi_z - low_z > tolerance:
mid_z = (low_z + hi_z) / 2 # consider the midpoint
mid_p = normal_cdf(mid_z) # and the cdf's value there
if mid_p < p:
# midpoint is still too low, search above it
low_z, low_p = mid_z, mid_p
elif mid_p > p:
# midpoint is still too high, search below it
hi_z, hi_p = mid_z, mid_p
else:
break
return mid_z
np.vectorize(inverse_normal_cdf, otypes=[float])([0, 0.5, 0.90, 0.95, 0.975, 1])
# inverse-CDF (quantile) values at probabilities 0%, 50%, 90%, 95%, 97.5%, 100%
def bernoulli_trial(p):
return 1 if random.random() < p else 0
def binomial(p, n):
return sum(bernoulli_trial(p) for _ in range(n))
def make_hist(p, n, num_points):
data = [binomial(p, n) for _ in range(num_points)]
# use a bar chart to show the actual binomial samples
histogram = Counter(data)
plt.bar([x - 0.4 for x in histogram.keys()],
[v / num_points for v in histogram.values()],
0.8,
color='0.75')
mu = p * n
sigma = math.sqrt(n * p * (1 - p))
# use a line chart to show the normal approximation
xs = range(min(data), max(data) + 1)
ys = [normal_cdf(i + 0.5, mu, sigma) - normal_cdf(i - 0.5, mu, sigma)
for i in xs]
plt.plot(xs,ys)
plt.show()
make_hist(0.75,100,1000)
make_hist(0.50,100,1000)
```
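The two simulated conditional probabilities above can be checked exactly by enumerating the four equally likely (older, younger) combinations; this sketch is independent of the simulation:

```python
from itertools import product
from fractions import Fraction

outcomes = list(product(["boy", "girl"], repeat=2))  # (older, younger), each equally likely

both = [o for o in outcomes if o == ("girl", "girl")]
older_girl = [o for o in outcomes if o[0] == "girl"]
either_girl = [o for o in outcomes if "girl" in o]

p_both_given_older = Fraction(len(both), len(older_girl))
p_both_given_either = Fraction(len(both), len(either_girl))

print(p_both_given_older, p_both_given_either)  # 1/2 1/3
```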
```
from __future__ import absolute_import
from __future__ import division
from __future__ import print_function
import os, sys
import glob
import random
import time
import imgaug
from imgaug import augmenters as iaa
from PIL import Image
from tqdm import tqdm
import numpy as np
from six.moves import range
import openslide
import tensorflow as tf
from torchvision import transforms # noqa
from torch.utils.data import DataLoader, Dataset
import numpy as np
import cv2
import matplotlib.pyplot as plt
from skimage.color import rgb2hsv
from skimage.filters import threshold_otsu
from tensorflow.keras import backend as K
import xml.etree.cElementTree as ET
def BinMorphoProcessMask(mask):
"""
Binary operation performed on tissue mask
"""
close_kernel = np.ones((20, 20), dtype=np.uint8)
image_close = cv2.morphologyEx(np.array(mask), cv2.MORPH_CLOSE, close_kernel)
open_kernel = np.ones((5, 5), dtype=np.uint8)
image_open = cv2.morphologyEx(np.array(image_close), cv2.MORPH_OPEN, open_kernel)
return image_open
def get_bbox(cont_img, rgb_image=None):
contours, _ = cv2.findContours(cont_img, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
rgb_contour = None
if rgb_image is not None:
rgb_contour = rgb_image.copy()
line_color = (0, 0, 255) # blue color code
cv2.drawContours(rgb_contour, contours, -1, line_color, 2)
bounding_boxes = [cv2.boundingRect(c) for c in contours]
for x, y, h, w in bounding_boxes:
rgb_contour = cv2.rectangle(rgb_contour,(x,y),(x+h,y+w),(0,255,0),2)
return bounding_boxes, rgb_contour
def get_all_bbox_masks_with_stride(mask, stride_factor):
"""
Find the bbox and corresponding masks
"""
bbox_mask = np.zeros_like(mask)
bounding_boxes, _ = get_bbox(mask)
y_size, x_size = bbox_mask.shape
for x, y, h, w in bounding_boxes:
x_min = x - stride_factor
x_max = x + h + stride_factor
y_min = y - stride_factor
y_max = y + w + stride_factor
if x_min < 0:
x_min = 0
if y_min < 0:
y_min = 0
if x_max > x_size:
x_max = x_size - 1
if y_max > y_size:
y_max = y_size - 1
bbox_mask[y_min:y_max:stride_factor, x_min:x_max:stride_factor]=1
return bbox_mask
def get_all_bbox_masks(mask, stride_factor):
"""
Find the bbox and corresponding masks
"""
bbox_mask = np.zeros_like(mask)
bounding_boxes, _ = get_bbox(mask)
y_size, x_size = bbox_mask.shape
for x, y, h, w in bounding_boxes:
x_min = x - stride_factor
x_max = x + h + stride_factor
y_min = y - stride_factor
y_max = y + w + stride_factor
if x_min < 0:
x_min = 0
if y_min < 0:
y_min = 0
if x_max > x_size:
x_max = x_size - 1
if y_max > y_size:
y_max = y_size - 1
bbox_mask[y_min:y_max, x_min:x_max]=1
return bbox_mask
def find_largest_bbox(mask, stride_factor):
"""
Find the largest bounding box encompassing all the blobs
"""
y_size, x_size = mask.shape
x, y = np.where(mask==1)
bbox_mask = np.zeros_like(mask)
x_min = np.min(x) - stride_factor
x_max = np.max(x) + stride_factor
y_min = np.min(y) - stride_factor
y_max = np.max(y) + stride_factor
if x_min < 0:
x_min = 0
if y_min < 0:
y_min = 0
if x_max > x_size:
x_max = x_size - 1
if y_max > y_size:
y_max = y_size - 1
# print(x_min, x_max, y_min, y_max)
bbox_mask[x_min:x_max, y_min:y_max]=1
return bbox_mask
def TissueMaskGeneration(slide_obj, level, RGB_min=50):
img_RGB = np.transpose(np.array(slide_obj.read_region((0, 0),
level,
slide_obj.level_dimensions[level]).convert('RGB')),
axes=[1, 0, 2])
img_HSV = rgb2hsv(img_RGB)
background_R = img_RGB[:, :, 0] > threshold_otsu(img_RGB[:, :, 0])
background_G = img_RGB[:, :, 1] > threshold_otsu(img_RGB[:, :, 1])
background_B = img_RGB[:, :, 2] > threshold_otsu(img_RGB[:, :, 2])
tissue_RGB = np.logical_not(background_R & background_G & background_B)
tissue_S = img_HSV[:, :, 1] > threshold_otsu(img_HSV[:, :, 1])
min_R = img_RGB[:, :, 0] > RGB_min
min_G = img_RGB[:, :, 1] > RGB_min
min_B = img_RGB[:, :, 2] > RGB_min
tissue_mask = tissue_S & tissue_RGB & min_R & min_G & min_B
return tissue_mask
def labelthreshold(image, threshold=0.5):
label = np.zeros_like(image)
label[image >= threshold] = 1
return label
def normalize_minmax(data):
"""
Normalize contrast across volume
"""
    _min = float(np.min(data))
    _max = float(np.max(data))
if (_max-_min)!=0:
img = (data - _min) / (_max-_min)
else:
img = np.zeros_like(data)
return img
# Image Helper Functions
def imshow(*args,**kwargs):
""" Handy function to show multiple plots in on row, possibly with different cmaps and titles
Usage:
imshow(img1, title="myPlot")
imshow(img1,img2, title=['title1','title2'])
imshow(img1,img2, cmap='hot')
imshow(img1,img2,cmap=['gray','Blues']) """
cmap = kwargs.get('cmap', 'gray')
title= kwargs.get('title','')
axis_off = kwargs.get('axis_off','')
if len(args)==0:
raise ValueError("No images given to imshow")
elif len(args)==1:
plt.title(title)
plt.imshow(args[0], interpolation='none')
else:
n=len(args)
if type(cmap)==str:
cmap = [cmap]*n
if type(title)==str:
title= [title]*n
plt.figure(figsize=(n*5,10))
for i in range(n):
plt.subplot(1,n,i+1)
plt.title(title[i])
plt.imshow(args[i], cmap[i])
if axis_off:
plt.axis('off')
plt.show()
class WSIStridedPatchDataset(Dataset):
"""
Data producer that generate all the square grids, e.g. 3x3, of patches,
from a WSI and its tissue mask, and their corresponding indices with
respect to the tissue mask
"""
def __init__(self, wsi_path, mask_path, label_path=None, image_size=256,
normalize=True, flip='NONE', rotate='NONE',
level=5, sampling_stride=16, roi_masking=True):
"""
Initialize the data producer.
Arguments:
wsi_path: string, path to WSI file
mask_path: string, path to mask file in numpy format OR None
label_mask_path: string, path to ground-truth label mask path in tif file or
None (incase of Normal WSI or test-time)
image_size: int, size of the image before splitting into grid, e.g. 768
patch_size: int, size of the patch, e.g. 256
crop_size: int, size of the final crop that is feed into a CNN,
e.g. 224 for ResNet
normalize: bool, if normalize the [0, 255] pixel values to [-1, 1],
mostly False for debuging purpose
flip: string, 'NONE' or 'FLIP_LEFT_RIGHT' indicating the flip type
rotate: string, 'NONE' or 'ROTATE_90' or 'ROTATE_180' or
'ROTATE_270', indicating the rotate type
level: Level to extract the WSI tissue mask
roi_masking: True: Multiplies the strided WSI with tissue mask to eliminate white spaces,
False: Ensures inference is done on the entire WSI
sampling_stride: Number of pixels to skip in the tissue mask, basically it's the overlap
fraction when patches are extracted from WSI during inference.
stride=1 -> consecutive pixels are utilized
stride= image_size/pow(2, level) -> non-overalaping patches
"""
self._wsi_path = wsi_path
self._mask_path = mask_path
self._label_path = label_path
self._image_size = image_size
self._normalize = normalize
self._flip = flip
self._rotate = rotate
self._level = level
self._sampling_stride = sampling_stride
self._roi_masking = roi_masking
self._preprocess()
def _preprocess(self):
self._slide = openslide.OpenSlide(self._wsi_path)
X_slide, Y_slide = self._slide.level_dimensions[0]
factor = self._sampling_stride
if self._label_path is not None:
self._label_slide = openslide.OpenSlide(self._label_path)
if self._mask_path is not None:
mask_file_name = os.path.basename(self._mask_path)
if mask_file_name.endswith('.npy'):
self._mask = np.load(self._mask_path)
if mask_file_name.endswith('.tif'):
mask_obj = openslide.OpenSlide(self._mask_path)
                self._mask = np.array(mask_obj.read_region((0, 0),
                                      self._level,
                                      mask_obj.level_dimensions[self._level]).convert('L')).T
else:
# Generate tissue mask on the fly
self._mask = TissueMaskGeneration(self._slide, self._level)
# morphological operations ensure the holes are filled in tissue mask
# and minor points are aggregated to form a larger chunk
# self._mask = BinMorphoProcessMask(np.uint8(self._mask))
# self._all_bbox_mask = get_all_bbox_masks(self._mask, factor)
# self._largest_bbox_mask = find_largest_bbox(self._mask, factor)
# self._all_strided_bbox_mask = get_all_bbox_masks_with_stride(self._mask, factor)
X_mask, Y_mask = self._mask.shape
# print (self._mask.shape, np.where(self._mask>0))
# imshow(self._mask.T)
# the cm17 dataset had issues with image dimensions not being exact powers of 2
if X_slide // X_mask != Y_slide // Y_mask:
raise Exception('Slide/Mask dimension does not match:'
' X_slide / X_mask : {} / {},'
' Y_slide / Y_mask : {} / {}'
.format(X_slide, X_mask, Y_slide, Y_mask))
self._resolution = np.round(X_slide * 1.0 / X_mask)
if not np.log2(self._resolution).is_integer():
raise Exception('Resolution (X_slide / X_mask) is not power of 2 :'
' {}'.format(self._resolution))
# all the indices for the tissue region from the tissue mask
self._strided_mask = np.ones_like(self._mask)
ones_mask = np.zeros_like(self._mask)
ones_mask[::factor, ::factor] = self._strided_mask[::factor, ::factor]
if self._roi_masking:
self._strided_mask = ones_mask*self._mask
# self._strided_mask = ones_mask*self._largest_bbox_mask
# self._strided_mask = ones_mask*self._all_bbox_mask
# self._strided_mask = self._all_strided_bbox_mask
else:
self._strided_mask = ones_mask
# print (np.count_nonzero(self._strided_mask), np.count_nonzero(self._mask[::factor, ::factor]))
# imshow(self._strided_mask.T, self._mask[::factor, ::factor].T)
# imshow(self._mask.T, self._strided_mask.T)
self._X_idcs, self._Y_idcs = np.where(self._strided_mask)
self._idcs_num = len(self._X_idcs)
def __len__(self):
return self._idcs_num
def save_get_mask(self, save_path):
np.save(save_path, self._mask)
def get_mask(self):
return self._mask
def get_strided_mask(self):
return self._strided_mask
def __getitem__(self, idx):
x_coord, y_coord = self._X_idcs[idx], self._Y_idcs[idx]
# x = int(x_coord * self._resolution)
# y = int(y_coord * self._resolution)
x = int(x_coord * self._resolution - self._image_size//2)
y = int(y_coord * self._resolution - self._image_size//2)
img = self._slide.read_region(
(x, y), 0, (self._image_size, self._image_size)).convert('RGB')
if self._label_path is not None:
label_img = self._label_slide.read_region(
(x, y), 0, (self._image_size, self._image_size)).convert('L')
else:
label_img = Image.fromarray(np.zeros((self._image_size, self._image_size), dtype=np.uint8))
if self._flip == 'FLIP_LEFT_RIGHT':
img = img.transpose(Image.FLIP_LEFT_RIGHT)
label_img = label_img.transpose(Image.FLIP_LEFT_RIGHT)
if self._rotate == 'ROTATE_90':
img = img.transpose(Image.ROTATE_90)
label_img = label_img.transpose(Image.ROTATE_90)
if self._rotate == 'ROTATE_180':
img = img.transpose(Image.ROTATE_180)
label_img = label_img.transpose(Image.ROTATE_180)
if self._rotate == 'ROTATE_270':
img = img.transpose(Image.ROTATE_270)
label_img = label_img.transpose(Image.ROTATE_270)
# PIL image: H x W x C
img = np.array(img, dtype=np.float32)
label_img = np.array(label_img, dtype=np.uint8)
if self._normalize:
img = (img - 128.0)/128.0
return (img, x_coord, y_coord, label_img)
```
| github_jupyter |
```
%matplotlib inline
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
fruits = pd.read_table('fruit_data_with_colors.txt')
fruits.head()
print(fruits['fruit_name'].unique())
print(fruits.shape)
```
### Statistical Summary
```
fruits.describe()
```
We can see that the numerical values do not have the same scale. We will need to apply the scaling computed on the training set to the test set as well.
### Fruit type distribution
```
print(fruits.groupby('fruit_name').size())
import seaborn as sns
sns.countplot(x='fruit_name', data=fruits)
plt.show()
```
The data is fairly balanced, except for mandarin. We will just have to work with it.
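To quantify that balance, the class proportions can be read off directly with `value_counts(normalize=True)`. A small sketch on a stand-in label column (the counts here are illustrative; the real call is `fruits['fruit_name'].value_counts(normalize=True)`):

```
import pandas as pd

# stand-in for fruits['fruit_name'] with illustrative class counts
labels = pd.Series(['apple'] * 19 + ['orange'] * 19 + ['lemon'] * 16 + ['mandarin'] * 5)
p = labels.value_counts(normalize=True).round(2)
print(p)  # mandarin should show the smallest share
```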
Box plot for each numeric variable will give us a clearer idea of the distribution of the input variables:
```
fruits.drop('fruit_label', axis=1).plot(kind='box', subplots=True, layout=(2,2), sharex=False, sharey=False, figsize=(9,9),
title='Box Plot for each input variable')
plt.savefig('fruits_boxplot')
plt.show()
import pylab as pl
fruits.drop('fruit_label' ,axis=1).hist(bins=30, figsize=(9,9))
pl.suptitle("Histogram for each numeric input variable")
plt.savefig('fruits_hist')
plt.show()
```
It looks like perhaps color score has a near Gaussian distribution.
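A quick numeric check of that impression is skewness and excess kurtosis, both of which are near zero for a Gaussian. A sketch on synthetic data standing in for `fruits['color_score']` (the distribution parameters are made up):

```
import numpy as np
import pandas as pd

rng = np.random.default_rng(0)
color_score = pd.Series(rng.normal(loc=0.76, scale=0.08, size=500))  # stand-in column
skew, kurt = color_score.skew(), color_score.kurtosis()
print(round(skew, 2), round(kurt, 2))  # both should be near 0 for a Gaussian
```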
```
from pandas.plotting import scatter_matrix  # pandas.tools.plotting was removed in newer pandas
scatter_matrix(fruits.drop('fruit_label', axis=1), figsize=(10, 5))
plt.show()
```
Some pairs of attributes are correlated (e.g., mass and width), suggesting a predictable relationship between them.
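The strength of that relationship can be quantified with a correlation matrix. This sketch uses synthetic mass/width data so it runs standalone; on the real data the call is simply `fruits[['mass', 'width']].corr()`:

```
import numpy as np
import pandas as pd

rng = np.random.default_rng(1)
mass = rng.uniform(80, 360, size=200)            # made-up masses in grams
width = 0.02 * mass + rng.normal(0, 0.4, 200)    # width loosely tracks mass
corr = pd.DataFrame({'mass': mass, 'width': width}).corr()
print(corr.round(2))  # off-diagonal entries near 1 indicate strong correlation
```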
```
feature_names = ['mass', 'width', 'height', 'color_score']
X = fruits[feature_names]
y = fruits['fruit_label']
from matplotlib import cm
cmap = cm.get_cmap('gnuplot')
scatter = pd.plotting.scatter_matrix(X, c = y, marker = 'o', s=40, hist_kwds={'bins':15}, figsize=(9,9), cmap = cmap)
plt.suptitle('Scatter-matrix for each input variable')
plt.savefig('fruits_scatter_matrix')
```
### Create training and test sets
```
from sklearn.model_selection import train_test_split
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
```
### Apply scaling
```
from sklearn.preprocessing import MinMaxScaler
scaler = MinMaxScaler()
X_train = scaler.fit_transform(X_train)
X_test = scaler.transform(X_test)
```
## Build Models
### Logistic Regression
```
from sklearn.linear_model import LogisticRegression
logreg = LogisticRegression()
logreg.fit(X_train, y_train)
print('Accuracy of Logistic regression classifier on training set: {:.2f}'
.format(logreg.score(X_train, y_train)))
print('Accuracy of Logistic regression classifier on test set: {:.2f}'
.format(logreg.score(X_test, y_test)))
```
### Decision Tree
```
from sklearn.tree import DecisionTreeClassifier
clf = DecisionTreeClassifier().fit(X_train, y_train)
print('Accuracy of Decision Tree classifier on training set: {:.2f}'
.format(clf.score(X_train, y_train)))
print('Accuracy of Decision Tree classifier on test set: {:.2f}'
.format(clf.score(X_test, y_test)))
```
#### Setting max decision tree depth to help avoid overfitting
```
clf2 = DecisionTreeClassifier(max_depth=3).fit(X_train, y_train)
print('Accuracy of Decision Tree classifier on training set: {:.2f}'
.format(clf2.score(X_train, y_train)))
print('Accuracy of Decision Tree classifier on test set: {:.2f}'
.format(clf2.score(X_test, y_test)))
```
### K-Nearest Neighbors
```
from sklearn.neighbors import KNeighborsClassifier
knn = KNeighborsClassifier()
knn.fit(X_train, y_train)
print('Accuracy of K-NN classifier on training set: {:.2f}'
.format(knn.score(X_train, y_train)))
print('Accuracy of K-NN classifier on test set: {:.2f}'
.format(knn.score(X_test, y_test)))
```
### Linear Discriminant Analysis
```
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
lda = LinearDiscriminantAnalysis()
lda.fit(X_train, y_train)
print('Accuracy of LDA classifier on training set: {:.2f}'
.format(lda.score(X_train, y_train)))
print('Accuracy of LDA classifier on test set: {:.2f}'
.format(lda.score(X_test, y_test)))
```
### Gaussian Naive Bayes
```
from sklearn.naive_bayes import GaussianNB
gnb = GaussianNB()
gnb.fit(X_train, y_train)
print('Accuracy of GNB classifier on training set: {:.2f}'
.format(gnb.score(X_train, y_train)))
print('Accuracy of GNB classifier on test set: {:.2f}'
.format(gnb.score(X_test, y_test)))
```
### Support Vector Machine
```
from sklearn.svm import SVC
svm = SVC()
svm.fit(X_train, y_train)
print('Accuracy of SVM classifier on training set: {:.2f}'
.format(svm.score(X_train, y_train)))
print('Accuracy of SVM classifier on test set: {:.2f}'
.format(svm.score(X_test, y_test)))
```
The KNN algorithm was the most accurate model that we tried. The confusion matrix indicates the errors that were made.
Finally, the classification report provides a breakdown of each class by precision, recall, f1-score and support, showing excellent results (however, the test set was small).
```
from sklearn.metrics import classification_report
from sklearn.metrics import confusion_matrix
pred = knn.predict(X_test)
print(confusion_matrix(y_test, pred))
print(classification_report(y_test, pred))
```
### Plot the decision boundary of the k-nn classifier
```
import matplotlib.cm as cm
from matplotlib.colors import ListedColormap, BoundaryNorm
import matplotlib.patches as mpatches
import matplotlib.patches as mpatches
X = fruits[['mass', 'width', 'height', 'color_score']]
y = fruits['fruit_label']
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
def plot_fruit_knn(X, y, n_neighbors, weights):
X_mat = X[['height', 'width']].to_numpy()  # as_matrix() was removed in newer pandas
y_mat = y.to_numpy()
# Create color maps
cmap_light = ListedColormap(['#FFAAAA', '#AAFFAA', '#AAAAFF','#AFAFAF'])
cmap_bold = ListedColormap(['#FF0000', '#00FF00', '#0000FF','#AFAFAF'])
clf = KNeighborsClassifier(n_neighbors, weights=weights)
clf.fit(X_mat, y_mat)
# Plot the decision boundary by assigning a color in the color map
# to each mesh point.
mesh_step_size = .01 # step size in the mesh
plot_symbol_size = 50
x_min, x_max = X_mat[:, 0].min() - 1, X_mat[:, 0].max() + 1
y_min, y_max = X_mat[:, 1].min() - 1, X_mat[:, 1].max() + 1
xx, yy = np.meshgrid(np.arange(x_min, x_max, mesh_step_size),
np.arange(y_min, y_max, mesh_step_size))
Z = clf.predict(np.c_[xx.ravel(), yy.ravel()])
# Put the result into a color plot
Z = Z.reshape(xx.shape)
plt.figure()
plt.pcolormesh(xx, yy, Z, cmap=cmap_light)
# Plot training points
plt.scatter(X_mat[:, 0], X_mat[:, 1], s=plot_symbol_size, c=y, cmap=cmap_bold, edgecolor = 'black')
plt.xlim(xx.min(), xx.max())
plt.ylim(yy.min(), yy.max())
patch0 = mpatches.Patch(color='#FF0000', label='apple')
patch1 = mpatches.Patch(color='#00FF00', label='mandarin')
patch2 = mpatches.Patch(color='#0000FF', label='orange')
patch3 = mpatches.Patch(color='#AFAFAF', label='lemon')
plt.legend(handles=[patch0, patch1, patch2, patch3])
plt.xlabel('height (cm)')
plt.ylabel('width (cm)')
plt.title("4-Class classification (k = %i, weights = '%s')"
% (n_neighbors, weights))
plt.show()
plot_fruit_knn(X_train, y_train, 5, 'uniform')
plot_fruit_knn(X_train, y_train, 1, 'uniform')
plot_fruit_knn(X_train, y_train, 10, 'uniform')
plot_fruit_knn(X_test, y_test, 5, 'uniform')
k_range = range(1, 20)
scores = []
for k in k_range:
knn = KNeighborsClassifier(n_neighbors = k)
knn.fit(X_train, y_train)
scores.append(knn.score(X_test, y_test))
plt.figure()
plt.xlabel('k')
plt.ylabel('accuracy')
plt.scatter(k_range, scores)
plt.xticks([0,5,10,15,20])
```
For this particular dataset, we obtain the highest accuracy when k=5.
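That choice can be read off programmatically rather than by eye. A sketch with illustrative accuracies (on the real run, the `scores` list comes from the loop above):

```
import numpy as np

k_range = range(1, 20)
scores = [0.87, 0.93, 0.93, 0.93, 1.00, 0.93, 0.93, 0.93, 0.87, 0.87,
          0.87, 0.87, 0.87, 0.87, 0.80, 0.80, 0.80, 0.80, 0.80]  # illustrative values
best_k = list(k_range)[int(np.argmax(scores))]  # k with the highest test accuracy
print(best_k)  # -> 5
```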
| github_jupyter |
## Sharing Tutorial: Securely Collaborating in Graphistry
Investigations are better together. This tutorial walks through the new PyGraphistry method `.privacy()`, which enables API control of the new sharing features.
We walk through:
* Global defaults for `graphistry.privacy(mode='private', ...)`
* Compositional per-visualization settings via `g.privacy(...)`
* Inviting and notifying via `privacy(invited_users=[{...}])`
### Setup
You need PyGraphistry 0.20.0+ and a corresponding Graphistry server (2.37.20+).
```
#! pip install --user -q graphistry pandas
import graphistry, pandas as pd
graphistry.__version__
graphistry.register(
api=3, username='myuser', password='mypass',
#protocol='http', server='my.private-server.com'
)
#demo data
g = graphistry.edges(pd.DataFrame({
's': ['a', 'b', 'c'],
'd': ['b', 'c', 'a'],
'v': [1, 1, 2]
}), 's', 'd')
g = g.settings(url_params={'play': 0})
```
## Safe default: Unlisted & owner-editable
When creating a plot, Graphistry creates a dedicated URL with the following rules:
* Viewing: Unlisted - Only those given the link can access it
* Editing: Owner-only
The URL is unguessable, and the only webpage it is listed at is the creator's private gallery: https://hub.graphistry.com/datasets/ . That means it is as private as the set of people the owner chooses to share the URL with.
```
public_url = g.plot(render=False)
```
## Switching to fully private by default
Call `graphistry.privacy()` to default to stronger privacy. It sets:
* `mode='private'` - viewing only by owners and invitees
* `invited_users=[]` - no invitees by default
* `notify=False` - no email notifications during invitations
* `message=''`
By default, this means an explicit personal invitation is necessary for viewing. Subsequent plots in the session will default to this setting.
You can also explicitly set or override those as optional parameters.
```
graphistry.privacy()
# or equivalently, graphistry.privacy(mode='private', invited_users=[], notify=False, message='')
owner_only_url = g.plot(render=False)
```
## Local overrides
We can locally override settings, such as opting back in to public sharing for some visualizations:
```
public_g = g.privacy(mode='public')
public_url1 = public_g.plot(render=False)
#Ex: Inheriting public_g's mode='public'
public_g2 = public_g.name('viz2')
public_url2 = public_g2.plot(render=False)
#Ex: The global default set via .privacy() still applies
still_private_url = g.plot(render=False)
```
## Invitations and notifications
As part of the settings, we can permit specific individuals as viewers or editors and, optionally, send them an email notification.
```
VIEW = '10'
EDIT = '20'
shared_g = g.privacy(
mode='private',
notify=True,
invited_users=[{'email': 'partner1@site1.com', 'action': VIEW},
{'email': 'partner2@site2.org', 'action': EDIT}],
message='Check out this graph!')
shared_url = shared_g.plot(render=False)
```
The options can be configured globally or locally, just as we did with `mode`. For example, we might not want to send emails by default, just on specific plots:
```
graphistry.privacy(
mode='private',
notify=False,
invited_users=[{'email': 'partner1@site1.com', 'action': VIEW},
{'email': 'partner2@site2.org', 'action': EDIT}])
shared_url = g.plot(render=False)
notified_and_shared_url = g.privacy(notify=True).plot(render=False)
```
Even if we do not explicitly notify recipients, the objects will still appear in their gallery at https://hub.graphistry.com/datasets/
| github_jupyter |
```
import os
import torch
from pytorch_lightning import LightningModule, Trainer, seed_everything
import tensorflow as tf
import transformers
from transformers import BertTokenizer
from transformers import BertForSequenceClassification, AdamW, BertConfig
from transformers import get_linear_schedule_with_warmup
from keras.preprocessing.sequence import pad_sequences
from sklearn.model_selection import train_test_split
import pandas as pd
import numpy as np
import random
import time
import datetime
import re
import emoji
from soynlp.normalizer import repeat_normalize
!git clone https://github.com/e9t/nsmc.git
!ls nsmc -la
train = pd.read_csv("nsmc/ratings_train.txt", sep='\t')
test = pd.read_csv("nsmc/ratings_test.txt", sep='\t')
print(train.shape)
print(test.shape)
train.head(5)
```
# 1. BERT-BASE-MULTILINGUAL PRETRAINED MODEL
```
from transformers import AutoTokenizer
tokenizer = AutoTokenizer.from_pretrained('bert-base-multilingual-cased')
# Add the special tokens BERT expects to each sentence
sentences = ["[CLS] " + str(sentence) + " [SEP]" for sentence in train.document]
sentences[:10]
# Check the tokenization
lines = [
"아이펠 정책 피드백 감정분석팀 '정check'입니다.",
"구글 리뷰 데이터와 네이버 블로그 데이터를 사용해서 감정분석을 실행할 예정입니다.",
"열심히 하겠습니다."
]
encoded_lines2 = tokenizer(lines)
print(encoded_lines2)
tokenized_texts1 = [tokenizer.tokenize(str(line)) for line in lines]
print(tokenized_texts1[:3])
# Tokenize the sentences with the loaded tokenizer
tokenized_texts = [tokenizer.tokenize(sentence) for sentence in sentences]
print(tokenized_texts[:3])
# Check whether specific words exist in BERT's vocabulary
print(tokenizer.vocab['감정'])
print(tokenizer.vocab['블로그'])
# KeyError: the word is not in BERT's vocabulary. Interesting that the multilingual model lacks even the basic Korean word '감정' ('sentiment')
print(tokenizer.vocab['sentiment']) # the English word, however, is present
# Padding
# Set MAX_LEN larger than the maximum token length.
# Positions beyond the tokens, up to MAX_LEN, are filled with 0.
MAX_LEN = 128
input_ids = [tokenizer.convert_tokens_to_ids(x) for x in tokenized_texts]
input_ids = pad_sequences(input_ids, maxlen=MAX_LEN, dtype='long', truncating='post', padding='post')
input_ids[0]
# Attention mask: distinguishes real tokens from padding; padding positions are 0, real-token positions are 1
attention_masks = []
for seq in input_ids:
seq_mask = [float(i>0) for i in seq]
attention_masks.append(seq_mask)
print(attention_masks[0])
# Train/validation split
train_inputs, validation_inputs, train_labels, validation_labels = \
train_test_split(input_ids, train['label'].values, random_state=2, test_size=0.2)
# Split the attention masks into train and validation as well
train_masks, validation_masks, _, _ = train_test_split(attention_masks,
input_ids,
random_state=2,
test_size=0.2)
# Convert the data to PyTorch tensors
import torch
train_inputs = torch.tensor(train_inputs)
train_labels = torch.tensor(train_labels)
train_masks = torch.tensor(train_masks)
validation_inputs = torch.tensor(validation_inputs)
validation_labels = torch.tensor(validation_labels)
validation_masks = torch.tensor(validation_masks)
from torch.utils.data import TensorDataset, DataLoader, RandomSampler, SequentialSampler
batch_size = 32 # tried a larger batch size, but fixed at 32 due to out-of-memory errors
# Bundle inputs, masks, and labels with PyTorch's DataLoader
# During training, data is fetched one batch at a time
train_data = TensorDataset(train_inputs, train_masks, train_labels)
train_sampler = RandomSampler(train_data)
train_dataloader = DataLoader(train_data, sampler=train_sampler, batch_size=batch_size)
validation_data = TensorDataset(validation_inputs, validation_masks, validation_labels)
validation_sampler = SequentialSampler(validation_data)
validation_dataloader = DataLoader(validation_data, sampler=validation_sampler, batch_size=batch_size)
#pytorch dataloader
#PyTorch builds custom datasets with torch.utils.data.Dataset
#and loads data with torch.utils.data.DataLoader.
#TensorDataset : a Dataset that wraps tensors
#pytorch sampler
#sampler : controls how indices are drawn / SequentialSampler : always iterates over indices in the same order
#reference : https://subinium.github.io/pytorch-dataloader/
if torch.cuda.is_available():
device = torch.device("cuda")
print('There are %d GPU(s) available.' % torch.cuda.device_count())
print('We will use the GPU:', torch.cuda.get_device_name(0))
else:
device = torch.device("cpu")
print('No GPU available, using the CPU instead.')
# Outputs of BERT, corresponding to one output vector of size 768 for each input token
#outputs = model(input_ids,
# attention_mask=attention_mask,
# token_type_ids=token_type_ids,
# position_ids=position_ids,
# head_mask=head_mask)
# Grab the [CLS] token, used as an aggregate output representation for classification tasks
#pooled_output = outputs[1]
# Create dropout (for training)
#dropout = nn.Dropout(hidden_dropout_prob)
# Apply dropout between the output of the BERT transformers and our final classification layer
#pooled_output = dropout(pooled_output)
# Create our classification layer
#classifier = nn.Linear(hidden_size, num_labels)
# Feed the pooled output through the classifier
#logits = classifier(pooled_output)
# Create the BERT model
model = BertForSequenceClassification.from_pretrained("bert-base-multilingual-cased", num_labels=2)
model.cuda()
# num_labels=2 => binary classification
# BertForSequenceClassification loads the model architecture
# Fine-tuning is done with Hugging Face's BertForSequenceClassification()
# The final linear classifier performs binary classification
# (classifier): Linear(in_features=768, out_features=2, bias=True)
# Configure the optimizer
optimizer = AdamW(model.parameters(),
lr = 2e-5, # learning rate
eps = 1e-8 # epsilon to avoid division by zero
)
epochs = 4
total_steps = len(train_dataloader) * epochs
scheduler = get_linear_schedule_with_warmup(optimizer,
num_warmup_steps = 0,
num_training_steps = total_steps)
# Model training
# Accuracy calculation function
def flat_accuracy(preds, labels):
pred_flat = np.argmax(preds, axis=1).flatten()
labels_flat = labels.flatten()
return np.sum(pred_flat == labels_flat) / len(labels_flat)
# Elapsed-time formatting function
def format_time(elapsed):
# Round to the nearest second
elapsed_rounded = int(round((elapsed)))
# Format as hh:mm:ss
return str(datetime.timedelta(seconds=elapsed_rounded))
# Fix random seeds for reproducibility
seed_val = 42
random.seed(seed_val)
np.random.seed(seed_val)
torch.manual_seed(seed_val)
torch.cuda.manual_seed_all(seed_val)
# Reset gradients
model.zero_grad()
# Loop over epochs
for epoch_i in range(0, epochs):
# ========================================
# Training
# ========================================
print("")
print('======== Epoch {:} / {:} ========'.format(epoch_i + 1, epochs))
print('Training...')
# Record start time
t0 = time.time()
# Reset loss
total_loss = 0
# Switch to training mode
model.train()
# Iterate over batches from the DataLoader
for step, batch in enumerate(train_dataloader):
# Print progress
if step % 500 == 0 and not step == 0:
elapsed = format_time(time.time() - t0)
print(' Batch {:>5,} of {:>5,}. Elapsed: {:}.'.format(step, len(train_dataloader), elapsed))
# Move the batch to the GPU
batch = tuple(t.to(device) for t in batch)
# Unpack the batch
b_input_ids, b_input_mask, b_labels = batch
# Forward pass
outputs = model(b_input_ids,
token_type_ids=None,
attention_mask=b_input_mask,
labels=b_labels)
# Get the loss
loss = outputs[0]
# Accumulate total loss
total_loss += loss.item()
# Backward pass to compute gradients
loss.backward()
# Gradient clipping
torch.nn.utils.clip_grad_norm_(model.parameters(), 1.0)
# Update weights using the gradients
optimizer.step()
# Decay the learning rate via the scheduler
scheduler.step()
# Reset gradients
model.zero_grad()
# Compute average loss
avg_train_loss = total_loss / len(train_dataloader)
print("")
print(" Average training loss: {0:.2f}".format(avg_train_loss))
print(" Training epoch took: {:}".format(format_time(time.time() - t0)))
# ========================================
# Validation
# ========================================
print("")
print("Running Validation...")
# Record start time
t0 = time.time()
# Switch to evaluation mode
model.eval()
# Reset counters
eval_loss, eval_accuracy = 0, 0
nb_eval_steps, nb_eval_examples = 0, 0
# Iterate over batches from the DataLoader
for batch in validation_dataloader:
# Move the batch to the GPU
batch = tuple(t.to(device) for t in batch)
# Unpack the batch
b_input_ids, b_input_mask, b_labels = batch
# No gradient computation
with torch.no_grad():
# Forward pass
outputs = model(b_input_ids,
token_type_ids=None,
attention_mask=b_input_mask)
# Get the logits
logits = outputs[0]
# Move data to the CPU
logits = logits.detach().cpu().numpy()
label_ids = b_labels.to('cpu').numpy()
# Compute accuracy by comparing logits with labels
tmp_eval_accuracy = flat_accuracy(logits, label_ids)
eval_accuracy += tmp_eval_accuracy
nb_eval_steps += 1
print(" Accuracy: {0:.2f}".format(eval_accuracy/nb_eval_steps))
print(" Validation took: {:}".format(format_time(time.time() - t0)))
print("")
print("Training complete!")
# a batch size of 64 causes out-of-memory errors
# Preprocess the test set and build its DataLoader
BATCH_SIZE = 32
sentences = test['document']
sentences = ["[CLS] " + str(sentence) + " [SEP]" for sentence in sentences]
labels = test['label'].values
tokenized_texts = [tokenizer.tokenize(sent) for sent in sentences]
input_ids = [tokenizer.convert_tokens_to_ids(x) for x in tokenized_texts]
input_ids = pad_sequences(input_ids, maxlen=MAX_LEN, dtype="long", truncating="post", padding="post")
attention_masks = []
for seq in input_ids:
seq_mask = [float(i>0) for i in seq]
attention_masks.append(seq_mask)
test_inputs = torch.tensor(input_ids)
test_labels = torch.tensor(labels)
test_masks = torch.tensor(attention_masks)
test_data = TensorDataset(test_inputs, test_masks, test_labels)
test_sampler = RandomSampler(test_data)
test_dataloader = DataLoader(test_data, sampler=test_sampler, batch_size=BATCH_SIZE)
# Evaluate on the test data
# Record start time
t0 = time.time()
# Switch to evaluation mode
model.eval()
# Reset counters
eval_loss, eval_accuracy = 0, 0
nb_eval_steps, nb_eval_examples = 0, 0
# Iterate over batches from the DataLoader
for step, batch in enumerate(test_dataloader):
# Print progress
if step % 100 == 0 and not step == 0:
elapsed = format_time(time.time() - t0)
print(' Batch {:>5,} of {:>5,}. Elapsed: {:}.'.format(step, len(test_dataloader), elapsed))
# Move the batch to the GPU
batch = tuple(t.to(device) for t in batch)
# Unpack the batch
b_input_ids, b_input_mask, b_labels = batch
# No gradient computation
with torch.no_grad():
# Forward pass
outputs = model(b_input_ids,
token_type_ids=None,
attention_mask=b_input_mask)
# Get the logits
logits = outputs[0]
# Move data to the CPU
logits = logits.detach().cpu().numpy()
label_ids = b_labels.to('cpu').numpy()
# Compute accuracy by comparing logits with labels
tmp_eval_accuracy = flat_accuracy(logits, label_ids)
eval_accuracy += tmp_eval_accuracy
nb_eval_steps += 1
print("")
print("Accuracy: {0:.2f}".format(eval_accuracy/nb_eval_steps))
print("Test took: {:}".format(format_time(time.time() - t0)))
```
| github_jupyter |
```
# default_exp optimizer
#hide
%load_ext autoreload
%autoreload 2
#hide
from nbdev.showdoc import *
#export
import re
import IPython, graphviz
import numpy as np
import pandas as pd
from sklearn.ensemble import RandomForestClassifier, RandomForestRegressor
from sklearn.tree import export_graphviz
from mip import Model, xsum, minimize, BINARY
from oae.core import *
from oae.tree import *
SEED = 41
np.random.seed(SEED)
```
# Optimizer
> Module that helps solve the ILP (Integer Linear Programming) problem.
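Before reading the `mip`-based `Optimizer` below, it helps to see what a 0-1 (binary) program is. This stdlib-only sketch brute-forces a toy instance — the costs and the single constraint are made up for illustration; `mip` solves this same kind of problem at scale with proper solvers:

```
from itertools import product

# Toy 0-1 program: minimize sum(costs[i]*x[i]) subject to sum(x) >= 2, x binary
costs = [2, 1, 3]  # made-up objective coefficients
best_obj, best_x = None, None
for x in product([0, 1], repeat=len(costs)):
    if sum(x) >= 2:  # toy feasibility constraint: pick at least two items
        obj = sum(c * xi for c, xi in zip(costs, x))
        if best_obj is None or obj < best_obj:
            best_obj, best_x = obj, x
print(best_x, best_obj)  # -> (1, 1, 0) 3
```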
```
def draw_tree(t, df, size=10, ratio=0.6, precision=0):
s=export_graphviz(t, out_file=None, feature_names=df.columns, filled=True,
special_characters=True, rotate=True, precision=precision)
IPython.display.display(graphviz.Source(re.sub('Tree {',
f'Tree {{ size={size}; ratio={ratio}', s)))
Xtr, Xte, ytr, yte = get_example_dataset(SEED)
clf = RandomForestClassifier(n_estimators=5, max_depth=3, random_state=SEED, n_jobs=-1)
clf.fit(Xtr, ytr)
df_trn = pd.DataFrame(Xtr, columns=[f'f_{i}' for i in range(5)])
#draw_tree(clf.estimators_[0], df_trn, precision=3)
#export
class Optimizer:
def __init__(self, c_i_j, combine, z, class_):
self.c_i_j = c_i_j
self.combine = combine
self.z = z
self.class_ = class_
def solve(self, atm:ATMSKLEARN, x:Instance):
partitions = atm.v_i_j(x)
cost_matrix = self.c_i_j(partitions, x.content)
model = Model()
trees = atm.get_trees()
# make v_i_j and phi_t_k as boolean variable in the integer linear programming problem
v_i_j = [[model.add_var(var_type=BINARY) for j in range(len(partitions[i]))] for i in range(len(x.content))]
phi_t_k = [[model.add_var(var_type=BINARY) for j in range(len(atm.get_leaves(t.tree_)))] for t in trees]
# objective
model.objective = minimize(xsum(v_i_j[i][j] * cost_matrix[i][j] for i in range(len(v_i_j)) \
for j in range(len(v_i_j[i]))))
# constraints
w_t = atm.calculate_tree_weights()
h_t_k = atm.h_t_k(self.combine, class_=self.class_)
model += (xsum(phi_t_k[i][j] * h_t_k[i][j] * w_t[i] for i in range(len(trees)) \
for j in range(len(h_t_k[i]))) >= self.z)
#check if feature value belongs to one and only one partition
for i in range(len(x.content)):
model += xsum(v_i_j[i][j] for j in range(len(v_i_j[i]))) == 1
for i in range(len(trees)):
tree = trees[i].tree_
leaves = atm.get_leaves(tree)
pi = {kidx:atm.find_ancestors(tree, 0, k, p=[])[1] for kidx, k in enumerate(leaves)}
for j in range(len(leaves)):
ancestors = pi[j]
n_ancestors = len(ancestors) # |pi_t_k|
model += xsum(atm.predicates_mask(tree, a, partitions, x.types)[m] * v_i_j[tree.feature[a[0]]][m] \
for a in ancestors for m in range(len(v_i_j[tree.feature[a[0]]])))\
>= (phi_t_k[i][j] * n_ancestors)
# check if instance is present in one and only one leaf node in
# all trees
for i in range(len(trees)):
tree = trees[i].tree_
leaves = atm.get_leaves(tree)
model += xsum(phi_t_k[i][j] for j in range(len(leaves))) == 1
# optimizing
model.optimize()
v_i_j_sol = [[int(v_i_j[i][j].x) for j in range(len(v_i_j[i]))] for i in range(len(v_i_j))]
phi_t_k_sol = [[int(phi_t_k[i][j].x) for j in range(len(phi_t_k[i]))] for i in range(len(phi_t_k))]
return v_i_j_sol, phi_t_k_sol
#export
def cost_matrix(partitions, x, p=0):
C_i_j = []
for i in range(len(x)):
s = partitions[i]
feat_cost = []
for j in range(len(s)):
if len(s[j]) > 1:
if (x[i] >= s[j][0]) and (x[i] < s[j][1]):
feat_cost.append(0)
else:
feat_cost.append(min((x[i] - s[j][0]) ** p, (x[i] - s[j][1]) ** p))
else:
if x[i] == s[j][0]:
feat_cost.append(0)
else:
feat_cost.append(1)
C_i_j.append(feat_cost)
return C_i_j
atm = ATMSKLEARN(clf, np.vstack((Xtr, Xte)))
instance = Instance(Xte[10], ['numerical'] * 5)
opt = Optimizer(cost_matrix, combine, z=0.55, class_=1)
v_i_j_sol, phi_t_k_sol = opt.solve(atm, instance)
partitions = atm.v_i_j(instance)
orig_mask = atm.v_i_j_mask(partitions, instance); orig_mask
atm.suggest_changes(v_i_j_sol, instance)
X_transformed = atm.transform(v_i_j_sol, instance); X_transformed
clf.predict_proba(Xte[10:11])
clf.predict_proba(X_transformed)
```
## Export
```
#hide
from nbdev.export import notebook2script
notebook2script()
```
| github_jupyter |
## What is Exploratory Data Analysis (EDA)?
Exploratory Data Analysis (EDA) is used on the one hand to answer questions, test business assumptions, generate hypotheses for further analysis. On the other hand, you can also use it to prepare the data for modeling. The thing that these two probably have in common is a good knowledge of your data to either get the answers that you need or to develop an intuition for interpreting the results of future modeling.
There are a lot of ways to reach these goals: you can get a basic description of the data, visualize it, identify patterns in it, identify challenges of using the data, etc.
One of the things that you’ll often see when you’re reading about EDA is Data profiling. Data profiling is concerned with summarizing your dataset through descriptive statistics. You want to use a variety of measurements to better understand your dataset. The goal of data profiling is to have a solid understanding of your data so you can afterwards start querying and visualizing your data in various ways. However, this doesn’t mean that you don’t have to iterate: exactly because data profiling is concerned with summarizing your dataset, it is frequently used to assess the data quality. Depending on the result of the data profiling, you might decide to correct, discard or handle your data differently.
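A minimal profiling pass in pandas might look like this — a sketch on a tiny made-up frame; the same three calls apply to any DataFrame:

```
import numpy as np
import pandas as pd

df = pd.DataFrame({'a': [1.0, 2.5, np.nan, 4.0], 'b': ['x', 'y', 'y', None]})
print(df.describe(include='all'))  # summary statistics per column
print(df.isnull().sum())           # missing-value count per column
print(df.dtypes)                   # column types
```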
```
from sklearn.datasets import load_iris
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
# Load in the data with `read_csv()`
digits = pd.read_csv("http://archive.ics.uci.edu/ml/machine-learning-databases/optdigits/optdigits.tra",
header=None)
digits.head(5)
digits.shape
```
Note that in this case, you made use of read_csv() because the data happens to be in a comma-separated format. If you have files that have another separator, you can also consider using other functions to load in your data, such as read_table(), read_excel(), read_fwf() and read_clipboard, to read in general delimited files, Excel files, Fixed-Width Formatted data and data that was copied to the Clipboard, respectively.
Also, you’ll find read_sql() as one of the options to read in an SQL query or a database table into a DataFrame.
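For instance, a pipe-delimited file can be read with `read_table` by passing `sep` (sketched here on an in-memory string; the column names are made up):

```
import io
import pandas as pd

data = "fruit|mass\napple|192\nlemon|116\n"  # made-up pipe-separated rows
df = pd.read_table(io.StringIO(data), sep='|')
print(df.shape)  # -> (2, 2)
```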
```
# save load_iris() sklearn dataset to iris
# if you'd like to check dataset type use: type(load_iris())
# if you'd like to view list of attributes use: dir(load_iris())
iris = load_iris()
# np.c_ is the numpy concatenate function
# which is used to concat iris['data'] and iris['target'] arrays
# for pandas column argument: concat iris['feature_names'] list
# and string list (in this case one string); you can make this anything you'd like..
# the original dataset would probably call this ['Species']
iris_df = pd.DataFrame(data= np.c_[iris['data'], iris['target']],
columns= iris['feature_names'] + ['target'])
iris_df.head(15)
iris_df['target'].value_counts()
iris_df['petal width (cm)'].value_counts()
iris_df['petal width (cm)'].value_counts().hist()
iris_df['target'].value_counts()
iris_df['target'].value_counts().plot(kind='bar')
plt.xticks(rotation=25)
plt.show()
iris_df.describe()
```
You see that this function returns the count, mean, standard deviation, minimum and maximum values and the quantiles of the data. Note that, of course, there are many packages available in Python that can give you those statistics, including Pandas itself. Using this function is just one of the ways to get this information.
Also note that you certainly need to take the time to dive deeper into the descriptive statistics if you haven’t done so yet. You can use these descriptive statistics to begin to assess the quality of your data. Then you’ll be able to decide whether you need to correct, discard or deal with the data in another way. This is usually the data profiling step. This step in the EDA is meant to understand the data elements and their anomalies a bit better.
## First and Last DataFrame Rows
Now that you have got a general idea about your data set, it’s also a good idea to take a closer look at the data itself. With the help of the head() and tail() functions of the Pandas library, you can easily check out the first and last lines of your DataFrame, respectively.
```
iris_df.tail(5)
```
## Sampling The Data
If you have a large dataset, you might consider taking a sample of your data to get a feel for it quickly. A first, easy way to do this is with the sample() function included in Pandas, just like this:
```
iris_df.sample(5)
```
## A Closer Look At Your Data: Queries
Now that you have taken a quick look at your data and have seen what it’s about, you’re ready to dive a little bit deeper: it’s time to inspect the data further by querying the data.
This goes easily with the query() function, which allows you to test some very simple hypotheses that you have about your data, such as “Is the petal length usually greater than the sepal length?” or “Is the petal length sometimes equal to the sepal length?”.
```
iris_df.columns
iris_df = iris_df.rename(columns={"sepal length (cm)": "sepal_length", "sepal width (cm)": "sepal_width",
"petal length (cm)": "petal_length", "petal width (cm)": "petal_width",
"target": "class"})
iris_df.columns
# Petal length greater than sepal length?
iris_df.query('petal_length > sepal_length')
# Petal length equals sepal length?
iris_df.query('petal_length == sepal_length')
```
You’ll see that this hypothesis doesn’t hold. You get an empty DataFrame back as a result.
Note that this query can also be expressed with boolean indexing as iris_df[iris_df.petal_length > iris_df.sepal_length] (using the renamed columns from above).
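To see the equivalence between query() and boolean indexing, here is a tiny sketch on a toy frame (hypothetical values, not the actual iris measurements):

```python
import pandas as pd

toy = pd.DataFrame({"petal_length": [1.4, 5.1, 4.0],
                    "sepal_length": [5.0, 4.9, 6.2]})
# same hypothesis test, two equivalent spellings
via_query = toy.query("petal_length > sepal_length")
via_mask = toy[toy.petal_length > toy.sepal_length]
```

Both select exactly the same rows; query() is just a string-expression front end over the boolean mask.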
## The Challenges of Your Data
### Missing Values
Something that you also might want to check when you’re exploring your data is whether or not the data set has any missing values.
Examining this is important because when some of your data is missing, the data set can lose expressiveness, which can lead to weak or biased analyses. Practically, this means that when you’re missing values for certain features, the chances of your classification or predictions for the data being off only increase.
```
iris_df.isnull()
for col in iris_df.columns:
    print(col, " : ", sum(iris_df[col].isnull()))
```
To identify the rows that contain missing values, you can use isnull(). In the result that you’ll get back, you’ll see True or False appearing in each cell: True will indicate that the value contained within the cell is a missing value, False means that the cell contains a ‘normal’ value.
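The per-column counting loop above can also be collapsed into a single call; a small sketch on toy data:

```python
import numpy as np
import pandas as pd

toy = pd.DataFrame({"a": [1.0, np.nan, 3.0],
                    "b": [np.nan, np.nan, 5.0]})
# isnull() gives the boolean mask; summing it counts True values per column
missing_per_column = toy.isnull().sum()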
### How to handle missing data
You can delete the missing data: you either delete the whole record or you can just keep the records in which the features of interest are still present. Of course, you have to be careful with this procedure, as deleting data might also bias your analysis. That’s why you should ask yourself the question of whether the probability of certain data that is missing for a record is the same as for all other records. If the probability doesn’t vary record-per-record, deleting the missing data is a valid option.
Besides deletion, there are also so-called “imputation methods” that you can use to fill in missing cells with substituted values. If you already have some experience with statistics, you’ll know that imputation is the process of replacing missing data with substituted values such as the mean, the mode or the median. Here you need to think about whether you want to take, for example, the mean or median over all missing values of a variable, or whether you want to replace the missing values based on another variable. For example, for data in which records have a categorical feature such as “male” or “female”, you might want to consider that grouping before replacing the missing values, since the observations might differ between males and females. If this is the case, you might calculate the average of the female observations and fill in the missing values of other “female” records with that average.
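As a sketch of that group-wise idea, here is mean imputation conditioned on a categorical column, on a hypothetical frame (column names and values are made up for illustration):

```python
import numpy as np
import pandas as pd

people = pd.DataFrame({
    "sex": ["female", "male", "female", "male", "female"],
    "height": [162.0, np.nan, 158.0, 180.0, np.nan],
})
# fill each missing height with the mean height of that record's group
people["height"] = people["height"].fillna(
    people.groupby("sex")["height"].transform("mean"))
```

Each NaN is replaced by the mean of its own group rather than the global mean, which is exactly the male/female scenario described above.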
### Filling Missing Values
```
property_df = pd.read_csv('property data.csv')
property_df.shape
property_df
property_df.isnull()
for col in property_df.columns:
    print(col, " : ", sum(property_df[col].isnull()))
# Making a list of missing value types
missing_values = ["n/a", "na", "--"]
property_df = pd.read_csv("property data.csv", na_values = missing_values)
for col in property_df.columns:
    print(col, " : ", sum(property_df[col].isnull()))
property_df
int('age')  # raises ValueError: not a number
int('12')   # fine: returns 12
# Detecting numbers: OWN_OCCUPIED should be Y/N, so a numeric entry is invalid
cnt = 0
for row in property_df['OWN_OCCUPIED']:
    try:
        int(row)
        property_df.loc[cnt, 'OWN_OCCUPIED'] = np.nan
    except ValueError:
        pass
    cnt += 1
property_df
# Calculate the mean (NaNs are skipped)
mean = np.mean(property_df['SQ_FT'])
# Replace missing values with the mean (returns a new Series; assign back to persist)
property_df['SQ_FT'].fillna(mean)
mean
```
Pass 'ffill' or 'bfill' as the method to fill values forward or backward, respectively (newer pandas versions expose these directly as the .ffill() and .bfill() methods).
```
property_df['SQ_FT']
property_df['SQ_FT'].fillna(method='ffill')
property_df['SQ_FT'].fillna(method='bfill')
```
#### Drop Labels With Missing Values
To exclude rows that contain missing values (or, on a DataFrame, entire columns via the axis argument), you can make use of Pandas’ dropna() function:
```
property_df['SQ_FT'].dropna(axis=0)
```
#### Interpolation
Alternatively, you can also choose to interpolate missing values: the interpolate() function will perform a linear interpolation at the missing data points to “guess” the value that is most likely to be filled in.
```
property_df['SQ_FT'].interpolate()
```
## Correlation
```
# Pearson correlation
iris_df.corr()
# Kendall Tau correlation
iris_df.corr('kendall')
# Spearman Rank correlation
iris_df.corr('spearman')
corr = iris_df.corr()
corr.style.background_gradient(cmap='Wistia')
# 'RdBu_r' & 'BrBG' are other good diverging colormaps
```
## pandas-profiling
Git: https://github.com/pandas-profiling/pandas-profiling
Meteorites example: Source of data: https://data.nasa.gov/Space-Science/Meteorite-Landings/gh4g-9sfh
Generates profile reports from a pandas DataFrame. The pandas df.describe() function is great but a little basic for serious exploratory data analysis.
For each column the following statistics - if relevant for the column type - are presented in an interactive HTML report:
- Essentials: type, unique values, missing values
- Quantile statistics like minimum value, Q1, median, Q3, maximum, range, interquartile range
- Descriptive statistics like mean, mode, standard deviation, sum, median absolute deviation, coefficient of variation, kurtosis, skewness
- Most frequent values
- Histogram
- Correlations: highlighting of highly correlated variables, Spearman and Pearson matrices
```
import pandas_profiling
df=pd.read_csv("Meteorite_Landings.csv", parse_dates=['year'], encoding='UTF-8')
# Note: Pandas does not support dates before 1880, so we ignore these for this analysis
df['year'] = pd.to_datetime(df['year'], errors='coerce')
# Example: Constant variable
df['source'] = "NASA"
# Example: Boolean variable
df['boolean'] = np.random.choice([True, False], df.shape[0])
# Example: Mixed with base types
df['mixed'] = np.random.choice([1, "A"], df.shape[0])
# Example: Highly correlated variables
df['reclat_city'] = df['reclat'] + np.random.normal(scale=5,size=(len(df)))
# Example: Duplicate observations
duplicates_to_add = pd.DataFrame(df.iloc[0:10])
duplicates_to_add['name'] = duplicates_to_add['name'] + " copy"
df = pd.concat([df, duplicates_to_add], ignore_index=True)  # DataFrame.append was removed in pandas 2.0
df.shape
df.head(5)
df.tail(5)
df['boolean'].value_counts()
```
The types supported are split in standard types and special types.
- Standard types:
* Categorical (`TYPE_CAT`): the default type if no other one can be determined
* Numerical (`TYPE_NUM`): if it contains numbers
* Boolean (`TYPE_BOOL`): at this time only detected if it contains boolean values, see todo
* Date (`TYPE_DATE`): if it contains datetime
- Special types:
* Constant (`S_TYPE_CONST`): if all values in the variable are equal
* Unique (`S_TYPE_UNIQUE`): if all values in the variable are different
* Unsupported (`S_TYPE_UNSUPPORTED`): if the variable is unsupported
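As a rough sketch of how such a per-column decision could be made (my own simplification for illustration — not pandas-profiling's actual implementation):

```python
import pandas as pd

def rough_type(s: pd.Series) -> str:
    """Crude re-creation of the type decision above, for illustration only."""
    vals = s.dropna()
    if vals.nunique() == 1:
        return "S_TYPE_CONST"    # all values equal
    if vals.nunique() == len(vals):
        return "S_TYPE_UNIQUE"   # all values different
    if pd.api.types.is_bool_dtype(vals):
        return "TYPE_BOOL"
    if pd.api.types.is_datetime64_any_dtype(vals):
        return "TYPE_DATE"
    if pd.api.types.is_numeric_dtype(vals):
        return "TYPE_NUM"
    return "TYPE_CAT"            # the default fallback
```

Note the special types are checked first, mirroring the ordering implied by the list above.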
### Inline report without saving object
```
pandas_profiling.ProfileReport(df)
```
### Save report to file
```
pfr = pandas_profiling.ProfileReport(df)
pfr.to_file("example.html")
```
#### Print existing ProfileReport object inline
```
pfr
```
### Mask Analysis custom function
The mask analysis or string pattern is useful for fields like city, postal code, phone, etc. It shows us how the fields have been populated and we can infer some data quality issues.
Rules:
- Lower-case letter returns 'l'
- Capital letter returns 'L'
- Number returns 'D'
- Space returns 's'
- Special character returns itself
- Missing value returns '-null-'
- Examples:
  - 'Van' returns 'Lll'
  - 'VAN' returns 'LLL'
  - 'Van BC' returns 'LllsLL'
  - '+1 123-1234-5555' returns '+DsDDD-DDDD-DDDD'
The standard for the Canadian Postal Code should be 'LDLsDLD'
```
# Read the data set
df = pd.read_csv('2017business_licences.csv')
def mask_profile(series):
    '''
    Make a mask profile of a field by converting the ascii value of the character as below.
    a to z --> returns "l" for letter
    A to Z --> returns "L" for Letter
    0 to 9 --> returns "D" for Digit
    'space' --> returns "s" for space
    Special characters --> keep original
    Requirement: pandas
    Input: pandas Series
    '''
    def getMask(field):
        mask = ''
        if str(field) == 'nan':
            mask = '-null-'
        else:
            for character in str(field):
                if 65 <= ord(character) <= 90:  # ascii 65 to 90 are capital letters
                    mask = mask + 'L'
                elif 97 <= ord(character) <= 122:  # ascii 97 to 122 are lower case letters
                    mask = mask + 'l'
                elif 48 <= ord(character) <= 57:  # ascii 48 to 57 are digits
                    mask = mask + 'D'
                elif ord(character) == 32:  # space
                    mask = mask + 's'
                else:
                    mask = mask + character
        return mask
    value = series.apply(getMask).value_counts()
    percentage = round(series.apply(getMask).value_counts(normalize=True) * 100, 2)
    result = pd.DataFrame(value)
    result['%'] = pd.DataFrame(percentage)
    result.columns = ['Count', '%']
    return result
# Use mask_profile(pass a pandas series)
# I'm using head(10) to show only the top 10 results
mask_profile(df['PostalCode']).head(10)
mask_profile(df['PostalCode'])
```
```
%matplotlib inline
import numpy as np
import pandas as pd
from sklearn.model_selection import train_test_split
from sklearn.ensemble import BaggingClassifier
from sklearn.tree import DecisionTreeClassifier
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score, confusion_matrix, classification_report
from sklearn.preprocessing import minmax_scale
from sklearn.ensemble import ExtraTreesClassifier
from joblib import Parallel, delayed
from sklearn.preprocessing import LabelEncoder
from imblearn.over_sampling import RandomOverSampler, ADASYN
__AUTHOR__ = 'Kirgsn'
class Reducer:
    """
    Class that takes a dict of increasingly big numpy datatypes to transform
    the data of a pandas dataframe to in order to save memory usage.
    """
    memory_scale_factor = 1024**2  # memory in MB

    def __init__(self, conv_table=None):
        """
        :param conv_table: dict with np.dtypes-strings as keys
        """
        if conv_table is None:
            self.conversion_table = \
                {'int': [np.int8, np.int16, np.int32, np.int64],
                 'uint': [np.uint8, np.uint16, np.uint32, np.uint64],
                 'float': [np.float16, np.float32, ]}
        else:
            self.conversion_table = conv_table

    def _type_candidates(self, k):
        for c in self.conversion_table[k]:
            i = np.iinfo(c) if 'int' in k else np.finfo(c)
            yield c, i

    def reduce(self, df, verbose=False):
        """Takes a dataframe and returns it with all data transformed to the
        smallest necessary types.

        :param df: pandas dataframe
        :param verbose: If True, outputs more information
        :return: pandas dataframe with reduced data types
        """
        ret_list = Parallel(n_jobs=-1)(delayed(self._reduce)(df[c], c, verbose)
                                       for c in df.columns)
        return pd.concat(ret_list, axis=1)

    def _reduce(self, s, colname, verbose):
        # skip NaNs
        if s.isnull().any():
            if verbose:
                print(colname, 'has NaNs - Skip..')
            return s
        # detect kind of type
        coltype = s.dtype
        if np.issubdtype(coltype, np.integer):
            conv_key = 'int' if s.min() < 0 else 'uint'
        elif np.issubdtype(coltype, np.floating):
            conv_key = 'float'
        else:
            if verbose:
                print(colname, 'is', coltype, '- Skip..')
            return s
        # find the smallest candidate that can hold the column's range
        for cand, cand_info in self._type_candidates(conv_key):
            if s.max() <= cand_info.max and s.min() >= cand_info.min:
                if verbose:
                    print('convert', colname, 'to', str(cand))
                return s.astype(cand)
        # reaching this code is bad. Probably there are inf or other extreme values
        print(("WARNING: {} "
               "doesn't fit the grid with \nmax: {} "
               "and \nmin: {}").format(colname, s.max(), s.min()))
        print('Keeping the original dtype..')
        return s  # returning the column unchanged is safer than dropping it
df = pd.read_csv("train.csv")
reducer = Reducer()
df = reducer.reduce(df)
c = df['DepartmentDescription'].isnull()
df.loc[c, 'DepartmentDescription'] = 'Na'
df['sum'] = df.assign(f=df.groupby(['VisitNumber', 'DepartmentDescription'])['ScanCount'].transform(sum))['f']
df_s = df.iloc[:, [0, 1, 5, -1]]
df_s = df_s.drop_duplicates(['VisitNumber', 'DepartmentDescription']).reset_index(drop=True)
# weight table
data = df.iloc[:, [0, 1, 5, -1]].drop_duplicates(['VisitNumber', 'DepartmentDescription'])
sub = data.groupby(['TripType', 'DepartmentDescription'], as_index=False).agg('sum').iloc[:, [0,1,3]]
c = sub['TripType'] == sub['TripType'].unique()[0]
base = minmax_scale(sub[c]['sum'])
for i in range(1, 38):
    c = sub['TripType'] == sub['TripType'].unique()[i]
    base = np.hstack([base, minmax_scale(sub[c]['sum'])])
sub['minmax'] = pd.Series(base)
```
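For simple cases without NaNs, pandas' built-in downcasting achieves a memory reduction similar to the hand-rolled Reducer class above; a minimal, hypothetical sketch:

```python
import numpy as np
import pandas as pd

small = pd.DataFrame({"a": [1, 2, 3], "b": [0.5, 1.5, 2.5]})
# pd.to_numeric picks the smallest dtype that can hold each column's range
for col in small.select_dtypes("integer").columns:
    small[col] = pd.to_numeric(small[col], downcast="unsigned")
for col in small.select_dtypes("float").columns:
    small[col] = pd.to_numeric(small[col], downcast="float")
```

The Reducer is still useful when you want parallelism across columns or a custom candidate table, but for one-off scripts this is much shorter.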
# Distinguishing TripType 41, 42, 43, 44
- Goal: classify 41, 42, 43, 44 using Dept (DepartmentDescription) alone
- Conclusion: not possible -> presumably strongly influenced by FinelineNumber and Upc
### test_x1: labeled as 41, ..., 44, TheRest
- Total number of classes: 5
- Compute Dept weights for each TT
- Build columns of ScanCount summed after multiplying by the Dept weights of TT 41, ..., 44
### test_x1.2: classes other than 41, ..., 44 excluded
- Total number of classes: 4
- Same feature columns as above
### test_x2: labeled as 41, ..., 44, TheRest
- Total number of classes: 5
- Build columns of ScanCount summed after multiplying by the Dept weights of TT 3, ..., 999
### test_x3: classes other than 41, ..., 44 excluded
- Total number of classes: 4
- Dept crosstab of ScanCount multiplied by the Dept weights of TT 41, ..., 44
# test_X 1
```
ls = [41, 42, 43, 44]
temp = df_s.copy()
for i in ls:
    c = sub['TripType'] == i
    temp = temp.merge(sub[c].iloc[:, [1, -1]], how='outer', on='DepartmentDescription')
    col_name = "{}_sum".format(i)
    temp.loc[:, col_name] = temp['sum'] * temp['minmax']
    temp.drop('minmax', axis=1, inplace=True)
test = temp.fillna(-1.0).groupby(['VisitNumber', 'TripType'], as_index=False).agg(sum)
x = test.merge(df_s.pivot('VisitNumber', 'DepartmentDescription', 'sum').fillna(0), on='VisitNumber').set_index('VisitNumber').drop(['TripType', 'sum', 'HEALTH AND BEAUTY AIDS'], axis=1)
y = test['TripType']
# mapping
lab4 = {41: 1, 42: 2, 43:3, 44: 4}
y_lab = y.map(lab4).fillna(5)
x.sample(5)
x.info()
```
---
### oversampling
```
X_train, X_test, y_train, y_test = train_test_split(x, y_lab, random_state=0)
# # ADASYN
X_samp, y_samp = ADASYN(random_state=0).fit_sample(X_train, y_train)
# oversampling
# X_samp, y_samp = RandomOverSampler(random_state=0).fit_sample(X_train, y_train)
```
### gbm
```
import lightgbm
gbm3 = lightgbm.LGBMClassifier(n_estimators=200, max_depth=2, random_state=0)
gbm3.fit(X_samp, y_samp)
print(classification_report(y_test, gbm3.predict(X_test)))
gbm3.get_params
from sklearn.model_selection import GridSearchCV
grid_param = {
'n_estimators': [100, 200],
'max_depth': [2,3,4,6]
}
gd_sr2 = GridSearchCV(estimator=gbm3,
param_grid=grid_param,
scoring='neg_log_loss',
cv=5,
n_jobs=-1)
gd_sr2.fit(X_samp, y_samp)
print(classification_report(y_test, gd_sr2.predict(X_test)))
print(confusion_matrix(y_test, gd_sr2.best_estimator_.predict(X_test)))
```
### gbm2
```
gbm2 = lightgbm.LGBMClassifier(n_estimators=300, max_depth=2, random_state=0)
gbm2.fit(X_samp, y_samp)
print(classification_report(y_test, gbm2.predict(X_test)))
```
### random forest
```
rf2 = RandomForestClassifier(n_estimators=300, n_jobs=-1, criterion='gini', max_depth=3, )
rf2.fit(X_samp, y_samp)
print(classification_report(y_test, rf2.predict(X_test)))
```
### knn
```
from sklearn.neighbors import KNeighborsClassifier
knn = KNeighborsClassifier(n_neighbors = 5, n_jobs=-2)
knn.fit(X_samp,y_samp)
print(classification_report(y_test, knn.predict(X_test)))
print(confusion_matrix(y_test, knn.predict(X_test)))
```
---
# test_X1.2
```
idx = np.where(y_lab != 5)[0]
x2 = x.iloc[idx, :]
y2 = y_lab[idx]
X_train, X_test, y_train, y_test = train_test_split(x2, y2, random_state=0)
# # ADASYN
# X_samp, y_samp = ADASYN(random_state=0).fit_sample(X_train, y_train)
x2.tail()
x2.info()
```
### gbm
```
import lightgbm
gbm4 = lightgbm.LGBMClassifier(n_estimators=300, max_depth=3, random_state=0)
gbm4.fit(X_train, y_train)
print(classification_report(y_test, gbm4.predict(X_test)))
```
### Random forest
```
rf2 = RandomForestClassifier(n_estimators=500, n_jobs=-1, criterion='gini', max_depth=3, )
rf2.fit(X_train, y_train)  # ADASYN is commented out above, so fit on the plain split
print(classification_report(y_test, rf2.predict(X_test)))
```
### Extreme forest
```
# fitting
er = ExtraTreesClassifier(n_jobs=-1, n_estimators=300, random_state=0, max_depth=3)
er.fit(X_train, y_train)  # ADASYN is commented out above, so fit on the plain split
print(classification_report(y_test, er.predict(X_test)))
```
---
# X_test2
```
ls = df['TripType'].unique()
temp = df_s.copy()
for i in ls:
    c = sub['TripType'] == i
    temp = temp.merge(sub[c].iloc[:, [1, -1]], how='outer', on='DepartmentDescription')
    col_name = "{}_sum".format(i)
    temp.loc[:, col_name] = temp['sum'] * temp['minmax']
    temp.drop('minmax', axis=1, inplace=True)
test = temp.fillna(-0.000001).groupby(['VisitNumber', 'TripType'], as_index=False).agg(sum)
x = test.merge(df_s.pivot('VisitNumber', 'DepartmentDescription', 'sum').fillna(0), on='VisitNumber').set_index('VisitNumber').drop(['TripType', 'sum', 'HEALTH AND BEAUTY AIDS'], axis=1)
y = test['TripType']
# mapping
lab4 = {41: 1, 42: 2, 43:3, 44: 4}
y_lab = y.map(lab4).fillna(5)
x.tail()
x.info(), x.columns
```
# sampling
```
# mapping
lab4 = {41: 1, 42: 2, 43:3, 44: 4}
y_lab = y.map(lab4).fillna(5)
# split
X_train, X_test, y_train, y_test = train_test_split(x, y_lab, random_state=0)
# # ADASYN
# X_samp, y_samp = ADASYN(random_state=0).fit_sample(X_train, y_train)
# oversampling
X_samp, y_samp = RandomOverSampler(random_state=0).fit_sample(X_train, y_train)
```
# fitting
```
# fitting
er = ExtraTreesClassifier(n_jobs=-1, n_estimators=250, random_state=0)
er.fit(X_samp, y_samp)
```
# validation
## Extra
```
# fitting
er = ExtraTreesClassifier(n_jobs=-1, n_estimators=250, random_state=0)
er.fit(X_samp, y_samp)
print(classification_report(y_test, er.predict(X_test)))
# ADASYNC
er = ExtraTreesClassifier(n_jobs=-1, n_estimators=250, random_state=0)
er.fit(X_samp, y_samp)
print(classification_report(y_test, er.predict(X_test)))
```
## Random
```
# random over
rf = RandomForestClassifier(n_estimators=300, n_jobs=-1, criterion='gini', max_depth=3, )
rf.fit(X_samp, y_samp)
print(classification_report(y_test, rf.predict(X_test)))
# ADASYN
rf = RandomForestClassifier(n_estimators=300, n_jobs=-1, criterion='gini', max_depth=3, )
rf.fit(X_samp, y_samp)
print(classification_report(y_test, rf.predict(X_test)))
```
# linear
```
from sklearn.linear_model import LogisticRegression
lr = LogisticRegression(C=100)
lr.fit(X_samp, y_samp)
print(classification_report(y_test, lr.predict(X_test)))
```
# lightGBM
```
import lightgbm
gbm = lightgbm.LGBMClassifier(n_estimators=500, max_depth=2, random_state=0)
gbm.fit(X_samp, y_samp)
print(classification_report(y_test, gbm.predict(X_test)))
```
# grid search
```
from sklearn.model_selection import GridSearchCV
gbm.get_params
grid_param = {
'n_estimators': [100, 200, 250, 300, 400],
'criterion': ['gini', 'entropy'],
'bootstrap': [True, False]
}
gd_sr = GridSearchCV(estimator=gbm,
param_grid=grid_param,
scoring='accuracy',
cv=5,
n_jobs=-1)
gd_sr.fit(X_samp, y_samp)
gd_sr.best_estimator_
gd_sr.best_params_
gd_sr.best_score_
gd_gbm = gd_sr.best_estimator_
print(classification_report(y_test, gd_gbm.predict(X_test)))
gd_gbm.feature_importances_
gd_gbm.learning_rate
gd_gbm.get_params
X_test.columns
```
---
# X_test3
- 41,42,43,44
```
idx = np.where(y_lab != 5)[0]
x2 = x.iloc[idx, :]
y2 = y_lab[idx].reset_index(drop=True)
x3 = x2.drop(x2.columns[:4], axis=1).copy()
b = x3.stack().reset_index().rename(columns={'level_1': 'DepartmentDescription', 0:'ScanCount'}).reset_index(drop=True)
bc = b.copy()
for i in [41, 42, 43, 44]:
    criteria = sub['TripType'] == i
    a = sub[criteria].loc[:, ['DepartmentDescription', 'minmax']]
    temp = b.merge(a, how='outer', on='DepartmentDescription')
    col_name = "{}_sum".format(i)
    bc[col_name] = temp['ScanCount'] * temp['minmax']
bc = bc.fillna(-0.00001)
x_test3 = pd.concat([bc.pivot('VisitNumber', 'DepartmentDescription', '41_sum'),\
bc.pivot('VisitNumber', 'DepartmentDescription', '42_sum'),\
bc.pivot('VisitNumber', 'DepartmentDescription', '43_sum'),\
bc.pivot('VisitNumber', 'DepartmentDescription', '44_sum'),], axis=1)
x_test3.tail()
x_test3.info(), x_test3.columns
X_train, X_test, y_train, y_test = train_test_split(x_test3, y2, random_state=0)
X_samp, y_samp = RandomOverSampler(random_state=0).fit_sample(X_train, y_train)
# random over
rf = RandomForestClassifier(n_estimators=500, n_jobs=-1, criterion='gini', max_depth=3, )
rf.fit(X_samp, y_samp)
print(classification_report(y_test, rf.predict(X_test)))
# extrem forest
er = ExtraTreesClassifier(n_jobs=-1, n_estimators=250, random_state=0)
er.fit(X_samp, y_samp)
print(classification_report(y_test, er.predict(X_test)))
import lightgbm
gbm5 = lightgbm.LGBMClassifier(n_estimators=300, max_depth=3, random_state=0)
gbm5.fit(X_samp, y_samp)
print(classification_report(y_test, gbm5.predict(X_test)))
# logistic regression
from sklearn.linear_model import LogisticRegression
lr = LogisticRegression(C=0.1)
lr.fit(X_samp, y_samp)
print(classification_report(y_test, lr.predict(X_test)))
knn = KNeighborsClassifier(n_neighbors = 5, n_jobs=-1)
knn.fit(X_samp,y_samp)
print(classification_report(y_test, knn.predict(X_test)))
from sklearn.tree import DecisionTreeClassifier
tree = DecisionTreeClassifier(max_depth=30000, min_samples_leaf=1)
tree.fit(X_samp,y_samp)
print(tree.score(X_samp,y_samp))
print(classification_report(y_test, tree.predict(X_test)))
```
---
# submission data
```
df_test = pd.read_csv("test.csv")
reducer = Reducer()
df_test = reducer.reduce(df_test)
df_test.drop(['Weekday', 'Upc', 'FinelineNumber'], inplace=True, axis=1)
c = df_test['DepartmentDescription'].isnull()
df_test.loc[c, 'DepartmentDescription'] = 'Na'
df_test['sum'] = df_test.assign(f=df_test.groupby(['VisitNumber', 'DepartmentDescription'])['ScanCount'].transform(sum))['f']
df_s1 = df_test.iloc[:, [0,2, -1]]
df_s1 = df_s1.drop_duplicates(['VisitNumber', 'DepartmentDescription']).reset_index(drop=True)
temp1 = df_test.copy()
for i in ls:
    c = sub['TripType'] == i
    temp1 = temp1.merge(sub[c].iloc[:, [1, -1]], how='outer', on='DepartmentDescription')
    col_name = "{}_sum".format(i)
    temp1.loc[:, col_name] = temp1['sum'] * temp1['minmax']
    temp1.drop('minmax', axis=1, inplace=True)
test1 = temp1[:-1].fillna(-0.000001).groupby('VisitNumber',).agg(sum).iloc[:, 2:]
test_x = test1.merge(df_s1.pivot('VisitNumber', 'DepartmentDescription', 'sum').fillna(0), on='VisitNumber')
not_in_train = [i for i in test_x.columns if i not in x.columns]
not_in_test = [i for i in x.columns if i not in test_x.columns]
pred = er.predict(test_x)
pred_proba = er.predict_proba(test_x)
ans = pd.concat([df_test['VisitNumber'].drop_duplicates().reset_index(drop=True), pd.Series(pred)], axis=1)
# ans.to_csv("clf_ans.csv", header=True, index=False)
# result = lightgbm_model.predict(total_test)
samplesub = pd.read_csv('sample_submission.csv')
subform_df_columns = samplesub.columns[1:]
result_df = pd.DataFrame(pred_proba)
# result_df.columns = subform_df_columns
# subform_df = pd.concat([test_x.index.reset_index()['VisitNumber'], result_df], axis=1)
# subform_df.set_index('VisitNumber', inplace=True)
# subform_df.tail()
proba = pd.merge(pd.DataFrame(pred_proba, test_x.index, subform_df_columns[-5:]), ans, on='VisitNumber')
c = proba[0] != 5
ans2 = proba[c].drop(0, axis=1)
ans2.head()
ans1 = pd.read_csv("clf.csv")
ans1.head()
ans_final = pd.merge(ans1, ans2, how='outer').fillna(0).sort_values('VisitNumber')
col_order = ['VisitNumber', 'TripType_3', 'TripType_4', 'TripType_5', 'TripType_6',
'TripType_7', 'TripType_8', 'TripType_9', 'TripType_12', 'TripType_14',
'TripType_15', 'TripType_18', 'TripType_19', 'TripType_20',
'TripType_21', 'TripType_22', 'TripType_23', 'TripType_24',
'TripType_25', 'TripType_26', 'TripType_27', 'TripType_28',
'TripType_29', 'TripType_30', 'TripType_31', 'TripType_32',
'TripType_33', 'TripType_34', 'TripType_35', 'TripType_36',
'TripType_37', 'TripType_38', 'TripType_39', 'TripType_40',
'TripType_41', 'TripType_42', 'TripType_43', 'TripType_44',
'TripType_999',]
ans_final1 = ans_final.reindex(col_order, axis=1)
ans_final1.to_csv('submission1.csv', index=False)
pd.read_csv("lightgbm_submission__plz_.csv")
X_train
```
# Clustering TCR Sequences
Following featurization of the TCRSeq data, users will often want to cluster the TCRSeq data to identify possible antigen-specific clusters of sequences. In order to do this, we have provided multiple ways for clustering your TCR sequences.
## Phenograph Clustering
The first method we will explore uses a network-graph-based clustering algorithm called Phenograph (https://github.com/jacoblevine/PhenoGraph). This method automatically determines the number of clusters in the data by maximizing the modularity of the network graph assembled from the data. Of note, this algorithm is very fast and is useful when there are thousands to tens of thousands of sequences to cluster. However, clusters produced by this method tend to be quite large.
First, we will load data and train the VAE.
```
%%capture
import sys
sys.path.append('../../')
from DeepTCR.DeepTCR import DeepTCR_U
# Instantiate training object
DTCRU = DeepTCR_U('Tutorial')
#Load Data from directories
DTCRU.Get_Data(directory='../../Data/Murine_Antigens',Load_Prev_Data=False,aggregate_by_aa=True,
aa_column_beta=0,count_column=1,v_beta_column=2,j_beta_column=3)
#Train VAE
DTCRU.Train_VAE(Load_Prev_Data=False)
```
We will then run the clustering command.
```
DTCRU.Cluster(clustering_method='phenograph')
```
Following clustering, we can view the clustering solutions by looking at the object variable called Cluster_DFs.
```
DFs = DTCRU.Cluster_DFs
print(DFs[0])
```
We can also choose to save these results to a directory called Name of object + '_Results' by setting the write_to_sheets parameter to True. There, we can find the proportions of every sample in each cluster and csv files for every cluster detailing the sequence information with other information as well.
```
DTCRU.Cluster(clustering_method='phenograph',write_to_sheets=True)
```
We can also employ two other clustering algorithms: hierarchical clustering and DBSCAN. For these methods, we can either control the algorithm's settings ourselves, such as the threshold parameter (t) and the criterion/linkage for hierarchical clustering, or let the method determine the optimal threshold parameter by maximizing the silhouette score of the clustering solution. First, we run hierarchical clustering, letting the program determine the right threshold parameters:
## Hierarchical Clustering
```
DTCRU.Cluster(clustering_method='hierarchical')
```
Or we can set the parameters ourselves.
```
DTCRU.Cluster(clustering_method='hierarchical',criterion='distance',t=1.0)
```
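The criterion and t arguments appear to mirror SciPy's hierarchy API; independent of DeepTCR, the distance-threshold idea can be sketched directly on toy 1-D points:

```python
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster

# two tight pairs of points far apart from each other
points = np.array([[0.0], [0.1], [5.0], [5.1]])
Z = linkage(points, method="ward")
# cut the dendrogram at distance t=1.0: each tight pair becomes one cluster
labels = fcluster(Z, t=1.0, criterion="distance")
```

Raising t merges clusters (fewer, larger groups); lowering it splits them, which is exactly the knob the t parameter exposes above.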
## DBSCAN clustering
And to use DBSCAN...
```
DTCRU.Cluster(clustering_method='dbscan')
```
In case there are too many sequences to cluster efficiently, one can downsample the data, cluster the sample, and then use a k-nearest-neighbor algorithm to classify the remaining sequences. Here, we will downsample to 500 sequences for clustering and then assign the rest via KNN.
```
DTCRU.Cluster(clustering_method='phenograph',sample=500)
```
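Independently of DeepTCR's internals, the downsample-then-KNN idea can be sketched with scikit-learn (hypothetical data standing in for featurized sequences):

```python
import numpy as np
from sklearn.neighbors import KNeighborsClassifier

rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 2))                 # stand-in for featurized sequences
sample_idx = rng.choice(len(X), size=500, replace=False)
# pretend the sample was clustered; here the "cluster" is just the sign of x0
sample_labels = (X[sample_idx, 0] > 0).astype(int)
# propagate the sample's cluster labels to every sequence via KNN
knn = KNeighborsClassifier(n_neighbors=5).fit(X[sample_idx], sample_labels)
all_labels = knn.predict(X)
```

Only the sample pays the clustering cost; the remaining points get labels from a cheap nearest-neighbor vote.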
Finally, we can visualize the clustering results through a UMAP representation of the sequences.
## Clustering Visualization
```
DTCRU.UMAP_Plot(by_cluster=True)
```
For full description of options for clustering, see Documentation.
##### Copyright 2019 The TensorFlow Authors.
Licensed under the Apache License, Version 2.0 (the "License");
```
#@title Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
```
# Image segmentation
<table class="tfo-notebook-buttons" align="left">
<td>
<a target="_blank" href="https://www.tensorflow.org/tutorials/images/segmentation">
<img src="https://www.tensorflow.org/images/tf_logo_32px.png" />
View on TensorFlow.org</a>
</td>
<td>
<a target="_blank" href="https://colab.research.google.com/github/tensorflow/docs/blob/master/site/en/tutorials/images/segmentation.ipynb">
<img src="https://www.tensorflow.org/images/colab_logo_32px.png" />
Run in Google Colab</a>
</td>
<td>
<a target="_blank" href="https://github.com/tensorflow/docs/blob/master/site/en/tutorials/images/segmentation.ipynb">
<img src="https://www.tensorflow.org/images/GitHub-Mark-32px.png" />
View source on GitHub</a>
</td>
<td>
<a href="https://storage.googleapis.com/tensorflow_docs/docs/site/en/tutorials/images/segmentation.ipynb"><img src="https://www.tensorflow.org/images/download_logo_32px.png" />Download notebook</a>
</td>
</table>
This tutorial focuses on the task of image segmentation, using a modified <a href="https://lmb.informatik.uni-freiburg.de/people/ronneber/u-net/" class="external">U-Net</a>.
## What is image segmentation?
In an image classification task the network assigns a label (or class) to each input image. However, suppose you want to know the shape of that object, which pixel belongs to which object, etc. In this case you will want to assign a class to each pixel of the image. This task is known as segmentation. A segmentation model returns much more detailed information about the image. Image segmentation has many applications in medical imaging, self-driving cars and satellite imaging to name a few.
This tutorial uses the [Oxford-IIIT Pet Dataset](https://www.robots.ox.ac.uk/~vgg/data/pets/) ([Parkhi et al, 2012](https://www.robots.ox.ac.uk/~vgg/publications/2012/parkhi12a/parkhi12a.pdf)). The dataset consists of images of 37 pet breeds, with 200 images per breed (~100 each in the training and test splits). Each image includes the corresponding labels, and pixel-wise masks. The masks are class-labels for each pixel. Each pixel is given one of three categories:
- Class 1: Pixel belonging to the pet.
- Class 2: Pixel bordering the pet.
- Class 3: None of the above/a surrounding pixel.
```
!pip install git+https://github.com/tensorflow/examples.git
import tensorflow as tf
import tensorflow_datasets as tfds
from tensorflow_examples.models.pix2pix import pix2pix
from IPython.display import clear_output
import matplotlib.pyplot as plt
```
## Download the Oxford-IIIT Pets dataset
The dataset is [available from TensorFlow Datasets](https://www.tensorflow.org/datasets/catalog/oxford_iiit_pet). The segmentation masks are included in version 3+.
```
dataset, info = tfds.load('oxford_iiit_pet:3.*.*', with_info=True)
```
In addition, the image color values are normalized to the `[0,1]` range. Finally, as mentioned above the pixels in the segmentation mask are labeled either {1, 2, 3}. For the sake of convenience, subtract 1 from the segmentation mask, resulting in labels that are : {0, 1, 2}.
```
def normalize(input_image, input_mask):
input_image = tf.cast(input_image, tf.float32) / 255.0
input_mask -= 1
return input_image, input_mask
def load_image(datapoint):
input_image = tf.image.resize(datapoint['image'], (128, 128))
input_mask = tf.image.resize(datapoint['segmentation_mask'], (128, 128))
input_image, input_mask = normalize(input_image, input_mask)
return input_image, input_mask
```
The dataset already contains the required training and test splits, so continue to use the same splits.
```
TRAIN_LENGTH = info.splits['train'].num_examples
BATCH_SIZE = 64
BUFFER_SIZE = 1000
STEPS_PER_EPOCH = TRAIN_LENGTH // BATCH_SIZE
train_images = dataset['train'].map(load_image, num_parallel_calls=tf.data.AUTOTUNE)
test_images = dataset['test'].map(load_image, num_parallel_calls=tf.data.AUTOTUNE)
```
The following class performs a simple augmentation by randomly flipping an image.
Go to the [Image augmentation](data_augmentation.ipynb) tutorial to learn more.
```
class Augment(tf.keras.layers.Layer):
def __init__(self, seed=42):
super().__init__()
# both use the same seed, so they'll make the same random changes.
self.augment_inputs = tf.keras.layers.RandomFlip(mode="horizontal", seed=seed)
self.augment_labels = tf.keras.layers.RandomFlip(mode="horizontal", seed=seed)
def call(self, inputs, labels):
inputs = self.augment_inputs(inputs)
labels = self.augment_labels(labels)
return inputs, labels
```
Build the input pipeline, applying the Augmentation after batching the inputs.
```
train_batches = (
train_images
.cache()
.shuffle(BUFFER_SIZE)
.batch(BATCH_SIZE)
.repeat()
.map(Augment())
.prefetch(buffer_size=tf.data.AUTOTUNE))
test_batches = test_images.batch(BATCH_SIZE)
```
Visualize an image example and its corresponding mask from the dataset.
```
def display(display_list):
plt.figure(figsize=(15, 15))
title = ['Input Image', 'True Mask', 'Predicted Mask']
for i in range(len(display_list)):
plt.subplot(1, len(display_list), i+1)
plt.title(title[i])
plt.imshow(tf.keras.utils.array_to_img(display_list[i]))
plt.axis('off')
plt.show()
for images, masks in train_batches.take(2):
sample_image, sample_mask = images[0], masks[0]
display([sample_image, sample_mask])
```
## Define the model
The model being used here is a modified [U-Net](https://arxiv.org/abs/1505.04597). A U-Net consists of an encoder (downsampler) and decoder (upsampler). In order to learn robust features and reduce the number of trainable parameters, you will use a pretrained model - MobileNetV2 - as the encoder. For the decoder, you will use the upsample block, which is already implemented in the [pix2pix](https://github.com/tensorflow/examples/blob/master/tensorflow_examples/models/pix2pix/pix2pix.py) example in the TensorFlow Examples repo. (Check out the [pix2pix: Image-to-image translation with a conditional GAN](../generative/pix2pix.ipynb) tutorial in a notebook.)
As mentioned, the encoder will be a pretrained MobileNetV2 model which is prepared and ready to use in `tf.keras.applications`. The encoder consists of specific outputs from intermediate layers in the model. Note that the encoder will not be trained during the training process.
```
base_model = tf.keras.applications.MobileNetV2(input_shape=[128, 128, 3], include_top=False)
# Use the activations of these layers
layer_names = [
'block_1_expand_relu', # 64x64
'block_3_expand_relu', # 32x32
'block_6_expand_relu', # 16x16
'block_13_expand_relu', # 8x8
'block_16_project', # 4x4
]
base_model_outputs = [base_model.get_layer(name).output for name in layer_names]
# Create the feature extraction model
down_stack = tf.keras.Model(inputs=base_model.input, outputs=base_model_outputs)
down_stack.trainable = False
```
The decoder/upsampler is simply a series of upsample blocks implemented in TensorFlow examples.
```
up_stack = [
pix2pix.upsample(512, 3), # 4x4 -> 8x8
pix2pix.upsample(256, 3), # 8x8 -> 16x16
pix2pix.upsample(128, 3), # 16x16 -> 32x32
pix2pix.upsample(64, 3), # 32x32 -> 64x64
]
def unet_model(output_channels:int):
inputs = tf.keras.layers.Input(shape=[128, 128, 3])
# Downsampling through the model
skips = down_stack(inputs)
x = skips[-1]
skips = reversed(skips[:-1])
# Upsampling and establishing the skip connections
for up, skip in zip(up_stack, skips):
x = up(x)
concat = tf.keras.layers.Concatenate()
x = concat([x, skip])
# This is the last layer of the model
last = tf.keras.layers.Conv2DTranspose(
filters=output_channels, kernel_size=3, strides=2,
padding='same') #64x64 -> 128x128
x = last(x)
return tf.keras.Model(inputs=inputs, outputs=x)
```
Note that the number of filters on the last layer is set to the number of `output_channels`. This will be one output channel per class.
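Since the model emits one channel per class, the predicted class for each pixel is just the channel with the highest value. A tiny NumPy illustration of that per-pixel argmax (toy logits, not actual model output):

```python
import numpy as np

# fake logits for a 2x2 image with 3 output channels (classes)
logits = np.array([[[0.1, 2.0, -1.0], [3.0, 0.0, 0.0]],
                   [[-1.0, -2.0, 0.5], [0.0, 0.0, 4.0]]])

# per-pixel class = index of the largest channel value
pred_mask = logits.argmax(axis=-1)
print(pred_mask)  # [[1 0]
                  #  [2 2]]
```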
## Train the model
Now, all that is left to do is to compile and train the model.
Since this is a multiclass classification problem, use the `tf.keras.losses.SparseCategoricalCrossentropy` loss function with the `from_logits` argument set to `True`, since the labels are scalar integers instead of vectors of scores for each pixel of every class.
When running inference, the label assigned to the pixel is the channel with the highest value. This is what the `create_mask` function is doing.
```
OUTPUT_CLASSES = 3
model = unet_model(output_channels=OUTPUT_CLASSES)
model.compile(optimizer='adam',
loss=tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True),
metrics=['accuracy'])
```
Have a quick look at the resulting model architecture:
```
tf.keras.utils.plot_model(model, show_shapes=True)
```
Try out the model to check what it predicts before training.
```
def create_mask(pred_mask):
pred_mask = tf.argmax(pred_mask, axis=-1)
pred_mask = pred_mask[..., tf.newaxis]
return pred_mask[0]
def show_predictions(dataset=None, num=1):
if dataset:
for image, mask in dataset.take(num):
pred_mask = model.predict(image)
display([image[0], mask[0], create_mask(pred_mask)])
else:
display([sample_image, sample_mask,
create_mask(model.predict(sample_image[tf.newaxis, ...]))])
show_predictions()
```
The callback defined below is used to observe how the model improves while it is training.
```
class DisplayCallback(tf.keras.callbacks.Callback):
def on_epoch_end(self, epoch, logs=None):
clear_output(wait=True)
show_predictions()
print ('\nSample Prediction after epoch {}\n'.format(epoch+1))
EPOCHS = 20
VAL_SUBSPLITS = 5
VALIDATION_STEPS = info.splits['test'].num_examples//BATCH_SIZE//VAL_SUBSPLITS
model_history = model.fit(train_batches, epochs=EPOCHS,
steps_per_epoch=STEPS_PER_EPOCH,
validation_steps=VALIDATION_STEPS,
validation_data=test_batches,
callbacks=[DisplayCallback()])
loss = model_history.history['loss']
val_loss = model_history.history['val_loss']
plt.figure()
plt.plot(model_history.epoch, loss, 'r', label='Training loss')
plt.plot(model_history.epoch, val_loss, 'bo', label='Validation loss')
plt.title('Training and Validation Loss')
plt.xlabel('Epoch')
plt.ylabel('Loss Value')
plt.ylim([0, 1])
plt.legend()
plt.show()
```
## Make predictions
Now, make some predictions. In the interest of saving time, the number of epochs was kept small, but you may set this higher to achieve more accurate results.
```
show_predictions(test_batches, 3)
```
## Optional: Imbalanced classes and class weights
Semantic segmentation datasets can be highly imbalanced, meaning that pixels of a particular class can appear far more often in images than pixels of other classes. Since segmentation problems can be treated as per-pixel classification problems, you can deal with the imbalance by weighting the loss function to account for it. It's a simple and elegant way to handle this problem. Refer to the [Classification on imbalanced data](../structured_data/imbalanced_data.ipynb) tutorial to learn more.
To [avoid ambiguity](https://github.com/keras-team/keras/issues/3653#issuecomment-243939748), `Model.fit` does not support the `class_weight` argument for inputs with 3+ dimensions.
```
try:
model_history = model.fit(train_batches, epochs=EPOCHS,
steps_per_epoch=STEPS_PER_EPOCH,
class_weight = {0:2.0, 1:2.0, 2:1.0})
assert False
except Exception as e:
print(f"Expected {type(e).__name__}: {e}")
```
So, in this case you need to implement the weighting yourself. You'll do this using sample weights: In addition to `(data, label)` pairs, `Model.fit` also accepts `(data, label, sample_weight)` triples.
`Model.fit` propagates the `sample_weight` to the losses and metrics, which also accept a `sample_weight` argument. The sample weight is multiplied by the sample's value before the reduction step. For example:
```
label = [0,0]
prediction = [[-3., 0], [-3, 0]]
sample_weight = [1, 10]
loss = tf.losses.SparseCategoricalCrossentropy(from_logits=True,
reduction=tf.losses.Reduction.NONE)
loss(label, prediction, sample_weight).numpy()
```
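The multiplication above can be verified without TensorFlow: compute the softmax cross-entropy per example by hand and scale it by the weights. A self-contained NumPy sketch (the helper name is illustrative, not a TensorFlow API):

```python
import numpy as np

def sparse_ce_from_logits(labels, logits):
    # numerically stable per-example softmax cross-entropy
    logits = np.asarray(logits, dtype=np.float64)
    shifted = logits - logits.max(axis=-1, keepdims=True)
    log_probs = shifted - np.log(np.exp(shifted).sum(axis=-1, keepdims=True))
    return -log_probs[np.arange(len(labels)), labels]

# same values as the snippet above
label = [0, 0]
prediction = [[-3.0, 0.0], [-3.0, 0.0]]
sample_weight = [1.0, 10.0]

per_example = sparse_ce_from_logits(label, prediction)
weighted = per_example * sample_weight
print(weighted)  # second loss is exactly 10x the first
```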
So to make sample weights for this tutorial, you need a function that takes a `(data, label)` pair and returns a `(data, label, sample_weight)` triple, where the `sample_weight` is a 1-channel image containing the class weight for each pixel.
The simplest possible implementation is to use the label as an index into a `class_weight` list:
```
def add_sample_weights(image, label):
# The weights for each class, with the constraint that:
# sum(class_weights) == 1.0
class_weights = tf.constant([2.0, 2.0, 1.0])
class_weights = class_weights/tf.reduce_sum(class_weights)
# Create an image of `sample_weights` by using the label at each pixel as an
# index into the `class_weights`.
sample_weights = tf.gather(class_weights, indices=tf.cast(label, tf.int32))
return image, label, sample_weights
```
The resulting dataset elements contain 3 images each:
```
train_batches.map(add_sample_weights).element_spec
```
Now you can train a model on this weighted dataset:
```
weighted_model = unet_model(OUTPUT_CLASSES)
weighted_model.compile(
optimizer='adam',
loss=tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True),
metrics=['accuracy'])
weighted_model.fit(
train_batches.map(add_sample_weights),
epochs=1,
steps_per_epoch=10)
```
## Next steps
Now that you have an understanding of what image segmentation is and how it works, you can try this tutorial out with different intermediate layer outputs, or even different pretrained models. You may also challenge yourself by trying out the [Carvana](https://www.kaggle.com/c/carvana-image-masking-challenge/overview) image masking challenge hosted on Kaggle.
You may also want to see the [TensorFlow Object Detection API](https://github.com/tensorflow/models/blob/master/research/object_detection/README.md) for another model you can retrain on your own data. Pretrained models are available on [TensorFlow Hub](https://www.tensorflow.org/hub/tutorials/tf2_object_detection#optional).
```
#hide
#skip
! [ -e /content ] && pip install -Uqq fastai # upgrade fastai on colab
#default_exp callback.data
```
# Data Callbacks
> Callbacks which work with a learner's data
```
#export
from fastai.basics import *
#hide
from nbdev.showdoc import *
from fastai.test_utils import *
#export
class CollectDataCallback(Callback):
"Collect all batches, along with `pred` and `loss`, into `self.data`. Mainly for testing"
def before_fit(self): self.data = L()
def after_batch(self):
self.data.append(self.learn.to_detach((self.xb,self.yb,self.pred,self.loss)))
#export
@delegates()
class WeightedDL(TfmdDL):
def __init__(self, dataset=None, bs=None, wgts=None, **kwargs):
super().__init__(dataset=dataset, bs=bs, **kwargs)
wgts = array([1.]*len(dataset) if wgts is None else wgts)
self.wgts = wgts/wgts.sum()
def get_idxs(self):
if self.n==0: return []
if not self.shuffle: return super().get_idxs()
return list(np.random.choice(self.n, self.n, p=self.wgts))
#export
@patch
@delegates(Datasets.dataloaders)
def weighted_dataloaders(self:Datasets, wgts, bs=64, **kwargs):
xtra_kwargs = [{}] * (self.n_subsets-1)
return self.dataloaders(bs=bs, dl_type=WeightedDL, dl_kwargs=({'wgts':wgts}, *xtra_kwargs), **kwargs)
n = 160
dsets = Datasets(torch.arange(n).float())
dls = dsets.weighted_dataloaders(wgts=range(n), bs=16)
learn = synth_learner(data=dls, cbs=CollectDataCallback)
learn.fit(1)
t = concat(*learn.collect_data.data.itemgot(0,0))
plt.hist(t.numpy());
#export
@delegates()
class PartialDL(TfmdDL):
"Select randomly partial quantity of data at each epoch"
def __init__(self, dataset=None, bs=None, partial_n=None, **kwargs):
super().__init__(dataset=dataset, bs=bs, **kwargs)
self.partial_n = min(partial_n, self.n) if partial_n else None
def get_idxs(self):
if self.partial_n is None: return super().get_idxs()
return list(np.random.choice(self.n, self.partial_n, replace=False))
def __len__(self):
if self.partial_n is None: return super().__len__()
return self.partial_n//self.bs + (0 if self.drop_last or self.partial_n%self.bs==0 else 1)
#export
@patch
@delegates(Datasets.dataloaders)
def partial_dataloaders(self:FilteredBase, partial_n, bs=64, **kwargs):
"Create a partial dataloader `PartialDL` for the training set"
xtra_kwargs = [{}] * (self.n_subsets-1)
return self.dataloaders(bs=bs, dl_type=PartialDL, dl_kwargs=({'partial_n':partial_n}, *xtra_kwargs), **kwargs)
dls = dsets.partial_dataloaders(partial_n=32, bs=16)
assert len(dls[0])==2
for batch in dls[0]:
assert len(batch[0])==16
```
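The effect of `WeightedDL` is easy to check outside fastai: `get_idxs` draws indices with `np.random.choice(..., p=self.wgts)`, so heavily weighted items should appear proportionally more often across epochs. A minimal NumPy-only sketch of that sampling behaviour (illustrative, not fastai API):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 160
wgts = np.arange(n, dtype=float)  # same weights as the example above
p = wgts / wgts.sum()             # WeightedDL normalises weights this way

# emulate many epochs' worth of get_idxs draws
draws = rng.choice(n, size=100_000, p=p)

# items 80..159 hold ~75% of the total weight, so they should
# account for roughly 75% of the samples
top_freq = np.mean(draws >= n // 2)
print(round(top_freq, 2))
```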
## Export -
```
#hide
from nbdev.export import notebook2script
notebook2script()
```
##### Copyright 2018 The TensorFlow Authors.
```
#@title Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
```
# Moving average
<table class="tfo-notebook-buttons" align="left">
<td>
<a target="_blank" href="https://colab.research.google.com/github/tensorflow/examples/blob/master/courses/udacity_intro_to_tensorflow_for_deep_learning/l08c03_moving_average.ipynb"><img src="https://www.tensorflow.org/images/colab_logo_32px.png" />Run in Google Colab</a>
</td>
<td>
<a target="_blank" href="https://github.com/tensorflow/examples/blob/master/courses/udacity_intro_to_tensorflow_for_deep_learning/l08c03_moving_average.ipynb"><img src="https://www.tensorflow.org/images/GitHub-Mark-32px.png" />View source on GitHub</a>
</td>
</table>
## Setup
```
from __future__ import absolute_import, division, print_function, unicode_literals
try:
# Use the %tensorflow_version magic if in colab.
%tensorflow_version 2.x
except Exception:
pass
import numpy as np
import matplotlib.pyplot as plt
import tensorflow as tf
keras = tf.keras
def plot_series(time, series, format="-", start=0, end=None, label=None):
plt.plot(time[start:end], series[start:end], format, label=label)
plt.xlabel("Time")
plt.ylabel("Value")
if label:
plt.legend(fontsize=14)
plt.grid(True)
def trend(time, slope=0):
return slope * time
def seasonal_pattern(season_time):
"""Just an arbitrary pattern, you can change it if you wish"""
return np.where(season_time < 0.4,
np.cos(season_time * 2 * np.pi),
1 / np.exp(3 * season_time))
def seasonality(time, period, amplitude=1, phase=0):
"""Repeats the same pattern at each period"""
season_time = ((time + phase) % period) / period
return amplitude * seasonal_pattern(season_time)
def white_noise(time, noise_level=1, seed=None):
rnd = np.random.RandomState(seed)
return rnd.randn(len(time)) * noise_level
```
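The helpers above can be sanity-checked numerically: `seasonality` should repeat exactly every `period` steps, and `trend` is linear in time. A quick check, with the functions repeated so the cell runs on its own:

```python
import numpy as np

def trend(time, slope=0):
    return slope * time

def seasonal_pattern(season_time):
    return np.where(season_time < 0.4,
                    np.cos(season_time * 2 * np.pi),
                    1 / np.exp(3 * season_time))

def seasonality(time, period, amplitude=1, phase=0):
    season_time = ((time + phase) % period) / period
    return amplitude * seasonal_pattern(season_time)

time = np.arange(2 * 365)
s = seasonality(time, period=365, amplitude=40)

# the pattern in year 2 is identical to year 1
print(np.allclose(s[:365], s[365:730]))  # True
```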
## Trend and Seasonality
```
time = np.arange(4 * 365 + 1)
slope = 0.05
baseline = 10
amplitude = 40
series = baseline + trend(time, slope) + seasonality(time, period=365, amplitude=amplitude)
noise_level = 5
noise = white_noise(time, noise_level, seed=42)
series += noise
plt.figure(figsize=(10, 6))
plot_series(time, series)
plt.show()
```
## Naive Forecast
```
split_time = 1000
time_train = time[:split_time]
x_train = series[:split_time]
time_valid = time[split_time:]
x_valid = series[split_time:]
naive_forecast = series[split_time - 1:-1]
plt.figure(figsize=(10, 6))
plot_series(time_valid, x_valid, start=0, end=150, label="Series")
plot_series(time_valid, naive_forecast, start=1, end=151, label="Forecast")
```
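The slicing in `naive_forecast = series[split_time - 1:-1]` implements a lag-1 forecast: the prediction for each validation step is simply the previous observed value. On a toy series:

```python
import numpy as np

series = np.array([10., 20., 30., 40., 50.])
split_time = 2
x_valid = series[split_time:]               # actual values: [30., 40., 50.]
naive_forecast = series[split_time - 1:-1]  # lag-1 forecast: [20., 30., 40.]

print(naive_forecast)  # [20. 30. 40.]
```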
Now let's compute the mean absolute error between the forecasts and the actual values in the validation period:
```
keras.metrics.mean_absolute_error(x_valid, naive_forecast).numpy()
```
That's our baseline; now let's try a moving average.
## Moving Average
```
def moving_average_forecast(series, window_size):
"""Forecasts the mean of the last few values.
If window_size=1, then this is equivalent to naive forecast"""
forecast = []
for time in range(len(series) - window_size):
forecast.append(series[time:time + window_size].mean())
return np.array(forecast)
def moving_average_forecast(series, window_size):
"""Forecasts the mean of the last few values.
If window_size=1, then this is equivalent to naive forecast
This implementation is *much* faster than the previous one"""
mov = np.cumsum(series)
mov[window_size:] = mov[window_size:] - mov[:-window_size]
return mov[window_size - 1:-1] / window_size
moving_avg = moving_average_forecast(series, 30)[split_time - 30:]
plt.figure(figsize=(10, 6))
plot_series(time_valid, x_valid, label="Series")
plot_series(time_valid, moving_avg, label="Moving average (30 days)")
keras.metrics.mean_absolute_error(x_valid, moving_avg).numpy()
```
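Both versions of `moving_average_forecast` above should produce identical forecasts; the cumulative-sum trick just avoids the Python loop (up to floating-point accumulation error). A quick equivalence check, with both re-defined under distinct names so the cell is self-contained:

```python
import numpy as np

def moving_average_slow(series, window_size):
    # loop version: mean of each trailing window
    forecast = []
    for t in range(len(series) - window_size):
        forecast.append(series[t:t + window_size].mean())
    return np.array(forecast)

def moving_average_fast(series, window_size):
    # cumsum version: windowed sums via a running total
    mov = np.cumsum(series)
    mov[window_size:] = mov[window_size:] - mov[:-window_size]
    return mov[window_size - 1:-1] / window_size

rng = np.random.default_rng(42)
series = rng.normal(size=500)
slow = moving_average_slow(series, 30)
fast = moving_average_fast(series, 30)
print(np.allclose(slow, fast))  # True
```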
That's worse than naive forecast! The moving average does not anticipate trend or seasonality, so let's try to remove them by using differencing. Since the seasonality period is 365 days, we will subtract the value at time *t* – 365 from the value at time *t*.
```
diff_series = (series[365:] - series[:-365])
diff_time = time[365:]
plt.figure(figsize=(10, 6))
plot_series(diff_time, diff_series, label="Series(t) – Series(t–365)")
plt.show()
```
Focusing on the validation period:
```
plt.figure(figsize=(10, 6))
plot_series(time_valid, diff_series[split_time - 365:], label="Series(t) – Series(t–365)")
plt.show()
```
Great, the trend and seasonality seem to be gone, so now we can use the moving average:
```
diff_moving_avg = moving_average_forecast(diff_series, 50)[split_time - 365 - 50:]
plt.figure(figsize=(10, 6))
plot_series(time_valid, diff_series[split_time - 365:], label="Series(t) – Series(t–365)")
plot_series(time_valid, diff_moving_avg, label="Moving Average of Diff")
plt.show()
```
Now let's bring back the trend and seasonality by adding the past values from t – 365:
```
diff_moving_avg_plus_past = series[split_time - 365:-365] + diff_moving_avg
plt.figure(figsize=(10, 6))
plot_series(time_valid, x_valid, label="Series")
plot_series(time_valid, diff_moving_avg_plus_past, label="Forecasts")
plt.show()
keras.metrics.mean_absolute_error(x_valid, diff_moving_avg_plus_past).numpy()
```
Better than naive forecast, good. However, the forecasts look a bit too random, because we're just adding past values, which were noisy. Let's use a moving average on past values to remove some of the noise:
```
diff_moving_avg_plus_smooth_past = moving_average_forecast(series[split_time - 370:-359], 11) + diff_moving_avg
plt.figure(figsize=(10, 6))
plot_series(time_valid, x_valid, label="Series")
plot_series(time_valid, diff_moving_avg_plus_smooth_past, label="Forecasts")
plt.show()
keras.metrics.mean_absolute_error(x_valid, diff_moving_avg_plus_smooth_past).numpy()
```
That's starting to look pretty good! Let's see if we can do better with a Machine Learning model.
# Review lists and for loops
```
# Assign a list of prices
price_list = [3.89, 14.78, 20.01, 99.62, 0.47]
print(price_list)
print(type(price_list))
euro_conversion_factor = 0.93
# Use a for loop to multiply each item by the conversion factor
euro_list = []
for price in price_list:
euro_list.append(price * euro_conversion_factor)
print(euro_list)
```
# NumPy
ndarrays must contain only one type of object.
```
# This is the standard import style for the NumPy module
import numpy as np
# Turn the list into a NumPy N-dimensional array (ndarray) object
price_array = np.array(price_list)
print(price_array)
print(type(price_array))
```
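The "one type of object" rule means NumPy silently upcasts mixed inputs to a common dtype rather than refusing them. A small illustration:

```python
import numpy as np

ints = np.array([1, 2, 3])
mixed_float = np.array([1, 2, 3.5])    # ints upcast to floats
mixed_str = np.array([1, 2, 'three'])  # everything upcast to strings

print(ints.dtype)         # a platform-dependent integer dtype, e.g. int64
print(mixed_float.dtype)  # float64
print(mixed_str.dtype)    # a Unicode string dtype such as '<U21'
```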
## Vectorized computation
```
# Multiply each array element by a constant factor
euro_array = price_array * euro_conversion_factor
print(euro_array)
print(type(euro_array))
# Pairwise addition of two ndarrays
price_increase_list = [0.58, 2.95, 1.66, 12.40, 0.04]
price_increase_array = np.array([0.58, 2.95, 1.66, 12.40, 0.04])
print(price_array)
print(price_increase_array)
print(price_array + price_increase_array)
# Try to add together two lists
print(price_list)
print(price_increase_list)
print(price_list + price_increase_list) # a for loop would be required
```
## Dimensions
```
# Create an ndarray of sales data for 5 years
annual_sales_cars = [34506, 35446, 40190, 43824, 46456]
annual_sales_trucks = [45369, 46894, 43901, 44870, 45978]
annual_sales_suvs = [21554, 28745, 34369, 43593, 53982]
annual_sales_array = np.array([annual_sales_cars, annual_sales_trucks, annual_sales_suvs])
print(annual_sales_array)
print(price_array.ndim)
print(price_array.shape)
print(price_array.dtype)
print()
print(annual_sales_array.ndim)
print(annual_sales_array.shape)
print(annual_sales_array.dtype)
```
## Indexing
```
annual_sales_trucks = [45369, 46894, 43901, 44870, 45978]
trucks = np.array(annual_sales_trucks)
print(trucks)
# Indexing and slicing similar to lists
print(trucks[4])
print(trucks[2:4])
print(trucks[1:])
print(trucks[:])
```
## Two-dimensional indexing
<img src="https://learning.oreilly.com/library/view/python-for-data/9781491957653/assets/pyda_0401.png" width="300" height="300">
Image from "Python for data analysis: data wrangling with pandas, NumPy, and IPython" by Wes McKinney, Chapter 4
```
print(annual_sales_array)
print(annual_sales_array[0, 2])
print(annual_sales_array[0])
print(annual_sales_array[:, 2])
```
## Calculations with `ndarrays`
```
# Convert annual sales to daily sales
daily_sales_array = annual_sales_array/365
print(daily_sales_array)
print(daily_sales_array.dtype)
print(trucks)
# Vectorized conditions
assessment = np.where(trucks >= 45000, 'good year', 'bad year')
print(assessment)
print(assessment.dtype)
print(np.where(trucks >= trucks[0], 'increase', 'decrease'))
```
# Image examples
Still images are a type of two-dimensional array where every element is the same kind of object.
```
# Import modules for images and plotting
import skimage.data as data
import matplotlib.pyplot as plt
%matplotlib inline
# use a built-in image
camera = data.camera()
print(type(camera))
# 0 is black, 255 is white. Grayscale is between the two
print(camera)
plt.imshow(camera, cmap='gray')
plt.show();
```
## Creating a negative
```
inverse_camera = 255 - camera
print(inverse_camera)
plt.imshow(inverse_camera, cmap='gray')
plt.show();
```
## Thresholding (black and white)
```
threshold_camera = data.camera()
# Set pixels less than mid-gray to black
threshold_camera[threshold_camera < 128] = 0
# Set remaining pixels to white
threshold_camera[threshold_camera > 0] = 255
print(threshold_camera)
plt.imshow(threshold_camera, cmap='gray')
plt.show();
# Alternative method using np.where()
threshold_camera = np.where(camera < 128, 0, 255)
#threshold_camera = np.where(camera < 200, camera + 55, 255)
print(threshold_camera)
plt.imshow(threshold_camera, cmap='gray')
plt.show();
```
```
%matplotlib inline
import matplotlib.pyplot as plt
import seaborn as sns
sns.set_style("whitegrid")
plt.rcParams["figure.figsize"] = (20, 20)
import re
import os
import io
import nltk
import numpy as np
import pandas as pd
from bs4 import BeautifulSoup
from tqdm import tqdm_notebook as tqdm
from nltk import word_tokenize, sent_tokenize
from sklearn.preprocessing import OneHotEncoder
import torch
from torch import nn, optim
from torch.utils.data import Dataset, DataLoader
from torch.nn.utils.rnn import pack_padded_sequence, pad_packed_sequence, pad_sequence
nltk.download("punkt")
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
```
# data
```
base_path = "/mnt/efs/wikipedia/dumps/text/"
paths = np.random.choice(os.listdir(base_path), size=1)
all_text = ""
for path in paths:
for filename in tqdm(os.listdir(base_path + path)):
with open(base_path + path + "/" + filename, "rb") as f:
all_text += f.read().decode("latin1")
pattern = r"(?:<doc.+>)((.|\s|\S)*?)(?:<\/doc>)"
articles = [article[0] for article in re.findall(pattern, all_text)]
articles = np.random.choice(articles, size=1000)
```
### cleaning pipeline
```
def tokenize(sentence):
"""moses tokeniser"""
seq = " ".join(word_tokenize(sentence))
seq = seq.replace(" n't ", "n 't ")
return seq.split()
def label_linkable_tokens(sentence, label_all=True):
parsed_html = BeautifulSoup(sentence, "html.parser")
link_text = [link.text for link in parsed_html.find_all("a")]
tokenised_links = [tokenize(link) for link in link_text]
tokenised_text = tokenize(parsed_html.text)
target_sequence = np.zeros(len(tokenised_text))
for link in tokenised_links:
start_positions = kmp(tokenised_text, link)
if label_all:
for pos in start_positions:
target_sequence[pos : pos + len(link)] = 1
elif label_all == False and len(start_positions) > 0:
pos = start_positions[0]
target_sequence[pos : pos + len(link)] = 1
else:
pass
return tokenised_text, target_sequence.reshape(-1, 1)
def kmp(sequence, sub):
"""
Knuth–Morris–Pratt algorithm, returning the starting position
of a specified subsequence within another, larger sequence.
Usually used for string matching.
"""
partial = [0]
for i in range(1, len(sub)):
j = partial[i - 1]
while j > 0 and sub[j] != sub[i]:
j = partial[j - 1]
partial.append(j + 1 if sub[j] == sub[i] else j)
positions, j = [], 0
for i in range(len(sequence)):
while j > 0 and sequence[i] != sub[j]:
j = partial[j - 1]
if sequence[i] == sub[j]:
j += 1
if j == len(sub):
positions.append(i - (j - 1))
j = 0
return positions
token_sequences, target_sequences = [], []
for i, article in enumerate(tqdm(articles)):
for j, sentence in enumerate(sent_tokenize(article)):
try:
tokenized_sentence, target_sequence = label_linkable_tokens(sentence)
token_sequences.append(tokenized_sentence)
target_sequences.append(target_sequence)
except Exception:
# skip sentences whose links can't be parsed or aligned
pass
len(token_sequences)
```
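The `kmp` helper works on any sequence, not just strings — here it matches a tokenised link against a tokenised sentence, which is how `label_linkable_tokens` locates link spans. A self-contained demo (the function is repeated so the cell runs on its own):

```python
def kmp(sequence, sub):
    """Knuth-Morris-Pratt: starting positions of `sub` within `sequence`."""
    # build the partial-match (failure) table for `sub`
    partial = [0]
    for i in range(1, len(sub)):
        j = partial[i - 1]
        while j > 0 and sub[j] != sub[i]:
            j = partial[j - 1]
        partial.append(j + 1 if sub[j] == sub[i] else j)

    # scan `sequence`, falling back via the table on mismatches
    positions, j = [], 0
    for i in range(len(sequence)):
        while j > 0 and sequence[i] != sub[j]:
            j = partial[j - 1]
        if sequence[i] == sub[j]:
            j += 1
        if j == len(sub):
            positions.append(i - (j - 1))
            j = 0
    return positions

sentence = ['the', 'battle', 'of', 'hastings', 'was', 'fought', 'near', 'hastings']
link = ['of', 'hastings']
print(kmp(sentence, link))  # [2]
```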
# character level inputs
```
unique_characters = set(" ".join([token for seq in token_sequences for token in seq]))
special_cases = ["xxunk", "xxpad", "xxbos", "xxeos"]
for case in special_cases:
unique_characters.add(case)
char_to_ix = {char: ix for ix, char in enumerate(unique_characters)}
ix_to_char = {ix: char for ix, char in enumerate(unique_characters)}
```
# word level targets
```
article_vocabulary = set([tok for seq in token_sequences for tok in seq])
for case in special_cases:
article_vocabulary.add(case)
token_to_ix = {token: ix for ix, token in enumerate(article_vocabulary)}
ix_to_token = {ix: token for ix, token in enumerate(article_vocabulary)}
```
# dataset and dataloader
```
class SentenceDataset(Dataset):
def __init__(self, token_seqs):
self.token_seqs = np.array(token_seqs)
# impose length constraint
where_big_enough = np.where([len(seq) > 3 for seq in token_seqs])
self.token_seqs = self.token_seqs[where_big_enough]
# find prediction points for language model
self.exit_ix_seqs = [self.find_exit_points(seq) for seq in self.token_seqs]
# indexify
self.char_ix_seqs = [self.indexify_chars(seq) for seq in self.token_seqs]
self.token_ix_seqs = [self.indexify_tokens(seq) for seq in self.token_seqs]
def __getitem__(self, ix):
char_ix_seq = self.char_ix_seqs[ix]
token_ix_seq = self.token_ix_seqs[ix]
exit_ix_seq = self.exit_ix_seqs[ix]
return char_ix_seq, token_ix_seq, exit_ix_seq
def __len__(self):
return len(self.token_seqs)
def indexify_tokens(self, token_seq):
ix_seq = np.array(
[token_to_ix[token] for token in token_seq] + [token_to_ix["xxeos"]]
)
return torch.LongTensor(ix_seq)
def indexify_chars(self, token_seq):
ix_seq = np.array(
[char_to_ix[char] for char in " ".join(token_seq)]
+ [char_to_ix[" "], char_to_ix["xxeos"]]
)
return torch.LongTensor(ix_seq)
def find_exit_points(self, token_seq):
exit_positions = np.cumsum([len(token) + 1 for token in token_seq])
return torch.LongTensor(exit_positions) - 1
def collate_fn(batch):
char_ix_seqs, token_ix_seqs, exit_ix_seqs = zip(*batch)
char_seq_lens = torch.LongTensor([len(char_seq) for char_seq in char_ix_seqs])
sorted_lengths, sort_indices = char_seq_lens.sort(dim=0, descending=True)
sorted_char_seqs = [char_ix_seqs[i] for i in sort_indices]
sorted_token_seqs = [token_ix_seqs[i] for i in sort_indices]
sorted_exit_seqs = [exit_ix_seqs[i] for i in sort_indices]
padded_char_seqs = pad_sequence(
sequences=sorted_char_seqs, padding_value=char_to_ix["xxpad"], batch_first=True
)
padded_token_seqs = pad_sequence(
sequences=sorted_token_seqs,
padding_value=token_to_ix["xxpad"],
batch_first=True,
)
padded_exit_seqs = pad_sequence(
sequences=sorted_exit_seqs, padding_value=0, batch_first=True
)
return padded_char_seqs, padded_token_seqs, padded_exit_seqs, sorted_lengths
train_dataset = SentenceDataset(token_sequences)
train_loader = DataLoader(
dataset=train_dataset,
batch_size=32,
num_workers=5,
shuffle=True,
collate_fn=collate_fn,
)
```
# testing the thing
```
class LanguageModel(nn.Module):
def __init__(
self,
input_dim=len(unique_characters),
embedding_dim=200,
hidden_dim=512,
output_dim=len(article_vocabulary),
):
super(LanguageModel, self).__init__()
self.embedding = nn.Embedding(input_dim, embedding_dim)
self.enc_lstm = nn.LSTM(
input_size=embedding_dim,
hidden_size=hidden_dim,
num_layers=1,
bidirectional=False,
dropout=0.2,
)
# self.head = nn.Sequential(
# nn.Linear(hidden_dim, hidden_dim),
# nn.ReLU(),
# nn.Dropout(0.3),
# nn.Linear(hidden_dim, output_dim),
# )
self.head = nn.Linear(hidden_dim, output_dim)
def forward(self, padded_char_seqs, exit_ix_seqs, sorted_lengths):
x = self.embedding(padded_char_seqs)
x = pack_padded_sequence(x, lengths=sorted_lengths, batch_first=True)
x, _ = self.enc_lstm(x)
x, _ = pad_packed_sequence(x, batch_first=True)
# pop out the character embeddings at position of the end of each token
x = torch.stack([x[i, exit_ix_seqs[i]] for i in range(len(x))])
return self.head(x)
model = LanguageModel().to(device)
loss = nn.CrossEntropyLoss()
```
```
losses = []
torch.backends.cudnn.benchmark = True
trainable_parameters = filter(lambda p: p.requires_grad, model.parameters())
optimiser = optim.Adam(trainable_parameters, lr=0.0001)
loss_function = nn.CrossEntropyLoss()
def train(model, train_loader, loss_function, optimiser, n_epochs):
model.train()
for epoch in range(n_epochs):
loop = tqdm(train_loader)
for char_seqs, token_seqs, exit_ix_seqs, lengths in loop:
char_seqs = torch.LongTensor(char_seqs).cuda(non_blocking=True)
token_seqs = torch.LongTensor(token_seqs).cuda(non_blocking=True)[:, 1:]
exit_ix_seqs = torch.LongTensor(exit_ix_seqs).cuda(non_blocking=True)
lengths = torch.LongTensor(lengths).cuda(non_blocking=True)
optimiser.zero_grad()
preds = model(char_seqs, exit_ix_seqs, lengths)
preds = preds.permute(0, 2, 1)
loss = loss_function(preds, token_seqs)
loss.backward()
optimiser.step()
losses.append(loss.item())
loop.set_description("Epoch {}/{}".format(epoch + 1, n_epochs))
loop.set_postfix(loss=np.mean(losses[-100:]))
train(
model=model,
train_loader=train_loader,
loss_function=loss_function,
optimiser=optimiser,
n_epochs=1,
)
torch.cuda.empty_cache()
```
# Tossing Dominos
_Following http://images.math.cnrs.fr/Pavages-aleatoires-par-touillage?lang=fr_
```
from sage.graphs.generators.families import AztecDiamondGraph
class OrderedDomino:
def __init__(self, first, second):
self.first = first
self.second = second
if first[0] == second[0]:
self.direction = 'horizontal'
else:
self.direction = 'vertical'
def parity(self, i):
return (self.first[0] % 2 + self.first[1] % 2 + i) % 2
def __repr__(self):
return "OrderedDomino from %s to %s" % (self.first, self.second)
def apply_matching(m, size, n):
matching = {}
shift_val = size - n
def shift(t):
return (t[0] + shift_val, t[1] + shift_val)
for first, second in m:
if first[0] < second[0] or first[1] < second[1]:
d = OrderedDomino(shift(first), shift(second))
else:
d = OrderedDomino(shift(second), shift(first))
matching[shift(first)] = d
matching[shift(second)] = d
return matching
def figure(size):
g = AztecDiamondGraph(size)
m = (((0,0),(0,1)), ((1,0), (1,1)))
return g, m, apply_matching(m, size, 1)
def parity(pos):
return (pos[0]%2 + pos[1]%2)%2
def similar_position(pos, ref):
return ((pos[0]%2 + pos[1]%2)%2 == (ref[0]%2 + ref[1]%2)%2)
def tossing(g, size, n, matching):
verts = AztecDiamondGraph(n).vertices()
shift_val = size - n
def shift(t):
return (t[0] + shift_val, t[1] + shift_val)
shifted_verts = [shift(v) for v in verts]
ref_corner = shift((0, min([v[1] for v in verts if v[0]==0])))
new_matching = {}
for v in shifted_verts:
# is it an "active cell"?
if not similar_position(v, ref_corner):
continue
bottom_left = (v[0]+1, v[1])
if not bottom_left in shifted_verts:
continue
top_right = (v[0], v[1]+1)
if not top_right in shifted_verts:
continue
bottom_right = (v[0]+1, v[1]+1)
if not bottom_right in shifted_verts:
continue
if v in matching.keys() and matching[v].first == v and \
((matching[v].direction == 'horizontal' and bottom_left in matching.keys() \
and matching[bottom_left].first == bottom_left and matching[bottom_left].second == bottom_right) \
or (matching[v].direction == 'vertical' and top_right in matching.keys() \
and matching[top_right].first == top_right and matching[top_right].second == bottom_right)):
# Ignore both dominos
pass
elif v in matching.keys() and matching[v].first == v:
# Slide
if matching[v].direction == 'horizontal':
new_matching[bottom_left] = new_matching[bottom_right] = OrderedDomino(
bottom_left, bottom_right)
else:
new_matching[top_right] = new_matching[bottom_right] = OrderedDomino(
top_right, bottom_right)
elif bottom_left in matching.keys() and matching[bottom_left].first == bottom_left \
and matching[bottom_left].direction == 'horizontal':
# Slide
new_matching[v] = new_matching[top_right] = OrderedDomino(v, top_right)
elif top_right in matching.keys() and matching[top_right].first == top_right \
and matching[top_right].direction == 'vertical':
# Slide
new_matching[v] = new_matching[bottom_left] = OrderedDomino(v, bottom_left)
else:
# Create 2 dominos
i = randint(0,1)
if i == 0:
new_matching[v] = new_matching[top_right] = OrderedDomino(v, top_right)
new_matching[bottom_left] = new_matching[bottom_right] = OrderedDomino(bottom_left, bottom_right)
else:
new_matching[v] = new_matching[bottom_left] = OrderedDomino(v, bottom_left)
new_matching[top_right] = new_matching[bottom_right] = OrderedDomino(top_right, bottom_right)
return new_matching
def make_cell_widget_class_index(matching, n):
def cell_widget_class_index(pos):
def calc_index_for_domino(d, n):
if d.direction == 'horizontal':
if not d.parity(n):
return 1
else:
return 2
else:
if not d.parity(n):
return 3
else:
return 4
if pos in matching.keys():
d = matching[pos]
return calc_index_for_domino(d, n)
return 0
return cell_widget_class_index
%%html
<style>
.b0 {}
.b1 {background-color: green}
.b2 {background-color: blue}
.b3 {background-color: red}
.b4 {background-color: yellow}
</style>
from sage_combinat_widgets.grid_view_widget import GridViewWidget, BlankButton, styled_push_button
from ipywidgets import Layout
smallblyt = Layout(width='12px',height='12px', margin='0', padding='0')
Button0 = styled_push_button(disabled=True)
Button1 = styled_push_button(disabled=True, style_name='b1')
Button2 = styled_push_button(disabled=True, style_name='b2')
Button3 = styled_push_button(disabled=True, style_name='b3')
Button4 = styled_push_button(disabled=True, style_name='b4')
ORDER = 12
g, m, md = figure(ORDER)
w = GridViewWidget(g, cell_layout=smallblyt,
cell_widget_classes=[Button0, Button1, Button2, Button3, Button4],
css_classes=['b0', 'b1', 'b2', 'b3', 'b4'],
css_class_index=make_cell_widget_class_index(md, ORDER),
cell_widget_class_index=make_cell_widget_class_index(md, ORDER),
blank_widget_class=BlankButton)
w
for i in range(2,ORDER+1):
md = tossing(g, ORDER, i, md)
w.update_style(css_class_index=make_cell_widget_class_index(md, i))
```
```
!nvidia-smi
!pip --quiet install transformers
!pip --quiet install tokenizers
from google.colab import drive
drive.mount('/content/drive')
!cp -r '/content/drive/My Drive/Colab Notebooks/Tweet Sentiment Extraction/Scripts/.' .
COLAB_BASE_PATH = '/content/drive/My Drive/Colab Notebooks/Tweet Sentiment Extraction/'
MODEL_BASE_PATH = COLAB_BASE_PATH + 'Models/Files/231-roBERTa_base/'
import os
os.mkdir(MODEL_BASE_PATH)
```
## Dependencies
```
import json, warnings, shutil
from scripts_step_lr_schedulers import *
from tweet_utility_scripts import *
from tweet_utility_preprocess_roberta_scripts_aux import *
from transformers import TFRobertaModel, RobertaConfig
from tokenizers import ByteLevelBPETokenizer
from tensorflow.keras.models import Model
from tensorflow.keras import optimizers, metrics, losses, layers
from tensorflow.keras.callbacks import EarlyStopping, ModelCheckpoint
SEED = 0
seed_everything(SEED)
warnings.filterwarnings("ignore")
pd.set_option('max_colwidth', 120)
```
# Load data
```
# Unzip files
!tar -xf '/content/drive/My Drive/Colab Notebooks/Tweet Sentiment Extraction/Data/complete_64_clean/fold_1.tar.gz'
!tar -xf '/content/drive/My Drive/Colab Notebooks/Tweet Sentiment Extraction/Data/complete_64_clean/fold_2.tar.gz'
!tar -xf '/content/drive/My Drive/Colab Notebooks/Tweet Sentiment Extraction/Data/complete_64_clean/fold_3.tar.gz'
!tar -xf '/content/drive/My Drive/Colab Notebooks/Tweet Sentiment Extraction/Data/complete_64_clean/fold_4.tar.gz'
!tar -xf '/content/drive/My Drive/Colab Notebooks/Tweet Sentiment Extraction/Data/complete_64_clean/fold_5.tar.gz'
database_base_path = COLAB_BASE_PATH + 'Data/complete_64_clean/'
k_fold = pd.read_csv(database_base_path + '5-fold.csv')
print(f'Training samples: {len(k_fold)}')
display(k_fold.head())
```
# Model parameters
```
vocab_path = COLAB_BASE_PATH + 'qa-transformers/roberta/roberta-base-vocab.json'
merges_path = COLAB_BASE_PATH + 'qa-transformers/roberta/roberta-base-merges.txt'
base_path = COLAB_BASE_PATH + 'qa-transformers/roberta/'
config = {
"MAX_LEN": 64,
"BATCH_SIZE": 32,
"EPOCHS": 7,
"LEARNING_RATE": 3e-5,
"ES_PATIENCE": 2,
"N_FOLDS": 5,
"question_size": 4,
"base_model_path": base_path + 'roberta-base-tf_model.h5',
"config_path": base_path + 'roberta-base-config.json'
}
with open(MODEL_BASE_PATH + 'config.json', 'w') as json_file:
json.dump(json.loads(json.dumps(config)), json_file)
```
# Tokenizer
```
tokenizer = ByteLevelBPETokenizer(vocab_file=vocab_path, merges_file=merges_path,
lowercase=True, add_prefix_space=True)
```
## Learning rate schedule
```
lr_min = 1e-6
lr_start = 1e-7
lr_max = config['LEARNING_RATE']
train_size = len(k_fold[k_fold['fold_1'] == 'train'])
step_size = train_size // config['BATCH_SIZE']
total_steps = config['EPOCHS'] * step_size
warmup_steps = total_steps * 0.1
decay = .9985
rng = [i for i in range(0, total_steps, config['BATCH_SIZE'])]
y = [exponential_schedule_with_warmup(tf.cast(x, tf.float32), warmup_steps=warmup_steps, lr_start=lr_start,
lr_max=lr_max, lr_min=lr_min, decay=decay) for x in rng]
sns.set(style="whitegrid")
fig, ax = plt.subplots(figsize=(20, 6))
plt.plot(rng, y)
print("Learning rate schedule: {:.3g} to {:.3g} to {:.3g}".format(y[0], max(y), y[-1]))
```
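The `exponential_schedule_with_warmup` helper used above is imported from the author's `scripts_step_lr_schedulers` module, which is not shown here. A plain-Python sketch consistent with how it is called (linear warmup from `lr_start` to `lr_max`, then exponential decay by `decay` per step, floored at `lr_min`) might look like the following; it operates on plain floats rather than TensorFlow tensors, and the actual imported implementation may differ:

```python
def exponential_schedule_with_warmup(step, warmup_steps, lr_start, lr_max, lr_min, decay):
    # Linear warmup from lr_start to lr_max over the first warmup_steps steps
    if step < warmup_steps:
        return lr_start + (lr_max - lr_start) * (step / warmup_steps)
    # Exponential decay afterwards, never dropping below lr_min
    return max(lr_min, lr_max * decay ** (step - warmup_steps))
```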
# Model
```
module_config = RobertaConfig.from_pretrained(config['config_path'], output_hidden_states=True)
def model_fn(MAX_LEN):
input_ids = layers.Input(shape=(MAX_LEN,), dtype=tf.int32, name='input_ids')
attention_mask = layers.Input(shape=(MAX_LEN,), dtype=tf.int32, name='attention_mask')
base_model = TFRobertaModel.from_pretrained(config['base_model_path'], config=module_config, name="base_model")
_, _, hidden_states = base_model({'input_ids': input_ids, 'attention_mask': attention_mask})
h11 = hidden_states[-2]
x = layers.SpatialDropout1D(.1)(h11)
logits = layers.Dense(2, name="qa_outputs")(x)
start_logits, end_logits = tf.split(logits, 2, axis=-1)
start_logits = tf.squeeze(start_logits, axis=-1)
end_logits = tf.squeeze(end_logits, axis=-1)
model = Model(inputs=[input_ids, attention_mask], outputs=[start_logits, end_logits])
return model
```
# Train
```
def get_training_dataset(x_train, y_train, batch_size, buffer_size, seed=0):
dataset = tf.data.Dataset.from_tensor_slices(({'input_ids': x_train[0], 'attention_mask': x_train[1]},
(y_train[0], y_train[1])))
dataset = dataset.repeat()
dataset = dataset.shuffle(2048, seed=seed)
dataset = dataset.batch(batch_size, drop_remainder=True)
dataset = dataset.prefetch(buffer_size)
return dataset
def get_validation_dataset(x_valid, y_valid, batch_size, buffer_size, repeated=False, seed=0):
dataset = tf.data.Dataset.from_tensor_slices(({'input_ids': x_valid[0], 'attention_mask': x_valid[1]},
(y_valid[0], y_valid[1])))
if repeated:
dataset = dataset.repeat()
dataset = dataset.shuffle(2048, seed=seed)
dataset = dataset.batch(batch_size, drop_remainder=True)
dataset = dataset.cache()
dataset = dataset.prefetch(buffer_size)
return dataset
AUTO = tf.data.experimental.AUTOTUNE
history_list = []
for n_fold in range(config['N_FOLDS']):
n_fold +=1
print('\nFOLD: %d' % (n_fold))
# Load data
base_data_path = 'fold_%d/' % (n_fold)
x_train = np.load(base_data_path + 'x_train.npy')
y_train = np.load(base_data_path + 'y_train.npy')
x_valid = np.load(base_data_path + 'x_valid.npy')
y_valid = np.load(base_data_path + 'y_valid.npy')
step_size = x_train.shape[1] // config['BATCH_SIZE']
# Train model
model_path = 'model_fold_%d.h5' % (n_fold)
model = model_fn(config['MAX_LEN'])
es = EarlyStopping(monitor='val_loss', mode='min', patience=config['ES_PATIENCE'],
restore_best_weights=True, verbose=1)
checkpoint = ModelCheckpoint((MODEL_BASE_PATH + model_path), monitor='val_loss', mode='min',
save_best_only=True, save_weights_only=True)
optimizer = optimizers.Adam(learning_rate=lambda: exponential_schedule_with_warmup(tf.cast(optimizer.iterations, tf.float32),
warmup_steps=warmup_steps, lr_start=lr_start,
lr_max=lr_max, lr_min=lr_min, decay=decay))
model.compile(optimizer, loss=[losses.CategoricalCrossentropy(label_smoothing=0.2, from_logits=True),
losses.CategoricalCrossentropy(label_smoothing=0.2, from_logits=True)])
history = model.fit(get_training_dataset(x_train, y_train, config['BATCH_SIZE'], AUTO, seed=SEED),
validation_data=(get_validation_dataset(x_valid, y_valid, config['BATCH_SIZE'], AUTO, repeated=False, seed=SEED)),
epochs=config['EPOCHS'],
steps_per_epoch=step_size,
callbacks=[checkpoint, es],
verbose=2).history
history_list.append(history)
# Make predictions
# model.load_weights(MODEL_BASE_PATH + model_path)
predict_eval_df(k_fold, model, x_train, x_valid, get_test_dataset, decode, n_fold, tokenizer, config, config['question_size'])
```
# Model loss graph
```
#@title
for n_fold in range(config['N_FOLDS']):
print('Fold: %d' % (n_fold+1))
plot_metrics(history_list[n_fold])
```
# Model evaluation
```
#@title
display(evaluate_model_kfold(k_fold, config['N_FOLDS']).style.applymap(color_map))
```
# Visualize predictions
```
#@title
k_fold['jaccard_mean'] = 0
for n in range(config['N_FOLDS']):
k_fold['jaccard_mean'] += k_fold[f'jaccard_fold_{n+1}'] / config['N_FOLDS']
display(k_fold[['text', 'selected_text', 'sentiment', 'text_tokenCnt',
'selected_text_tokenCnt', 'jaccard', 'jaccard_mean'] + [c for c in k_fold.columns if (c.startswith('prediction_fold'))]].head(15))
```
# Common DataFrame Operations
* merge / transform
* subset
* groupby
# [Learning Objectives]
- For this first lesson we start from some machine learning basics, which need a bit of Python syntax
- If you are not yet familiar with Python but know at least one other language, these examples are a good place to start
- A metric is how a machine learning model is scored; the example will show how to write the mean absolute error (MAE)
- Let's understand what it means and write it as a function!
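As a preview of the metric mentioned above, a minimal MAE function can be written with NumPy (a sketch; the lesson's own version may differ in naming):

```python
import numpy as np

def mean_absolute_error(y, y_hat):
    """Mean Absolute Error (MAE): the average absolute difference
    between the true values y and the predictions y_hat."""
    return np.mean(np.abs(y - y_hat))

print(mean_absolute_error(np.array([1.0, 2.0, 3.0]), np.array([2.0, 4.0, 3.0])))  # → 1.0
```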
# [Example Highlights]
- Concatenating DataFrames with concat (In[3]~In[5], Out[3]~Out[5])
- Using conditions to select a subset of a DataFrame (In[9], Out[9], In[10], Out[10])
- Various ways to use DataFrame grouping with groupby (In[11]~In[15], Out[11]~Out[15])
```
# Import the required packages
import os
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
%matplotlib inline
```
## Merge / Transform
```
# Generate sample data
df1 = pd.DataFrame({'A': ['A0', 'A1', 'A2', 'A3'],
'B': ['B0', 'B1', 'B2', 'B3'],
'C': ['C0', 'C1', 'C2', 'C3'],
'D': ['D0', 'D1', 'D2', 'D3']},
index=[0, 1, 2, 3])
df2 = pd.DataFrame({'A': ['A4', 'A5', 'A6', 'A7'],
'B': ['B4', 'B5', 'B6', 'B7'],
'C': ['C4', 'C5', 'C6', 'C7'],
'D': ['D4', 'D5', 'D6', 'D7']},
index=[4, 5, 6, 7])
df3 = pd.DataFrame({'A': ['A8', 'A9', 'A10', 'A11'],
'B': ['B8', 'B9', 'B10', 'B11'],
'C': ['C8', 'C9', 'C10', 'C11'],
'D': ['D8', 'D9', 'D10', 'D11']},
index=[8, 9, 10, 11])
df4 = pd.DataFrame({'B': ['B2', 'B3', 'B6', 'B7'],
'D': ['D2', 'D3', 'D6', 'D7'],
'F': ['F2', 'F3', 'F6', 'F7']},
index=[2, 3, 6, 7])
# Concatenate along the row axis
result = pd.concat([df1, df2, df3])
result
# Concatenate along the column axis
result = pd.concat([df1, df4], axis = 1)
result
df1
df4
# Concatenate along the column axis
result = pd.concat([df1, df4], axis = 1, join = 'inner') # keep only the shared index values
print(result)
result = pd.merge(df1, df4, how='inner')
print(result)
# Unpivot the columns into rows one by one
print(df1)
df1.melt()
```
## Subset
```
# Set data_path
dir_data = '../data/'
f_app = os.path.join(dir_data, 'application_train.csv')
print('Path of read in data: %s' % (f_app))
app_train = pd.read_csv(f_app)
app_train.head()
# Select rows where TARGET is 1
sub_df = app_train[app_train['TARGET'] == 1]
sub_df.head()
# From rows where AMT_INCOME_TOTAL is above the mean, take the SK_ID_CURR and TARGET columns
sub_df = app_train.loc[app_train['AMT_INCOME_TOTAL'] > app_train['AMT_INCOME_TOTAL'].mean(), ['SK_ID_CURR', 'TARGET']]
sub_df.head()
```
## Groupby
```
app_train.groupby(['NAME_CONTRACT_TYPE']).size()
app_train.groupby(['NAME_CONTRACT_TYPE'])['AMT_INCOME_TOTAL'].describe()
app_train.groupby(['NAME_CONTRACT_TYPE'])['TARGET'].mean()
# Example on the first 10000 rows: divide AMT_INCOME_TOTAL, AMT_CREDIT and AMT_ANNUITY by their group means after grouping by NAME_CONTRACT_TYPE
app_train.loc[0:10000, ['NAME_CONTRACT_TYPE', 'AMT_INCOME_TOTAL', 'AMT_CREDIT', 'AMT_ANNUITY']].groupby(['NAME_CONTRACT_TYPE']).apply(lambda x: x / x.mean())
app_train.groupby(['NAME_CONTRACT_TYPE'])['TARGET'].hist()
plt.show()
```
## Homework
1. Split CNT_CHILDREN in app_train into four groups following the rules below, and store the result in the dataframe as CNT_CHILDREN_GROUP
* 0 children
* 1 - 2 children
* 3 - 5 children
* more than 5 children
2. Using CNT_CHILDREN_GROUP and TARGET, list the mean AMT_INCOME_TOTAL of each group and draw a boxplot
3. Using CNT_CHILDREN_GROUP and TARGET, compute the [Z-score](https://en.wikipedia.org/wiki/Standard_score) of AMT_INCOME_TOTAL
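A hedged sketch of one way to approach these exercises with `pd.cut` and `GroupBy.transform`, shown on a tiny hypothetical frame rather than the real `application_train.csv` (column names follow the schema used above):

```python
import numpy as np
import pandas as pd

# Toy stand-in for app_train; the real data is loaded from application_train.csv
df = pd.DataFrame({'CNT_CHILDREN': [0, 0, 1, 2, 3, 5, 6, 7],
                   'TARGET': [0, 0, 0, 0, 0, 0, 0, 0],
                   'AMT_INCOME_TOTAL': [100., 200., 300., 400., 500., 600., 700., 800.]})

# 1. Bin the number of children into four groups: 0, 1-2, 3-5, >5
cut_rule = [-np.inf, 0, 2, 5, np.inf]
df['CNT_CHILDREN_GROUP'] = pd.cut(df['CNT_CHILDREN'], bins=cut_rule)

# 2. Mean income for each (children group, TARGET) combination
grouped = df.groupby(['CNT_CHILDREN_GROUP', 'TARGET'])['AMT_INCOME_TOTAL']
print(grouped.mean())

# 3. Z-score of income within each group
df['INCOME_Z'] = grouped.transform(lambda x: (x - x.mean()) / x.std())
```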
```
import pandas as pd
import numpy as np
import nbimporter
from functions import *
from numpy import *
import matplotlib.pyplot as plt
%matplotlib inline
import statistics
from sklearn.metrics import mean_squared_error
D7_dir='/home/mahdi/Desktop/data_selection_D7'
n,name=count(D7_dir)
out_dir='/home/mahdi/Desktop/valid'
n1,name1=count(out_dir)
for i in range(n1):
n2,name2=count_endwith('/home/mahdi/Desktop/centerline/'+name1[i][0]+'/center','.csv')
source = pd.read_csv('/home/mahdi/Desktop/centerline/'+name1[i][0]+'/center/centerline_volume0000.csv', header=None)
if source.shape[0]<10:
continue
source.columns=['x','y','delete']
source = source[['x','y']]
for v in range(1,n2):
df = pd.read_csv('/home/mahdi/Desktop/centerline/'+name1[i][0]+'/center/'+name2[v][0], header=None)
df.columns=['x','y','delete']
df=df[['x','y']]
if i==0 :
source=np.concatenate([source,df],axis=1)
source1=source
elif i>0 :
source=np.concatenate([source,df],axis=1)
if i>=1:
source1=np.concatenate([source1,source],axis=1)
y_data=source1[:, 1::2]
x_data=source1[:, ::2]
y_data
x_data
x_var=np.zeros((10,81))
y_var=np.zeros((10,81))
start=0
for i in range(81):
n2,name2=count_endwith('/home/mahdi/Desktop/centerline/'+name1[i][0]+'/center','.csv')
source = pd.read_csv('/home/mahdi/Desktop/centerline/'+name1[i][0]+'/center/centerline_volume0000.csv', header=None)
if source.shape[0]<10:
continue
for s in range(10):
x_var[s,i]=(statistics.variance(x_data[s,start:n2+start]))
y_var[s,i]=(statistics.variance(y_data[s,start:n2+start]))
start += n2  # advance the column offset by this dataset's width
x_mse=np.zeros((10,81))
y_mse=np.zeros((10,81))
start=0
for i in range(81):
n2,name2=count_endwith('/home/mahdi/Desktop/centerline/'+name1[i][0]+'/center','.csv')
source = pd.read_csv('/home/mahdi/Desktop/centerline/'+name1[i][0]+'/center/centerline_volume0000.csv', header=None)
if source.shape[0]<10:
continue
for s in range(10):
Yx_true =np.ones((n2+start)-start)*source[0][s]
Yy_true =np.ones((n2+start)-start)*source[1][s]
x_mse[s,i]=mean_squared_error(Yx_true ,x_data[s,start:n2+start])
y_mse[s,i]=mean_squared_error(Yy_true ,y_data[s,start:n2+start])
start += n2  # advance the column offset by this dataset's width
mean_mse_x=np.zeros(10)
mean_mse_y=np.zeros(10)
for s in range(10):
mean_mse_x[s]=sum(x_mse[s])/(81-3)
mean_mse_y[s]=sum(y_mse[s])/(81-3)
print('mse_x=',mean_mse_x)
print('mse_y=',mean_mse_y)
print(np.mean(mean_mse_x))
np.mean(mean_mse_y)
```
# Three of the datasets are outliers
```
mean_var_x=np.zeros(10)
mean_var_y=np.zeros(10)
for s in range(10):
mean_var_x[s]=sum(x_var[s,:])/(81-3)
mean_var_y[s]=sum(y_var[s,:])/(81-3)
print('mx=',mean_var_x)
print('my=',mean_var_y)
np.mean(mean_var_x)
np.mean(mean_var_y)
```
# 1.3 Classifying Small Datasets with Transfer Learning
- In this notebook, we use a pretrained VGG model and transfer learning to train a model that classifies images of ants and bees
# Learning Goals
1. Be able to create a Dataset from image data
2. Be able to create a DataLoader from a Dataset
3. Be able to reshape the output layer of a pretrained model as needed
4. Be able to train only the output layer's weights, implementing transfer learning
# Prerequisites
1. Download the data used in this chapter as instructed in the book
2. Install the tqdm package, which reports elapsed and remaining time for for-loops:
conda install -c conda-forge tqdm
```
# Import packages
import glob
import os.path as osp
import random
import numpy as np
import json
from PIL import Image
from tqdm import tqdm
import matplotlib.pyplot as plt
%matplotlib inline
import torch
import torch.nn as nn
import torch.optim as optim
import torch.utils.data as data
import torchvision
from torchvision import models, transforms
# Set the random seeds
torch.manual_seed(1234)
np.random.seed(1234)
random.seed(1234)
```
# Create the Dataset
```
# Class that preprocesses the input images
# Behaves differently at training and inference time
class ImageTransform():
    """
    Image preprocessing class. Behaves differently for training and validation.
    Resizes the image and normalizes the colors.
    At training time, performs data augmentation with RandomResizedCrop and RandomHorizontalFlip.
    Attributes
    ----------
    resize : int
        Target image size after resizing.
    mean : (R, G, B)
        Mean of each color channel.
    std : (R, G, B)
        Standard deviation of each color channel.
    """
    def __init__(self, resize, mean, std):
        self.data_transform = {
            'train': transforms.Compose([
                transforms.RandomResizedCrop(
                    resize, scale=(0.5, 1.0)),  # data augmentation
                transforms.RandomHorizontalFlip(),  # data augmentation
                transforms.ToTensor(),  # convert to tensor
                transforms.Normalize(mean, std)  # normalize
            ]),
            'val': transforms.Compose([
                transforms.Resize(resize),  # resize
                transforms.CenterCrop(resize),  # crop the image center to resize x resize
                transforms.ToTensor(),  # convert to tensor
                transforms.Normalize(mean, std)  # normalize
            ])
        }
    def __call__(self, img, phase='train'):
        """
        Parameters
        ----------
        phase : 'train' or 'val'
            Specifies the preprocessing mode.
        """
        return self.data_transform[phase](img)
# Check how training-time preprocessing behaves
# The processed image changes on every run
# 1. Load the image
image_file_path = './data/goldenretriever-3724972_640.jpg'
img = Image.open(image_file_path)  # [height][width][RGB]
# 2. Display the original image
plt.imshow(img)
plt.show()
# 3. Preprocess the image and display the result
size = 224
mean = (0.485, 0.456, 0.406)
std = (0.229, 0.224, 0.225)
transform = ImageTransform(size, mean, std)
img_transformed = transform(img, phase="train")  # torch.Size([3, 224, 224])
# Convert (C, H, W) to (H, W, C) and clip values to the 0-1 range for display
img_transformed = img_transformed.numpy().transpose((1, 2, 0))
img_transformed = np.clip(img_transformed, 0, 1)
plt.imshow(img_transformed)
plt.show()
# Create a list of file paths to the ant and bee images
def make_datapath_list(phase="train"):
    """
    Create a list that stores the data paths.
    Parameters
    ----------
    phase : 'train' or 'val'
        Specifies training or validation data
    Returns
    -------
    path_list : list
        List of paths to the data
    """
    rootpath = "./data/hymenoptera_data/"
    target_path = osp.join(rootpath+phase+'/**/*.jpg')
    print(target_path)
    path_list = []  # store the paths here
    # Use glob to collect file paths, including subdirectories
    for path in glob.glob(target_path):
        path_list.append(path)
    return path_list
# Run
train_list = make_datapath_list(phase="train")
val_list = make_datapath_list(phase="val")
train_list
# Create a Dataset of ant and bee images
class HymenopteraDataset(data.Dataset):
    """
    Dataset class for the ant and bee images. Inherits from PyTorch's Dataset class.
    Attributes
    ----------
    file_list : list
        List of image paths
    transform : object
        Instance of the preprocessing class
    phase : 'train' or 'val'
        Specifies training or validation
    """
    def __init__(self, file_list, transform=None, phase='train'):
        self.file_list = file_list  # list of file paths
        self.transform = transform  # instance of the preprocessing class
        self.phase = phase  # 'train' or 'val'
    def __len__(self):
        '''Return the number of images'''
        return len(self.file_list)
    def __getitem__(self, index):
        '''
        Get the preprocessed image as Tensor data along with its label
        '''
        # Load the index-th image
        img_path = self.file_list[index]
        img = Image.open(img_path)  # [height][width][RGB]
        # Preprocess the image
        img_transformed = self.transform(
            img, self.phase)  # torch.Size([3, 224, 224])
        # Extract the image label from the file name
        if self.phase == "train":
            label = img_path[30:34]
        elif self.phase == "val":
            label = img_path[28:32]
        # Convert the label to a number
        if label == "ants":
            label = 0
        elif label == "bees":
            label = 1
        return img_transformed, label
# Run
train_dataset = HymenopteraDataset(
    file_list=train_list, transform=ImageTransform(size, mean, std), phase='train')
val_dataset = HymenopteraDataset(
    file_list=val_list, transform=ImageTransform(size, mean, std), phase='val')
# Sanity check
index = 0
print(train_dataset.__getitem__(index)[0].size())
print(train_dataset.__getitem__(index)[1])
```
# Create the DataLoaders
```
# Set the mini-batch size
batch_size = 32
# Create the DataLoaders
train_dataloader = torch.utils.data.DataLoader(
    train_dataset, batch_size=batch_size, shuffle=True)
val_dataloader = torch.utils.data.DataLoader(
    val_dataset, batch_size=batch_size, shuffle=False)
# Collect them in a dictionary
dataloaders_dict = {"train": train_dataloader, "val": val_dataloader}
# Sanity check
batch_iterator = iter(dataloaders_dict["train"])  # convert to an iterator
inputs, labels = next(
    batch_iterator)  # take the first element
print(inputs.size())
print(labels)
```
# Build the Network Model
```
# Load the pretrained VGG-16 model
# Create a VGG-16 model instance
use_pretrained = True  # use the pretrained parameters
net = models.vgg16(pretrained=use_pretrained)
# Replace the final output layer of VGG-16 with two output units, for ants and bees
net.classifier[6] = nn.Linear(in_features=4096, out_features=2)
# Set training mode
net.train()
print('Network ready: loaded the pretrained weights and set training mode')
```
# Define the Loss Function
```
# Set the loss function
criterion = nn.CrossEntropyLoss()
```
# Set the Optimizer
```
# Store the parameters to be trained via transfer learning in params_to_update
params_to_update = []
# Names of the parameters to train
update_param_names = ["classifier.6.weight", "classifier.6.bias"]
# Disable gradient computation for all other parameters so they do not change
for name, param in net.named_parameters():
    if name in update_param_names:
        param.requires_grad = True
        params_to_update.append(param)
        print(name)
    else:
        param.requires_grad = False
# Check the contents of params_to_update
print("-----------")
print(params_to_update)
# Set the optimization method
optimizer = optim.SGD(params=params_to_update, lr=0.001, momentum=0.9)
```
# Run Training and Validation
```
# Function that trains the model
def train_model(net, dataloaders_dict, criterion, optimizer, num_epochs):
    # epoch loop
    for epoch in range(num_epochs):
        print('Epoch {}/{}'.format(epoch+1, num_epochs))
        print('-------------')
        # training and validation loop for each epoch
        for phase in ['train', 'val']:
            if phase == 'train':
                net.train()  # set the model to training mode
            else:
                net.eval()   # set the model to evaluation mode
            epoch_loss = 0.0  # sum of losses over the epoch
            epoch_corrects = 0  # number of correct predictions in the epoch
            # Skip training at epoch 0 to check validation performance before any training
            if (epoch == 0) and (phase == 'train'):
                continue
            # Loop that draws mini-batches from the data loader
            for inputs, labels in tqdm(dataloaders_dict[phase]):
                # Reset the optimizer
                optimizer.zero_grad()
                # Forward pass
                with torch.set_grad_enabled(phase == 'train'):
                    outputs = net(inputs)
                    loss = criterion(outputs, labels)  # compute the loss
                    _, preds = torch.max(outputs, 1)  # predict the labels
                    # Backpropagate during training
                    if phase == 'train':
                        loss.backward()
                        optimizer.step()
                    # Accumulate the iteration results
                    # Update the running loss
                    epoch_loss += loss.item() * inputs.size(0)
                    # Update the running number of correct predictions
                    epoch_corrects += torch.sum(preds == labels.data)
            # Display loss and accuracy for each epoch
            epoch_loss = epoch_loss / len(dataloaders_dict[phase].dataset)
            epoch_acc = epoch_corrects.double(
            ) / len(dataloaders_dict[phase].dataset)
            print('{} Loss: {:.4f} Acc: {:.4f}'.format(
                phase, epoch_loss, epoch_acc))
# Run training and validation
num_epochs = 2
train_model(net, dataloaders_dict, criterion, optimizer, num_epochs=num_epochs)
```
That's all.
# GPyTorch Regression Tutorial
## Introduction
In this notebook, we demonstrate many of the design features of GPyTorch using the simplest example, training an RBF kernel Gaussian process on a simple function. We'll be modeling the function
\begin{align}
y &= \sin(2\pi x) + \epsilon \\
\epsilon &\sim \mathcal{N}(0, 0.04)
\end{align}
with 100 training examples, and testing on 51 test examples.
**Note:** this notebook is not necessarily intended to teach the mathematical background of Gaussian processes, but rather how to train a simple one and make predictions in GPyTorch. For a mathematical treatment, Chapter 2 of Gaussian Processes for Machine Learning provides a very thorough introduction to GP regression (this entire text is highly recommended): http://www.gaussianprocess.org/gpml/chapters/RW2.pdf
```
import math
import torch
import gpytorch
from matplotlib import pyplot as plt
%matplotlib inline
%load_ext autoreload
%autoreload 2
```
### Set up training data
In the next cell, we set up the training data for this example. We'll be using 100 regularly spaced points on [0,1] which we evaluate the function on and add Gaussian noise to get the training labels.
```
# Training data is 100 points in [0,1] inclusive regularly spaced
train_x = torch.linspace(0, 1, 100)
# True function is sin(2*pi*x) with Gaussian noise
train_y = torch.sin(train_x * (2 * math.pi)) + torch.randn(train_x.size()) * math.sqrt(0.04)
```
## Setting up the model
The next cell demonstrates the most critical features of a user-defined Gaussian process model in GPyTorch. Building a GP model in GPyTorch is different in a number of ways.
First in contrast to many existing GP packages, we do not provide full GP models for the user. Rather, we provide *the tools necessary to quickly construct one*. This is because we believe, analogous to building a neural network in standard PyTorch, it is important to have the flexibility to include whatever components are necessary. As can be seen in more complicated examples, this allows the user great flexibility in designing custom models.
For most GP regression models, you will need to construct the following GPyTorch objects:
1. A **GP Model** (`gpytorch.models.ExactGP`) - This handles most of the inference.
1. A **Likelihood** (`gpytorch.likelihoods.GaussianLikelihood`) - This is the most common likelihood used for GP regression.
1. A **Mean** - This defines the prior mean of the GP. (If you don't know which mean to use, a `gpytorch.means.ConstantMean()` is a good place to start.)
1. A **Kernel** - This defines the prior covariance of the GP. (If you don't know which kernel to use, a `gpytorch.kernels.ScaleKernel(gpytorch.kernels.RBFKernel())` is a good place to start.)
1. A **MultivariateNormal** Distribution (`gpytorch.distributions.MultivariateNormal`) - This is the object used to represent multivariate normal distributions.
### The GP Model
The components of a user built (Exact, i.e. non-variational) GP model in GPyTorch are, broadly speaking:
1. An `__init__` method that takes the training data and a likelihood, and constructs whatever objects are necessary for the model's `forward` method. This will most commonly include things like a mean module and a kernel module.
2. A `forward` method that takes in some $n \times d$ data `x` and returns a `MultivariateNormal` with the *prior* mean and covariance evaluated at `x`. In other words, we return the vector $\mu(x)$ and the $n \times n$ matrix $K_{xx}$ representing the prior mean and covariance matrix of the GP.
This specification leaves a large amount of flexibility when defining a model. For example, to compose two kernels via addition, you can either add the kernel modules directly:
```python
self.covar_module = ScaleKernel(RBFKernel() + LinearKernel())
```
Or you can add the outputs of the kernel in the forward method:
```python
covar_x = self.rbf_kernel_module(x) + self.white_noise_module(x)
```
### The likelihood
The simplest likelihood for regression is the `gpytorch.likelihoods.GaussianLikelihood`. This assumes a homoskedastic noise model (i.e. all inputs have the same observational noise).
There are other options for exact GP regression, such as the [FixedNoiseGaussianLikelihood](http://docs.gpytorch.ai/likelihoods.html#fixednoisegaussianlikelihood), which assigns a different observed noise value to different training inputs.
```
# We will use the simplest form of GP model, exact inference
class ExactGPModel(gpytorch.models.ExactGP):
def __init__(self, train_x, train_y, likelihood):
super(ExactGPModel, self).__init__(train_x, train_y, likelihood)
self.mean_module = gpytorch.means.ConstantMean()
self.covar_module = gpytorch.kernels.ScaleKernel(gpytorch.kernels.RBFKernel())
def forward(self, x):
mean_x = self.mean_module(x)
covar_x = self.covar_module(x)
return gpytorch.distributions.MultivariateNormal(mean_x, covar_x)
# initialize likelihood and model
likelihood = gpytorch.likelihoods.GaussianLikelihood()
model = ExactGPModel(train_x, train_y, likelihood)
```
### Model modes
Like most PyTorch modules, the `ExactGP` has a `.train()` and `.eval()` mode.
- `.train()` mode is for optimizing model hyperameters.
- `.eval()` mode is for computing predictions through the model posterior.
## Training the model
In the next cell, we handle using Type-II MLE to train the hyperparameters of the Gaussian process.
The most obvious difference here compared to many other GP implementations is that, as in standard PyTorch, the core training loop is written by the user. In GPyTorch, we make use of the standard PyTorch optimizers as from `torch.optim`, and all trainable parameters of the model should be of type `torch.nn.Parameter`. Because GP models directly extend `torch.nn.Module`, calls to methods like `model.parameters()` or `model.named_parameters()` function as you might expect coming from PyTorch.
In most cases, the boilerplate code below will work well. It has the same basic components as the standard PyTorch training loop:
1. Zero all parameter gradients
2. Call the model and compute the loss
3. Call backward on the loss to fill in gradients
4. Take a step on the optimizer
However, defining custom training loops allows for greater flexibility. For example, it is easy to save the parameters at each step of training, or use different learning rates for different parameters (which may be useful in deep kernel learning for example).
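For instance, per-parameter learning rates can be assigned with standard PyTorch parameter groups; this is a generic PyTorch sketch (the parameter names here are placeholders, not attributes of the tutorial's model):

```python
import torch

# Placeholder parameters standing in for two sets of model hyperparameters
slow_param = torch.nn.Parameter(torch.zeros(3))
fast_param = torch.nn.Parameter(torch.zeros(3))

# Each parameter group can carry its own learning rate
optimizer = torch.optim.Adam([
    {'params': [slow_param], 'lr': 0.01},
    {'params': [fast_param], 'lr': 0.1},
])
print([g['lr'] for g in optimizer.param_groups])  # → [0.01, 0.1]
```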
```
# this is for running the notebook in our testing framework
import os
smoke_test = ('CI' in os.environ)
training_iter = 2 if smoke_test else 50
# Find optimal model hyperparameters
model.train()
likelihood.train()
# Use the adam optimizer
optimizer = torch.optim.Adam(model.parameters(), lr=0.1) # Includes GaussianLikelihood parameters
# "Loss" for GPs - the marginal log likelihood
mll = gpytorch.mlls.ExactMarginalLogLikelihood(likelihood, model)
for i in range(training_iter):
# Zero gradients from previous iteration
optimizer.zero_grad()
# Output from model
output = model(train_x)
# Calc loss and backprop gradients
loss = -mll(output, train_y)
loss.backward()
print('Iter %d/%d - Loss: %.3f lengthscale: %.3f noise: %.3f' % (
i + 1, training_iter, loss.item(),
model.covar_module.base_kernel.lengthscale.item(),
model.likelihood.noise.item()
))
optimizer.step()
```
## Make predictions with the model
In the next cell, we make predictions with the model. To do this, we simply put the model and likelihood in eval mode, and call both modules on the test data.
Just as a user defined GP model returns a `MultivariateNormal` containing the prior mean and covariance from forward, a trained GP model in eval mode returns a `MultivariateNormal` containing the posterior mean and covariance. Thus, getting the predictive mean and variance, and then sampling functions from the GP at the given test points could be accomplished with calls like:
```python
f_preds = model(test_x)
y_preds = likelihood(model(test_x))
f_mean = f_preds.mean
f_var = f_preds.variance
f_covar = f_preds.covariance_matrix
f_samples = f_preds.sample(sample_shape=torch.Size([1000]))
```
The `gpytorch.settings.fast_pred_var` context is not needed, but here we are giving a preview of using one of our cool features, getting faster predictive distributions using [LOVE](https://arxiv.org/abs/1803.06058).
```
# Get into evaluation (predictive posterior) mode
model.eval()
likelihood.eval()
# Test points are regularly spaced along [0,1]
# Make predictions by feeding model through likelihood
with torch.no_grad(), gpytorch.settings.fast_pred_var():
test_x = torch.linspace(0, 1, 51)
observed_pred = likelihood(model(test_x))
```
## Plot the model fit
In the next cell, we plot the mean and confidence region of the Gaussian process model. The `confidence_region` method is a helper method that returns 2 standard deviations above and below the mean.
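As a back-of-the-envelope check (a sketch of the arithmetic, not GPyTorch's internals), the bounds are just the mean offset by two standard deviations:

```
import math

mean, variance = 0.5, 0.04          # hypothetical predictive mean and variance
stddev = math.sqrt(variance)
lower, upper = mean - 2 * stddev, mean + 2 * stddev
print(lower, upper)                 # approximately 0.1 and 0.9
```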
```
with torch.no_grad():
# Initialize plot
f, ax = plt.subplots(1, 1, figsize=(4, 3))
# Get upper and lower confidence bounds
lower, upper = observed_pred.confidence_region()
# Plot training data as black stars
ax.plot(train_x.numpy(), train_y.numpy(), 'k*')
# Plot predictive means as blue line
ax.plot(test_x.numpy(), observed_pred.mean.numpy(), 'b')
# Shade between the lower and upper confidence bounds
ax.fill_between(test_x.numpy(), lower.numpy(), upper.numpy(), alpha=0.5)
ax.set_ylim([-3, 3])
ax.legend(['Observed Data', 'Mean', 'Confidence'])
```
# Plotting a static treemap
--------
This Jupyter notebook will generate a static version of the treemap. In the first step, each parent in the JSON has all of its children's values (read or contig counts) summed into it. In the second step, we assign a lineage to each treemap compartment for later use. In the last step, we produce a flat copy of the nested treemap structure for later use.
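The parent-sum step can be sketched on a toy nested structure with the same `attrs`/`children` layout (hypothetical data, not the actual skeeters JSON):

```
def sum_up(node, stat):
    """Recursively sum a statistic from children into each parent."""
    if 'children' in node:
        node['attrs'][stat] = 0
        for child in node['children']:
            sum_up(child, stat)  # traverse first, so tips are reached first
            node['attrs'][stat] += child['attrs'][stat]

toy = {'attrs': {}, 'children': [
    {'attrs': {'read_count': 3}},
    {'attrs': {}, 'children': [{'attrs': {'read_count': 5}},
                               {'attrs': {'read_count': 2}}]},
]}
sum_up(toy, 'read_count')
print(toy['attrs']['read_count'])  # 10
```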
```
%matplotlib inline
import matplotlib as mpl
from matplotlib import pyplot as plt
from matplotlib import gridspec
typeface='Helvetica Neue'
mpl.rcParams['mathtext.fontset']='custom'
mpl.rcParams['font.sans-serif']=typeface
mpl.rcParams['mathtext.default']='sf'
mpl.rcParams['axes.labelweight']=300
mpl.rcParams['font.family']=typeface
mpl.rcParams['font.size']=22
import numpy as np
import json,copy
import ete3
import squarify
ncbi=ete3.ncbi_taxonomy.NCBITaxa()
treemap_path='/Users/evogytis/Documents/manuscripts/skeeters/treemap/skeeters.json'
J=json.load(open(treemap_path,'r'))
def sumValues(node,stat,level=None,order=None):
'''
Sum children's values and assign to parent (currently annotations are branch-specific).
'''
if level==None:
level=0
if order==None:
order=0
node['attrs']['height']=level ## remember height
order+=1 ## remember order of visitation
if 'children' in node: ## node with children
level+=1 ## increment level
for child in node['children']: ## iterate over children
if stat not in node['attrs']:
node['attrs'][stat]=0
order=sumValues(child,stat,level,order) ## traverse first (reaches tips first)
node['attrs'][stat]+=child['attrs'][stat] ## only then add stat
node['attrs']['order']=order ## assign order of visitation
return order
def flatten(node,container=None): ## flat list of JSON branches
"""
Unroll JSON tree structure into a flat list.
"""
if container==None:
container=[]
container.append(node)
if 'children' in node:
for child in node['children']:
flatten(child,container)
return container
def treemapLineages(node,lineage=None):
"""
Assign lineages to each compartment of the treemap.
"""
if lineage==None:
lineage=[]
lin=list(lineage)
if 'taxid' in node:
node['lineage']=lin+[node['taxid']]
if 'children' in node:
if 'taxid' in node:
lin.append(node['taxid'])
for child in node['children']:
lineage=treemapLineages(child,lin)
return lineage
sumValues(J,'read_count') ## sum read counts across children
# sumValues(J,'contig_count')
treemapLineages(J) ## assign lineages to each treemap compartment
flatJ=flatten(copy.deepcopy(J)) ## get a flat version of the JSON
print(len(flatJ)) ## report number of branches in JSON treemap
print(flatJ[0]['attrs']['read_count'])
```
## Plotting treemap
In this cell we define the `size` function which will be used to determine the size of treemap compartments (read counts by default). The parameter `edge_length` is computed as the square root of the value at the root and passed on as both the `dx` (width) and `dy` (height) of the treemap (_i.e._ the treemap is a square).
```
mpl.rcParams['pdf.fonttype'] = 42
mpl.rcParams['ps.fonttype'] = 42 ## TrueType fonts for manageable export
size=lambda k: k['attrs']['read_count'] ## area will be read counts
# size=lambda k: k['attrs']['contig_count'] ## area will be number of contigs
edge_length=np.sqrt(size(J['children'][0])) ## side length of the square treemap (its area equals the top compartment's read count)
print(edge_length)
def computeCoordinates(node,x,y,dx,dy):
"""
Compute treemap compartment coordinates using the squarify library.
"""
node['attrs']['x']=x ## assign coordinates to compartment
node['attrs']['y']=y
node['attrs']['dx']=dx
node['attrs']['dy']=dy
if 'children' in node: ## there are children, need to traverse further
children=sorted(node['children'],key=lambda q: size(q)) ## get children sorted by read count
children_values=[size(child) for child in children] ## get read counts for each child
sizes=squarify.normalize_sizes(children_values+[size(node)-sum(children_values)],dx,dy) ## normalize children values
rects=squarify.padded_squarify(sizes,x,y,dx,dy) ## compute rectangles (padded)
# rects=squarify.squarify(sizes,x,y,dx,dy) ## compute rectangles (unpadded)
for ch,rec in zip(children,rects): ## iterate over children
x=rec['x']
y=rec['y']
dx=rec['dx']
dy=rec['dy']
computeCoordinates(ch,x,y,dx,dy) ## recurse
def plot(ax,node,parent,previous=None,level=None):
"""
Plot treemap.
"""
if previous==None:
previous=0.0
if level==None:
level=0
if 'x' in node['attrs']:
x=node['attrs']['x']
y=node['attrs']['y']
w=node['attrs']['dx']
h=node['attrs']['dy']
if w==0 or h==0:
print('box %s has zero width or height: %s %s'%(node['taxonomy'],w,h))
c=node['attrs']['colour']
# lw=max([1,(7-level)])
lw=1
rect=plt.Rectangle((x,y),w,h,facecolor=c,edgecolor='w',
alpha=1.0,lw=lw,zorder=level) ## add rectangle
rotation=0
if h>(w*2):
rotation=90 ## rotate labels if rectangle is taller than wider
if 'taxonomy' in node and size(node)>10000.0: ## only add names for compartments big enough
name=node['taxonomy']
print('%s (of %s)\t%d'%(node['taxonomy'],parent['taxonomy'] if 'taxonomy' in parent else '',size(node)))
if 'children' in node and name!='root':
if level<5: ## when dealing with deep compartments plot label bottom right
ax.text(x+w-0.0015,y+0.0015,name,ha='right',va='bottom',
size=22-level,zorder=10000,rotation=rotation)
else: ## otherwise top left
ax.text(x+0.0015,y+h-0.0015,name,ha='left',va='top',
size=25-level,zorder=level+10,rotation=rotation)
elif 'children' not in node: ## terminal compartment, text is centered
ax.text(x+w/2,y+h/2,name,ha='center',va='center',
size=16,zorder=level+10,rotation=rotation)
ax.add_patch(rect)
ax.plot()
if 'children' in node:
level+=1
for child in node['children']:
plot(ax,child,node,previous,level) ## recurse
fig = plt.figure(figsize=(30, 30),facecolor='w')
gs = gridspec.GridSpec(1,1,wspace=0.01,hspace=0.01)
ax=plt.subplot(gs[0],facecolor='w')
computeCoordinates(J,0.0,0.0,edge_length,edge_length) ## compute treemap coordinates, start at (0,0), width and height are edge_length (square treemap)
plot(ax,J,None) ## plot treemap
if edge_length>3000.0:
unit=100.0 ## legend unit
u='%d reads'%(int(unit)) ## legend label
X=edge_length+100 ## x-axis offset
else:
unit=4.0
u='%d contigs'%(int(unit))
X=-7
ax.text(X+unit*1.1,unit/2,u,size=40,va='center',ha='left') ## add text explaining what the legend square represents
print(unit,edge_length)
legend=plt.Rectangle((X,0.0),unit,unit,lw=2,facecolor='k',edgecolor='none',zorder=100,clip_on=False)
ax.add_patch(legend) ## add legend square
branchesByLevel=sorted(flatJ[2:], key=lambda k: (-k['attrs']['order'],k['attrs']['height']))
colours={}
for b in branchesByLevel:
if b['attrs']['colour'] not in colours: ## plot legend squares until colours are exhausted (needs to be iterated over in the correct order)
y=edge_length*0.1+len(colours)*edge_length/25.0 # y coordinate
ax.add_patch(plt.Rectangle((X,y-unit),unit,unit,lw=2,facecolor=b['attrs']['colour'],edgecolor='none',zorder=1000,clip_on=False)) ## add legend rectangle
ax.text(X+unit*1.1,y-unit/2.0,b['taxonomy'],size=26,ha='left',va='center') ## add name for compartment
colours[b['attrs']['colour']]=b['taxonomy'] ## remember colour
[ax.spines[loc].set_visible(False) for loc in ax.spines]
ax.tick_params(size=0,labelsize=0)
ax.set_aspect(1)
plt.savefig('/Users/evogytis/Documents/manuscripts/skeeters/figures/fig2_treemap.pdf',dpi=300,bbox_inches='tight')
plt.savefig('/Users/evogytis/Documents/manuscripts/skeeters/figures/fig2_treemap.png',dpi=300,bbox_inches='tight')
plt.show()
```
## Display read counts for treemap compartments split by sample
This cell collects specified compartments of the treemap and displays all the reads under each compartment split by sample, excluding nested compartments that are themselves included in the plot.
```
def split_reads(node,container=None,exclude=None):
"""
Traverse subtree from a node, accumulating sample-specific read counts.
"""
if container==None:
container={}
if exclude==None:
exclude=lambda node: True
for sample in [attr for attr in node['attrs'] if 'CMS' in attr]:
if sample not in container:
container[sample]=0
container[sample]+=node['attrs'][sample]['read_count'] ## add reads to sample
if 'children' in node: ## need to recurse
for child in node['children']: ## iterate over children
if exclude(child)==True: ## child not excluded
split_reads(child,container,exclude=exclude) ## recurse
else:
print('%s reads excluded from %s'%(child['taxonomy'],node['taxonomy']))
return container
fig = plt.figure(figsize=(40, 35),facecolor='w')
gs = gridspec.GridSpec(1,1,wspace=0.01,hspace=0.01)
ax=plt.subplot(gs[0],facecolor='w')
ylabels=[]
read_counts=[]
colours=[]
picked=[b for b in flatJ[1:] if b['attrs']['read_count']>1e3 and ('children' not in b or b['taxonomy']=='Bacteria')] ## programmatically get things that have high reads, are terminal (except if it's Bacteria)
include=[b['taxid'] for b in picked] ## list of taxids from branches
for k in sorted(picked,key=lambda w: w['attrs']['read_count']): ## iterate over picked branches, sorted by read count
taxid_reads_per_sample=split_reads(k,exclude=lambda x: x in k['lineage'] and x not in include) ## do traversals accumulating reads (excluding other picked branches)
read_counts.append(list(filter(lambda w:w!=0,sorted(taxid_reads_per_sample.values(),reverse=True)))) ## convert dict to list, sort by read count
cur_clade=set(k['lineage'])
include_clades={b['taxid']: set(b['lineage']) for b in flatJ[1:] if b['taxid'] in include}
taxid_label=[]
for t in include_clades: ## iterate over all other treemap compartments designated for display here
if cur_clade.issubset(include_clades[t]): ## if another compartment's lineage is a subset of current one's (it's nested within current one)
if isinstance(t,str):
taxid_label.append(t)
elif isinstance(t,int):
taxid_label.append(ncbi.get_taxid_translator([t])[t]) ## continue adding to taxid label, first element is what's being displayed, every other element will be a compartment that's excluded
if len(taxid_label)==1:
ylabels.append(taxid_label[0])
else:
ylabels.append('%s (excluding %s)'%(taxid_label[0],', '.join(taxid_label[1:]))) ## format label
colours.append(k['attrs']['colour']) ## remember colour
for i in range(len(ylabels)): ## iterate over compartments
for j in range(len(read_counts[i])): ## iterate over sample-specific reads
x=read_counts[i][j] ## current sample's read counts
sx=sum(read_counts[i][:j]) ## sum of all other samples up to current one
ax.barh(i,x,left=sx,fc=colours[i],ec='k',lw=1,alpha=1.0,align='center',zorder=10) ## plot stack
ax.set_yticks(range(len(ylabels)))
ax.set_yticklabels(ylabels)
ax.set_ylim(-0.5,len(ylabels)-0.5)
ax.set_xlabel('read count',size=30)
customfmt=lambda y,pos: r'$%.2f\times10^{%d}$'%(y*(10**-int(np.log10(y))),int(np.log10(y))) if y!=0.0 else '0.0'
ax.xaxis.set_major_formatter(mpl.ticker.FuncFormatter(customfmt)) ## custom log formatted tick labels
ax.tick_params(size=0,labelsize=22)
ax.tick_params(axis='x',labelsize=28)
ax.grid(axis='x',ls='--',zorder=0)
plt.show()
```
---
# Data Mining:<br>Statistical Modeling and Learning from Data
## Dr. Ciro Cattuto<br>Dr. Laetitia Gauvin<br>Dr. André Panisson
### Exercises - Introduction to Python programming
---
## Python program files
* Python code is usually stored in text files with the file ending "`.py`":
myprogram.py
* Every line in a Python program file is assumed to be a Python statement, or part thereof.
* The only exception is comment lines, which start with the character `#` (optionally preceded by an arbitrary number of white-space characters, i.e., tabs or spaces). Comment lines are ignored by the Python interpreter.
* To run our Python program from the command line we use:
$ python myprogram.py
* On UNIX systems it is common to define the path to the interpreter on the first line of the program (note that this is a comment line as far as the Python interpreter is concerned):
#!/usr/bin/env python
If we do, and if we additionally set the script file to be executable, we can run the program like this:
$ myprogram.py
## IPython notebooks
This file - an IPython notebook - does not follow the standard pattern with Python code in a text file. Instead, an IPython notebook is stored as a file in the [JSON](http://en.wikipedia.org/wiki/JSON) format. The advantage is that we can mix formatted text, Python code and code output. It requires the IPython notebook server to run it though, and therefore isn't a stand-alone Python program as described above. Other than that, there is no difference between the Python code that goes into a program file or an IPython notebook.
## Modules
Most of the functionality in Python is provided by *modules*. The Python Standard Library is a large collection of modules that provides *cross-platform* implementations of common facilities such as access to the operating system, file I/O, string management, network communication, and much more.
### References
* The Python Language Reference: http://docs.python.org/2/reference/index.html
* The Python Standard Library: http://docs.python.org/2/library/
To use a module in a Python program it first has to be imported. A module can be imported using the `import` statement. For example, to import the module `math`, which contains many standard mathematical functions, we can do:
```
import math
```
This includes the whole module and makes it available for use later in the program. For example, we can do:
```
import math
x = math.cos(2 * math.pi)
print(x)
```
Alternatively, we can choose to import all symbols (functions and variables) in a module into the current namespace (so that we don't need to use the prefix "`math.`" every time we use something from the `math` module):
```
from math import *
x = cos(2 * pi)
print(x)
```
This pattern can be very convenient, but in large programs that include many modules it is often a good idea to keep the symbols from each module in their own namespaces, by using the `import math` pattern. This eliminates potentially confusing problems with namespace collisions.
As a third alternative, we can choose to import only a few selected symbols from a module by explicitly listing which ones we want to import instead of using the wildcard character `*`:
```
from math import cos, pi
x = cos(2 * pi)
print(x)
```
### Looking at what a module contains, and its documentation
Once a module is imported, we can list the symbols it provides using the `dir` function:
```
import math
print(dir(math))
```
And using the function `help` we can get a description of each function (almost every one — not all functions have docstrings, as these descriptions are technically called, but the vast majority of functions are documented this way).
```
help(math.log)
log(10)
log(10, 2)
```
We can also use the `help` function directly on modules: Try
help(math)
Some very useful modules from the Python standard library are `os`, `sys`, `math`, `shutil`, `re`, `subprocess`, `multiprocessing`, `threading`.
Complete lists of the standard modules for Python 2 and Python 3 are available at http://docs.python.org/2/library/ and http://docs.python.org/3/library/, respectively.
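As a tiny taste of two of these (a sketch only; see the library reference for the full APIs):

```
import os
import sys

print(os.path.join("data", "results.csv"))  # builds a platform-appropriate path
print(sys.version_info.major)               # major version of the running interpreter
```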
## Variables and types
### Symbol names
Variable names in Python can contain alphanumerical characters `a-z`, `A-Z`, `0-9` and some special characters such as `_`. Normal variable names must start with a letter.
By convention, variable names start with a lower-case letter, and class names start with a capital letter.
In addition, there are a number of Python keywords that cannot be used as variable names. These keywords are:
and, as, assert, break, class, continue, def, del, elif, else, except,
exec, finally, for, from, global, if, import, in, is, lambda, not, or,
pass, print, raise, return, try, while, with, yield
Note: Be aware of the keyword `lambda`, which could easily be a natural variable name in a scientific program. But being a keyword, it cannot be used as a variable name.
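A quick demonstration — attempting to assign to `lambda` is a syntax error:

```
# `lambda` is a reserved keyword, so it cannot be used as a variable name
try:
    compile("lambda = 0.5", "<example>", "exec")
    is_valid = True
except SyntaxError:
    is_valid = False
print(is_valid)  # False
```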
### Assignment
The assignment operator in Python is `=`. Python is a dynamically typed language, so we do not need to specify the type of a variable when we create one.
Assigning a value to a new variable creates the variable:
```
# variable assignments
x = 1.0
my_variable = 12.2
```
Although not explicitly specified, a variable does have a type associated with it. The type is derived from the value it was assigned.
```
type(x)
```
If we assign a new value to a variable, its type can change.
```
x = 1
type(x)
```
If we try to use a variable that has not yet been defined we get a `NameError`:
```
print(y)
```
### Fundamental types
```
# integers
x = 1
type(x)
# float
x = 1.0
type(x)
# boolean
b1 = True
b2 = False
type(b1)
# complex numbers: note the use of `j` to specify the imaginary part
x = 1.0 - 1.0j
type(x)
print(x)
print(x.real, x.imag)
```
### Type utility functions
The module `types` contains a number of type name definitions that can be used to test if variables are of certain types:
```
import types
# print all types defined in the `types` module
print(dir(types))
x = 1.0
# check if the variable x is a float
type(x) is float
# check if the variable x is an int
type(x) is int
```
We can also use the `isinstance` method for testing types of variables:
```
isinstance(x, float)
```
### Type casting
```
x = 1.5
print(x, type(x))
x = int(x)
print(x, type(x))
z = complex(x)
print(z, type(z))
x = float(z)
```
Complex variables cannot be cast to floats or integers. We need to use `z.real` or `z.imag` to extract the part of the complex number we want:
```
y = bool(z.real)
print(z.real, " -> ", y, type(y))
y = bool(z.imag)
print(z.imag, " -> ", y, type(y))
```
## Operators and comparisons
Most operators and comparisons in Python work as one would expect:
* Arithmetic operators `+`, `-`, `*`, `/`, `//` (integer division), `**` (power)
```
1 + 2, 1 - 2, 1 * 2, 1 / 2
1.0 + 2.0, 1.0 - 2.0, 1.0 * 2.0, 1.0 / 2.0
# Integer division of float numbers
3.0 // 2.0
# Note! The power operator in Python isn't ^, but **
2 ** 2
```
* The boolean operators are spelled out as words `and`, `not`, `or`.
```
True and False
not False
True or False
```
* Comparison operators `>`, `<`, `>=` (greater or equal), `<=` (less or equal), `==` equality, `is` identical.
```
2 > 1, 2 < 1
2 > 2, 2 < 2
2 >= 2, 2 <= 2
# equality
[1,2] == [1,2]
# objects identical?
l1 = l2 = [1,2]
l1 is l2
```
## Compound types: Strings, List and dictionaries
### Strings
Strings are the variable type that is used for storing text messages.
```
s = "Hello world"
type(s)
# length of the string: the number of characters
len(s)
# replace a substring in a string with something else
s2 = s.replace("world", "test")
print(s2)
```
We can index a character in a string using `[]`:
```
s[0]
```
**Heads up MATLAB users:** Indexing starts at 0!
We can extract a part of a string using the syntax `[start:stop]`, which extracts characters between index `start` and `stop`:
```
s[0:5]
```
If we omit either (or both) of `start` or `stop` from `[start:stop]`, the default is the beginning and the end of the string, respectively:
```
s[:5]
s[6:]
s[:]
```
We can also define the step size using the syntax `[start:end:step]` (the default value for `step` is 1, as we saw above):
```
s[::1]
s[::2]
```
This technique is called *slicing*. Read more about the syntax here: http://docs.python.org/release/2.7.3/library/functions.html?highlight=slice#slice
Python has a very rich set of functions for text processing. See for example http://docs.python.org/2/library/string.html for more information.
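A few of those string methods, as a quick sketch:

```
s = "Hello world"
print(s.upper())                  # 'HELLO WORLD'
print(s.split(" "))               # ['Hello', 'world']
print("-".join(["a", "b", "c"]))  # 'a-b-c'
print("  padded  ".strip())       # 'padded'
```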
#### String formatting examples
```
print("str1", "str2", "str3") # The print statement concatenates strings with a space
print("str1", 1.0, False, -1j)  # The print statement converts all arguments to strings
print("str1" + "str2" + "str3") # strings added with + are concatenated without space
print("value = %f" % 1.0) # we can use C-style string formatting
# this formatting creates a string
s2 = "value1 = %.2f. value2 = %d" % (3.1415, 1.5)
print(s2)
# alternative, more intuitive way of formatting a string
s3 = 'value1 = {0}, value2 = {1}'.format(3.1415, 1.5)
print(s3)
```
### List
Lists are very similar to strings, except that each element can be of any type.
The syntax for creating lists in Python is `[...]`:
```
l = [1,2,3,4]
print(type(l))
print(l)
```
We can use the same slicing techniques to manipulate lists as we could use on strings:
```
print(l)
print(l[1:3])
print(l[::2])
```
**Heads up MATLAB users:** Indexing starts at 0!
```
l[0]
```
Elements in a list do not all have to be of the same type:
```
l = [1, 'a', 1.0, 1-1j]
print(l)
```
Python lists can be inhomogeneous and arbitrarily nested:
```
nested_list = [1, [2, [3, [4, [5]]]]]
nested_list
```
Lists play a very important role in Python, and are for example used in loops and other flow control structures (discussed below). There are a number of convenient functions for generating lists of various types, for example the `range` function:
```
start = 10
stop = 30
step = 2
range(start, stop, step)
# in python 3 range generates an iterator, which can be converted to a list using 'list(...)'.
# It has no effect in python 2
list(range(start, stop, step))
list(range(-10, 10))
s
# convert a string to a list by type casting:
s2 = list(s)
s2
# sorting lists
s2.sort()
print(s2)
```
#### Adding, inserting, modifying, and removing elements from lists
```
# create a new empty list
l = []
# add an elements using `append`
l.append("A")
l.append("d")
l.append("d")
print(l)
```
We can modify lists by assigning new values to elements in the list. In technical jargon, lists are *mutable*.
```
l[1] = "p"
l[2] = "p"
print(l)
l[1:3] = ["d", "d"]
print(l)
```
Insert an element at a specific index using `insert`:
```
l.insert(0, "i")
l.insert(1, "n")
l.insert(2, "s")
l.insert(3, "e")
l.insert(4, "r")
l.insert(5, "t")
print(l)
```
Remove the first element with a specific value using `remove`:
```
l.remove("A")
print(l)
```
Remove an element at a specific location using `del`:
```
del l[7]
del l[6]
print(l)
```
See `help(list)` for more details, or read the online documentation
### Tuples
Tuples are like lists, except that they cannot be modified once created, that is they are *immutable*.
In Python, tuples are created using the syntax `(..., ..., ...)`, or even `..., ...`:
```
point = (10, 20)
print(point, type(point))
point = 10, 20
print(point, type(point))
```
We can unpack a tuple by assigning it to a comma-separated list of variables:
```
x, y = point
print("x =", x)
print("y =", y)
```
If we try to assign a new value to an element in a tuple we get an error:
```
point[0] = 20
```
### Dictionaries
Dictionaries are also like lists, except that each element is a key-value pair. The syntax for dictionaries is `{key1 : value1, ...}`:
```
params = {"parameter1" : 1.0,
"parameter2" : 2.0,
"parameter3" : 3.0,}
print(type(params))
print(params)
print("parameter1 = " + str(params["parameter1"]))
print("parameter2 = " + str(params["parameter2"]))
print("parameter3 = " + str(params["parameter3"]))
params["parameter1"] = "A"
params["parameter2"] = "B"
# add a new entry
params["parameter4"] = "D"
print("parameter1 = " + str(params["parameter1"]))
print("parameter2 = " + str(params["parameter2"]))
print("parameter3 = " + str(params["parameter3"]))
print("parameter4 = " + str(params["parameter4"]))
```
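A few more common dictionary operations, as a quick sketch:

```
params = {"parameter1": 1.0, "parameter2": 2.0}
print("parameter1" in params)         # True: test for key membership
print(params.get("parameter9", 0.0))  # 0.0: default returned for a missing key
print(sorted(params.keys()))          # ['parameter1', 'parameter2']
```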
## Control Flow
### Conditional statements: if, elif, else
The Python syntax for conditional execution of code use the keywords `if`, `elif` (else if), `else`:
```
statement1 = False
statement2 = False
if statement1:
print("statement1 is True")
elif statement2:
print("statement2 is True")
else:
print("statement1 and statement2 are False")
```
For the first time, here we encounter a peculiar aspect of the Python programming language: program blocks are defined by their indentation level.
Compare to the equivalent C code:
if (statement1)
{
printf("statement1 is True\n");
}
else if (statement2)
{
printf("statement2 is True\n");
}
else
{
printf("statement1 and statement2 are False\n");
}
In C, blocks are defined by the enclosing curly brackets `{` and `}`, and the level of indentation (white space before the code statements) does not matter at all.
But in Python, the extent of a code block is defined by its indentation level (usually a tab or four white spaces). This means that we have to be careful to indent our code correctly, or else we will get syntax errors.
#### Examples:
```
statement1 = statement2 = True
if statement1:
if statement2:
print("both statement1 and statement2 are True")
# Bad indentation!
if statement1:
if statement2:
print("both statement1 and statement2 are True") # this line is not properly indented
statement1 = False
if statement1:
print("printed if statement1 is True")
print("still inside the if block")
if statement1:
print("printed if statement1 is True")
print("now outside the if block")
```
## Loops
In Python, loops can be programmed in a number of different ways. The most common is the `for` loop, which is used together with iterable objects, such as lists. The basic syntax is:
### **`for` loops**:
```
for x in [1,2,3]:
print(x)
```
The `for` loop iterates over the elements of the supplied list, and executes the containing block once for each element. Any kind of iterable (lists, strings, ranges, etc.) can be used in the `for` loop. For example:
```
for x in range(4): # by default range starts at 0
print(x)
```
Note: `range(4)` does not include 4 !
```
for x in range(-3,3):
print(x)
for word in ["scientific", "computing", "with", "python"]:
print(word)
```
To iterate over key-value pairs of a dictionary:
```
for key in params:
print(key + " = " + str(params[key]))
```
Sometimes it is useful to have access to the indices of the values when iterating over a list. We can use the `enumerate` function for this:
```
for idx, x in enumerate(range(-3,3)):
print(idx, x)
```
### List comprehensions: Creating lists using `for` loops:
A convenient and compact way to initialize lists:
```
l1 = [x**2 for x in range(0,5)]
print(l1)
```
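A comprehension can also include a condition to filter the elements:

```
even_squares = [x**2 for x in range(0, 10) if x % 2 == 0]
print(even_squares)  # [0, 4, 16, 36, 64]
```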
### `while` loops:
```
i = 0
while i < 5:
print(i)
i = i + 1
print("done")
```
Note that the `print("done")` statement is not part of the `while` loop body because of the difference in indentation.
## Functions
A function in Python is defined using the keyword `def`, followed by a function name, a signature within parentheses `()`, and a colon `:`. The following code, with one additional level of indentation, is the function body.
```
def func0():
print("test")
func0()
```
Optionally, but highly recommended, we can define a so-called "docstring", which is a description of the function's purpose and behavior. The docstring should follow directly after the function definition, before the code in the function body.
```
def func1(s):
"""
Print a string 's' and tell how many characters it has
"""
print(s + " has " + str(len(s)) + " characters")
help(func1)
func1("test")
```
Functions that return a value use the `return` keyword:
```
def square(x):
"""
Return the square of x.
"""
return x ** 2
square(4)
```
We can return multiple values from a function using tuples (see above):
```
def powers(x):
"""
Return a few powers of x.
"""
return x ** 2, x ** 3, x ** 4
powers(3)
x2, x3, x4 = powers(3)
print(x3)
```
### Default argument and keyword arguments
In a definition of a function, we can give default values to the arguments the function takes:
```
def myfunc(x, p=2, debug=False):
if debug:
print("evaluating myfunc for x = " + str(x) + " using exponent p = " + str(p))
return x**p
```
If we don't provide a value for the `debug` argument when calling the function `myfunc`, it defaults to the value provided in the function definition:
```
myfunc(5)
myfunc(5, debug=True)
```
If we explicitly list the names of the arguments in the function calls, they do not need to come in the same order as in the function definition. These are called *keyword* arguments, and they are often very useful in functions that take a lot of optional arguments.
```
myfunc(p=3, debug=True, x=7)
```
### Unnamed functions (lambda function)
In Python we can also create unnamed functions, using the `lambda` keyword:
```
f1 = lambda x: x**2
# is equivalent to
def f2(x):
return x**2
f1(2), f2(2)
```
This technique is useful for example when we want to pass a simple function as an argument to another function, like this:
```
# map is a built-in python function
# In Python 2 map returns a list; in Python 3 it returns an iterator (printing it shows the map object, not the values)
map(lambda x: x**2, range(-3,4))
# in python 3 we can use `list(...)` to convert the iterator to an explicit list
list(map(lambda x: x**2, range(-3,4)))
```
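Two more common uses of `lambda`, as a quick sketch — filtering a sequence and supplying a sort key:

```
# filter keeps the elements for which the function returns True
multiples_of_three = list(filter(lambda x: x % 3 == 0, range(-3, 4)))
print(multiples_of_three)  # [-3, 0, 3]
# a lambda is also handy as a sort key
by_length = sorted(["scientific", "computing", "with", "python"], key=lambda w: len(w))
print(by_length)  # ['with', 'python', 'computing', 'scientific']
```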
## Classes
Classes are the key features of object-oriented programming. A class is a structure for representing an object and the operations that can be performed on the object.
In Python a class can contain *attributes* (variables) and *methods* (functions).
A class is defined almost like a function, but using the `class` keyword, and the class definition usually contains a number of class method definitions (a function in a class).
* Each class method should have an argument `self` as its first argument. This object is a self-reference.
* Some class method names have special meaning, for example:
* `__init__`: The name of the method that is invoked when the object is first created.
* `__str__` : A method that is invoked when a simple string representation of the class is needed, as for example when printed.
* There are many more, see http://docs.python.org/2/reference/datamodel.html#special-method-names
```
class Point:
"""
Simple class for representing a point in a Cartesian coordinate system.
"""
def __init__(self, x, y):
"""
Create a new Point at x, y.
"""
self.x = x
self.y = y
def translate(self, dx, dy):
"""
Translate the point by dx and dy in the x and y direction.
"""
self.x += dx
self.y += dy
def __str__(self):
return("Point at [%f, %f]" % (self.x, self.y))
```
To create a new instance of a class:
```
p1 = Point(0, 0) # this will invoke the __init__ method in the Point class
print(p1) # this will invoke the __str__ method
```
To invoke a class method in the class instance `p`:
```
p2 = Point(1, 1)
p1.translate(0.25, 1.5)
print(p1)
print(p2)
```
Note that calling class methods can modify the state of that particular class instance, but does not affect other class instances or any global variables.
That is one of the nice things about object-oriented design: code such as functions and related variables are grouped in separate and independent entities.
## Modules
One of the most important concepts in good programming is to reuse code and avoid repetitions.
The idea is to write functions and classes with a well-defined purpose and scope, and reuse these instead of repeating similar code in different parts of a program (modular programming). The result is usually that the readability and maintainability of a program are greatly improved. What this means in practice is that our programs have fewer bugs, and are easier to extend and debug/troubleshoot.
Python supports modular programming at different levels. Functions and classes are examples of tools for low-level modular programming. Python modules are a higher-level modular programming construct, where we can collect related variables, functions and classes in a module. A Python module is defined in a Python file (with file ending `.py`), and it can be made accessible to other Python modules and programs using the `import` statement.
Consider the following example: the file `mymodule.py` contains simple example implementations of a variable, a function, and a class:
```
%%file mymodule.py
"""
Example of a python module. Contains a variable called my_variable,
a function called my_function, and a class called MyClass.
"""
my_variable = 0
def my_function():
"""
Example function
"""
return my_variable
class MyClass:
"""
Example class.
"""
def __init__(self):
self.variable = my_variable
def set_variable(self, new_value):
"""
Set self.variable to a new value
"""
self.variable = new_value
def get_variable(self):
return self.variable
```
We can import the module `mymodule` into our Python program using `import`:
```
import mymodule
```
Use `help(module)` to get a summary of what the module provides:
```
help(mymodule)
mymodule.my_variable
mymodule.my_function()
my_class = mymodule.MyClass()
my_class.set_variable(10)
my_class.get_variable()
```
If we make changes to the code in `mymodule.py`, we need to reload it using `reload`:
```
#in python3 reload has been moved to importlib
from importlib import reload
reload(mymodule)
```
## Exceptions
In Python, errors are managed with a special language construct called "exceptions". When an error occurs, an exception can be raised; this interrupts the normal program flow and transfers control to the closest enclosing try-except statement.
To generate an exception we can use the `raise` statement, which takes an argument that must be an instance of the class `BaseException` or a class derived from it.
```
raise Exception("description of the error")
```
A typical use of exceptions is to abort functions when some error condition occurs, for example:
```
def my_function(arguments):
    if not verify(arguments):
        raise Exception("Invalid arguments")
    # rest of the code goes here
```
To gracefully catch errors that are generated by functions and class methods, or by the Python interpreter itself, use the `try` and `except` statements:
```
try:
    # normal code goes here
except:
    # code for error handling goes here
    # this code is not executed unless the code
    # above generated an error
```
For example:
```
try:
print("test")
# generate an error: the variable test is not defined
print(test)
except:
print("Caught an exception")
```
To get information about the error, we can access the `Exception` class instance that describes the exception, for example by using `except Exception as e`:
```
try:
print("test")
# generate an error: the variable test is not defined
print(test)
except Exception as e:
print("Caught an exception:" + str(e))
```
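In addition to `except`, the try statement supports optional `else` and `finally` clauses: `else` runs only when the try block raised no exception, and `finally` always runs, which makes it the natural place for cleanup code. A small sketch:

```python
def safe_divide(a, b):
    """Divide a by b, returning None on division by zero."""
    try:
        result = a / b
    except ZeroDivisionError as e:
        print("Caught an exception: " + str(e))
        result = None
    else:
        # runs only if no exception occurred in the try block
        print("Division succeeded")
    finally:
        # always runs, whether or not an exception occurred
        print("Done")
    return result

print(safe_divide(1, 2))  # 0.5
print(safe_divide(1, 0))  # None
```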
## Further reading
* http://www.python.org - The official web page of the Python programming language.
* http://www.python.org/dev/peps/pep-0008 - Style guide for Python programming. Highly recommended.
* http://www.greenteapress.com/thinkpython/ - A free book on Python programming.
* [Python Essential Reference](http://www.amazon.com/Python-Essential-Reference-4th-Edition/dp/0672329786) - A good reference book on Python programming.
# Exercises
1- Define a function max_of_three() that takes three numbers as arguments and returns the largest of them.
2- Define a function sum() and a function multiply() that sums and multiplies (respectively) all the numbers in a list of numbers. For example, sum([1, 2, 3, 4]) should return 10, and multiply([1, 2, 3, 4]) should return 24.
3- Define a function is_palindrome() that recognizes palindromes (i.e. words that look the same written backwards). For example, is_palindrome("radar") should return True.
4- Define a function factorial() that calculates the factorial of a number.
5- Define a function matrix() that creates a matrix with N rows and M columns.
6- Create a matrix $a$ with 5 rows and 10 columns and fill the positions of the matrix such that $a_{ij} = i^2 + j^2$
7- Create a matrix $b$ with 10 rows and 3 columns and fill the positions of the matrix such that $b_{ij} = i + j$
8- Create a function dot() that calculates the dot product between the matrices a and b.
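As a sketch of what solutions might look like (one of many possible approaches; the function names follow the exercise statements), exercises 1 and 3 could be solved as:

```python
def max_of_three(a, b, c):
    """Exercise 1: return the largest of three numbers."""
    largest = a
    if b > largest:
        largest = b
    if c > largest:
        largest = c
    return largest

def is_palindrome(word):
    """Exercise 3: return True if word reads the same backwards."""
    return word == word[::-1]

print(max_of_three(3, 7, 5))   # 7
print(is_palindrome("radar"))  # True
```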
## ROC Analysis to Determine the Optimum Threshold for NHISS
In this notebook, the ROC curve is analyzed to determine the optimum NHISS threshold for classifying INP-forming and non-forming drugs. We included all experimental data gathered so far (N=60).
Our criterion for the optimum threshold value is the point that maximizes the sum of sensitivity and specificity.
Since sensitivity + specificity = true positive rate + (1 - false positive rate), maximizing this sum is equivalent to maximizing the true positive rate minus the false positive rate (Youden's J statistic).
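To make the criterion concrete, here is a tiny worked example with made-up ROC points (not the experimental data): the chosen threshold is the one that maximizes tpr - fpr.

```python
import numpy as np

# hypothetical ROC points, not the experimental NHISS results
fpr = np.array([0.0, 0.1, 0.3, 0.6, 1.0])
tpr = np.array([0.0, 0.6, 0.8, 0.9, 1.0])
thresholds = np.array([1.0, 0.8, 0.5, 0.3, 0.1])

# sensitivity + specificity = tpr + (1 - fpr), so maximizing the sum
# is the same as maximizing Youden's J = tpr - fpr
youden_j = tpr - fpr
best = int(np.argmax(youden_j))
print(thresholds[best])  # 0.8
```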
```
from __future__ import print_function, division  # __future__ imports must come first in the cell
import pandas as pd
import numpy as np
import os
import re
import matplotlib.pyplot as plt
%matplotlib inline
```
#### Import experimental data and NHISS values
```
df_molecules = pd.read_csv("df_molecules.csv")
df_molecules.head()
df_molecules["Experimental Category"] = None
# .loc replaces the deprecated .ix indexer
for i in range(len(df_molecules)):
    if df_molecules.loc[i, "Experimental INP Formation"] == "Yes":
        df_molecules.loc[i, "Experimental Category"] = 1
    else:
        df_molecules.loc[i, "Experimental Category"] = 0
df_molecules.head()
```
### 1. NHISS Logistic Regression
```
# independent variable
X_train = df_molecules["NHISS"].values.reshape(-1, 1)
# dependent classification
y_train = np.array(df_molecules.loc[:, "Experimental Category"], dtype=int)
# train a logistic regression model on the training set
from sklearn.linear_model import LogisticRegression
logreg = LogisticRegression()
logreg.fit(X_train, y_train)
# make class predictions for the training set
y_pred_class = logreg.predict(X_train)
# print first 35 observations and prediction classes
print('True:', y_train[0:35])
print('Pred:', y_pred_class[0:35])
# store the predicted probabilities for class 1
y_pred_prob = logreg.predict_proba(X_train)[:, 1]
y_pred_prob
```
### 2. NHISS ROC Curve Analysis for threshold determination
```
from sklearn import metrics
# IMPORTANT: first argument is true values, second argument is predicted probabilities
fpr, tpr, thresholds = metrics.roc_curve(y_train, y_pred_prob)
plt.plot(fpr, tpr)
plt.xlim([-0.01, 1.01])
plt.ylim([-0.01, 1.01])
plt.title('ROC curve for the INP formation classifier')
plt.xlabel('False Positive Rate (1 - Specificity)')
plt.ylabel('True Positive Rate (Sensitivity)')
plt.grid(True)
print("TPR:", tpr)
print("FPR:", fpr)
print("Thresholds:", thresholds)
# Optimum threshold criteria is maximum specificity and sensitivity.
sum_of_specificity_and_sensitivity = np.full(thresholds.size, np.nan)
for i, threshold in enumerate(np.nditer(thresholds)):
# print(i, threshold)
sensitivity = tpr[i]
specificity = 1 - fpr[i]
sum_of_specificity_and_sensitivity[i] = sensitivity + specificity
# Determine optimum threshold based on maximum specificity and sensitivity
optimum_threshold_probability = thresholds[sum_of_specificity_and_sensitivity == max(sum_of_specificity_and_sensitivity)]
print("Optimum threshold probability: ", optimum_threshold_probability[0])
# Let's find the NHISS threshold value that corresponds to threshold probability
NHISS_range = np.arange(min(X_train)[0], max(X_train)[0], 0.01)
for NHISS in NHISS_range:
    probability = logreg.predict_proba([[NHISS]])[0][1]  # predict_proba expects a 2D array
if probability >= optimum_threshold_probability:
print("NHISS threshold: ", NHISS)
break
```
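Instead of scanning a grid of NHISS values, the threshold can also be recovered in closed form: for a one-feature logistic model, $p = 1/(1+e^{-(b_0 + b_1 x)})$, so the feature value at which the predicted probability equals $p^*$ is $x = (\mathrm{logit}(p^*) - b_0)/b_1$. A sketch with made-up coefficients (in the notebook these would come from `logreg.intercept_[0]` and `logreg.coef_[0][0]`):

```python
import math

def nhiss_at_probability(intercept, coef, p):
    """Invert a one-feature logistic model: the x where sigmoid(intercept + coef*x) == p."""
    return (math.log(p / (1 - p)) - intercept) / coef

# hypothetical fitted parameters, NOT the coefficients of the real logreg model
intercept, coef = -3.0, 0.5
print(nhiss_at_probability(intercept, coef, 0.5))  # 6.0, since logit(0.5) == 0
```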
```
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
import itertools as it
#from evalutils import *
#from lccv import lccv
from tqdm.notebook import tqdm
plt.rc('font', **{'family': 'serif', 'serif': ['Computer Modern']})
plt.rc('text', usetex=True)
```
# Reading in Results and Checking Completeness
```
dfResults = pd.read_csv("results.csv")
dfResults
def plot_result_availability(dfResults):
algorithms = ["5cv", "80lccv-flex", "10cv", "90lccv-flex"]
datasets = sorted(list(pd.unique(dfResults["openmlid"])))
Z = np.zeros((len(algorithms), len(datasets)))
for i, algorithm in enumerate(algorithms):
for j, openmlid in enumerate(datasets):
dfProjected = dfResults[(dfResults["algorithm"] == algorithm) & (dfResults["openmlid"] == openmlid)]
dfProjected = dfProjected[dfProjected["exception"].isna()]
Z[i,j] = len(dfProjected)
fig, ax = plt.subplots(1, 1, figsize=(20, 5))
ax.imshow(-Z, cmap="seismic", vmax=0, vmin=-10)
ax.set_xticks(range(len(datasets)))
ax.set_xticklabels(datasets, rotation=90)
ax.set_yticks(range(len(algorithms)))
ax.set_yticklabels(algorithms)
plt.show()
plot_result_availability(dfResults)
dfResults80 = dfResults[dfResults["algorithm"].isin(["80lccv-flex", "5cv"]) & (dfResults["exception"].isna())]
dfResults90 = dfResults[dfResults["algorithm"].isin(["90lccv-flex", "10cv"]) & (dfResults["exception"].isna())]
dfResults80.query("algorithm == '5cv' and openmlid == 1111")
```
# How many cases of performance loss
```
def create_deviation_list(dfResults):
# absolute runtimes
algorithms = ["5cv", "80lccv-flex", "10cv", "90lccv-flex"]
algorithm_names = {"5cv": "5CV", "80lccv-flex": "80LCCV", "10cv": "10CV", "90lccv-flex": "90LCCV"}
datasets = pd.unique(dfResults["openmlid"])
# pairwise comparisons
deviations_time = []
deviations_error = []
for algo_pair in [("5cv", "80lccv-flex"), ("10cv", "90lccv-flex")]:
reductions_abs_pair = []
reductions_rel_pair = []
performance_diffs_pair = []
for key, dfForKey in dfResults[dfResults["algorithm"].isin(list(algo_pair))].groupby(["openmlid", "seed"]):
if len(dfForKey) == 2:
# compute comparison
row_baseline = dfForKey[dfForKey["algorithm"] == algo_pair[0]]
row_lccv = dfForKey[dfForKey["algorithm"] == algo_pair[1]]
runtime_baseline = row_baseline["runtime"].values
runtime_lccv = row_lccv["runtime"].values
score_baseline = row_baseline["errorrate"].values
score_lccv = row_lccv["errorrate"].values
# check undesired cases
if runtime_baseline < runtime_lccv:
deviations_time.append([key[0], key[1], row_baseline["algorithm"].values[0], row_lccv["algorithm"].values[0], row_baseline["runtime"].values[0], row_lccv["runtime"].values[0]])
if score_baseline < score_lccv:
deviations_error.append([key[0], key[1], row_baseline["algorithm"].values[0], row_lccv["algorithm"].values[0], row_baseline["errorrate"].values[0], row_lccv["errorrate"].values[0]])
cols_time = ["openmlid", "seed", "baseline", "lccv", "runtime baseline", "runtime lccv"]
cols_err = ["openmlid", "seed", "baseline", "lccv", "error rate baseline", "error rate lccv"]
return (pd.DataFrame(deviations_time, columns=cols_time), pd.DataFrame(deviations_error, columns=cols_err))
dfDeviationsTime, dfDeviationsScore = create_deviation_list(dfResults)
dfDeviationsScore = dfDeviationsScore[dfDeviationsScore["error rate baseline"] < dfDeviationsScore["error rate lccv"] - 0.005]
dfDeviationsScore.query("openmlid == 188")
for (openmlid, baseline), dfDeviationsScoreOnDataset in dfDeviationsScore.groupby(["openmlid", "baseline"]):
if np.mean(dfDeviationsScoreOnDataset["error rate lccv"] - dfDeviationsScoreOnDataset["error rate baseline"]) > 0.02:
print(f"{openmlid} ({baseline}): {len(dfDeviationsScoreOnDataset)}")
```
# Find out "Expensive" Datasets
```
expensive_datasets = []
for openmlid, dfDataset in dfResults80.groupby("openmlid"):
if np.mean(dfDataset[dfDataset["algorithm"] == "80lccv-flex"]["runtime"]) > 3600 * 6:
expensive_datasets.append(openmlid)
print(f"There are {len(expensive_datasets)} expensive datasets: {expensive_datasets}")
```
# Comparison Plots
```
datasets_dense = [1485, 1515, 1475, 1468, 1489, 23512, 23517, 40981, 40982, 40983, 40984, 40701, 40685, 40900, 1111, 40498, 41161, 41162, 41163, 41164, 41165, 41166, 41167, 41168, 41169, 41142, 41143, 41144, 41145, 41146, 41150, 41156, 41157, 41158, 41159, 41138, 54, 181, 188, 1461, 1494, 1464, 12, 23, 3, 1487, 40668, 1067, 1049, 40975, 31]
#1457
datasets_sparse = [1590, 1486, 4534, 4541, 4538, 4134, 4135, 40978, 40996, 41027, 40670, 42732, 42733, 42734, 41147]
datasets = datasets_dense + datasets_sparse
def create_summary_scatterplot(dfResults, suffix):
fig, ax = plt.subplots(figsize=(15, 15))
i = 0
for openmlid, dfDataset in dfResults.groupby("openmlid"):
dfLCCV = dfDataset[dfDataset["algorithm"].isin(["80lccv-flex", "90lccv-flex"])]
dfBaseline = dfDataset[dfDataset["algorithm"].isin(["5cv", "10cv"])]
lccv = [dfLCCV["errorrate"], np.mean(dfLCCV["runtime"])]
mccv = [dfBaseline["errorrate"], np.mean(dfBaseline["runtime"])]
mccv_has_result = not all(np.isnan(mccv[0]))
lccv_name = list(pd.unique(dfLCCV["algorithm"]))[0][:-5].upper()
base_name = "5CV" if lccv_name == "80LCCV" else "10CV"
lccv_mean_error = np.median(lccv[0])
mccv_mean_error = np.median(mccv[0])
ax.scatter(lccv[1], lccv_mean_error, color="C0", label=lccv_name if i == 0 else None)
ax.plot([lccv[1], lccv[1]], [np.percentile(lccv[0], 20), np.percentile(lccv[0], 80)], color="C0", linewidth=1, alpha=0.5)
if mccv_has_result:
ax.scatter(mccv[1], mccv_mean_error, color="C1", label=base_name if i == 0 else None)
ax.plot([mccv[1], mccv[1]], [np.percentile(mccv[0], 20), np.percentile(mccv[0], 80)], color="C1", linewidth=1, alpha=0.5) # vertical line for range
ax.plot([lccv[1], mccv[1]], [lccv_mean_error, mccv_mean_error], color="green" if lccv[1] < mccv[1] else "red", linestyle="--", linewidth=1) # connecting line
ax.text((lccv[1] + mccv[1]) / 2.1, (lccv_mean_error + mccv_mean_error) / 2, int(openmlid))
avg_time_saving = int((mccv[1] - lccv[1]) / 60)
avg_reduction = np.round((1 - lccv[1] / mccv[1]) * 100)
ax.text((lccv[1] + mccv[1]) / 2.5, (lccv_mean_error + mccv_mean_error) / 2 - 0.01, str(avg_time_saving) + "m (" + str(avg_reduction) + "%)")
else:
ax.text(lccv[1], lccv_mean_error, int(openmlid))
i += 1
ax.set_xlabel("Runtime (s)")
ax.set_ylabel("Error Rate")
for y in np.linspace(0, 0.7, 71):
ax.axhline(y, alpha=0.05, color="black")
ax.axvline(1800, linestyle="--", color="black", linewidth=1)
ax.axvline(3600, linestyle="--", color="black", linewidth=1)
ax.axvline(36000, linestyle="--", color="black", linewidth=1)
ax.set_ylim([0, 0.7])
ax.set_xlim([5 * 10**2, 10**5])
ax.set_xscale("log")
ax.legend()
fig.tight_layout()
fig.savefig(f"plots/results-randomsearch-scatter-{suffix}.pdf")
plt.show()
def create_summary_boxplots(dfResults, ax = None):
if ax is None:
fig, ax = plt.subplots(1, 4, figsize=(8, 3), gridspec_kw={'width_ratios': [1.2, 1, 1, 1]})
else:
fig = None
# absolute runtimes
algorithms = ["5cv", "80lccv-flex", "10cv", "90lccv-flex"]
algorithm_names = {"5cv": "5CV", "80lccv-flex": "80LCCV", "10cv": "10CV", "90lccv-flex": "90LCCV"}
datasets = pd.unique(dfResults["openmlid"])
ax[0].boxplot([dfResults[dfResults["algorithm"] == a]["runtime"].values / 60 for a in algorithms])
ax[0].set_title("Absolute\nRuntimes (m)")
ax[0].set_ylim([0, 60 * 24])
ax[0].set_xticks(range(1, 5))
ax[0].set_xticklabels([algorithm_names[a] for a in algorithms], rotation=45)
# pairwise comparisons
reductions_abs = []
reductions_rel = []
performance_diffs = []
for algo_pair in [("5cv", "80lccv-flex"), ("10cv", "90lccv-flex")]:
reductions_abs_pair = []
reductions_rel_pair = []
performance_diffs_pair = []
for openmlid, dfDataset in dfResults[dfResults["algorithm"].isin(list(algo_pair))].groupby("openmlid"):
runtime_baseline = np.mean(dfDataset[dfDataset["algorithm"] == algo_pair[0]]["runtime"])
runtime_lccv = np.mean(dfDataset[dfDataset["algorithm"] == algo_pair[1]]["runtime"])
score_baseline = np.median(dfDataset[dfDataset["algorithm"] == algo_pair[0]]["errorrate"])
        score_lccv = np.median(dfDataset[dfDataset["algorithm"] == algo_pair[1]]["errorrate"])
if not np.isnan(runtime_baseline):
reductions_abs_pair.append(runtime_baseline - runtime_lccv)
reductions_rel_pair.append(runtime_lccv / runtime_baseline)
performance_diffs_pair.append(score_lccv - score_baseline)
reductions_abs.append(reductions_abs_pair)
reductions_rel.append(reductions_rel_pair)
performance_diffs.append(performance_diffs_pair)
for i, (name, reductions) in enumerate([("Absolute\nReduction (m)", reductions_abs), ("Reduction\nRatio (\%)", reductions_rel)], 1):
reductions = np.array(reductions).T
if i == 1:
reductions /= 60
print(f"Median of absolute reduction is {np.median(reductions[:,0])} for 80LCCV and {np.median(reductions[:,1])} for 90LCCV")
print(f"Mean of absolute reduction is {np.mean(reductions[:,0])} for 80LCCV and {np.mean(reductions[:,1])} for 90LCCV")
else:
print(f"Median of relative reduction is {np.median(reductions[:,0])} for 80LCCV and {np.median(reductions[:,1])} for 90LCCV")
print(f"Mean of relative reduction is {np.mean(reductions[:,0])} for 80LCCV and {np.mean(reductions[:,1])} for 90LCCV")
ax[i].violinplot(reductions, showmedians=True)
ax[i].set_title(name)
ax[i].axhline(np.mean(reductions[:,0]), linestyle="--", linewidth=1, color="black", xmin=0.1, xmax=.3)
ax[i].axhline(np.mean(reductions[:,1]), linestyle="--", linewidth=1, color="black", xmin=.7, xmax=.9)
# info on percentage of observations with runtime reductions of at least 50%
print(np.count_nonzero(np.array(reductions_rel) <= 0.5, axis=1) / len(reductions_rel[1]))
print(np.mean(np.array(reductions_abs), axis=1)/3600)
# deviations in performance
ax[3].boxplot(performance_diffs)
ax[3].set_title("Absolute\nPerformance Diff.")
for threshold in [0.005, 0.01, 0.015]:
for i, diffs in enumerate(performance_diffs):
print(f"{np.round(100 * np.count_nonzero(np.array(diffs) <= threshold) / len(diffs), 2)}% of the observed error rates of lccv deviate by at most {threshold} from the {'5CV' if i == 0 else '10CV'} baseline.")
# set x labels for comparative plots
for i in range(1, 4):
ax[i].set_xticks(range(1, 3))
ax[i].set_xticklabels(["80\%", "90\%"])
ax[i].set_xlabel("Max Tr. Port.")
if fig is not None:
fig.tight_layout()
fig.savefig("plots/results-randomsearch-boxplots.pdf")
plt.show()
for algorithm in pd.unique(dfResults["algorithm"]):
print(f"Avg. Total Runtime of {algorithm}: {np.nanmean([np.nanmean(group['runtime']) for openmlid, group in dfResults[dfResults['algorithm'] == algorithm].groupby('openmlid')])}")
fig, ax = plt.subplots(2, 4, figsize=(6, 6), gridspec_kw={'width_ratios': [1, 1, 1, .6]})
create_summary_boxplots(dfResults[dfResults["exception"].isna()], ax[0])
create_summary_boxplots(dfResults[(dfResults["exception"].isna()) & (dfResults["openmlid"].isin(expensive_datasets))], ax[1])
fig.tight_layout()
fig.subplots_adjust(wspace = .5)
fig.savefig("plots/results-randomsearch-boxplots.pdf")
plt.show()
create_summary_scatterplot(dfResults80, "80")
create_summary_scatterplot(dfResults90, "90")
def get_latex_table(dfResults):
rows = []
for openmlid in tqdm(sorted(pd.unique(dfResults["openmlid"]))):
df_ds = dfResults[dfResults["openmlid"] == openmlid]
mask_lccv = df_ds["algorithm"].isin(["80lccv-flex", "90lccv-flex"])
df_lccv = df_ds[mask_lccv]
df_10cv = df_ds[~mask_lccv]
performance_lccv = str(np.round(np.mean(df_lccv["errorrate"]), 2)) + "$\pm$" + '{:04.2f}'.format(np.std(df_lccv["errorrate"]), 2)
runtime_lccv = str(int(np.round(np.mean(df_lccv["runtime"])))) + "$\pm$" + str(int(np.round(np.std(df_lccv["runtime"]))))
# get results for baseline
if len(df_10cv) > 0:
performance_10cv = str(np.round(np.mean(df_10cv["errorrate"]), 2)) + "$\pm$" + '{:04.2f}'.format(np.std(df_10cv["errorrate"]), 2)
runtime_10cv = str(int(np.round(np.mean(df_10cv["runtime"])))) + "$\pm$" + str(int(np.round(np.std(df_10cv["runtime"]))))
else:
performance_10cv, runtime_10cv = np.nan, 86400
rows.append([openmlid, performance_lccv, runtime_lccv, performance_10cv, runtime_10cv])
columns = ["openmlid", "LCCV Performance", "LCCV Runtime", "Baseline CV Perf", "Baseline CV Runtime"]
return pd.DataFrame(rows, columns=columns).to_latex(index=False, escape=False)
print(get_latex_table(dfResults80))
print(get_latex_table(dfResults90))
```
```
import warnings
warnings.simplefilter("ignore", category=DeprecationWarning)
warnings.filterwarnings('ignore')
```
# Parts of Speech (POS) Tagging
```
sentence = "US unveils world's most powerful supercomputer, beats China."
import pandas as pd
import spacy
nlp = spacy.load('en_core_web_sm')  # small English pipeline with tagger, parser, and NER
sentence_nlp = nlp(sentence)
# POS tagging with Spacy
spacy_pos_tagged = [(word, word.tag_, word.pos_) for word in sentence_nlp]
pd.DataFrame(spacy_pos_tagged, columns=['Word', 'POS tag', 'Tag type']).T
# POS tagging with nltk
import nltk
nltk_pos_tagged = nltk.pos_tag(nltk.word_tokenize(sentence))
pd.DataFrame(nltk_pos_tagged, columns=['Word', 'POS tag']).T
from nltk.corpus import treebank
data = treebank.tagged_sents()
train_data = data[:3500]
test_data = data[3500:]
print(train_data[0])
# default tagger
from nltk.tag import DefaultTagger
dt = DefaultTagger('NN')
# accuracy on test data
dt.evaluate(test_data)
# tagging our sample headline
dt.tag(nltk.word_tokenize(sentence))
# regex tagger
from nltk.tag import RegexpTagger
# define regex tag patterns
patterns = [
(r'.*ing$', 'VBG'), # gerunds
(r'.*ed$', 'VBD'), # simple past
(r'.*es$', 'VBZ'), # 3rd singular present
(r'.*ould$', 'MD'), # modals
(r'.*\'s$', 'NN$'), # possessive nouns
(r'.*s$', 'NNS'), # plural nouns
    (r'^-?[0-9]+(\.[0-9]+)?$', 'CD'), # cardinal numbers
(r'.*', 'NN') # nouns (default) ...
]
rt = RegexpTagger(patterns)
# accuracy on test data
rt.evaluate(test_data)
# tagging our sample headline
rt.tag(nltk.word_tokenize(sentence))
## N gram taggers
from nltk.tag import UnigramTagger
from nltk.tag import BigramTagger
from nltk.tag import TrigramTagger
ut = UnigramTagger(train_data)
bt = BigramTagger(train_data)
tt = TrigramTagger(train_data)
# testing performance of unigram tagger
print(ut.evaluate(test_data))
print(ut.tag(nltk.word_tokenize(sentence)))
# testing performance of bigram tagger
print(bt.evaluate(test_data))
print(bt.tag(nltk.word_tokenize(sentence)))
# testing performance of trigram tagger
print(tt.evaluate(test_data))
print(tt.tag(nltk.word_tokenize(sentence)))
def combined_tagger(train_data, taggers, backoff=None):
for tagger in taggers:
backoff = tagger(train_data, backoff=backoff)
return backoff
ct = combined_tagger(train_data=train_data,
taggers=[UnigramTagger, BigramTagger, TrigramTagger],
backoff=rt)
# evaluating the new combined tagger with backoff taggers
print(ct.evaluate(test_data))
print(ct.tag(nltk.word_tokenize(sentence)))
from nltk.classify import NaiveBayesClassifier, MaxentClassifier
from nltk.tag.sequential import ClassifierBasedPOSTagger
nbt = ClassifierBasedPOSTagger(train=train_data,
classifier_builder=NaiveBayesClassifier.train)
# evaluate tagger on test data and sample sentence
print(nbt.evaluate(test_data))
print(nbt.tag(nltk.word_tokenize(sentence)))
# try this out for fun!
#met = ClassifierBasedPOSTagger(train=train_data,
# classifier_builder=MaxentClassifier.train)
#print(met.evaluate(test_data))
#print(met.tag(nltk.word_tokenize(sentence)))
```
# Shallow Parsing or Chunking
```
from nltk.corpus import treebank_chunk
data = treebank_chunk.chunked_sents()
train_data = data[:3500]
test_data = data[3500:]
# view sample data
print(train_data[7])
from nltk.chunk import RegexpParser
# get POS tagged sentence
tagged_simple_sent = nltk.pos_tag(nltk.word_tokenize(sentence))
print('POS Tags:', tagged_simple_sent)
# illustrate NP chunking based on explicit chunk patterns
chunk_grammar = """
NP: {<DT>?<JJ>*<NN.*>}
"""
rc = RegexpParser(chunk_grammar)
c = rc.parse(tagged_simple_sent)
# print and view chunked sentence using chunking
print(c)
c
# illustrate NP chunking based on explicit chink patterns
chink_grammar = """
NP:
{<.*>+} # Chunk everything as NP
}<VBZ|VBD|JJ|IN>+{ # Chink sequences of VBD\VBZ\JJ\IN
"""
rc = RegexpParser(chink_grammar)
c = rc.parse(tagged_simple_sent)
# print and view chunked sentence using chinking
print(c)
c
# create a more generic shallow parser
grammar = """
NP: {<DT>?<JJ>?<NN.*>}
ADJP: {<JJ>}
ADVP: {<RB.*>}
PP: {<IN>}
VP: {<MD>?<VB.*>+}
"""
rc = RegexpParser(grammar)
c = rc.parse(tagged_simple_sent)
# print and view shallow parsed sample sentence
print(c)
c
# Evaluate parser performance on test data
print(rc.evaluate(test_data))
from nltk.chunk.util import tree2conlltags, conlltags2tree
# look at a sample training tagged sentence
train_sent = train_data[7]
print(train_sent)
# get the (word, POS tag, Chunk tag) triples for each token
wtc = tree2conlltags(train_sent)
wtc
# get shallow parsed tree back from the WTC triples
tree = conlltags2tree(wtc)
print(tree)
def conll_tag_chunks(chunk_sents):
tagged_sents = [tree2conlltags(tree) for tree in chunk_sents]
return [[(t, c) for (w, t, c) in sent] for sent in tagged_sents]
def combined_tagger(train_data, taggers, backoff=None):
for tagger in taggers:
backoff = tagger(train_data, backoff=backoff)
return backoff
from nltk.tag import UnigramTagger, BigramTagger
from nltk.chunk import ChunkParserI
class NGramTagChunker(ChunkParserI):
def __init__(self, train_sentences,
tagger_classes=[UnigramTagger, BigramTagger]):
train_sent_tags = conll_tag_chunks(train_sentences)
self.chunk_tagger = combined_tagger(train_sent_tags, tagger_classes)
def parse(self, tagged_sentence):
if not tagged_sentence:
return None
pos_tags = [tag for word, tag in tagged_sentence]
chunk_pos_tags = self.chunk_tagger.tag(pos_tags)
chunk_tags = [chunk_tag for (pos_tag, chunk_tag) in chunk_pos_tags]
wpc_tags = [(word, pos_tag, chunk_tag) for ((word, pos_tag), chunk_tag)
in zip(tagged_sentence, chunk_tags)]
return conlltags2tree(wpc_tags)
# train the shallow parser
ntc = NGramTagChunker(train_data)
# test parser performance on test data
print(ntc.evaluate(test_data))
# parse our sample sentence
sentence_nlp = nlp(sentence)
tagged_sentence = [(word.text, word.tag_) for word in sentence_nlp]
tree = ntc.parse(tagged_sentence)
print(tree)
tree
from nltk.corpus import conll2000
wsj_data = conll2000.chunked_sents()
train_wsj_data = wsj_data[:10000]
test_wsj_data = wsj_data[10000:]
# look at a sample sentence in the corpus
print(train_wsj_data[10])
# train the shallow parser
tc = NGramTagChunker(train_wsj_data)
# test performance on the test data
print(tc.evaluate(test_wsj_data))
# parse our sample sentence
tree = tc.parse(tagged_sentence)
print(tree)
tree
```
# Dependency Parsing
```
dependency_pattern = '{left}<---{word}[{w_type}]--->{right}\n--------'
for token in sentence_nlp:
print(dependency_pattern.format(word=token.orth_,
w_type=token.dep_,
left=[t.orth_
for t
in token.lefts],
right=[t.orth_
for t
in token.rights]))
from spacy import displacy
displacy.render(sentence_nlp, jupyter=True,
options={'distance': 110,
'arrow_stroke': 2,
'arrow_width': 8})
from nltk.parse.stanford import StanfordDependencyParser
sdp = StanfordDependencyParser(path_to_jar='E:/stanford/stanford-parser-full-2015-04-20/stanford-parser.jar',
path_to_models_jar='E:/stanford/stanford-parser-full-2015-04-20/stanford-parser-3.5.2-models.jar')
# perform dependency parsing
result = list(sdp.raw_parse(sentence))[0]
# generate annotated dependency parse tree
result
# generate dependency triples
[item for item in result.triples()]
# print simple dependency parse tree
dep_tree = result.tree()
print(dep_tree)
# visualize simple dependency parse tree
dep_tree
```
# Constituency Parsing
```
# set java path
import os
java_path = r'C:\Program Files\Java\jdk1.8.0_102\bin\java.exe'
os.environ['JAVAHOME'] = java_path
# create parser object
from nltk.parse.stanford import StanfordParser
scp = StanfordParser(path_to_jar='E:/stanford/stanford-parser-full-2015-04-20/stanford-parser.jar',
path_to_models_jar='E:/stanford/stanford-parser-full-2015-04-20/stanford-parser-3.5.2-models.jar')
# get parse tree
result = list(scp.raw_parse(sentence))[0]
# print the constituency parse tree
print(result)
# visualize the parse tree
from IPython.display import display
display(result)
import nltk
from nltk.grammar import Nonterminal
from nltk.corpus import treebank
# load and view training data
training_set = treebank.parsed_sents()
print(training_set[1])
# extract the productions for all annotated training sentences
treebank_productions = list(
set(production
for sent in training_set
for production in sent.productions()
)
)
# view some production rules
treebank_productions[0:10]
# add productions for each word, POS tag
for word, tag in treebank.tagged_words():
t = nltk.Tree.fromstring("("+ tag + " " + word +")")
for production in t.productions():
treebank_productions.append(production)
# build the PCFG based grammar
treebank_grammar = nltk.grammar.induce_pcfg(Nonterminal('S'),
treebank_productions)
# build the parser
viterbi_parser = nltk.ViterbiParser(treebank_grammar)
# get sample sentence tokens
tokens = nltk.word_tokenize(sentence)
# get parse tree for sample sentence
result = list(viterbi_parser.parse(tokens))
# get tokens and their POS tags and check it
tagged_sent = nltk.pos_tag(nltk.word_tokenize(sentence))
print(tagged_sent)
# extend productions for sample sentence tokens
for word, tag in tagged_sent:
t = nltk.Tree.fromstring("("+ tag + " " + word +")")
for production in t.productions():
treebank_productions.append(production)
# rebuild grammar
treebank_grammar = nltk.grammar.induce_pcfg(Nonterminal('S'),
treebank_productions)
# rebuild parser
viterbi_parser = nltk.ViterbiParser(treebank_grammar)
# get parse tree for sample sentence
result = list(viterbi_parser.parse(tokens))[0]
# print parse tree
print(result)
# visualize parse tree
result
```
```
%matplotlib notebook
import control as c
import ipywidgets as w
import numpy as np
from IPython.display import display, HTML
import matplotlib.pyplot as plt
import matplotlib.animation as animation
import matplotlib.gridspec as gridspec
display(HTML('<script> $(document).ready(function() { $("div.input").hide(); }); </script>'))
```
## Negative Feedback Systems
In the following example, we will examine the effects of negative feedback on an LTI (Linear Time-Invariant) system. The output of the original system ($G(s)$) is subtracted from the input signal, creating what is called negative feedback. Optionally, the feedback path can contain its own LTI system ($H(s)$) that modifies the output signal before it returns to the input side.
<img src="Images/NFB.png" width="40%" />
The closed-loop transfer function of the system can be computed as:
$$G_{cl}(s)=\frac{G(s)}{1+H(s)G(s)},$$
where the $+$ sign corresponds to the negative feedback in the figure above; positive-feedback systems are computed with a negative sign instead.<br>
The open-loop system is defined as the system from the input to the feedback point:
$$G_{ol}(s)=H(s)G(s)$$
When analyzing the properties of a negative-feedback system, some of the tests must be performed on the open-loop system.
<br><b>Select the analysis method!</b>
```
# Example mode selector
typeSelect = w.ToggleButtons(
    options=[('Pole-zero map', 0), ('Bode plot', 1), ('Nyquist plot', 2)],
    description='Method: ', layout=w.Layout(width='100%'))
display(typeSelect)
```
<br><b>Assemble the transfer functions $G(s)$ and $H(s)$!</b>
```
b = {}
a = {}
e = {}
f = {}
b[0] = w.FloatText(value=1.0, description='', disabled=False, step=0.1, layout=w.Layout(width='19%'))
b[1] = w.FloatText(value=1.0, description='', disabled=False, step=0.1, layout=w.Layout(width='19%'))
b[2] = w.FloatText(value=0.0, description='', disabled=False, step=0.1, layout=w.Layout(width='19%'))
a[0] = w.FloatText(value=10.0, description='', disabled=False, step=0.1, layout=w.Layout(width='19%'))
a[1] = w.FloatText(value=1.0, description='', disabled=False, step=0.1, layout=w.Layout(width='19%'))
a[2] = w.FloatText(value=0.0, description='', disabled=False, step=0.1, layout=w.Layout(width='19%'))
a[3] = w.FloatText(value=0.0, description='', disabled=False, step=0.1, layout=w.Layout(width='19%'))
f[0] = w.FloatText(value=1.0, description='', disabled=False, step=0.1, layout=w.Layout(width='24%'))
f[1] = w.FloatText(value=1.0, description='', disabled=False, step=0.1, layout=w.Layout(width='24%'))
e[0] = w.FloatText(value=1.0, description='', disabled=False, step=0.1, layout=w.Layout(width='24%'))
e[1] = w.FloatText(value=1.0, description='', disabled=False, step=0.1, layout=w.Layout(width='24%'))
e[2] = w.FloatText(value=0.0, description='', disabled=False, step=0.1, layout=w.Layout(width='24%'))
def transfer_function(a0, a1, a2, a3, b0, b1, b2, e0, e1, e2, f0, f1):
b1c = b1
b2c = b2
f1c = f1
global b, f
if a3 == 0:
b[2].disabled=True
b2c = 0
else:
b[2].disabled=False
if a3 == 0 and a2 == 0:
b[1].disabled=True
b1c = 0
else:
b[1].disabled=False
if e2 == 0:
f[1].disabled=True
f1c = 0
else:
f[1].disabled=False
G = c.tf([b2c, b1c, b0], [a3, a2, a1, a0]) # Feedforward transfer function
H = c.tf([f1c, f0], [e2, e1, e0]) # Feedback transfer function
    print('Transfer function G:')
    print(G)
    print('Transfer function H:')
    print(H)
input_data = w.interactive_output(transfer_function, {'a0':a[0], 'a1':a[1], 'a2':a[2], 'a3':a[3],
'b0':b[0], 'b1':b[1], 'b2':b[2],
'e0':e[0], 'e1':e[1], 'e2':e[2],
'f0':f[0], 'f1':f[1]})
display(w.VBox([
w.HBox([w.VBox([w.Label('$G(s)=$')], layout=w.Layout(justify_content="center", align_items='flex-start')),
w.VBox([w.HBox([b[2], w.Label('$s^2+$'), b[1], w.Label('$s+$'), b[0]],
layout=w.Layout(justify_content='center')),
w.HBox([w.HTML(value='<hr style="border-top: 1px solid black">', layout=w.Layout(width='100%'))],
layout=w.Layout(justify_content='center')),
w.HBox([a[3], w.Label('$s^3+$'), a[2], w.Label('$s^2+$'), a[1], w.Label('$s+$'), a[0]],
layout=w.Layout(justify_content='center')) ],
layout=w.Layout(width='50%'))], layout=w.Layout(justify_content='center') ),
w.HTML(value='<br><br>'),
w.HBox([w.VBox([w.Label('$H(s)=$')], layout=w.Layout(justify_content="center", align_items='flex-start')),
w.VBox([w.HBox([f[1], w.Label('$s+$'), f[0]],
layout=w.Layout(justify_content='center')),
w.HBox([w.HTML(value='<hr style="border-top: 1px solid black">', layout=w.Layout(width='100%'))],
layout=w.Layout(justify_content='center')),
w.HBox([e[2], w.Label('$s^2+$'), e[1], w.Label('$s+$'), e[0]],
layout=w.Layout(justify_content='center')) ],
layout=w.Layout(width='35%'))], layout=w.Layout(justify_content='center') )
]), input_data)
```
Based on these system components, the open-loop and closed-loop models can be computed.
```
def feedback_function(a0, a1, a2, a3, b0, b1, b2, e0, e1, e2, f0, f1):
b1c = b1
b2c = b2
f1c = f1
global b, f
if a3 == 0:
b2c = 0
if a3 == 0 and a2 == 0:
b1c = 0
if e2 == 0:
f1c = 0
G = c.tf([b2c, b1c, b0], [a3, a2, a1, a0]) # Feedforward transfer function
H = c.tf([f1c, f0], [e2, e1, e0]) # Feedback transfer function
Wol = c.series(G, H)
Wcl = c.feedback(G, H, -1)
    print('Open-loop transfer function:')
    print(Wol)
    print('Closed-loop transfer function:')
    print(Wcl)
w.interactive_output(feedback_function, {'a0':a[0], 'a1':a[1], 'a2':a[2], 'a3':a[3],
'b0':b[0], 'b1':b[1], 'b2':b[2],
'e0':e[0], 'e1':e[1], 'e2':e[2],
'f0':f[0], 'f1':f[1]})
```
<b>Observe the differences in the time-domain behavior of the two systems!</b>
```
# Figure definition
fig1, (f1_ax1, f1_ax2) = plt.subplots(2, 1)
fig1.set_size_inches((9.8, 5))
fig1.set_tight_layout(True)
f1_line1, = f1_ax1.plot([], [], lw=1, color='blue')
f1_line2, = f1_ax1.plot([], [], lw=1, color='red')
f1_line3, = f1_ax2.plot([], [], lw=1, color='blue')
f1_line4, = f1_ax2.plot([], [], lw=1, color='red')
f1_ax1.grid(which='both', axis='both', color='lightgray')
f1_ax2.grid(which='both', axis='both', color='lightgray')
f1_ax1.autoscale(enable=True, axis='x', tight=True)
f1_ax2.autoscale(enable=True, axis='x', tight=True)
f1_ax1.autoscale(enable=True, axis='y', tight=False)
f1_ax2.autoscale(enable=True, axis='y', tight=False)
f1_ax1.set_title('Step response', fontsize=11)
f1_ax1.set_xlabel(r'$t\/\/$[s]', labelpad=0, fontsize=10)
f1_ax1.set_ylabel(r'$v\/\/$[/]', labelpad=0, fontsize=10)
f1_ax1.tick_params(axis='both', which='both', pad=0, labelsize=8)
f1_ax2.set_title('Impulse response', fontsize=11)
f1_ax2.set_xlabel(r'$t\/\/$[s]', labelpad=0, fontsize=10)
f1_ax2.set_ylabel(r'$w\/\/$[/]', labelpad=0, fontsize=10)
f1_ax2.tick_params(axis='both', which='both', pad=0, labelsize=8)
f1_ax1.legend([f1_line1, f1_line2], ['Open loop', 'Closed loop'], loc='lower right')
f1_ax2.legend([f1_line3, f1_line4], ['Open loop', 'Closed loop'], loc='upper right')
def time_analysis(a0, a1, a2, a3, b0, b1, b2, e0, e1, e2, f0, f1):
b1c = b1
b2c = b2
f1c = f1
global b, f
if a3 == 0:
b2c = 0
if a3 == 0 and a2 == 0:
b1c = 0
if e2 == 0:
f1c = 0
G = c.tf([b2c, b1c, b0], [a3, a2, a1, a0]) # Feedforward transfer function
H = c.tf([f1c, f0], [e2, e1, e0]) # Feedback transfer function
Wol = c.series(G, H)
Wcl = c.feedback(G, H, -1)
Ts_ol, step_ol = c.step_response(Wol)
Ts_cl, step_cl = c.step_response(Wcl)
Ti_ol, imp_ol = c.impulse_response(Wol)
Ti_cl, imp_cl = c.impulse_response(Wcl)
global f1_line1, f1_line2, f1_line3, f1_line4
f1_ax1.lines.remove(f1_line1)
f1_ax1.lines.remove(f1_line2)
f1_ax2.lines.remove(f1_line3)
f1_ax2.lines.remove(f1_line4)
f1_line1, = f1_ax1.plot(Ts_ol, step_ol, lw=1, color='blue')
f1_line2, = f1_ax1.plot(Ts_cl, step_cl, lw=1, color='red')
f1_line3, = f1_ax2.plot(Ti_ol, imp_ol, lw=1, color='blue')
f1_line4, = f1_ax2.plot(Ti_cl, imp_cl, lw=1, color='red')
f1_ax1.relim()
f1_ax2.relim()
f1_ax1.autoscale_view()
f1_ax2.autoscale_view()
w.interactive_output(time_analysis, {'a0':a[0], 'a1':a[1], 'a2':a[2], 'a3':a[3],
'b0':b[0], 'b1':b[1], 'b2':b[2],
'e0':e[0], 'e1':e[1], 'e2':e[2],
'f0':f[0], 'f1':f[1]})
```
<b>Analyze the different frequency-domain properties of the two systems!</b>
```
fig2, (f2_ax1, f2_ax2) = plt.subplots(2, 1)
fig2.set_tight_layout(True)
grid2 = f2_ax1.get_gridspec()
f2_line1, = f2_ax1.plot([], [])
f2_line2, = f2_ax1.plot([], [])
f2_line3, = f2_ax1.plot([], [])
f2_line4, = f2_ax1.plot([], [])
f2_line5, = f2_ax2.plot([], [])
f2_line6, = f2_ax2.plot([], [])
f2_line7 = f2_ax1.axhline(y=0, color='k', lw=0.5)
f2_line8 = f2_ax1.axvline(x=0, color='k', lw=0.5)
f2_ax1.grid(which='both', axis='both', color='lightgray')
f2_ax2.grid(which='both', axis='both', color='lightgray')
def type_analysis(a0, a1, a2, a3, b0, b1, b2, e0, e1, e2, f0, f1, typeSelect):
b1c = b1
b2c = b2
f1c = f1
global b, f
if a3 == 0:
b2c = 0
if a3 == 0 and a2 == 0:
b1c = 0
if e2 == 0:
f1c = 0
G = c.tf([b2c, b1c, b0], [a3, a2, a1, a0]) # Feedforward transfer function
H = c.tf([f1c, f0], [e2, e1, e0]) # Feedback transfer function
Wol = c.series(G, H)
Wcl = c.feedback(G, H, -1)
global fig2, grid2, f2_ax1, f2_ax2, f2_line1, f2_line2, f2_line3, f2_line4, f2_line5, f2_line6, f2_line7, f2_line8
try:
f2_ax1.lines.remove(f2_line1)
f2_ax1.lines.remove(f2_line2)
f2_ax1.lines.remove(f2_line3)
f2_ax1.lines.remove(f2_line4)
f2_ax2.lines.remove(f2_line5)
f2_ax2.lines.remove(f2_line6)
except:
pass
if typeSelect == 0: # Pole-Zero map
fig2.set_size_inches((5, 5))
f2_ax2.set_visible(False)
grid2.set_height_ratios([100, 1])
p_ol, z_ol = c.pzmap(Wol, Plot=False)
p_cl, z_cl = c.pzmap(Wcl, Plot=False)
f2_ax1.autoscale(enable=True, axis='both', tight=False)
px_ol = [x.real for x in p_ol]
py_ol = [x.imag for x in p_ol]
zx_ol = [x.real for x in z_ol]
zy_ol = [x.imag for x in z_ol]
px_cl = [x.real for x in p_cl]
py_cl = [x.imag for x in p_cl]
zx_cl = [x.real for x in z_cl]
zy_cl = [x.imag for x in z_cl]
f2_line1, = f2_ax1.plot(zx_ol, zy_ol, 'rs', fillstyle='none')
f2_line2, = f2_ax1.plot(px_ol, py_ol, 'bo', fillstyle='none')
f2_line3, = f2_ax1.plot(zx_cl, zy_cl, 'm^', fillstyle='none')
f2_line4, = f2_ax1.plot(px_cl, py_cl, 'cD', fillstyle='none')
f2_line5, = f2_ax2.plot([], [])
f2_line6, = f2_ax2.plot([], [])
f2_line7.set_visible(True)
f2_line8.set_visible(True)
        f2_ax1.set_title('Pole-zero map', fontsize=11)
        f2_ax1.set_xscale('linear')
        f2_ax1.set_xlabel(r'Re', labelpad=0, fontsize=10)
        f2_ax1.set_ylabel(r'Im', labelpad=0, fontsize=10)
        f2_ax1.tick_params(axis='both', which='both', pad=0, labelsize=8)
        f2_ax1.legend((f2_line1, f2_line2, f2_line3, f2_line4),
                      ('Zeros (open loop)', 'Poles (open loop)', 'Zeros (closed loop)', 'Poles (closed loop)'))
elif typeSelect == 1: # Bode plot
fig2.set_size_inches((9.6, 5))
f2_ax2.set_visible(True)
grid2.set_height_ratios([1, 1])
m_ol, p_ol, o_ol = c.bode(Wol, Plot=False)
m_cl, p_cl, o_cl = c.bode(Wcl, Plot=False)
f2_ax1.autoscale(enable=True, axis='x', tight=True)
f2_ax2.autoscale(enable=True, axis='x', tight=True)
f2_ax1.autoscale(enable=True, axis='y', tight=False)
f2_ax2.autoscale(enable=True, axis='y', tight=False)
f2_line1, = f2_ax1.plot(o_ol, m_ol, lw=1, color='blue')
f2_line2, = f2_ax1.plot(o_cl, m_cl, lw=1, color='red')
f2_line3, = f2_ax1.plot([], [])
f2_line4, = f2_ax1.plot([], [])
f2_line5, = f2_ax2.plot(o_ol, p_ol, lw=1, color='blue')
f2_line6, = f2_ax2.plot(o_cl, p_cl, lw=1, color='red')
f2_line7.set_visible(False)
f2_line8.set_visible(False)
        f2_ax1.set_title('Magnitude plot', fontsize=11)
        f2_ax1.set_xscale('log')
        f2_ax1.set_xlabel(r'$\omega\/[\frac{rad}{s}]$', labelpad=0, fontsize=10)
        f2_ax1.set_ylabel(r'$A\/[dB]$', labelpad=0, fontsize=10)
        f2_ax1.tick_params(axis='both', which='both', pad=0, labelsize=8)
        f2_ax2.set_title('Phase plot', fontsize=11)
        f2_ax2.set_xscale('log')
        f2_ax2.set_xlabel(r'$\omega\/[\frac{rad}{s}]$', labelpad=0, fontsize=10)
        f2_ax2.set_ylabel(r'$\phi\/[°]$', labelpad=0, fontsize=10)
        f2_ax2.tick_params(axis='both', which='both', pad=0, labelsize=8)
        f2_ax1.legend((f2_line1, f2_line2), ('Open loop', 'Closed loop'))
        f2_ax2.legend((f2_line5, f2_line6), ('Open loop', 'Closed loop'))
else: # Nyquist plot
fig2.set_size_inches((5, 5))
f2_ax2.set_visible(False)
grid2.set_height_ratios([100, 1])
_, _, ob_ol = c.nyquist_plot(Wol, Plot=False) # Small resolution plot to determine bounds
_, _, ob_cl = c.nyquist_plot(Wcl, Plot=False)
r_ol, i_ol, _ = c.nyquist(Wol, omega=np.logspace(np.log10(ob_ol[0]), np.log10(ob_ol[-1]), 1000), Plot=False)
r_cl, i_cl, _ = c.nyquist(Wcl, omega=np.logspace(np.log10(ob_cl[0]), np.log10(ob_cl[-1]), 1000), Plot=False)
f2_ax1.autoscale(enable=True, axis='both', tight=False)
f2_line1, = f2_ax1.plot(r_ol, i_ol, lw=1, color='blue')
f2_line2, = f2_ax1.plot(r_cl, i_cl, lw=1, color='red')
f2_line3, = f2_ax1.plot([], [])
f2_line4, = f2_ax1.plot([], [])
f2_line5, = f2_ax2.plot([], [])
f2_line6, = f2_ax2.plot([], [])
f2_line7.set_visible(True)
f2_line8.set_visible(True)
        f2_ax1.set_title('Nyquist plot', fontsize=11)
        f2_ax1.set_xscale('linear')
        f2_ax1.set_xlabel(r'Re', labelpad=0, fontsize=10)
        f2_ax1.set_ylabel(r'Im', labelpad=0, fontsize=10)
        f2_ax1.tick_params(axis='both', which='both', pad=0, labelsize=8)
        f2_ax1.legend((f2_line1, f2_line2), ('Open loop', 'Closed loop'))
f2_ax1.relim(visible_only=True)
f2_ax2.relim(visible_only=True)
f2_ax1.autoscale_view()
f2_ax2.autoscale_view()
w.interactive_output(type_analysis, {'a0':a[0], 'a1':a[1], 'a2':a[2], 'a3':a[3],
'b0':b[0], 'b1':b[1], 'b2':b[2],
'e0':e[0], 'e1':e[1], 'e2':e[2],
'f0':f[0], 'f1':f[1], 'typeSelect':typeSelect})
```
# Streaming Log Processing on GPUs
Almost since the coining of the phrase "big data", log-processing has been a primary use-case for analytics platforms.
Logs are *voluminous*:
A single website visit can result in 10s to 100s of log entries, each with lengthy strings of duplicated client information.
They're *complex*:
Extracting user activities often requires combining multiple records by time and unique session identifier(s).
They're *time-sensitive*:
When something goes wrong, you need to know quickly.
While early big data architectures were oriented towards batch jobs, the focus has shifted to lower-latency solutions. Distributed data processing tools and APIs have made it easier for developers to write _streaming_ applications.
Below we provide an example of how to do streaming web-log processing with RAPIDS, Dask, and Streamz.
## Pre-Requisites
We assume you're running in a RAPIDS nightly or release container, and thus already have cuDF and Dask installed.
Make sure you have [streamz](https://github.com/python-streamz/streamz) installed.
```
!conda install -c conda-forge -y streamz ipywidgets
```
## The Data
For demonstration purposes, we'll use a [publicly available web-log dataset from NASA](http://opensource.indeedeng.io/imhotep/docs/sample-data/).
```
import os, urllib.request, gzip, io
data_dir = '/data/'
if not os.path.exists(data_dir):
os.mkdir(data_dir)
url = 'http://indeedeng.github.io/imhotep/files/nasa_19950630.22-19950728.12.tsv.gz'
fn = 'logs_noheader.tsv'
fileStream = io.BytesIO(urllib.request.urlopen(url).read())
# We remove the header line to avoid sending it in some batches and not others
with gzip.open(fileStream, 'rb') as f_in, open(data_dir + fn, 'wb') as fout:
# This is a latin character set so we must re-encode it
data = f_in.read().decode('iso-8859-1')
p_data = data.partition('\n')
names = p_data[0].split()
fout.write(p_data[2].encode('utf8'))
```
## Inspect the Data
The Google SRE Handbook says it's a good idea to track the [4 Golden Signals](https://landing.google.com/sre/sre-book/chapters/monitoring-distributed-systems/#xref_monitoring_golden-signals) for any important system.
```
import cudf
df = cudf.read_csv(data_dir + fn, sep='\t', names=names, quoting=3)
# The input data contains quotation marks that should not be interpreted as string delimiters, so quoting is set to 3 (QUOTE_NONE)
df.head().to_pandas()
```
The data above doesn't tell us anything about request latency, but we can aggregate it to get a view into traffic, errors, and saturation.
```
# calculate total requests served per host system
traffic = df.groupby(['host']).host.count()
traffic[traffic > 5].head().to_pandas()
# count HTTP error codes per host system
errors = df[df['response'] >= 500].groupby(['host', 'response']).host.count()
errors.to_pandas()
# measure possible saturation of host network cards
mb_sent = df.groupby(['host']).bytes.sum()/1000000
mb_sent[mb_sent > 100].head().to_pandas()
```
You can see from the above that there are not many errors, which is great. We can also see hits per host and total MBs sent per host.
### Single GPU Streaming with RAPIDS and Streamz
A single GPU can process a lot of data quickly. Thanks to the Streamz API, it's also easy to do it in streaming fashion.
In many streaming systems you return events of interest for ops teams to investigate. That is what we will do.
```
from io import StringIO
# calculate traffic, errors, and saturation per batch
def process_on_gpu(messages):
message_stream = StringIO(
'\n'.join(msg.decode('utf-8') if isinstance(msg,bytes) else msg for msg in messages)
)
df = cudf.read_csv(message_stream, sep='\t', names=names, quoting=3)
traffic = df.groupby(['host']).host.count()
errors = df[df['response'] >= 500].groupby(['host', 'response']).host.count()
mb_sent = df.groupby(['host']).bytes.sum()/1000000
    # Return string renderings of each metric, filtered to noteworthy hosts
return {'traffic': traffic[traffic > 200].to_string(), 'errors': errors.to_string(), 'mb_sent': mb_sent[mb_sent > 120].to_string()}
import time, datetime
# save each metric type to its own file, instead of dumping lots of output to Jupyter
def save_to_file(events):
dt = datetime.datetime.fromtimestamp(time.time()).strftime('%Y-%m-%d %H:%M:%S')
with open(data_dir + 'traffic.txt', 'w+') as fp:
fp.write(str(dt) + ':' + events['traffic'] + '\n')
with open(data_dir + 'errors.txt', 'w+') as fp:
fp.write(str(dt) + ':' + events['errors'] + '\n')
with open(data_dir + 'mb_sent.txt', 'w+') as fp:
fp.write(str(dt) + ':' + events['mb_sent'] + '\n')
print(str(dt) + ': metrics batch written..')
```
Note that the function above opens the file in overwrite mode for simplicity (which means the data in the file corresponds to the last batch only). Typically a workflow would append to the existing file, write to a separate file with a timestamp-based filename, or produce to another Kafka topic.
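The timestamped-filename variant mentioned above can be sketched as follows (the function name and naming scheme are illustrative, not part of the original workflow):

```python
import datetime
import os
import tempfile

def save_batch(events, out_dir):
    """Write each metric in this batch to its own timestamped file,
    so a new batch never overwrites an earlier one."""
    stamp = datetime.datetime.now().strftime('%Y%m%d_%H%M%S_%f')
    written = []
    for metric, text in events.items():
        path = os.path.join(out_dir, f'{metric}_{stamp}.txt')
        with open(path, 'w') as fp:
            fp.write(text + '\n')
        written.append(path)
    return written

# demo with toy metric strings in a throwaway directory
demo_dir = tempfile.mkdtemp()
written = save_batch({'traffic': 'hostA 42', 'errors': 'none'}, demo_dir)
```

Because each batch gets a unique filename, no history is lost; a retention job can later prune or compact old files.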
```
from streamz import Stream
# setup the stream
# Streamz allows streaming directly from a text file
source = Stream.from_textfile(data_dir + fn)
# process 250k lines per batch
out = source.partition(250000).map(process_on_gpu).sink(save_to_file)
source.start()
!echo "Error Log:"
!head -n5 {data_dir}errors.txt
!echo "\nTraffic Log:"
!head -n5 {data_dir}traffic.txt
!echo "\nMB Sent Log:"
!head -n5 {data_dir}mb_sent.txt
```
### Scaling Streamz to multiple GPUs with Dask & Kafka
As opposed to streaming from files, a very common pattern is to read from distributed log systems like Apache Kafka.
The below example assumes you have a running Kafka instance/cluster.
For help setting up your own, follow the [Kafka Quickstart guide](http://kafka.apache.org/quickstart).
```
from dask_cuda import LocalCUDACluster
from dask.distributed import Client
# create a Dask cluster with 1 worker per GPU
cluster = LocalCUDACluster()
client = Client(cluster)
```
Streamz uses the [python-confluent-kafka](https://github.com/confluentinc/confluent-kafka-python) library for handling interactions with Kafka. Make sure this library is installed.
```
!conda install -c conda-forge -y python-confluent-kafka
from streamz import Stream
import confluent_kafka
# Kafka specific configurations
topic = "haproxy-topic"
bootstrap_servers = 'localhost:9092'
consumer_conf = {'bootstrap.servers': bootstrap_servers, 'group.id': 'custreamz', 'session.timeout.ms': 60000}
stream = Stream.from_kafka_batched(topic, consumer_conf, poll_interval='1s', npartitions=1, asynchronous=True, dask=False)
final_output = stream.map(process_on_gpu).sink(save_to_file)
stream.start()
```
Typically, upstream applications produce messages to Kafka. For the sake of a self-contained example you can experiment with, we'll use the Confluent Kafka library to produce messages that our Streamz app immediately consumes.
```
producer = confluent_kafka.Producer({'bootstrap.servers': bootstrap_servers})
with open(data_dir+fn, 'rb') as fp:
for line in fp.readlines():
try:
producer.produce(topic, line)
except BufferError:
# Wait for the specified timeout as the queue is full
producer.flush(0.2)
producer.flush()
print("Producer queue is now empty!")
!echo "Error Log:"
!head -n5 {data_dir}errors.txt
!echo "\nTraffic Log:"
!head -n5 {data_dir}traffic.txt
!echo "\nMB Sent Log:"
!head -n5 {data_dir}mb_sent.txt
```
# Create a Learner for inference
```
from fastai import *
from fastai.gen_doc.nbdoc import *
```
In this tutorial, we'll see how the same API allows you to create an empty [`DataBunch`](/basic_data.html#DataBunch) for a [`Learner`](/basic_train.html#Learner) at inference time (once you have trained your model) and how to call the `predict` method to get the predictions on a single item.
```
jekyll_note("""As usual, this page is generated from a notebook that you can find in the docs_src folder of the
[fastai repo](https://github.com/fastai/fastai). We use the saved models from [this tutorial](/tutorial.data.html) to
have this notebook run fast.
""")
```
## Vision
To quickly get access to all the vision functions inside fastai, we use the usual import statements.
```
from fastai import *
from fastai.vision import *
```
### A classification problem
Let's begin with our sample of the MNIST dataset.
```
mnist = untar_data(URLs.MNIST_TINY)
tfms = get_transforms(do_flip=False)
```
It's set up with an imagenet structure, so we use it to split our training and validation sets, then do the labelling.
```
data = (ImageItemList.from_folder(mnist)
.split_by_folder()
.label_from_folder()
.transform(tfms, size=32)
.databunch()
.normalize(imagenet_stats))
```
Now that our data has been properly set up, we can train a model. Once the time comes to deploy it for inference, we'll need to save the information this [`DataBunch`](/basic_data.html#DataBunch) contains (the classes, for instance). To do this, we call `data.export()`. This will create an 'export.pkl' file that you'll need to copy with your model file if you want to deploy on another device.
```
data.export()
```
To create the [`DataBunch`](/basic_data.html#DataBunch) for inference, you'll need to use the `load_empty` method. Note that for now, transforms and normalization aren't saved inside the export file. This is going to be integrated in a future version of the library. For now, we pass the transforms we applied on the validation set, along with all relevant kwargs, and we normalize with the same statistics as during training.
Then, we use it to create a [`Learner`](/basic_train.html#Learner) and load the model we trained before.
```
empty_data = ImageDataBunch.load_empty(mnist, tfms=tfms[1],size=32).normalize(imagenet_stats)
learn = create_cnn(empty_data, models.resnet18)
learn.load('mini_train');
```
You can now get the predictions on any image via `learn.predict`.
```
img = data.train_ds[0][0]
learn.predict(img)
```
It returns a tuple of three things: the object predicted (with the class in this instance), the underlying data (here the corresponding index) and the raw probabilities.
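Without the trained model at hand, the shape of that return value can still be sketched with made-up values (the MNIST_TINY sample has classes '3' and '7'; the probabilities below are illustrative, not real model output):

```python
# How the (predicted object, index, raw probabilities) triple fits together:
classes = ['3', '7']       # data.classes for the MNIST_TINY sample
probs = [0.12, 0.88]       # raw probabilities (third element of the tuple)
pred_idx = max(range(len(probs)), key=probs.__getitem__)  # argmax
pred_class = classes[pred_idx]
print(pred_class, pred_idx)  # -> 7 1
```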
### A multilabel problem
Now let's try these on the planet dataset, which is a little bit different in the sense that each image can have multiple tags (and not just one label).
```
planet = untar_data(URLs.PLANET_TINY)
planet_tfms = get_transforms(flip_vert=True, max_lighting=0.1, max_zoom=1.05, max_warp=0.)
```
Here each image is labelled in a file named 'labels.csv'. We have to add 'train' as a prefix to the filenames, '.jpg' as a suffix, and the labels are separated by spaces.
```
data = (ImageItemList.from_csv(planet, 'labels.csv', folder='train', suffix='.jpg')
.random_split_by_pct()
.label_from_df(sep=' ')
.transform(planet_tfms, size=128)
.databunch()
.normalize(imagenet_stats))
```
Again, we call `data.export()` to export our data object properties.
```
data.export()
```
We can then create the [`DataBunch`](/basic_data.html#DataBunch) for inference, by using the `load_empty` method as before.
```
empty_data = ImageDataBunch.load_empty(planet, tfms=planet_tfms[1], size=128).normalize(imagenet_stats)
learn = create_cnn(empty_data, models.resnet18)
learn.load('mini_train');
```
And we get the predictions on any image via `learn.predict`.
```
img = data.train_ds[0][0]
learn.predict(img)
```
Here we can specify a particular threshold above which predictions are considered a hit. The default is 0.5 but we can change it.
```
learn.predict(img, thresh=0.3)
```
### A regression example
For the next example, we are going to use the [BIWI head pose](https://data.vision.ee.ethz.ch/cvl/gfanelli/head_pose/head_forest.html#db) dataset. On pictures of persons, we have to find the center of their face. For the fastai docs, we have built a small subsample of the dataset (200 images) and prepared a dictionary for the correspondence from filename to center.
```
biwi = untar_data(URLs.BIWI_SAMPLE)
fn2ctr = pickle.load(open(biwi/'centers.pkl', 'rb'))
```
To grab our data, we use this dictionary to label our items. We also use the [`PointsItemList`](/vision.data.html#PointsItemList) class to have the targets be of type [`ImagePoints`](/vision.image.html#ImagePoints) (which will make sure the data augmentation is properly applied to them). When calling [`transform`](/tabular.transform.html#tabular.transform) we make sure to set `tfm_y=True`.
```
data = (ImageItemList.from_folder(biwi)
.random_split_by_pct()
.label_from_func(lambda o:fn2ctr[o.name], label_cls=PointsItemList)
.transform(get_transforms(), tfm_y=True, size=(120,160))
.databunch()
.normalize(imagenet_stats))
```
As before, the road to inference is pretty straightforward: export the data, then load an empty [`DataBunch`](/basic_data.html#DataBunch).
```
data.export()
empty_data = ImageDataBunch.load_empty(biwi, tfms=get_transforms()[1], tfm_y=True, size=(120,160)).normalize(imagenet_stats)
learn = create_cnn(empty_data, models.resnet18)
learn.load('mini_train');
```
And now we can get a prediction on an image.
```
img = data.train_ds[0][0]
learn.predict(img)
```
To visualize the predictions, we can use the [`Image.show`](/vision.image.html#Image.show) method.
```
img.show(y=learn.predict(img)[0])
```
### A segmentation example
Now we are going to look at the [camvid dataset](http://mi.eng.cam.ac.uk/research/projects/VideoRec/CamVid/) (at least a small sample of it), where we have to predict the class of each pixel in an image. Each image in the 'images' subfolder has an equivalent in 'labels' that is its segmentation mask.
```
camvid = untar_data(URLs.CAMVID_TINY)
path_lbl = camvid/'labels'
path_img = camvid/'images'
```
We read the classes in 'codes.txt' and define the function that maps each image filename to its corresponding mask filename.
```
codes = np.loadtxt(camvid/'codes.txt', dtype=str)
get_y_fn = lambda x: path_lbl/f'{x.stem}_P{x.suffix}'
```
The data block API allows us to quickly get everything in a [`DataBunch`](/basic_data.html#DataBunch) and then we can have a look with `show_batch`.
```
data = (SegmentationItemList.from_folder(path_img)
.random_split_by_pct()
.label_from_func(get_y_fn, classes=codes)
.transform(get_transforms(), tfm_y=True, size=128)
.databunch(bs=16, path=camvid)
.normalize(imagenet_stats))
```
As before, we export the data then create an empty [`DataBunch`](/basic_data.html#DataBunch) that we pass to a [`Learner`](/basic_train.html#Learner).
```
data.export()
empty_data = ImageDataBunch.load_empty(camvid, tfms=get_transforms()[1], tfm_y=True, size=128).normalize(imagenet_stats)
learn = Learner.create_unet(empty_data, models.resnet18)
learn.load('mini_train');
```
And now we can get a prediction on an image.
```
img = data.train_ds[0][0]
learn.predict(img)
```
To visualize the predictions, we can use the [`Image.show`](/vision.image.html#Image.show) method.
```
img.show(y=learn.predict(img)[0])
```
## Text
The next application is text, so let's start by importing everything we'll need.
```
from fastai import *
from fastai.text import *
```
### Language modelling
First, let's look at how to get a language model ready for inference. Since we'll load the model trained in the [visualize data tutorial](/tutorial.data.html), we load the vocabulary used there.
```
imdb = untar_data(URLs.IMDB_SAMPLE)
vocab = Vocab(pickle.load(open(imdb/'tmp'/'itos.pkl', 'rb')))
data_lm = (TextList.from_csv(imdb, 'texts.csv', cols='text', vocab=vocab)
.random_split_by_pct()
.label_for_lm()
.databunch())
```
Like in vision, we just have to type `data_lm.export()` to save all the information inside the [`DataBunch`](/basic_data.html#DataBunch) we'll need. In this case, this includes all the vocabulary we created.
```
data_lm.export()
```
Now let's define a language model learner from an empty data object.
```
empty_data = TextLMDataBunch.load_empty(imdb)
learn = language_model_learner(empty_data)
learn.load('mini_train_lm');
```
Then we can predict with the usual method, here we can specify how many words we want the model to predict.
```
learn.predict('This is a simple test of', n_words=20)
```
### Classification
Now let's see a classification example. We have to use the same vocabulary as for the language model if we want to be able to use the encoder we saved.
```
data_clas = (TextList.from_csv(imdb, 'texts.csv', cols='text', vocab=vocab)
.split_from_df(col='is_valid')
.label_from_df(cols='label')
.databunch(bs=42))
```
Again we export the data.
```
data_clas.export()
```
Now let's define a text classifier from an empty data object.
```
empty_data = TextClasDataBunch.load_empty(imdb)
learn = text_classifier_learner(empty_data)
learn.load('mini_train_clas');
```
Then we can predict with the usual method.
```
learn.predict('I really loved that movie!')
```
## Tabular
The last application brings us to tabular data. First let's import everything we'll need.
```
from fastai import *
from fastai.tabular import *
```
We'll use a sample of the [adult dataset](https://archive.ics.uci.edu/ml/datasets/adult) here. Once we read the csv file, we'll need to specify the dependent variable, the categorical variables, the continuous variables and the processors we want to use.
```
adult = untar_data(URLs.ADULT_SAMPLE)
df = pd.read_csv(adult/'adult.csv')
dep_var = '>=50k'
cat_names = ['workclass', 'education', 'marital-status', 'occupation', 'relationship', 'race', 'sex', 'native-country']
cont_names = ['education-num', 'hours-per-week', 'age', 'capital-loss', 'fnlwgt', 'capital-gain']
procs = [FillMissing, Categorify, Normalize]
```
Then we can use the data block API to grab everything together before using `data.show_batch()`.
```
data = (TabularList.from_df(df, path=adult, cat_names=cat_names, cont_names=cont_names, procs=procs)
.split_by_idx(valid_idx=range(800,1000))
.label_from_df(cols=dep_var)
.databunch())
```
We define a [`Learner`](/basic_train.html#Learner) object that we fit and then save the model.
```
learn = tabular_learner(data, layers=[200,100], metrics=accuracy)
learn.fit(1, 1e-2)
learn.save('mini_train')
```
As in the other applications, we just have to type `data.export()` to save everything we'll need for inference (here the inner state of each processor).
```
data.export()
```
Then we create an empty data object and a learner from it like before.
```
data = TabularDataBunch.load_empty(adult)
learn = tabular_learner(data, layers=[200,100])
learn.load('mini_train');
```
And we can predict on a row of dataframe that has the right `cat_names` and `cont_names`.
```
learn.predict(df.iloc[0])
```
# MNIST generator by RBM
- Starting from the provided sample notebook that trains a single-character generator on Colab, I checked what affects the generation quality.
- Comparison of 2 different models by changing CD steps
- $k=1$ for training
- $k=200$ for training
- Generation quality is also affected by
- CD step $k$ for sampling
- Initial state for sampling
```
from RBM_helper import RBM
import torch
import matplotlib.pyplot as plt
rbm_1 = RBM.from_weights('../params/MNIST_k_1_e_500')
rbm_2 = RBM.from_weights('../params/MNIST_k_200_e_500')
```
## Comparing different models with same initial input and CD steps
- We see that, from the same random initial input, the model trained with a larger CD-k step produces a better-quality image.
```
n_vis = rbm_1.n_vis
initial_state = torch.rand(n_vis)
k = 500
image_1 = rbm_1.draw_samples(k, initial_state.clone()).cpu().detach().numpy().reshape(28, 28)
image_2 = rbm_2.draw_samples(k, initial_state.clone()).cpu().detach().numpy().reshape(28, 28)
f, axarr = plt.subplots(1,2)
axarr[0].imshow(image_1)
axarr[0].set_xlabel('k=1, e=500')
axarr[1].imshow(image_2)
axarr[1].set_xlabel('k=200, e=500')
```
## PCD-k to sample from low quality model
- Persistent contrastive divergence: by using training data as the initial state, we can draw high-quality samples even from a less-trained model.
- In this case, the image quality of the two models does not differ much.
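The alternating Gibbs chain behind both CD-k and PCD-k can be sketched in a few lines of NumPy — this uses random placeholder weights, not the trained `RBM_helper` model:

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def gibbs_k(v, W, b_h, b_v, k, rng):
    """Run k alternating Gibbs steps v -> h -> v, as used in CD-k sampling.
    CD-k starts v from noise; PCD-k starts it from a training example."""
    for _ in range(k):
        p_h = sigmoid(v @ W + b_h)                     # hidden activation probs
        h = (rng.random(p_h.shape) < p_h).astype(float)  # sample hidden units
        p_v = sigmoid(h @ W.T + b_v)                   # visible reconstruction probs
        v = (rng.random(p_v.shape) < p_v).astype(float)  # sample visible units
    return v

rng = np.random.default_rng(0)
W = rng.normal(0, 0.1, (784, 64))          # placeholder 784-visible / 64-hidden weights
v0 = (rng.random(784) < 0.5).astype(float)  # random binary initial state
v_k = gibbs_k(v0, W, np.zeros(64), np.zeros(784), k=5, rng=rng)
```

The only difference between the two regimes above is the choice of `v0`, which is exactly why seeding the chain with a training digit rescues the weakly trained model.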
```
import numpy as np
from urllib import request
import gzip
import pickle
filename = [
["training_images","train-images-idx3-ubyte.gz"],
["test_images","t10k-images-idx3-ubyte.gz"],
["training_labels","train-labels-idx1-ubyte.gz"],
["test_labels","t10k-labels-idx1-ubyte.gz"]
]
def download_mnist():
base_url = "http://yann.lecun.com/exdb/mnist/"
for name in filename:
print("Downloading "+name[1]+"...")
request.urlretrieve(base_url+name[1], name[1])
print("Download complete.")
def save_mnist():
mnist = {}
for name in filename[:2]:
with gzip.open(name[1], 'rb') as f:
mnist[name[0]] = np.frombuffer(f.read(), np.uint8, offset=16).reshape(-1,28*28)
for name in filename[-2:]:
with gzip.open(name[1], 'rb') as f:
mnist[name[0]] = np.frombuffer(f.read(), np.uint8, offset=8)
with open("mnist.pkl", 'wb') as f:
pickle.dump(mnist,f)
print("Save complete.")
def init():
# download_mnist()
save_mnist()
def load():
with open("mnist.pkl",'rb') as f:
mnist = pickle.load(f)
return mnist["training_images"], mnist["training_labels"], mnist["test_images"], mnist["test_labels"]
init()
data = load()
labels = data[1]
data = data[0]
data = data/255 # normalize grey scale values between 0 and 255 to values between 0 and 1
data = np.where(data > 0.5, 1, 0)
data = data[labels == 0]
training_initial_state = torch.Tensor(data[0])
image_3 = rbm_1.draw_samples(k, training_initial_state).cpu().detach().numpy().reshape(28, 28)
image_4 = rbm_2.draw_samples(k, training_initial_state).cpu().detach().numpy().reshape(28, 28)
f, axarr = plt.subplots(1,2)
axarr[0].imshow(image_3)
axarr[0].set_xlabel('k=1, e=500')
axarr[1].imshow(image_4)
axarr[1].set_xlabel('k=200, e=1000')
```
## Likelihood convergence comparison between different CD steps
- To see how the CD step difference affects training, I compare the learning curves of different $k$'s
- It clearly shows that the CD step count is a bottleneck of the training.
```
import pandas as pd
df_k1 = pd.read_csv('../training_logs/mnist_k_1_ll.csv', names=['likelihood_k1'], skiprows=1)
df_k100 = pd.read_csv('../training_logs/mnist_k100_ll.csv', names=['likelihood_k100'], skiprows=1)
df_k1.index = list(range(1, 501))
df_k100.index = list(range(1, 101))
df_k1.index.name = 'Epoch'
df_k100.index.name = 'Epoch'
combined_df = pd.concat([df_k1.iloc[:100], df_k100], axis=1)
ax = combined_df.plot()
ax.set_ylabel('Log likelihood')
```
| github_jupyter |
```
from skimage.io import imread
from skimage.color import rgb2gray
import matplotlib.pyplot as plt
img = rgb2gray(imread("handwritten1.jpg"))
plt.figure(figsize=(10,10))
plt.axis("off")
plt.imshow(img, cmap="gray")
plt.show()
```
### Find the Horizontal projection profile and find the window where line segment can be created.
One of the common ways of finding the line-height of a document is by analyzing its horizontal projection profile. The horizontal projection profile (HPP) is the array of row sums of a two-dimensional image. Where there are more white spaces, we see more peaks. These peaks give us an idea of where the segmentation between two lines can be done.
```
from skimage.filters import sobel
import numpy as np
def horizontal_projections(sobel_image):
return np.sum(sobel_image, axis=1)
sobel_image = sobel(img)
hpp = horizontal_projections(sobel_image)
plt.plot(hpp)
plt.show()
```
As you can see, where there were more white spaces there are peaks in the graph. We will use this information further to locate the regions where we can find the separation line.
```
#find the midway where we can make a threshold and extract the peaks regions
#divider parameter value is used to threshold the peak values from non peak values.
def find_peak_regions(hpp, divider=2):
threshold = (np.max(hpp)-np.min(hpp))/divider
peaks = []
for i, hppv in enumerate(hpp):
if hppv < threshold:
peaks.append([i, hppv])
return peaks
peaks = find_peak_regions(hpp)
peaks_index = np.array(peaks)[:,0].astype(int)
segmented_img = np.copy(img)
r,c = segmented_img.shape
for ri in range(r):
if ri in peaks_index:
segmented_img[ri, :] = 0
plt.figure(figsize=(20,20))
plt.imshow(segmented_img, cmap="gray")
plt.show()
```
The above black regions indicate where we would need to run our path planning algorithm for line segmentation.
```
#group the peaks into walking windows
def get_hpp_walking_regions(peaks_index):
hpp_clusters = []
cluster = []
for index, value in enumerate(peaks_index):
cluster.append(value)
if index < len(peaks_index)-1 and peaks_index[index+1] - value > 1:
hpp_clusters.append(cluster)
cluster = []
#get the last cluster
if index == len(peaks_index)-1:
hpp_clusters.append(cluster)
cluster = []
return hpp_clusters
hpp_clusters = get_hpp_walking_regions(peaks_index)
#a star path planning algorithm
from heapq import *
def heuristic(a, b):
return (b[0] - a[0]) ** 2 + (b[1] - a[1]) ** 2
def astar(array, start, goal):
neighbors = [(0,1),(0,-1),(1,0),(-1,0),(1,1),(1,-1),(-1,1),(-1,-1)]
close_set = set()
came_from = {}
gscore = {start:0}
fscore = {start:heuristic(start, goal)}
oheap = []
heappush(oheap, (fscore[start], start))
while oheap:
current = heappop(oheap)[1]
if current == goal:
data = []
while current in came_from:
data.append(current)
current = came_from[current]
return data
close_set.add(current)
for i, j in neighbors:
neighbor = current[0] + i, current[1] + j
tentative_g_score = gscore[current] + heuristic(current, neighbor)
if 0 <= neighbor[0] < array.shape[0]:
if 0 <= neighbor[1] < array.shape[1]:
if array[neighbor[0]][neighbor[1]] == 1:
continue
else:
# array bound y walls
continue
else:
# array bound x walls
continue
if neighbor in close_set and tentative_g_score >= gscore.get(neighbor, 0):
continue
if tentative_g_score < gscore.get(neighbor, 0) or neighbor not in [i[1]for i in oheap]:
came_from[neighbor] = current
gscore[neighbor] = tentative_g_score
fscore[neighbor] = tentative_g_score + heuristic(neighbor, goal)
heappush(oheap, (fscore[neighbor], neighbor))
return []
#Scan the paths to see if there are any blockers.
from skimage.filters import threshold_otsu
from skimage.util import invert
def get_binary(img):
mean = np.mean(img)
if mean == 0.0 or mean == 1.0:
return img
thresh = threshold_otsu(img)
binary = img <= thresh
binary = binary*1
return binary
def path_exists(window_image):
#very basic check first then proceed to A* check
if 0 in horizontal_projections(window_image):
return True
padded_window = np.zeros((window_image.shape[0],1))
world_map = np.hstack((padded_window, np.hstack((window_image,padded_window)) ) )
path = np.array(astar(world_map, (int(world_map.shape[0]/2), 0), (int(world_map.shape[0]/2), world_map.shape[1]-1)))
if len(path) > 0:
return True
return False
def get_road_block_regions(nmap):
road_blocks = []
needtobreak = False
for col in range(nmap.shape[1]):
start = col
end = col+20
if end > nmap.shape[1]-1:
end = nmap.shape[1]-1
needtobreak = True
if path_exists(nmap[:, start:end]) == False:
road_blocks.append(col)
if needtobreak == True:
break
return road_blocks
def group_the_road_blocks(road_blocks):
#group the road blocks
road_blocks_cluster_groups = []
road_blocks_cluster = []
size = len(road_blocks)
for index, value in enumerate(road_blocks):
road_blocks_cluster.append(value)
if index < size-1 and (road_blocks[index+1] - road_blocks[index]) > 1:
road_blocks_cluster_groups.append([road_blocks_cluster[0], road_blocks_cluster[len(road_blocks_cluster)-1]])
road_blocks_cluster = []
if index == size-1 and len(road_blocks_cluster) > 0:
road_blocks_cluster_groups.append([road_blocks_cluster[0], road_blocks_cluster[len(road_blocks_cluster)-1]])
road_blocks_cluster = []
return road_blocks_cluster_groups
binary_image = get_binary(img)
for cluster_of_interest in hpp_clusters:
nmap = binary_image[cluster_of_interest[0]:cluster_of_interest[len(cluster_of_interest)-1],:]
road_blocks = get_road_block_regions(nmap)
road_blocks_cluster_groups = group_the_road_blocks(road_blocks)
#create the doorways
for index, road_blocks in enumerate(road_blocks_cluster_groups):
window_image = nmap[:, road_blocks[0]: road_blocks[1]+10]
binary_image[cluster_of_interest[0]:cluster_of_interest[len(cluster_of_interest)-1],:][:, road_blocks[0]: road_blocks[1]+10][int(window_image.shape[0]/2),:] *= 0
# now that everything is cleaner, it's time to segment all the lines using the A* algorithm
line_segments = []
for i, cluster_of_interest in enumerate(hpp_clusters):
nmap = binary_image[cluster_of_interest[0]:cluster_of_interest[len(cluster_of_interest)-1],:]
path = np.array(astar(nmap, (int(nmap.shape[0]/2), 0), (int(nmap.shape[0]/2),nmap.shape[1]-1)))
offset_from_top = cluster_of_interest[0]
path[:,0] += offset_from_top
line_segments.append(path)
cluster_of_interest = hpp_clusters[1]
offset_from_top = cluster_of_interest[0]
nmap = binary_image[cluster_of_interest[0]:cluster_of_interest[len(cluster_of_interest)-1],:]
plt.figure(figsize=(20,20))
plt.imshow(invert(nmap), cmap="gray")
path = np.array(astar(nmap, (int(nmap.shape[0]/2), 0), (int(nmap.shape[0]/2),nmap.shape[1]-1)))
plt.plot(path[:,1], path[:,0])
offset_from_top = cluster_of_interest[0]
fig, ax = plt.subplots(figsize=(20,10), ncols=2)
for path in line_segments:
ax[1].plot((path[:,1]), path[:,0])
ax[1].axis("off")
ax[0].axis("off")
ax[1].imshow(img, cmap="gray")
ax[0].imshow(img, cmap="gray")
## add an extra line to the line segments array which represents the last bottom row on the image
last_bottom_row = np.flip(np.column_stack(((np.ones((img.shape[1],))*img.shape[0]), np.arange(img.shape[1]))).astype(int), axis=0)
line_segments.append(last_bottom_row)
```
### Let's now divide the image by the line segments passing through it
```
line_images = []
def extract_line_from_image(image, lower_line, upper_line):
lower_boundary = np.min(lower_line[:, 0])
upper_boundary = np.min(upper_line[:, 0])
img_copy = np.copy(image)
r, c = img_copy.shape
for index in range(c-1):
img_copy[0:lower_line[index, 0], index] = 255
img_copy[upper_line[index, 0]:r, index] = 255
return img_copy[lower_boundary:upper_boundary, :]
line_count = len(line_segments)
fig, ax = plt.subplots(figsize=(10,10), nrows=line_count-1)
for line_index in range(line_count-1):
line_image = extract_line_from_image(img, line_segments[line_index], line_segments[line_index+1])
line_images.append(line_image)
ax[line_index].imshow(line_image, cmap="gray")
```
### Now that I have the lines, I can divide them into words.
Ref: https://github.com/muthuspark/ml_research/blob/master/Separate%20words%20in%20a%20line%20using%20VPP.ipynb
```
from skimage.filters import threshold_otsu
# binarize the image (a Gaussian blur beforehand would remove any noise in the image)
first_line = line_images[0]
thresh = threshold_otsu(first_line)
binary = first_line > thresh
# find the vertical projection by adding up the values of all pixels along rows
vertical_projection = np.sum(binary, axis=0)
# plot the vertical projects
fig, ax = plt.subplots(nrows=2, figsize=(20,10))
plt.xlim(0, first_line.shape[1])
ax[0].imshow(binary, cmap="gray")
ax[1].plot(vertical_projection)
height = first_line.shape[0]
## we will go through the vertical projections and
## find the sequence of consecutive white spaces in the image
whitespace_lengths = []
whitespace = 0
for vp in vertical_projection:
if vp == height:
whitespace = whitespace + 1
elif vp != height:
if whitespace != 0:
whitespace_lengths.append(whitespace)
whitespace = 0 # reset whitespace counter.
print("whitespaces:", whitespace_lengths)
avg_white_space_length = np.mean(whitespace_lengths)
print("average whitespace lenght:", avg_white_space_length)
## find index of whitespaces which are actually long spaces using the avg_white_space_length
whitespace_length = 0
divider_indexes = []
for index, vp in enumerate(vertical_projection):
if vp == height:
whitespace_length = whitespace_length + 1
elif vp != height:
if whitespace_length != 0 and whitespace_length > avg_white_space_length:
divider_indexes.append(index-int(whitespace_length/2))
whitespace_length = 0 # reset it
print(divider_indexes)
# lets create the block of words from divider_indexes
divider_indexes = np.array(divider_indexes)
dividers = np.column_stack((divider_indexes[:-1],divider_indexes[1:]))
# now plot the findings
fig, ax = plt.subplots(nrows=len(dividers), figsize=(5,10))
for index, window in enumerate(dividers):
ax[index].axis("off")
ax[index].imshow(first_line[:,window[0]:window[1]], cmap="gray")
```
# CSE 6040, Fall 2015 [28]: K-means Clustering, Part 2
Last time, we implemented the basic version of K-means. In this lecture we will explore some advanced techniques
to improve the performance of K-means.
```
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
import seaborn as sns
%matplotlib inline
```
### Read in data
```
df = pd.read_csv ('http://vuduc.org/cse6040/logreg_points_train.csv')
points = df[['x_1', 'x_2']].values  # df.as_matrix() was removed in newer pandas
labels = df['label'].values
n = points.shape[0]
d = points.shape[1]
k = 2
df.head()
def init_centers(X, k):
sampling = np.random.randint(0, n, k)
return X[sampling, :]
```
### Fast implementation of the distance matrix computation
The idea is that $$||(x - c)||^2 = ||x||^2 - 2\langle x, c \rangle + ||c||^2 $$
This has many advantages.
1. The centers are fixed (during a single iteration), so $||c||^2$ only needs to be computed once
2. Data points are usually sparse, but centers are not
3. If implemented cleverly, we don't need any for loops
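The three terms of the identity can be sketched in vectorized NumPy as follows (this is one possible answer to the exercise cell below, so treat it as a spoiler; the function name is my own):

```python
import numpy as np

def squared_distances(X, C):
    """Pairwise squared Euclidean distances via ||x||^2 - 2<x,c> + ||c||^2."""
    first = (X ** 2).sum(axis=1)[:, None]   # shape (n, 1): ||x_i||^2
    second = 2.0 * X @ C.T                  # shape (n, k): 2 <x_i, c_j>
    third = (C ** 2).sum(axis=1)[None, :]   # shape (1, k): ||c_j||^2
    D = first - second + third
    D[D < 0] = 0  # clip tiny negatives caused by floating-point round-off
    return D

X = np.random.rand(100, 2)
centers = X[:3]
print(squared_distances(X, centers).shape)  # → (100, 3)
```

Broadcasting does all the work here: the `(n, 1)` and `(1, k)` terms expand against the `(n, k)` inner-product matrix with no explicit loops.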
```
def compute_d2(X, centers):
D = np.empty((n, k))
for i in range(n):
D[i, :] = np.linalg.norm(X[i,:] - centers, axis=1) ** 2
return D
def compute_d2_fast(X, centers):
# @YOUSE: compute a length-n array, where each entry is the square of norm of a point
first_term =
# @YOUSE: compute a (n * k) matrix, where entry (i,j) is the two times of inner product of row i of X and row j of centers
second_term =
# @YOUSE: compute a length-k array, where each entry is the square of norm of a center
third_term =
D = np.tile(first_term, (centers.shape[0], 1)).T - second_term + np.tile(third_term, (n,1))
D[D < 0] = 0
return D
```
Let's see the difference in running time between the two implementations.
```
centers = init_centers(points, k)
%timeit D = compute_d2(points, centers)
%timeit D = compute_d2_fast(points, centers)
def cluster_points(D):
return np.argmin(D, axis=1)
def update_centers(X, clustering):
centers = np.empty((k, d))
for i in range(k):
members = (clustering == i)
if any(members):
centers[i, :] = np.mean(X[members, :], axis=0)
return centers
def WCSS(D):
min_val = np.amin(D, axis=1)
return np.sum(min_val)
def has_converged(old_centers, centers):
return set([tuple(x) for x in old_centers]) == set([tuple(x) for x in centers])
def kmeans_basic(X, k):
old_centers = init_centers(X, k)
centers = init_centers(X, k)
i = 1
while not has_converged(old_centers, centers):
old_centers = centers
D = compute_d2_fast(X, centers)
clustering = cluster_points(D)
centers = update_centers(X, clustering)
print "iteration", i, "WCSS = ", WCSS(D)
i += 1
return centers, clustering
centers, clustering = kmeans_basic(points, k)
def plot_clustering_k2(centers, clustering):
df['clustering'] = clustering
sns.lmplot(data=df, x="x_1", y="x_2", hue="clustering", fit_reg=False,)
if df['clustering'][0] == 0:
colors = ['b', 'g']
else:
colors = ['g', 'b']
plt.scatter(centers[:,0], centers[:,1], s=500, c=colors, marker=u'*' )
plot_clustering_k2(centers, clustering)
```
### K-means implementation in Scipy
Actually, Python has a built-in K-means implementation in SciPy.
SciPy builds on top of NumPy, and is installed by default in many Python distributions.
```
from scipy.cluster.vq import kmeans,vq
# distortion is the same as WCSS.
# It is called distortion in the Scipy document, since clustering can be used in compression.
centers, distortion = kmeans(points, k)
# vq returns the clustering (assignment of group for each point)
# based on the centers obtained by the kmeans function.
# _ here means ignore the second return value
clustering, _ = vq(points, centers)
plot_clustering_k2(centers, clustering)
```
## Elbow method to determine a good k
The elbow method is a general rule of thumb for selecting parameters.
The idea is that one should choose a number of clusters such that adding another cluster doesn't give much better modeling of the data.
```
df_kcurve = pd.DataFrame(columns = ['k', 'distortion'])
for i in range(1,10):
_, distortion = kmeans(points, i)
df_kcurve.loc[i] = [i, distortion]
df_kcurve.plot(x="k", y="distortion")
```
You can see that at $k=2$ there is a sharp angle (the "elbow").
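One way to locate the elbow programmatically (a heuristic I'm adding here, not part of the original lecture) is to take the $k$ with the largest discrete second difference of the distortion curve:

```python
import numpy as np

def elbow_k(ks, distortions):
    """Return the k at the sharpest bend (largest discrete second difference)."""
    d = np.asarray(distortions, dtype=float)
    curvature = d[:-2] - 2 * d[1:-1] + d[2:]  # second difference at interior points
    return ks[1 + int(np.argmax(curvature))]

# Synthetic distortion curve with a clear bend at k=2
ks = [1, 2, 3, 4, 5]
distortions = [10.0, 2.0, 1.8, 1.7, 1.6]
print(elbow_k(ks, distortions))  # → 2
```

This only considers interior points, so it cannot pick the first or last $k$; for noisy curves a visual check like the plot above remains the safer guide.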
### Exercise: implement K-means++
K-means++ differs from K-means only in the initialization step.
In K-means, we randomly select k data points as the centers all at once.
One may have bad luck and get a poor initialization in which all k points are concentrated in one area.
This could lead to a bad local optimum or take a long time to converge.
The idea of K-means++ is to select more spread-out centers.
In particular, K-means++ selects k centers iteratively, one at a time.
In the first iteration, it randomly chooses a single point as the 1st center.
In the second iteration, we calculate the squared distance between each point and the 1st center,
and randomly choose the 2nd center with probability proportional to this squared distance.
Now suppose we have chosen $m<k$ centers; the $(m+1)$-th center is randomly chosen
with probability proportional to the squared distance from each point to its nearest center.
The initialization step finishes when all k centers are chosen.
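As a hedged reference (and a spoiler for the exercise below), one way to implement this selection in plain NumPy — the function name and seeding are my own choices:

```python
import numpy as np

def init_centers_kplusplus_sketch(X, k, seed=0):
    """One possible k-means++ initialization; returns a (k, d) matrix of centers."""
    rng = np.random.default_rng(seed)
    n = X.shape[0]
    centers = [X[rng.integers(n)]]  # 1st center: a uniformly random data point
    for _ in range(k - 1):
        c = np.array(centers)
        # squared distance from every point to its nearest already-chosen center
        d2 = ((X[:, None, :] - c[None, :, :]) ** 2).sum(-1).min(axis=1)
        centers.append(X[rng.choice(n, p=d2 / d2.sum())])  # sample ∝ squared distance
    return np.array(centers)

X = np.random.rand(200, 2)
print(init_centers_kplusplus_sketch(X, 3).shape)  # → (3, 2)
```

Note that an already-chosen center has squared distance 0 to itself, so it can never be sampled again.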
```
def init_centers_kplusplus(X, k):
# @YOUSE: implement the initialization step in k-means++
# return centers: (k * d) matrix
pass
def kmeans_kplusplus(X, k):
old_centers = init_centers(X, k)
centers = init_centers_kplusplus(X, k)  # the k-means++ centers must be the actual starting centers
i = 1
while not has_converged(old_centers, centers):
old_centers = centers
D = compute_d2_fast(X, centers)
clustering = cluster_points(D)
centers = update_centers(X, clustering)
print "iteration", i, "WCSS = ", WCSS(D)
i += 1
return centers, clustering
centers, clustering = kmeans_kplusplus(points, k)
plot_clustering_k2(centers, clustering)
```
## Chapter 15
---
# K-Nearest Neighbors
An observation is predicted to be the class of that of the largest proportion of the k-nearest observations.
## 15.1 Finding an Observation's Nearest Neighbors
```
from sklearn import datasets
from sklearn.neighbors import NearestNeighbors
from sklearn.preprocessing import StandardScaler
iris = datasets.load_iris()
features = iris.data
standardizer = StandardScaler()
features_standardized = standardizer.fit_transform(features)
nearest_neighbors = NearestNeighbors(n_neighbors=2).fit(features_standardized)
#nearest_neighbors_euclidean = NearestNeighbors(n_neighbors=2, metric='euclidean').fit(features_standardized)
new_observation = [1, 1, 1, 1]
distances, indices = nearest_neighbors.kneighbors([new_observation])
features_standardized[indices]
```
### Discussion
How do we measure distance?
* Euclidean
$$
d_{euclidean} = \sqrt{\sum_{i=1}^{n}{(x_i - y_i)^2}}
$$
* Manhattan
$$
d_{manhattan} = \sum_{i=1}^{n}{|x_i - y_i|}
$$
* Minkowski (default)
$$
d_{minkowski} = (\sum_{i=1}^{n}{|x_i - y_i|^p})^{\frac{1}{p}}
$$
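A quick sanity check of these three formulas in plain NumPy (my own sketch; at $p=2$ the Minkowski distance reduces to the Euclidean one):

```python
import numpy as np

def euclidean(x, y):
    return np.sqrt(((x - y) ** 2).sum())

def manhattan(x, y):
    return np.abs(x - y).sum()

def minkowski(x, y, p):
    return (np.abs(x - y) ** p).sum() ** (1 / p)

x = np.array([0.0, 0.0])
y = np.array([3.0, 4.0])
print(euclidean(x, y))     # → 5.0
print(manhattan(x, y))     # → 7.0
print(minkowski(x, y, 2))  # → 5.0 (same as Euclidean at p=2)
```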
## 15.2 Creating a K-Nearest Neighbor Classifier
```
from sklearn.neighbors import KNeighborsClassifier
from sklearn.preprocessing import StandardScaler
from sklearn import datasets
iris = datasets.load_iris()
X = iris.data
y = iris.target
standardizer = StandardScaler()
X_std = standardizer.fit_transform(X)
knn = KNeighborsClassifier(n_neighbors=5, n_jobs=-1).fit(X_std, y)
new_observations = [[0.75, 0.75, 0.75, 0.75],
[1, 1, 1, 1]]
knn.predict(new_observations)
```
### Discussion
In KNN, given an observation $x_u$, with an unknown target class, the algorithm first identifies the k closest observations (sometimes called $x_u$'s neighborhood) based on some distance metric, then these k observations "vote" based on their class and the class that wins the vote is $x_u$'s predicted class. More formally, the probability $x_u$ is some class j is:
$$
\frac{1}{k} \sum_{i \in v}{I(y_i = j)}
$$
where $v$ is the set of k observations in $x_u$'s neighborhood, $y_i$ is the class of the ith observation, and I is an indicator function (i.e., 1 if true, 0 otherwise). In scikit-learn we can see these probabilities using `predict_proba`
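As a toy illustration of this vote fraction (my own sketch, not scikit-learn's implementation), the formula can be written directly in NumPy:

```python
import numpy as np

def knn_proba(X, y, x_u, k=5):
    """Probability that x_u belongs to each class: the fraction of its
    k nearest neighbors (Euclidean distance) that vote for that class."""
    d2 = ((X - x_u) ** 2).sum(axis=1)
    neighbors = np.argsort(d2)[:k]  # indices of the k closest observations
    classes = np.unique(y)
    return np.array([(y[neighbors] == j).mean() for j in classes])

X = np.array([[0.0], [0.1], [0.2], [1.0], [1.1]])
y = np.array([0, 0, 0, 1, 1])
print(knn_proba(X, y, np.array([0.05]), k=3))  # → [1. 0.]
```

With `k=3` all nearest neighbors belong to class 0, so the vote is unanimous; with `k=5` the probabilities become the overall class fractions `[0.6, 0.4]`.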
## 15.3 Identifying the Best Neighborhood Size
```
from sklearn.neighbors import KNeighborsClassifier
from sklearn import datasets
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import Pipeline, FeatureUnion
from sklearn.model_selection import GridSearchCV
iris = datasets.load_iris()
features = iris.data
target = iris.target
standardizer = StandardScaler()
features_standardized = standardizer.fit_transform(features)
knn = KNeighborsClassifier(n_neighbors=5, n_jobs=-1)
pipe = Pipeline([("standardizer", standardizer), ("knn", knn)])
search_space = [{"knn__n_neighbors": [1, 2, 3, 4, 5, 6, 7, 8, 9, 10]}]
classifier = GridSearchCV(
pipe, search_space, cv=5, verbose=0).fit(features_standardized, target)
classifier.best_estimator_.get_params()["knn__n_neighbors"]
```
## 15.4 Creating a Radius-Based Nearest Neighbor Classifier
Given an observation of unknown class, you need to predict its class based on the classes of all observations within a certain distance.
```
from sklearn.neighbors import RadiusNeighborsClassifier
from sklearn.preprocessing import StandardScaler
from sklearn import datasets
iris = datasets.load_iris()
features = iris.data
target = iris.target
standardizer = StandardScaler()
features_standardized = standardizer.fit_transform(features)
rnn = RadiusNeighborsClassifier(
radius=.5, n_jobs=-1).fit(features_standardized, target)
new_observations = [[1, 1, 1, 1]]
rnn.predict(new_observations)
```
# Huggingface Sagemaker-sdk - Distributed Training Demo
### Model Parallelism using `SageMakerTrainer`
1. [Introduction](#Introduction)
2. [Development Environment and Permissions](#Development-Environment-and-Permissions)
1. [Installation](#Installation)
2. [Development environment](#Development-environment)
3. [Permissions](#Permissions)
3. [Processing](#Preprocessing)
1. [Tokenization](#Tokenization)
2. [Uploading data to sagemaker_session_bucket](#Uploading-data-to-sagemaker_session_bucket)
4. [Fine-tuning & starting Sagemaker Training Job](#Fine-tuning-\&-starting-Sagemaker-Training-Job)
1. [Creating an Estimator and start a training job](#Creating-an-Estimator-and-start-a-training-job)
2. [Estimator Parameters](#Estimator-Parameters)
3. [Download fine-tuned model from s3](#Download-fine-tuned-model-from-s3)
3. [Attach to old training job to an estimator ](#Attach-to-old-training-job-to-an-estimator)
5. [_Coming soon_:Push model to the Hugging Face hub](#Push-model-to-the-Hugging-Face-hub)
# Introduction
Welcome to our end-to-end distributed text-classification example. In this demo, we will use the Hugging Face `transformers` and `datasets` libraries together with an Amazon sagemaker-sdk extension to run the GLUE `mnli` benchmark on a multi-node multi-gpu cluster using the [SageMaker Model Parallelism Library](https://docs.aws.amazon.com/sagemaker/latest/dg/model-parallel-intro.html). The demo will use the new smdistributed library to run training on multiple gpus. We extended the `Trainer` API to the `SageMakerTrainer` to use the model parallelism library. Therefore you only have to change the imports in your `train.py`.
```python
from transformers.sagemaker import SageMakerTrainingArguments as TrainingArguments
from transformers.sagemaker import SageMakerTrainer as Trainer
```
_**NOTE: You can run this demo in Sagemaker Studio, your local machine or Sagemaker Notebook Instances**_
# Development Environment and Permissions
## Installation
_*Note:* we only install the required libraries from Hugging Face and AWS. You also need PyTorch or TensorFlow if you haven't installed them already_
```
!pip install "sagemaker>=2.31.0" "transformers>=4.4.2" "datasets[s3]>=1.5.0" --upgrade
```
## Development environment
**upgrade ipywidgets for `datasets` library and restart kernel, only needed when preprocessing is done in the notebook**
```
%%capture
import IPython
!conda install -c conda-forge ipywidgets -y
IPython.Application.instance().kernel.do_shutdown(True) # has to restart kernel so changes are used
import sagemaker.huggingface
```
## Permissions
_If you are going to use Sagemaker in a local environment. You need access to an IAM Role with the required permissions for Sagemaker. You can find [here](https://docs.aws.amazon.com/sagemaker/latest/dg/sagemaker-roles.html) more about it._
```
import sagemaker
sess = sagemaker.Session()
# sagemaker session bucket -> used for uploading data, models and logs
# sagemaker will automatically create this bucket if it does not exist
sagemaker_session_bucket=None
if sagemaker_session_bucket is None and sess is not None:
# set to default bucket if a bucket name is not given
sagemaker_session_bucket = sess.default_bucket()
role = sagemaker.get_execution_role()
sess = sagemaker.Session(default_bucket=sagemaker_session_bucket)
print(f"sagemaker role arn: {role}")
print(f"sagemaker bucket: {sess.default_bucket()}")
print(f"sagemaker session region: {sess.boto_region_name}")
```
# Fine-tuning & starting Sagemaker Training Job
In order to create a SageMaker training job we need a `HuggingFace` Estimator. The Estimator handles end-to-end Amazon SageMaker training and deployment tasks. In an Estimator we define which fine-tuning script should be used as `entry_point`, which `instance_type` should be used, which `hyperparameters` are passed in, and so on.
```python
huggingface_estimator = HuggingFace(entry_point='train.py',
source_dir='./scripts',
base_job_name='huggingface-sdk-extension',
instance_type='ml.p3.2xlarge',
instance_count=1,
transformers_version='4.4',
pytorch_version='1.6',
py_version='py36',
role=role,
hyperparameters = {'epochs': 1,
'train_batch_size': 32,
'model_name':'distilbert-base-uncased'
})
```
When we create a SageMaker training job, SageMaker takes care of starting and managing all the required ec2 instances for us with the `huggingface` container, uploads the provided fine-tuning script `train.py` and downloads the data from our `sagemaker_session_bucket` into the container at `/opt/ml/input/data`. Then, it starts the training job by running:
```python
/opt/conda/bin/python train.py --epochs 1 --model_name distilbert-base-uncased --train_batch_size 32
```
The `hyperparameters` you define in the `HuggingFace` estimator are passed in as named arguments.
Sagemaker is providing useful properties about the training environment through various environment variables, including the following:
* `SM_MODEL_DIR`: A string that represents the path where the training job writes the model artifacts to. After training, artifacts in this directory are uploaded to S3 for model hosting.
* `SM_NUM_GPUS`: An integer representing the number of GPUs available to the host.
* `SM_CHANNEL_XXXX:` A string that represents the path to the directory that contains the input data for the specified channel. For example, if you specify two input channels in the HuggingFace estimator’s fit call, named `train` and `test`, the environment variables `SM_CHANNEL_TRAIN` and `SM_CHANNEL_TEST` are set.
To run your training job locally you can define `instance_type='local'` or `instance_type='local-gpu'` for gpu usage. _Note: this does not work within SageMaker Studio_
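Inside `train.py`, these values are typically read from the CLI arguments and environment variables — a minimal sketch (the fallback paths are my own local-testing defaults, not something SageMaker defines):

```python
import os
import argparse

parser = argparse.ArgumentParser()
# hyperparameters defined in the estimator arrive as named CLI arguments
parser.add_argument("--epochs", type=int, default=1)
parser.add_argument("--train_batch_size", type=int, default=32)
# SageMaker-provided environment variables, with local fallbacks for testing
parser.add_argument("--model_dir", default=os.environ.get("SM_MODEL_DIR", "./model"))
parser.add_argument("--n_gpus", type=int, default=int(os.environ.get("SM_NUM_GPUS", "0")))
parser.add_argument("--train_dir", default=os.environ.get("SM_CHANNEL_TRAIN", "./data/train"))
args, _ = parser.parse_known_args([])  # pass [] so the sketch runs standalone; drop it in a real train.py
print(args.model_dir, args.n_gpus, args.train_dir)
```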
## Creating an Estimator and start a training job
In this example we are going to use the `run_glue.py` from the transformers example scripts. We modified it and included `SageMakerTrainer` instead of the `Trainer` to enable model-parallelism. You can find the code [here](https://github.com/huggingface/transformers/tree/master/examples/text-classification).
```python
from transformers.sagemaker import SageMakerTrainingArguments as TrainingArguments, SageMakerTrainer as Trainer
```
```
from sagemaker.huggingface import HuggingFace
# hyperparameters, which are passed into the training job
hyperparameters={
'model_name_or_path':'roberta-large',
'task_name': 'mnli',
'per_device_train_batch_size': 16,
'per_device_eval_batch_size': 16,
'do_train': True,
'do_eval': True,
'do_predict': True,
'num_train_epochs': 2,
'output_dir':'/opt/ml/model',
'max_steps': 500,
}
# configuration for running training on smdistributed Model Parallel
mpi_options = {
"enabled" : True,
"processes_per_host" : 8,
}
smp_options = {
"enabled":True,
"parameters": {
"microbatches": 4,
"placement_strategy": "spread",
"pipeline": "interleaved",
"optimize": "speed",
"partitions": 4,
"ddp": True,
}
}
distribution={
"smdistributed": {"modelparallel": smp_options},
"mpi": mpi_options
}
# instance configurations
instance_type='ml.p3dn.24xlarge'
instance_count=1
volume_size=400
# metric definition to extract the results
metric_definitions=[
{'Name': 'train_runtime', 'Regex':"train_runtime.*=\D*(.*?)$"},
{'Name': 'train_samples_per_second', 'Regex': "train_samples_per_second.*=\D*(.*?)$"},
{'Name': 'epoch', 'Regex': "epoch.*=\D*(.*?)$"},
{'Name': 'f1', 'Regex': "f1.*=\D*(.*?)$"},
{'Name': 'exact_match', 'Regex': "exact_match.*=\D*(.*?)$"}]
# estimator
huggingface_estimator = HuggingFace(entry_point='run_glue.py',
source_dir='./scripts',
metric_definitions=metric_definitions,
instance_type=instance_type,
instance_count=instance_count,
volume_size=volume_size,
role=role,
transformers_version='4.4.2',
pytorch_version='1.6.0',
py_version='py36',
distribution= distribution,
hyperparameters = hyperparameters,
debugger_hook_config=False)
huggingface_estimator.hyperparameters()
# starting the train job with our uploaded datasets as input
huggingface_estimator.fit()
```
## Estimator Parameters
```
# container image used for training job
print(f"container image used for training job: \n{huggingface_estimator.image_uri}\n")
# s3 uri where the trained model is located
print(f"s3 uri where the trained model is located: \n{huggingface_estimator.model_data}\n")
# latest training job name for this estimator
print(f"latest training job name for this estimator: \n{huggingface_estimator.latest_training_job.name}\n")
# access the logs of the training job
huggingface_estimator.sagemaker_session.logs_for_job(huggingface_estimator.latest_training_job.name)
```
## Attach to old training job to an estimator
In Sagemaker you can attach an old training job to an estimator to continue training, get results etc..
```
from sagemaker.estimator import Estimator
# job which is going to be attached to the estimator
old_training_job_name=''
# attach old training job
huggingface_estimator_loaded = Estimator.attach(old_training_job_name)
# get model output s3 from training job
huggingface_estimator_loaded.model_data
```
# Adversarial attacks on ammonia using the pretrained models
In this notebook, we perform an adversarial attack on zeolites using the [SchNet NN potential](https://github.com/learningmatter-mit/NeuralForceField). We will be using the third generation of ammonia models, as shown in [our paper](https://arxiv.org/abs/2101.11588).
The utilities at `nff` will be used to perform the energy/force predictions. `nglview` will be used to visualize the generated trajectories. A few utility functions from this repo, `robust`, will be used as well. For the sake of generality, all steps for performing the adversarial attack are shown in this notebook.
```
import os
import sys
sys.path.append('..')
import robust as rb
import torch as ch
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
from nff.io import NeuralFF, AtomsBatch, EnsembleNFF
from nff.data import Dataset
from nff.train import load_model
from ase.io import Trajectory, read
import nglview as nv
```
## Loading the dataset, models and initial geometry
```
DEVICE = 2
dset = Dataset.from_file('../data/ammonia.pth.tar')
PATH = '../models/ammonia'
models = []
for model_name in sorted(os.listdir(PATH)):
m = NeuralFF.from_file(os.path.join(PATH, model_name), device=DEVICE).model
models.append(m)
ensemble = EnsembleNFF(models, device=DEVICE)
CUTOFF = 5
def get_atoms(props):
atoms = AtomsBatch(
positions=props['nxyz'][:, 1:],
numbers=props['nxyz'][:, 0],
cutoff=CUTOFF,
props={'energy': 0, 'energy_grad': []},
calculator=ensemble,
nbr_torch=False,
device=DEVICE,
)
_ = atoms.update_nbr_list()
return atoms
initial = get_atoms(dset[np.argmin(dset.props['energy'])])
```
## Defining the adversarial attack
The `Attacker` class allows one to perform an adversarial attack using an ensemble of SchNet models.
```
energy_dset = rb.PotentialDataset(
ch.zeros_like(dset.props['energy']),
dset.props['energy'],
ch.zeros_like(dset.props['energy']),
)
loss_fn = rb.loss.AdvLoss(
train=energy_dset,
temperature=20,
)
attacker = rb.schnet.Attacker(
initial,
ensemble,
loss_fn,
device=DEVICE,
)
results = attacker.attack()
```
## Post-processing the results: visualizing and plotting
After the adversarial attack is performed, we can now visualize it and post-process the results. We start by recalculating the variance in forces and energies.
```
df = pd.DataFrame(results)
df['forces_var'] = [f.var(-1).mean() for f in df['forces']]
df['energy_var'] = [e.var() for e in df['energy']]
df['energy_avg'] = [e.mean() for e in df['energy']]
```
Then, we reconstruct the trajectory of the adversarial attack using the values of `delta` along the attack.
```
newatoms = []
for transl in df.delta:
at = initial.copy()
at.translate(transl)
newatoms.append(at)
view = nv.show_asetraj(newatoms)
view.add_unitcell()
view
```
Finally, we can plot the trajectory of the adversarial attack based on the sampled properties shown before (see Fig. S10 of the paper). Notice how the geometry that maximizes the adversarial loss is not necessarily the one with highest energy.
```
fig, ax = plt.subplots(figsize=(4, 5))
ax.spines['right'].set_visible(True)
ax.plot(-df.loss, color='k')
ax.set_ylabel('Adversarial Loss')
COLOR_TAX = '#d62728'
tax = ax.twinx()
tax.plot(df.energy_avg, color=COLOR_TAX)
tax.set_yticklabels(
tax.get_yticks(),
color=COLOR_TAX
)
tax.set_ylabel('Mean energy (kcal/mol)', color=COLOR_TAX)
ax.set_xlabel('Attack step')
plt.show()
```
# Hello Object Detection
A very basic introduction to using object detection models with OpenVINO.
We use the [horizontal-text-detection-0001](https://docs.openvinotoolkit.org/latest/omz_models_model_horizontal_text_detection_0001.html) model from the [Open Model Zoo](https://github.com/openvinotoolkit/open_model_zoo/). It detects horizontal text in images and returns a blob of data in the shape of `[100, 5]`. Each detected text box is stored in the format `[x_min, y_min, x_max, y_max, conf]`.
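As a rough illustration with made-up values (not real model output), the `[100, 5]` blob can be parsed like this: unused slots are zero-filled, and each remaining row carries its confidence score in the last position.

```python
import numpy as np

# Dummy stand-in for the model's [100, 5] output blob (the values are invented).
boxes = np.zeros((100, 5), dtype=np.float32)
boxes[0] = [12, 40, 230, 80, 0.97]  # a made-up high-confidence text box
boxes[1] = [50, 10, 90, 30, 0.12]   # a made-up low-confidence detection

# Drop the all-zero filler rows, then keep detections above a threshold.
nonzero = boxes[~np.all(boxes == 0, axis=1)]
kept = nonzero[nonzero[:, -1] > 0.3]
print(kept.shape)  # (1, 5)
```

The same zero-row filtering appears again in the inference section below.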
## Imports
```
import cv2
import matplotlib.pyplot as plt
import numpy as np
from openvino.inference_engine import IECore
```
## Load the Model
```
ie = IECore()
net = ie.read_network(
model="model/horizontal-text-detection-0001.xml",
weights="model/horizontal-text-detection-0001.bin",
)
exec_net = ie.load_network(net, "CPU")
output_layer_ir = next(iter(exec_net.outputs))
input_layer_ir = next(iter(exec_net.input_info))
```
## Load an Image
```
# The text detection model expects images in BGR format
image = cv2.imread("data/intel_rnb.jpg")
# N,C,H,W = batch size, number of channels, height, width
N, C, H, W = net.input_info[input_layer_ir].tensor_desc.dims
# Resize image to meet network expected input sizes
resized_image = cv2.resize(image, (W, H))
# Reshape to network input shape
input_image = np.expand_dims(resized_image.transpose(2, 0, 1), 0)
plt.imshow(cv2.cvtColor(image, cv2.COLOR_BGR2RGB));
```
## Do Inference
```
result = exec_net.infer(inputs={input_layer_ir: input_image})
# Extract list of boxes from results
boxes = result["boxes"]
# Remove zero only boxes
boxes = boxes[~np.all(boxes == 0, axis=1)]
```
## Visualize Results
```
# For each detection, the description has the format: [x_min, y_min, x_max, y_max, conf]
# The image passed here is in BGR format with a changed width and height. To display it in the colors expected by matplotlib, we use the cvtColor function
def convert_result_to_image(bgr_image, resized_image, boxes, threshold=0.3, conf_labels=True):
# Helper function to multiply shape by ratio
def multiply_by_ratio(ratio_x, ratio_y, box):
return [
max(shape * ratio_y, 10) if idx % 2 else shape * ratio_x
for idx, shape in enumerate(box[:-1])
]
# Define colors for boxes and descriptions
colors = {"red": (255, 0, 0), "green": (0, 255, 0)}
# Fetch image shapes to calculate ratio
(real_y, real_x), (resized_y, resized_x) = bgr_image.shape[:2], resized_image.shape[:2]
ratio_x, ratio_y = real_x / resized_x, real_y / resized_y
# Convert base image from bgr to rgb format
rgb_image = cv2.cvtColor(bgr_image, cv2.COLOR_BGR2RGB)
# Iterate through non-zero boxes
for box in boxes:
# Pick confidence factor from last place in array
conf = box[-1]
if conf > threshold:
# Convert float to int and multiply position of each box by x and y ratio
(x_min, y_min, x_max, y_max) = map(int, multiply_by_ratio(ratio_x, ratio_y, box))
# Draw box based on position, parameters in rectangle function are: image, start_point, end_point, color, thickness
rgb_image = cv2.rectangle(rgb_image, (x_min, y_min), (x_max, y_max), colors["green"], 3)
# Add text to image based on position and confidence
# Parameters in text function are: image, text, bottomleft_corner_textfield, font, font_scale, color, thickness, line_type
if conf_labels:
rgb_image = cv2.putText(
rgb_image,
f"{conf:.2f}",
(x_min, y_min - 10),
cv2.FONT_HERSHEY_SIMPLEX,
0.8,
colors["red"],
1,
cv2.LINE_AA,
)
return rgb_image
plt.figure(figsize=(10, 6))
plt.axis("off")
plt.imshow(convert_result_to_image(image, resized_image, boxes, conf_labels=False));
```
# scikit-learn for NLP -- Part 2 Feature Engineering
If you are reading this, that means you are still alive. Welcome back to the reality of learning scikit-learn.
This tutorial focuses on feature engineering and covers more advanced topics in scikit-learn like feature extraction, building a pipeline, creating custom transformers, feature union, dimensionality reduction, and grid search. Feature engineering is a very important step in NLP and ML; it is not a trivial task to select good features. Therefore, we are spending a lot of time on it here.
Without further ado, let's start with loading a dataset again. This time, we will use a CSV file that has more than two columns: one column for labels and multiple columns for raw data/features. We are using a subset of the Yelp Review Data Challenge dataset. Just like the 20 News Group dataset, I converted this dataset to CSV.
There are 5 star ratings (shocking), and I extracted 500 reviews for each rating. This dataset is small because it's only intended for a quick demo; therefore, the performance of any classifier won't be too good (and this should really be a regression problem instead of classification, but to keep things easier, let's stick with classification). Other than extracting the text of each review, I also included other users' votes for each review, i.e. funny, useful and cool. For this tutorial, our task is to predict the star rating of each review.
```
import pandas as pd
dataset = pd.read_csv('yelp-review-subset.csv', header=0, delimiter=',', names=['stars', 'text', 'funny', 'useful', 'cool'])
# just checking the dataset
print('There are {0} star ratings, and {1} reviews'.format(len(dataset.stars.unique()), len(dataset)))
print(dataset.stars.value_counts())
```
With Pandas dataframe, it is very easy to select a subset of a dataframe by column names, simply pass in a list of column names. So we are going to split our data just like the previous tutorial.
```
from sklearn.model_selection import train_test_split
X_train, X_test, y_train, y_test = train_test_split(dataset[['text', 'funny', 'useful', 'cool']], dataset['stars'], train_size=0.8)
```
The difference now is that we have four columns in our raw data. Three of them are ``funny``, ``useful`` and ``cool`` which contain numeric values, which is perfect for scikit-learn as it expects values of features to be numeric or indexes. What we need to do is to extract features from the ``text`` column.
```
print(X_train.columns)
```
We can first do something very similar to the previous tutorial: we initialize a ``CountVectorizer`` object and then pass raw text data into its ``fit_transform()`` function to index the count of each word. Please note that you should not pass the whole ``X_train`` dataframe into the function, but only the ``text`` column, i.e. the ``X_train.text`` series (more or less like an array). Otherwise, it does not extract features from your ``text`` column, but simply indexes all the values in each column. Compare the shapes of the outputs for the two different inputs below.
```
from sklearn.feature_extraction.text import CountVectorizer
# initialize a CountVectorizer
cv = CountVectorizer()
# fit the raw data into the vectorizer and transform it into a series of arrays
X_train_counts = cv.fit_transform(X_train.text)
print(X_train_counts.shape)
# this is not what you want.
cv_test = CountVectorizer()
X_train_counts_test = cv_test.fit_transform(X_train)
print(X_train_counts_test.shape)
```
Now we have a problem, the ``X_train_counts`` vector only contains features from ``text``, but now what should we do if we want to include ``funny``, ``useful`` and ``cool`` vote counts as feature as well?
### 1. ``Pipeline`` and ``FeatureUnion``
To deal with that problem, we need to talk about [``Pipeline``](http://scikit-learn.org/stable/modules/generated/sklearn.pipeline.Pipeline.html) and [``FeatureUnion``](http://scikit-learn.org/stable/modules/generated/sklearn.pipeline.FeatureUnion.html). ``Pipeline`` lets us define a list of steps which consists of a list of transformers to extract features from data (including ``FeatureUnion``) and a final estimator (aka classifier). ``FeatureUnion`` basically concatenates results of multiple transformer objects. The following is a complete example of how to use these two together.
```
from sklearn.base import TransformerMixin, BaseEstimator
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.pipeline import FeatureUnion, Pipeline
from sklearn.linear_model import LogisticRegression
from sklearn.feature_extraction import DictVectorizer
from sklearn.metrics import classification_report
class ItemSelector(TransformerMixin, BaseEstimator):
"""This class allows you to select a subset of a dataframe based on given column name(s)."""
def __init__(self, keys):
self.keys = keys
def fit(self, x, y=None):
return self
def transform(self, dataframe):
return dataframe[self.keys]
class VotesToDictTransformer(TransformerMixin, BaseEstimator):
"""This tranformer converts the vote counts of each row into a dictionary."""
def fit(self, x, y=None):
return self
def transform(self, votes):
funny, useful, cool = votes['funny'], votes['useful'], votes['cool']
return [{'funny': binarize_number(f, 1), 'useful': binarize_number(u, 1), 'cool': binarize_number(c, 1)}
for f, u, c in zip(funny, useful, cool)]
def binarize_number(num, threshold):
return 0 if num < threshold else 1
pipeline = Pipeline([
# Use FeatureUnion to combine the features from text and votes
('union', FeatureUnion(
transformer_list=[
# Pipeline for getting BOW features from the texts
('bag-of-words', Pipeline([
('selector', ItemSelector(keys='text')),
('counts', CountVectorizer()),
])),
# Pipeline for getting vote counts as features
# the DictVectorizer object here indexes the values of the dictionaries
# passed down from the VotesToDictTransformer object.
('votes', Pipeline([
('selector', ItemSelector(keys=['funny', 'useful', 'cool'])),
('votes_to_dict', VotesToDictTransformer()),
('vectorizer', DictVectorizer()),
])),
],
# weight components in FeatureUnion
transformer_weights={
'bag-of-words': 1.0,
'votes': 0.5
},
)),
# Use a logistic regression classifier on the combined features
('clf', LogisticRegression()),
])
pipeline.fit(X_train, y_train)
predicted = pipeline.predict(X_test)
print(classification_report(predicted, y_test))
```
### 2. Custom Transformers
In the previous section, I defined two classes ``ItemSelector`` and ``VotesToDictTransformer``, and the commonality of these two is that they inherited the [``TransformerMixin``](http://scikit-learn.org/stable/modules/generated/sklearn.base.TransformerMixin.html) class. ``TransformerMixin`` is the base class of many built-in transformers and vectorizers in scikit-learn, e.g. ``CountVectorizer``, ``TfidfVectorizer``, ``TfidfTransformer``, ``DictVectorizer``, etc. We define the ``transform()`` function to manipulate the data in a more custom way. For example, ``ItemSelector`` returns a subset of dataframe based on given column names, and ``VotesToDictTransformer`` transforms a dataframe into a list of dictionaries.
To demonstrate how useful custom transformers are, let's define another one. Say we hypothesize that the sentiment of each review can be a strong feature for predicting the star rating. Then we would need a ``SentimentTransformer`` class.
To avoid spending time training our own sentiment classifier, we use the ``TextBlob`` package for its built-in [sentiment analysis feature](https://textblob.readthedocs.io/en/dev/quickstart.html#sentiment-analysis).
```
from textblob import TextBlob
class SentimentTransformer(TransformerMixin, BaseEstimator):
def fit(self, x, y=None):
return self
def transform(self, texts):
features = []
for text in texts:
blob = TextBlob(text)  # in Python 3 the texts are already str, so no .decode() is needed
features.append({'polarity': binarize_number(blob.sentiment.polarity, 0.5),
'subjectivity': binarize_number(blob.sentiment.subjectivity, 0.5)})
return features
```
Let's add that transformer to our existing pipeline, and see if the additional features help.
```
pipeline = Pipeline([
('union', FeatureUnion(
transformer_list=[
('bag-of-words', Pipeline([
('selector', ItemSelector(keys='text')),
('counts', CountVectorizer()),
])),
('votes', Pipeline([
('selector', ItemSelector(keys=['funny', 'useful', 'cool'])),
('votes_to_dict', VotesToDictTransformer()),
('vectorizer', DictVectorizer()),
])),
('sentiments', Pipeline([
('selector', ItemSelector(keys='text')),
('sentiment_transform', SentimentTransformer()),
('vectorizer', DictVectorizer()),
])),
],
# weight components in FeatureUnion
transformer_weights={
'bag-of-words': 1.0,
'votes': 0.5,
'sentiments': 1.0,
},
)),
# Use a logistic regression classifier on the combined features
('clf', LogisticRegression()),
])
pipeline.fit(X_train, y_train)
predicted = pipeline.predict(X_test)
print(classification_report(predicted, y_test))
```
### 3. Feature Reduction/Selection
Two major problems of using bag-of-words as features are (1) that it introduces noise; and (2) that it increases the dimensionality of feature space. When using bag-of-words, we simply throw in a bunch of words into the feature space and hope and pray that they work, because we don't know what words are most informative in a model. Other than handcrafting features and selecting what words to put into the feature space, we can also use the [``feature_selection``](http://scikit-learn.org/stable/modules/feature_selection.html) module to automatically select informative features and eliminate noise.
"``SelectFromModel`` is a meta-transformer that can be used along with any estimator that has a ``coef_`` or ``feature_importances_`` attribute after fitting. The features are considered unimportant and removed, if the corresponding ``coef_`` or ``feature_importances_`` values are below the provided ``threshold`` parameter." Basically, the idea is "to reduce the dimensionality of the data to use with another classifier, they can be used along with ``feature_selection.SelectFromModel`` to select the non-zero coefficients."
In the following example, we use ``LogisticRegression`` to perform feature elimination. With the "l1" penalty, ``C`` controls the sparsity: the smaller ``C``, the fewer features selected. With the "l2" penalty used below, coefficients are rarely exactly zero, so ``SelectFromModel`` instead keeps the features whose coefficient magnitudes exceed its ``threshold`` (the mean magnitude by default).
```
from sklearn.feature_selection import SelectFromModel
pipeline = Pipeline([
('union', FeatureUnion(
transformer_list=[
('bag-of-words', Pipeline([
('selector', ItemSelector(keys='text')),
('counts', CountVectorizer()),
])),
('votes', Pipeline([
('selector', ItemSelector(keys=['funny', 'useful', 'cool'])),
('votes_to_dict', VotesToDictTransformer()),
('vectorizer', DictVectorizer()),
])),
('sentiments', Pipeline([
('selector', ItemSelector(keys='text')),
('sentiment_transform', SentimentTransformer()),
('vectorizer', DictVectorizer()),
])),
],
# weight components in FeatureUnion
transformer_weights={
'bag-of-words': 1.0,
'votes': 0.5,
'sentiments': 1.0,
},
)),
# use SelectFromModel to select informative features
('feature_selection', SelectFromModel(LogisticRegression(C=0.5, penalty="l2"))),
# Use a logistic regression classifier on the combined features
('clf', LogisticRegression()),
])
pipeline.fit(X_train, y_train)
predicted = pipeline.predict(X_test)
print(classification_report(predicted, y_test))
```
### 4. Grid Search
Finally, many classifiers like logistic regression or SVMs have parameters to tweak to get optimal results, and it can be a pain in the neck to try every combination by hand. Grid search is an automated method that tries every combination and ranks the best ones. In scikit-learn, we use ``model_selection.GridSearchCV``. The bare minimum set of parameters for grid search is an ``estimator`` object and a list or dictionary of parameters. In our case, we are passing a pipeline and a dictionary (``Pipeline`` inherits from ``BaseEstimator``).
Parameters of the estimators in the pipeline can be accessed using the ``<estimator>__<parameter>`` syntax; that will be the key of the dictionary, and the value will be a list of values to experiment with. For example, we want to try combinations of different ``max_iter`` and ``C`` values, and the name of the ``LogisticRegression`` step in the pipeline is ``clf``. Therefore, the dictionary has two entries: ``clf__max_iter`` and ``clf__C``.
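As a side note, the same ``__`` naming reaches parameters of nested transformer steps too. Here is a minimal, self-contained sketch on a tiny made-up corpus (not the Yelp data), where ``vect__ngram_range`` tunes a ``CountVectorizer`` step and ``clf__C`` tunes the classifier:

```python
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import GridSearchCV
from sklearn.pipeline import Pipeline

# Tiny invented corpus, just to make the example runnable.
texts = ["good food", "bad food", "great service", "terrible service"] * 5
labels = [1, 0, 1, 0] * 5

pipe = Pipeline([
    ('vect', CountVectorizer()),
    ('clf', LogisticRegression()),
])

# Each key is <step name>__<parameter name>; nesting chains with more '__'.
params = {
    'vect__ngram_range': [(1, 1), (1, 2)],
    'clf__C': [1.0, 0.1],
}
grid = GridSearchCV(pipe, param_grid=params, cv=2)
grid.fit(texts, labels)
print(sorted(grid.best_params_))
```

The same scheme extends through ``FeatureUnion``, e.g. ``union__bag-of-words__counts__ngram_range`` in the pipeline below.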
```
from sklearn.model_selection import GridSearchCV
pipeline = Pipeline([
('union', FeatureUnion(
transformer_list=[
('bag-of-words', Pipeline([
('selector', ItemSelector(keys='text')),
('counts', CountVectorizer()),
])),
('votes', Pipeline([
('selector', ItemSelector(keys=['funny', 'useful', 'cool'])),
('votes_to_dict', VotesToDictTransformer()),
('vectorizer', DictVectorizer()),
])),
('sentiments', Pipeline([
('selector', ItemSelector(keys='text')),
('sentiment_transform', SentimentTransformer()),
('vectorizer', DictVectorizer()),
])),
],
# weight components in FeatureUnion
transformer_weights={
'bag-of-words': 1.0,
'votes': 0.5,
'sentiments': 1.0,
},
)),
# Use a logistic regression classifier on the combined features
('clf', LogisticRegression()),
])
params = dict(clf__max_iter=[50, 100, 150], clf__C=[1.0, 0.5, 0.1])
grid_search = GridSearchCV(pipeline, param_grid=params)
grid_search.fit(X_train, y_train)
print(grid_search.best_params_)
```
According to the output above, when C=0.1 and max_iter=50, we get the best results. To validate the results, let's use these values to train and test a model.
```
pipeline = Pipeline([
('union', FeatureUnion(
transformer_list=[
('bag-of-words', Pipeline([
('selector', ItemSelector(keys='text')),
('counts', CountVectorizer()),
])),
('votes', Pipeline([
('selector', ItemSelector(keys=['funny', 'useful', 'cool'])),
('votes_to_dict', VotesToDictTransformer()),
('vectorizer', DictVectorizer()),
])),
('sentiments', Pipeline([
('selector', ItemSelector(keys='text')),
('sentiment_transform', SentimentTransformer()),
('vectorizer', DictVectorizer()),
])),
],
# weight components in FeatureUnion
transformer_weights={
'bag-of-words': 1.0,
'votes': 0.5,
'sentiments': 1.0,
},
)),
# Use a logistic regression classifier on the combined features
('clf', LogisticRegression(C=0.1, max_iter=50)),
])
pipeline.fit(X_train, y_train)
predicted = pipeline.predict(X_test)
print(classification_report(predicted, y_test))
```
As we can see, with almost everything the same as the pipeline in Section 2, changing the value of ``C`` and that of ``max_iter`` improves our results (the default value of ``C`` is 1.0 and that of ``max_iter`` is 100).
### 5. Conclusion
This is just a simple overview of performing feature engineering in scikit-learn, and there are many different models that you can try. For example, with ``GridSearchCV``, you can even try comparing the performance of different classifiers. This two-part tutorial is to help you get familiar and comfortable with scikit-learn and its main modules. Please check its documentation if you need more clarification on how to do certain things, since scikit-learn is one of the best documented libraries that I know of!
##### Copyright 2020 The Cirq Developers
```
#@title Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
```
# Quantum Computing Service reservation utility
<table class="tfo-notebook-buttons" align="left">
<td>
<a target="_blank" href="https://quantumai.google/cirq/tutorials/google/reservations"><img src="https://quantumai.google/site-assets/images/buttons/quantumai_logo_1x.png" />View on QuantumAI</a>
</td>
<td>
<a target="_blank" href="https://colab.research.google.com/github/quantumlib/Cirq/blob/master/docs/tutorials/google/reservations.ipynb"><img src="https://quantumai.google/site-assets/images/buttons/colab_logo_1x.png" />Run in Google Colab</a>
</td>
<td>
<a target="_blank" href="https://github.com/quantumlib/Cirq/blob/master/docs/tutorials/google/reservations.ipynb"><img src="https://quantumai.google/site-assets/images/buttons/github_logo_1x.png" />View source on GitHub</a>
</td>
<td>
<a href="https://storage.googleapis.com/tensorflow_docs/Cirq/docs/tutorials/google/reservations.ipynb"><img src="https://quantumai.google/site-assets/images/buttons/download_icon_1x.png" />Download notebook</a>
</td>
</table>
```
try:
import cirq
except ImportError:
print("installing cirq...")
!pip install --quiet cirq
print("installed cirq.")
```
This Colab notebook provides canned interactions with Cirq to manage a project's reservations on the Quantum Computing Service.
For information on how to download a Colab notebook from GitHub, see [these instructions](colab.ipynb). You can also [view this file](https://github.com/quantumlib/Cirq/blob/master/docs/tutorials/google/reservations.ipynb) or download its [raw content](https://raw.githubusercontent.com/quantumlib/Cirq/master/docs/tutorials/google/reservations.ipynb) from GitHub.
## Configure
Choose the project to manage, authenticate, and install the necessary tools.
**Note: If you are running on Jupyter notebook, please don't forget to change `project_id`, `processor_id` and `time_zone` to your preferences**
```
def authenticate_user():
"""Runs the user through the Colab OAuth process.
Checks for Google Application Default Credentials and runs interactive login
if the notebook is executed in Colab. In case the notebook is executed in Jupyter notebook
or other IPython runtimes, no interactive login is provided, it is assumed that the
`GOOGLE_APPLICATION_CREDENTIALS` env var is set or `gcloud auth application-default login`
was executed already.
For more information on using Application Default Credentials see
https://cloud.google.com/docs/authentication/production
"""
in_colab = False
try:
from IPython import get_ipython
in_colab = 'google.colab' in str(get_ipython())
except:
# Notebook is not executed within IPython. Assuming external authentication.
return
if in_colab:
from google.colab import auth
print("Getting OAuth2 credentials.")
print("Press enter after entering the verification code.")
auth.authenticate_user(clear_output=False)
print("Authentication complete.")
else:
print("Notebook is not executed with Colab, assuming Application Default Credentials are setup.")
authenticate_user()
import cirq.google
from cirq.google.engine.client.quantum_v1alpha1.gapic import enums
# Exception types referenced in the error handling below
from google.auth.exceptions import DefaultCredentialsError
from google.api_core.exceptions import PermissionDenied
# The Google Cloud Project id to use.
project_id = '' #@param {type:"string"}
# The processor to view.
processor_id = 'mcgee' #@param ['rainbow', 'pacific', 'mcgee']
# The local time zone.
time_zone = 'America/Los_Angeles' #@param {type:"string"}
import os  # imported up front so the else branch below can use os.environ too
if project_id == '':
if 'GOOGLE_CLOUD_PROJECT' not in os.environ:
raise Exception("Please setup project_id in this cell or set the `GOOGLE_CLOUD_PROJECT` env var to your project id.")
project_id = os.environ['GOOGLE_CLOUD_PROJECT']
else:
os.environ['GOOGLE_CLOUD_PROJECT'] = project_id
# Create an Engine object to use, providing the project id and the args
try:
engine = cirq.google.get_engine()
engine.list_processors()
print(f"Successful authentication using project {project_id}!")
except DefaultCredentialsError as err:
print("Could not authenticate to Google Quantum Computing Service.")
print(" Tips: If you are using Colab: make sure the previous cell was executed successfully.")
print(" If this notebook is not in Colab (e.g. Jupyter notebook), make sure gcloud is installed and `gcloud auth application-default login` was executed.")
print()
print("Error message:")
print(err)
except PermissionDenied as err:
print(f"While you are authenticated to Google Cloud it seems the project '{project_id}' does not exist or does not have the Quantum Engine API enabled.")
print("Error message:")
print(err)
processor = engine.get_processor(processor_id)
import datetime
import pytz
tz = pytz.timezone(time_zone)
now = tz.localize(datetime.datetime.now())
def date_string(timestamp):
if timestamp.seconds < 0:
return 'Beginning of time'
if timestamp.seconds > 4771848621:
return 'End of time'
time = datetime.datetime.fromtimestamp(timestamp.seconds).astimezone(tz)
return time.strftime('%Y-%m-%d %H:%M:%S')
def delta(start, end):
if start.seconds < 0 or end.seconds > 5000000000:
return "∞"
return "{} hrs".format((end.seconds - start.seconds) / (60 * 60))
def time_slot_string(time_slot):
start = date_string(time_slot.start_time)
end = date_string(time_slot.end_time)
slot_type = cirq.google.engine.client.quantum_v1alpha1.types.QuantumTimeSlot.TimeSlotType.Name(time_slot.slot_type)
slot_string = "{} to {} ({}) - {}".format(start, end, delta(time_slot.start_time, time_slot.end_time), slot_type)
if time_slot.HasField('reservation_config'):
return "{} for {}".format(slot_string, time_slot.reservation_config.project_id)
if time_slot.HasField('maintenance_config'):
return "{} {} - {}".format(slot_string, time_slot.maintenance_config.title, time_slot.maintenance_config.description)
return slot_string
def reservation_string(reservation):
start = date_string(reservation.start_time)
end = date_string(reservation.end_time)
id = reservation.name.split('/reservations/')[1]
reservation_string = "{} to {} ({}) - {}".format(start, end, delta(reservation.start_time, reservation.end_time), id)
if len(reservation.whitelisted_users) > 0:
return "{} {}".format(reservation_string, reservation.whitelisted_users)
return reservation_string
print("============================")
print("Runtime setup completed")
```
## Checkout the upcoming schedule
```
#@title
schedule = processor.get_schedule()
for s in schedule:
print(time_slot_string(s))
```
## Find available time
```
#@title
schedule = processor.get_schedule()
unallocated = list(filter(lambda t: t.slot_type == enums.QuantumTimeSlot.TimeSlotType.UNALLOCATED, schedule))
for s in unallocated:
print(time_slot_string(s))
if len(unallocated) == 0:
print("No available time slots")
```
## List upcoming reservations for the project
```
#@title
reservations = list(processor.list_reservations())
for r in reservations:
print(reservation_string(r))
if not reservations:
print(f"No reservations for project {project_id}")
```
## Reserve time
```
#@markdown Create a new reservation for the given start date and time with the given duration in hours.
start_date_picker = "2020-11-06" #@param {type:"date"}
start_time = "15" #@param [0,1,2,3,4,5,6,7,8,9,10,11,12,13,14,15,16,17,18,19,20,21,22,23]
hours = 1 #@param {type:"integer"}
#@markdown Comma-separated email addresses of any additional users to explicitly whitelist for the reservation.
whitelist_user_emails = "" #@param {type:"string"}
start_time_naive = datetime.datetime.strptime("{} {}".format(start_date_picker, start_time), '%Y-%m-%d %H')
start_time = tz.localize(start_time_naive)
end_time = start_time + datetime.timedelta(hours=hours)
print(reservation_string(processor.create_reservation(start_time=start_time,
end_time=end_time,
whitelisted_users=[e.strip() for e in whitelist_user_emails.split(',') if e])))
```
## Update reservation time
```
#@markdown Update the details of an existing reservation. _You can find the `reservation_id` by listing your reservations or checking the output when you create a new reservation._
reservation_id = "" #@param {type:"string"}
start_date_picker = "2020-03-27" #@param {type:"date"}
start_time = "12" #@param [0,1,2,3,4,5,6,7,8,9,10,11,12,13,14,15,16,17,18,19,20,21,22,23]
hours = 2#@param {type:"integer"}
start_time_naive = datetime.datetime.strptime("{} {}".format(start_date_picker, start_time), '%Y-%m-%d %H')
start_time = tz.localize(start_time_naive)
end_time = start_time + datetime.timedelta(hours=hours)
print(reservation_string(processor.update_reservation(reservation_id=reservation_id,
start_time=start_time,
end_time=end_time)))
```
## Update reservation whitelisted users
```
#@markdown Update the details of an existing reservation. _You can find the `reservation_id` by listing your reservations or checking the output when you create a new reservation._
reservation_id = "" #@param {type:"string"}
#@markdown Comma-separated email addresses of any additional users to explicitly whitelist for the reservation.
whitelisted_user_emails = "" #@param {type:"string"}
print(reservation_string(processor.update_reservation(reservation_id=reservation_id,
whitelisted_users=[e.strip() for e in whitelisted_user_emails.split(',') if e])))
```
## Remove reservation
```
#@markdown Delete a specific reservation as long as it is outside the schedule freeze. Inside the schedule freeze period reservations are cancelled instead.
reservation_id = "" #@param {type:"string"}
processor.remove_reservation(reservation_id)
```
# Sequence to Sequence (seq2seq) Recurrent Neural Network (RNN) for Time Series Prediction
The goal of this project is to get users to try and experiment with the seq2seq neural network architecture by solving simple toy problems in signal prediction. Seq2seq architectures are normally used for more sophisticated purposes than signal prediction, such as language modeling, but this project is an interesting tutorial for working up to more complicated applications.
This project contains 4 exercises of gradually increasing difficulty. I take for granted that the reader already has at least a basic knowledge of RNNs and how they can be shaped into an encoder and a decoder of the simplest form (without attention). To learn more about RNNs in TensorFlow, you may want to visit this other project of mine: https://github.com/guillaume-chevalier/LSTM-Human-Activity-Recognition
This project is a series of examples I first built in French, but I haven't had the time to regenerate all the charts with proper English text. I built this project for the practical part of the third hour of a "master class" conference that I gave at the WAQ (Web At Quebec) in March 2017:
https://webaquebec.org/classes-de-maitre/deep-learning-avec-tensorflow
You can find the French, original, version of this project in the French Git branch: https://github.com/guillaume-chevalier/seq2seq-signal-prediction/tree/francais
## How to use this ".ipynb" Python notebook ?
Although a ".py" Python version of this tutorial is available in the repository, it is more convenient to run the code inside the notebook; the exported ".py" code feels a bit raw.
To run the notebook, you must have Jupyter Notebook or IPython Notebook installed. To open the notebook, run `jupyter notebook` or `ipython notebook` in the command line from the folder containing the downloaded notebook (or a parent folder). The notebook application (IDE) will then open in your browser as a local server, and you will be able to open the `.ipynb` notebook file and run code cells with `CTRL+ENTER` and `SHIFT+ENTER`; it is also possible to restart the kernel and run all cells at once from the menus. Note that this is convenient because the IDE can be hosted on a cloud server with a lot of GPU power while you code through the browser.
## Exercises
Note that the dataset changes depending on the exercise. Most of the time, you will have to edit the neural network's training parameters to succeed in an exercise, but at a certain point, changes in the architecture itself will be asked for and required. The datasets used for these exercises are found in `datasets.py`.
### Exercise 1
In theory, it is possible to create a perfect prediction of the signal for this exercise. The neural network's parameters have been set to acceptable values for a first training, so you may pass this exercise by running the code without changing anything. Your first training might yield predictions like these (in yellow), but it is possible to do a lot better with proper parameter adjustments:
<img src="images/E1.png" />
Note: the neural network sees only what is to the left of the chart and is trained to predict what is at the right (predictions in yellow).
We predict 2 time series at once here, and they are tied together, which means our neural network processes multidimensional data. A simple example would be to receive the past values of multiple stock market symbols as input in order to predict the future values of all those symbols, whose values evolve together in time. That is what we will do in exercise 4.
### Exercise 2
Here, rather than 2 parallel signals to predict, we have only one, for simplicity. HOWEVER, this signal is a superposition of two sine waves of varying wavelength and offset (restricted to particular minimum and maximum wavelengths).
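As an illustration of what such a signal looks like (this is a sketch of the idea, not the actual generator from `datasets.py`; the function name `superposed_signal` is my own):

```python
import numpy as np

def superposed_signal(seq_length=30, min_wavelength=10, max_wavelength=20, seed=0):
    """Toy 1-D signal: the sum of two sine waves whose wavelengths and
    phase offsets are drawn at random within fixed limits."""
    rng = np.random.default_rng(seed)
    t = np.arange(seq_length)
    signal = np.zeros(seq_length)
    for _ in range(2):
        wavelength = rng.uniform(min_wavelength, max_wavelength)
        offset = rng.uniform(0, 2 * np.pi)
        signal += np.sin(2 * np.pi * t / wavelength + offset)
    return signal

sig = superposed_signal()
```

Since each component sine is bounded by 1 in absolute value, the superposition stays within [-2, 2].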
In order to finish this exercise properly, you will need to edit the neural network's hyperparameters. As an example, here is what can be achieved as a prediction with these better (but still imperfect) training hyperparameters:
- `nb_iters = 2500`
- `batch_size = 50`
- `hidden_dim = 35`
<img src="images/E2.png" />
Here are predictions achieved with a bigger neural network of 3 stacked recurrent cells, each 500 hidden units wide:
<img src="images/E2_1.png" />
<img src="images/E2_2.png" />
<img src="images/E2_3.png" />
<img src="images/E2_4.png" />
Note that it would be possible to obtain better results with a smaller neural network, provided better training hyperparameters, a longer training, the addition of dropout, and so on.
### Exercise 3
This exercise is similar to the previous one, except that the input data given to the encoder is noisy. The expected output is not noisy. This makes the task a bit harder. Here is a good example of what a training example (and a prediction) could now look like:
<img src="images/E3.png" />
The neural network is therefore driven to denoise the signal in order to infer its smooth future values. Here are some examples of better predictions on this version of the dataset:
<img src="images/E3_1.png" />
<img src="images/E3_2.png" />
<img src="images/E3_3.png" />
<img src="images/E3_4.png" />
As I said for exercise 2, it would be possible here too to obtain better results. Note that you could also have been asked to reconstruct the denoised signal from the noisy input rather than predict its future values. That would have been called a "denoising autoencoder"; this type of architecture is also useful for data compression, such as when manipulating images.
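As a sketch of how noisy encoder inputs like these can be produced (again, `datasets.py` contains the real generator; this is only an illustration), Gaussian noise is simply added to a clean signal:

```python
import numpy as np

rng = np.random.default_rng(42)
t = np.linspace(0, 4 * np.pi, 60)

clean = np.sin(t)                                       # smooth signal the decoder must predict
noisy = clean + rng.normal(0.0, 0.2, size=clean.shape)  # noisy version fed to the encoder
```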
### Exercise 4
This exercise is much harder than the previous ones and is offered more as a suggestion: predicting the future value of Bitcoin's price. We have some daily market data on Bitcoin's value, namely BTC/USD and BTC/EUR. This is not enough to build a good predictor; data precise to the minute, or even the second, would be more interesting. Here is a prediction made on actual future values: the neural network was not trained on the future values shown here, so this is a legitimate prediction from a model trained well enough on the task:
<img src="images/E5.png" />
Disclaimer: this prediction of the future values happened to be really good, and you should not expect predictions to always be that good with so little data (side note: the other prediction charts in this project are all "average", except this one). Your task for this exercise is to plug the model into more valuable financial data in order to make more accurate predictions. Remember that I provide the dataset code in "datasets.py", but it should be replaced in order to predict Bitcoin accurately.
It would be possible to enrich the input dimensions of your model beyond (BTC/USD and BTC/EUR). As an example, you could create additional input dimensions/streams containing weather data and more financial data, such as the S&P 500, the Dow Jones, and so on. Other, more creative inputs could be sine waves (or other wave shapes such as saw waves or triangles, or two signals for `cos` and `sin`) representing the fluctuation of minutes, hours, days, weeks, months, years, moon cycles, and so on. This could be combined with a Twitter sentiment analysis of the word "Bitcoin" in tweets in order to have another input signal that is more human-based and abstract. Some libraries exist to convert text to a sentiment value, and there is also the end-to-end neural network approach (but that would be a far more complicated setup). It is also interesting to know where Bitcoin is most used: http://images.google.com/search?tbm=isch&q=bitcoin+heatmap+world
With all the above-mentioned examples, it would be possible to have all of this as input features at every time step: (BTC/USD, BTC/EUR, Dow_Jones, SP_500, hours, days, weeks, months, years, moons, meteo_USA, meteo_EUROPE, Twitter_sentiment). Finally, there could be these two output features, or more: (BTC/USD, BTC/EUR).
This prediction concept can apply to many things, such as weather prediction and other types of short-term and mid-term statistical predictions.
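For the cyclic time features mentioned above (hours, days, moon cycles, and so on), a common trick is to encode each periodic quantity as a `(sin, cos)` pair so that the end of a cycle lands right next to its beginning. A minimal sketch (the function name `cyclic_features` is my own, not something from `datasets.py`):

```python
import numpy as np

def cyclic_features(values, period):
    """Encode a periodic quantity (hour of day, day of week, ...) as a
    (sin, cos) pair so that e.g. hour 23 and hour 0 end up close together."""
    angle = 2 * np.pi * np.asarray(values, dtype=float) / period
    return np.stack([np.sin(angle), np.cos(angle)], axis=-1)

hours = cyclic_features([0, 6, 12, 23], period=24)
```

Each encoded row lies on the unit circle, which keeps the feature's magnitude constant through time.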
## To change which exercise you are doing, change the value of the following "exercise" variable:
```
exercise = 1  # Possible values: 1, 2, 3, or 4.

from datasets import generate_x_y_data_v1, generate_x_y_data_v2, generate_x_y_data_v3, generate_x_y_data_v4

# We choose which data function to use below, depending on the exercise.
if exercise == 1:
    generate_x_y_data = generate_x_y_data_v1
if exercise == 2:
    generate_x_y_data = generate_x_y_data_v2
if exercise == 3:
    generate_x_y_data = generate_x_y_data_v3
if exercise == 4:
    generate_x_y_data = generate_x_y_data_v4

import tensorflow as tf  # Version 1.0 or 0.12
import numpy as np
import matplotlib.pyplot as plt

# This is for the notebook to generate inline matplotlib
# charts rather than to open a new window every time:
%matplotlib inline
```
## Neural network's hyperparameters
```
sample_x, sample_y = generate_x_y_data(isTrain=True, batch_size=3)
print("Dimensions of the dataset for 3 X and 3 Y training examples : ")
print(sample_x.shape)
print(sample_y.shape)
print("(seq_length, batch_size, output_dim)")
# Internal neural network parameters
seq_length = sample_x.shape[0]  # Time series will have the same past and future (to be predicted) length.
batch_size = 5 # Low value used for live demo purposes - 100 and 1000 would be possible too, crank that up!
output_dim = input_dim = sample_x.shape[-1] # Output dimension (e.g.: multiple signals at once, tied in time)
hidden_dim = 12 # Count of hidden neurons in the recurrent units.
layers_stacked_count = 2 # Number of stacked recurrent cells, on the neural depth axis.
# Optimizer:
learning_rate = 0.007 # Small lr helps not to diverge during training.
nb_iters = 150 # How many times we perform a training step (therefore how many times we show a batch).
lr_decay = 0.92 # default: 0.9 . Simulated annealing.
momentum = 0.5 # default: 0.0 . Momentum technique in weights update
lambda_l2_reg = 0.003 # L2 regularization of weights - avoids overfitting
```
## Definition of the seq2seq neural architecture
<img src="https://www.tensorflow.org/images/basic_seq2seq.png" />
Compared to what we see in the image, our neural network deals with signals rather than letters. Also, we don't have the feedback mechanism yet.
```
# Backward compatibility for TensorFlow version 0.12:
try:
    tf.nn.seq2seq = tf.contrib.legacy_seq2seq
    tf.nn.rnn_cell = tf.contrib.rnn
    tf.nn.rnn_cell.GRUCell = tf.contrib.rnn.GRUCell
    print("TensorFlow version: 1.0 (or more)")
except:
    print("TensorFlow version: 0.12")

tf.reset_default_graph()
# sess.close()
sess = tf.InteractiveSession()

with tf.variable_scope('Seq2seq'):

    # Encoder: inputs
    enc_inp = [
        tf.placeholder(tf.float32, shape=(None, input_dim), name="inp_{}".format(t))
        for t in range(seq_length)
    ]

    # Decoder: expected outputs
    expected_sparse_output = [
        tf.placeholder(tf.float32, shape=(None, output_dim), name="expected_sparse_output_{}".format(t))
        for t in range(seq_length)
    ]

    # Give a "GO" token to the decoder.
    # Note: we might want to fill the decoder with zeros or its own feedback rather than with "+ enc_inp[:-1]"
    dec_inp = [tf.zeros_like(enc_inp[0], dtype=np.float32, name="GO")] + enc_inp[:-1]

    # Create `layers_stacked_count` stacked RNNs (GRU cells here).
    cells = []
    for i in range(layers_stacked_count):
        with tf.variable_scope('RNN_{}'.format(i)):
            cells.append(tf.nn.rnn_cell.GRUCell(hidden_dim))
            # cells.append(tf.nn.rnn_cell.BasicLSTMCell(...))
    cell = tf.nn.rnn_cell.MultiRNNCell(cells)

    # Here, the encoder and the decoder use the same cell. HOWEVER,
    # the weights aren't shared between the encoder and the decoder: two
    # sets of weights are created under the hood, according to that function's definition.
    dec_outputs, dec_memory = tf.nn.seq2seq.basic_rnn_seq2seq(
        enc_inp,
        dec_inp,
        cell
    )

    # For reshaping the output dimensions of the seq2seq RNN:
    w_out = tf.Variable(tf.random_normal([hidden_dim, output_dim]))
    b_out = tf.Variable(tf.random_normal([output_dim]))

    # Final outputs: with a linear rescaling to allow possibly large and unrestricted output values.
    output_scale_factor = tf.Variable(1.0, name="Output_ScaleFactor")
    reshaped_outputs = [output_scale_factor * (tf.matmul(i, w_out) + b_out) for i in dec_outputs]

# Training loss and optimizer
with tf.variable_scope('Loss', reuse=tf.AUTO_REUSE):
    # L2 loss
    output_loss = 0
    for _y, _Y in zip(reshaped_outputs, expected_sparse_output):
        output_loss += tf.reduce_mean(tf.nn.l2_loss(_y - _Y))

    # L2 regularization (to avoid overfitting and to have a better generalization capacity)
    reg_loss = 0
    for tf_var in tf.trainable_variables():
        if not ("Bias" in tf_var.name or "Output_" in tf_var.name):
            reg_loss += tf.reduce_mean(tf.nn.l2_loss(tf_var))

    loss = output_loss + lambda_l2_reg * reg_loss

with tf.variable_scope('Optimizer', reuse=tf.AUTO_REUSE):
    optimizer = tf.train.RMSPropOptimizer(learning_rate, decay=lr_decay, momentum=momentum)
    train_op = optimizer.minimize(loss)
```
## Training of the neural net
```
def train_batch(batch_size):
    """
    Training step that optimizes the weights,
    provided some batch_size X and Y examples from the dataset.
    """
    X, Y = generate_x_y_data(isTrain=True, batch_size=batch_size)
    feed_dict = {enc_inp[t]: X[t] for t in range(len(enc_inp))}
    feed_dict.update({expected_sparse_output[t]: Y[t] for t in range(len(expected_sparse_output))})
    _, loss_t = sess.run([train_op, loss], feed_dict)
    return loss_t

def test_batch(batch_size):
    """
    Test step: does NOT optimize. Weights are frozen by not
    running the train_op in sess.run.
    """
    X, Y = generate_x_y_data(isTrain=False, batch_size=batch_size)
    feed_dict = {enc_inp[t]: X[t] for t in range(len(enc_inp))}
    feed_dict.update({expected_sparse_output[t]: Y[t] for t in range(len(expected_sparse_output))})
    loss_t = sess.run([loss], feed_dict)
    return loss_t[0]

# Training
train_losses = []
test_losses = []

sess.run(tf.global_variables_initializer())
for t in range(nb_iters + 1):
    train_loss = train_batch(batch_size)
    train_losses.append(train_loss)

    if t % 10 == 0:
        # Test
        test_loss = test_batch(batch_size)
        test_losses.append(test_loss)
        print("Step {}/{}, train loss: {}, \tTEST loss: {}".format(t, nb_iters, train_loss, test_loss))

print("End. Train loss: {}, \tTEST loss: {}".format(train_loss, test_loss))
# Plot loss over time:
plt.figure(figsize=(12, 6))
plt.plot(
np.array(range(0, len(test_losses)))/float(len(test_losses)-1)*(len(train_losses)-1),
np.log(test_losses),
label="Test loss"
)
plt.plot(
np.log(train_losses),
label="Train loss"
)
plt.title("Training errors over time (on a logarithmic scale)")
plt.xlabel('Iteration')
plt.ylabel('log(Loss)')
plt.legend(loc='best')
plt.show()
# Test
nb_predictions = 5
print("Let's visualize {} predictions with our signals:".format(nb_predictions))
X, Y = generate_x_y_data(isTrain=False, batch_size=nb_predictions)
feed_dict = {enc_inp[t]: X[t] for t in range(seq_length)}
outputs = np.array(sess.run([reshaped_outputs], feed_dict)[0])
for j in range(nb_predictions):
    plt.figure(figsize=(12, 3))

    for k in range(output_dim):
        past = X[:, j, k]
        expected = Y[:, j, k]
        pred = outputs[:, j, k]

        label1 = "Seen (past) values" if k == 0 else "_nolegend_"
        label2 = "True future values" if k == 0 else "_nolegend_"
        label3 = "Predictions" if k == 0 else "_nolegend_"
        plt.plot(range(len(past)), past, "o--b", label=label1)
        plt.plot(range(len(past), len(expected) + len(past)), expected, "x--b", label=label2)
        plt.plot(range(len(past), len(pred) + len(past)), pred, "o--y", label=label3)

    plt.legend(loc='best')
    plt.title("Predictions vs. true values")
    plt.show()
print("Reminder: the signal can contain many dimensions at once.")
print("In that case, signals have the same color.")
print("In reality, we could imagine multiple stock market symbols evolving,")
print("tied in time together and seen at once by the neural network.")
```
## Author
Guillaume Chevalier
- https://ca.linkedin.com/in/chevalierg
- https://twitter.com/guillaume_che
- https://github.com/guillaume-chevalier/
## License
This project is free to use according to the [MIT License](https://github.com/guillaume-chevalier/seq2seq-signal-prediction/blob/master/LICENSE) as long as you cite me and the License (read the License for more details). You can cite me by pointing to the following link:
- https://github.com/guillaume-chevalier/seq2seq-signal-prediction
## Converting notebook to a readme file
```
# Let's convert this notebook to a README for the GitHub project's title page:
!jupyter nbconvert --to markdown seq2seq.ipynb
!mv seq2seq.md README.md
```
```
import glob
import matplotlib.pyplot as pyplot
import math
import numpy
import os
import pandas
import tensorflow
from PIL import Image
from tensorflow.keras import datasets, layers, models
from tensorflow.keras.callbacks import ModelCheckpoint
from tensorflow.keras.layers import *
from tensorflow.keras.models import Model
from tensorflow.keras.models import Sequential
from tensorflow.keras.optimizers import *
from tensorflow.keras.utils import to_categorical
def get_captcha_label(file_path):
    """
    Precondition: CAPTCHA images were generated using the
    'generator.py' script found in this project folder

    Args:
        file_path (str): the path to the CAPTCHA image

    Returns:
        the 'label' for each CAPTCHA, denoted by the
        string in the file name before the '_' character

    Example: '9876_image.png' -> '9876'
    """
    try:
        path, file_name = os.path.split(file_path)
        file_name, extension = os.path.splitext(file_name)
        label, _ = file_name.split("_")
        return label
    except Exception as e:
        print('error while parsing %s. %s' % (file_path, e))
        return None  # dropped later by DataFrame.dropna()

def create_captcha_dataframe(captcha_images_directory):
    """
    Args:
        captcha_images_directory (str): the full file path to the folder
            where the CAPTCHA images were generated

    Returns:
        a pandas.DataFrame object storing each CAPTCHA file name along with its label
    """
    files = glob.glob(os.path.join(captcha_images_directory, "*.png"))
    attributes = list(map(get_captcha_label, files))

    data_frame = pandas.DataFrame(attributes)
    data_frame['file'] = files
    data_frame.columns = ['label', 'file']
    data_frame = data_frame.dropna()

    return data_frame

def shuffle_and_split_data(data_frame):
    """
    Shuffle and split the data into 3 sets: training, validation, and testing.

    Args:
        data_frame (pandas.DataFrame): the data to shuffle and split

    Returns:
        3 numpy.ndarray objects -> (train_indices, validation_indices, test_indices),
        each holding the index positions of rows in the pandas.DataFrame
    """
    shuffled_indices = numpy.random.permutation(len(data_frame))
    train_up_to = int(len(data_frame) * 0.7)

    train_indices = shuffled_indices[:train_up_to]
    test_indices = shuffled_indices[train_up_to:]

    # Further split up the training data.
    train_up_to = int(train_up_to * 0.7)
    train_indices, validation_indices = train_indices[:train_up_to], train_indices[train_up_to:]

    return train_indices, validation_indices, test_indices
```
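To make the split proportions concrete, here is the same shuffling-and-splitting arithmetic as in `shuffle_and_split_data` applied to a hypothetical 100-example dataset, using plain NumPy indices: the 0.7 / 0.7 scheme yields a 49 / 21 / 30 split for training, validation, and testing.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 100  # hypothetical dataset size

shuffled = rng.permutation(n)
train_up_to = int(n * 0.7)               # 70 examples for training + validation
train, test = shuffled[:train_up_to], shuffled[train_up_to:]

# Further split up the training data, exactly as in shuffle_and_split_data():
train_up_to = int(train_up_to * 0.7)     # 49 examples remain for training
train, validation = train[:train_up_to], train[train_up_to:]
```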
---
**'relu'** stands for **'Rectified Linear Unit'**, the most commonly used activation function for convolutional neural networks.
**'softmax'** is another activation function used for classifying data.
Activation functions are analogous to the 'firing' of neurons in biological neural networks.
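As a quick sketch of the two activations mentioned above (plain NumPy, not the Keras implementations used by the model code below):

```python
import numpy as np

def relu(x):
    """Rectified Linear Unit: pass positives through, zero out negatives."""
    return np.maximum(0.0, x)

def softmax(x):
    """Turn raw scores into a probability distribution that sums to 1."""
    shifted = x - np.max(x)  # subtract the max for numerical stability
    exps = np.exp(shifted)
    return exps / exps.sum()

scores = softmax(np.array([1.0, 2.0, 3.0]))
```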
**Layers**:
- Convolutional layer: applies a filter to the CAPTCHA image to extract features (characters and/or digits) from the image
- Pooling layer: immediately follows a convolutional layer and used to downscale the image after each filter is applied
- Flattening layer: converts the CAPTCHA image represented as a 3D tensor (array) to a 1D tensor
- Dense layer: a fully connected layer; here it maps the extracted features to the output values
- Reshape layer: used to restructure the output of the neural network
```
def create_untrained_model(image_height=100, image_width=100, image_channels=3,
                           character_length=4, categories=10):
    input_layer = tensorflow.keras.Input(shape=(image_height, image_width, image_channels))

    hidden_layers = layers.Conv2D(32, 3, activation='relu')(input_layer)
    hidden_layers = layers.MaxPooling2D((2, 2))(hidden_layers)
    hidden_layers = layers.Conv2D(64, 3, activation='relu')(hidden_layers)
    hidden_layers = layers.MaxPooling2D((2, 2))(hidden_layers)
    hidden_layers = layers.Conv2D(64, 3, activation='relu')(hidden_layers)
    hidden_layers = layers.MaxPooling2D((2, 2))(hidden_layers)
    hidden_layers = layers.Flatten()(hidden_layers)
    hidden_layers = layers.Dense(1024, activation='relu')(hidden_layers)
    hidden_layers = layers.Dense(character_length * categories, activation='softmax')(hidden_layers)
    hidden_layers = layers.Reshape((character_length, categories))(hidden_layers)

    model = models.Model(inputs=input_layer, outputs=hidden_layers)
    model.compile(optimizer='adam',
                  loss='categorical_crossentropy',
                  metrics=['accuracy'])

    return model

def create_untrained_alternative_model(image_height=100, image_width=100, image_channels=3,
                                       character_length=4, categories=10):
    model = Sequential()
    model.add(Input(shape=(image_height, image_width, image_channels)))
    model.add(Conv2D(filters=16, kernel_size=(3, 3), padding='same', activation='relu'))
    model.add(MaxPool2D(pool_size=(2, 2), strides=(2, 2)))
    model.add(Conv2D(filters=32, kernel_size=(3, 3), padding='same', activation='relu'))
    model.add(MaxPool2D(pool_size=(2, 2), strides=(2, 2)))
    model.add(Flatten())
    model.add(Dense(units=1024, activation='relu'))
    model.add(Dropout(0.5))
    model.add(Dense(character_length * categories, activation='softmax'))
    model.add(Reshape((character_length, categories)))
    model.compile(optimizer='adam',
                  loss='categorical_crossentropy',
                  metrics=['accuracy'])

    return model

def create_untrained_vgg16_model(image_height=100, image_width=100, image_channels=3,
                                 character_length=4, categories=10):
    model = Sequential()
    model.add(Input(shape=(image_height, image_width, image_channels)))
    model.add(Conv2D(64, (3, 3), padding='same', activation='relu'))
    model.add(Conv2D(64, (3, 3), padding='same', activation='relu'))
    model.add(MaxPooling2D(data_format="channels_last", pool_size=(2, 2)))
    model.add(Conv2D(128, (3, 3), padding='same', activation='relu'))
    model.add(Conv2D(128, (3, 3), padding='same', activation='relu'))
    model.add(MaxPooling2D(data_format="channels_last", pool_size=(2, 2)))
    model.add(Conv2D(256, (3, 3), padding='same', activation='relu'))
    model.add(Conv2D(256, (3, 3), padding='same', activation='relu'))
    model.add(Conv2D(256, (3, 3), padding='same', activation='relu'))
    model.add(MaxPooling2D(data_format="channels_last", pool_size=(2, 2)))
    model.add(Conv2D(512, (3, 3), padding='same', activation='relu'))
    model.add(Conv2D(512, (3, 3), padding='same', activation='relu'))
    model.add(Conv2D(512, (3, 3), padding='same', activation='relu'))
    model.add(MaxPooling2D(data_format="channels_last", pool_size=(2, 2)))
    model.add(Conv2D(512, (3, 3), padding='same', activation='relu'))
    model.add(Conv2D(512, (3, 3), padding='same', activation='relu'))
    model.add(Conv2D(512, (3, 3), padding='same', activation='relu'))
    # model.add(MaxPooling2D(data_format="channels_last", pool_size=(2, 2)))
    model.add(Flatten())
    model.add(Dense(256, activation='relu'))
    model.add(Dropout(0.5))
    model.add(Dense(256, activation='relu'))
    model.add(Dropout(0.5))
    model.add(Dense(character_length * categories))
    model.add(Activation('softmax'))
    model.add(Reshape((character_length, categories)))

    optimizer = RMSprop(lr=1e-4)
    model.compile(loss='categorical_crossentropy', optimizer=optimizer, metrics=['accuracy'])

    return model
def get_captcha_generator(data_frame, indices, for_training, batch_size=16,
                          image_height=100, image_width=100, categories=10):
    """
    Args:
        data_frame (pandas.DataFrame): contains the file paths to the CAPTCHA images and their labels
        indices (numpy.ndarray): training, testing, or validation index positions of the DataFrame
        for_training (bool): 'True' for a training or validation set, 'False' for a test set
        batch_size (int): number of data instances to return when iterated upon
        image_height (int): height in pixels to resize the CAPTCHA image to
        image_width (int): width in pixels to resize the CAPTCHA image to
        categories (int): number of possible values for each position in the CAPTCHA image

    Returns:
        a generator object for producing CAPTCHA images along with their labels

    Yields:
        a pair of numpy arrays -> (CAPTCHA images, labels)
    """
    images, labels = [], []

    while True:
        for i in indices:
            captcha = data_frame.iloc[i]
            file, label = captcha['file'], captcha['label']

            captcha_image = Image.open(file)
            captcha_image = captcha_image.resize((image_height, image_width))
            captcha_image = numpy.array(captcha_image) / 255.0

            images.append(numpy.array(captcha_image))
            labels.append(numpy.array([to_categorical(int(character), categories) for character in label]))

            if len(images) >= batch_size:
                yield numpy.array(images), numpy.array(labels)
                images, labels = [], []

        if not for_training:
            break
# TODO: add parameters to satisfy what is required for 'get_captcha_generator' function
def train_model(model, data_frame, train_indices, validation_indices,
                training_batch_size, validation_batch_size, training_epochs,
                image_height, image_width, categories):
    training_set_generator = get_captcha_generator(data_frame,
                                                   train_indices,
                                                   for_training=True,
                                                   batch_size=training_batch_size,
                                                   image_height=image_height,
                                                   image_width=image_width,
                                                   categories=categories)
    validation_set_generator = get_captcha_generator(data_frame,
                                                     validation_indices,
                                                     for_training=True,
                                                     batch_size=validation_batch_size,
                                                     image_height=image_height,
                                                     image_width=image_width,
                                                     categories=categories)
    callbacks = [
        ModelCheckpoint("./model_checkpoint", monitor='val_loss')
    ]
    history = model.fit(training_set_generator,
                        steps_per_epoch=len(train_indices) // training_batch_size,
                        epochs=training_epochs,
                        callbacks=callbacks,
                        validation_data=validation_set_generator,
                        validation_steps=len(validation_indices) // validation_batch_size)

    return history
def plot_training_history(history):
    figure, axes = pyplot.subplots(1, 2, figsize=(20, 5))

    axes[0].plot(history.history['accuracy'], label='Training accuracy')
    axes[0].plot(history.history['val_accuracy'], label='Validation accuracy')
    axes[0].set_xlabel('Epochs')
    axes[0].legend()

    axes[1].plot(history.history['loss'], label='Training loss')
    axes[1].plot(history.history['val_loss'], label='Validation loss')
    axes[1].set_xlabel('Epochs')
    axes[1].legend()
# TODO: add parameters to satisfy what is required for 'get_captcha_generator' function
def get_prediction_results(model, data_frame, test_indices, testing_batch_size,
                           image_height, image_width, categories):
    testing_set_generator = get_captcha_generator(data_frame,
                                                  test_indices,
                                                  for_training=True,
                                                  batch_size=testing_batch_size,
                                                  image_height=image_height,
                                                  image_width=image_width,
                                                  categories=categories)
    captcha_images, captcha_text = next(testing_set_generator)
    predictions = model.predict_on_batch(captcha_images)

    true_values = tensorflow.math.argmax(captcha_text, axis=-1)
    predictions = tensorflow.math.argmax(predictions, axis=-1)

    return captcha_images, predictions, true_values
def display_predictions_from_model(captcha_images, predictions, true_values, total_to_display=30, columns=5):
    """
    Display a plot showing the results of the model's predictions.
    Each subplot contains the CAPTCHA image, the model's prediction, and the true value (label).

    Args:
        captcha_images (numpy.ndarray): the batch of CAPTCHA images
        predictions (EagerTensor): the prediction values made by the model
        true_values (EagerTensor): the labels associated with the CAPTCHA images
        total_to_display (int): total number of subplots
        columns (int): number of columns in the plot
    """
    random_indices = numpy.random.permutation(total_to_display)
    rows = math.ceil(total_to_display / columns)
    figure, axes = pyplot.subplots(rows, columns, figsize=(15, 20))

    for i, image_index in enumerate(random_indices):
        result = axes.flat[i]
        result.imshow(captcha_images[image_index])
        result.set_title('prediction: {}'.format(
            ''.join(map(str, predictions[image_index].numpy()))))
        result.set_xlabel('true value: {}'.format(
            ''.join(map(str, true_values[image_index].numpy()))))
        result.set_xticks([])
        result.set_yticks([])
```
# Basic Tensors
```
import torch
import numpy as np
data = np.array([1, 2, 3])
type(data)
torch.Tensor(data)
torch.tensor(data) #factory function that matches the input data type
torch.as_tensor(data)
torch.from_numpy(data)
torch.eye(2)
torch.zeros(2,2)
torch.ones(2,2)
torch.rand(2,2)
```
# Creating PyTorch Tensors -- Best Options
```
data = np.array([1, 2, 3])
```
Note that apart from the Tensor class constructor invocation, the rest are all factory methods.
```
t1 = torch.Tensor(data)
t2 = torch.tensor(data)
t3 = torch.as_tensor(data)
t4 = torch.from_numpy(data)
print(t1.dtype)
print(t2.dtype)
print(t3.dtype)
print(t4.dtype)
torch.get_default_dtype()
torch.tensor(np.array([1, 2, 3]), dtype=torch.float64)
```
## Data Copying and Sharing in Tensors
```
data = np.array([1, 2, 3])
t1 = torch.Tensor(data)
t2 = torch.tensor(data)
t3 = torch.as_tensor(data)
t4 = torch.from_numpy(data)
# Now we modify the original numpy array
data[0] = 0
data[1] = 0
data[2] = 0
print(t1)
print(t2)
print(t3)
print(t4)
```
The tensors `t3` and `t4` are also modified! It turns out that `torch.Tensor` and `torch.tensor` __copy__ the data (i.e. they create a new object in memory). On the other hand, `as_tensor` and `from_numpy` __share__ memory with the original data.
`torch.Tensor`
* copies data
* uses the global default data type
__`torch.tensor`__† <----- Preferred
* copies data
* dynamic; infers the data type
__`torch.as_tensor`__† <----- Preferred
* shares memory
* accepts _any_ array-like object as input
`torch.from_numpy`
* shares memory
* accepts only NumPy arrays
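As a loose mnemonic, the same copy-versus-share split exists in NumPy itself: `np.array` copies its input, while `np.asarray` shares memory with an existing array:

```python
import numpy as np

data = np.array([1, 2, 3])
copied = np.array(data)    # copies, like torch.Tensor / torch.tensor
shared = np.asarray(data)  # shares memory, like torch.as_tensor / torch.from_numpy

data[0] = 99  # only `shared` sees this change
```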
## Flatten, Reshape and Squeeze
We can categorize high-level tensor operations into four categories:
- Reshaping operations
- Element-wise operations
- Reduction operations
- Access operations
```
t = torch.tensor([
[1,1,1,1],
[2,2,2,2],
[3,3,3,3]
], dtype=torch.float32)
t.size()
t.shape
len(t.shape)
```
To get the number of scalar components of the tensor, we perform the following operation:
```
torch.tensor(t.shape).prod()
t.numel() # Short for Number of Elements
t.reshape(6, 2)
t.reshape(12, 1)
t.reshape(1, 12)
t.reshape(2, 2, 3)
t.reshape(-1) # -1 says that reshape method will figure out the value based on attributes of t
```
## Squeeze
`torch.squeeze(input, dim=None, *, out=None) → Tensor`
Returns a tensor with all the dimensions of input of size 1 removed.
For example, if input is of shape: `A×1×B×C×1×D` then the out tensor will be of shape: `A×B×C×D`.
When dim is given, a squeeze operation is done only in the given dimension.
If input is of shape: `A×1×B` , squeeze(input, 0) leaves the tensor unchanged, but squeeze(input, 1) will squeeze the tensor to the shape `A×B` .
```
>>> x = torch.zeros(2, 1, 2, 1, 2)
>>> x.size()
torch.Size([2, 1, 2, 1, 2])
>>> y = torch.squeeze(x)
>>> y.size()
torch.Size([2, 2, 2])
>>> y = torch.squeeze(x, 0)
>>> y.size()
torch.Size([2, 1, 2, 1, 2])
>>> y = torch.squeeze(x, 1)
>>> y.size()
torch.Size([2, 2, 1, 2])
```
```
t.reshape(1, 12) # Note: Double brackets
t.reshape(1, 12).squeeze()
```
`torch.unsqueeze(input, dim) → Tensor`
Returns a new tensor with a dimension of size one inserted at the specified position. The returned tensor shares the same underlying data with this tensor.
A `dim` value within the range `[-input.dim() - 1, input.dim() + 1)` can be used.
Negative dim will correspond to unsqueeze() applied at `dim = dim + input.dim() + 1`.
```
>>> x = torch.tensor([1, 2, 3, 4])
>>> torch.unsqueeze(x, 0)
tensor([[ 1, 2, 3, 4]])
>>> torch.unsqueeze(x, 1)
tensor([[ 1],
[ 2],
[ 3],
[ 4]])
```
```
t.reshape(1, 12).squeeze().unsqueeze(dim=0)  # (1, 12) -> (12,) -> (1, 12)
t.reshape(1, 12).squeeze().unsqueeze(dim = 1) # (1, 12) -> (12, 1)
t.unsqueeze(dim=2)
print(t.shape)
print(t.unsqueeze(dim=2).shape)
print(t.unsqueeze(dim=1).shape)
t.unsqueeze(dim=1)
t
```
## Flatten
`torch.flatten(input, start_dim=0, end_dim=-1) → Tensor`
Flattens a contiguous range of dims in a tensor.
```
>>> t = torch.tensor([[[1, 2],
[3, 4]],
[[5, 6],
[7, 8]]])
>>> torch.flatten(t)
tensor([1, 2, 3, 4, 5, 6, 7, 8])
>>> torch.flatten(t, start_dim=1)
tensor([[1, 2, 3, 4],
[5, 6, 7, 8]])
```
```
def flatten(t):
    t = t.reshape(1, -1)
    t = t.squeeze()
    return t
flatten(t)
t.reshape(-1)
t.flatten()
```
## Concat
```
t1 = torch.tensor([1, 2])
t2 = torch.tensor([3, 4])
torch.cat((t1, t2), dim=0)
t1 = t1.unsqueeze(dim = 0)
t2 = t2.unsqueeze(dim = 0)
torch.cat((t1, t2), dim=0)
torch.cat((t1, t2), dim=1)
```
### Example: Batch image input for CNN
```
t1 = torch.ones(4, 4)
t2 = torch.ones(4, 4) * 2
t3 = torch.ones(4, 4) * 3
print(t1)
print(t2)
print(t3)
batch = torch.stack((t1, t2, t3))
print(batch.shape)
batch # Rank 3 tensor that contains 3 4x4 images.
batch = batch.reshape(3, 1, 4, 4) # Batch, Channel, Height, Width
batch
# Lets check out the tensor via some indexing.
print("First Image : \n" , batch[0])
print("First Color Channel : \n", batch[0][0])
print("First row of pixels in the first color channels: \n", batch[0][0][0])
print("First pixel value in the first row of the first color channel of the first image: \n", batch[0][0][0][0])
```
#### Now we will flatten the image across each channel.
```
batch.flatten(start_dim=1).shape
batch.flatten(start_dim=1) # start_dim tells us which axis to start with, in order to flatten.
```
## Element-wise Operation
An element-wise operation is an operation between two tensors that operates on corresponding elements within the respective tensors; the correspondence is determined by the indices.
The tensors also need to have the same shape.
```
t1 = torch.tensor([
[1, 2],
[3, 4]
], dtype=torch.float32)
t2 = torch.tensor([
[9, 8],
[7, 6]
], dtype=torch.float32)
t1[0]
t1[0][0]
t1 + t2
```
#### But in the case of a lower rank tensor, **Broadcasting** happens.
Broadcasting is what allows us to add scalars (and, more generally, lower-rank tensors) to higher-dimensional tensors.
We can see what the broadcast scalar value looks like using NumPy's `broadcast_to()` function:
```
np.broadcast_to(2, t1.shape)
array([[2, 2],
[2, 2]])
```
This means the scalar value is transformed into a rank-2 tensor just like t1, and just like that, the shapes match and the element-wise rule of having the same shape is back in play. This is all under the hood of course.
Even though these two tensors have differing shapes, the element-wise operation is possible, and broadcasting is what makes it possible. The lower-rank tensor t2 will be transformed via broadcasting to match the shape of the higher-rank tensor t1, and the element-wise operation will be performed as usual.
The concept of broadcasting is the key to understanding how this operation will be carried out. As before, we can check the broadcast transformation using the broadcast_to() numpy function.
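Before looking at the rank-1 case, here is a small standalone NumPy sketch (with arrays mirroring the shapes involved) showing that `broadcast_to()` reveals exactly what shape the lower-rank operand takes on:

```python
import numpy as np

a = np.ones((2, 2), dtype=np.float32)     # a rank-2 tensor
b = np.array([2., 4.], dtype=np.float32)  # a rank-1 tensor

# broadcast_to shows the shape b takes on during the element-wise op:
# its single row is repeated along the new first axis
b_broadcast = np.broadcast_to(b, a.shape)
print(b_broadcast)  # [[2. 4.] [2. 4.]]

# the element-wise sum is identical either way
print(a + b)            # [[3. 5.] [3. 5.]]
print(a + b_broadcast)  # [[3. 5.] [3. 5.]]
```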
```
t1 = torch.tensor([
[1, 1],
[1, 1]
], dtype=torch.float32)
t2 = torch.tensor([2, 4], dtype=torch.float32)
t1 + t2
t3 = torch.tensor([
[0, 5, 7],
[6, 0, 7],
[0, 8, 0]
], dtype=torch.float32)
t3.eq(0)
t3.ge(3)
t3.gt(5)
```
### Element-wise operations using functions
```
t3.abs()
t3.sqrt()
t3.neg()
t3.neg().abs()
```
## Reduction operations
A reduction operation on a tensor is an operation that reduces the number of elements contained within the tensor. Tensors give us the ability to manage our data.
- Reshaping operations gave us the ability to position our elements along particular axes.
- Element-wise operations allow us to perform operations on elements between two tensors.
- Reduction operations allow us to perform operations on elements within a single tensor.
```
t = torch.tensor([
[0, 1, 0],
[2, 0, 2],
[0, 3, 0]
], dtype=torch.float32)
t.sum()
t.numel()
t.sum().numel()
```
Checking the number of elements in the original tensor against the result of the sum() call, we can see that, indeed, the tensor returned by the call to sum() contains fewer elements than the original.
Since the number of elements has been reduced by the operation, we can conclude that the sum() method is a reduction operation.
```
t.prod()
t.mean()
t.std()
```
### Do reduction operations always reduce to a tensor with a single element?
The answer is no!
In fact, we often reduce specific axes at a time. This process is important. It’s just like we saw with reshaping when we aimed to flatten the image tensors within a batch while still maintaining the batch axis.
```
t = torch.tensor([
[1,1,1,1],
[2,2,2,2],
[3,3,3,3]
], dtype=torch.float32)
t.sum(dim=0)
```
Surprise! Element-wise operations are in play here.
When we sum across the first axis, we are taking the summation of all the elements of the first axis. To do this, we must utilize element-wise addition.
```
> t[0]
tensor([1., 1., 1., 1.])
> t[1]
tensor([2., 2., 2., 2.])
> t[2]
tensor([3., 3., 3., 3.])
> t[0] + t[1] + t[2]
tensor([6., 6., 6., 6.])
```
```
t.sum(dim=1)
```
The second axis in this tensor contains numbers that come in groups of four. Since we have three groups of four numbers, we get three sums.
```
> t[0].sum()
tensor(4.)
> t[1].sum()
tensor(8.)
> t[2].sum()
tensor(12.)
> t.sum(dim=1)
tensor([ 4., 8., 12.])
```
The specification of `dim=k` can be read as: all elements whose indices differ only on axis `k` are aggregated together.
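The same rule can be checked with NumPy, which follows the identical convention: summing with `axis=k` aggregates exactly those elements whose indices differ only on axis `k`.

```python
import numpy as np

t = np.array([[1, 1, 1, 1],
              [2, 2, 2, 2],
              [3, 3, 3, 3]], dtype=np.float32)

# axis=0: elements that differ only in their row index are summed together
print(t.sum(axis=0))  # [6. 6. 6. 6.]

# axis=1: elements that differ only in their column index are summed together
print(t.sum(axis=1))  # [ 4.  8. 12.]
```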
### Argmax Tensor Reduction Operation
Argmax returns the index location of the maximum value inside a tensor.
```
t = torch.tensor([
[1,0,0,2],
[0,3,3,0],
[4,0,0,5]
], dtype=torch.float32)
t.max()
```
The first piece of code confirms for us that the max is indeed 5, but the call to the argmax() method tells us that the 5 is sitting at index 11. What’s happening here?
We’ll have a look at the flattened output for this tensor. If we don’t specify an axis to the argmax() method, it returns the index location of the max value from the flattened tensor, which in this case is indeed 11.
```
t.argmax()
t.max(dim = 0)
t.max(dim = 1)
```
For the first axis, the max values are 4, 3, 3, and 5. These values are determined by taking the element-wise maximum across the arrays running along the first axis.
For each of these maximum values, the argmax() method tells us the index along the first axis where the value lives.
The 4 lives at index two of the first axis.
The first 3 lives at index one of the first axis.
The second 3 lives at index one of the first axis.
The 5 lives at index two of the first axis.
For the second axis, the max values are 2, 3, and 5. These values are determined by taking the maximum inside each array of the first axis. We have three groups of four, which gives us 3 maximum values.
The argmax values here tell us the index inside each respective array where the max value lives.
In practice, we often use the argmax() function on a network’s output prediction tensor, to determine which category has the highest prediction value.
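As a quick sketch of that use case (the class scores below are made up; NumPy stands in for the tensor library):

```python
import numpy as np

# one row of class scores per example in a batch (hypothetical values)
predictions = np.array([[0.1, 0.7, 0.2],
                        [0.8, 0.1, 0.1],
                        [0.2, 0.2, 0.6]])

# argmax along the class axis picks the highest-scoring category per example
predicted_classes = predictions.argmax(axis=1)
print(predicted_classes)  # [1 0 2]
```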
### Accessing elements inside Tensors
```
t = torch.tensor([
[1,2,3],
[4,5,6],
[7,8,9]
], dtype=torch.float32)
t.mean()
t.mean().item()
t.mean(dim=0).tolist()
t.mean(dim=0).numpy()
```
When we compute the mean across the first axis, multiple values are returned, and we can access the numeric values by transforming the output tensor into a Python list or a NumPy array.
##### Copyright 2019 The TensorFlow Authors.
```
#@title Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
```
# Get started with [TensorBoard.dev](https://tensorboard.dev)
<table class="tfo-notebook-buttons" align="left">
<td>
<a target="_blank" href="https://tensorflow.google.cn/tensorboard/tbdev_getting_started">
<img src="https://tensorflow.google.cn/images/tf_logo_32px.png" />
    View on tensorflow.google.cn</a>
</td>
<td>
<a target="_blank" href="https://colab.research.google.com/github/tensorflow/docs-l10n/blob/master/site/zh-cn/tensorboard/tbdev_getting_started.ipynb">
<img src="https://tensorflow.google.cn/images/colab_logo_32px.png" />
    Run in Google Colab</a>
</td>
<td>
<a target="_blank" href="https://github.com/tensorflow/docs-l10n/blob/master/site/zh-cn/tensorboard/tbdev_getting_started.ipynb">
<img src="https://tensorflow.google.cn/images/GitHub-Mark-32px.png" />
    View source on GitHub</a>
</td>
<td>
    <a href="https://storage.googleapis.com/tensorflow_docs/docs-l10n/site/zh-cn/tensorboard/tbdev_getting_started.ipynb"><img src="https://tensorflow.google.cn/images/download_logo_32px.png" />Download notebook</a>
</td>
</table>
[TensorBoard.dev](https://tensorboard.dev) provides a hosted [TensorBoard](https://tensorflow.google.cn/tensorboard) experience that lets you upload and share your ML experiment results with everyone.
This notebook trains a simple model and demonstrates how to upload the logs to TensorBoard.dev.
### Setup and imports
```
try:
  # %tensorflow_version only exists in Colab.
  %tensorflow_version 2.x
except Exception:
  pass
!pip install -U tensorboard >piplog 2>&1
import tensorflow as tf
import datetime
```
### Train a simple model and create TensorBoard logs
```
mnist = tf.keras.datasets.mnist
(x_train, y_train),(x_test, y_test) = mnist.load_data()
x_train, x_test = x_train / 255.0, x_test / 255.0
def create_model():
  return tf.keras.models.Sequential([
    tf.keras.layers.Flatten(input_shape=(28, 28)),
    tf.keras.layers.Dense(512, activation='relu'),
    tf.keras.layers.Dropout(0.2),
    tf.keras.layers.Dense(10, activation='softmax')
  ])
model = create_model()
model.compile(optimizer='adam',
loss='sparse_categorical_crossentropy',
metrics=['accuracy'])
log_dir="logs/fit/" + datetime.datetime.now().strftime("%Y%m%d-%H%M%S")
tensorboard_callback = tf.keras.callbacks.TensorBoard(log_dir=log_dir, histogram_freq=1)
model.fit(x=x_train,
y=y_train,
epochs=5,
validation_data=(x_test, y_test),
callbacks=[tensorboard_callback])
```
### Upload to TensorBoard.dev
Uploading your TensorBoard logs will give you a link that can be shared with anyone. Note that uploaded TensorBoards are public, so do not upload sensitive data.
The uploader will keep running until it is stopped, so that it can read new data from the directory as training proceeds.
```
!tensorboard dev upload --logdir ./logs
```
Note that every experiment you upload gets a unique experiment ID. Even if you re-upload from the same directory, you will get a new experiment ID. You can view the list of your uploaded experiments with the `list` command.
```
!tensorboard dev list
```
To delete an uploaded experiment, use the `delete` command and specify the appropriate experiment ID.
```
# You must replace YOUR_EXPERIMENT_ID with a value from the output of the
# tensorboard `list` or `upload` command above. For example:
# `tensorboard dev delete --experiment_id pQpJNh00RG2Lf1zOe9BrQA`
## !tensorboard dev delete --experiment_id YOUR_EXPERIMENT_ID_HERE
```
```
%matplotlib inline
import matplotlib.pyplot as plt
import sys
#For Mark Kamuda
#sys.path.append("/home/ubuntu/Notebooks/annsa/")
#For Sam Dotson
sys.path.append("/home/samgdotson/Research/annsa")
from scipy.interpolate import griddata
import annsa as an
import numpy as np
```
#### Define neural network
```
from cvt_oct_load_templates import *
```
### Load normalized templates, set isotope list and shielding settings
```
spectral_templates=load_templates(normalization='normalarea')
isotope_list=an.isotopes[:-3]
```
## Define simulation function
```
def simulate_shielded_template_dataset(isotope_list,
                                       spectral_template_settings,
                                       integration_times,
                                       signal_to_backgrounds,
                                       calibrations):
    all_source_spectra = []
    all_keys = []
    LLD = 10
    background_cps = 85.
    total_spectra = 0
    random_settings = False
    for isotope in isotope_list:
        for spectral_template_setting in spectral_template_settings:
            for integration_time in integration_times:
                for signal_to_background in signal_to_backgrounds:
                    for calibration in calibrations:
                        isotopes_per_setting = 1
                        # Simulate extra unshielded spectra to avoid training set imbalance
                        if spectral_template_setting == 'noshield':
                            isotopes_per_setting *= len(spectral_template_settings) - 1
                        for _ in range(isotopes_per_setting):
                            # I125 emits a maximum gamma-ray energy of 30 keV.
                            # This will either be completely attenuated by any shielding,
                            # or produce an insignificant Compton continuum.
                            # For this reason shielded I125 is removed as a class.
                            if isotope == 'I125' and spectral_template_setting != 'noshield':
                                continue
                            # Simulate source
                            if random_settings:
                                calibration = np.random.uniform(calibrations[0], calibrations[-1])
                                signal_to_background = 10**np.random.uniform(np.log10(signal_to_backgrounds[0]), np.log10(signal_to_backgrounds[-1]))
                                integration_time = 10**np.random.uniform(np.log10(integration_times[0]), np.log10(integration_times[-1]))
                            source_template = spectral_templates[spectral_template_setting][isotope]
                            source_template = griddata(range(1024), source_template, calibration*np.arange(1024), method='cubic', fill_value=0.0)
                            source_template[0:LLD] = 0
                            source_template[source_template < 0] = 0
                            source_template /= np.sum(source_template)
                            source_template *= integration_time*background_cps*signal_to_background
                            background_template = spectral_templates['background']['chicago']
                            background_template = griddata(range(1024), background_template, calibration*np.arange(1024), method='cubic', fill_value=0.0)
                            background_template[0:LLD] = 0
                            background_template[background_template < 0] = 0
                            background_template /= np.sum(background_template)
                            background_template *= integration_time*background_cps
                            """
                            flag = 0
                            while np.sum(np.isnan(sampled_spectrum)) != 0:
                                sampled_spectrum = an.sample_spectrum(source_template,
                                    generate_random_counts_on_detector(integration_time))
                                flag += 1
                                if flag == 1:
                                    print('spectral template ' + spectral_template_setting + ' contains NaN')
                                    break
                            """
                            all_source_spectra.append(source_template + background_template)
                            total_spectra += 1
                            isotope_key = isotope
                            if spectral_template_setting != 'noshield':
                                isotope_key += '_shielded'
                            all_keys.append(isotope_key)
            print('\x1b[2K\r', end='')
            print('Isotope %s, template %s, %s total spectra simulated' % (isotope,
                                                                           spectral_template_setting,
                                                                           total_spectra), end='')
    return np.array(all_source_spectra), np.array(all_keys)
```
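To make the gain-shift step above easier to follow in isolation, here is a sketch of the recalibration using `np.interp` in place of the cubic `griddata` call (a linear-interpolation approximation of the method used above, with a toy spectrum rather than a real template):

```python
import numpy as np

def recalibrate(template, gain, lld=10):
    """Resample a spectrum onto a gain-scaled channel axis (linear-interp sketch)."""
    channels = np.arange(len(template))
    shifted = np.interp(gain * channels, channels, template, left=0.0, right=0.0)
    shifted[:lld] = 0             # zero out channels below the low-level discriminator
    shifted[shifted < 0] = 0      # clip any undershoot
    return shifted / np.sum(shifted)  # renormalize to unit area

toy_spectrum = np.exp(-np.arange(1024) / 200.0)  # toy falling continuum
rebinned = recalibrate(toy_spectrum, gain=1.19)
print(rebinned.sum())  # ~1.0 after renormalization
```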
## Final CVT-OCT dataset divisions
```
spectral_template_settings=['noshield',
'aluminum40pct',
'aluminum60pct',
'aluminum80pct',
'iron40pct',
'iron60pct',
'iron80pct',
'lead40pct',
'lead60pct',
'lead80pct']
integration_time_division=6
signal_to_background_division=6
calibration_division=7
integration_times=np.logspace(np.log10(10),np.log10(3600),integration_time_division)
signal_to_backgrounds=np.logspace(np.log10(0.25),np.log10(4),signal_to_background_division)
calibrations=np.linspace(1.19*0.9,1.19*1.1,calibration_division)
print(integration_times)
print(signal_to_backgrounds)
print(calibrations)
training_data,training_keys=simulate_shielded_template_dataset(isotope_list,
spectral_template_settings,
integration_times,
signal_to_backgrounds,
calibrations)
print(training_data.shape)
np.save('FINAL_template_training_data.npy',training_data)
np.save('FINAL_template_training_keys.npy',training_keys)
```
## CVT-OCT Hyperparameter training dataset divisions
```
spectral_template_settings=['noshield',
'aluminum40pct',
'aluminum80pct',
'iron40pct',
'iron80pct',
'lead40pct',
'lead80pct']
integration_time_division=4
signal_to_background_division=4
calibration_division=4
integration_times=np.logspace(np.log10(60),np.log10(1800),integration_time_division)
signal_to_backgrounds=np.logspace(np.log10(0.5),np.log10(3),signal_to_background_division)
calibrations=np.linspace(1.19*0.9,1.19*1.1,calibration_division)
print(integration_times)
print(signal_to_backgrounds)
print(calibrations)
training_data,training_keys=simulate_shielded_template_dataset(isotope_list,
spectral_template_settings,
integration_times,
signal_to_backgrounds,
calibrations)
print(training_data.shape)
np.save('FINAL_template_hyperparameter_training_data.npy',training_data)
np.save('FINAL_template_hyperparameter_training_keys.npy',training_keys)
```
## CVT-OCT Hyperparameter testing dataset divisions
```
spectral_template_settings=['noshield',
'aluminum40pct',
'aluminum80pct',
'iron40pct',
'iron80pct',
'lead40pct',
'lead80pct']
integration_times=np.logspace(np.log10(60),np.log10(1800),integration_time_division)
signal_to_backgrounds=np.logspace(np.log10(0.5),np.log10(3),signal_to_background_division)
calibrations=np.linspace(1.19*0.9,1.19*1.1,calibration_division)
integration_times=integration_times[:-1]+np.diff(integration_times)/2.0
signal_to_backgrounds=signal_to_backgrounds[:-1]+np.diff(signal_to_backgrounds)/2.0
calibrations=calibrations[:-1]+np.diff(calibrations)/2.0
print(integration_times)
print(signal_to_backgrounds)
print(calibrations)
training_data,training_keys=simulate_shielded_template_dataset(isotope_list,
spectral_template_settings,
integration_times,
signal_to_backgrounds,
calibrations)
print(training_data.shape)
```
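The offsets applied above place every test setting at the midpoint between two adjacent training settings, so no test point coincides with a training grid point. A minimal sketch of the trick:

```python
import numpy as np

train_grid = np.logspace(np.log10(60), np.log10(1800), 4)
test_grid = train_grid[:-1] + np.diff(train_grid) / 2.0

print(train_grid)  # 4 training values spanning 60 to 1800
print(test_grid)   # 3 midpoints, one between each pair of neighbors

# every test value falls strictly between two consecutive training values
assert (test_grid > train_grid[:-1]).all() and (test_grid < train_grid[1:]).all()
```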
# Skip-gram Word2Vec
In this notebook, I'll lead you through using PyTorch to implement the [Word2Vec algorithm](https://en.wikipedia.org/wiki/Word2vec) using the skip-gram architecture. By implementing this, you'll learn about embedding words for use in natural language processing. This will come in handy when dealing with things like machine translation.
## Readings
Here are the resources I used to build this notebook. I suggest reading these either beforehand or while you're working on this material.
* A really good [conceptual overview](http://mccormickml.com/2016/04/19/word2vec-tutorial-the-skip-gram-model/) of Word2Vec from Chris McCormick
* [First Word2Vec paper](https://arxiv.org/pdf/1301.3781.pdf) from Mikolov et al.
* [Neural Information Processing Systems paper](http://papers.nips.cc/paper/5021-distributed-representations-of-words-and-phrases-and-their-compositionality.pdf) with improvements for Word2Vec, also from Mikolov et al.
---
## Word embeddings
When you're dealing with words in text, you end up with tens of thousands of word classes to analyze; one for each word in a vocabulary. Trying to one-hot encode these words is massively inefficient because most values in a one-hot vector will be set to zero. So, the matrix multiplication that happens in between a one-hot input vector and a first, hidden layer will result in mostly zero-valued hidden outputs.
To solve this problem and greatly increase the efficiency of our networks, we use what are called **embeddings**. Embeddings are just a fully connected layer like you've seen before. We call this layer the embedding layer and the weights are embedding weights. We skip the multiplication into the embedding layer by instead directly grabbing the hidden layer values from the weight matrix. We can do this because the multiplication of a one-hot encoded vector with a matrix returns the row of the matrix corresponding to the index of the "on" input unit.
<img src='assets/lookup_matrix.png' width=50%>
Instead of doing the matrix multiplication, we use the weight matrix as a lookup table. We encode the words as integers, for example "heart" is encoded as 958, "mind" as 18094. Then to get hidden layer values for "heart", you just take the 958th row of the embedding matrix. This process is called an **embedding lookup** and the number of hidden units is the **embedding dimension**.
There is nothing magical going on here. The embedding lookup table is just a weight matrix. The embedding layer is just a hidden layer. The lookup is just a shortcut for the matrix multiplication. The lookup table is trained just like any weight matrix.
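That shortcut is easy to verify numerically; a NumPy sketch with a tiny made-up vocabulary:

```python
import numpy as np

vocab_size, embed_dim = 5, 3
rng = np.random.default_rng(0)
W = rng.normal(size=(vocab_size, embed_dim))  # the embedding weight matrix

word_idx = 2                   # integer-encoded word
one_hot = np.zeros(vocab_size)
one_hot[word_idx] = 1.0

# multiplying by a one-hot vector returns the matching row of W,
# so the lookup W[word_idx] gives the identical hidden-layer values
assert np.allclose(one_hot @ W, W[word_idx])
```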
Embeddings aren't only used for words of course. You can use them for any model where you have a massive number of classes. A particular type of model called **Word2Vec** uses the embedding layer to find vector representations of words that contain semantic meaning.
---
## Word2Vec
The Word2Vec algorithm finds much more efficient representations by finding vectors that represent the words. These vectors also contain semantic information about the words.
<img src="assets/context_drink.png" width=40%>
Words that show up in similar **contexts**, such as "coffee", "tea", and "water" will have vectors near each other. Different words will be further away from one another, and relationships can be represented by distance in vector space.
There are two architectures for implementing Word2Vec:
>* CBOW (Continuous Bag-Of-Words) and
* Skip-gram
<img src="assets/word2vec_architectures.png" width=60%>
In this implementation, we'll be using the **skip-gram architecture** with **negative sampling** because it performs better than CBOW and trains faster with negative sampling. Here, we pass in a word and try to predict the words surrounding it in the text. In this way, we can train the network to learn representations for words that show up in similar contexts.
---
## Loading Data
Next, we'll ask you to load in data and place it in the `data` directory
1. Load the [text8 dataset](https://s3.amazonaws.com/video.udacity-data.com/topher/2018/October/5bbe6499_text8/text8.zip); a file of cleaned up *Wikipedia article text* from Matt Mahoney.
2. Place that data in the `data` folder in the home directory.
3. Then you can extract it and delete the zip archive to save storage space.
After following these steps, you should have one file in your data directory: `data/text8`.
```
# read in the extracted text file
with open('data/text8') as f:
    text = f.read()
# print out the first 100 characters
print(text[:100])
```
## Pre-processing
Here I'm fixing up the text to make training easier. This comes from the `utils.py` file. The `preprocess` function does a few things:
>* It converts any punctuation into tokens, so a period is changed to ` <PERIOD> `. In this data set, there aren't any periods, but it will help in other NLP problems.
* It removes all words that show up five or *fewer* times in the dataset. This will greatly reduce issues due to noise in the data and improve the quality of the vector representations.
* It returns a list of words in the text.
This may take a few seconds to run, since our text file is quite large. If you want to write your own functions for this stuff, go for it!
```
import utils
# get list of words
words = utils.preprocess(text)
print(words[:30])
# print some stats about this word data
print("Total words in text: {}".format(len(words)))
print("Unique words: {}".format(len(set(words)))) # `set` removes any duplicate words
```
### Dictionaries
Next, I'm creating two dictionaries to convert words to integers and back again (integers to words). This is again done with a function in the `utils.py` file. `create_lookup_tables` takes in a list of words in a text and returns two dictionaries.
>* The integers are assigned in descending frequency order, so the most frequent word ("the") is given the integer 0 and the next most frequent is 1, and so on.
Once we have our dictionaries, the words are converted to integers and stored in the list `int_words`.
```
vocab_to_int, int_to_vocab = utils.create_lookup_tables(words)
int_words = [vocab_to_int[word] for word in words]
print(int_words[:30])
```
## Subsampling
Words that show up often such as "the", "of", and "for" don't provide much context to the nearby words. If we discard some of them, we can remove some of the noise from our data and in return get faster training and better representations. This process is called subsampling by Mikolov. For each word $w_i$ in the training set, we'll discard it with probability given by
$$ P(w_i) = 1 - \sqrt{\frac{t}{f(w_i)}} $$
where $t$ is a threshold parameter and $f(w_i)$ is the frequency of word $w_i$ in the total dataset.
> Implement subsampling for the words in `int_words`. That is, go through `int_words` and discard each word given the probability $P(w_i)$ shown above. Note that $P(w_i)$ is the probability that a word is discarded. Assign the subsampled data to `train_words`.
```
from collections import Counter
import random
import numpy as np
threshold = 1e-5
word_counts = Counter(int_words)
#print(list(word_counts.items())[0]) # dictionary of int_words, how many times they appear
total_count = len(int_words)
freqs = {word: count/total_count for word, count in word_counts.items()}
p_drop = {word: 1 - np.sqrt(threshold/freqs[word]) for word in word_counts}
# discard some frequent words, according to the subsampling equation
# create a new list of words for training
train_words = [word for word in int_words if random.random() < (1 - p_drop[word])]
print(train_words[:30])
```
## Making batches
Now that our data is in good shape, we need to get it into the proper form to pass it into our network. With the skip-gram architecture, for each word in the text, we want to define a surrounding _context_ and grab all the words in a window around that word, with size $C$.
From [Mikolov et al.](https://arxiv.org/pdf/1301.3781.pdf):
"Since the more distant words are usually less related to the current word than those close to it, we give less weight to the distant words by sampling less from those words in our training examples... If we choose $C = 5$, for each training word we will select randomly a number $R$ in range $[ 1: C ]$, and then use $R$ words from history and $R$ words from the future of the current word as correct labels."
> **Exercise:** Implement a function `get_target` that receives a list of words, an index, and a window size, then returns a list of words in the window around the index. Make sure to use the algorithm described above, where you choose a random number of words from the window.
Say we have an input and we're interested in the idx=2 token, `741`:
```
[5233, 58, 741, 10571, 27349, 0, 15067, 58112, 3580, 58, 10712]
```
For `R=2`, `get_target` should return a list of four values:
```
[5233, 58, 10571, 27349]
```
```
def get_target(words, idx, window_size=5):
    ''' Get a list of words in a window around an index. '''
    R = np.random.randint(1, window_size+1)
    start = idx - R if (idx - R) > 0 else 0
    stop = idx + R
    target_words = words[start:idx] + words[idx+1:stop+1]
    return list(target_words)
# test your code!
# run this cell multiple times to check for random window selection
int_text = [i for i in range(10)]
print('Input: ', int_text)
idx=5 # word index of interest
target = get_target(int_text, idx=idx, window_size=5)
print('Target: ', target) # you should get some indices around the idx
```
### Generating Batches
Here's a generator function that returns batches of input and target data for our model, using the `get_target` function from above. The idea is that it grabs `batch_size` words from a words list. Then for each of those batches, it gets the target words in a window.
```
def get_batches(words, batch_size, window_size=5):
    ''' Create a generator of word batches as a tuple (inputs, targets) '''
    n_batches = len(words)//batch_size
    # only full batches
    words = words[:n_batches*batch_size]
    for idx in range(0, len(words), batch_size):
        x, y = [], []
        batch = words[idx:idx+batch_size]
        for ii in range(len(batch)):
            batch_x = batch[ii]
            batch_y = get_target(batch, ii, window_size)
            y.extend(batch_y)
            x.extend([batch_x]*len(batch_y))
        yield x, y
int_text = [i for i in range(20)]
x,y = next(get_batches(int_text, batch_size=4, window_size=5))
print('x\n', x)
print('y\n', y)
```
---
## Validation
Here, I'm creating a function that will help us observe our model as it learns. We're going to choose a few common words and a few uncommon words. Then, we'll print out the closest words to them using the cosine similarity:
<img src="assets/two_vectors.png" width=30%>
$$
\mathrm{similarity} = \cos(\theta) = \frac{\vec{a} \cdot \vec{b}}{|\vec{a}||\vec{b}|}
$$
We can encode the validation words as vectors $\vec{a}$ using the embedding table, then calculate the similarity with each word vector $\vec{b}$ in the embedding table. With the similarities, we can print out the validation words and words in our embedding table semantically similar to those words. It's a nice way to check that our embedding table is grouping together words with similar semantic meanings.
```
def cosine_similarity(embedding, valid_size=16, valid_window=100, device='cpu'):
    """ Returns the cosine similarity of validation words with words in the embedding matrix.
        Here, embedding should be a PyTorch embedding module.
    """
    # Here we're calculating the cosine similarity between some random words and
    # our embedding vectors. With the similarities, we can look at what words are
    # close to our random words.
    # sim = (a . b) / |a||b|
    embed_vectors = embedding.weight
    # magnitude of embedding vectors, |b|
    magnitudes = embed_vectors.pow(2).sum(dim=1).sqrt().unsqueeze(0)
    # pick N words from our ranges (0, window) and (1000, 1000+window); lower id implies more frequent
    valid_examples = np.array(random.sample(range(valid_window), valid_size//2))
    valid_examples = np.append(valid_examples,
                               random.sample(range(1000, 1000+valid_window), valid_size//2))
    valid_examples = torch.LongTensor(valid_examples).to(device)
    valid_vectors = embedding(valid_examples)
    # note: we divide by |b| only; |a| is constant within each row, so the rankings are unchanged
    similarities = torch.mm(valid_vectors, embed_vectors.t())/magnitudes
    return valid_examples, similarities
```
---
# SkipGram model
Define and train the SkipGram model.
> You'll need to define an [embedding layer](https://pytorch.org/docs/stable/nn.html#embedding) and a final, softmax output layer.
An Embedding layer takes in a number of inputs, importantly:
* **num_embeddings** – the size of the dictionary of embeddings, or how many rows you'll want in the embedding weight matrix
* **embedding_dim** – the size of each embedding vector; the embedding dimension
Below is an approximate diagram of the general structure of our network.
<img src="assets/skip_gram_arch.png" width=60%>
>* The input words are passed in as batches of input word tokens.
* This will go into a hidden layer of linear units (our embedding layer).
* Then, finally into a softmax output layer.
We'll use the softmax layer to make a prediction about the context words by sampling, as usual.
---
## Negative Sampling
For every example we give the network, we train it using the output from the softmax layer. That means for each input, we're making very small changes to millions of weights even though we only have one true example. This makes training the network very inefficient. We can approximate the loss from the softmax layer by only updating a small subset of all the weights at once. We'll update the weights for the correct example, but only a small number of incorrect, or noise, examples. This is called ["negative sampling"](http://papers.nips.cc/paper/5021-distributed-representations-of-words-and-phrases-and-their-compositionality.pdf).
There are two modifications we need to make. First, since we're not taking the softmax output over all the words, we're really only concerned with one output word at a time. Similar to how we use an embedding table to map the input word to the hidden layer, we can now use another embedding table to map the hidden layer to the output word. Now we have two embedding layers, one for input words and one for output words. Secondly, we use a modified loss function where we only care about the true example and a small subset of noise examples.
$$
- \large \log{\sigma\left(u_{w_O}\hspace{0.001em}^\top v_{w_I}\right)} -
\sum_i^N \mathbb{E}_{w_i \sim P_n(w)}\log{\sigma\left(-u_{w_i}\hspace{0.001em}^\top v_{w_I}\right)}
$$
This is a little complicated so I'll go through it bit by bit. $u_{w_O}\hspace{0.001em}^\top$ is the embedding vector for our "output" target word (transposed, that's the $^\top$ symbol) and $v_{w_I}$ is the embedding vector for the "input" word. Then the first term
$$\large \log{\sigma\left(u_{w_O}\hspace{0.001em}^\top v_{w_I}\right)}$$
says we take the log-sigmoid of the inner product of the output word vector and the input word vector. Now the second term, let's first look at
$$\large \sum_i^N \mathbb{E}_{w_i \sim P_n(w)}$$
This means we're going to take a sum over words $w_i$ drawn from a noise distribution $w_i \sim P_n(w)$. The noise distribution is basically our vocabulary of words that aren't in the context of our input word. In effect, we can randomly sample words from our vocabulary to get these words. $P_n(w)$ is an arbitrary probability distribution though, which means we get to decide how to weight the words that we're sampling. This could be a uniform distribution, where we sample all words with equal probability. Or it could be according to the frequency that each word shows up in our text corpus, the unigram distribution $U(w)$. The authors found the best distribution to be $U(w)^{3/4}$, empirically.
Finally, in
$$\large \log{\sigma\left(-u_{w_i}\hspace{0.001em}^\top v_{w_I}\right)},$$
we take the log-sigmoid of the negated inner product of a noise vector with the input vector.
<img src="assets/neg_sampling_loss.png" width=50%>
To give you an intuition for what we're doing here, remember that the sigmoid function returns a probability between 0 and 1. The first term in the loss pushes the probability that our network will predict the correct word $w_O$ towards 1. In the second term, since we are negating the sigmoid input, we're pushing the probabilities of the noise words towards 0.
```
import torch
from torch import nn
import torch.optim as optim
class SkipGramNeg(nn.Module):
    def __init__(self, n_vocab, n_embed, noise_dist=None):
        super().__init__()

        self.n_vocab = n_vocab
        self.n_embed = n_embed
        self.noise_dist = noise_dist

        # define embedding layers for input and output words
        self.in_embed = nn.Embedding(n_vocab, n_embed)
        self.out_embed = nn.Embedding(n_vocab, n_embed)

        # Initialize embedding tables with uniform distribution
        # I believe this helps with convergence
        self.in_embed.weight.data.uniform_(-1, 1)
        self.out_embed.weight.data.uniform_(-1, 1)

    def forward_input(self, input_words):
        input_vectors = self.in_embed(input_words)
        return input_vectors

    def forward_output(self, output_words):
        output_vectors = self.out_embed(output_words)
        return output_vectors

    def forward_noise(self, batch_size, n_samples):
        """ Generate noise vectors with shape (batch_size, n_samples, n_embed)"""
        if self.noise_dist is None:
            # Sample words uniformly
            noise_dist = torch.ones(self.n_vocab)
        else:
            noise_dist = self.noise_dist

        # Sample words from our noise distribution
        noise_words = torch.multinomial(noise_dist,
                                        batch_size * n_samples,
                                        replacement=True)

        # use self (not a global model reference) to find the device
        device = "cuda" if self.out_embed.weight.is_cuda else "cpu"
        noise_words = noise_words.to(device)

        noise_vectors = self.out_embed(noise_words).view(batch_size, n_samples, self.n_embed)

        return noise_vectors


class NegativeSamplingLoss(nn.Module):
    def __init__(self):
        super().__init__()

    def forward(self, input_vectors, output_vectors, noise_vectors):

        batch_size, embed_size = input_vectors.shape

        # Input vectors should be a batch of column vectors
        input_vectors = input_vectors.view(batch_size, embed_size, 1)

        # Output vectors should be a batch of row vectors
        output_vectors = output_vectors.view(batch_size, 1, embed_size)

        # bmm = batch matrix multiplication
        # correct log-sigmoid loss
        out_loss = torch.bmm(output_vectors, input_vectors).sigmoid().log()
        out_loss = out_loss.squeeze()

        # incorrect log-sigmoid loss
        noise_loss = torch.bmm(noise_vectors.neg(), input_vectors).sigmoid().log()
        noise_loss = noise_loss.squeeze().sum(1)  # sum the losses over the sample of noise vectors

        # negate and sum correct and noisy log-sigmoid losses
        # return average batch loss
        return -(out_loss + noise_loss).mean()
```
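As a quick sanity check on the loss arithmetic, here is a NumPy mirror of the same computation on random vectors (a hypothetical check, not part of the original notebook; it avoids needing a trained model):

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def neg_sampling_loss(input_vecs, output_vecs, noise_vecs):
    """NumPy mirror of NegativeSamplingLoss.forward, for a shape/math check."""
    # dot product of each input vector with its target output vector
    out_loss = np.log(sigmoid(np.einsum('be,be->b', output_vecs, input_vecs)))
    # dot products with the negated noise vectors, summed over the noise samples
    noise_loss = np.log(sigmoid(np.einsum('bse,be->bs', -noise_vecs, input_vecs))).sum(axis=1)
    return -(out_loss + noise_loss).mean()

rng = np.random.default_rng(0)
B, S, E = 4, 5, 16  # batch size, noise samples, embedding size (arbitrary for the check)
loss = neg_sampling_loss(rng.normal(size=(B, E)),
                         rng.normal(size=(B, E)),
                         rng.normal(size=(B, S, E)))
print(loss)  # a single positive scalar: log-sigmoids are negative, so the negated mean is > 0
```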
### Training
Below is our training loop, and I recommend that you train on GPU, if available.
```
device = 'cuda' if torch.cuda.is_available() else 'cpu'

# Get our noise distribution
# Using word frequencies calculated earlier in the notebook
word_freqs = np.array(sorted(freqs.values(), reverse=True))
unigram_dist = word_freqs/word_freqs.sum()
noise_dist = torch.from_numpy(unigram_dist**(0.75)/np.sum(unigram_dist**(0.75)))

# instantiating the model
embedding_dim = 300
model = SkipGramNeg(len(vocab_to_int), embedding_dim, noise_dist=noise_dist).to(device)

# using the loss that we defined
criterion = NegativeSamplingLoss()
optimizer = optim.Adam(model.parameters(), lr=0.003)

print_every = 1500
steps = 0
epochs = 5

# train for some number of epochs
for e in range(epochs):
    # get our input, target batches
    for input_words, target_words in get_batches(train_words, 512):
        steps += 1
        inputs, targets = torch.LongTensor(input_words), torch.LongTensor(target_words)
        inputs, targets = inputs.to(device), targets.to(device)

        # input, output, and noise vectors
        input_vectors = model.forward_input(inputs)
        output_vectors = model.forward_output(targets)
        noise_vectors = model.forward_noise(inputs.shape[0], 5)

        # negative sampling loss
        loss = criterion(input_vectors, output_vectors, noise_vectors)

        optimizer.zero_grad()
        loss.backward()
        optimizer.step()

        # loss stats
        if steps % print_every == 0:
            print("Epoch: {}/{}".format(e+1, epochs))
            print("Loss: ", loss.item())  # avg batch loss at this point in training
            valid_examples, valid_similarities = cosine_similarity(model.in_embed, device=device)
            _, closest_idxs = valid_similarities.topk(6)
            valid_examples, closest_idxs = valid_examples.to('cpu'), closest_idxs.to('cpu')
            for ii, valid_idx in enumerate(valid_examples):
                closest_words = [int_to_vocab[idx.item()] for idx in closest_idxs[ii]][1:]
                print(int_to_vocab[valid_idx.item()] + " | " + ', '.join(closest_words))
            print("...\n")
```
## Visualizing the word vectors
Below we'll use T-SNE to visualize how our high-dimensional word vectors cluster together. T-SNE is used to project these vectors into two dimensions while preserving local structure. Check out [this post from Christopher Olah](http://colah.github.io/posts/2014-10-Visualizing-MNIST/) to learn more about T-SNE and other ways to visualize high-dimensional data.
```
%matplotlib inline
%config InlineBackend.figure_format = 'retina'
import matplotlib.pyplot as plt
from sklearn.manifold import TSNE
# getting embeddings from the embedding layer of our model, by name
embeddings = model.in_embed.weight.to('cpu').data.numpy()
viz_words = 380
tsne = TSNE()
embed_tsne = tsne.fit_transform(embeddings[:viz_words, :])
fig, ax = plt.subplots(figsize=(16, 16))
for idx in range(viz_words):
    plt.scatter(*embed_tsne[idx, :], color='steelblue')
    plt.annotate(int_to_vocab[idx], (embed_tsne[idx, 0], embed_tsne[idx, 1]), alpha=0.7)
```
<center>
<img src="https://s3-api.us-geo.objectstorage.softlayer.net/cf-courses-data/CognitiveClass/Logos/organization_logo/organization_logo.png" width="300" alt="cognitiveclass.ai logo" />
</center>
# Lists in Python
Estimated time needed: **15** minutes
## Objectives
After completing this lab you will be able to:
- Perform list operations in Python, including indexing, list manipulation and copy/clone list.
<h2>Table of Contents</h2>
<div class="alert alert-block alert-info" style="margin-top: 20px">
<ul>
<li>
<a href="#dataset">About the Dataset</a>
</li>
<li>
<a href="#list">Lists</a>
<ul>
<li><a href="#index">Indexing</a></li>
<li><a href="#content">List Content</a></li>
<li><a href="#op">List Operations</a></li>
<li><a href="#co">Copy and Clone List</a></li>
</ul>
</li>
<li>
<a href="#quiz">Quiz on Lists</a>
</li>
</ul>
</div>
<hr>
<h2 id="dataset">About the Dataset</h2>
Imagine you received album recommendations from your friends and compiled all of the recommendations into a table, with specific information about each album.
The table has one row for each album and several columns:
- **artist** - Name of the artist
- **album** - Name of the album
- **released_year** - Year the album was released
- **length_min_sec** - Length of the album (hours,minutes,seconds)
- **genre** - Genre of the album
- **music_recording_sales_millions** - Music recording sales (millions in USD) on [SONG://DATABASE](http://www.song-database.com)
- **claimed_sales_millions** - Album's claimed sales (millions in USD) on [SONG://DATABASE](http://www.song-database.com)
- **date_released** - Date on which the album was released
- **soundtrack** - Indicates if the album is the movie soundtrack (Y) or (N)
- **rating_of_friends** - Indicates the rating from your friends from 1 to 10
<br>
<br>
The dataset can be seen below:
<font size="1">
<table font-size:xx-small>
<tr>
<th>Artist</th>
<th>Album</th>
<th>Released</th>
<th>Length</th>
<th>Genre</th>
<th>Music recording sales (millions)</th>
<th>Claimed sales (millions)</th>
<th>Date released</th>
<th>Soundtrack</th>
<th>Rating (friends)</th>
</tr>
<tr>
<td>Michael Jackson</td>
<td>Thriller</td>
<td>1982</td>
<td>00:42:19</td>
<td>Pop, rock, R&B</td>
<td>46</td>
<td>65</td>
<td>30-Nov-82</td>
<td></td>
<td>10.0</td>
</tr>
<tr>
<td>AC/DC</td>
<td>Back in Black</td>
<td>1980</td>
<td>00:42:11</td>
<td>Hard rock</td>
<td>26.1</td>
<td>50</td>
<td>25-Jul-80</td>
<td></td>
<td>8.5</td>
</tr>
<tr>
<td>Pink Floyd</td>
<td>The Dark Side of the Moon</td>
<td>1973</td>
<td>00:42:49</td>
<td>Progressive rock</td>
<td>24.2</td>
<td>45</td>
<td>01-Mar-73</td>
<td></td>
<td>9.5</td>
</tr>
<tr>
<td>Whitney Houston</td>
<td>The Bodyguard</td>
<td>1992</td>
<td>00:57:44</td>
<td>Soundtrack/R&B, soul, pop</td>
<td>26.1</td>
<td>50</td>
<td>17-Nov-92</td>
<td>Y</td>
<td>7.0</td>
</tr>
<tr>
<td>Meat Loaf</td>
<td>Bat Out of Hell</td>
<td>1977</td>
<td>00:46:33</td>
<td>Hard rock, progressive rock</td>
<td>20.6</td>
<td>43</td>
<td>21-Oct-77</td>
<td></td>
<td>7.0</td>
</tr>
<tr>
<td>Eagles</td>
<td>Their Greatest Hits (1971-1975)</td>
<td>1976</td>
<td>00:43:08</td>
<td>Rock, soft rock, folk rock</td>
<td>32.2</td>
<td>42</td>
<td>17-Feb-76</td>
<td></td>
<td>9.5</td>
</tr>
<tr>
<td>Bee Gees</td>
<td>Saturday Night Fever</td>
<td>1977</td>
<td>1:15:54</td>
<td>Disco</td>
<td>20.6</td>
<td>40</td>
<td>15-Nov-77</td>
<td>Y</td>
<td>9.0</td>
</tr>
<tr>
<td>Fleetwood Mac</td>
<td>Rumours</td>
<td>1977</td>
<td>00:40:01</td>
<td>Soft rock</td>
<td>27.9</td>
<td>40</td>
<td>04-Feb-77</td>
<td></td>
<td>9.5</td>
</tr>
</table></font>
<hr>
<h2 id="list">Lists</h2>
<h3 id="index">Indexing</h3>
We are going to take a look at lists in Python. A list is a sequenced collection of different objects such as integers, strings, and other lists as well. The address of each element within a list is called an <b>index</b>. An index is used to access and refer to items within a list.
<img src="https://s3-api.us-geo.objectstorage.softlayer.net/cf-courses-data/CognitiveClass/PY0101EN/Chapter%202/Images/ListsIndex.png" width="1000" />
To create a list, type the list within square brackets <b>[ ]</b>, with your content inside the brackets and separated by commas. Let’s try it!
```
# Create a list
L = ["Michael Jackson", 10.1, 1982]
L
```
We can use negative and regular indexing with a list:
<img src="https://s3-api.us-geo.objectstorage.softlayer.net/cf-courses-data/CognitiveClass/PY0101EN/Chapter%202/Images/ListsNeg.png" width="1000" />
```
# Print the elements on each index
print('the same element using negative and positive indexing:\n Positive:', L[0],
      '\n Negative:', L[-3])
print('the same element using negative and positive indexing:\n Positive:', L[1],
      '\n Negative:', L[-2])
print('the same element using negative and positive indexing:\n Positive:', L[2],
      '\n Negative:', L[-1])
```
<h3 id="content">List Content</h3>
Lists can contain strings, floats, and integers. We can nest other lists, and we can also nest tuples and other data structures. The same indexing conventions apply for nesting:
```
# Sample List
["Michael Jackson", 10.1, 1982, [1, 2], ("A", 1)]
```
<h3 id="op">List Operations</h3>
We can also perform slicing in lists. For example, if we want the last two elements, we use the following command:
```
# Sample List
L = ["Michael Jackson", 10.1,1982,"MJ",1]
L
```
<img src="https://s3-api.us-geo.objectstorage.softlayer.net/cf-courses-data/CognitiveClass/PY0101EN/Chapter%202/Images/ListsSlice.png" width="1000">
```
# List slicing
L[3:5]
```
We can use the method <code>extend</code> to add new elements to the list:
```
# Use extend to add elements to list
L = [ "Michael Jackson", 10.2]
L.extend(['pop', 10])
L
```
Another similar method is <code>append</code>. If we apply <code>append</code> instead of <code>extend</code>, we add one element to the list:
```
# Use append to add elements to list
L = [ "Michael Jackson", 10.2]
L.append(['pop', 10])
L
```
Each time we apply a method, the list changes. If we apply <code>extend</code> we add two new elements to the list. The list <code>L</code> is then modified by adding two new elements:
```
# Use extend to add elements to list
L = [ "Michael Jackson", 10.2]
L.extend(['pop', 10])
L
```
If we append the list <code>['a','b']</code> we have one new element consisting of a nested list:
```
# Use append to add elements to list
L.append(['a','b'])
L
```
As lists are mutable, we can change them. For example, we can change the first element as follows:
```
# Change the element based on the index
A = ["disco", 10, 1.2]
print('Before change:', A)
A[0] = 'hard rock'
print('After change:', A)
```
We can also delete an element of a list using the <code>del</code> command:
```
# Delete the element based on the index
print('Before change:', A)
del(A[0])
print('After change:', A)
```
We can convert a string to a list using <code>split</code>. For example, the method <code>split</code> translates every group of characters separated by a space into an element in a list:
```
# Split the string, default is by space
'hard rock'.split()
```
We can use the split function to separate strings on a specific character. We pass the character we would like to split on into the argument, which in this case is a comma. The result is a list, and each element corresponds to a set of characters that have been separated by a comma:
```
# Split the string by comma
'A,B,C,D'.split(',')
```
<h3 id="co">Copy and Clone List</h3>
When we set one variable <b>B</b> equal to <b>A</b>, both <b>A</b> and <b>B</b> are referencing the same list in memory:
```
# Copy (copy by reference) the list A
A = ["hard rock", 10, 1.2]
B = A
print('A:', A)
print('B:', B)
```
<img src="https://s3-api.us-geo.objectstorage.softlayer.net/cf-courses-data/CognitiveClass/PY0101EN/Chapter%202/Images/ListsRef.png" width="1000" align="center">
Initially, the value of the first element in <b>B</b> is set as hard rock. If we change the first element in <b>A</b> to <b>banana</b>, we get an unexpected side effect. As <b>A</b> and <b>B</b> are referencing the same list, if we change list <b>A</b>, then list <b>B</b> also changes. If we check the first element of <b>B</b> we get banana instead of hard rock:
```
# Examine the copy by reference
print('B[0]:', B[0])
A[0] = "banana"
print('B[0]:', B[0])
```
This is demonstrated in the following figure:
<img src = "https://s3-api.us-geo.objectstorage.softlayer.net/cf-courses-data/CognitiveClass/PY0101EN/Chapter%202/Images/ListsRefGif.gif" width="1000" />
You can clone list **A** by using the following syntax:
```
# Clone (clone by value) the list A
B = A[:]
B
```
Variable **B** references a new copy or clone of the original list; this is demonstrated in the following figure:
<img src="https://s3-api.us-geo.objectstorage.softlayer.net/cf-courses-data/CognitiveClass/PY0101EN/Chapter%202/Images/ListsVal.gif" width="1000" />
Now if you change <b>A</b>, <b>B</b> will not change:
```
print('B[0]:', B[0])
A[0] = "hard rock"
print('B[0]:', B[0])
```
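The difference between a reference and a clone can be confirmed with the `is` operator, which tests whether two names point at the same object in memory (a short illustrative sketch, not part of the original lab):

```python
A = ["hard rock", 10, 1.2]
B_ref = A        # copy by reference: both names point at the same list
B_clone = A[:]   # clone: a new list containing the same elements

print(B_ref is A)    # True  - the same object
print(B_clone is A)  # False - a distinct object

A[0] = "banana"
print(B_ref[0])      # banana    - the alias sees the change
print(B_clone[0])    # hard rock - the clone does not
```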
<h2 id="quiz">Quiz on Lists</h2>
Create a list <code>a_list</code>, with the following elements <code>1</code>, <code>hello</code>, <code>[1,2,3]</code> and <code>True</code>.
```
# Write your code below and press Shift+Enter to execute
```
<details><summary>Click here for the solution</summary>
```python
a_list = [1, 'hello', [1, 2, 3] , True]
a_list
```
</details>
Find the value stored at index 1 of <code>a_list</code>.
```
# Write your code below and press Shift+Enter to execute
```
<details><summary>Click here for the solution</summary>
```python
a_list[1]
```
</details>
Retrieve the elements stored at index 1, 2 and 3 of <code>a_list</code>.
```
# Write your code below and press Shift+Enter to execute
```
<details><summary>Click here for the solution</summary>
```python
a_list[1:4]
```
</details>
Concatenate the following lists <code>A = [1, 'a']</code> and <code>B = [2, 1, 'd']</code>:
```
# Write your code below and press Shift+Enter to execute
```
<details><summary>Click here for the solution</summary>
```python
A = [1, 'a']
B = [2, 1, 'd']
A + B
```
</details>
<hr>
<h2>The last exercise!</h2>
<p>Congratulations, you have completed your first lesson and hands-on lab in Python. However, there is one more thing you need to do. The Data Science community encourages sharing work. The best way to share and showcase your work is to share it on GitHub. By sharing your notebook on GitHub you are not only building your reputation with fellow data scientists, but you can also show it off when applying for a job. Even though this was your first piece of work, it is never too early to start building good habits. So, please read and follow <a href="https://cognitiveclass.ai/blog/data-scientists-stand-out-by-sharing-your-notebooks/" target="_blank">this article</a> to learn how to share your work.</p>
<hr>
## Author
<a href="https://www.linkedin.com/in/joseph-s-50398b136/" target="_blank">Joseph Santarcangelo</a>
## Other contributors
<a href="https://www.linkedin.com/in/jiahui-mavis-zhou-a4537814a">Mavis Zhou</a>
## Change Log
| Date (YYYY-MM-DD) | Version | Changed By | Change Description |
| ----------------- | ------- | ---------- | ---------------------------------- |
| 2020-08-26 | 2.0 | Lavanya | Moved lab to course repo in GitLab |
| | | | |
| | | | |
<h3 align="center">© IBM Corporation 2020. All rights reserved.</h3>
(nm_num_integration)=
# Numerical integration
```{index} Numerical integration
```
Numerical integration involves finding the integral of a function. While [SymPy](https://www.sympy.org/en/index.html) can be used to do analytical integration, there are many functions for which finding an analytical solution to integration is very difficult, and numerical integration is used instead.
To understand how to perform numerical integration, we first need to understand what exactly is the purpose of integration. For a 1D function, integration means finding the area underneath the curve. However, integration can also be extended for a 2D function and even a 3D function.
[Quadrature](https://en.wikipedia.org/wiki/Numerical_integration) is the term used for numerical evaluation of a *definite* (i.e. over a range \\([a,b]\\)) integral, or in 1D finding the area under a curve.
<p style="text-align:center;"><img src="https://upload.wikimedia.org/wikipedia/commons/f/f2/Integral_as_region_under_curve.svg" width="400px"></p>
Source: [Wikipedia](https://upload.wikimedia.org/wikipedia/commons/f/f2/Integral_as_region_under_curve.svg)
```{admonition} Wondered why the symbol of integration looks like this $\int$?
The symbol of integration actually comes from an elongated letter \\(S\\), standing for summation. But why would it come from the idea of summation? Well, integration is very much related to summation.
```
We know that evaluating the integral of a 1D function, which will be our primary topic of discussion today, gives the area under that function, as illustrated in the figure above. If your curve were a straight line, calculating the area under it would be straightforward: a horizontal line gives a rectangle, and any other straight line gives a trapezoid. But what happens when your curve has bends in it?
For example, we could have a very simple equation like
\\[f(x) = \sin(x) +5.\\]
This is a sinusoidal function, whose shape makes the area harder to obtain from a single rectangle or trapezoid. Integration therefore comes in to help you find the area. But how does integration actually get you the area underneath the curve?
Well, what integration essentially does is basically breaking the area into smaller and smaller parts, evaluating the area of each part, and then summing each small part together. The small part of the area can be approximated to be a rectangle, a trapezoid, or some other weird shape if you find it suitable.
A very simple example, using rectangles are shown below:
<p style="text-align:center;"><img src="https://upload.wikimedia.org/wikipedia/commons/2/28/Riemann_integral_regular.gif" width="400px"></p>
Source: [Wikipedia](https://upload.wikimedia.org/wikipedia/commons/2/28/Riemann_integral_regular.gif)
```{margin} Note
It should also be noted that although the example shown uses rectangular slices, it is not necessary to use rectangular slices. It is not even necessary to use slices of the same width. Even if you used slices with different widths, and which are not rectangles (trapezoids, for example), you could still make the slices increasingly thinner so that the summed area under the slices comes closer and closer to the area underneath the curve, until the slices become infinitely thin and the summed area of the slices becomes essentially equal to the area under the curve.
```
As each rectangle slice becomes thinner and thinner, the summed area from the rectangles become more and more closely fitting to the area under the curve. It should be understood that if the rectangle slice becomes infinitely thin, then the summed area from the rectangle would become so close to the area underneath the curve that the two would be essentially the same. You may find the mathematics for the Riemann integral on [Wikipedia](https://en.wikipedia.org/wiki/Riemann_integral). Of course, integration has come a long way since the Riemann integral, and other integrals were developed to deal with the deficiencies with the Riemann integral.
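The idea of summing ever-thinner slices can be sketched in a few lines of Python (an illustrative check; the function \\(g(x) = \sin(x) + 5\\) from earlier and the left-endpoint rectangle rule are chosen here purely for demonstration):

```python
import numpy as np

def left_riemann_sum(g, a, b, n):
    """Sum of n left-endpoint rectangles approximating the area under g on [a, b]."""
    x = np.linspace(a, b, n, endpoint=False)  # left endpoints of the n slices
    width = (b - a) / n
    return np.sum(g(x)) * width

def g(x):
    return np.sin(x) + 5

exact = 2 + 5 * np.pi  # integral of sin(x) + 5 over [0, pi], computed by hand
for n in (10, 100, 1000):
    approx = left_riemann_sum(g, 0, np.pi, n)
    print(n, approx, abs(approx - exact))
```

As `n` grows (the slices get thinner) the error shrinks, which is exactly the thin-slices argument above.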
The choice of approximation method, as well as the size of the intervals, will control the error. Better methods as well as smaller (i.e. more to cover our total interval of interest: \\([a,b]\\)) sub-intervals will lead to lower errors, but will generally cost more to compute.
Here the following quadrature methods will be covered in the context of a simple function:
* Midpoint rule (also known as the rectangle method)
* Trapezoid rule
* Simpson's rule
* Composite Simpson's rule
* Weddle's rule.
## Example
Let's begin with a simple function to demonstrate some of the most basic methods for performing numerical integration:
\\[f\left ( x \right ) := \sin \left ( x \right ),\\]
and assume that we want to know the area under the \\(\sin\\) function between 0 and \\(\pi\\), i.e. \\([a,b]=[0,\pi]\\).
The indefinite integral (or anti-derivative) of \\(\sin \left ( x \right )\\) is of course \\(-\cos \left ( x \right )\\) (plus a constant of integration, \\(C\\), which we can simply ignore as we saw above as it drops out as soon as we perform a *definite* integral).
Since we know the indefinite integral exactly in this case, we can perform the definite integration (i.e. find the area under the curve) ourselves exactly by hand:
\\[I := \int_{0}^{\pi} \sin \left ( x \right ) \, dx = \left [ -\cos\left ( x \right )+ C \right ]_{0}^{\pi} =-\cos\left ( \pi \right ) - (-\cos\left ( 0 \right )) =-\cos\left ( \pi \right ) + \cos\left ( 0 \right ) = -(-1) + 1 = 2.\\]
We included the constant \\(C\\) here just to emphasise again that its presence doesn't matter - we can simply not write it down in this type of expression.
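Since SymPy was mentioned earlier for analytical integration, the hand computation above can be double-checked in one line (assuming SymPy is installed):

```python
import sympy as sp

x = sp.Symbol('x')
area = sp.integrate(sp.sin(x), (x, 0, sp.pi))  # definite integral over [0, pi]
print(area)  # 2
```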
Let's start by plotting the function between these points.
```
import numpy as np
import matplotlib.pyplot as plt
# Set up the figure
fig = plt.figure(figsize=(10, 4))
ax1 = plt.subplot(111)
# Get the value of pi from numpy and generate 100 equally spaced values from 0 to pi.
x = np.linspace(0, np.pi, 100)
# Calculate sin at these points.
y = np.sin(x)
# plot
ax1.plot(x, y, 'b')
# Set x axis limits between 0 and pi.
ax1.set_xlim([0, np.pi])
ax1.set_ylim([0, 1.1])
# Label axis.
ax1.set_xlabel('$x$', fontsize=14)
ax1.set_ylabel('$f(x)=\sin(x)$', fontsize=14)
ax1.set_title('An example function we wish to integrate', fontsize=14)
# Overlay a grid.
ax1.grid(True)
plt.show()
```
(nm_midpoint_rule)=
## Midpoint rule (rectangle method)
The **midpoint rule** is perhaps the simplest quadrature rule. For reasons you will see below it is sometimes also called the **rectangle method**.
Consider one of the subintervals \\([x_i, x_{i+1}]\\). The midpoint rule approximates the integral over this (the \\(i\\)-th) subinterval by the area of a rectangle, with a base of length \\((x_{i+1}-x_i)\\) and a height given by the value of \\(f(x)\\) at the midpoint of that interval (i.e. at \\(x=(x_{i+1}+x_i)/2\\)):
\\[ I_M^{(i)} := (x_{i+1}-x_i) \times f \left ( \frac {x_{i+1}+x_i} {2} \right ), \quad\text{for}
\quad 0\le i \le n-1.\\]
The midpoint estimate of \\(I\\) then simply involves summing up over all the subintervals:
\\[I_M := \sum_{i=0}^{n-1} \, f \left ( \frac {x_{i+1}+x_i} {2} \right )\, (x_{i+1}-x_i).\\]
Midpoint rule:
1. Divide the interval you want to calculate the area under the curve for into smaller pieces, each will be called a subinterval.
2. Assume that the interval begins at \\(x_0\\) and ends at \\(x_n\\). We can pick any subinterval \\([x_i,x_{i+1}]\\) where \\(0\le i \le n-1\\). For example, your first subinterval will be \\([x_0,x_1]\\) with \\(i=0\\), and your last subinterval will be \\([x_{n-1},x_n]\\) with \\(i=n-1\\).
3. For every subinterval, we approximate the slice as a rectangle. To find the area of the rectangle we need its width and height. The width of the rectangle is simply the width of the subinterval. The height of the rectangle can be estimated as the value of the function at the midpoint of the subinterval, i.e. \\(f \left ( \frac {x_{i+1}+x_i} {2} \right )\\). To find the area we simply multiply the width of the rectangle by its height.
Width of the rectangle:
\\[ x_{i+1} - x_i.\\]
Height of the rectangle:
\\[f \left ( \frac {x_{i+1}+x_i} {2} \right ).\\]
Area of the rectangle:
\\[(x_{i+1} - x_i) f \left ( \frac {x_{i+1}+x_i} {2} \right ).\\]
Generalizing the above for all slices, \\(I_M^{(i)}\\) is the approximate area over the subinterval \\([x_i,x_{i+1}]\\); the \\(M\\) subscript denotes the use of the midpoint method.
\\[ I_M^{(i)} := (x_{i+1}-x_i) \times f \left ( \frac {x_{i+1}+x_i} {2} \right ), \quad\text{for}
\quad 0\le i \le n-1.\\]
4. To find the area under the curve, we need to sum up all of the areas from the subinterval, so we are going to use the summation symbol. We know that the subinterval index goes from the first subinterval where \\(i=0\\) to the last subinterval where \\(i=n-1\\), thus we arrive at
\\[I_M := \sum_{i=0}^{n-1} f \left ( \frac {x_{i+1}+x_i} {2} \right ) (x_{i+1}-x_i).\\]
Note that we dropped the \\((i)\\) superscript from \\(I\\) because this is now the whole area under the curve, not just the area from one subinterval.
Let's write some code to plot the idea as well as compute an estimate of the integral using the midpoint rule.
```
# this is a matplotlib function that allows us to easily plot rectangles
# which will be useful for visualising what the midpoint rule does
from matplotlib.patches import Rectangle
def f(x):
"""The function we wish to integrate"""
return np.sin(x)
# Get the value of pi from numpy and generate equally spaced values from 0 to pi.
x = np.linspace(0, np.pi, 100)
y = f(x)
# Plot
fig = plt.figure(figsize=(10, 4))
ax1 = plt.subplot(111)
ax1.plot(x, y, 'b', lw=2)
ax1.margins(0.1)
# Label axis.
ax1.set_xlabel('x', fontsize=14)
ax1.set_ylabel('$f(x)=\sin(x)$', fontsize=14)
# Overlay a grid.
ax1.grid(True)
number_intervals = 5
xi = np.linspace(0, np.pi, number_intervals+1)
I_M = 0.0
for i in range(number_intervals):
ax1.add_patch(Rectangle((xi[i], 0.0), (xi[i+1] - xi[i]),
f((xi[i+1]+xi[i])/2), fill=False, ls='--', color='k', lw=2))
I_M += f((xi[i+1]+xi[i])/2)*(xi[i+1] - xi[i])
ax1.set_title('The sum of the areas of the rectangles is $I_M =$ {:.12f}.'.format(I_M),
fontsize=14)
plt.show()
```
A more complex example is shown below, where the red line shows the original function we wish to compute the integral of, and the blue rectangles *approximate* the area under that function for a number of sub-intervals:
<p style="text-align:center;"><img src="http://upload.wikimedia.org/wikipedia/commons/thumb/2/26/Integration_rectangle.svg/340px-Integration_rectangle.svg.png" width="600px"></p>
### Implementation
```{margin} Note
Note that the SciPy module features many different integration functions, and you can find thorough documentation for these functions (including methods not covered in this course) [here](http://docs.scipy.org/doc/scipy/reference/integrate.html). This library does not contain a function for the midpoint rule, but it is trivial to create our own.
```
Clearly the sum of the areas of all the rectangles provides an estimate of the true integral. In the case above we observe an error of around 1.5%.
As we are going to compare different rules below, let's implement a midpoint rule function.
```
def midpoint_rule(a, b, function, number_intervals=10):
""" Our implementation of the midpoint quadrature rule.
a and b are the end points for our interval of interest.
'function' is the function of x \in [a,b] which we can evaluate as needed.
number_intervals is the number of subintervals/bins we split [a,b] into.
Returns the integral of function(x) over [a,b].
"""
interval_size = (b - a)/number_intervals
# Some examples of some asserts which might be useful here -
# you should get into the habit of using these sorts of checks as much as is possible/sensible.
assert interval_size > 0
assert type(number_intervals) == int
# Initialise to zero the variable that will contain the cumulative sum of all the areas
I_M = 0.0
# Find the first midpoint -- i.e. the centre point of the base of the first rectangle
mid = a + (interval_size/2.0)
# and loop until we get past b, creating and summing the area of each rectangle
while (mid < b):
# Find the area of the current rectangle and add it to the running total
# this involves an evaluation of the function at the subinterval midpoint
I_M += interval_size * function(mid)
# Move the midpoint up to the next centre of the interval
mid += interval_size
# Return our running total result
return I_M
# Check the function runs if it agrees with our first
# version used to generate the schematic plot of the method above:
print('midpoint_rule(0, np.pi, np.sin, number_intervals=5) = ',
midpoint_rule(0, np.pi, np.sin, number_intervals=5))
```
Now let's test the midpoint function:
```
print("The exact area found by direct integration = 2")
for i in (1, 2, 10, 100, 1000):
area = midpoint_rule(0, np.pi, np.sin, i)
print("Area %g rectangle(s) = %g (error=%g)"%(i, area, abs(area-2)))
```
````{admonition} Exercise
:class: dropdown, tip
Create a log-log plot of error against the number of subintervals:
```python
# Create a list of interval sizes to test
interval_sizes_M = [1, 2, 4, 8, 16, 32, 100, 1000]
# Initialise an array to store the errors
errors_M = np.zeros_like(interval_sizes_M, dtype='float64')
# Loop over the list of interval sizes, compute and store errors
for (i, number_intervals) in enumerate(interval_sizes_M):
area = midpoint_rule(0, np.pi, f, number_intervals)
errors_M[i] = abs(area-2)
# Plot how the errors vary with interval size
fig = plt.figure(figsize=(5, 5))
ax1 = plt.subplot(111)
ax1.loglog(interval_sizes_M, errors_M, 'bo-', lw=2)
ax1.set_xlabel('log(no. of intervals)', fontsize=16)
ax1.set_ylabel('log(error)', fontsize=16)
ax1.set_title('Convergence plot for $\sin$ integration\nwith the midpoint rule',
fontsize=16)
from myst_nb import glue
glue("midpoint_conv_fig", fig, display=False)
plt.show()
```
```{glue:} midpoint_conv_fig
```
````
```
# Create a list of interval sizes to test
interval_sizes_M = [1, 2, 4, 8, 16, 32, 100, 1000]
# Initialise an array to store the errors
errors_M = np.zeros_like(interval_sizes_M, dtype='float64')
# Loop over the list of interval sizes, compute and store errors
for (i, number_intervals) in enumerate(interval_sizes_M):
area = midpoint_rule(0, np.pi, f, number_intervals)
errors_M[i] = abs(area-2)
# Plot how the errors vary with interval size
fig = plt.figure(figsize=(5, 5))
ax1 = plt.subplot(111)
ax1.loglog(interval_sizes_M, errors_M, 'bo-', lw=2)
ax1.set_xlabel('log(no. of intervals)', fontsize=16)
ax1.set_ylabel('log(error)', fontsize=16)
ax1.set_title('Convergence plot for $\sin(x)$ integration\nwith the midpoint rule',
fontsize=16)
from myst_nb import glue
glue("midpoint_conv_fig", fig, display=False)
plt.show()
```
**Observations:**
* With one rectangle, we are simply finding the area of a box of shape \\(\pi \times 1\\), where \\(\pi\\) is the width of the rectangle and \\(1\\) is the value of the function evaluated at the midpoint, \\(\pi/2\\). So of course the result is \\(\pi\\).
* As we increase the number of subintervals, or rectangles, we increase the accuracy of our area. We can observe from the slope of the log-log plot of error against number of subintervals that the error is a quadratic function of the inverse of the number of subintervals (or equivalently is quadratically dependent on the spacing between the points - the interval size). This demonstrates that (for this particular example at least), the method demonstrates second-order accuracy - if we halve the interval size the error goes down by a factor of 4!
* The simplicity of this method is its weakness, as rectangles (i.e. a flat top) are rarely a good approximation for the shape of a smooth function.
* We want to use as few shapes as possible to approximate our function, because each additional rectangle is one extra time round the loop, which includes its own operations as well as an extra evaluation of the function, and hence increases the overall computational cost.
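The second-order claim can be checked directly by fitting a straight line to the log-log error data (a sketch that re-implements the midpoint rule in vectorized form so this cell stands alone):

```python
import numpy as np

def midpoint_vectorized(a, b, f, n):
    """Midpoint quadrature of f over [a, b] with n equal subintervals."""
    h = (b - a) / n
    midpoints = a + h * (np.arange(n) + 0.5)
    return h * np.sum(f(midpoints))

ns = np.array([4, 8, 16, 32, 64, 128])
errors = np.array([abs(midpoint_vectorized(0, np.pi, np.sin, n) - 2.0) for n in ns])

# slope of log(error) against log(n); a value near -2 indicates second-order accuracy
slope = np.polyfit(np.log(ns), np.log(errors), 1)[0]
print(round(slope, 2))  # approximately -2
```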
(nm_trapezoid_rule)=
## Trapezoid rule
As previously stated, the slices we use do not have to be rectangles, they can also be trapezoids. Rectangle rule is very similar to the **trapezoid rule** except for one small difference.
For the trapezoid rule, the width of the subinterval will be multiplied by
\\[\frac{f(x_i) + f(x_{i+1})}{2}.\\]
For the trapezoid rule we will use the subscript \\(T\\). If we change the shape of the rectangle to a trapezoid (i.e. the top of the shape now being a linear fit defined by the values of the function at the two end points of the subinterval, rather than the constant value used in the midpoint rule), we arrive at the trapezoid, or trapezoidal, rule.
The trapezoid rule approximates the integral by the area of a trapezoid with base \\((x_{i+1}-x_i)\\) and the left- and right-hand-sides equal to the values of the function at the two end points.
In this case the area of the shape approximating the integral over one subinterval, is given by:
\\[I_T^{(i)} := (x_{i+1}-x_i) \times
\left( \frac {f\left ( x_{i+1}\right ) + f \left (x_{i} \right )} {2} \right)
\quad\text{for}
\quad 0\le i \le n-1.\\]
The trapezoidal estimate of \\(I\\) then simply involves summing up over all the subintervals:
\\[I_T := \sum_{i=0}^{n-1}\left(\frac{f(x_{i+1}) + f(x_{i})}{2}\right )(x_{i+1}-x_i).\\]
Let's write some code to plot the idea and compute an estimate of the integral.
```
# This is a matplotlib function that allows us to plot polygons
from matplotlib.patches import Polygon
# Get the value of pi from numpy and
# generate equally spaced values from 0 to pi.
x = np.linspace(0, np.pi, 100)
y = f(x)
# plot
fig = plt.figure(figsize=(10, 4))
ax1 = plt.subplot(111)
ax1.plot(x, y, 'b', lw=2)
ax1.margins(0.1)
# Label axis.
ax1.set_xlabel('$x$', fontsize=14)
ax1.set_ylabel('$\sin(x)$', fontsize=14)
ax1.set_title('Approximating function with trapezoids', fontsize=14)
# Overlay a grid.
ax1.grid(True)
number_intervals = 5
xi = np.linspace(0, np.pi, number_intervals+1)
I_T = 0.0
for i in range(number_intervals):
ax1.add_patch(Polygon(np.array([[xi[i], 0], [xi[i], f(xi[i])], [
xi[i+1], f(xi[i+1])], [xi[i+1], 0]]), closed=True, fill=False, ls='--', color='k', lw=2))
I_T += ((f(xi[i+1]) + f(xi[i]))/2)*(xi[i+1] - xi[i])
ax1.set_title('The sum of the areas of the trapezoids is $I_T =$ {:.12f}.'.format(I_T),
fontsize=14)
plt.show()
```
For our pictorial example used above, the approximation looks like it should be more accurate than the midpoint rule:
<p style="text-align:center;"><img src="http://upload.wikimedia.org/wikipedia/commons/thumb/0/03/Integration_trapezoid.svg/340px-Integration_trapezoid.svg.png" width="600px"></p>
The tops of the shapes (now trapezoids) are approximating the variation of the function with a linear function, rather than a flat (constant) function. This looks like it should give more accurate results, but see below.
Note that numpy has a function for the trapezoid rule, [`numpy.trapz`](http://docs.scipy.org/doc/numpy/reference/generated/numpy.trapz.html), but we'll make our own that works in a similar way to our midpoint rule function.
```
def trapezoidal_rule(a, b, function, number_intervals=10):
"""Our implementation of the trapezoidal quadrature rule.
Note that as discussed in the lecture this version of the implementation
performs redundant function evaluations - see the composite implementation
in the homework for a more efficient version.
"""
interval_size = (b - a)/number_intervals
assert interval_size > 0
assert type(number_intervals) == int
I_T = 0.0
# Loop to create each trapezoid
# note this function takes a slightly different approach to Midpoint
# (a for loop rather than a while loop) to achieve the same thing
for i in range(number_intervals):
# Set the start of this interval
this_bin_start = a + (interval_size * i)
# Find the area of the current trapezoid and add it to the running total
I_T += interval_size * \
(function(this_bin_start)+function(this_bin_start+interval_size))/2.0
# Return our running total result
return I_T
```
We can test the function in a similar way:
```
print("The exact area found by direct integration = 2")
for i in (1, 2, 10, 100, 1000):
area = trapezoidal_rule(0, np.pi, np.sin, i)
print("Area %g trapezoid(s) = %g (error=%g)"%(i, area, abs(area-2)))
```
## Error analysis
It is important to understand the errors of the numerical integration method we are using. With a limited number of slices, the area covered by the slices from the trapezoid rule or the midpoint rule will be slightly different from the actual area under the curve. Of course, if we increase the number of slices the error becomes smaller; however, we also increase the computational cost.
A good numerical integration method should achieve a small error with few slices, i.e. with little computational effort. Thus, to judge whether a numerical integration method is good or bad, we need to analyse its errors — or, more precisely, how the errors change with the number of slices.
A good method will have a rapid decrease in the error when increasing the number of slices, while a bad method will have a slow decrease in the error when increasing the number of slices.
The accuracy of a quadrature rule (e.g. the midpoint rule, the trapezoidal rule, etc.) is predicted by examining its behaviour when applied to polynomials.
We say that the _degree of accuracy_ or the _degree of precision_ of a quadrature rule is equal to \\(M\\) if it is exact for all polynomials of degree up to and including \\(M\\), but not exact for some polynomial of degree \\(M+1\\).
```{margin} Note
Test your own code on function \\(x^2\\) to demonstrate that midpoint and trapezoid rule won't give the exact solution.
```
Clearly both the midpoint and trapezoid rules give the exact result for constant and linear functions, but they are not exact for quadratics. Therefore, they have a degree of precision of 1 (remember that the error analysis showed them to be 2nd-order *accurate* — a different concept from degree of precision!).
### Concave-down functions
The first half of a sine wave is concave-down, and we notice from the plot that the trapezoidal rule consistently _underestimates_ the area under the curve, since the straight line segments always lie below the curve.
In contrast, the mid-point rule will have parts of each rectangle above and below the curve, hence to a certain extent the _errors will cancel_ each other out.
```{glue:} concave_fig
```
This is why, *for this particular example*, the errors in the mid-point rule turn out to be approximately half those in the trapezoidal rule.
While this result turns out to be *generally* true for smooth functions, we can always come up with (counter) examples where the trapezoid rule will win.
Taylor series analysis can be used to formally construct upper bounds on the quadrature error for both methods.
We know that the error when integrating constant and linear functions is zero for our two rules, so let's first consider an example of integrating a quadratic polynomial.
We know analytically that
\\[\int_{0}^{1} x^{2}\,dx = \left.\frac{1}{3}x^3\right|_0^1=\frac {1}{3}.\\]
Numerically, the midpoint rule on a single interval gives
\\[ I_M = 1 \left(\frac {1}{2}\right)^{2} = \frac {1}{4},\\]
while the trapezoidal rule gives
\\[ I_T = 1 \frac {0+1^{2}}{2} = \frac {1}{2}.\\]
The error for \\(I_M\\) is therefore \\(1/3 - 1/4 = 1/12\\), while the error for \\(I_T\\) is \\(1/3 - 1/2 = -1/6\\).
Therefore, the midpoint rule is twice as accurate as the trapezoid rule:
\\[|E_M| = \frac{1}{2} |E_T|,\\]
where \\(|E|\\) indicates the error (the absolute value of the difference from the exact solution).
This is the case for this simple example, and we can see from the actual error values printed above that it also appears to be approximately true for the sine case (which is not a simple polynomial) as well.
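These single-interval values are easy to verify numerically. A quick self-contained check, using throwaway one-interval versions of the two rules (not the functions defined earlier in this notebook):

```python
def midpoint_one(a, b, f):
    # one rectangle of height f evaluated at the midpoint
    return (b - a) * f((a + b) / 2)

def trapezoid_one(a, b, f):
    # one trapezoid using the two end-point values
    return (b - a) * (f(a) + f(b)) / 2

I_M = midpoint_one(0, 1, lambda x: x**2)   # expect 1/4
I_T = trapezoid_one(0, 1, lambda x: x**2)  # expect 1/2
E_M = 1/3 - I_M   # 1/12
E_T = 1/3 - I_T   # -1/6
print(I_M, I_T, abs(E_M) / abs(E_T))  # error-magnitude ratio is 1/2
```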
```
# Get the value of pi from numpy and generate equally spaced values from 0 to pi.
x = np.linspace(0, np.pi/2, 100)
y = f(x)
# plot
fig, axes = plt.subplots(2, 1, figsize=(10, 8), sharex=True, sharey=True)
ax1 = axes[0]
ax1.plot(x, y, 'b', lw=2)
# Label axis.
ax1.set_ylabel('$f(x)=\sin(x)$', fontsize=14)
ax1.set_title('Approximating a function with rectangles', fontsize=14)
# Overlay a grid.
ax1.grid(True)
number_intervals = 5
xi = np.linspace(0, np.pi/2, number_intervals+1)
I_M = 0.0
for i in range(number_intervals):
ax1.add_patch(Rectangle((xi[i], 0.0), (xi[i+1] - xi[i]),
f((xi[i+1]+xi[i])/2), fill=False, ls='--', color='k', lw=2))
I_M += f((xi[i+1]+xi[i])/2)*(xi[i+1] - xi[i])
ax1.set_title('The sum of the areas of the rectangles is $I_M =$ {:.12f}.'.format(I_M),
fontsize=14)
# plot
ax2 = axes[1]
ax2.plot(x, y, 'b', lw=2)
# Label axis.
ax2.set_xlabel('x', fontsize=14)
ax2.set_ylabel('$\sin(x)$', fontsize=14)
ax2.set_title('Approximating function with trapezoids', fontsize=14)
# Overlay a grid.
ax2.grid(True)
number_intervals = 5
xi = np.linspace(0, np.pi/2, number_intervals+1)
I_T = 0.0
for i in range(number_intervals):
ax2.add_patch(Polygon(np.array([[xi[i], 0], [xi[i], f(xi[i])], [
xi[i+1], f(xi[i+1])], [xi[i+1], 0]]), closed=True, fill=False, ls='--', color='k', lw=2))
I_T += ((f(xi[i+1]) + f(xi[i]))/2)*(xi[i+1] - xi[i])
ax2.set_title('The sum of the areas of the trapezoids is $I_T =$ {:.12f}.'.format(I_T),
fontsize=14)
plt.subplots_adjust(hspace=0.4)
glue("concave_fig", fig, display=False)
plt.show()
```
```{margin}
<img src="images_nm1/SIMPSONS.png">
```
(nm_simpsons_rule)=
## Simpson's rule
For our half sine wave, the midpoint (rectangle) method overestimates the integral by about 0.4%, while the trapezoid method underestimates it by about 0.9% — roughly twice the magnitude of the midpoint error, and of opposite sign. Could we combine the two to obtain something more accurate?
Knowing the error estimates from the two rules explored so far opens up the potential for us to combine them in an appropriate manner to create a new quadrature rule, generally more accurate than either one separately.
Suppose \\(I_S\\) indicates an unknown, but more accurate, estimate of the integral over an interval. Then, as seen above, as \\(I_T\\) has an error that is approximately \\(-2\\) times the error in \\(I_M\\), the following relation must hold approximately:
\\[I_S - I_T \approx -2 \left ( I_S - I_M\right ).\\]
This follows from the fact that \\(I - I_T \approx -2 \left ( I - I_M\right )\\), provided that \\(I_S\\) is closer to \\(I\\) than either of the other two estimates. Replacing this approximately equals sign with actual equality defines \\(I_S\\) for us in terms of things we know.
We can rearrange this to give an expression for \\(I_S\\) that yields a more accurate estimate of the integral than either \\(I_M\\) or \\(I_T\\):
\\[I_S := \frac{2}{3}I_M + \frac{1}{3}I_T.\\]
We combined twice the overestimate from the rectangle method and once the underestimate from the trapezoid method, and then divided everything by 3 to obtain something more accurate! What we're doing here is using the fact that we know something about (the *leading order* behaviour of the) two errors, and we can therefore combine them to cancel this error to a certain extent.
This estimate will generally be more accurate than either \\(M\\) or \\(T\\) alone. The error won't be zero in general as we're only cancelling out the leading order term in the error, but a consequence is that we will be left with higher-degree terms in the error expansion of the new quadrature rule which should be smaller (at least in the asymptotic limit), and converge faster.
The resulting quadrature method in this case is known as [**Simpson's rule**](http://en.wikipedia.org/wiki/Simpson%27s_rule). Let's expand the Simpsons rule by substituting in what we know about the rectangle rule and the trapezoid rule:
$$
\begin{align*}
I_S &:= \frac{2}{3}I_M + \frac{1}{3}I_T \\[5pt]
&= \frac{2}{3} (b-a)f\left ( \frac{a+b}{2}\right ) + \frac{1}{3}(b-a)\frac{(f(a) + f(b))}{2} \\[5pt]
& = \frac{(b-a)}{6}\left( f \left ( a\right ) + 4f \left ( c\right ) + f\left ( b\right )\right),
\end{align*}
$$
where \\(a\\) and \\(b\\) are the end points of an interval and \\(c = \left ( a+b\right )/2\\) is the midpoint.
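We can sanity-check this algebra numerically on a single interval: the weighted combination of the midpoint and trapezoid estimates should match the three-point formula exactly (up to rounding). A small self-contained check using \\(f(x)=\sin(x)\\) on \\([0,\pi]\\):

```python
import numpy as np

a, b = 0.0, np.pi
f = np.sin
c = (a + b) / 2

I_M = (b - a) * f(c)                    # midpoint estimate
I_T = (b - a) * (f(a) + f(b)) / 2       # trapezoid estimate
I_combo = (2/3) * I_M + (1/3) * I_T     # weighted combination
I_simpson = ((b - a) / 6) * (f(a) + 4 * f(c) + f(b))  # Simpson's formula

print(I_combo, I_simpson)  # should agree to machine precision
```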
Note that an alternate derivation of the same rule involves fitting a *quadratic function* (i.e. \\(P_2(x)\\) rather than the constant and linear approximations already considered) that interpolates the integrand at the two end points of the interval, \\(a\\) and \\(b\\), as well as at the midpoint, \\(c = \left ( a+b\right )/2\\), and calculating the integral under that polynomial approximation.
Let's plot what this method is doing and compute the integral for our sine case.
```
# Get the value of pi from numpy and generate equally spaced values from 0 to pi.
x = np.linspace(0, np.pi, 100)
y = f(x)
# plot
fig = plt.figure(figsize=(10, 4))
ax1 = plt.subplot(111)
ax1.plot(x, y, 'b', lw=2)
ax1.margins(0.1)
# Label axis.
ax1.set_xlabel('x', fontsize=16)
ax1.set_ylabel('sin(x)', fontsize=16)
# Overlay a grid.
ax1.grid(True)
number_intervals = 5
xi = np.linspace(0, np.pi, number_intervals+1)
I_S = 0.0
for i in range(number_intervals):
# Use a non-closed Polygon to visualise the straight sides of each interval
ax1.add_patch(Polygon(np.array([[xi[i], f(xi[i])], [xi[i], 0], [xi[i+1], 0], [xi[i+1], f(xi[i+1])]]),
closed=False, fill=False, ls='--', color='k', lw=2))
# Add the quadratic top - fit a quadratic using numpy
poly_coeff = np.polyfit((xi[i], (xi[i] + xi[i+1])/2.0, xi[i + 1]),
(f(xi[i]), f((xi[i] + xi[i+1])/2.0), f(xi[i+1])), 2)
# Plot the quadratic using 20 plotting points within the interval
ax1.plot(np.linspace(xi[i], xi[i+1], 20),
f(np.linspace(xi[i], xi[i+1], 20)), ls='--', color='k', lw=2)
    # Add the area of the interval shape to our running total using Simpson's formula
I_S += ((xi[i+1] - xi[i])/6.) * (f(xi[i]) + 4 *
f((xi[i] + xi[i+1])/2.0) + f(xi[i+1]))
ax1.set_title("The Simpson's rule approximation is $I_s =$ {:.12f}.".format(I_S),
fontsize=14)
plt.show()
```
It looks much closer to the actual function:
<p style="text-align:center;"><img src="http://upload.wikimedia.org/wikipedia/commons/5/50/Integration_simpson.png" width="600px"></p>
Let's make a function to test it out.
```
def simpsons_rule(a, b, function, number_intervals=10):
""" Function to evaluate Simpson's rule.
Note that this implementation takes the function as an argument,
and evaluates this at the midpoint of subintervals in addition to the
end point. Hence additional information is generated and used through
additional function evaluations.
This is different to the function/implementation available with SciPy
where discrete data only is passed to the function.
Bear this in mind when comparing results - there will be a factor of two
in the definition of "n" we need to be careful about!
Also note that this version of the function performs redundant function
evaluations - see the **composite** implementation below.
"""
interval_size = (b - a)/number_intervals
assert interval_size > 0
assert type(number_intervals) == int
I_S = 0.0
    # Loop to evaluate Simpson's formula over each interval
for i in range(number_intervals):
# Find a, c, and b
this_bin_start = a + interval_size * (i)
this_bin_mid = this_bin_start + interval_size/2
this_bin_end = this_bin_start + interval_size
# Calculate the rule and add to running total.
I_S += (interval_size/6) * (function(this_bin_start) +
4 * function(this_bin_mid) + function(this_bin_end))
# Return our running total result
return I_S
```
Let's test the function:
```
print("The area found by direct integration = 2")
for i in (1, 2, 10, 100, 1000):
area = simpsons_rule(0, np.pi, np.sin, i)
print("Area %g Simpson's interval(s) = %g (error=%g)"%(i, area, abs(area-2)))
```
For this simple function you should find far smaller errors, and which drop much more rapidly with smaller \\(h\\) (or more sub-intervals).
**Observations:**
- The errors are lower than for the midpoint and trapezoidal rules, and the method converges more rapidly - i.e. the relative improvement only gets better for more subintervals.
- This expression now integrates up to cubics exactly (by construction), so it is of order 4 (if we halve the interval size, the error goes down by a factor of \\(2^4=16\\)).
The convergence can be confirmed in the plot below:
```{glue:} simpson_conv_fig
```
- The degree of accuracy or precision of this method is 3.
- Simpson's rule integrates to cubics exactly, so since it's integrating exactly, cubics cannot contribute to the error. Only the quartic (4th) order terms contribute to the error, so it's 4th order accurate.
- We're getting down to errors close to machine precision when we use 1000 subintervals. Remember: standard double-precision floating-point numbers carry only about 16 significant decimal digits, so once the quadrature error reaches that level, adding more subintervals cannot reduce it further — the computer can no longer distinguish the error from zero. In practice we may also have only a limited number of data points, or want to minimise the number of function evaluations to well below this relatively high number. For problems with lots of variation, and/or in higher dimensions, there is therefore still work to do in improving our quadrature methods.
- As was the case with our first trapezoidal implementation, we are performing unnecessary function evaluations here; we can fix this through a so-called *composite* version of the rule, which returns the same result but with fewer function evaluations.
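The precision floor mentioned above comes from the double-precision floating-point format itself, not from the quality of the hardware. A quick illustration of why a sufficiently small error becomes indistinguishable from zero:

```python
import numpy as np

eps = np.finfo(np.float64).eps  # distance from 1.0 to the next representable float
print(eps)  # about 2.22e-16

# An 'error' smaller than half a unit in the last place simply vanishes:
exact = 2.0
print(exact + eps / 4 == exact)  # True: the perturbation is unrepresentable
```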
```
# Now let's test the Simpson's rule function.
#print("The exact area found by direct integration = 2")
interval_sizes_S = [1, 2, 4, 8, 16, 32, 100, 1000]
errors_S = np.zeros_like(interval_sizes_S, dtype='float64')
areas_S = np.zeros_like(interval_sizes_S, dtype='float64')
for (i, number_intervals) in enumerate(interval_sizes_S):
areas_S[i] = simpsons_rule(0, np.pi, f, number_intervals)
errors_S[i] = abs(areas_S[i] - 2)
#print('Area {:<4d} for Simpson = {:.16f} (error = {:.9e})'.format(
# number_intervals, areas_S[i], errors_S[i]))
#print('\nVerification check: These are the corresponding values computed using SciPy'
# ' (BUT read the comment in the code above!)')
# note that since our function above takes the function and can evaluate it wherever it likes,
# it essentially doubles the number of intervals by evaluating the function at the mid points.
# The scipy function takes in discrete data points, and hence fits a polynomial across two
# intervals.
# Therefore to get the same values we need to explicitly double the number of intervals in the
# function call:
# instead of passing it 'number_intervals' points, we pass it '2*number_intervals + 1' points
# Also the SciPy implementation obviously needs an even number of intervals (equivalently an ODD
# number of data points)
# Note we didn't have this issue with the SciPy version of trapezoidal as both the function and
# data point passing versions of the method only need two (end) points per interval.
for (i, number_intervals) in enumerate(interval_sizes_S):
area_scipy_simpson = si.simps(f(np.linspace(0, np.pi, 2*number_intervals + 1)),
np.linspace(0, np.pi, 2*number_intervals + 1))
#print('{0:.16f}, {1:.16e}'.format(area_scipy_simpson, abs(area_scipy_simpson - areas_S[i])))
# plot
fig = plt.figure(figsize=(5, 5))
ax1 = plt.subplot(111)
ax1.loglog(interval_sizes_S, errors_S, 'ro-', lw=2, label='Simpson')
ax1.loglog(interval_sizes_T, errors_T, 'bo-', lw=2, label='Trapezoidal')
ax1.loglog(interval_sizes_M, errors_M, 'ko-', lw=2, label='Midpoint')
ax1.set_xlabel('log(no. of intervals)', fontsize=14)
ax1.set_ylabel('log(error)', fontsize=14)
ax1.set_title('Quadrature rule convergence', fontsize=14)
ax1.legend(loc='best', fontsize=14)
glue("simpson_conv_fig", fig, display=False)
plt.show()
```
(nm_composite_simpsons_rule)=
## Composite Simpson's rule
If we assume that our interval \\([a,b]\\) has been split up into \\(n\\) intervals (or \\(n+1\\) data points) we can save some function evaluations by writing Simpson's rule in the following form:
$$
\begin{align*}
I_{S}
& = \frac{\Delta x}{3}\left[ f \left ( x_0\right ) + 4f \left ( x_1\right ) + 2f\left ( x_2\right ) + 4f \left ( x_3\right ) + \cdots + 2 f \left ( x_{n-2}\right ) + 4 f \left ( x_{n-1}\right ) + f \left ( x_{n}\right ) \right]\\[5pt]
& = \frac{\Delta x}{3}\left[ f \left ( x_0\right ) + 2\sum_{i=1}^{n/2 - 1} f\left(x_{2i}\right) + 4\sum_{i=1}^{n/2} f\left(x_{2i-1}\right) + f \left ( x_{n}\right ) \right].
\end{align*}
$$
This is known as the [composite Simpson's rule](http://en.wikipedia.org/wiki/Simpson%27s_rule#Composite_Simpson.27s_rule),
or more precisely the *composite Simpson's 1/3 rule*.
You can find a version of Simpson's rule implemented by SciPy - [`scipy.integrate.simps`](http://docs.scipy.org/doc/scipy/reference/generated/scipy.integrate.simps.html).
Note that this way of formulating Simpson's rule (where we do not allow additional function evaluations at the midpoints of intervals - we assume we are only in a position to use the given data points) requires that \\(n\\) be even.
This way of writing the composite form in the case of \\(n=2\\) is equivalent to the formula over \\([a,b]\\) that introduced the additional midpoint location \\(c\\).
Let's implement this rule:
```
def simpsons_composite_rule(a, b, function, number_intervals=10):
"""Function to evaluate the composite Simpson's rule only using
function evaluations at (number_intervals + 1) points.
This implementation requires that the number of subintervals (number_intervals) be even
"""
assert number_intervals % 2 == 0, "number_intervals is not even"
interval_size = (b - a) / number_intervals
# Start with the two end member values
I_cS2 = function(a) + function(b)
# Add in those terms with a coefficient of 4
for i in range(1, number_intervals, 2):
I_cS2 += 4 * function(a + i * interval_size)
# And those terms with a coefficient of 2
for i in range(2, number_intervals-1, 2):
I_cS2 += 2 * function(a + i * interval_size)
return I_cS2 * (interval_size / 3.0)
```
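The 1, 4, 2, 4, …, 2, 4, 1 weight pattern also invites a vectorised variant: build the weight vector once and take a dot product with the function values. A sketch (assuming, as above, that `number_intervals` is even):

```python
import numpy as np

def simpsons_composite_vectorised(a, b, function, number_intervals=10):
    assert number_intervals % 2 == 0, "number_intervals must be even"
    x = np.linspace(a, b, number_intervals + 1)
    w = np.ones(number_intervals + 1)
    w[1:-1:2] = 4.0   # odd-indexed interior points get weight 4
    w[2:-1:2] = 2.0   # even-indexed interior points get weight 2
    h = (b - a) / number_intervals
    return (h / 3.0) * np.dot(w, function(x))

print(simpsons_composite_vectorised(0, np.pi, np.sin, 100))  # close to 2
```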
Let's test the rule:
```
print("The area found by direct integration = 2")
for i in (2, 10, 100, 1000):
area = simpsons_composite_rule(0, np.pi, np.sin, i)
    print("Area %g interval(s) = %g (error=%g)"%(i, area, abs(area-2)))
```
This is a slight improvement for a simple function like \\(\sin\\), but will be much more of an improvement for functions which oscillate more, in a relative sense compared to the size of our bins.
(nm_weddles_rule)=
## Weddle's rule
We noted above that Simpson's rule is fourth-order accurate. Suppose we take an approximation to \\(I\\) using \\(n\\) subintervals with Simpson's rule and call the result \\(I_S\\), and then apply Simpson's rule with double the number of intervals (\\(2n\\)) and call the result \\(I_{S_2}\\).
Then we have two estimates for the integral where we expect \\(I_{S_2}\\) to be approximately \\(2^4=16\\) times more accurate than \\(I_S\\). In particular, we expect the lowest (i.e. the leading) order error term in \\(I_{S_2}\\) to be precisely one sixteenth that of \\(I_S\\).
Similar to how we derived Simpson's rule by combining what we knew of the error for the midpoint and trapezoidal rules, with this knowledge we can combine the two estimates from Simpson's rule to derive an even more accurate estimate of \\(I\\).
Let's call this more accurate rule \\(I_W\\), which we can find by solving:
\\[I_W - I_S = 16 \left ( I_W - I_{S_2} \right ),\\]
for \\(I_W\\).
With a bit of manipulation,
$$\begin{align*}
& \;\;\; I_W - I_S = 16 \left ( I_W - I_{S_2} \right ) \\[5pt]
\implies & \;\;\; I_W - I_S = 16 I_W - 16 I_{S_2} \\[5pt]
\implies & \;\;\; 15 I_W = 16 I_{S_2} - I_S \\[5pt]
\implies & \;\;\; 15 I_W = 15 I_{S_2} + (I_{S_2} - I_S) ,
\end{align*}$$
we get this expression
\\[ I_W = I_{S_2} + \frac {\left (I_{S_2} - I_S \right )}{15}.\\]
This is known as **Weddle's rule**, or the extrapolated Simpson's rule because it uses two different values for the interval size and extrapolates from these two to obtain an even more accurate result.
Making a function for this rule is easy as we can just call our Simpson's rule functions with two values for the number of intervals.
### Implementation
We can implement this by calling already created functions for composite Simpson's rule:
```
def weddles_rule(a, b, function, number_intervals=10):
""" Function to evaluate Weddle's quadrature rule using
appropriate calls to the composite_simpson function
"""
S = simpsons_composite_rule(a, b, function, number_intervals)
S2 = simpsons_composite_rule(a, b, function, number_intervals*2)
return S2 + (S2 - S)/15.
```
We can test it in a similar way:
```
for i in (2, 10, 100, 1000):
area = weddles_rule(0, np.pi, np.sin, i)
print("Area with %g Weddle's interval(s) = %g (error=%g)"%(i, area, abs(area-2)))
```
Our final result is much more accurate for fewer required bins:
```
# Now let's test the Weddle's rule function.
#print("The exact area found by direct integration = 2")
interval_sizes_W = [2, 4, 8, 16, 32, 100, 1000]
errors_W = np.zeros_like(interval_sizes_W, dtype='float64')
for (i, number_intervals) in enumerate(interval_sizes_W):
area = weddles_rule(0, np.pi, f, number_intervals)
errors_W[i] = abs(area-2)
#print('Area {0:<4d} interval(s), {1:<4d} function evaluations for Weddle = {2:.16f} (error = {3:.16e})'.format(
# number_intervals, ((number_intervals+1)+(2*number_intervals+1)), area, errors_W[i]))
# plot
fig = plt.figure(figsize=(5, 5))
ax1 = plt.subplot(111)
ax1.loglog(interval_sizes_W, errors_W, 'go-', lw=2, label='Weddle')
# need to run the other quadrature rules to allow the following 3 lines
ax1.loglog(interval_sizes_S, errors_S, 'ro-', lw=2, label='Simpson')
ax1.loglog(interval_sizes_T, errors_T, 'bo-', lw=2, label='Trapezoidal')
ax1.loglog(interval_sizes_M, errors_M, 'ko-', lw=2, label='Midpoint')
ax1.set_xlabel('log(no. of intervals)', fontsize=14)
ax1.set_ylabel('log(error)', fontsize=14)
ax1.set_title('Quadrature rule convergence', fontsize=14)
ax1.legend(loc='best', fontsize=12)
plt.show()
```
## Other rules
Note that the above technique of using the same rule, but with different values for the interval size, \\(h\\), to derive a more accurate estimate of the integral is an example of what is more generally called *Richardson extrapolation*. Performing this approach using the trapezoid rule as the starting point leads to what is termed *Romberg integration*.
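A first Romberg step is easy to sketch: take the trapezoidal estimate with \\(n\\) and \\(2n\\) intervals and combine them as \\((4T_{2n} - T_n)/3\\), which cancels the leading \\(O(h^2)\\) error term (and in fact reproduces composite Simpson's rule on \\(2n\\) intervals). A self-contained sketch:

```python
import numpy as np

def trapezoid(a, b, f, n):
    """Composite trapezoidal rule with n equal subintervals."""
    x = np.linspace(a, b, n + 1)
    h = (b - a) / n
    return h * (f(x[0]) / 2 + np.sum(f(x[1:-1])) + f(x[-1]) / 2)

a, b, f, n = 0.0, np.pi, np.sin, 8
T_n, T_2n = trapezoid(a, b, f, n), trapezoid(a, b, f, 2 * n)
romberg_step = (4 * T_2n - T_n) / 3  # one Richardson extrapolation

print(abs(T_2n - 2), abs(romberg_step - 2))  # extrapolated error is far smaller
```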
Taking the idea behind Simpson's rule, which fits a quadratic Lagrange interpolating polynomial to *equally spaced* points in the interval, and extending it to any order of Lagrange polynomial leads to the [*Newton-Cotes* family of quadrature rules](https://en.wikipedia.org/wiki/Newton%E2%80%93Cotes_formulas).
Note finally, that even wider families exist where the function being integrated is evaluated at non-equally-spaced points. And of course for practical application these ideas need to be extended to more than one dimension.
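As a taster of those non-equally-spaced families, NumPy provides Gauss–Legendre nodes and weights via `numpy.polynomial.legendre.leggauss`; after mapping them from \\([-1,1]\\) onto \\([a,b]\\), just five evaluations of \\(\sin\\) already beat the rules above by a wide margin. A sketch, not tied to anything earlier in this notebook:

```python
import numpy as np

def gauss_legendre(a, b, f, n_points):
    # nodes and weights on the reference interval [-1, 1]
    x, w = np.polynomial.legendre.leggauss(n_points)
    # affine map of the nodes onto [a, b]
    t = 0.5 * (b - a) * x + 0.5 * (a + b)
    return 0.5 * (b - a) * np.dot(w, f(t))

area = gauss_legendre(0, np.pi, np.sin, 5)
print(area, abs(area - 2))  # tiny error for only 5 function evaluations
```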
# Data Preparation
```
import html
from functools import partial
import bleach
import numpy as np
import pandas as pd
from inflection import parameterize
from constants import OUTPUT_DATA, RAW_DATA, cols, df_types
# import dtale
```
- https://pandas.pydata.org/pandas-docs/stable/user_guide/integer_na.html
- https://en.wikipedia.org/wiki/Administrative_divisions_of_Nepal
- https://pandas.pydata.org/pandas-docs/stable/user_guide/text.html
- https://gadm.org/download_country.html
- https://www.vizforsocialgood.com/join-a-project/2021/12/28/build-up-nepal
- https://www.buildupnepal.com/project-map/
- https://docs.python.org/3.8/library/json.html
- https://inflection.readthedocs.io/en/latest/index.html#module-inflection
- https://github.com/mozilla/bleach
- https://bleach.readthedocs.io/en/latest/clean.html
- https://bleach.readthedocs.io/en/latest/clean.html#stripping-markup-strip
- http://www.jsondiff.com/
- https://docs.python.org/3.7/library/html.html
- https://github.com/ChristophLabacher/fix-punctuation
## Setup
```
df = pd.read_excel(
RAW_DATA,
# index_col=0,
index_col=None,
sheet_name="Worksheet",
verbose=True,
skipfooter=1,
# dtype=str,
dtype=df_types,
usecols=cols,
)
df.columns = df.columns.str.strip()
```
## Data Analysis
```
df.shape
df.head()
df.tail()
df.dtypes
# d = dtale.show(df)
# d
# d.open_browser()
# df["Province Name"].isna().sum()
df.isna().sum()
# For numeric columns, should NaN be converted to 0?
df["Status"].value_counts(dropna=False)
df["District"].value_counts(dropna=False)
df["Province Name"].value_counts(dropna=False)
df["Type"].value_counts(dropna=False)
# There are some latitudes and longitudes equal to 0.
df["Latitude"].value_counts(dropna=False)
df["Longitude"].value_counts(dropna=False)
df["Gender"].value_counts(dropna=False)
# Years must be converted to strings.
df["Start Date Name"].value_counts(dropna=False)
# (
# (df["Jobs (production)"].fillna(0) + df["Jobs (construction)"].fillna(0))
# == df["Total jobs"]
# ).sum() == df.shape[0]
df["Name"].value_counts(dropna=False).head()
# https://en.wikipedia.org/wiki/Nawalparasi_District
df.query("Name == 'Sangita Shrestha'")
# https://docs.python.org/3/library/re.html#regular-expression-syntax
# https://stackoverflow.com/a/28312011
# https://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.Series.str.contains.html
descriptions = df.loc[
    df["Description"].str.contains(r"\s{2,}", na=False), "Description"
].to_list()
print(len(descriptions))
# descriptions
# df["Description"].to_list()
```
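`Series.str.contains` applies the same regular-expression semantics as Python's `re` module, so the multi-space check can be previewed on plain strings — a toy sketch with made-up text:

```python
import re

pattern = re.compile(r"\s{2,}")  # two or more consecutive whitespace characters

samples = [
    "A single-spaced description.",
    "Runs  of   spaces need collapsing.",
]
flagged = [s for s in samples if pattern.search(s)]
print(flagged)  # only the second sample matches
```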
## Data Processing
```
df["Start Date Name"] = df["Start Date Name"].str.removesuffix(".0")
# df["Start Date Name"].value_counts(dropna=False).head()
df.columns = map(partial(parameterize, separator="_"), df.columns)
df.columns.to_list()
# df["description"] = df["description"].str.strip()
# https://docs.python.org/3/library/re.html#regular-expression-syntax
# df["description"] = df["description"].replace({"\s{2,}": " "}, regex=True)
df["description"] = df["description"].replace({"[ \t\r\f\v]{2,}": " "}, regex=True)
# Remove `_x000b_` and `<br />`
def clean_tags(d):
# https://github.com/mozilla/bleach/issues/192
# https://stackoverflow.com/a/21928583
return html.unescape(bleach.clean(d, strip=True)) if not pd.isna(d) else d
df["description"] = df["description"].apply(clean_tags)
# df["description"] = df["description"].str.replace("<br />", "", regex=False)
df["description"] = df["description"].str.replace("_x000b_", "", regex=False)
df["description"] = df["description"].str.replace(r"\s*\n\s*", "\n", regex=True)
df["description"] = df["description"].str.strip()
df["description"] = np.where(
    df["description"].str.contains(r'[\w"]$', na=False, regex=True),
df["description"] + ".",
df["description"],
)
# df.head()
# df["description"].to_list()
status_map = {"Closed / Sold": "Closed/Sold", "recently started": "Recently started"}
df["status"] = df["status"].replace(status_map)
type_map = {
"Entrepreneur": "Male entrepreneur",
"Female Entrepreneur": "Female entrepreneur",
"Returning Migrant": "Returning migrant",
"Youth Entrepreneur": "Youth entrepreneur",
"Husband wife": "Male and female entrepreneurs",
"DAG - Disadvantaged Groups": "Disadvantaged group",
}
df["type"] = df["type"].replace(type_map)
# df["status"].value_counts(dropna=False)
# df["type"].value_counts(dropna=False)
# https://en.wikipedia.org/wiki/Dang_District,_Nepal
df["district"] = df["district"].replace({"Dang Deokhuri": "Dang"})
df.sample()
# df["district"].sort_values().unique()
# Nawalparasi -> Nawalparasi East (Gandaki) or Nawalparasi West (Lumbini)
# https://en.wikipedia.org/wiki/Nawalparasi_District
# Rukum -> Rukum East (Lumbini) or Rukum West (Karnali)
# https://en.wikipedia.org/wiki/Rukum_District
df["district"] = np.where(
(df["province_name"] == "Gandaki Province") & (df["district"] == "Nawalparasi"),
"Nawalparasi East",
df["district"],
)
df["district"] = np.where(
(df["province_name"] == "Lumbini Province") & (df["district"] == "Nawalparasi"),
"Nawalparasi West",
df["district"],
)
df["district"] = np.where(
(df["province_name"] == "Lumbini Province") & (df["district"] == "Rukum"),
"Rukum East",
df["district"],
)
df["district"] = np.where(
(df["province_name"] == "Karnali Province") & (df["district"] == "Rukum"),
"Rukum West",
df["district"],
)
# df.isna().sum()
```
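The chain of `np.where` calls above could also be expressed as a single lookup keyed on (province, district) pairs. A sketch on a small hypothetical frame (the rows below are invented; only the column names are taken from the code above):

```python
import pandas as pd

# Hypothetical rows mirroring the columns used above (not the real dataset).
df = pd.DataFrame({
    "province_name": ["Gandaki Province", "Lumbini Province", "Karnali Province", "Bagmati Province"],
    "district": ["Nawalparasi", "Rukum", "Rukum", "Kathmandu"],
})

# One (province, district) -> disambiguated-name map replaces the four np.where calls.
split_map = {
    ("Gandaki Province", "Nawalparasi"): "Nawalparasi East",
    ("Lumbini Province", "Nawalparasi"): "Nawalparasi West",
    ("Lumbini Province", "Rukum"): "Rukum East",
    ("Karnali Province", "Rukum"): "Rukum West",
}
keys = zip(df["province_name"], df["district"])
df["district"] = [split_map.get(k, k[1]) for k in keys]
print(df["district"].tolist())
# -> ['Nawalparasi East', 'Rukum East', 'Rukum West', 'Kathmandu']
```

Districts not in the map pass through unchanged, so the mapping is easy to extend if more split districts turn up.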
## Output
```
# indent = 2
indent = 0
df.to_json(OUTPUT_DATA, orient="records", force_ascii=False, indent=indent)
```
---
```
import kfp
import kfp.components as comp
import kfp.dsl as dsl
from kfp.gcp import use_gcp_secret
from kfp.components import ComponentStore
from os import path
import json
cs = ComponentStore(local_search_paths=['.', '{{config.output_package}}'],
url_search_prefixes=['{{config.github_component_url}}'])
pre_process_op = cs.load_component('{{config.preprocess.component}}')
hpt_op = cs.load_component('hptune')
param_comp = cs.load_component('get_tuned_params')
train_op = cs.load_component('{{config.train.component}}')
deploy_op = cs.load_component('{{config.deploy.component}}')
@dsl.pipeline(
    name='KFP-Pipelines Example',
    description='Kubeflow pipeline generated from ai-pipeline asset'
)
def pipeline_sample(
    project_id='{{config.project_id}}',
    region='{{config.region}}',
    python_module='{{config.train.python_module}}',
    package_uri='{{config.train.python_package}}',
    dataset_bucket='{{config.bucket_id}}',
    staging_bucket='gs://{{config.bucket_id}}',
    job_dir_hptune='gs://{{config.bucket_id}}/hptune',
    job_dir_train='gs://{{config.bucket_id}}/train',
    runtime_version_train='{{config.runtime_version}}',
    runtime_version_deploy='{{config.runtime_version}}',
    hptune_config='{{config.hptune.config}}',
    model_id='{{config.deploy.model_id}}',
    version_id='{{config.deploy.version_id}}',
    common_args_hpt=json.dumps([
        {% for arg in config.hptune.args %} {% set name = arg.name %} {% set value = arg.default %} '--{{name}}', '{{value}}',
        {% endfor %} ]),
    common_args_train=json.dumps([
        {% for arg in config.train.args %} {% set name = arg.name %} {% set value = arg.default %} '--{{name}}', '{{value}}',
        {% endfor %} ]),
    replace_existing_version=True
):
    # Preprocess task
    pre_process_task = pre_process_op(
        {% for arg in config.preprocess.component_args %}
        {% set name = arg.name %}
        {{name}}={{name}},
        {% endfor %}
    )
    # HP tune task
    hpt_task = hpt_op(
        region=region,
        python_module=python_module,
        package_uri=package_uri,
        staging_bucket=staging_bucket,
        job_dir=job_dir_hptune,
        config=hptune_config,
        runtime_version=runtime_version_train,
        args=common_args_hpt,
    )
    hpt_task.after(pre_process_task)
    # Get the best hyperparameters
    param_task = param_comp(
        project_id=project_id,
        hptune_job_id=hpt_task.outputs['job_id'].to_struct(),
        common_args=common_args_train,
    )
    # Train task
    train_task = train_op(
        project_id=project_id,
        python_module=python_module,
        package_uris=json.dumps([package_uri.to_struct()]),
        region=region,
        args=str(param_task.outputs['tuned_parameters_out']),
        job_dir=job_dir_train,
        python_version='',
        runtime_version=runtime_version_train,
        master_image_uri='',
        worker_image_uri='',
        training_input='',
        job_id_prefix='',
        wait_interval='30'
    )
    # model_uri=train_task.outputs['job_dir'],
    # model_uri='gs://poc-bucket-0120/train/out/export/exporter',
    deploy_model = deploy_op(
        model_uri=train_task.outputs['job_dir'].to_struct() + '{{config.train.model_out_prefix}}',
        project_id=project_id,
        model_id=model_id,
        version_id=version_id,
        runtime_version=runtime_version_deploy,
        replace_existing_version=replace_existing_version
    )

kfp.dsl.get_pipeline_conf().add_op_transformer(use_gcp_secret('user-gcp-sa'))
client = kfp.Client(host='{{config.kfp_deployment_url}}')
client.create_run_from_pipeline_func(pipeline_sample, arguments={})
```
## Image sampling (DDPM guided diffusion, "Diffusion Models Beat GANs")
```
import argparse
import os
os.environ["CUDA_DEVICE_ORDER"]="PCI_BUS_ID" # see issue #152
os.environ["CUDA_VISIBLE_DEVICES"]="2"
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
import torch as th
import torch.distributed as dist
import datetime
from collections import namedtuple
from guided_diffusion import dist_util, logger
from guided_diffusion.script_util import (
NUM_CLASSES,
model_and_diffusion_defaults,
create_model_and_diffusion,
add_dict_to_argparser,
args_to_dict,
seed_all,
diffusion_defaults,
)
def create_argparser(log_dir, model_path, **kwargs):
    defaults = dict(
        clip_denoised=True,
        num_samples=1,
        batch_size=20,
        use_ddim=False,
        model_path=model_path,
        log_dir=log_dir,
        cond="",
        out_c=kwargs['out_c'],
        z_cond=kwargs['z_cond'],
        precomp_z=kwargs['precomp_z'],
        diffusion_step=1000,
        timestep_respacing=1000,
        image_size=64,
        num_channels=27,
        num_heads=3,
    )
    defaults.update(model_and_diffusion_defaults())
    # parser = argparse.ArgumentParser()
    # add_dict_to_argparser(parser, defaults)
    return namedtuple('GenericDict', defaults.keys())(**defaults)
    # return parser
def model_and_diffusion_defaults():
    """
    Defaults for image training.
    """
    res = dict(
        image_size=64,
        num_channels=27,
        num_res_blocks=2,
        num_heads=3,
        num_heads_upsample=-1,
        num_head_channels=-1,
        attention_resolutions="16,8",
        channel_mult="",
        dropout=0.0,
        class_cond=False,
        use_checkpoint=False,
        use_scale_shift_norm=True,
        resblock_updown=False,
        use_fp16=False,
        use_new_attention_order=False,
        z_cond=True,
    )
    res.update(diffusion_defaults())
    return res
# List model_logs
print(os.listdir("/home2/mint/model_logs"))
# args
log_dir = "ffhq64_deca_light_b64"
model_logs_path = "/home2/mint/model_logs_mount/v8_model_logs/"
# model_logs_path = "/home2/mint/model_logs/"
step = "400000"
model_path = "{}/{}/ema_0.9999_{}.pt".format(model_logs_path, log_dir, step)
args = create_argparser(log_dir=log_dir, model_path=model_path, out_c='rgb', precomp_z="/home2/mint/Diffusion_Dataset/ffhq_256/ffhq-train-light-anno.txt", z_cond=True)
# Check model_logs
if not os.path.isdir(os.path.join(model_logs_path, args.log_dir)):
    print("No logs folder")
    raise FileNotFoundError
else:
    if not os.path.isdir(os.path.join(model_logs_path, args.log_dir, "samples")):
        os.makedirs(os.path.join(model_logs_path, args.log_dir, "samples"))
dist_util.setup_dist()
logger.configure()
if args.out_c in ['rgb', 'rbg', 'brg', 'bgr', 'grb', 'gbr', 'ycrcb', 'sepia', 'hsv', 'hls']:
    model_and_diffusion = model_and_diffusion_defaults()
    logger.log("creating {} model and diffusion...".format(args.out_c))
else:
    raise NotImplementedError
model, diffusion = create_model_and_diffusion(
**args_to_dict(args, model_and_diffusion.keys())
)
print(model)
model.load_state_dict(
dist_util.load_state_dict(args.model_path, map_location="cpu")
)
model.to(dist_util.dev())
if args.use_fp16:
    model.convert_to_fp16()
model.eval()
# Condition
precomp_z = "/home2/mint/Diffusion_Dataset/ffhq_256/ffhq-valid-light-anno.txt"
precomp_z = pd.read_csv(precomp_z, header=None, sep=" ", index_col=False, names=["img_name"] + list(range(27)), lineterminator='\n')
precomp_z = precomp_z.set_index('img_name').T.to_dict('list')
#sample_fn = diffusion.p_sample_loop if not args.use_ddim else diffusion.ddim_sample_loop
seed_all(0)
sample_fn = diffusion.p_sample_loop
img_name = '69994.jpg'
precomp_z_input = th.tensor(precomp_z[img_name]).cuda().view(1, -1)
sample = sample_fn(
model,
(args.batch_size, 3, args.image_size, args.image_size),
clip_denoised=args.clip_denoised,
model_kwargs={'precomp_z':precomp_z_input},
)
def decolor(s, out_c='rgb'):
    if out_c in ['rgb', 'rbg', 'brg', 'bgr', 'grb', 'gbr']:
        s_ = ((s + 1) * 127.5).clamp(0, 255).to(th.uint8)
    elif out_c == 'luv':
        s_ = ((s + 1) * 127.5).clamp(0, 255).to(th.uint8)
    elif out_c == 'ycrcb':
        s_ = ((s + 1) * 127.5).clamp(0, 255).to(th.uint8)
    elif out_c in ['hsv', 'hls']:
        h = (s[..., [0]] + 1) * 90.0
        l_s = (s[..., [1]] + 1) * 127.5
        v = (s[..., [2]] + 1) * 127.5
        s_ = th.cat((h, l_s, v), axis=2).clamp(0, 255).to(th.uint8)
    elif out_c == 'sepia':
        s_ = ((s + 1) * 127.5).clamp(0, 255).to(th.uint8)
    else:
        raise NotImplementedError
    return s_
columns = 5
rows = 5
fig = plt.figure(figsize=(10, 10), dpi=100)
sample_ = sample.permute(0, 2, 3, 1) # BxHxWxC
for i in range(0, sample_.shape[0]):
    s_ = decolor(s=sample_[i], out_c=args.out_c)
    s_ = s_.detach().cpu().numpy()
    fig.add_subplot(rows, columns, i+1)
    plt.imshow(s_)
plt.show()
```
# Named Entity Recognition on BC5CDR (Chemical + Disease Corpus) with BioBERT
---
[Github](https://github.com/eugenesiow/practical-ml/blob/master/notebooks/Named_Entity_Recognition_Mandarin_MSRA.ipynb) | More Notebooks @ [eugenesiow/practical-ml](https://github.com/eugenesiow/practical-ml)
---
Notebook to train/fine-tune a BioBERT model to perform named entity recognition (NER).
The [dataset](https://github.com/shreyashub/BioFLAIR/tree/master/data/ner/bc5cdr) used is a pre-processed version of the BC5CDR (BioCreative V CDR task corpus: a resource for relation extraction) dataset from [Li et al. (2016)](https://pubmed.ncbi.nlm.nih.gov/27161011/).
The current state-of-the-art model on this dataset is the NER+PA+RL model from [Nooralahzadeh et al. (2019)](https://www.aclweb.org/anthology/D19-6125/), which has an F1-score of [**89.93%**](https://paperswithcode.com/sota/named-entity-recognition-ner-on-bc5cdr). The authors did not release the source code for the paper.
Our model trained on top of BioBERT has an F1-score of **89.3%** which is slightly worse than the state-of-the-art but almost as good as the #2 [BioFlair](https://github.com/shreyashub/BioFLAIR)!
The notebook is structured as follows:
* Setting up the GPU Environment
* Getting Data
* Training and Testing the Model
* Using the Model (Running Inference)
#### Task Description
> Named entity recognition (NER) is the task of tagging entities in text with their corresponding type. Approaches typically use BIO notation, which differentiates the beginning (B) and the inside (I) of entities. O is used for non-entity tokens.
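As a small illustration of BIO notation (the sentence and tags below are invented for this example, not drawn from BC5CDR):

```python
# Each token gets a BIO tag: B- begins an entity, I- continues it, O marks non-entity tokens.
# The sentence and entity tags here are illustrative only.
tokens = ["Naloxone", "reverses", "morphine", "-", "induced", "respiratory", "depression"]
tags = ["B-Chemical", "O", "B-Chemical", "O", "O", "B-Disease", "I-Disease"]

for token, tag in zip(tokens, tags):
    print(f"{token}\t{tag}")
```

Note how the two-token entity "respiratory depression" is tagged `B-Disease` followed by `I-Disease`, while each single-token chemical gets just a `B-Chemical` tag.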
# Setting up the GPU Environment
#### Ensure we have a GPU runtime
If you're running this notebook in Google Colab, select `Runtime` > `Change Runtime Type` from the menubar. Ensure that `GPU` is selected as the `Hardware accelerator`. This will allow us to use the GPU to train the model subsequently.
#### Install Dependencies and Restart Runtime
```
!pip install -q transformers
!pip install -q simpletransformers
```
You might see the error `ERROR: google-colab X.X.X has requirement ipykernel~=X.X, but you'll have ipykernel X.X.X which is incompatible` after installing the dependencies. **This is normal** and caused by the `simpletransformers` library.
The **solution** to this will be to **reset the execution environment** now. Go to the menu `Runtime` > `Restart runtime` then continue on from the next section to download and process the data.
# Getting Data
#### Pulling the data from Github
The dataset includes train, test and dev sets, which we pull from the [Github repository](https://github.com/shreyashub/BioFLAIR/tree/master/data/ner/bc5cdr).
```
import urllib.request
from pathlib import Path
def download_file(url, output_file):
    Path(output_file).parent.mkdir(parents=True, exist_ok=True)
    urllib.request.urlretrieve(url, output_file)
download_file('https://raw.githubusercontent.com/shreyashub/BioFLAIR/master/data/ner/bc5cdr/train.txt', '/content/data/train.txt')
download_file('https://raw.githubusercontent.com/shreyashub/BioFLAIR/master/data/ner/bc5cdr/test.txt', '/content/data/test.txt')
download_file('https://raw.githubusercontent.com/shreyashub/BioFLAIR/master/data/ner/bc5cdr/dev.txt', '/content/data/dev.txt')
```
Since the data is formatted in the CoNLL `BIO` type format (you can read more on the tagging format from this [wikipedia article](https://en.wikipedia.org/wiki/Inside%E2%80%93outside%E2%80%93beginning_(tagging))), we need to format it into a `pandas` dataframe with the following function. The 3 important columns in the dataframe are a word token, a `BIO` label and a sentence_id to differentiate samples/sentences.
```
import pandas as pd
def read_conll(filename):
    df = pd.read_csv(filename,
                     sep='\t', header=None, keep_default_na=False,
                     names=['words', 'pos', 'chunk', 'labels'],
                     quoting=3, skip_blank_lines=False)
    df = df[~df['words'].astype(str).str.startswith('-DOCSTART- ')]  # Remove the -DOCSTART- header
    df['sentence_id'] = (df.words == '').cumsum()
    return df[df.words != '']
```
Now we execute the function on the train, test and dev sets we have downloaded from Github. We also `.head()` the training set dataframe for the first 100 rows to check that the words, labels and sentence_id have been split properly.
```
train_df = read_conll('/content/data/train.txt')
test_df = read_conll('/content/data/test.txt')
dev_df = read_conll('/content/data/dev.txt')
train_df.head(100)
```
We now print out the statistics (number of sentences) of the train, dev and test sets.
```
data = [[train_df['sentence_id'].nunique(), test_df['sentence_id'].nunique(), dev_df['sentence_id'].nunique()]]
# Prints out the dataset sizes of train and test sets per label.
pd.DataFrame(data, columns=["Train", "Test", "Dev"])
```
# Training and Testing the Model
#### Set up the Training Arguments
We set up the training arguments. Here we train to 10 epochs to get accuracy close to the SOTA. The train, test and dev sets are relatively small so we don't have to wait too long. We set a sliding window as NER sequences can be quite long and because we have limited GPU memory we can't increase the `max_seq_length` too long.
```
train_args = {
'reprocess_input_data': True,
'overwrite_output_dir': True,
'sliding_window': True,
'max_seq_length': 64,
'num_train_epochs': 10,
'train_batch_size': 32,
'fp16': True,
'output_dir': '/outputs/',
'best_model_dir': '/outputs/best_model/',
'evaluate_during_training': True,
}
```
The following line of code saves (to the variable `custom_labels`) a list of all the unique NER tags/labels in the dataset.
```
custom_labels = list(train_df['labels'].unique())
print(custom_labels)
```
#### Train the Model
Once we have setup the `train_args` dictionary, the next step would be to train the model. We use the pre-trained BioBERT model (by [DMIS Lab, Korea University](https://huggingface.co/dmis-lab)) from the awesome [Hugging Face Transformers](https://github.com/huggingface/transformers) library as the base and use the [Simple Transformers library](https://simpletransformers.ai/docs/classification-models/) on top of it to make it so we can train the NER (sequence tagging) model with just a few lines of code.
```
from simpletransformers.ner import NERModel
from transformers import AutoTokenizer
import pandas as pd
import logging
logging.basicConfig(level=logging.DEBUG)
transformers_logger = logging.getLogger('transformers')
transformers_logger.setLevel(logging.WARNING)
# We use the bio BERT pre-trained model.
model = NERModel('bert', 'dmis-lab/biobert-v1.1', labels=custom_labels, args=train_args)
# Train the model
# https://simpletransformers.ai/docs/tips-and-tricks/#using-early-stopping
model.train_model(train_df, eval_data=dev_df)
# Evaluate the model in terms of accuracy score
result, model_outputs, preds_list = model.eval_model(test_df)
```
The F1-score for the model is **89.3%** ('f1_score': 0.8927974947807933).
> For now that's the #3-ranked SOTA NER model on BC5CDR!
# Using the Model (Running Inference)
Running the model to do some predictions/inference is as simple as calling `model.predict(samples)`. First we get a sentence from the test set and print it out. Then we run the prediction on the sentence.
```
sample = test_df[test_df.sentence_id == 10].words.str.cat(sep=' ')
print(sample)
samples = [sample]
predictions, _ = model.predict(samples)
for idx, sample in enumerate(samples):
    print('{}: '.format(idx))
    for word in predictions[idx]:
        print('{}'.format(word))
```
We can connect to Google Drive with the following code to save any files you want to persist. You can also click the `Files` icon on the left panel and click `Mount Drive` to mount your Google Drive.
The root of your Google Drive will be mounted to `/content/drive/My Drive/`. If you have problems mounting the drive, you can check out this [tutorial](https://towardsdatascience.com/downloading-datasets-into-google-drive-via-google-colab-bcb1b30b0166).
```
from google.colab import drive
drive.mount('/content/drive/')
```
You can move the model checkpoint files, which are saved in the `/outputs/` directory, to your Google Drive.
```
import shutil
shutil.move('/outputs/', "/content/drive/My Drive/outputs/")
```
More Notebooks @ [eugenesiow/practical-ml](https://github.com/eugenesiow/practical-ml) and do drop us some feedback on how to improve the notebooks on the [Github repo](https://github.com/eugenesiow/practical-ml/).
# Two Independent Samples
A Bayesian alternative to the t-test for two independent samples.
```
# Enable the commands below when running this program on Google Colab.
# !pip install arviz==0.7
# !pip install pymc3==3.8
# !pip install Theano==1.0.4
import numpy as np
import pandas as pd
from scipy import stats
import matplotlib.pyplot as plt
import seaborn as sns
import pymc3 as pm
plt.style.use('seaborn-darkgrid')
np.set_printoptions(precision=3)
pd.set_option('display.precision', 3)
# Math test scores of each student in different class.
# Class A: Experimental group
# Class B: Control group
CLASS_A = [49, 66, 69, 55, 54, 72, 51, 76, 40, 62, 66, 51, 59, 68, 66, 57, 53, 66, 58, 57]
CLASS_B = [41, 55, 21, 49, 53, 50, 52, 67, 54, 69, 57, 48, 31, 52, 56, 50, 46, 38, 62, 59]
# Visualize the data
plt.boxplot([CLASS_A, CLASS_B], labels=['Class A', 'Class B'])
plt.ylabel('Score')
plt.show()
# Summary
data = pd.DataFrame([CLASS_A, CLASS_B], index=['Class A', 'Class B']).transpose()
# display(data)
data.describe()
```
## Normal model with a common standard deviation (SAME)
```
with pm.Model() as model_same:
    # Prior distribution
    mu_1 = pm.Uniform('mu1', 0, 100)
    mu_2 = pm.Uniform('mu2', 0, 100)
    sigma = pm.Uniform('sigma', 0, 50)
    # Likelihood
    y_1 = pm.Normal('y_1', mu=mu_1, sd=sigma, observed=CLASS_A)
    y_2 = pm.Normal('y_2', mu=mu_2, sd=sigma, observed=CLASS_B)
    # Difference of average values
    diff_mu = pm.Deterministic('mu1 - mu2', mu_1 - mu_2)
    trace_same = pm.sample(21000, chains=5)
chain_same = trace_same[1000:]
# arviz.plot_trace(chain_same)
pm.traceplot(chain_same)
plt.show()
pm.summary(chain_same)
```
### RQ1: Probability that the mean of Class A is greater than the mean of Class B
```
print('p(mu1 - mu2 > 0) = {:.3f}'.format((chain_same['mu1'] - chain_same['mu2'] > 0).mean()))
# print('p(mu1 - mu2 > 0) = {:.3f}'.format((chain_same['mu1 - mu2'] > 0).mean()))
```
### RQ2: Point and interval estimates of the difference between the two class means (how large is the average score difference in raw points, and how wide an interval can we be confident in?)
```
pm.plot_posterior(chain_same['mu1 - mu2'], credible_interval=0.95, point_estimate='mode')
plt.xlabel(r'$\mu$1 - $\mu$2')
plt.show()
print('Point estimation (difference of population mean): {:.3f}'.format(chain_same['mu1 - mu2'].mean()))
hpd_0025 = np.quantile(chain_same['mu1 - mu2'], 0.025)
hpd_0975 = np.quantile(chain_same['mu1 - mu2'], 0.975)
print('Credible Interval (95%): ({:.3f}, {:.3f})'.format(hpd_0025, hpd_0975))
```
### RQ3: Lower and upper bounds from one-sided interval estimates of the difference in means (how much average score improvement can we expect at least, and at most how much improvement can we expect?)
```
hpd_005 = np.quantile(chain_same['mu1 - mu2'], 0.05)
hpd_0950 = np.quantile(chain_same['mu1 - mu2'], 0.95)
print('At most (95%): {:.3f}'.format(hpd_0950))
print('At least (95%): {:.3f}'.format(hpd_005))
```
### RQ4: Probability that the difference in means exceeds a reference point c (if the difference is small, the benefit is limited and adoption is hard to justify; the adoption criterion is an improvement of more than 5 points with probability above 70%. Should we adopt, or pass?)
```
print('p(mu1 - mu2 > 3) = {:.3f}'.format((chain_same['mu1'] - chain_same['mu2'] > 3).mean()))
print('p(mu1 - mu2 > 5) = {:.3f}'.format((chain_same['mu1'] - chain_same['mu2'] > 5).mean()))
print('p(mu1 - mu2 > 10) = {:.3f}'.format((chain_same['mu1'] - chain_same['mu2'] > 10).mean()))
```
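Each probability above is simply the fraction of posterior draws that satisfy the condition. A self-contained sketch with synthetic draws (normal distributions and parameter values chosen only for illustration, not the actual MCMC chains):

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-ins for posterior draws of mu1 and mu2 (illustrative only,
# not the chains sampled above).
mu1_draws = rng.normal(60.0, 2.0, size=100_000)
mu2_draws = rng.normal(50.0, 2.0, size=100_000)
diff = mu1_draws - mu2_draws

# P(mu1 - mu2 > c) is estimated by the fraction of draws exceeding c.
for c in (0, 5, 10):
    print(f"p(mu1 - mu2 > {c}) = {(diff > c).mean():.3f}")
```

With these synthetic draws the difference is centred on 10, so the estimated probability falls from near 1 at c = 0 to about 0.5 at c = 10.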
## Normal model with different standard deviations (DIFF)
```
with pm.Model() as model_diff:
    # Prior distribution
    mu_1 = pm.Uniform('mu1', 0, 100)
    mu_2 = pm.Uniform('mu2', 0, 100)
    sigma_1 = pm.Uniform('sigma1', 0, 50)
    sigma_2 = pm.Uniform('sigma2', 0, 50)
    # Likelihood
    y_1 = pm.Normal('y1', mu=mu_1, sd=sigma_1, observed=CLASS_A)
    y_2 = pm.Normal('y2', mu=mu_2, sd=sigma_2, observed=CLASS_B)
    # Difference of average values
    diff_mu = pm.Deterministic('mu1 - mu2', mu_1 - mu_2)
    trace_diff = pm.sample(21000, chains=5)
chain_diff = trace_diff[1000:]
pm.traceplot(chain_diff)
plt.show()
pm.summary(chain_diff)
pm.plot_posterior(chain_diff['mu1 - mu2'], credible_interval=0.95, point_estimate='mode')
plt.xlabel(r'$\mu$1 - $\mu$2')
plt.show()
```
### RQ1: Probability that the mean of Class A is greater than the mean of Class B
```
print('p(mu1 - mu2 > 0) = {:.3f}'.format((chain_diff['mu1'] - chain_diff['mu2'] > 0).mean()))
```
### RQ2: Point and interval estimates of the difference between the two class means (how large is the average score difference in raw points, and how wide an interval can we be confident in?)
```
print('Point estimation (difference of population mean): {:.3f}'.format(chain_diff['mu1 - mu2'].mean()))
hpd_0025 = np.quantile(chain_diff['mu1 - mu2'], 0.025)
hpd_0975 = np.quantile(chain_diff['mu1 - mu2'], 0.975)
print('Credible Interval (95%): ({:.3f}, {:.3f})'.format(hpd_0025, hpd_0975))
```
### RQ3: Lower and upper bounds from one-sided interval estimates of the difference in means (how much average score improvement can we expect at least, and at most how much improvement can we expect?)
```
hpd_005 = np.quantile(chain_diff['mu1 - mu2'], 0.05)
hpd_0950 = np.quantile(chain_diff['mu1 - mu2'], 0.95)
print('At most (95%): {:.3f}'.format(hpd_0950))
print('At least (95%): {:.3f}'.format(hpd_005))
```
### RQ4: Probability that the difference in means exceeds a reference point c (if the difference is small, the benefit is limited and adoption is hard to justify; the adoption criterion is an improvement of more than 5 points with probability above 70%. Should we adopt, or pass?)
```
print('p(mu1 - mu2 > 3) = {:.3f}'.format((chain_diff['mu1'] - chain_diff['mu2'] > 3).mean()))
print('p(mu1 - mu2 > 5) = {:.3f}'.format((chain_diff['mu1'] - chain_diff['mu2'] > 5).mean()))
print('p(mu1 - mu2 > 10) = {:.3f}'.format((chain_diff['mu1'] - chain_diff['mu2'] > 10).mean()))
```
## Comparing the outputs of the two models
```
data = {
'SAME': [
chain_same['mu1'].mean(),
chain_same['mu2'].mean(),
chain_same['sigma'].mean(),
chain_same['sigma'].mean(),
chain_same['mu1 - mu2'].mean()
],
'DIFF': [
chain_diff['mu1'].mean(),
chain_diff['mu2'].mean(),
chain_diff['sigma1'].mean(),
chain_diff['sigma2'].mean(),
chain_diff['mu1 - mu2'].mean()
]
}
df = pd.DataFrame(data, index=['mu1', 'mu2', 'sigma1', 'sigma2', 'mu1-mu2'])
display(df)
```
# Images and Image Plotting
<section class="objectives panel panel-warning">
<div class="panel-heading">
<h3><span class="fa fa-certificate"></span> Learning Objectives </h3>
</div>
<ul>
<li>Understand the concept of arrays as images</li>
<li>Load and display an image</li>
<li>Use array slicing operations to crop an image</li>
<li>Animate an image plot</li>
</ul>
</section>
## Arrays as images
All photographic images represent a measurement of how much light hits the receiver. For instance, the Hubble image below is obtained by measuring the brightnesses of distant stars:

With traditional optical cameras, this measurement results in an image which is continuous, as it is projected directly onto paper. In order to store images digitally, they need to be divided into discrete chunks, pixels, each of which contains the value of the measurement in that small portion of the image. In this representation, an image is simply a grid of numbers, which allows it to be easily stored as an array with a shape equal to the resolution of the image.
The `scikit-image` (abbreviated to `skimage` in code) module contains some sample images in the `data` submodule that we can use to demonstrate this principle.
```
# Get some import statements out of the way.
from __future__ import division, print_function
%matplotlib inline
import matplotlib.pyplot as plt
from skimage import data
# Load the data.moon() image and print it
moon = data.moon()
print(moon)
```
Once read in as an array, the image can be processed in the same ways as any other array. For instance, we can easily find the highest, lowest and mean values of the image, the type of the variables stored in the array, and the resolution of the image:
```
# Output the image minimum, mean and maximum.
print('Image min:', moon.min(), '; Image mean:', moon.mean(), '; Image max: ', moon.max())
# Output the array dtype.
print('Data type:', moon.dtype)
# Output image size.
print('Image size:', moon.shape)
```
This tells us that the image has a resolution of 512 x 512 pixels, and is stored as integers between 0 and 255. This is a common way of normalising images, but they can just as easily be stored as floats between 0 and 1. More commonly with astronomical data though, an image will consist of photon counts (i.e. integers), so the minimum will be 0 and any upper limit will likely be defined by the capabilities of the instrument.
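Converting between the two conventions is a simple rescale; a minimal NumPy sketch (`skimage` provides `img_as_float` for the same purpose):

```python
import numpy as np

# A tiny 8-bit image: integers in [0, 255].
img_uint8 = np.array([[0, 128], [191, 255]], dtype=np.uint8)

# Rescale to floats in [0, 1] (what skimage's img_as_float does for uint8 input).
img_float = img_uint8.astype(np.float64) / 255.0
print(img_float.min(), img_float.max())  # -> 0.0 1.0
```

Casting to float before dividing matters: integer division would truncate every value to 0.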
## Plotting images
While storing an image as a grid of numbers is very useful for analysis, we still need to be able to visually inspect the image. This can be achieved with `plt.imshow()`, which allocates a colour to every element in the array according to its value.
```
# Display image array with imshow()
plt.imshow(moon)
```
When plotting an image in this way, you will often need to know what actual values correspond to the colours. To find this out, we can draw a colour bar alongside the image which indicates the mapping of values to colours:
```
plt.imshow(moon)
plt.colorbar()
```
You may notice that the default mapping of values to colours doesn't show the features of this image very well. Fortunately, matplotlib provides a large variety of colour maps which are suitable for various different purposes (more on this later). `plt.imshow()` has a `cmap` keyword argument which can be passed a string defining the desired colour map.
```
# Display the image with a better colour map.
plt.imshow(moon, cmap='gray')
plt.colorbar()
```
The full list of available colour maps (for matplotlib 1.5) can be found [here](http://matplotlib.org/examples/color/colormaps_reference.html).
<section class="callout panel panel-info">
<div class="panel-heading">
<h3><span class="fa fa-certificate"></span> Colour maps </h3>
</div>
As the images above demonstrate, the choice of colour map can make a significant difference to how your image appears, and is therefore extremely important. This is partly due to discrepancies between how quickly the colour map changes and how quickly the data changes, and partly due to the fact that [different people see colour differently](https://en.wikipedia.org/wiki/The_dress_%28viral_phenomenon%29).<br/><br/>
In particular, matplotlib's default `'jet'` colour map is notoriously bad for displaying data. This is because it is not designed taking into account how the human eye perceives colour. This leads to some parts of the colour map appearing to change very slowly, while other parts of the colour map shift from one hue to another in a very short space. The practical effect of this is to both smooth some parts of the image, obscuring the data, and to create artificial features in the image where the data is smooth.<br/><br/>
There is no single 'best' colour map - different colour maps display different kinds of image most clearly - but the `jet` map is almost never an appropriate choice for displaying any data. In general, colour maps which vary luminosity uniformly (such as the `'gray'` colour map above or the `'cubehelix'` colour map) tend to be better. Plots of various colour maps' luminosities can be found [here](http://matplotlib.org/users/colormaps.html).<br/><br/>
For a good background on this topic and a description of a decent all-round colour map scheme, see [this paper](http://www.kennethmoreland.com/color-maps/ColorMapsExpanded.pdf).
</section>
<section class="challenges panel panel-success">
<div class="panel-heading">
<h3><span class="fa fa-pencil"></span> Load and plot an image </h3>
</div>
<ol>
<li> Try loading and plotting some other image arrays from `skimage.data`. Choose one of these images and print some basic information about the values it contains.</li>
<li> Plot your chosen image with `imshow()`. Apply a colour map of your choice and display a colour bar.</li>
</ol>
</section>
```
# 1
# Load an image from skimage.data
my_image = data.coins()
# Print image shape and size
print(my_image.shape, my_image.size)
# Print data type and min&max of array
print(my_image.dtype, my_image.min(), my_image.max())
# 2
# Display my image
plt.imshow(my_image, cmap='cubehelix')
plt.colorbar()
```
### Value limits
The default behaviour of `imshow()` in terms of colour mapping is that the colours cover the full range of the data so that the lower end (blue, in the plots above) represents the smallest value in the array, and the higher end (red) represents the greatest value.
This is fine if the rest of the values are fairly evenly spaced between these extremes. However, if we have a very low minimum or very high maximum compared to the rest of the image, this default scaling is unhelpful. To deal with this problem, `imshow()` allows you to set the minimum and maximum values used for the scaling with the `vmin` and `vmax` keywords.
```
plt.imshow(moon, cmap='gray', vmin=75, vmax=150)
plt.colorbar()
```
As you can see, this allows us to increase the contrast of the image at the cost of discounting extreme values, or we can include a broader range of values but see less detail. Similar effects can also be achieved with the `norm` keyword, which allows you to set how `imshow()` scales values in order to map them to colours (linear or logarithmic scaling, for example).
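Logarithmic scaling, for instance, can be requested by passing `matplotlib.colors.LogNorm` as the `norm`; a minimal sketch (the array values here are invented purely for illustration):

```python
import numpy as np
import matplotlib
matplotlib.use("Agg")  # headless backend, so no display is required
import matplotlib.pyplot as plt
from matplotlib.colors import LogNorm

# Values spanning two orders of magnitude, where linear scaling hides detail.
img = np.logspace(0, 2, 64).reshape(8, 8)

# Pass a LogNorm so colours are allocated on a logarithmic scale.
plt.imshow(img, cmap='gray', norm=LogNorm(vmin=1, vmax=100))
plt.colorbar()

# The norm maps data values to [0, 1]: 10 is halfway between 1 and 100 in log space.
print(float(LogNorm(vmin=1, vmax=100)(10)))  # -> 0.5
```

This allocates as much of the colour map to the range 1–10 as to 10–100, which suits data such as photon counts with a long bright tail.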
### Axes
You will notice in the above plots that the axes are labelled with the pixel coordinates of the image. You will also notice that the origin of the axes is in the top left corner rather than the bottom left. This is a convention in image-drawing, but can be changed if necessary by setting the `origin` keyword to `'lower'` when calling `imshow()`:
```
plt.imshow(moon, cmap='gray', origin='lower')
```
`imshow()` also allows you to change the upper and lower values of each axis, and the appropriate tick labels will be drawn. This feature can be used to apply physical spatial scales to the image (if you know them) rather than going purely on pixel positions, which may be less useful. This is done with the `extent` keyword, which takes a list of values corresponding to lower and upper x values and the lower and upper y values (in that order).
```
plt.imshow(moon, cmap='gray', origin='lower', extent=[-1, 1, 0, 2])
```
<section class="objectives panel panel-success">
<div class="panel-heading">
<h3><span class="fa fa-pencil"></span> Value and axes limits </h3>
</div>
<ol>
<li>Plot your chosen image again. Try changing the upper and lower limits of the plotted values to adjust how the image appears.</li>
<li>Assume that each pixel of your image has some defined size (you decide a value - not unity). Adjust the axis limits accordingly so that the ticks correspond to physical distances rather than pixel values.</li>
</ol>
</section>
```
# 1
# Display the coins image with adjusted value range
plt.imshow(my_image, cmap='cubehelix', vmin=60, vmax=180)
plt.colorbar()
# 2
pixelsize = 0.1
plt.imshow(my_image, cmap='cubehelix', vmin=60, vmax=180, extent=[0, my_image.shape[1]*pixelsize, 0, my_image.shape[0]*pixelsize])
plt.xlabel('x (cm)')
plt.ylabel('y (cm)')
plt.colorbar()
```
## Loading an image from a file
The image used in the examples above uses an image which is already supplied as an array by scikit-image. But what if we have been given an image file and we want to read it into Python?
There are many ways to do this, depending on the type of file. Typically in astronomy, images are stored in FITS format, which will be covered in detail later on. For now, we will return to the example of the Hubble image from earlier, which is stored in this repo in fig/galaxy.jpg. To load image data from a JPEG, we need the `plt.imread()` function. This takes a filename which points at an image file and loads it into Python as a NumPy array.
```
# Load the Hubble image from fig/galaxy.jpg
galaxy_image = plt.imread('fig/galaxy.jpg')
# Plot the image with imshow()
plt.imshow(galaxy_image)
```
You may notice that instead of using a colour map, this image has been plotted in full colour so it looks the same as the original image above. We can see why if we inspect the shape of the image array:
```
galaxy_image.shape
```
Rather than just being a 2D array with a shape equivalent to the image resolution, the array has an extra dimension of length 3. This is because the image has been split into red, green and blue components, each of which is stored in a slice of the array. When given an n x m x 3 array like this, `imshow()` interprets it as an RGB image and combines the layers into a single image.
However, if we wish to see the individual components they can be accessed and displayed by taking a slice of the array corresponding to the layer we wish to use.
```
plt.imshow(galaxy_image[..., 0], cmap='Reds') # Plot the red layer of the image
plt.show()
plt.imshow(galaxy_image[..., 1], cmap='Greens') # Plot the green layer of the image
plt.show()
plt.imshow(galaxy_image[..., 2], cmap='Blues') # Plot the blue layer of the image
plt.show()
```
## `plt.subplots()`
As we've already seen, multiple axes can be added to a single figure using `plt.add_subplot()`. There is also a function that allows you to define several axes and their arrangement at the same time as the figure, `plt.subplots()`.
This function returns a tuple of two objects - the figure and an array of axes objects with the specified shape. Referencing the axes array allows things to be plotted on the individual subplots.
```
# Make a grid of 1 x 3 plots and show the Hubble image on the right.
fig, ax = plt.subplots(1, 3)
ax[2].imshow(galaxy_image)
plt.show()
```
<section class="objectives panel panel-success">
<div class="panel-heading">
<h3><span class="fa fa-pencil"></span> Image components </h3>
</div>
<ol>
<li>Create a 2x2 grid of plots using `plt.subplots()`. For either the Hubble image or another RGB image of your choice from `skimage.data`, plot the true colour image and each RGB component on one of these subplots.</li>
</ol>
</section>
```
# 1
my_image = data.coffee()
# Create 2x2 grid of subplots
fig, axes = plt.subplots(2, 2)
# Plot image and image components with appropriate colour maps.
axes[0, 0].imshow(my_image)
axes[0, 1].imshow(my_image[..., 0], cmap='Reds')
axes[1, 0].imshow(my_image[..., 1], cmap='Greens')
axes[1, 1].imshow(my_image[..., 2], cmap='Blues')
```
## Slicing images
We saw above that an RGB image array can be sliced to access one colour component. But the array can also be sliced in one or both of the image dimensions to crop the image. For instance, the smaller galaxy at the bottom of the image above occupies the space between about 200 and 400 pixels in the x direction, and stretches from about 240 pixels to the edge of the image in the y direction. This information allows us to slice the array appropriately:
```
# Crop the image in x and y directions but keep all three colour components.
cropped_galaxy = galaxy_image[240:, 200:400, :]
plt.imshow(cropped_galaxy)
```
Similarly, if we need to reduce the image resolution for whatever reason, this can be done using array slicing operations.
```
# Crop the image and use only every other pixel in each direction to reduce the resolution.
lowres_galaxy = galaxy_image[240::2, 200:400:2, :]
plt.imshow(lowres_galaxy)
```
IMPORTANT NOTE: you should probably never do the above with actual astronomical data, because you're throwing away three quarters of your measurement. There are better ways to reduce image resolution which preserve much more of the data's integrity, and we will talk about these later. But it's useful to remember you can reduce an image's size like this, as long as you don't need that image for any actual science.
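There isn't space here for a full discussion of those better methods, but as a quick contrast, block-averaging (mean-pooling) uses every pixel when reducing resolution, whereas slicing simply discards most of them. A minimal NumPy sketch on a made-up 4x4 array:

```python
import numpy as np

# A toy 4x4 "image" (values invented for illustration)
image = np.arange(16, dtype=float).reshape(4, 4)

# Slicing: keep every other pixel, discarding 3/4 of the measurements
sliced = image[::2, ::2]

# Block-averaging: combine each 2x2 block into one pixel, using all the data
averaged = image.reshape(2, 2, 2, 2).mean(axis=(1, 3))

print(sliced)    # only the top-left value of each block survives
print(averaged)  # each output pixel is the mean of a 2x2 block
```

Applied to a real image array, the same reshape-and-mean trick reduces resolution while retaining information from every measurement.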
## Interpolation
In order to display a smooth image, `imshow()` automatically interpolates to find what values should be displayed between the given data points. The default interpolation scheme is `'bilinear'`, which interpolates linearly between points in each direction, as you might expect. The interpolation can be changed with yet another keyword in `imshow()`. Here are a few examples:
```
# Image with default interpolation
fig, ax = plt.subplots(2, 2, figsize=(16, 16))
smallim = galaxy_image[:100, 250:350, :]
ax[0, 0].imshow(smallim) # Default (bilinear) interpolation
ax[0, 1].imshow(smallim, interpolation='bicubic') # Bicubic interpolation
ax[1, 0].imshow(smallim, interpolation='nearest') # Nearest-neighbour interpolation
ax[1, 1].imshow(smallim, interpolation='none') # No interpolation
```
This can be a useful way to change how the image appears. For instance, if the exact values of the data are extremely important, little or no interpolation may be appropriate so the original values are easier to discern, whereas a high level of interpolation can be used if the smoothness of the image is more important than the actual numbers.
Note that `'none'` in the `imshow()` call above is NOT the same as `None`. `None` tells `imshow()` you are not passing it a variable for the `interpolation` keyword, so it uses the default, whereas `'none'` explicitly tells it not to interpolate.
## Animation
We have already seen animation of data points on basic plots in a previous lesson. Animating an image is no different in principle. To demonstrate this, we'll set up an animation that shows the Hubble image and then cycles through each of the RGB components. This task requires all the same parts as an animation of a line or scatter plot:
- First, we'll need `matplotlib.animation`, a figure and an axes. Then we'll plot the initial image we want to display and return the plot object to a variable we can use for the animation.
- Now we need to define the function that will adjust the image. This function, like the ones we used for line plots, needs to take as input an integer which counts the number of 'frames', adjust the displayed data and return the adjusted object.
- Then we can define the animation object and plot it to see the finished product.
```
import matplotlib.animation as ani
# We'll need this for displaying animations
%matplotlib nbagg
fig, ax = plt.subplots()
display = plt.imshow(galaxy_image)
titles = ['Red component', 'Green component', 'Blue component', 'Combined image']
cmaps = ['Reds_r', 'Greens_r', 'Blues_r']
def animate(i):
    try:
        display.set_data(galaxy_image[..., i])
        display.set_cmap(cmaps[i])
    except IndexError:
        display.set_data(galaxy_image)
    ax.set_title(titles[i])
    return display
myanim = ani.FuncAnimation(fig, animate, range(4), interval=1000)
plt.show()
```
<section class="objectives panel panel-success">
<div class="panel-heading">
<h3><span class="fa fa-pencil"></span> Moving around an image </h3>
</div>
<ol>
<li>Plot a small portion at one end of your chosen image. Then animate this plot so that it pans across to the other side of the image.</li>
</ol>
</section>
```
# 1
fig, ax = plt.subplots()
y0 = 0
y_ext = 120
display = plt.imshow(galaxy_image[y0:y0+y_ext, 200:400])
def pan(i):
    y1 = y0 + i
    display.set_data(galaxy_image[y1:y1+y_ext, 200:400])
    return display
panimation = ani.FuncAnimation(fig, pan, range(galaxy_image.shape[0]-y_ext), interval=10)
plt.show()
```
## FITS files
A type of image file that you are quite likely to come across in astronomy is FITS (Flexible Image Transport System) files. This file type is used for storing various types of astronomical image data, including solar images. The advantage of FITS files is that as well as storing the numerical data which makes up the image, they also store a header associated with these data. The header usually contains information such as the spatial extent of the image, the resolution, the time at which the observation was taken, and various other properties of the data which may be useful when using the image for research. Each pair of a data array and its associated header is stored in an HDU (Header-Data Unit). Several HDUs can be stored in a FITS file, so they are kept in a container called an HDUList.
```
from astropy.io import fits
import sunpy.data
#sunpy.data.download_sample_data()
aia_file = fits.open('/home/drew/sunpy/data/sample_data/aia.lev1.193A_2013-09-21T16_00_06.84Z.image_lev1.fits')
aia_file.verify('fix')
print(type(aia_file))
print(aia_file)
print(type(aia_file[1].data), type(aia_file[1].header))
print(aia_file[1].data)
print(aia_file[1].header['NAXIS1'])
print(aia_file[1].header['NAXIS2'])
print(aia_file[1].header['DATE-OBS'])
for tag in aia_file[1].header.keys():
    print(tag, aia_file[1].header[tag])
```
# Lab 1: Explore and Benchmark a BigQuery Dataset for Performance
## Overview
In this lab you will take an existing 2TB+ [TPC-DS benchmark dataset](http://www.tpc.org/tpc_documents_current_versions/pdf/tpc-ds_v2.10.0.pdf) and learn the data warehouse optimization methods you can apply to the dataset in BigQuery to improve performance.
### What you'll do
In this lab, you will learn how to:
- Use BigQuery to access and query the TPC-DS benchmark dataset
- Run pre-defined queries to establish baseline performance benchmarks
### Prerequisites
This is an __advanced level SQL__ lab. Before taking it, you should have experience with SQL. Familiarity with BigQuery is also highly recommended. If you need to get up to speed in these areas, you should take this Data Analyst series of labs first:
* [Quest: BigQuery for Data Analysts](https://www.qwiklabs.com/quests/55)
Once you're ready, scroll down to learn about the services you will be using and how to properly set up your lab environment.
### BigQuery
[BigQuery](https://cloud.google.com/bigquery/) is Google's fully managed, NoOps, low cost analytics database. With BigQuery you can query terabytes and terabytes of data without managing infrastructure or needing a database administrator. BigQuery uses SQL and takes advantage of the pay-as-you-go model. BigQuery allows you to focus on analyzing data to find meaningful insights.
## TPC-DS Background
In order to benchmark the performance of a data warehouse we first must get tables and data to run queries against. There is a public organization, TPC, that provides large benchmarking datasets to companies explicitly for this purpose. The purpose of TPC benchmarks is to provide relevant, objective performance data to industry users.
The TPC-DS Dataset we will be using comprises __25 tables__ and __99 queries__ that simulate common data analysis tasks. View the full documentation [here](http://www.tpc.org/tpc_documents_current_versions/pdf/tpc-ds_v2.11.0.pdf).
## Exploring TPC-DS in BigQuery
The TPC-DS tables have been loaded into BigQuery and you will explore ways to optimize the performance of common queries by using BigQuery data warehousing best practices. We have limited the size to 2TB for the timing of this lab but the dataset itself can be expanded as needed.
Note: The TPC Benchmark and TPC-DS are trademarks of the Transaction Processing Performance Council (http://www.tpc.org). The Cloud DW benchmark is derived from the TPC-DS Benchmark and as such is not comparable to published TPC-DS results.
## Exploring the Schema with SQL
Question:
- How many tables are in the dataset?
- What is the name of the largest table (in GB)? How many rows does it have?
```
%%bigquery
SELECT
dataset_id,
table_id,
-- Convert bytes to GB.
ROUND(size_bytes/pow(10,9),2) as size_gb,
-- Convert UNIX EPOCH to a timestamp.
TIMESTAMP_MILLIS(creation_time) AS creation_time,
TIMESTAMP_MILLIS(last_modified_time) as last_modified_time,
row_count,
CASE
WHEN type = 1 THEN 'table'
WHEN type = 2 THEN 'view'
ELSE NULL
END AS type
FROM
`dw-workshop.tpcds_2t_baseline.__TABLES__`
ORDER BY size_gb DESC
```
The core tables in the data warehouse are derived from 5 separate core operational systems (each with many tables):

These systems are driven by the core functions of our retail business. As you can see, our store accepts sales from online (web), mail-order (catalog), and in-store. The business must keep track of inventory and can offer promotional discounts on items sold.
### Exploring all available columns of data
Question:
- How many columns of data are in the entire dataset (all tables)?
```
%%bigquery
SELECT * FROM
`dw-workshop.tpcds_2t_baseline.INFORMATION_SCHEMA.COLUMNS`
```
Question:
- Are any of the columns of data in this baseline dataset partitioned or clustered?
```
%%bigquery
SELECT * FROM
`dw-workshop.tpcds_2t_baseline.INFORMATION_SCHEMA.COLUMNS`
WHERE
is_partitioning_column = 'YES' OR clustering_ordinal_position IS NOT NULL
```
Question
- How many columns of data does each table have (sorted by most to least?)
- Which table has the most columns of data?
```
%%bigquery
SELECT
COUNT(column_name) AS column_count,
table_name
FROM
`dw-workshop.tpcds_2t_baseline.INFORMATION_SCHEMA.COLUMNS`
GROUP BY table_name
ORDER BY column_count DESC, table_name
```
### Previewing sample rows of data values
In the BigQuery UI, find the Resources panel and search for `catalog_sales`. You may need to add the `dw-workshop` project to your UI by clicking __+ Add Data -> Pin a project__ and entering `dw-workshop`
Click on the `catalog_sales` table name for the `tpcds_2t_baseline` dataset under `dw-workshop`
Question
- How many rows are in the table?
- How large is the table in TB?
Hint: Use the `Details` button in the web UI to quickly access table metadata
Question:
- `Preview` the data and find the Catalog Sales Extended Sales Price `cs_ext_sales_price` field (which is calculated based on product quantity * sales price)
- Are there any missing data values for Catalog Sales Quantity (`cs_quantity`)?
- Are there any missing values for cs_ext_ship_cost? For what type of product could this be expected? (Digital products)
### Create an example sales report
Write a query that shows key sales stats for each item sold from the Catalog and execute it in the BigQuery UI:
- total orders
- total unit quantity
- total revenue
- total profit
- sorted by total orders highest to lowest, limit 100
```
%%bigquery --verbose
SELECT
cs_item_sk,
COUNT(cs_order_number) AS total_orders,
SUM(cs_quantity) AS total_quantity,
SUM(cs_ext_sales_price) AS total_revenue,
SUM(cs_net_profit) AS total_profit
FROM
`dw-workshop.tpcds_2t_baseline.catalog_sales`
GROUP BY
cs_item_sk
ORDER BY
total_orders DESC
LIMIT
  100
```
A note on our data: The TPC-DS benchmark allows data warehouse practitioners to generate any volume of data programmatically. Since the rows of data are system generated, they may not make the most sense in a business context (like why are we selling our top product at such a huge loss!).
The good news is that to benchmark our performance we care most about the volume of rows and columns to run our benchmark against.
## Analyzing query performance
Click on __Execution details__
Refer to the chart below (which should be similar to your results) and answer the following questions.
Question
- How long did it take the query to run? 5.1s
- How much data in GB was processed? 150GB
- How much slot time was consumed? 1hr 24min
- How many rows were input? 2,881,495,086
- How many rows were output as the end result (before the limit)? 23,300
- What does the output rows mean in the context of our query? (23,300 unique cs_item_sk)

## Side note: Slot Time
We know the query took 5.1 seconds to run so what does the 1hr 24 min slot time metric mean?
Inside of the BigQuery service are lots of virtual machines that massively process your data and query logic in parallel. These workers, or "slots", work together to process a single query job really quickly. For accounts with on-demand pricing, you can have up to 2,000 slots.
So say we had 30 minutes of slot time or 1800 seconds. If the query took 20 seconds in total to run,
but it was 1800 seconds worth of work, how many workers at minimum worked on it?
1800/20 = 90
And that's assuming each worker instantly had all the data it needed (no shuffling of data between workers) and was at full capacity for all 20 seconds!
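The arithmetic above, written out (the numbers come from the hypothetical example, not from a measurement):

```python
# Hypothetical numbers from the example above
slot_time_sec = 30 * 60    # 30 minutes of slot time = 1800 seconds of work
wall_clock_sec = 20        # elapsed query time

# Minimum number of workers, assuming perfect parallelism and zero overhead
min_workers = slot_time_sec / wall_clock_sec
print(min_workers)  # 90.0
```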
In reality, workers have a variety of tasks (waiting for data, reading it, performing computations, and writing data)
and also need to compare notes with each other on what work was already done on the job. The good news for you is
that you don't need to worry about optimizing these workers or the underlying data to run perfectly in parallel. That's why BigQuery is a managed service -- there's an entire team dedicated to hardware and data storage optimization.
## Running a performance benchmark
To performance benchmark our data warehouse in BigQuery we need to create more than just a single SQL report. The good news is the TPC-DS dataset ships with __99 standard benchmark queries__ that we can run and log the performance outcomes.
In this lab, we are doing no adjustments to the existing data warehouse tables (no partitioning, no clustering, no nesting) so we can establish a performance benchmark to beat in future labs.
### Viewing the 99 pre-made SQL queries
We have a long SQL file with 99 standard queries against this dataset stored in our /sql/ directory.
Let's view the first 50 lines of those baseline queries to get familiar with how we will be performance benchmarking our dataset.
```
!head --lines=50 'sql/example_baseline_queries.sql'
```
### Running the first benchmark test
Now let's run the first query against our dataset and note the execution time. Tip: You can use the [--verbose flag](https://googleapis.dev/python/bigquery/latest/magics.html) in %%bigquery magics to return the job and completion time.
```
%%bigquery --verbose
# start query 1 in stream 0 using template query96.tpl
select count(*)
from `dw-workshop.tpcds_2t_baseline.store_sales` as store_sales
,`dw-workshop.tpcds_2t_baseline.household_demographics` as household_demographics
,`dw-workshop.tpcds_2t_baseline.time_dim` as time_dim,
`dw-workshop.tpcds_2t_baseline.store` as store
where ss_sold_time_sk = time_dim.t_time_sk
and ss_hdemo_sk = household_demographics.hd_demo_sk
and ss_store_sk = s_store_sk
and time_dim.t_hour = 8
and time_dim.t_minute >= 30
and household_demographics.hd_dep_count = 5
and store.s_store_name = 'ese'
order by count(*)
limit 100;
```
It should execute in just a few seconds. Then try running it again and see if you get the same performance. BigQuery will automatically [cache the results](https://cloud.google.com/bigquery/docs/cached-results) from the first time you ran the query and then serve those same results to you when you run the query again. We can confirm this by analyzing the query job statistics.
### Viewing BigQuery job statistics
Let's list our five most recent query jobs run on BigQuery using the `bq` [command line interface](https://cloud.google.com/bigquery/docs/managing-jobs#viewing_information_about_jobs). Then we will get even more detail on our most recent job with the `bq show` command. Be sure to replace the job id with your own.
```
!bq ls -j -a -n 5
!bq show --format=prettyjson -j 612a4b28-cb5c-4e0b-ad5b-ebd51c3b2439
```
Looking at the job statistics we can see our most recent query hit cache
- `cacheHit: true` and therefore
- `totalBytesProcessed: 0`.
While this is great in normal uses of BigQuery (you aren't charged for queries that hit cache), it ruins our performance test. Cache is super useful, but we want to disable it for testing purposes.
### Disabling Cache and Dry Running Queries
As of the time this lab was created, you can't pass a flag to `%%bigquery` IPython notebook magics to disable cache or to quickly see the amount of data processed. So we will use the traditional `bq` [command line interface in bash](https://cloud.google.com/bigquery/docs/reference/bq-cli-reference#bq_query).
First we will do a `dry run` of the query without processing any data just to see how many bytes of data would be processed. Then we will remove that flag and ensure `nouse_cache` is set to avoid hitting cache as well.
```
%%bash
bq query \
--dry_run \
--nouse_cache \
--use_legacy_sql=false \
"""\
select count(*)
from \`dw-workshop.tpcds_2t_baseline.store_sales\` as store_sales
,\`dw-workshop.tpcds_2t_baseline.household_demographics\` as household_demographics
,\`dw-workshop.tpcds_2t_baseline.time_dim\` as time_dim, \`dw-workshop.tpcds_2t_baseline.store\` as store
where ss_sold_time_sk = time_dim.t_time_sk
and ss_hdemo_sk = household_demographics.hd_demo_sk
and ss_store_sk = s_store_sk
and time_dim.t_hour = 8
and time_dim.t_minute >= 30
and household_demographics.hd_dep_count = 5
and store.s_store_name = 'ese'
order by count(*)
limit 100;
"""
# Convert bytes to GB
132086388641 / 1e+9
```
132 GB will be processed. At the time of writing, [BigQuery pricing](https://cloud-dot-google-developers.appspot.com/bigquery/pricing_1d69e6dbde8ba1ab8219292f7dc765cd.frame?hl=en#on_demand_pricing_) is \\$5 per 1 TB (or 1000 GB) of data after the first free 1 TB each month. Assuming we've exhausted our 1 TB free this month, this would be \\$0.66 to run.
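As a sanity check, that cost figure can be reproduced from the dry-run byte count and the quoted $5-per-TB on-demand rate (assuming the free tier is already used up):

```python
# Byte count reported by the dry run above
bytes_processed = 132086388641

# On-demand rate quoted above: $5 per TB (10^12 bytes)
cost_usd = bytes_processed / 1e12 * 5
print(round(cost_usd, 2))  # 0.66
```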
Now let's run it and ensure we're not pulling from cache so we get an accurate time-to-completion benchmark.
```
%%bash
bq query \
--nouse_cache \
--use_legacy_sql=false \
"""\
select count(*)
from \`dw-workshop.tpcds_2t_baseline.store_sales\` as store_sales
,\`dw-workshop.tpcds_2t_baseline.household_demographics\` as household_demographics
,\`dw-workshop.tpcds_2t_baseline.time_dim\` as time_dim, \`dw-workshop.tpcds_2t_baseline.store\` as store
where ss_sold_time_sk = time_dim.t_time_sk
and ss_hdemo_sk = household_demographics.hd_demo_sk
and ss_store_sk = s_store_sk
and time_dim.t_hour = 8
and time_dim.t_minute >= 30
and household_demographics.hd_dep_count = 5
and store.s_store_name = 'ese'
order by count(*)
limit 100;
"""
```
If you're an experienced BigQuery user, you likely have seen these same metrics in the Web UI as well as highlighted in the red box below:

It's a matter of preference whether you do your work in the Web UI or the command line -- each has its advantages.
One major advantage of using the `bq` command line interface is the ability to create a script that will run the remaining 98 benchmark queries for us and log the results.
### Copy the dw-workshop dataset into your own GCP project
We will use the new [BigQuery Transfer Service](https://cloud.google.com/bigquery/docs/copying-datasets) to quickly copy our large dataset from the `dw-workshop` GCP project into your own so you can perform the benchmarking.
### Create a new baseline dataset in your project
```
%%bash
export PROJECT_ID=$(gcloud config list --format 'value(core.project)')
export BENCHMARK_DATASET_NAME=tpcds_2t_baseline # Name of the dataset you want to create
## Create the BigQuery dataset $BENCHMARK_DATASET_NAME if it doesn't exist
datasetexists=$(bq ls -d | grep -w $BENCHMARK_DATASET_NAME)
if [ -n "$datasetexists" ]; then
echo -e "BigQuery dataset $BENCHMARK_DATASET_NAME already exists, let's not recreate it."
else
echo "Creating BigQuery dataset titled: $BENCHMARK_DATASET_NAME"
bq --location=US mk --dataset \
--description 'Benchmark Dataset' \
$PROJECT_ID:$BENCHMARK_DATASET_NAME
echo -e "\nHere are your current datasets:"
bq ls
fi
```
### Use the BigQuery Data Transfer Service to copy an existing dataset
1. Enable the [BigQuery Data Transfer Service API](https://console.cloud.google.com/apis/library/bigquerydatatransfer.googleapis.com)
2. Navigate to the [BigQuery console and the existing `dw-workshop` dataset](https://console.cloud.google.com/bigquery?project=dw-workshop&p=dw-workshop&d=tpcds_2t_baseline&page=dataset)
3. Click Copy Dataset

4. In the pop-up, choose your __project name__ and the newly created __dataset name__ from the previous step

5. Click __Copy__
6. Wait for the transfer to complete
### Verify you now have the baseline data in your project
Run the below query and confirm you see data. Note that if you omit the `project-id` ahead of the dataset name in the `FROM` clause, BigQuery will assume your default project.
```
%%bigquery
SELECT COUNT(*) AS store_transaction_count
FROM tpcds_2t_baseline.store_sales
```
### Setup an automated test
Running each of the 99 queries manually via the Console UI would be a tedious effort. We'll show you how you can run all 99 programmatically and automatically log the output (time and GB processed) to a log file for analysis.
Below is a shell script that:
1. Accepts a BigQuery dataset to benchmark
2. Accepts a list of semi-colon separated queries to run
3. Loops through each query and calls the `bq` query command
4. Records the execution time into a separate BigQuery performance table `perf`
Execute the below statement and follow along with the results as you benchmark a few example queries (don't worry, we've already run the full 99 recently so you won't have to).
__After executing, wait 1-2 minutes for the benchmark test to complete__
```
%%bash
# runs the SQL queries from the TPCDS benchmark
# Pull the current Google Cloud Platform project name
BQ_DATASET="tpcds_2t_baseline" # let's start by benchmarking our baseline dataset
QUERY_FILE_PATH="./sql/example_baseline_queries.sql" # the full test is on 99_baseline_queries but that will take 80+ mins to run
IFS=";"
# create perf table to keep track of run times for all 99 queries
printf "\033[32;1m Housekeeping tasks... \033[0m\n\n";
printf "Creating a reporting table perf to track how fast each query runs...";
perf_table_ddl="CREATE TABLE IF NOT EXISTS $BQ_DATASET.perf(performance_test_num int64, query_num int64, elapsed_time_sec int64, ran_on int64)"
bq rm -f $BQ_DATASET.perf
bq query --nouse_legacy_sql $perf_table_ddl
start=$(date +%s)
index=0
for select_stmt in $(<$QUERY_FILE_PATH)
do
# run the test until you hit a line with the string 'END OF BENCHMARK' in the file
if [[ "$select_stmt" == *'END OF BENCHMARK'* ]]; then
break
fi
printf "\n\033[32;1m Let's benchmark this query... \033[0m\n";
printf "$select_stmt";
SECONDS=0;
bq query --use_cache=false --nouse_legacy_sql $select_stmt # critical to turn cache off for this test
duration=$SECONDS
# get current timestamp in seconds
ran_on=$(date +%s)
index=$((index+1))
printf "\n\033[32;1m Here's how long it took... \033[0m\n\n";
echo "Query $index ran in $(($duration / 60)) minutes and $(($duration % 60)) seconds."
printf "\n\033[32;1m Writing to our benchmark table... \033[0m\n\n";
insert_stmt="insert into $BQ_DATASET.perf(performance_test_num, query_num, elapsed_time_sec, ran_on) values($start, $index, $duration, $ran_on)"
printf "$insert_stmt"
bq query --nouse_legacy_sql $insert_stmt
done
end=$(date +%s)
printf "Benchmark test complete"
```
## Viewing the benchmark results
As part of the benchmark test, we stored the processing time of each query into a new `perf` BigQuery table. We can query that table and get some performance stats for our test.
First are each of the tests we ran:
```
%%bigquery
SELECT * FROM tpcds_2t_baseline.perf
WHERE
# Let's only pull the results from our most recent test
performance_test_num = (SELECT MAX(performance_test_num) FROM tpcds_2t_baseline.perf)
ORDER BY ran_on
```
And finally, the overall statistics for the entire test:
```
%%bigquery
SELECT
TIMESTAMP_SECONDS(MAX(performance_test_num)) AS test_date,
MAX(performance_test_num) AS latest_performance_test_num,
COUNT(DISTINCT query_num) AS count_queries_benchmarked,
SUM(elapsed_time_sec) AS total_time_sec,
MIN(elapsed_time_sec) AS fastest_query_time_sec,
MAX(elapsed_time_sec) AS slowest_query_time_sec
FROM
tpcds_2t_baseline.perf
WHERE
performance_test_num = (SELECT MAX(performance_test_num) FROM tpcds_2t_baseline.perf)
```
## Benchmarking all 99 queries
As we mentioned before, we already ran all 99 queries and recorded the results and made them available for you to query in a public table:
```
%%bigquery
SELECT
TIMESTAMP_SECONDS(performance_test_num) AS test_date,
query_num,
TIMESTAMP_SECONDS(ran_on) AS query_ran_on,
TIMESTAMP_SECONDS(ran_on + elapsed_time_sec) AS query_completed_on,
elapsed_time_sec
FROM `dw-workshop.tpcds_2t_baseline.perf` # public table
WHERE
# Let's only pull the results from our most recent test
performance_test_num = (SELECT MAX(performance_test_num) FROM `dw-workshop.tpcds_2t_baseline.perf`)
ORDER BY ran_on
```
And the results of the complete test:
```
%%bigquery
SELECT
TIMESTAMP_SECONDS(MAX(performance_test_num)) AS test_date,
COUNT(DISTINCT query_num) AS count_queries_benchmarked,
SUM(elapsed_time_sec) AS total_time_sec,
ROUND(SUM(elapsed_time_sec)/60,2) AS total_time_min,
MIN(elapsed_time_sec) AS fastest_query_time_sec,
MAX(elapsed_time_sec) AS slowest_query_time_sec,
ROUND(AVG(elapsed_time_sec),2) AS avg_query_time_sec
FROM
`dw-workshop.tpcds_2t_baseline.perf`
WHERE
performance_test_num = (SELECT MAX(performance_test_num) FROM `dw-workshop.tpcds_2t_baseline.perf`)
```
Note the `total_time_sec` of __1766 seconds (or 29 minutes)__ which we will look to beat in future labs by applying BigQuery optimization techniques like:
- Partitioning and Clustering
- Nesting repeated fields
- Denormalizing with STRUCT data types
## Congratulations!
And there you have it! You successfully ran a performance benchmark test against your data warehouse. Continue on with the labs in this series to learn optimization strategies to boost your performance.
## XenonPy-ISMD (Inverse Synthesizable Molecular Design) Tutorial
This tutorial provides an introduction to ISMD, an inverse molecular design algorithm based on machine learning, and shows the complete process of building your own inverse molecular design algorithm with XenonPy.
Overview - ISMD aims at finding reactant sets R such that the properties Y of their synthetic product have a high probability of falling into a target region U. We want to sample from the posterior probability 𝑝(R|𝑌∈𝑈), which by Bayes' theorem is proportional to 𝑝(𝑌∈𝑈|R)𝑝(R), where 𝑝(𝑌∈𝑈|R) is the likelihood function and 𝑝(R) is the prior representing all possible candidates of R. ISMD is a numerical implementation of this Bayesian formulation based on sequential Monte Carlo sampling (SMC). All the building blocks needed to implement the SMC are supplied by XenonPy, and the rest of this tutorial shows how to customize each building block to solve the specific task in ISMD.
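To make the SMC idea concrete before touching XenonPy, here is a minimal, self-contained sketch of one likelihood-weight-and-resample iteration on toy scalar candidates (the Gaussian likelihood, the target value, and all numbers are invented for illustration; the real estimator and modifier come from XenonPy):

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy prior p(R): 1000 scalar candidates standing in for reactant sets R
candidates = rng.normal(0.0, 3.0, size=1000)

# Toy likelihood p(Y in U | R): Gaussian preference for candidates near y_target
y_target = 2.0
weights = np.exp(-0.5 * (candidates - y_target) ** 2)
weights /= weights.sum()

# Resample step: draw candidates in proportion to their likelihood weights
resampled = rng.choice(candidates, size=1000, replace=True, p=weights)

# The resampled population concentrates near the target region
print(candidates.mean(), resampled.mean())
```

Iterating this weight-resample loop, with a proposal step perturbing the candidates in between, is the essence of the SMC algorithm that ISMD builds on.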
```
import warnings
warnings.filterwarnings('ignore')
```
### download data and model
Download the data and model used in this tutorial from the following link:\
https://drive.google.com/drive/folders/1-uHnS4EcV9dQdSEIlt1n8skCrsZJP-3r?usp=sharing \
On your device, the downloaded folder, called ismd_assets, should contain the following files:
- pool_sim_sparse.npz
- STEREO_id_reactant_product_xlogp_tpsa.csv
- STEREO_pool_df.csv
- STEREO_separated_augm_model_average_20.pt
- figure.png
```
import os
# set path to the downloaded folder
assets_path = "/home/qiz/data/ismd_assets"
from IPython.display import Image
Image(filename=os.path.join(assets_path, 'figure.png'), width=1000, height=600)
```
An illustration of the ISMD algorithm is shown above.\
ISMD is based on the sequential Monte Carlo algorithm, which iterates the estimator (likelihood), modifier (prior), resample loop. All of these blocks are available in XenonPy, and we can customize them for a specific target.\
We will show how to customize and implement each block step by step.\
This tutorial will proceed as follows:
1. data introduction
    - ground truth
    - reactant pool
    - similarity matrix
    - initial data
2. estimator
    - featurizer
    - forward model
3. modifier
    - proposal
    - lookup pool
    - reactor
4. a complete ismd run
### 1. data introduction
#### ground truth
Ground truth is a pandas dataframe of real chemical reactions. Each row is a chemical reaction record that includes the reactant, the product, some physical properties of the product ('XLogP', 'TPSA'), and the index of each reactant in the reactant pool.
```
import pandas as pd
ground_truth_path = os.path.join(assets_path,"STEREO_id_reactant_product_xlogp_tpsa.csv")
data = pd.read_csv(ground_truth_path)
data.head()
import matplotlib.pyplot as plt
# show the property distribution of ground truth
plt.figure(figsize=(5,5))
plt.scatter(data['XLogP'],data['TPSA'],s=15,c='b',alpha = 0.1)
plt.title('iSMD sample data')
plt.xlabel('XLogP')
plt.ylabel('TPSA')
plt.show()
```
#### reactant pool
The reactant pool is a pandas dataframe; each row is a reactant that can be used in a reaction.
An id column, corresponding to the similarity matrix, is optional; if no id is given, the index is used instead.
```
reactant_pool_path = os.path.join(assets_path,"STEREO_pool_df.csv")
reactant_pool = pd.read_csv(reactant_pool_path)
print("Number of reactants is {}".format(len(reactant_pool)))
reactant_pool.head()
```
#### similarity matrix
The similarity matrix is a pandas dataframe whose number of rows and columns both equal the number of reactants in the reactant pool. Each cell holds the similarity between the reactant with the row's index and the one with the column's index, so the values on the diagonal are 1.0.
```
%%time
from scipy import sparse
sim_matrix_path = os.path.join(assets_path,"pool_sim_sparse.npz")
reactant_pool_sim = sparse.load_npz(sim_matrix_path).tocsr()
sim_df = pd.DataFrame.sparse.from_spmatrix(reactant_pool_sim)
print("The size of similarity matrix is {} * {}".format(len(sim_df), len(list(sim_df))))
%%time
sim_df.head()
```
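For reference, a matrix like this can be assembled from binary molecular fingerprints. The following is a hedged numpy sketch of pairwise Tanimoto similarity (the matrix used in this tutorial was precomputed and shipped as a sparse `.npz` file, so you do not need to build it yourself):

```python
import numpy as np

def tanimoto_matrix(fps):
    """Pairwise Tanimoto similarity for binary fingerprint rows (n x d)."""
    fps = np.asarray(fps, dtype=bool).astype(int)
    inter = fps @ fps.T                                 # |A ∩ B|
    counts = fps.sum(axis=1)
    union = counts[:, None] + counts[None, :] - inter   # |A ∪ B|
    return inter / np.maximum(union, 1)                 # guard empty rows

fps = [[1, 1, 0, 0],
       [1, 0, 1, 0],
       [1, 1, 0, 0]]
sim = tanimoto_matrix(fps)
# diagonal entries are 1.0, as are entries for identical fingerprints
```

Identical rows (0 and 2) get similarity 1.0, while rows 0 and 1 share one of three set bits, giving 1/3.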
#### initial data
The sample data for SMC is a pandas dataframe containing the following columns: reactant id, reactant SMILES, and product SMILES.
In the SMC loop, the descriptor of each sample's product is calculated first, and then resampling of the reactant ids is performed. The initial data should therefore contain at least the product and the reactant ids.
```
sample_size = 100
data_sample = data.query("XLogP<3 & TPSA<50").sample(n=sample_size).reset_index(drop=True)
init_samples = pd.DataFrame({'reactant_idx': [],
'reactant_smiles': [],
'product_smiles': []})
init_samples['reactant_idx'] = [[int(a) for a in idx_str.split('.')] for idx_str in data_sample['reactant_index']]
init_samples['product_smiles'] = data_sample['product']
init_samples.head()
```
### 2. estimator
#### featurizer
The featurizer calculates the fingerprint of each sample's product. We currently support all fingerprints and descriptors in RDKit (Mordred will be added soon). In this tutorial, we use only the ECFP and MACCS fingerprints from RDKit.
```
%%time
from xenonpy.descriptor import Fingerprints
product_descriptor = Fingerprints(featurizers=['ECFP', 'MACCS'], input_type='smiles', on_errors='nan', target_col='product_smiles')
samples_feature = product_descriptor.transform(init_samples)
samples_feature.head()
```
#### forward model
The forward model computes the likelihood of the product's properties falling into the target region.\
In this tutorial, we train two regression models, one for each of the two properties.
```
%%time
from sklearn.linear_model import BayesianRidge
from sklearn.model_selection import train_test_split
n_train = 100000
training_df = data.iloc[:n_train]
trainer_descriptor = Fingerprints(featurizers=['ECFP', 'MACCS'], input_type='smiles', on_errors='nan', target_col='product')
training_fp = trainer_descriptor.transform(training_df)
training_XLogP = training_df["XLogP"].tolist()
training_TPSA = training_df["TPSA"].tolist()
model_XLogP = BayesianRidge()
model_TPSA = BayesianRidge()
X_train_XLogP, X_test_XLogP, y_train_XLogP, y_test_XLogP = train_test_split(training_fp, training_XLogP, test_size=0.25, random_state=42)
X_train_TPSA, X_test_TPSA, y_train_TPSA, y_test_TPSA = train_test_split(training_fp, training_TPSA, test_size=0.25, random_state=42)
model_XLogP.fit(X_train_XLogP, y_train_XLogP)
model_TPSA.fit(X_train_TPSA, y_train_TPSA)
y_train_XLogP_pred = model_XLogP.predict(X_train_XLogP)
y_train_TPSA_pred = model_TPSA.predict(X_train_TPSA)
y_test_XLogP_pred = model_XLogP.predict(X_test_XLogP)
y_test_TPSA_pred = model_TPSA.predict(X_test_TPSA)
import numpy as np
y_all = training_XLogP
y_train_true = y_train_XLogP
y_train_pred = y_train_XLogP_pred
y_test_true = y_test_XLogP
y_test_pred = y_test_XLogP_pred
y_min = min(y_all)*0.95
y_max = max(y_all)*1.05
y_diff = y_max - y_min
plt.figure(figsize=(5,5))
plt.scatter(y_train_true, y_train_pred, c='b', alpha=0.1, label='train')
plt.scatter(y_test_true, y_test_pred, c='r', alpha=0.2, label='test')
plt.text(y_min+y_diff*0.7, y_min+y_diff*0.05, 'MAE: {:5.2f}'.format(np.mean(np.abs(y_test_true - y_test_pred))), fontsize=12)
plt.xlim(y_min,y_max)
plt.ylim(y_min,y_max)
plt.legend(loc='upper left')
plt.xlabel('Observation')
plt.ylabel('Prediction')
plt.title('Property: XLogP')
plt.plot([y_min,y_max],[y_min,y_max],ls="--",c='k')
y_all = training_TPSA
y_train_true = y_train_TPSA
y_train_pred = y_train_TPSA_pred
y_test_true = y_test_TPSA
y_test_pred = y_test_TPSA_pred
y_min = min(y_all)*0.95
y_max = max(y_all)*1.05
y_diff = y_max - y_min
plt.figure(figsize=(5,5))
plt.scatter(y_train_true, y_train_pred, c='b', alpha=0.1, label='train')
plt.scatter(y_test_true, y_test_pred, c='r', alpha=0.2, label='test')
plt.text(y_min+y_diff*0.7, y_min+y_diff*0.05, 'MAE: {:5.2f}'.format(np.mean(np.abs(y_test_true - y_test_pred))), fontsize=12)
plt.xlim(y_min,y_max)
plt.ylim(y_min,y_max)
plt.legend(loc='upper left')
plt.xlabel('Observation')
plt.ylabel('Prediction')
plt.title('Property: TPSA')
plt.plot([y_min,y_max],[y_min,y_max],ls="--",c='k')
```
Merging the featurizer and forward model into the estimator (likelihood) module gives us the first block of ISMD.\
In this tutorial, we use GaussianLogLikelihood as the estimator.
```
from xenonpy.inverse.iqspr import GaussianLogLikelihood
## set property target
prop = ['XLogP', 'TPSA']
target_range = {'XLogP': (5, 10), 'TPSA': (50, 100)}
likelihood_calculator = GaussianLogLikelihood(descriptor=product_descriptor, targets=target_range, XLogP=model_XLogP, TPSA=model_TPSA)
samples_likelihood = likelihood_calculator(init_samples)
samples_likelihood.head()
```
### 3. modifier
#### proposal
The proposal substitutes a randomly selected reactant with a similar one.
```
%%time
from xenonpy.contrib.ismd import ReactantPool
from xenonpy.contrib.ismd import load_reactor
model_path = os.path.join(assets_path,'STEREO_separated_augm_model_average_20.pt')
mol_reactor = load_reactor(model_path=model_path, device_id=-1)
mol_pool = ReactantPool(pool_df=reactant_pool, sim_df=sim_df, reactor=mol_reactor)
old_list = [str(r) for r in init_samples["reactant_idx"][:5]]
new_list = [mol_pool.single_proposal(reactant) for reactant in init_samples["reactant_idx"]]
init_samples["reactant_idx"] = new_list
print("{:25} ---> {:25}".format("old idx","new idx"))
for o,n in zip(old_list[:5], new_list[:5]):
print("{:25} ---> {:25}".format(str(o),str(n)))
```
#### lookup pool
By looking up the reactant pool, we convert reactant ids to SMILES.
```
init_samples["reactant_smiles"] = [mol_pool.single_index2reactant(id_str) for id_str in init_samples["reactant_idx"]]
print("{:25} ---> {:25}".format("reactant idx","reactant SMILES"))
for o,n in zip(init_samples["reactant_idx"][:5], init_samples["reactant_smiles"][:5]):
print("{:25} ---> {:25}".format(str(o),str(n)))
```
#### reactor
The reactor can predict the product given the reactant set.
```
init_samples["product_smiles"] = mol_pool._reactor.react(init_samples["reactant_smiles"])
init_samples.head()
```
### 4. a complete ismd run
In this tutorial, we use IQSPR4DF as the SMC framework, which works with pandas dataframes. Merging the estimator and the modifier into the IQSPR4DF module gives a complete implementation that is ready to run.
```
from xenonpy.inverse.iqspr import IQSPR4DF
ISMD = IQSPR4DF(estimator=likelihood_calculator, modifier=mol_pool, sample_col='product_smiles')
import numpy as np
beta = np.hstack([np.linspace(0.01,0.2,20),np.linspace(0.21,0.4,10),np.linspace(0.4,1,10),np.linspace(1,1,10)])
beta
%%time
np.random.seed(201906) # fix the random seed
# main loop of iQSPR
iqspr_samples1, iqspr_loglike1, iqspr_prob1, iqspr_freq1 = [], [], [], []
for s, ll, p, freq in ISMD(init_samples, beta, yield_lpf=True):
iqspr_samples1.append(s)
iqspr_loglike1.append(ll)
iqspr_prob1.append(p)
iqspr_freq1.append(freq)
# record all outputs
iqspr_results_reorder = {
"samples": iqspr_samples1,
"loglike": iqspr_loglike1,
"prob": iqspr_prob1,
"freq": iqspr_freq1,
"beta": beta
}
```
Looking at the likelihood at each step, we can see that after a few steps most of the sampled particles have a high likelihood.
```
import matplotlib.pyplot as plt
# set up the min and max boundary for the plots
tmp_list = [x.sum(axis = 1, skipna = True).values for x in iqspr_results_reorder["loglike"]]
flat_list = np.asarray([item for sublist in tmp_list for item in sublist])
y_max, y_min = max(flat_list), min(flat_list)
plt.figure(figsize=(10,5))
plt.xlim(0,len(iqspr_results_reorder["loglike"]))
plt.ylim(-30,y_max)
plt.xlabel('Step')
plt.ylabel('Log-likelihood')
for i, ll in enumerate(tmp_list):
plt.scatter([i]*len(ll), ll ,s=10, c='b', alpha=0.2)
plt.show()
#plt.savefig('iqspr_loglike_reorder.png',dpi = 500)
#plt.close()
y_max, y_min = np.exp(y_max), np.exp(y_min)
plt.figure(figsize=(10,5))
plt.xlim(0,len(iqspr_results_reorder["loglike"]))
plt.ylim(y_min,y_max)
plt.xlabel('Step')
plt.ylabel('Likelihood')
for i, ll in enumerate(tmp_list):
plt.scatter([i]*len(ll), np.exp(ll) ,s=10, c='b', alpha=0.2)
```
# Setup
```
import random
import itertools
import numpy as np
import pandas as pd
pd.options.mode.chained_assignment = None # default='warn'
import seaborn as sns
color = sns.color_palette()
import matplotlib.pyplot as plt
%matplotlib inline
from nltk.corpus import stopwords
STOP_WORDS = set(stopwords.words('english'))
from nltk import word_tokenize, ngrams
from sklearn.ensemble import RandomForestClassifier
from sklearn.utils import resample
from sklearn.metrics import log_loss
from sklearn.model_selection import KFold
from sklearn.preprocessing import LabelEncoder
from sklearn.decomposition import TruncatedSVD
from sklearn.model_selection import train_test_split
from sklearn.model_selection import StratifiedKFold
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.naive_bayes import GaussianNB
from sklearn.naive_bayes import BernoulliNB
from sklearn.metrics import confusion_matrix
from sklearn.metrics import accuracy_score
from sklearn import svm
from time import time
from collections import defaultdict
import wisardpkg as wp
```
# Pre-processing
## Importing...
```
df = {
"cooking": pd.read_csv('../../dataset/processed/cooking.csv', usecols=['title', 'content']),
"crypto": pd.read_csv('../../dataset/processed/crypto.csv', usecols=['title', 'content']),
"robotics": pd.read_csv('../../dataset/processed/robotics.csv', usecols=['title', 'content']),
"biology": pd.read_csv('../../dataset/processed/biology.csv', usecols=['title', 'content']),
"travel": pd.read_csv('../../dataset/processed/travel.csv', usecols=['title', 'content']),
"diy": pd.read_csv('../../dataset/processed/diy.csv', usecols=['title', 'content']),
}
```
## Generating new .csv file with title+content and class columns...
```
with open('../../dataset/processed/data.csv', 'w') as f:
f.write('title_content|label\n')
for _class in df:
df[_class]['title_content'] = df[_class][['title', 'content']].apply(lambda x: '{} {}'.format(x[0],x[1]),
axis=1)
df[_class]['label'] = _class
df[_class].to_csv(f, sep='|', columns=['title_content', 'label'], header=False, index=False)
```
# Data Analysis
## Exploration
```
dataset = pd.read_csv('../../dataset/processed/data.csv', sep='|')
dataset.describe()
```
## Labels distribution (percentage)
```
labels = dataset['label'].value_counts()
print(labels.sort_index()/labels.sum()*100)
```
### Histogram
```
fig = plt.figure(figsize=(20, 10))
ax1 = sns.countplot(dataset['label'].sort_values())
plt.ylabel('Observations', fontsize=12)
plt.xlabel('Labels', fontsize=12)
plt.xticks(rotation='vertical')
plt.title('Labels frequency histogram')
plt.show()
print(labels.sort_index())
```
## Word distribution
### Statistics of the number of words (size) of title_content text field
```
dataset['size'] = dataset['title_content'].apply(lambda x : len(str(x).split()))
sizes = dataset['size'].value_counts()
print('The top 20 most frequent size of title_content, and their respective frequency:')
print(sizes.nlargest(20))
```
### Histogram
```
fig = plt.figure(figsize=(20, 10))
ax1 = sns.barplot(sizes.index, sizes.values, alpha=0.8)
ax1.set_xticklabels([])
plt.title('Number of words frequency histogram')
plt.ylabel('Number of Occurrences', fontsize=12)
plt.xlabel('Number of words', fontsize=12)
plt.show()
```
## Data example
### Robotics texts with more than 200 words
```
filtered_data = dataset[(dataset.label == 'robotics') & (dataset.title_content.apply(lambda x : len(str(x).split())) > 200)]
filtered_data.describe()
```
### Cell content example
```
line = 61041
print('TEXT: {0}'.format(dataset.loc[line, 'title_content']))
print('LABEL: {0}'.format(dataset.loc[line, 'label']))
print('LENGTH: {0} words.'.format(len(dataset.loc[line, 'title_content'].split())))
ds = dataset
```
# Bag-of-Words
```
tfidf = TfidfVectorizer(analyzer='word',
stop_words=STOP_WORDS,
ngram_range=(1,1),
max_df=0.7, min_df=2,
sublinear_tf=True)
X = tfidf.fit_transform(ds['title_content'])
print(X.shape)
l_enc = LabelEncoder()
y = l_enc.fit_transform(ds['label'])
print('Encoded labels: ', list([(i, l_enc.classes_[i]) for i in range(0, len(l_enc.classes_))]))
```
# Dimensionality reduction
```
svd = TruncatedSVD(n_components=1000, algorithm='randomized')
X_svd = svd.fit_transform(X)
print('Shape of svd matrix: ', X_svd.shape)
```
# Split Train/Test examples
```
# X = np.concatenate([svd_titulo, svd_resumo], axis=1)
X_train, X_valid, y_train, y_valid = train_test_split(X_svd, y, test_size=0.2, random_state=283)
print('X_train matrix shape is: {0}'.format(X_train.shape))
print('X_valid matrix shape is: {0}'.format(X_valid.shape))
print('y_train matrix shape is: {0}'.format(y_train.shape))
print('y_valid matrix shape is: {0}'.format(y_valid.shape))
```
# Functions
```
def plot_confusion_matrix(cm, classes,
normalize=False,
title='',
cmap=plt.cm.Blues):
"""
This function prints and plots the confusion matrix.
Normalization can be applied by setting `normalize=True`.
"""
plt.figure(figsize=(12,6))
if normalize:
cm = cm.astype('float') / cm.sum(axis=1)[:, np.newaxis]
plt.title(title + " normalized confusion matrix")
else:
plt.title(title + ' confusion matrix, without normalization')
plt.imshow(cm, interpolation='nearest', cmap=cmap)
plt.colorbar()
tick_marks = np.arange(len(classes))
plt.xticks(tick_marks, classes, rotation=45)
plt.yticks(tick_marks, classes)
#print(cm)
fmt = '.2f' if normalize else 'd'
thresh = cm.max() / 2.
for i, j in itertools.product(range(cm.shape[0]), range(cm.shape[1])):
plt.text(j, i, format(cm[i, j], fmt),
horizontalalignment="center",
color="white" if cm[i, j] > thresh else "black")
plt.tight_layout()
plt.ylabel('True class')
plt.xlabel('Predicted class')
plt.show()
def benchmark(clf, X_train, y_train, X_test, y_test):
print("Training: ", clf)
t0 = time()
clf.fit(X_train, y_train)
train_time = time() - t0
print("train time: %0.3fs" % train_time)
t0 = time()
pred = clf.predict(X_test)
test_time = time() - t0
print("test time: %0.3fs" % test_time)
score = accuracy_score(y_test, pred)
print("accuracy: %0.3f" % score)
clf_descr = str(clf).split('(')[0]
print("Done with " + clf_descr)
print('-' * 80)
return clf_descr, score, train_time, test_time
def one_hot(real_vec, min_value=-1.0, max_value=1.0, n=10):
vec = []
for v in real_vec:
if v == max_value:
rang = [0] * n
rang[-1] = 1
vec.extend(rang)
else:
rang = [0] * n
t = (max_value - min_value)
p = v - min_value
s = int((p / t) * n)
rang[s] = 1
vec.extend(rang)
return np.array(vec)
def show_retina(X, rows=50, columns=10):
pixels = np.array(X, dtype='uint8')
# Reshape the array into 28 x 28 array (2-dimensional array)
pixels = pixels.reshape((columns, rows))
# Plot
plt.imshow(pixels, cmap='gray', interpolation='nearest')
plt.show()
```
# KFold
```
kf = StratifiedKFold(n_splits=10)
resultsGaussianNB = []
resultsBernoulliNB = []
resultsRandomForest = []
resultsSVM = []
for train, test in kf.split(X_train, y_train):
resultsGaussianNB.append(benchmark(GaussianNB(), X_train[train], y_train[train], X_train[test], y_train[test]))
resultsBernoulliNB.append(benchmark(BernoulliNB(), X_train[train], y_train[train], X_train[test], y_train[test]))
resultsRandomForest.append(benchmark(RandomForestClassifier(), X_train[train], y_train[train], X_train[test], y_train[test]))
resultsSVM.append(benchmark(svm.LinearSVC(), X_train[train], y_train[train], X_train[test], y_train[test]))
_sum_nbg, _sum_nbb, _sum_rf, _sum_svm = 0, 0, 0, 0
for i in range(len(resultsGaussianNB)):
_sum_nbg += resultsGaussianNB[i][1]
_sum_nbb += resultsBernoulliNB[i][1]
_sum_rf += resultsRandomForest[i][1]
_sum_svm += resultsSVM[i][1]
print('Mean Accuracy of the Gaussian Naïve-Bayes Classifier was {0:0.2f}%.'.format(100 * _sum_nbg/len(resultsGaussianNB)))
print('Mean Accuracy of the Bernoulli Naïve-Bayes Classifier was {0:0.2f}%.'.format(100 * _sum_nbb/len(resultsBernoulliNB)))
print('Mean Accuracy of the Random Forest Classifier was {0:0.2f}%.'.format(100 * _sum_rf/len(resultsRandomForest)))
print('Mean Accuracy of the SVM Classifier was {0:0.2f}%.'.format(100 * _sum_svm/len(resultsSVM)))
```
## Naïve-Bayes Gaussian
```
nbg = GaussianNB()
nbg.fit(X_train, y_train)
y_pred = nbg.predict(X_valid)
cm = confusion_matrix(y_valid, y_pred)
print('Accuracy: ' + str(accuracy_score(y_valid, y_pred)))
plot_confusion_matrix(cm, l_enc.classes_, title='Naïve-Bayes Gaussian', normalize=True)
```
## Naïve-Bayes Bernoulli
```
nbb = BernoulliNB()
nbb.fit(X_train, y_train)
y_pred = nbb.predict(X_valid)
cm = confusion_matrix(y_valid, y_pred)
print('Accuracy: ' + str(accuracy_score(y_valid, y_pred)))
plot_confusion_matrix(cm, l_enc.classes_, title='Naïve-Bayes Bernoulli', normalize=True)
```
## Random Forest
```
rf = RandomForestClassifier()
rf.fit(X_train, y_train)
y_pred = rf.predict(X_valid)
cm = confusion_matrix(y_valid, y_pred)
print('Accuracy: ' + str(accuracy_score(y_valid, y_pred)))
plot_confusion_matrix(cm, l_enc.classes_, title='Random Forest', normalize=True)
```
## SVM
```
lin_clf = svm.LinearSVC()
lin_clf.fit(X_train, y_train)
y_pred = lin_clf.predict(X_valid)
cm = confusion_matrix(y_valid, y_pred)
print('Accuracy: ' + str(accuracy_score(y_valid, y_pred)))
plot_confusion_matrix(cm, l_enc.classes_, title='SVM', normalize=True)
```
## WiSARD
### OneHot encoding of SVD matrix (transforming real into boolean)
```
AddressSize = 6
X_bool_train, y_bool_train = [], []
for example in X_train:
X_bool_train.append(one_hot(example, min_value=-1.0, max_value=1.0, n=2 ** AddressSize))
y_bool_train = l_enc.inverse_transform(y_train)
X_bool_valid, y_bool_valid = [], []
for example in X_valid:
X_bool_valid.append(one_hot(example, min_value=-1.0, max_value=1.0, n=2 ** AddressSize))
y_bool_valid = l_enc.inverse_transform(y_valid)
wsd = wp.Wisard(AddressSize, ignoreZero=True, verbose=True)
wsd.train(X_bool_train, y_bool_train)
y_pred = wsd.classify(X_bool_valid)
cm = confusion_matrix(l_enc.transform(y_bool_valid), l_enc.transform(y_pred))
print('Accuracy: ' + str(accuracy_score(l_enc.transform(y_bool_valid), l_enc.transform(y_pred))))
plot_confusion_matrix(cm, l_enc.classes_, title='WiSARD', normalize=True)
```
# Pending!!!
## Table of accuracy and of training and prediction times, including the standard deviation
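A possible sketch of the pending table, assuming each per-fold entry is the `(name, accuracy, train_time, test_time)` tuple returned by `benchmark`; the numbers below are toy placeholders, not real measurements:

```python
import pandas as pd

# Toy stand-ins for the per-fold (name, accuracy, train_time, test_time)
# tuples that benchmark() returns; a real run would use the results* lists.
folds = [("GaussianNB", 0.81, 0.10, 0.01),
         ("GaussianNB", 0.79, 0.12, 0.01),
         ("LinearSVC", 0.92, 0.50, 0.02),
         ("LinearSVC", 0.94, 0.48, 0.02)]
df = pd.DataFrame(folds, columns=["clf", "accuracy", "train_s", "test_s"])
# mean and standard deviation of each metric, per classifier
table = df.groupby("clf").agg(["mean", "std"]).round(3)
print(table)
```

The same `groupby`/`agg` call works directly on the concatenated fold results once they are put into a long-format dataframe.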
# Results - END OF NOTEBOOK
```
print(resultsGaussianNB)
print(resultsBernoulliNB)
print(resultsRandomForest)
print(resultsSVM)
indices = np.arange(len(resultsGaussianNB))
resultsGaussianNB = [[x[i] for x in resultsGaussianNB] for i in range(4)]
resultsBernoulliNB = [[x[i] for x in resultsBernoulliNB] for i in range(4)]
resultsRandomForest = [[x[i] for x in resultsRandomForest] for i in range(4)]
resultsSVM = [[x[i] for x in resultsSVM] for i in range(4)]
# concatenate fold-wise: zip the per-metric lists of the four classifiers
clf_names, score, training_time, test_time = [a + b + c + d for a, b, c, d in zip(resultsGaussianNB, resultsBernoulliNB, resultsRandomForest, resultsSVM)]
# TODO: still to combine the four variables above with the data of all
# classifiers to make a single plot
print("start")
print(training_time)
training_time = np.array(training_time) / np.max(training_time)
test_time = np.array(test_time) / np.max(test_time)
plt.figure(figsize=(12, 8))
plt.title("Score")
plt.barh(indices, score, .2, label="score", color='navy')
plt.barh(indices + .3, training_time, .2, label="training time",
color='c')
plt.barh(indices + .6, test_time, .2, label="test time", color='darkorange')
plt.yticks(())
plt.legend(loc='best')
plt.subplots_adjust(left=.25)
plt.subplots_adjust(top=.95)
plt.subplots_adjust(bottom=.05)
for i, c in zip(indices, clf_names):
plt.text(-.3, i, c)
plt.show()
import numpy
#print(resultsGaussianNB)
#print(resultsBernoulliNB)
#print(resultsRandomForest)
#print(resultsSVM)
#TODO for each of the arrays above, aggregate the values into the mean and std of each metric. They are arrays of 4-tuples
h = [[x[i] for x in resultsGaussianNB] for i in range(4)]
a, b, c, d = h
arr = numpy.array(b)
meanresultsGaussianNBAccuracy = numpy.mean(arr, axis=0)
sdresultsGaussianNBAccuracy = numpy.std(arr, axis=0)
arr = numpy.array(c)
meanresultsGaussianNBTrain = numpy.mean(arr, axis=0)
sdresultsGaussianNBTrain = numpy.std(arr, axis=0)
arr = numpy.array(d)
meanresultsGaussianNBTest = numpy.mean(arr, axis=0)
sdresultsGaussianNBTest = numpy.std(arr, axis=0)
h = [[x[i] for x in resultsBernoulliNB] for i in range(4)]
a, b, c, d = h
arr = numpy.array(b)
meanresultsBernoulliNBAccuracy = numpy.mean(arr, axis=0)
sdresultsBernoulliNBAccuracy = numpy.std(arr, axis=0)
arr = numpy.array(c)
meanresultsBernoulliNBTrain = numpy.mean(arr, axis=0)
sdresultsBernoulliNBTrain = numpy.std(arr, axis=0)
arr = numpy.array(d)
meanresultsBernoulliNBTest = numpy.mean(arr, axis=0)
sdresultsBernoulliNBTest = numpy.std(arr, axis=0)
h = [[x[i] for x in resultsRandomForest] for i in range(4)]
a, b, c, d = h
arr = numpy.array(b)
meanresultsRandomForestAccuracy = numpy.mean(arr, axis=0)
sdresultsRandomForestAccuracy = numpy.std(arr, axis=0)
arr = numpy.array(c)
meanresultsRandomForestTrain = numpy.mean(arr, axis=0)
sdresultsRandomForestTrain = numpy.std(arr, axis=0)
arr = numpy.array(d)
meanresultsRandomForestTest = numpy.mean(arr, axis=0)
sdresultsRandomForestTest = numpy.std(arr, axis=0)
h = [[x[i] for x in resultsSVM] for i in range(4)]
a, b, c, d = h
arr = numpy.array(b)
meanresultsSVMAccuracy = numpy.mean(arr, axis=0)
sdresultsSVMAccuracy = numpy.std(arr, axis=0)
arr = numpy.array(c)
meanresultsSVMTrain = numpy.mean(arr, axis=0)
sdresultsSVMTrain = numpy.std(arr, axis=0)
arr = numpy.array(d)
meanresultsSVMTest = numpy.mean(arr, axis=0)
sdresultsSVMTest = numpy.std(arr, axis=0)
# check whether the concatenation went in the right direction
results = resultsGaussianNB + resultsBernoulliNB + resultsRandomForest + resultsSVM
#print("before:")
#print(results)
indices = np.arange(len(results))
#resultsGaussianNB2 = [[x[i] for x in resultsGaussianNB] for i in range(4)]
#resultsBernoulliNB2 = [[x[i] for x in resultsBernoulliNB] for i in range(4)]
#resultsRandomForest2 = [[x[i] for x in resultsRandomForest] for i in range(4)]
#resultsSVM2 = [[x[i] for x in resultsSVM] for i in range(4)]
resultsFinal = [[x[i] for x in results] for i in range(4)]
#print("after:")
#print(resultsFinal)
clf_names, score, training_time, test_time = resultsFinal
labels = ["Gaussian", "Bernoulli", "RandomForest", "SVM"]
print(labels)
print(clf_names)
indicesNew = np.arange(len(labels))
meanScore = meanresultsGaussianNBAccuracy, meanresultsBernoulliNBAccuracy, meanresultsRandomForestAccuracy, meanresultsSVMAccuracy
meanDPScore = sdresultsGaussianNBAccuracy, sdresultsBernoulliNBAccuracy, sdresultsRandomForestAccuracy, sdresultsSVMAccuracy
meanTrain = meanresultsGaussianNBTrain, meanresultsBernoulliNBTrain, meanresultsRandomForestTrain, meanresultsSVMTrain
meanDPTrain = sdresultsGaussianNBTrain, sdresultsBernoulliNBTrain, sdresultsRandomForestTrain, sdresultsSVMTrain
meanTest = meanresultsGaussianNBTest, meanresultsBernoulliNBTest, meanresultsRandomForestTest, meanresultsSVMTest
meanDPTest = sdresultsGaussianNBTest, sdresultsBernoulliNBTest, sdresultsRandomForestTest, sdresultsSVMTest
######## Times
plt.bar(indicesNew, meanTrain, width=0.2, label="Mean Train Time")
plt.bar(indicesNew + .2, meanTest, width=0.2, label="Mean Test Time")
plt.xlabel('Classifier', fontsize=5)
plt.ylabel('Times', fontsize=5)
plt.xticks(indicesNew, labels, fontsize=5, rotation=30)
plt.title('Mean Times')
plt.legend(loc='best')
plt.show()
######### Std of times
plt.bar(indicesNew, meanDPTrain, width=0.2, label="Train Time Std")
plt.bar(indicesNew + .2, meanDPTest, width=0.2, label="Test Time Std")
plt.xlabel('Classifier', fontsize=5)
plt.ylabel('Times', fontsize=5)
plt.xticks(indicesNew, labels, fontsize=5, rotation=30)
plt.title('Std of Times')
plt.legend(loc='best')
plt.show()
########## Accuracy
plt.bar(indicesNew, meanScore, width=0.2, label="Mean Accuracy")
plt.xlabel('Classifier', fontsize=5)
plt.ylabel('Accuracy', fontsize=5)
plt.xticks(indicesNew, labels, fontsize=5, rotation=30)
plt.title('Mean Accuracy')
plt.legend(loc='best')
plt.show()
########## Accuracy std
plt.bar(indicesNew, meanDPScore, width=0.2, label="Accuracy Std")
plt.xlabel('Classifier', fontsize=5)
plt.ylabel('Accuracy', fontsize=5)
plt.xticks(indicesNew, labels, fontsize=5, rotation=30)
plt.title('Accuracy Std')
plt.legend(loc='best')
plt.show()
```
# wav2vec-u CV-sv - w2vu_generate
> "w2vu_generate on Common Voice Swedish"
- toc: false
- branch: master
- badges: true
- comments: true
- categories: [colab, wav2vec-u]
This is based on [this]({% post_url 2021-05-30-wav2vec-u-cv-swedish-gan %}). The main difference is the script that's being run, and setting up [flashlight's python bindings](https://github.com/flashlight/flashlight/tree/master/bindings/python)
I already had the GAN model in gdrive; those files are available [here](https://www.kaggle.com/jimregan/download-w2vu-sv-model-trained-on-colab).
## Preparation
```
!pip install condacolab
import condacolab
condacolab.install()
%%capture
!conda install -c pykaldi pykaldi -y
!git clone https://github.com/pytorch/fairseq/
!git clone https://github.com/kpu/kenlm
%%capture
!apt-get -y install libeigen3-dev liblzma-dev zlib1g-dev libbz2-dev
```
The python build doesn't build utils, so this is (probably) necessary
```
%cd /content/kenlm
!mkdir build
%cd build
!cmake ..
!make -j 4
%%capture
%cd /content/kenlm
!python setup.py install
%cd /tmp
import os
os.environ['PATH'] = f"{os.environ['PATH']}:/content/kenlm/build/bin/"
os.environ['FAIRSEQ_ROOT'] = '/content/fairseq'
%cd /content/fairseq/
```
For next cell, see [here](https://github.com/pytorch/fairseq/issues/3087)
```
%%capture
!pip install --editable ./
!python setup.py build develop
os.environ['HYDRA_FULL_ERROR'] = '1'
%%capture
!pip install editdistance
```
https://colab.research.google.com/github/corrieann/kaggle/blob/master/kaggle_api_in_colab.ipynb
```
%%capture
!pip install kaggle
from google.colab import files
uploaded = files.upload()
for fn in uploaded.keys():
print('User uploaded file "{name}" with length {length} bytes'.format(
name=fn, length=len(uploaded[fn])))
# Then move kaggle.json into the folder where the API expects to find it.
!mkdir -p ~/.kaggle/ && mv kaggle.json ~/.kaggle/ && chmod 600 ~/.kaggle/kaggle.json
%cd /content
!kaggle datasets download "jimregan/w2vu-cvsv-audio-processed"
%%capture
!unzip /content/w2vu-cvsv-audio-processed.zip
!kaggle datasets download -d jimregan/w2vu-cvsv-prepared-text
%%capture
!unzip w2vu-cvsv-prepared-text.zip
!rm *.zip
!cp /content/preppedtext/phones/dict* /content/precompute_pca512_cls128_mean
%cd /content
!git clone https://github.com/flashlight/flashlight
%%capture
!apt install -q libfftw3-dev
%cd flashlight/bindings/python
%%capture
!pip install packaging
!USE_MKL=0 KENLM_ROOT=/content/kenlm python setup.py install
```
## w2vu-generate
```
import torch
torch.version.cuda
torch.backends.cudnn.version()
%cd /content/fairseq
from google.colab import drive
drive.mount('/content/drive')
%cd /content/fairseq/examples/wav2vec/unsupervised
%%writefile rungan.sh
python w2vu_generate.py --config-dir config/generate --config-name viterbi \
fairseq.common.user_dir=/content/fairseq/examples/wav2vec/unsupervised \
fairseq.task.data=/content/precompute_pca512_cls128_mean \
fairseq.common_eval.path=/content/drive/MyDrive/w2vu/checkpoint_best.pt \
fairseq.dataset.gen_subset=valid results_path=/content/drive/MyDrive/w2vures
!bash rungan.sh
```
# 1. Overview of Reinforcement Learning
A typical reinforcement learning environment is shown below:

As the figure shows, a reinforcement learning system consists of an environment and an Agent:
* At time $t$, the Agent observes the environment state $S_{t}$;
* Based on this observation, the Agent selects an action $A_{t}$;
* As a result of the action, the environment transitions to $S_{t+1}$ and gives the Agent a reward signal $R_{t}$.
This loop repeats until the process ends, for example when the game is over.
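The interaction loop described above can be sketched as follows; this is a minimal illustration with a made-up toy environment, not tied to any particular RL library:

```python
import random

class ToyEnv:
    """A toy environment: the state is a step counter; the episode ends at 5."""
    def reset(self):
        self.s = 0
        return self.s
    def step(self, a):
        self.s += 1                           # transition to S_{t+1}
        r = 1.0 if a == self.s % 2 else 0.0   # reward signal R_t
        done = self.s >= 5
        return self.s, r, done

env = ToyEnv()
s, done, trajectory = env.reset(), False, []
while not done:
    a = random.choice([0, 1])                 # the Agent picks action A_t
    s_next, r, done = env.step(a)
    trajectory.append((r, s, a))              # one (r, s_t, a_t) record
    s = s_next
```

After the loop, `trajectory` holds one episode's worth of `(reward, state, action)` records, matching the trajectory notation below.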
We call one complete run of the process (such as one round of a game) a trajectory, denoted $\tau$:
$$
\tau=\{ (r_{0}, s_{1}, a_{1}), (r_{1}, s_{2}, a_{2}), (r_{2}, s_{3}, a_{3}), ..., (r_{T-1}, s_{T}, a_{T}) \}
$$
We can compute the probability of a particular trajectory occurring:
$$
p_{\theta}(\tau)=p(s_{1})p_{\theta}(a_{1} | s_{1})p(s_{2} | s_{1}, a_{1})p_{\theta}(a_{2} | s_{2})... \\
=p(s_{1})\prod_{t=1}^{T} p_{\theta}(a_{t} | s_{t})p(s_{t+1} | s_{t}, a_{t})
$$
We want the Agent to find the policy that maximizes the cumulative reward. First, we define the cumulative reward:
$$
R(\tau)=\sum_{t=1}^{T} r_{t}
$$
In practice, taking games as an example, every round is stochastic, so the cumulative reward differs from round to round. We therefore set our goal to maximizing the average cumulative reward, defined as:
$$
\bar{R}(\tau)=\sum_{\tau}p_{\theta}(\tau)R(\tau)
$$
Our task now is to maximize this quantity. Mathematically, we do so by differentiating it and then applying gradient ascent.
$$
\nabla_{\theta} \bar{R}(\tau) = \sum_{\tau} R(\tau) \nabla_{\theta} p_{\theta}(\tau) = \sum_{\tau} R(\tau) p_{\theta}(\tau) \frac{\nabla_{\theta} p_{\theta}(\tau)}{p_{\theta}(\tau)} \\
=\sum_{\tau} R(\tau) p_{\theta}(\tau) \nabla_{\theta} \log p_{\theta}(\tau) \\
=E_{\tau \sim p_{\theta}(\tau)} R(\tau) \nabla_{\theta} \log p_{\theta}(\tau) \\
=\frac{1}{N}\sum_{n=1}^{N} R(\tau ^{n}) \nabla_{\theta} \log p_{\theta}(\tau ^{n}) \\
=\frac{1}{N}\sum_{n=1}^{N} R(\tau ^{n}) \sum_{t=1}^{T_{n}} \nabla_{\theta} \log p_{\theta}(a_{t}^{n} | s_{t}^{n}) \\
=\frac{1}{N}\sum_{n=1}^{N} \sum_{t=1}^{T_{n}} R(\tau ^{n}) \nabla_{\theta} \log p_{\theta}(a_{t}^{n} | s_{t}^{n})
$$
The derivation above uses the derivative formula:
$$
\frac{d(\log x)}{dx}=\frac{1}{x}
$$
together with the chain rule of differentiation:
$$
\nabla _{\theta} \log p_{\theta}(\tau) = \frac{ \partial{ \log p_{\theta}(\tau)} }{\partial{\theta}} = \frac{ \partial{ \log p_{\theta}(\tau)} }{ \partial{ p_{\theta}(\tau) } } \frac{{ \partial{ p_{\theta}(\tau) } }}{\partial{ \theta }}=\frac{1}{ p_{\theta}(\tau) } \nabla _{\theta} p_{\theta}(\tau)= \frac{\nabla _{\theta}p_{\theta}(\tau)}{ p_{\theta}(\tau) }
$$
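The log-derivative identity used above can be checked numerically; here is a quick sketch with an arbitrary concrete choice of $p_{\theta}$:

```python
import math

# take p_theta = theta**2 as a concrete function of the parameter
p = lambda th: th ** 2
log_p = lambda th: math.log(p(th))

def num_grad(f, x, h=1e-6):
    """Central-difference numerical derivative."""
    return (f(x + h) - f(x - h)) / (2 * h)

theta = 1.7
lhs = num_grad(log_p, theta)        # d/dtheta log p(theta)
rhs = num_grad(p, theta) / p(theta) # (d/dtheta p(theta)) / p(theta)
# the two sides agree up to numerical-differentiation error
```

Both sides evaluate to $2/\theta$ here, confirming $\nabla_{\theta}\log p_{\theta} = \nabla_{\theta}p_{\theta}/p_{\theta}$ for this example.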
That is the theoretical analysis; how do we carry it out in practice? We first let the Agent interact with the environment, for example by playing $N$ rounds of a game, obtaining the following $N$ trajectories:
$$
\tau ^{1}=\{ (r_{0}^{1}, s_{1}^{1}, a_{1}^{1}), (r_{1}^{1}, s_{2}^{1}, a_{2}^{1}), (r_{2}^{1}, s_{3}^{1}, a_{3}^{1}), ..., (r_{T-1}^{1}, s_{T}^{1}, a_{T}^{1}) \}, R(\tau ^{1}) \\
\tau ^{2}=\{ (r_{0}^{2}, s_{1}^{2}, a_{1}^{2}), (r_{1}^{2}, s_{2}^{2}, a_{2}^{2}), (r_{2}^{2}, s_{3}^{2}, a_{3}^{2}), ..., (r_{T-1}^{2}, s_{T}^{2}, a_{T}^{2}) \}, R(\tau ^{2}) \\
...... \\
\tau ^{n}=\{ (r_{0}^{n}, s_{1}^{n}, a_{1}^{n}), (r_{1}^{n}, s_{2}^{n}, a_{2}^{n}), (r_{2}^{n}, s_{3}^{n}, a_{3}^{n}), ..., (r_{T-1}^{n}, s_{T}^{n}, a_{T}^{n}) \}, R(\tau ^{n}) \\
...... \\
\tau ^{N}=\{ (r_{0}^{N}, s_{1}^{N}, a_{1}^{N}), (r_{1}^{N}, s_{2}^{N}, a_{2}^{N}), (r_{2}^{N}, s_{3}^{N}, a_{3}^{N}), ..., (r_{T-1}^{N}, s_{T}^{N}, a_{T}^{N}) \}, R(\tau ^{N})
$$
Next, we turn these $N$ trajectories into the form of a supervised-learning dataset:
$$
(s_{1}^{1}, a_{1}^{1}) \oplus R(\tau ^{1}) \quad (s_{2}^{1}, a_{2}^{1}) \oplus R(\tau ^{1}) \quad (s_{3}^{1}, a_{3}^{1}) \oplus R(\tau ^{1}) \quad ... \quad (s_{T}^{1}, a_{T}^{1}) \oplus R(\tau ^{1}) \\
(s_{1}^{2}, a_{1}^{2}) \oplus R(\tau ^{2}) \quad (s_{2}^{2}, a_{2}^{2}) \oplus R(\tau ^{2}) \quad (s_{3}^{2}, a_{3}^{2}) \oplus R(\tau ^{2}) \quad ... \quad (s_{T}^{2}, a_{T}^{2}) \oplus R(\tau ^{2}) \\
...... \\
(s_{1}^{N}, a_{1}^{N}) \oplus R(\tau ^{N}) \quad (s_{2}^{N}, a_{2}^{N}) \oplus R(\tau ^{N}) \quad (s_{3}^{N}, a_{3}^{N}) \oplus R(\tau ^{N}) \quad ... \quad (s_{T}^{N}, a_{T}^{N}) \oplus R(\tau ^{N}) \\
$$
Each entry above means that at each time step the agent observes the environment state $s_{t}$ (in autonomous driving, for example, the on-screen image); this serves as the sample in supervised learning. The action $a_{t}$ the agent takes at that moment serves as the ground-truth label. The only difference from supervised learning is that, in addition to samples and labels, we also have the cumulative reward of the whole episode, which measures how good it was to take action $a_{t}$ upon seeing state $s_{t}$.
Our goal is to maximize the expected per-episode reward $\bar{R}(\tau)$; following standard calculus, we can pursue this by taking the derivative of $\bar{R}(\tau)$ with respect to the policy parameters.
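The conversion into $(s_t, a_t, R(\tau))$ triples can be sketched in plain Python. The trajectory container and all names below are illustrative assumptions, not from the original text:

```python
def trajectories_to_dataset(trajectories):
    """Flatten N trajectories into (state, action, return) training triples.

    Each trajectory is a dict with:
      'steps'  : list of (s_t, a_t) pairs
      'return' : the whole-episode reward R(tau), attached to every pair
    """
    dataset = []
    for tau in trajectories:
        R = tau['return']
        for s_t, a_t in tau['steps']:
            # (sample, label, weight): s_t is the input, a_t the "ground truth"
            # action, and R(tau) weights how good that action choice was.
            dataset.append((s_t, a_t, R))
    return dataset

# Two toy trajectories of different lengths
trajs = [
    {'steps': [('s1', 0), ('s2', 1)], 'return': 5.0},
    {'steps': [('s1', 1)], 'return': -2.0},
]
data = trajectories_to_dataset(trajs)
```

Note that the same episode return $R(\tau)$ is attached to every step of that episode, exactly as in the table above.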
## Gradient Ascent
We define the objective function as:
$$
\bar{R}_{\theta}(\tau)=\sum_{\tau} R(\tau) p_{\theta}(\tau)
$$
Differentiating the objective gives:
$$
\nabla_{\theta} \bar{R}(\tau)=\frac{1}{N}\sum_{n=1}^{N}\sum_{t=1}^{T_{n}} R(\tau ^{n}) \nabla _{\theta} \log p_{\theta}(a_{t}^{n} | s_{t}^{n})
$$
We then find the maximum using gradient ascent:
$$
\theta = \theta + \alpha \nabla _{\theta} \bar{R}(\tau)
$$
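The update $\theta \leftarrow \theta + \alpha \nabla_{\theta}\bar{R}$ is ordinary gradient ascent. As a generic illustration on a toy scalar objective (not the RL objective itself), maximizing $f(\theta) = -(\theta-3)^2$:

```python
# Gradient ascent on f(theta) = -(theta - 3)^2, whose gradient is -2*(theta - 3).
# The maximum is at theta = 3.
theta = 0.0
alpha = 0.1  # learning rate
for _ in range(100):
    grad = -2.0 * (theta - 3.0)
    theta = theta + alpha * grad  # the ascent step from the formula above
```

Because we ascend rather than descend, the gradient step is added; flipping the sign would recover gradient descent on $-f$.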
## Code Implementation
The discussion so far has been a mathematical derivation explaining the principles behind the currently popular Policy Gradient algorithm. However, the mathematics alone is not an implementation; in this section we discuss how to implement it in code.
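As a minimal, hedged sketch of the idea (not the full implementation discussed here): a softmax policy over two actions on a one-step toy task, updated with the sampled gradient $R(\tau)\,\nabla_{\theta}\log p_{\theta}(a|s)$. The environment and all names are invented for illustration:

```python
import numpy as np

rng = np.random.default_rng(0)

def softmax(z):
    z = z - z.max()  # subtract max for numerical stability
    e = np.exp(z)
    return e / e.sum()

# Toy one-step "environment": action 1 pays reward 1, action 0 pays nothing.
# A real task would sum rewards over a whole trajectory to get R(tau).
def play(action):
    return 1.0 if action == 1 else 0.0

theta = np.zeros(2)  # policy parameters (logits over the two actions)
alpha = 0.1          # learning rate
for episode in range(500):
    pi = softmax(theta)
    a = rng.choice(2, p=pi)           # sample an action from the current policy
    R = play(a)                       # episode return R(tau)
    grad_log_pi = -pi                 # grad of log softmax(a) w.r.t. theta ...
    grad_log_pi[a] += 1.0             # ... is onehot(a) - pi
    theta += alpha * R * grad_log_pi  # gradient ascent step

pi_final = softmax(theta)  # should now strongly prefer the rewarded action
```

For a softmax policy, $\nabla_{\theta}\log \pi(a)$ is the one-hot vector of $a$ minus $\pi$, which is what the two `grad_log_pi` lines compute.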
## Practical Tricks
In practical applications, the method above can be improved with the following tricks.
| github_jupyter |
```
import numpy as np
from sklearn.metrics import accuracy_score
from sklearn.linear_model import LogisticRegression
num_p = 1000
num_n = 1000
n = num_n + num_p
d = 10
Xp = np.random.multivariate_normal(np.zeros(10)+1,np.eye(10),size=(num_p))
Xn = np.random.multivariate_normal(np.zeros(10)-1,np.eye(10),size=(num_n))
#w = np.random.uniform(0,1,10)
X = np.vstack([Xp,Xn])
print(X.shape)
w = np.ones(d)*4
Y = np.sign(np.dot(X,w))
perm = np.random.choice(range(n),n,replace=False)
X = X[perm]
Y = Y[perm]
X1 = X[:,0:5]
X2 = X[:,5:10]
def remove_from_U(U, s):
    U = list(set(U).difference(set(s)))
    U = np.array(U)
    return U

def add_to_L(L, s):
    L = list(L)
    L.extend(list(s))
    L = np.array(L)
    return L

def add_to_Y(Y, Y_s):
    Y = list(Y)
    Y.extend(list(Y_s))
    return np.array(Y)

def run_self_training():
    model1 = LogisticRegression(fit_intercept=True, max_iter=1000)
    model2 = LogisticRegression(fit_intercept=True, max_iter=1000)
    perm = np.random.choice(range(2000), 2000, replace=False)
    L = perm[:100]
    U = perm[100:]
    Y_cur = Y[L]
    lst_acc = []
    a = 16
    for i in range(100):
        model1.fit(X1[L, :], Y[L])
        model2.fit(X2[L, :], Y[L])
        if len(U) > a:
            s1 = np.random.choice(U, a, replace=False)
        else:
            s1 = U
        U = remove_from_U(U, s1)
        if len(U) > a:
            s2 = np.random.choice(U, a, replace=False)
        else:
            s2 = U
        U = remove_from_U(U, s2)
        if len(s1) > 0 and len(s2) > 0:
            L = add_to_L(L, s1)
            L = add_to_L(L, s2)
            Y_s1 = model1.predict(X1[s1, :])
            Y_s2 = model2.predict(X2[s2, :])  # second view is scored by model2, not model1
            Y_cur = add_to_Y(Y_cur, Y_s1)
            Y_cur = add_to_Y(Y_cur, Y_s2)
        # accuracy of the pseudo-labels accumulated so far (skip the 100 seed labels)
        acc = accuracy_score(Y_cur[100:], Y[L[100:]])
        lst_acc.append(acc)
    return lst_acc

out = []
T = 10
for t in range(T):
    out.append(run_self_training())

import matplotlib.pyplot as plt
for t in range(T):
    plt.plot(range(len(out[t])), out[t])
plt.grid()
plt.xlabel('iteration')
plt.ylabel('accuracy of pseudo labels')

import matplotlib.pyplot as plt
y = np.mean(out, axis=0)
y_err = np.std(out, axis=0)
plt.plot(range(len(y)), y)
plt.fill_between(range(len(y)), y - y_err, y + y_err, alpha=0.2)
plt.grid()
plt.xlabel('iteration')
plt.ylabel('accuracy of pseudo labels')

import matplotlib.pyplot as plt
plt.plot(range(len(out[-1])), out[-1])  # last run's accuracies (lst_acc is local to run_self_training)
plt.grid()
plt.xlabel('iteration')
plt.ylabel('accuracy of pseudo labels')
```
## Parametric Correlations
Answers to exercises found here:
https://canvas.upenn.edu/courses/1358934/discussion_topics/5600885
NOTE: Be sure to run the cells in order!!!
Dependencies: statsmodels (https://www.statsmodels.org/stable/index.html)
Copyright 2019 by Yale E. Cohen, University of Pennsylvania
Updated 01/01/20 by jig
Converted to Python 03/28/21 by jig
```
import platform
# Output on system used for development/testing:
# 3.9.2
print(platform.python_version())
# Uncomment and run to clear workspace
# %reset
import matplotlib.pyplot as plt
# Always run this cell to load the data
wing_length = [10.4, 10.8, 11.1, 10.2, 10.3, 10.2, 10.7, 10.5, 10.8, 11.2, 10.6, 11.4]
tail_length = [7.4, 7.6, 7.9, 7.2 ,7.4, 7.1, 7.4, 7.2, 7.8, 7.7, 7.8, 8.3]
# Plot the data
plt.plot(tail_length, wing_length, 'ko')
plt.xlabel('Tail Length (cm)')
plt.ylabel('Wing Length (cm)')
import numpy as np
# Compute r
# 1. Compute by hand
n = np.size(wing_length)
sample_mean_x = np.sum(wing_length)/n
sample_mean_y = np.sum(tail_length)/n
SSEX = np.sum((wing_length - sample_mean_x) ** 2)
SSEY = np.sum((tail_length - sample_mean_y) ** 2)
SCOVXY = np.sum((wing_length - sample_mean_x)*(tail_length - sample_mean_y))
rXY = SCOVXY/(np.sqrt(SSEX)*np.sqrt(SSEY))
rYX = SCOVXY/(np.sqrt(SSEY)*np.sqrt(SSEX))
# Use corrcoef
r_builtin = np.corrcoef(wing_length, tail_length)
# Show that they are all the same
print(f'rXY={rXY:.4f} (computed), {r_builtin[0,1]:.4f} (built-in)')
print(f'rYX={rYX:.4f} (computed), {r_builtin[1,0]:.4f} (built-in)')
import scipy.stats as st
# Compute standard error/confidence intervals, using the formulas in the Canvas discussion
# Compute standard error directly
standard_error_r = np.sqrt((1-rXY**2)/(n-2))
# Compute confidence intervals *of the full distribution* by:
# 1. using Fisher's z-tranformation to make a variable that is normally distributed
z = 0.5*np.log((1+rXY)/(1-rXY))
# 2. Compute the standard deviation of z
z_std = np.sqrt(1/(n-3))
# 3. Get the 95% CIs of the transformed variable from the std
scale = st.norm.ppf(0.025)*z_std
z_95CIs = np.array([z+scale, z-scale])
# 4. Convert back to r (inverse z-transformation)
CI95 = (np.exp(2*z_95CIs)-1)/(np.exp(2*z_95CIs)+1)
# Show it
print(f'r={rXY:.2f}, sem={standard_error_r:.4f}, 95 pct CI = [{CI95[0]:.4f}, {CI95[1]:.4f}]')
# Compute p-value for H0:r=0
# Two-tailed test
# Remember that the t-statistic for r follows a Student's t distribution with n-2 degrees of freedom
# First compute the t-statistic, which is the sample r divided by the sample standard error
t_val = rXY/standard_error_r
# Now compute the p-value. We want the probability of getting a value of the
# test statistic *at least as large* as the one we measured (t_val), given the
# null distribution. Here the null distribution is the t distribution with n-2
# degrees of freedom (a "regular" t-test uses n-1; here it is n-2 because we
# estimated two means, one for X and one for Y). For a two-tailed test the
# p-value is twice the area under the null t distribution above t_val; since
# the cumulative distribution gives the area *below* a value, we use 1-cdf.
prob = 2*(1-st.t.cdf(t_val,n-2))
# Print it nicely
print(f'p={prob:.4f} for H0: r=0')
# Is this r value different than r=0.75?
# Here we use a z-test on the z-transformed values, as described in the Canvas discussion
# z-transform the new referent
z_Yale = 0.5*np.log((1+0.75)/(1-0.75))
# Compute the text statistic as the difference in z-transformed values, divided by the sample standard deviation
plambda = (z-z_Yale)/z_std
# Get a p-value from a two-tailed test
prob2 = 2*(1-st.norm.cdf(plambda))
print(f'p={prob2:.4f} for H0: r=0.75')
import statsmodels.stats as sm
# Estimate power: That is, p(reject H0|H1 is true)
# Compute the test statistic as above
r_ref = 0.5
z_ref = 0.5*np.log((1+r_ref)/(1-r_ref))
plambda = (z-z_ref)/np.sqrt(1/(n-3))
# Set a criterion based on alpha
alpha = 0.05
z_criterion = st.norm.ppf(1-alpha/2)
# Power is proportion of expected sample distribution to the right of the criterion
power = 1-st.norm.cdf(z_criterion-plambda)
# Calculate the n needed to ensure that H0 (r=0) is rejected 99% of the time when |r|>= 0.5 at a 0.05 level of significance
#
# Derivation:
# power = 1-normcdf(z_criterion-lambda)
# 1 - power = normcdf(z_criterion-lambda)
# zCriterion-lambda = norminv(1 - power)
# plambda = z_criterion - norminv(1 - power)
# (z-z_ref)/sqrt(1/(n-3)) = z_criterion - norminv(1 - power)
# sqrt(1/(n-3)) = (z-z_ref) / (z_criterion - norminv(1 - power))
# n = 1/((z-z_ref) / (z_criterion - norminv(1 - power)))^2+3
desired_power = 0.99
predicted_n = np.ceil(1/((z-z_ref) / (z_criterion - st.norm.ppf(1-desired_power)))**2+3)
print(f'power = {power:.4f}, predicted n = {int(predicted_n)}')
```
# Linear Models
Part 1 of the Lecture: Why are linear models "machine learning" but insufficient for most problems?
```
%matplotlib inline
from matplotlib import pyplot as plt
from sklearn.linear_model import LinearRegression, LogisticRegression
from sklearn.preprocessing import PolynomialFeatures
from sklearn.pipeline import Pipeline
import numpy as np
import os
np.random.seed(123)
```
Configuration variables
```
n_points = 32
```
## Linear Regression
Illustrative examples using Linear Regression
### Case 1: Actually Linear Data
Fitting linear regression to data that shows a linear trend
```
X = np.random.uniform(0, 10, (n_points, 1)) # 2D array with one column
y = 2 * X + 1 + np.random.normal(size=(n_points, 1))
lr = LinearRegression().fit(X, y)
fig, ax = plt.subplots()
x_plot = np.linspace(0, 10, 128)[:, None] # Makes a 2D array
ax.scatter(X, y, color='teal', alpha=0.5)
ax.plot(x_plot, lr.predict(x_plot), 'k', lw=2)
ax.set_xlabel('$X$', fontsize=16)
ax.set_ylabel('$y$', fontsize=16)
fig.set_size_inches(3.7, 2.5)
fig.tight_layout()
fig.savefig(os.path.join('figures', 'linear-regression.svg'), transparent=True)
```
### Case 2: Data is Nonlinear
Showing the unsurprising limitations of linear regression for non-linear data
```
y = -0.6 * X * (X - 11) + np.random.normal(size=(n_points, 1))
```
Showing linear regression
```
lr = LinearRegression().fit(X, y)
fig, ax = plt.subplots()
x_plot = np.linspace(0, 10, 128)[:, None] # Makes a 2D array
ax.scatter(X, y, color='teal', alpha=0.5)
ax.plot(x_plot, lr.predict(x_plot), 'k', lw=2)
ax.set_xlabel('$X$', fontsize=16)
ax.set_ylabel('$y$', fontsize=16)
ax.set_ylim(0, 20)
fig.set_size_inches(3, 2)
fig.tight_layout()
fig.savefig(os.path.join('figures', 'linear-regression-fails-quadratic.svg'), transparent=True)
```
Showing polynomial regression of degree 10
```
pr = Pipeline([
    ('poly', PolynomialFeatures(degree=10)),
    ('lr', LinearRegression())
]).fit(X, y)
fig, ax = plt.subplots()
x_plot = np.linspace(0, 10, 128)[:, None] # Makes a 2D array
ax.scatter(X, y, color='teal', alpha=0.5)
ax.plot(x_plot, pr.predict(x_plot), 'k', lw=2)
ax.set_xlabel('$X$', fontsize=16)
ax.set_ylabel('$y$', fontsize=16)
ax.set_ylim(0, 20)
fig.set_size_inches(3, 2)
fig.tight_layout()
fig.savefig(os.path.join('figures', 'polynomial-regression-overfitting.svg'), transparent=True)
```
## Logistic Regression
Some examples illustrating Logistic Regression to classify samples
### Case 1: Linearly-Separable Data
Showing logistic regression on a simple, linearly-separable set
```
y = (X > 5).astype('int').ravel()  # Puts the decision boundary at x == 5; ravel gives the 1D y sklearn expects
lr = LogisticRegression().fit(X, y)
fig, ax = plt.subplots()
x_plot = np.linspace(0, 10, 128)[:, None] # Makes a 2D array
ax.scatter(X, y, color=['teal' if yi else 'orangered' for yi in y], alpha=0.5)
ax.plot(x_plot, lr.predict_proba(x_plot)[:, 1], 'k', lw=2)
ax.set_xlabel('$X$', fontsize=16)
ax.set_ylabel('$P(y)=$"Yes"', fontsize=16)
fig.set_size_inches(3.7, 2.5)
fig.tight_layout()
fig.savefig(os.path.join('figures', 'logistic-regression.svg'), transparent=True)
```
### Case 2: Non-linearly separable data
Show that simple Logistic Regression doesn't work here
```
y = np.logical_or(X > 7, X < 3).astype('int').ravel()  # Decision boundaries at x == 3 and x == 7
lr = LogisticRegression().fit(X, y)
fig, ax = plt.subplots()
x_plot = np.linspace(0, 10, 128)[:, None] # Makes a 2D array
ax.scatter(X, y, color=['teal' if yi else 'orangered' for yi in y], alpha=0.5)
ax.plot(x_plot, lr.predict_proba(x_plot)[:, 1], 'k', lw=2)
ax.set_xlabel('$X$', fontsize=16)
ax.set_ylabel('$P(y)=$"Yes"', fontsize=16)
fig.set_size_inches(3.7, 2.5)
fig.tight_layout()
fig.savefig(os.path.join('figures', 'logistic-regression-failure.svg'), transparent=True)
```
```
import requests
def search_repo_paging(q):
    results = []
    params = {'q': q, 'sort': 'forks', 'order': 'desc', 'per_page': 100}
    url = 'https://api.github.com/search/repositories'
    while True:
        res = requests.get(url, params=params)
        result = res.json()
        results.extend(result['items'])
        params = {}
        try:
            url = res.links['next']['url']
        except KeyError:  # no 'next' link means we reached the last page
            break
    return results

q = "created:>2017-01-01"
results = search_repo_paging(q)
from pandas.io.json import json_normalize
import json
import pandas as pd
import bson.json_util as json_util
sanitized = json.loads(json_util.dumps(results))
normalized = json_normalize(sanitized)
df = pd.DataFrame(normalized)
df.head(5)
```
# Data Processing
```
from langdetect import detect
df = df.dropna(subset=['description'])
df['lang'] = df.apply(lambda x: detect(x['description']),axis=1)
df = df[df['lang'] == 'en']
import nltk, string
from nltk import word_tokenize
from nltk.corpus import stopwords

def clean(text='', stopwords=[]):
    # tokenize
    tokens = word_tokenize(text.strip())
    # lowercase
    clean = [i.lower() for i in tokens]
    # remove stopwords
    clean = [i for i in clean if i not in stopwords]
    # remove punctuation
    punctuations = list(string.punctuation)
    clean = [i.strip(''.join(punctuations)) for i in clean if i not in punctuations]
    return " ".join(clean)

df['clean'] = df['description'].apply(str)  # make sure description is a string
df['clean'] = df['clean'].apply(lambda x: clean(text=x, stopwords=stopwords.words('english')))
df[['watchers_count','size','forks_count','open_issues']].describe()
```
# Data Analysis
## Top technologies
```
import nltk
from nltk.collocations import *
list_documents = df['clean'].apply(lambda x: x.split()).tolist()
bigram_measures = nltk.collocations.BigramAssocMeasures()
bigram_finder = BigramCollocationFinder.from_documents(list_documents)
bigram_finder.apply_freq_filter(3)
bigrams = bigram_finder.nbest(bigram_measures.raw_freq,20)
scores = bigram_finder.score_ngrams(bigram_measures.raw_freq)
ngram = list(bigram_finder.ngram_fd.items())
ngram.sort(key=lambda item: item[-1], reverse=True)
frequency = [(" ".join(k), v) for k,v in ngram]
df=pd.DataFrame(frequency)
import matplotlib.pyplot as plt
plt.style.use('ggplot')
df.set_index([0], inplace = True)
df.sort_values(by = [1], ascending = False).head(20).plot(kind = 'barh')
plt.title('Trending Technologies')
plt.ylabel('Technology')
plt.xlabel('Popularity')
plt.legend().set_visible(False)
plt.axvline(x=14, color='b', label='Average', linestyle='--', linewidth=3)
for custom in [0, 10, 14]:
    plt.text(14.2, custom, "Neural Networks", fontsize=12, va='center', bbox=dict(boxstyle='square', fc='white', ec='none'))
plt.show()
```
## Programming languages
```
queries = ["created:>2017-01-01", "created:2015-01-01..2015-12-31","created:2016-01-01..2016-12-31"]
df = pd.DataFrame()
for query in queries:
    data = search_repo_paging(query)
    data = pd.io.json.json_normalize(data)
    df = pd.concat([df, data])
df['created_at'] = df['created_at'].apply(pd.to_datetime)
df = df.set_index(['created_at'])
fig, ax = plt.subplots()
dx = pd.DataFrame(df.groupby(['language', df.index.year])['language'].count())
dx.unstack().plot(kind='bar', title = 'Programming Languages per Year', ax = ax)
ax.legend(['2015', '2016', '2017'], title = 'Year')
plt.show()
```
## Programming languages in top technologies
```
technologies_list = ['software engineering', 'deep learning', 'open source', 'exercise practice']
for tech in technologies_list:
    print(tech)
    print(set(df[df['clean'].str.contains(tech)]['language']))
```
## Top repositories by technology
```
technologies_list = ['software engineering', 'deep learning', 'open source', 'exercise practice']
result = df.sort_values(by='watchers_count', ascending=False)
for tech in technologies_list:
subset = result[result['clean'].str.contains(tech)].head(5)
print(tech)
for i,line in subset.iterrows():
print(line['name'])
print(line['description'])
print('\n')
```
## Forks, open_issues, size, watchers count
```
# NOTE: assumes a 'technology' column has been added to df beforehand
# (e.g. the matched entry from technologies_list)
df.groupby('technology')[['forks', 'watchers', 'size', 'open_issues']].mean()
df.groupby('technology')[['forks', 'watchers', 'size', 'open_issues']].min()
df.groupby('technology')[['forks', 'watchers', 'size', 'open_issues']].max()
```
## Forks vs open issues
```
x = df['forks']
y = df['open_issues']
fig, ax = plt.subplots()
colors = dict(zip(set(df['technology']), ['red', 'blue']))
ax.scatter(x=x, y=y, c=df['technology'].apply(lambda x: colors[x]), s = 200, alpha = 0.5)
ax.set(title='Deep Learning and Open Source Technologies', xlabel='Number of forks', ylabel='Number of open issues')
plt.show()
```
## Forks vs size
```
x = df['forks']
y = df['size']
fig, ax = plt.subplots()
ax.scatter(x=x, y=y, c=df['technology'].apply(lambda x: colors[x]), s=200, alpha=0.5)
ax.set(title='Deep Learning and Open Source Technologies', xlabel='Number of forks', ylabel='Repos size')
plt.show()
```
## Forks vs watchers
```
x = df['watchers']
y = df['forks']
fig, ax = plt.subplots()
ax.scatter(x=x, y=y, c=df['technology'].apply(lambda x: colors[x]), s=200, alpha=0.5)
ax.set(title='Deep Learning and Open Source Technologies', xlabel='Number of watchers', ylabel='Number of forks')
plt.show()
```
## Open issues versus size
```
x = df['open_issues']
y = df['size']
fig, ax = plt.subplots()
ax.scatter(x=x, y=y, c=df['technology'].apply(lambda x: colors[x]), s=200, alpha=0.5)
ax.set(title='Deep Learning and Open Source Technologies', xlabel='Number of open issues', ylabel='Repos size')
plt.show()
```
## Open issues versus Watchers
```
x = df['open_issues']
y = df['watchers']
fig, ax = plt.subplots()
ax.scatter(x=x, y=y, c=df['technology'].apply(lambda x: colors[x]), s=200, alpha=0.5)
ax.set(title='Deep Learning and Open Source Technologies', xlabel='Number of open issues', ylabel='Number of watchers')
plt.show()
```
## Size vs watchers
```
x = df['size']
y = df['watchers']
fig, ax = plt.subplots()
ax.scatter(x=x, y=y, c=df['technology'].apply(lambda x: colors[x]), s=200, alpha=0.5)
ax.set(title='Deep Learning and Open Source Technologies', xlabel='Repos size', ylabel='Number of watchers')
plt.show()
```
```
import pandas as pd
df = pd.read_csv("scpppi/processed_data_l.csv")
df = df.drop(columns=["Unnamed: 0","datetime","granule_id"])
df["location"]
df["datetime_dt"] = pd.to_datetime(df["datetime_dt"] )
def clean_df(df):
    df = df.drop(columns=["Unnamed: 0", "datetime", "granule_id"])
    df["datetime_dt"] = pd.to_datetime(df["datetime_dt"])
    df["month"] = [i.month for i in df["datetime_dt"]]
    df["month"] = df["month"].astype(str).astype("category")
    one_hot = pd.get_dummies(df["month"])
    df = df.join(one_hot)
    df = df.drop('month', axis=1)
    df = df.drop('date', axis=1)
    one_hot = pd.get_dummies(df["location"])
    df = df.join(one_hot)
    df = df.drop('location', axis=1)
    one_hot = pd.get_dummies(df["grid_id"])
    df = df.join(one_hot)
    df = df.drop('grid_id', axis=1)
    return df
df["month"] = [i.month for i in df["datetime_dt"]]
df["month"] = df["month"].astype(str).astype("category")
one_hot = pd.get_dummies(df["month"])
one_hot
df = df.join(one_hot)
df = df.drop('month',axis = 1)
df = df.drop('date',axis = 1)
one_hot = pd.get_dummies(df["location"])
df = df.join(one_hot)
df = df.drop('location',axis = 1)
one_hot = pd.get_dummies(df["grid_id"])
df = df.join(one_hot)
df = df.drop('grid_id',axis = 1)
df.head()
df
X = df[df.columns[2:]].values
y = df["value"].values
X.shape
import torch
import torch.optim as optim
import torch.nn as nn
from torch.utils.data import Dataset, TensorDataset, DataLoader
import pytorch_lightning as pl
from pytorch_lightning import Trainer
# from pytorch_lightning.profiler import Profiler, AdvancedProfiler
l_rate = 0.3
mse_loss = nn.MSELoss(reduction = 'mean')
class RegressionDataset(Dataset):
    def __init__(self, X_data, y_data):
        self.X_data = X_data
        self.y_data = y_data

    def __getitem__(self, index):
        return self.X_data[index], self.y_data[index]

    def __len__(self):
        return len(self.X_data)
from sklearn.model_selection import train_test_split
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=1)
X_train, X_val, y_train, y_val = train_test_split(X_train, y_train, test_size=0.25, random_state=1) # 0.25 x 0.8 = 0.2
from sklearn.preprocessing import MinMaxScaler
# scaler = MinMaxScaler()
# X_train = scaler.fit_transform(X_train)
# X_val = scaler.transform(X_val)
# X_test = scaler.transform(X_test)
# X_train, y_train = np.array(X_train), np.array(y_train)
# X_val, y_val = np.array(X_val), np.array(y_val)
# X_test, y_test = np.array(X_test), np.array(y_test)
train_dataset = RegressionDataset(torch.from_numpy(X_train).float(), torch.from_numpy(y_train).float())
val_dataset = RegressionDataset(torch.from_numpy(X_val).float(), torch.from_numpy(y_val).float())
test_dataset = RegressionDataset(torch.from_numpy(X_test).float(), torch.from_numpy(y_test).float())
EPOCHS = 150
BATCH_SIZE = 64
LEARNING_RATE = 0.001
NUM_FEATURES = X.shape[1]
train_loader = DataLoader(dataset=train_dataset, batch_size=BATCH_SIZE, shuffle=True)
val_loader = DataLoader(dataset=val_dataset, batch_size=1)
test_loader = DataLoader(dataset=test_dataset, batch_size=1)
NUM_FEATURES
device = torch.device("cuda:0" if torch.cuda.is_available() else "cpu")
device
model = Regression(NUM_FEATURES)  # Regression is defined in a later cell; run that cell first
model.to(device)
criterion = nn.L1Loss()
optimizer = optim.Adam(model.parameters(), lr=LEARNING_RATE)
loss_stats = {
'train': [],
"val": []
}
from tqdm import tqdm
print("Begin training.")
for e in tqdm(range(1, EPOCHS + 1)):
    # TRAINING
    train_epoch_loss = 0
    model.train()
    for X_train_batch, y_train_batch in train_loader:
        X_train_batch, y_train_batch = X_train_batch.to(device), y_train_batch.to(device)
        optimizer.zero_grad()
        y_train_pred = model(X_train_batch)
        train_loss = criterion(y_train_pred, y_train_batch.unsqueeze(1))
        train_loss.backward()
        optimizer.step()
        train_epoch_loss += train_loss.item()
    # VALIDATION
    with torch.no_grad():
        val_epoch_loss = 0
        model.eval()
        for X_val_batch, y_val_batch in val_loader:
            X_val_batch, y_val_batch = X_val_batch.to(device), y_val_batch.to(device)
            y_val_pred = model(X_val_batch)
            val_loss = criterion(y_val_pred, y_val_batch.unsqueeze(1))
            val_epoch_loss += val_loss.item()
    loss_stats['train'].append(train_epoch_loss / len(train_loader))
    loss_stats['val'].append(val_epoch_loss / len(val_loader))
    print(f'Epoch {e:03}: | Train Loss: {train_epoch_loss/len(train_loader):.5f} | Val Loss: {val_epoch_loss/len(val_loader):.5f}')
y_pred_list = []
with torch.no_grad():
    model.eval()
    for X_batch, _ in test_loader:
        X_batch = X_batch.to(device)
        y_test_pred = model(X_batch)
        y_pred_list.append(y_test_pred.cpu().numpy())
y_pred_list = [a.squeeze().tolist() for a in y_pred_list]
from sklearn.metrics import mean_squared_error
from sklearn.metrics import r2_score
mse = mean_squared_error(y_test, y_pred_list)
r_square = r2_score(y_test, y_pred_list)
print("Mean Squared Error :",mse)
print("R^2 :",r_square)
# Mean Squared Error : 907.6883032426905
# R^2 : 0.8434949212954438
# Mean Squared Error : 604.0072439588212
# R^2 : 0.8958560984908686
train_dataset = RegressionDataset(torch.from_numpy(X_train).float(), torch.from_numpy(y_train).float())
df_testi['7F1D1'] = np.zeros(len(df_testi))
df_testi['WZNCR'] = np.zeros(len(df_testi))
df_testi = df_testi[df.columns]
df_test.drop_duplicates()
df_testi = clean_df(df_test)
X_t.shape
X_t = df_testi[df_testi.columns[2:]].values
y_t = df_testi["value"].values
df_testi
test_inderence_dataset = RegressionDataset(torch.from_numpy(X_t).float(), torch.from_numpy(y_t).float())
test_inference_loader = DataLoader(dataset=test_inderence_dataset, batch_size=1,shuffle=False)
y_pred_list = []
with torch.no_grad():
    model.eval()
    for X_batch, _ in test_inference_loader:  # iterate the DataLoader, not the Dataset
        X_batch = X_batch.to(device)
        y_test_pred = model(X_batch)
        y_pred_list.append(y_test_pred.cpu().numpy())
y_pred_list = [a.squeeze().tolist() for a in y_pred_list]
len(y_pred_list)
df_test["value"]=y_pred_list
test_labels = pd.read_csv("submission_format.csv")
# test_labels["datetime"] = pd.to_datetime(test_labels["datetime"])
test_labels
test_labels["datetime"][0]
df_final = df_test#[df_test["location"]=="Delhi"]
filtered =df_final[['grid_id','value','datetime']]
# filtered.columns = ['grid_id','value','datetime']
df_f = test_labels.merge(filtered,on=['grid_id','datetime'],how="left")
df_f
dates = list(test_labels["datetime"])
vals = pd.read_csv("dumbSameMonthAverage.csv")
filtered = filtered.groupby(["datetime","value"]).mean().reset_index()
finals = vals.merge(df_f,on=['grid_id','datetime'],how="left")
finals
resi = []
for i in finals[["value", "value_y"]].values:
    if i[1] != i[1]:  # NaN check: NaN != NaN
        resi.append(i[0])
    else:
        resi.append(8/12 * i[0] + 5/12 * i[1])
max(finals["value"]),max(resi)
len(resi)
finals["value3"] = resi
grdi =finals[["datetime","grid_id","value3"]]
grdi.columns = vals.columns
grdi.describe()
grdi.to_csv("hardwork3.csv",index=False)
df_f = df_f.groupby(["datetime","grid_id"]).mean().reset_index()
# final = df_f[df_f["datetime"].isin(dates)]
final.groupby(["datetime","grid_id"]).mean().reset_index()
test_labels.groupby(["datetime","grid_id"]).mean().reset_index()
filtered["datetime_dt"]= pd.to_datetime(filtered["datetime_dt"])
filtered.sort_values("datetime_dt")#.reset_index()["value"].plot()
# y_pred_list
class Regression(pl.LightningModule):
    def __init__(self, NUM_FEATURES):
        super(Regression, self).__init__()
        self.fc1 = nn.Linear(NUM_FEATURES, 1024)
        self.fc2 = nn.Linear(1024, 512)
        self.fc3 = nn.Linear(512, 256)
        self.fc4 = nn.Linear(256, 128)
        self.fc5 = nn.Linear(128, 64)
        self.f = nn.Linear(64, 1)
        self.activation = nn.LeakyReLU()

    def forward(self, x):
        x = self.activation(self.fc1(x))
        x = self.activation(self.fc2(x))
        x = self.activation(self.fc3(x))
        x = self.activation(self.fc4(x))
        x = self.activation(self.fc5(x))
        x = self.f(x)
        return x
torch.tensor(X[0]).shape
n_samples = 5
X.shape
class SentimentNet(nn.Module):
    def __init__(self, output_size=1, embedding_dim=128, hidden_dim=8, n_layers=8, drop_prob=0.5):
        super(SentimentNet, self).__init__()
        self.output_size = output_size
        self.n_layers = n_layers
        self.hidden_dim = hidden_dim
        self.lstm = nn.LSTM(39450, hidden_dim, n_layers, dropout=drop_prob, batch_first=True)
        self.dropout = nn.Dropout(drop_prob)
        self.fc = nn.Linear(hidden_dim, output_size)
        self.sigmoid = nn.Sigmoid()

    def forward(self, x, hidden):
        batch_size = x.size(0)
        x = x.long()
        embeds = x
        lstm_out, hidden = self.lstm(embeds, hidden)
        lstm_out = lstm_out.contiguous().view(-1, self.hidden_dim)
        out = self.dropout(lstm_out)
        out = self.fc(out)
        out = self.sigmoid(out)
        out = out.view(batch_size, -1)
        out = out[:, -1]
        return out, hidden

    def init_hidden(self, batch_size):
        weight = next(self.parameters()).data
        hidden = (weight.new(self.n_layers, batch_size, self.hidden_dim).zero_().to(device),
                  weight.new(self.n_layers, batch_size, self.hidden_dim).zero_().to(device))
        return hidden
model = SentimentNet()
model.init_hidden(1)  # init_hidden requires a batch size; 1 is an arbitrary choice here
X
import torch.optim as optim
model = LSTM()
criterion = nn.MSELoss()
optimiser = optim.Adam(model.parameters(), lr=0.08)
torch.tensor(X[:5]).shape,y.shape
model(torch.tensor(X[:5]),y[:5])
def training_loop(n_epochs, model, optimiser, loss_fn,
                  train_input, train_target, test_input, test_target):
    for i in range(n_epochs):
        def closure():
            optimiser.zero_grad()
            out = model(train_input)
            loss = loss_fn(out, train_target)
            loss.backward()
            return loss
        optimiser.step(closure)
        with torch.no_grad():
            future = 1000
            pred = model(test_input, future_preds=future)  # matches LSTM.forward's keyword
            # use all pred samples, but only go to 999
            loss = loss_fn(pred[:, :-future], test_target)
            y = pred.detach().numpy()
        # print the loss
        out = model(train_input)
        loss_print = loss_fn(out, train_target)
        print("Step: {}, Loss: {}".format(i, loss_print))
train_prop = 0.95
train_samples = round(N * train_prop)
test_samples = N - train_samples
train_input = torch.from_numpy(y[test_samples:, :-1]) # (train_samples, L-1)
train_target = torch.from_numpy(y[test_samples:, 1:]) # (train_samples, L-1)
test_input = torch.from_numpy(y[:test_samples, :-1]) # (train_samples, L-1)
test_target = torch.from_numpy(y[:test_samples, 1:]) # (train_samples, L-1)
N = 100 # number of theoretical series of games
L = 11 # number of games in each series
x = np.empty((N,L), np.float32) # instantiate empty array
x[:] = np.arange(L)
y = (1.6*x + 4).astype(np.float32)
# add some noise
for i in range(len(y)):
    y[i] += np.random.normal(10, 1)
training_loop(n_epochs=10,
              model=model,
              optimiser=optimiser,
              loss_fn=criterion,
              train_input=train_input,
              train_target=train_target,
              test_input=test_input,
              test_target=test_target)
df_values = df[df.columns[8:]].values
df_values.shape
vectors=df_values
target = df_values[0]
df_test = pd.read_csv("processed_data_test.csv")
df_test= df_test.drop(columns=["Unnamed: 0","datetime","granule_id"])
df_test.head()
# import torch
# print(torch.rand(1, device="cuda"))
class LSTM(nn.Module):
    def __init__(self, hidden_layers=64):
        super(LSTM, self).__init__()
        self.hidden_layers = hidden_layers
        # lstm1, lstm2, linear are all layers in the network
        self.lstm1 = nn.LSTMCell(1, self.hidden_layers)
        self.lstm2 = nn.LSTMCell(self.hidden_layers, self.hidden_layers)
        self.linear = nn.Linear(self.hidden_layers, 1)

    def forward(self, y, future_preds=0):
        outputs, n_samples = [], y.size(0)
        h_t = torch.zeros(n_samples, self.hidden_layers, dtype=torch.float32)
        c_t = torch.zeros(n_samples, self.hidden_layers, dtype=torch.float32)
        h_t2 = torch.zeros(n_samples, self.hidden_layers, dtype=torch.float32)
        c_t2 = torch.zeros(n_samples, self.hidden_layers, dtype=torch.float32)
        for input_t in y.split(1, dim=1):
            # input_t has shape (N, 1)
            h_t, c_t = self.lstm1(input_t, (h_t, c_t))  # initial hidden and cell states
            h_t2, c_t2 = self.lstm2(h_t, (h_t2, c_t2))  # new hidden and cell states
            output = self.linear(h_t2)  # output from the last FC layer
            outputs.append(output)
        for i in range(future_preds):
            # this only generates future predictions if we pass in future_preds>0
            # mirrors the code above, using last output/prediction as input
            h_t, c_t = self.lstm1(output, (h_t, c_t))
            h_t2, c_t2 = self.lstm2(h_t, (h_t2, c_t2))
            output = self.linear(h_t2)
            outputs.append(output)
        # transform list to tensor
        outputs = torch.cat(outputs, dim=1)
        return outputs
```
# IGV sandbox
## Var
```
work_dir = '/ebio/abt3_projects/software/dev/tmp/igv/'
```
## setup
### conda env
```
conda create -n igv -y bioconda::igv
```
### Connect
```
ssh -L 9720:localhost:9720 -Y nyoungblut@rick.eb.local
```
### Start
```
(igv) @ rick:/ebio/abt3_projects/software/dev/tmp/igv
igv
```
Note: `igv` must be run in the terminal in which X11 was enabled
## running
**Test input files**
```
(igv) @ rick:/ebio/abt3_projects/software/dev/tmp/igv
cp -f /ebio/abt3_projects/databases_no-backup/DeepMAsED/train_runs/n1000_r30/assembly/1/metaspades/contigs_filtered.fasta .
cp -f /ebio/abt3_projects/databases_no-backup/DeepMAsED/train_runs/n1000_r30/map/1/metaspades.bam .
cp -f /ebio/abt3_projects/databases_no-backup/DeepMAsED/train_runs/n1000_r30/map/1/metaspades.bam.bai .
cp -f /ebio/abt3_projects/databases_no-backup/DeepMAsED/train_runs/n1000_r30/map/1/metaspades/features.tsv.gz .
```
**prokka to add gene info**
For testing of loading track info
```
(igv) @ rick:/ebio/abt3_projects/software/dev/tmp/igv
conda install prokka
prokka --cpus 12 --force --strain sim1 --outdir prokka_annot --prefix sim1 --kingdom Bacteria --metagenome contigs_filtered.fasta
mv prokka_annot/sim1.gff prokka_annot/sim1.gff3
```
# Trying with DeepLift bigwig
## Sample 11
* Mateo's "6" is "6" for test_data_n25
**Input files**
```
@ rick:/ebio/abt3_projects/software/dev/tmp/igv/deeplift/test_n100_r30/11/
cp -f /ebio/abt3_projects/databases_no-backup/DeepMAsED/test_runs/n100_r25/assembly/11/megahit/contigs_filtered.fasta .
cp -f /ebio/abt3_projects/databases_no-backup/DeepMAsED/test_runs/n100_r25/map/11/megahit.bam .
cp -f /ebio/abt3_projects/databases_no-backup/DeepMAsED/test_runs/n100_r25/map/11/megahit.bam.bai .
cp -f /ebio/abt3_projects/databases_no-backup/DeepMAsED/test_runs/n100_r25/map/11/megahit/features.tsv.gz .
```
**prokka to add gene info**
For testing the loading of track info
```
(igv) @ rick:/ebio/abt3_projects/software/dev/tmp/igv
conda install prokka
prokka --cpus 12 --force --strain sim1 --outdir prokka_annot --prefix sim1 --kingdom Bacteria --metagenome contigs_filtered.fasta
mv prokka_annot/sim1.gff prokka_annot/sim1.gff3
```
## Sample 23
* Mateo's "23" is "23" for test_data_n25
**Input files**
```
@ rick:/ebio/abt3_projects/software/dev/tmp/igv/deeplift/test_n100_r30/23/
cp -f /ebio/abt3_projects/databases_no-backup/DeepMAsED/test_runs/n100_r25/assembly/23/megahit/contigs_filtered.fasta .
cp -f /ebio/abt3_projects/databases_no-backup/DeepMAsED/test_runs/n100_r25/map/23/megahit.bam .
cp -f /ebio/abt3_projects/databases_no-backup/DeepMAsED/test_runs/n100_r25/map/23/megahit.bam.bai .
cp -f /ebio/abt3_projects/databases_no-backup/DeepMAsED/test_runs/n100_r25/map/23/megahit/features.tsv.gz .
```
**prokka to add gene info**
For testing the loading of track info
```
(igv) @ rick:/ebio/abt3_projects/software/dev/tmp/igv
conda install prokka
prokka --cpus 12 --force --strain sim1 --outdir prokka_annot --prefix sim1 --kingdom Bacteria --metagenome contigs_filtered.fasta
mv prokka_annot/sim1.gff prokka_annot/sim1.gff3
```
# Converting from bigwig to WIG
```
# using `igv` conda env
bigWigToWig unk.bw unk.wig
```
# CCP-BioSim Training Course:
### Basic MD data analysis with MDTraj
IPython (Jupyter) notebooks are a great environment in which to do exploratory analysis of MD data. This notebook illustrates how two key Python packages - *numpy* and *matplotlib* - can be used with the MD analysis package [*mdtraj*](http://mdtraj.org/1.9.0/) for some simple standard analysis tasks.
I am assuming you have a basic knowledge of Python and Jupyter notebooks; if anything is unfamiliar a quick web search should enable you to fill in the gaps.
Along with this notebook you should have downloaded two data files:
1. **bpti.gro** : Structure file for the protein BPTI in Gromacs format.
2. **bpti.xtc** : Gromacs trajectory file for BPTI.
-----
To begin with, run the code in the following cell to check that you have all the data files and Python packages required:
```
import os.path as op
problems = False
top_file = 'data/basic_analysis/bpti.gro'
traj_file = 'data/basic_analysis/bpti.xtc'
try:
import mdtraj as mdt
except ImportError:
print('Error: you don\'t seem to have mdtraj installed - use pip or similar to get it then try again.')
problems = True
try:
import numpy as np
except ImportError:
print('Error: you don\'t seem to have numpy installed - use pip or similar to get it then try again.')
problems = True
try:
import matplotlib.pyplot as plt
# This next line makes matplotlib show its graphs within the notebook.
%matplotlib inline
except ImportError:
print('Error: you don\'t seem to have matplotlib installed - use pip or similar to get it then try again.')
problems = True
try:
import nglview as nv
except ImportError:
print('Error: you don\'t seem to have nglview installed - use pip or similar to get it then try again.')
problems = True
if not op.exists(top_file):
print('Error: you don\'t seem to have the data file {} in this directory.'.format(top_file))
problems = True
if not op.exists(traj_file):
print('Error: you don\'t seem to have the data file {} in this directory.'.format(traj_file))
problems = True
if problems:
print('Fix the errors above then re-run this cell')
else:
print('Success! You seem to have everything ready to run the notebook.')
plt.rcParams.update({'font.size': 15}) #This sets a better default label size for plots
```
## Preparation: loading the trajectory data.
Success? If so let's start by loading the MD data.
In the cell below, we create an MDTraj *Trajectory* object from the trajectory and topology files, then use the nglviewer app to have a quick look at the trajectory:
```
traj = mdt.load(traj_file, top=top_file)
import nglview as nv
view = nv.show_mdtraj(traj)
view
```
Having got a feel for the molecule we are dealing with, and its dynamics, let's look at the Python representation of it.
Let's have a little dig around the trajectory object *traj*:
```
print(traj)
print(type(traj.xyz))
print(traj.xyz.shape)
```
So you can see that the trajectory coordinate data is stored in the numpy array *traj.xyz*. This is a three-dimensional array of shape [n_frames, n_atoms, 3].
**NB:** MDTraj stores coordinates in nanometer units.
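Since MDTraj works in nanometers, converting to the angstroms you may be more used to is just a factor of 10. A quick illustration on a toy array with the same `[n_frames, n_atoms, 3]` layout (made-up values, not the BPTI data):

```python
import numpy as np

# Toy stand-in for traj.xyz: 5 frames, 3 atoms, coordinates in nm
xyz = np.zeros((5, 3, 3))
xyz[:, -1, 0] = 0.38  # place the last atom 0.38 nm along x in every frame

print(xyz.shape)           # (5, 3, 3) i.e. [n_frames, n_atoms, 3]
print(xyz[0, -1])          # coordinates of the last atom in the first frame
print(xyz[0, -1, 0] * 10)  # nm -> angstrom conversion
```

The same multiplication works on the full `traj.xyz` array thanks to numpy broadcasting.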
---
## Example 1: Calculate and plot the end-to-end distance as a function of time.
Let's start with something simple. The cell below defines a function that calculates, for each frame, the distance between the first atom and the last. Then we apply the function to the trajectory object, and plot the result.
```
def end_to_end(xyz):
'''
Calculate end to end distances
Arguments:
xyz : numpy array of shape (n_frames, n_atoms, 3)
Returns:
numpy array of shape (n_frames)
'''
d = np.zeros(len(xyz)) # create an array of zeros of length n_frames
for i in range(len(xyz)):
dxyz = xyz[i, 0] - xyz[i, -1] # note use of '-1' to get the last element of the array (the last atom)
# The next two lines are just Pythagoras:
dxyz2 = dxyz * dxyz
d[i] = np.sqrt(dxyz2.sum()) # If you are not familiar with numpy sum() you might want to look it up
return d
e2e = end_to_end(traj.xyz)
plt.plot(e2e)
```
### Follow-up challenges:
Tweak the code in the cell above to:
1. Add suitable x- and y-axis labels to the plot, and maybe a title (hint: the matplotlib documentation is [here](https://matplotlib.org)).
2. Plot the end-to-end distance in angstroms instead of nanometers.
---
## Example 2: Plotting RMSD data.
### 2a: The easy way.
*MDTraj* includes many useful trajectory analysis tools. Here we use the *rmsd* function to calculate the RMSD of each trajectory frame from the first, and then we plot it.
```
rmsd = mdt.rmsd(traj, traj[0])
plt.plot(rmsd)
```
### 2b: The harder way.
Let's try using *numpy* to calculate the same thing.
If you are quite new to *numpy* you may want to research what the *axis* argument to the *sum()* function does.
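If the `axis` keyword is new to you, here is a minimal illustration on a toy array (nothing to do with the trajectory):

```python
import numpy as np

a = np.arange(12).reshape(3, 4)  # think: 3 "frames" x 4 "atoms"
print(a.sum(axis=0))  # sums down the rows    -> shape (4,)
print(a.sum(axis=1))  # sums across the rows  -> shape (3,)
```

In `myrmsd` below, `axis=2` sums over the x, y, z components and `axis=1` averages over atoms.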
```
def myrmsd(xyz):
'''
Calculate rmsd for a trajectory
Arguments:
xyz: numpy array of shape (n_frames, n_atoms, 3)
Returns:
numpy array of shape (n_frames)
'''
# calculate all displacements relative to the first frame:
dxyz = xyz - xyz[0]
# square them
dxyz2 = dxyz * dxyz
# Pythagoras: r2 = x2 + y2 + z2
dr2 = dxyz2.sum(axis=2)
# square root of mean value of r2 for each snapshot:
rmsd = np.sqrt(dr2.mean(axis=1))
return rmsd
rmsd2 = myrmsd(traj.xyz)
plt.plot(rmsd2)
```
Whoops! That doesn't look right, does it? This is because it is standard practice when calculating RMSDs to least-squares fit the two snapshots together first, to remove translation and rotation. *MDTraj* did this, but the simple *numpy* code did not, so the RMSD values are much bigger.
There is no simple way to fix this with *numpy*, but with *MDTraj* it is easy:
```
traj.superpose(traj[0])
```
Now the trajectory data is all least-squares fitted to the first snapshot ("traj[0]"), we can try the simple *numpy* function again:
```
rmsd3 = myrmsd(traj.xyz)
plt.plot(rmsd3)
```
### Follow-up challenge:
Well it looks like the *MDTraj* result, doesn't it? But to check, write some code in the cell below to calculate and plot the difference between the RMSDs calculated using the two approaches.
```
# Write your code here:
```
---
## Example 3: Calculating RMSFs.
### 3a: The simple approach
While *MDTraj* has a simple function to calculate RMSDs, there is no equivalent for RMSFs (root-mean-square fluctuations). (Yes, there is an approach if you dig around a bit, but it's a bit clunky, IMHO.) But with a bit of *numpy* it's easy to create our own:
```
def myrmsf(xyz):
'''
Calculate RMSF from trajectory data
Arguments:
xyz : numpy array of shape (n_frames, n_atoms, 3)
Returns:
numpy array of shape (n_atoms)
'''
# calculate all displacements relative to the mean structure:
dxyz = xyz - xyz.mean(axis=0)
# square them
dxyz2 = dxyz * dxyz
# Pythagoras: r2 = x2 + y2 + z2
dr2 = dxyz2.sum(axis=2)
# Square root of mean value of r2 for each atom, over all frames:
rmsf = np.sqrt(dr2.mean(axis=0))
return rmsf
rmsf = myrmsf(traj.xyz)
plt.plot(rmsf)
```
### Follow-up challenge:
Add suitable axis labels to the plot.
### 3b: A more advanced approach.
Instead of plotting the RMSF for each atom, how about the average RMSF for each residue? To do this we need to:
1. Find the index of the first atom in each residue
2. Average the atomic RMSFs from this atom to one less than the first atom of the next residue, or the last atom if it is the last residue.
To do the first you will see a bit of slightly more advanced wrangling with *MDTraj*, then the second is pretty straightforward:
```
first_atoms = [r.atom(0).index for r in traj.topology.residues] # You may want to do a bit of research on this line.
def residue_rmsf(rmsf, first_atoms):
'''
Convert atom-resolution RMSFs into residue-resolution values
Arguments:
rmsf : rmsf data as numpy array of shape (n_atoms)
first_atoms : indices of first atoms in each residue as numpy array of shape (n_residues)
Returns:
numpy array of rmsf values of shape (n_residues)
'''
n_res = len(first_atoms)
result = np.zeros(n_res)
for i in range(n_res - 1):
result[i] = rmsf[first_atoms[i]:first_atoms[i+1]].mean()
result[n_res - 1] = rmsf[first_atoms[n_res -1]:].mean()
return result
res_rmsf = residue_rmsf(rmsf, first_atoms)
plt.plot(res_rmsf)
```
### Follow-up challenge:
You can get a numpy array with the indices of all the backbone (N, CA, C and O) atoms using the command:
```
backbone_list = traj.topology.select('backbone')
```
In the cell below write some code to calculate and plot the residue-based RMSF values for backbone atoms only:
```
# Write your code here:
```
---
## Example 4: Secondary structure analysis.
*MDTraj* includes the DSSP algorithm to assign secondary structure. In its simplest form, this returns an [n_frames, n_residues] array where in each frame, each residue is labelled as 'H' (helical), 'E' (beta-strand), or 'C' (random coil):
```
ss = mdt.compute_dssp(traj)
print (ss.shape)
print (ss[0])
```
Let's plot the number of helical residues in each snapshot. If you are fairly new to *numpy* then note that when the sum() function is applied to an array of logical (Boolean) values, it counts the number of True elements.
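For example, with a toy secondary-structure array (made-up labels):

```python
import numpy as np

ss_toy = np.array([['H', 'H', 'C'],
                   ['H', 'C', 'E']])
print(ss_toy == 'H')                # Boolean array of the same shape
print((ss_toy == 'H').sum(axis=1))  # helical residues per "frame"
```

The cell below applies the same trick to the real `ss` array.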
```
n_helical = (ss == 'H').sum(axis=1)
plt.plot(n_helical)
```
In the cell below, we get a little cleverer and calculate the percentage native structure for each snapshot (assuming 'native' means 'same as the first snapshot'):
```
plt.plot((ss == ss[0]).sum(axis=1) * 100 / 58)  # BPTI has 58 residues
```
### Follow-up challenge:
This graph might look nicer if the data was smoothed over, for example, a running window of 5 points. In the cell below create a function to do this and plot the results
```
# Write your code here:
```
## Summary
Hopefully this brief introduction has convinced you that iPython/Jupyter notebooks are a very useful approach to MD data analysis. We have only scratched the surface of what *MDTraj* can do, and not even begun to look at how, with a bit of extra work, you can get publication-quality graphs out of *matplotlib*.
# Neural networks with PyTorch
Deep learning networks tend to be massive, with dozens or hundreds of layers; that's where the term "deep" comes from. You can build one of these deep networks using only weight matrices as we did in the previous notebook, but in general it's very cumbersome and difficult to implement. PyTorch has a module `nn` that provides an efficient way to build large neural networks.
```
from google.colab import files
def getLocalFiles():
_files = files.upload()
if len(_files) >0:
for k,v in _files.items():
open(k,'wb').write(v)
getLocalFiles()
# Import necessary packages
%matplotlib inline
%config InlineBackend.figure_format = 'retina'
import numpy as np
import torch
import helper
import matplotlib.pyplot as plt
```
Now we're going to build a larger network that can solve a (formerly) difficult problem: identifying text in an image. Here we'll use the MNIST dataset, which consists of greyscale images of handwritten digits. Each image is 28x28 pixels; you can see a sample below
<img src='https://github.com/leandromouralima/deep-learning/blob/master/intro-to-pytorch/assets/mnist.png?raw=1'>
Our goal is to build a neural network that can take one of these images and predict the digit in the image.
First up, we need to get our dataset. This is provided through the `torchvision` package. The code below will download the MNIST dataset, then create training and test datasets for us. Don't worry too much about the details here, you'll learn more about this later.
```
### Run this cell
from torchvision import datasets, transforms
# Define a transform to normalize the data
transform = transforms.Compose([transforms.ToTensor(),
transforms.Normalize((0.5,), (0.5,)),
])
# Download and load the training data
trainset = datasets.MNIST('~/.pytorch/MNIST_data/', download=True, train=True, transform=transform)
trainloader = torch.utils.data.DataLoader(trainset, batch_size=64, shuffle=True)
```
We have the training data loaded into `trainloader` and we make that an iterator with `iter(trainloader)`. Later, we'll use this to loop through the dataset for training, like
```python
for image, label in trainloader:
## do things with images and labels
```
You'll notice I created the `trainloader` with a batch size of 64, and `shuffle=True`. The batch size is the number of images we get in one iteration from the data loader and pass through our network, often called a *batch*. And `shuffle=True` tells it to shuffle the dataset every time we start going through the data loader again. But here I'm just grabbing the first batch so we can check out the data. We can see below that `images` is just a tensor with size `(64, 1, 28, 28)`. So, 64 images per batch, 1 color channel, and 28x28 images.
```
dataiter = iter(trainloader)
images, labels = next(dataiter)  # use the built-in next(); the .next() method is Python 2-era
print(type(images))
print(images.shape)
print(labels.shape)
print(labels)
```
This is what one of the images looks like.
```
plt.imshow(images[1].numpy().squeeze(), cmap='Greys_r');
print(labels[1])
```
First, let's try to build a simple network for this dataset using weight matrices and matrix multiplications. Then, we'll see how to do it using PyTorch's `nn` module which provides a much more convenient and powerful method for defining network architectures.
The networks you've seen so far are called *fully-connected* or *dense* networks. Each unit in one layer is connected to each unit in the next layer. In fully-connected networks, the input to each layer must be a one-dimensional vector (which can be stacked into a 2D tensor as a batch of multiple examples). However, our images are 28x28 2D tensors, so we need to convert them into 1D vectors. Thinking about sizes, we need to convert the batch of images with shape `(64, 1, 28, 28)` to have a shape of `(64, 784)`; 784 is 28 times 28. This is typically called *flattening*: we flatten the 2D images into 1D vectors.
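The shape bookkeeping is easy to check with plain numpy (used here only as a stand-in; with a tensor the equivalent call is `images.view(images.shape[0], -1)`):

```python
import numpy as np

batch = np.zeros((64, 1, 28, 28))          # same shape as one MNIST batch
flat = batch.reshape(batch.shape[0], -1)   # -1 lets numpy work out 1*28*28 = 784
print(flat.shape)  # (64, 784)
```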
Previously you built a network with one output unit. Here we need 10 output units, one for each digit. We want our network to predict the digit shown in an image, so what we'll do is calculate probabilities that the image is of any one digit or class. This ends up being a discrete probability distribution over the classes (digits) that tells us the most likely class for the image. That means we need 10 output units for the 10 classes (digits). We'll see how to convert the network output into a probability distribution next.
> **Exercise:** Flatten the batch of images `images`. Then build a multi-layer network with 784 input units, 256 hidden units, and 10 output units using random tensors for the weights and biases. For now, use a sigmoid activation for the hidden layer. Leave the output layer without an activation, we'll add one that gives us a probability distribution next.
```
## Your solution
## Flatten the images
images.shape
images_flat = images.view(images.shape[0], -1)  # flatten each 1x28x28 image into a 784-vector
images_flat.shape
# Create activation function
def activation(x):
""" Sigmoid activation function
Arguments
---------
x: torch.Tensor
"""
return 1/(1+torch.exp(-x))
## Random DATA
#features = torch.randn((1,images_flat.shape[1]))
#features = images_flat[1].unsqueeze(dim=0)
features = images_flat
n_inputs = features.shape[1]
n_hidden = 256
n_outputs = 10
#Create Weights
W1 = torch.randn((n_inputs,n_hidden))
W2 = torch.randn((n_hidden,n_outputs))
# Create biases
B1 = torch.randn((1, n_hidden))
B2 = torch.randn((1, n_outputs))
display(features.shape)
display(W1.shape)
display(W2.shape)
display(B1.shape)
display(B2.shape)
out = torch.mm(activation(torch.mm(features,W1)+B1),W2)+B2
out.shape
out1 = out[1]
out1.shape
#out = # output of your network, should have shape (64,10)
```
Now we have 10 outputs for our network. We want to pass in an image to our network and get out a probability distribution over the classes that tells us the likely class(es) the image belongs to. Something that looks like this:
<img src='https://github.com/leandromouralima/deep-learning/blob/master/intro-to-pytorch/assets/image_distribution.png?raw=1' width=500px>
Here we see that the probability for each class is roughly the same. This is representing an untrained network, it hasn't seen any data yet so it just returns a uniform distribution with equal probabilities for each class.
To calculate this probability distribution, we often use the [**softmax** function](https://en.wikipedia.org/wiki/Softmax_function). Mathematically this looks like
$$
\Large \sigma(x_i) = \cfrac{e^{x_i}}{\sum_{k=1}^{K} e^{x_k}}
$$
What this does is squish each input $x_i$ between 0 and 1 and normalize the values to give you a proper probability distribution where the probabilities sum up to one.
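As a framework-free sanity check, here is the same calculation in numpy; subtracting the row-wise maximum before exponentiating is a common trick to avoid overflow, and it doesn't change the result:

```python
import numpy as np

def softmax_np(x):
    z = x - x.max(axis=1, keepdims=True)  # numerical stability only
    e = np.exp(z)
    return e / e.sum(axis=1, keepdims=True)

p = softmax_np(np.array([[1.0, 2.0, 3.0],
                         [0.0, 0.0, 0.0]]))
print(p.sum(axis=1))  # each row sums to 1
```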
> **Exercise:** Implement a function `softmax` that performs the softmax calculation and returns probability distributions for each example in the batch. Note that you'll need to pay attention to the shapes when doing this. If you have a tensor `a` with shape `(64, 10)` and a tensor `b` with shape `(64,)`, doing `a/b` will give you an error because PyTorch will try to do the division across the columns (called broadcasting) but you'll get a size mismatch. The way to think about this is for each of the 64 examples, you only want to divide by one value, the sum in the denominator. So you need `b` to have a shape of `(64, 1)`. This way PyTorch will divide the 10 values in each row of `a` by the one value in each row of `b`. Pay attention to how you take the sum as well. You'll need to define the `dim` keyword in `torch.sum`. Setting `dim=0` takes the sum across the rows while `dim=1` takes the sum across the columns.
```
def softmax(x):
## TODO: Implement the softmax function here
num = torch.exp(x)
den = torch.exp(x).sum(dim=1).unsqueeze(dim=1)
return num/den
# Here, out should be the output of the network in the previous exercise with shape (64,10)
probabilities = softmax(out)
# Does it have the right shape? Should be (64, 10)
print(probabilities.shape)
# Does it sum to 1?
print(probabilities.sum(dim=1))
```
## Building networks with PyTorch
PyTorch provides a module `nn` that makes building networks much simpler. Here I'll show you how to build the same one as above with 784 inputs, 256 hidden units, 10 output units and a softmax output.
```
from torch import nn
class Network(nn.Module):
def __init__(self):
super().__init__()
# Inputs to hidden layer linear transformation
self.hidden = nn.Linear(784, 256)
# Output layer, 10 units - one for each digit
self.output = nn.Linear(256, 10)
# Define sigmoid activation and softmax output
self.sigmoid = nn.Sigmoid()
self.softmax = nn.Softmax(dim=1)
def forward(self, x):
# Pass the input tensor through each of our operations
x = self.hidden(x)
x = self.sigmoid(x)
x = self.output(x)
x = self.softmax(x)
return x
```
Let's go through this bit by bit.
```python
class Network(nn.Module):
```
Here we're inheriting from `nn.Module`. Combined with `super().__init__()` this creates a class that tracks the architecture and provides a lot of useful methods and attributes. It is mandatory to inherit from `nn.Module` when you're creating a class for your network. The name of the class itself can be anything.
```python
self.hidden = nn.Linear(784, 256)
```
This line creates a module for a linear transformation, $x\mathbf{W} + b$, with 784 inputs and 256 outputs and assigns it to `self.hidden`. The module automatically creates the weight and bias tensors which we'll use in the `forward` method. You can access the weight and bias tensors once the network (`net`) is created with `net.hidden.weight` and `net.hidden.bias`.
```python
self.output = nn.Linear(256, 10)
```
Similarly, this creates another linear transformation with 256 inputs and 10 outputs.
```python
self.sigmoid = nn.Sigmoid()
self.softmax = nn.Softmax(dim=1)
```
Here I defined operations for the sigmoid activation and softmax output. Setting `dim=1` in `nn.Softmax(dim=1)` calculates softmax across the columns.
```python
def forward(self, x):
```
PyTorch networks created with `nn.Module` must have a `forward` method defined. It takes in a tensor `x` and passes it through the operations you defined in the `__init__` method.
```python
x = self.hidden(x)
x = self.sigmoid(x)
x = self.output(x)
x = self.softmax(x)
```
Here the input tensor `x` is passed through each operation and reassigned to `x`. We can see that the input tensor goes through the hidden layer, then a sigmoid function, then the output layer, and finally the softmax function. It doesn't matter what you name the variables here, as long as the inputs and outputs of the operations match the network architecture you want to build. The order in which you define things in the `__init__` method doesn't matter, but you'll need to sequence the operations correctly in the `forward` method.
Now we can create a `Network` object.
```
# Create the network and look at its text representation
model = Network()
model
```
You can define the network somewhat more concisely and clearly using the `torch.nn.functional` module. This is the most common way you'll see networks defined as many operations are simple element-wise functions. We normally import this module as `F`, `import torch.nn.functional as F`.
```
import torch.nn.functional as F
class Network(nn.Module):
def __init__(self):
super().__init__()
# Inputs to hidden layer linear transformation
self.hidden = nn.Linear(784, 256)
# Output layer, 10 units - one for each digit
self.output = nn.Linear(256, 10)
def forward(self, x):
# Hidden layer with sigmoid activation
x = F.sigmoid(self.hidden(x))
# Output layer with softmax activation
x = F.softmax(self.output(x), dim=1)
return x
```
### Activation functions
So far we've only been looking at the sigmoid activation function, but in general any function can be used as an activation function. The only requirement is that for a network to approximate a non-linear function, the activation functions must be non-linear. Here are a few more examples of common activation functions: Tanh (hyperbolic tangent) and ReLU (rectified linear unit).
<img src="https://github.com/leandromouralima/deep-learning/blob/master/intro-to-pytorch/assets/activation.png?raw=1" width=700px>
In practice, the ReLU function is used almost exclusively as the activation function for hidden layers.
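Numerically, each of these activations is a one-liner (toy inputs, just to see the output ranges):

```python
import numpy as np

x = np.array([-2.0, -0.5, 0.0, 0.5, 2.0])
sigmoid = 1 / (1 + np.exp(-x))  # squashes into (0, 1)
tanh = np.tanh(x)               # squashes into (-1, 1)
relu = np.maximum(0.0, x)       # zero for negative inputs, identity otherwise
print(sigmoid, tanh, relu, sep="\n")
```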
### Your Turn to Build a Network
<img src="https://github.com/leandromouralima/deep-learning/blob/master/intro-to-pytorch/assets/mlp_mnist.png?raw=1" width=600px>
> **Exercise:** Create a network with 784 input units, a hidden layer with 128 units and a ReLU activation, then a hidden layer with 64 units and a ReLU activation, and finally an output layer with a softmax activation as shown above. You can use a ReLU activation with the `nn.ReLU` module or `F.relu` function.
It's good practice to name your layers by their type of network, for instance 'fc' to represent a fully-connected layer. As you code your solution, use `fc1`, `fc2`, and `fc3` as your layer names.
```
## Your solution here
import torch.nn.functional as F
class Network(nn.Module):
def __init__(self):
super().__init__()
# Inputs to hidden layer linear transformation
self.fc1 = nn.Linear(784, 128)
# Inputs to hidden layer2 linear transformation
self.fc2 = nn.Linear(128, 64)
# Output layer, 10 units - one for each digit
self.fc3 = nn.Linear(64, 10)
def forward(self, x):
# Hidden layer with ReLU activation
x = F.relu(self.fc1(x))
# Hidden layer with ReLU activation
x = F.relu(self.fc2(x))
# Output layer with softmax activation
x = F.softmax(self.fc3(x), dim=1)
return x
model = Network()
model
```
### Initializing weights and biases
The weights and such are automatically initialized for you, but it's possible to customize how they are initialized. The weights and biases are tensors attached to the layer you defined, you can get them with `model.fc1.weight` for instance.
```
print(model.fc1.weight.shape)
print(model.fc1.bias.shape)
```
For custom initialization, we want to modify these tensors in place. These are actually autograd *Variables*, so we need to get back the actual tensors with `model.fc1.weight.data`. Once we have the tensors, we can fill them with zeros (for biases) or random normal values.
```
# Set biases to all zeros
model.fc1.bias.data.fill_(0)
# sample from random normal with standard dev = 0.01
model.fc1.weight.data.normal_(std=0.01)
```
### Forward pass
Now that we have a network, let's see what happens when we pass in an image.
```
# Grab some data
dataiter = iter(trainloader)
images, labels = next(dataiter)  # use the built-in next(); the .next() method is Python 2-era
# Resize images into a 1D vector, new shape is (batch size, color channels, image pixels)
images.resize_(64, 1, 784)
# or images.resize_(images.shape[0], 1, 784) to automatically get batch size
# Forward pass through the network
img_idx = 0
ps = model.forward(images[img_idx,:])
img = images[img_idx]
helper.view_classify(img.view(1, 28, 28), ps)
ps
```
As you can see above, our network has basically no idea what this digit is. It's because we haven't trained it yet, all the weights are random!
### Using `nn.Sequential`
PyTorch provides a convenient way to build networks like this where a tensor is passed sequentially through operations, `nn.Sequential` ([documentation](https://pytorch.org/docs/master/nn.html#torch.nn.Sequential)). Using this to build the equivalent network:
```
# Hyperparameters for our network
input_size = 784
hidden_sizes = [128, 64]
output_size = 10
# Build a feed-forward network
model = nn.Sequential(nn.Linear(input_size, hidden_sizes[0]),
nn.ReLU(),
nn.Linear(hidden_sizes[0], hidden_sizes[1]),
nn.ReLU(),
nn.Linear(hidden_sizes[1], output_size),
nn.Softmax(dim=1))
print(model)
# Forward pass through the network and display output
images, labels = next(iter(trainloader))
images.resize_(images.shape[0], 1, 784)
ps = model.forward(images[0,:])
helper.view_classify(images[0].view(1, 28, 28), ps)
```
Here our model is the same as before: 784 input units, a hidden layer with 128 units, ReLU activation, 64 unit hidden layer, another ReLU, then the output layer with 10 units, and the softmax output.
The operations are available by passing in the appropriate index. For example, if you want to get first Linear operation and look at the weights, you'd use `model[0]`.
```
print(model[0])
model[0].weight
```
You can also pass in an `OrderedDict` to name the individual layers and operations, instead of using incremental integers. Note that dictionary keys must be unique, so _each operation must have a different name_.
```
from collections import OrderedDict
model = nn.Sequential(OrderedDict([
('fc1', nn.Linear(input_size, hidden_sizes[0])),
('relu1', nn.ReLU()),
('fc2', nn.Linear(hidden_sizes[0], hidden_sizes[1])),
('relu2', nn.ReLU()),
('output', nn.Linear(hidden_sizes[1], output_size)),
('softmax', nn.Softmax(dim=1))]))
model
```
Now you can access layers either by integer or the name
```
print(model[0])
print(model.fc1)
```
In the next notebook, we'll see how we can train a neural network to accurately predict the numbers appearing in the MNIST images.
```
import os
os.environ['PYSPARK_SUBMIT_ARGS'] = \
'--conf spark.cassandra.connection.host=cassandra --packages org.apache.spark:spark-streaming-kafka-0-8_2.11:2.0.2,com.datastax.spark:spark-cassandra-connector_2.11:2.0.2 pyspark-shell'
from pyspark import SparkContext
from pyspark.streaming import StreamingContext
from pyspark.streaming.kafka import KafkaUtils
from pyspark.sql import SQLContext
from pyspark.sql import functions as F
from pyspark.sql.types import *
from pyspark.mllib.recommendation import ALS, MatrixFactorizationModel, Rating
sc = SparkContext(appName="BigDataRiver")
sc.setLogLevel("WARN")
sc.setCheckpointDir('checkpoint/')
ssc = StreamingContext(sc, 60)
sql = SQLContext(sc)
kafkaStream = KafkaUtils.createDirectStream(ssc, ['bdr'], {"metadata.broker.list": 'kafka:9092'})
parsed = kafkaStream.map(lambda v: v[1])
#split is_purchase column into two
separateClicksSchema = StructType([
StructField("purchased_count", LongType(), False),
StructField("clicked_count", LongType(), False)
])
def separateClicks(is_purchase):
return (is_purchase, 1-is_purchase)
separateClicks_udf = F.udf(separateClicks, separateClicksSchema)
def buildCFModel(train):
def isProductToRating(productCount, clickCount):
return (productCount * 3.0) + clickCount
ratings = train.rdd.\
map(lambda r: Rating(r.user_id, r.product, isProductToRating(r.purchased_count, r.clicked_count)))
rank = 10
numIterations = 20
lambdaFactor = 0.01
alpha = 0.01
seed = 42
# pass lambda_ and alpha by keyword: the 4th positional argument of trainImplicit is lambda_, not alpha
return ALS.trainImplicit(ratings, rank, numIterations, lambda_=lambdaFactor, alpha=alpha, seed=seed)
def recommendTopProducts(dfModel):
numberOfRecommendationsRequired = 5
rdd = dfModel.recommendProductsForUsers(numberOfRecommendationsRequired)
recommendations = rdd.map(lambda ur: (ur[0], [r.product for r in ur[1]]))  # Python 3: no tuple-parameter lambdas; materialize the list
topRecommendationsSchema = StructType([
StructField("user_id", IntegerType(), False),
StructField("recommended_products", ArrayType(IntegerType()), False)
])
return sql.createDataFrame(recommendations, topRecommendationsSchema)
def processStream(rdd):
df = sql.read.json(rdd)
if len(df.columns):
#store updated counters in C*
df.withColumn('c', separateClicks_udf(df['is_purchase'])).\
select("user_id","product","c.purchased_count","c.clicked_count").\
write.format("org.apache.spark.sql.cassandra").mode('append').\
options(table="users_interests", keyspace="bdr").save()
#read all data from C*
usersInterests = sql.read.format("org.apache.spark.sql.cassandra").\
options(table="users_interests", keyspace="bdr").load().cache()
dfModel = buildCFModel(usersInterests.select("user_id","product","clicked_count","purchased_count"))
top5 = recommendTopProducts(dfModel)
top5.show()
top5.write.format("org.apache.spark.sql.cassandra").mode('append').options(table="cf", keyspace="bdr").save()
print("Saved")
else:
print("Empty")
parsed.foreachRDD(lambda rdd: processStream(rdd))
ssc.start()
ssc.awaitTermination()
```