# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # Global Warming's Effects on Agriculture - India
#
# Global warming and agriculture are two of the many topics that have always excited my curiosity, so why not bring them together under one roof?
#
# Global warming could be the next big global crisis after the current COVID-19 pandemic. It has been affecting agriculture around the world and will do so even more in the future. In this data analysis we'll examine how global warming affects agriculture.
#
# ## Contents
#
# - How to run the code
# - Data Preparation and Cleaning
# - Exploratory Analysis and Visualization
# - Asking and Answering Questions
# - Inferences and Conclusion
# - References and further reading
# ## How to run the code
#
# This is an executable [Jupyter notebook](https://jupyter.org) hosted on [Jovian.ml](https://jovian.ml), a platform for sharing data science projects. You can run and experiment with the code in a couple of ways: using free online resources (recommended) or on your own computer.
#
#
# ### Option 1: Running using free online resources (1-click, recommended)
#
# The easiest way to start executing this notebook is to click the "Run" button at the top of this page, and select "Run on Binder". This will run the notebook on [mybinder.org](https://mybinder.org), a free online service for running Jupyter notebooks. You can also select "Run on Colab" or "Run on Kaggle", but you'll need to create an account on [Google Colab](https://colab.research.google.com) or [Kaggle](https://kaggle.com) to use these platforms.
#
# ### Option 2: Running on your computer locally
#
# 1. Install Conda by following these instructions. Add Conda binaries to your system `PATH`, so you can use the `conda` command on your terminal.
#
#
# 2. Create a Conda environment and install the required libraries by running these commands on the terminal:
#
# ```
# conda create -n zerotopandas -y python=3.8
# conda activate zerotopandas
# pip install jovian jupyter numpy pandas matplotlib seaborn opendatasets --upgrade
#
# ```
#
# 3. Press the "Clone" button above to copy the command for downloading the notebook, and run it on the terminal. This will create a new directory and download the notebook. The command will look something like this:
#
# ```
# jovian clone notebook-owner/notebook-id
#
# ```
#
# 4. Enter the newly created directory using `cd directory-name` and start the Jupyter notebook.
#
# ```
# jupyter notebook
#
# ```
#
# You can now access Jupyter's web interface by clicking the link that shows up on the terminal or by visiting http://localhost:8888 on your browser. Click on the notebook file (it has a `.ipynb` extension) to open it.
# ## Data Preparation and Cleaning
#
# Let's now talk a little about data preparation and cleaning. For this analysis I've used various datasets from Kaggle. The main focus is to analyze and visualize the impact of global warming on agriculture.
#
# I've used two different datasets - one for global temperatures and the other for Indian crop production. After preparing the data, by the end of this section we'll have two data frames, `india_temp_df` and `india_crop_production_df`, each saved as a `.csv` file.
#
# #### Activities Performed
#
# - Load the dataset into a data frame using Pandas
# - Explore the number of rows & columns, ranges of values etc.
# - Handle missing, incorrect and invalid data
# - Perform any additional steps (parsing dates, creating additional columns, merging multiple dataset etc.)
import os
os.listdir()
import pandas as pd
global_temp_by_state_raw_df = pd.read_csv('GlobalLandTemperaturesByState.csv')
global_temp_by_state_raw_df.sample(5)
global_temp_by_state_raw_df.columns.tolist()
india_temp_raw_df = global_temp_by_state_raw_df[global_temp_by_state_raw_df.Country == 'India' ].copy()
india_temp_raw_df.sample(5)
# +
selected_cols = ['dt',
'AverageTemperature','State']
india_temp_df = india_temp_raw_df[selected_cols].copy()
india_temp_df.sample(7)
# -
india_temp_df.shape
# ### Working with Dates
#
# Now, if you check the date column `dt`, its data type shows as `object`. The reason is that pandas assigns the `object` dtype to unidentified columns, so for better analysis we should convert it to the `datetime` dtype.
#
# To do that, we can use the pandas `pd.to_datetime` method.
india_temp_df.info()
# converting the `dt` column to the datetime dtype
india_temp_df['dt'] = pd.to_datetime(india_temp_df['dt'])
india_temp_df.info()
# Well, it seems like everything worked well. So let's save this dataframe as a `.csv` file.
# +
# saving the data frame
india_temp_df.to_csv('india-temp.csv')
# -
os.listdir()
os.listdir('cropProduction')
raw_df_1 = pd.read_csv('cropProduction/datafile.csv')
raw_df_1.sample(5)
raw_df_2 = pd.read_csv('cropProduction/datafile (2).csv')
raw_df_2.sample(5)
raw_df_3 = pd.read_csv('cropProduction/datafile (3).csv')
raw_df_3.sample(5)
raw_df_1 = pd.read_csv('cropProduction/datafile (1).csv')
raw_df_1.sample(5)
raw_df_1.columns.tolist()
# +
# Selecting the required columns and making a new data frame 'crop_prod_sel_df'
sel_cols = ['Crop',
'State']
crop_prod_sel_df = raw_df_1[sel_cols].copy()
crop_prod_sel_df.sample(5)
# -
crop_prod_sel_df.shape
raw_df_4 = pd.read_csv('cropProduction/produce.csv')
raw_df_4.sample(3)
crop_prod_sel_df.Crop.sample(5)
# If we look at the crop names, they are in uppercase, so let's convert them with `str.capitalize()` (first letter uppercase, rest lowercase)
crop_prod_sel_df['Crop'] = crop_prod_sel_df.Crop.str.capitalize()
crop_prod_sel_df.sample(5)
# Once we are done capitalizing the crop names, we'll merge the data frame with `raw_df_2` on `Crop`.
raw_df_2.sample(5)
# +
raw_df_2.columns.tolist()
# +
# Lets select the required columns
req_cols = ['Crop ',
'Production 2006-07',
'Production 2007-08',
'Production 2008-09',
'Production 2009-10',
'Production 2010-11',]
india_crop_production_raw_df = raw_df_2[req_cols].copy()
india_crop_production_raw_df.sample(5)
# -
# So this is the data frame with which we want to combine the `State` column. But before merging, let's fix a small error: the extra trailing space in the `Crop ` column name.
india_crop_production_raw_df = india_crop_production_raw_df.rename(columns={'Crop ' : 'Crop'})
india_crop_production_raw_df.columns.tolist()
crop_prod_sel_df.sample(5)
india_crop_production_raw_df.sample(5)
# +
# merging the data frame
india_crop_production_df = india_crop_production_raw_df.merge(crop_prod_sel_df, on="Crop")
# -
india_crop_production_df.sample(5)
# +
# saving the data frame
india_crop_production_df.to_csv('india-crop-production.csv')
# -
# Let us save and upload our work to Jovian before continuing
# !pip install jovian --upgrade --quiet
import jovian
jovian.commit(files=['india-crop-production.csv', 'india-temp.csv'])
# ## Exploratory Analysis and Visualization
#
#
# Let's begin by importing `matplotlib.pyplot` and `seaborn`.
# +
import seaborn as sns
import matplotlib
import matplotlib.pyplot as plt
# %matplotlib inline
sns.set_style('darkgrid')
matplotlib.rcParams['font.size'] = 13
matplotlib.rcParams['figure.figsize'] = (15, 8)
matplotlib.rcParams['figure.facecolor'] = '#00000000'
# -
# ### Understanding the Average Temperature Data of Different States
#
# Looking at the graph gives us an idea of the average temperatures in different states. Sikkim and Jammu and Kashmir have the coolest temperatures, whereas Delhi and Rajasthan have the hottest.
# +
# ploting the average-temperature graph with respect to its states
plt.scatter(india_temp_df.State, india_temp_df.AverageTemperature);
plt.xticks(rotation=80);
plt.title('India Average Temperature')
plt.legend(['Average Temperature'])
plt.xlabel('Different States of India')
plt.ylabel('Temperature of Different States');
# -
# ### Exploring the Crops Production Data of Different States
# +
# Plotting a graph for the crop production from 2006 to 2011
india_crop_production_df.plot();
plt.title('Crop Production from 2006 - 11');
plt.ylabel('Crops Production (tons per hectare)');
# +
# To get an overview of the complete dataset, a pair plot works best
sns.pairplot(india_crop_production_df);
# -
# ## Asking and Answering Questions
#
# We've already gained some insights about our datasets - `india_temp_df` and `india_crop_production_df`. We have seen which are the warmest and the coldest states of India, simply by plotting with the `matplotlib` Python library. So now let's try to answer some interesting questions.
#
# ### Q1: Which states are covered, and which different crops are grown?
#
# To answer this question we'll use the `.unique()` method to display the list of different crops and states.
#
# >`.unique()` -- While analyzing the data, many times the user wants to see the unique values in a particular column, which can be done using Pandas unique() function.
# +
#list of states
dif_states = india_crop_production_df['State'].unique().tolist()
dif_states
# +
# list of crops
dif_crops = india_crop_production_df['Crop'].unique().tolist()
dif_crops
# -
# ### Q2: Which is the most and the least produced crops?
#
# To answer this question, we'll have to create a new column `Total Production`, which will store the sum of each crop's production from 2006 to 2011; after that we can easily answer the question using the pandas `.nlargest()` and `.nsmallest()` methods. We can also use the `.sort_values` method.
#
#
# > `.sort_values` -- Sort by the values along either axis. `ascending` parameter : bool or list of bool, default True Sort ascending vs. descending. Specify list for multiple sort orders. If this is a list of bools, must match the length of the by.
#
#
# > `.nlargest()` -- Get the rows of a DataFrame sorted by the `n` largest values of `columns`.
#
#
india_crop_production_df['Total Production'] = india_crop_production_df.sum(axis=1, numeric_only=True)
india_crop_production_df.head()
india_crop_production_df['Total Production'].nlargest(1)
# +
# Sorting the most grown crops using the .sort_values method
india_crop_production_df.sort_values('Total Production',ascending = False).head(1)
# +
# Sorting the least grown crops using the .sort_values method
india_crop_production_df.sort_values('Total Production',ascending = True).head(1)
# +
crops_total_production = india_crop_production_df['Total Production'].unique()
crops_total_production
# +
# plotting the bar for the most and the least grown crop
plt.bar(dif_crops, crops_total_production);
plt.title('Total Crops Produced');
plt.xlabel('Different Crops');
plt.ylabel('Total Production (tons per hectare)');
plt.legend(['Different Crops']);
# -
# So, now by looking at the visualization of all the crops grown, we can answer the question: Arhar is the least grown crop and Maize is the most grown crop.
# ### Merging the data frames
#
# Merging the `india_temp_df` to `india_crop_production_df` on `States` using the pandas `.merge()` function/method.
#
# > `.merge()` -- Merge DataFrame objects by performing a database-style join operation by columns or indexes. If joining columns on columns, the DataFrame indexes *will be ignored*. Otherwise if joining indexes on indexes or indexes on a column or columns, the index will be passed on.
# +
# merging both the data frames
main_df = india_temp_df.merge(india_crop_production_df, on="State")
# -
main_df.sample(5)
# ### Q3: Which states produce the most and the least crops?
#
# To answer this question, we'll create a separate dataframe using the pandas `.groupby` method.
#
# > `.groupby` -- Group series using mapper (dict or key function, apply given function to group, return result as series) or by a series of columns.
# +
crops_by_states = india_crop_production_df.groupby(['State']).size().to_frame()
# crops_by_states.sort_values('State')
# +
# plotting the simple line graph
plt.plot(crops_by_states);
plt.xticks(rotation=80);
plt.title('Crop Produced by States')
plt.legend(['Crops'])
plt.xlabel('Different States of India')
plt.ylabel('Number of Crops');
# -
# Looking at the graph gives us an idea of the states producing the most and the fewest crops. The top producers are Andhra Pradesh and Uttar Pradesh, and the lowest are Bihar and Punjab.
# ### Q4: Which states produce Arhar and Maize?
#
# Looking at the data frame and the graph, we get our answer: Andhra Pradesh, Bihar, Karnataka, Rajasthan, Uttar Pradesh, Gujarat and Maharashtra are the states that produce Arhar or Maize.
#
# If we look at the graph closely, we see that Andhra Pradesh, Uttar Pradesh, and Karnataka produce both Arhar and Maize, while states like Bihar, Gujarat and Maharashtra produce only one of the two.
india_crop_production_df.sample(5)
# +
maize = india_crop_production_df[india_crop_production_df['Crop'] == "Maize"]
arhar = india_crop_production_df[india_crop_production_df['Crop'] == "Arhar"]
arhar
# -
maize
# +
plt.scatter(maize.State, maize.index);
plt.scatter(arhar.State, arhar.index);
plt.title("States Producing Arhar and Maize" );
plt.legend(['Maize', 'Arhar'], loc=7);
# -
# So, with this we've finished answering questions about the agriculture dataset. Now we'll analyze these results against the average Indian temperatures. Above, we also created the dataframe `main_df`, which combines the temperature and agriculture dataframes.
# ### Q5: How does temperature affect crop production?
india_temp_df.sample(4)
# +
sel_states_df = india_temp_df[(india_temp_df['State'] == "Bihar") |
(india_temp_df['State'] == "Karnataka") |
(india_temp_df['State'] == "Andhra Pradesh") |
(india_temp_df['State'] == "Gujarat") |
(india_temp_df['State'] == "Maharashtra") |
(india_temp_df['State'] == "Rajasthan") |
(india_temp_df['State'] == "Uttar Pradesh")
]
sel_states_df.shape
# +
plt.plot(sel_states_df.State, sel_states_df.AverageTemperature);
plt.xticks(rotation=80);
plt.title('India Average Temperature')
plt.legend(['Average Temperature'])
plt.xlabel('Different States of India')
plt.ylabel('Temperature of Different States');
# +
# Most and least crop producing states
ml_states_df = india_temp_df[(india_temp_df['State'] == "Bihar") |
(india_temp_df['State'] == "Andhra Pradesh") |
(india_temp_df['State'] == "Punjab") |
(india_temp_df['State'] == "Uttar Pradesh")
]
ml_states_df.head()
# +
plt.plot(ml_states_df.State, ml_states_df.AverageTemperature);
plt.title('India Average Temperature')
plt.legend(['Average Temperature'])
plt.xlabel('Different States of India')
plt.ylabel('Temperature of Different States');
# -
# The graph suggests that states with cold temperatures are not good at producing many crops, while places with warm to intermediate temperatures support stronger agriculture.
# ## Inferences and Conclusions
#
# At the end of this analysis we can draw some conclusions about how global warming is affecting agriculture.
#
# - We started by finding the list of different states and crops.
#
#
# - We first analyzed the temperature dataset and found the coldest and hottest states of India: the coldest are Sikkim and Jammu and Kashmir, and the hottest are Delhi and Rajasthan.
#
#
# - We analyzed the agriculture data and found the most and the least grown crops: Maize and Arhar respectively.
#
#
# - After that we looked at the states producing the most and the fewest crops (most: Uttar Pradesh and Andhra Pradesh; least: Bihar and Punjab), and then at the states that produce Arhar and Maize.
#
#
# - We also used the temperature data to analyze how temperature affects the crops.
#
#
# - The graphs suggest that states with cold temperatures are not good at producing many crops, while places with warm to intermediate temperatures support stronger agriculture.
#
#
# ## References and further reading
#
# These are some of the links, which I found useful.
#
# * [We are Not Ready for the Next Global Crisis](https://www.sachinshrmaa.com/health/we-are-not-ready-for-the-next-global-crisis/) post on my blog.
#
# * [Climate Change: Earth Surface Temperature Data](https://www.kaggle.com/berkeleyearth/climate-change-earth-surface-temperature-data) on kaggle
#
# * [Pandas Guide](https://pandas.pydata.org/docs/getting_started/intro_tutorials/03_subset_data.html) Pandas documentation
#
#
# Let's save and upload our work to Jovian
jovian.commit()
global warming affects on agriculture India.ipynb
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# + [markdown] colab_type="text" id="6-w6JdXpymm5"
# # **Assignment 2** #
#
# **Delivery Instructions**: This assignment contains some theoretical questions but also coding. For the coding you are encouraged to work with Python, even if it is not your favorite language, because it will likely prepare you for future courses or professional requirements. If you do work with Python, you will be required to deliver a notebook similar to this one (you can actually take this and work on it directly). If you work with another language you will be required to return a link to a [repl.it](https://repl.it/), where any text will be in the form of comments. During the week, I will give you more information about how to upload your work.
#
#
# + [markdown] colab_type="text" id="-d604ucx7LtF"
# ### **Q1. Implementing a max heap** ###
#
# The implementation of heapMaxRemove(H) in the following cell omits some important details. In a text cell, briefly discuss what the problem is, and then give a corrected implementation in a code cell.
#
# + colab={} colab_type="code" id="pBFb9-cA7ai2"
# The following code implements a **max** heap
#
# Note: Python lists are passed by reference, so changing an input list
# within the body of a function mutates it in place (no copy is made);
# insert and remove each take O(log n) time. We will soon see how to
# package this more cleanly using Python classes
# function for inserting an element into the heap
def heapInsert(H, x):
    n = len(H)
    H.append(x)  # append in last leaf (next available position in array/list)
    # now bubble up x
    pos = n  # current position of bubble-up
    while pos > 0:  # stop when x reaches the root
        parent_pos = (pos - 1) // 2
        if H[parent_pos] < H[pos]:
            H[pos] = H[parent_pos]  # copy parent value to current position
            H[parent_pos] = x  # move x to parent's position
            pos = parent_pos  # update current position
        else:
            break  # break the bubble-up loop
    return H
# function for removing max element from heap
# WARNING: This function is intentionally incomplete --
# You will fix this in the assignment
def heapMaxRemove(H):
    x = H.pop()  # pop last element
    H[0] = x  # put it in the place of max
    # now bubble-down x
    pos = 0
    while True:
        c1_pos = 2*pos + 1  # child 1 position
        c2_pos = 2*pos + 2  # child 2 position
        if H[c1_pos] > H[c_2]:
            c_pos = c1_pos
        else:
            c_pos = c2_pos  # which child is active in possible swap
        if H[pos] < H[c_pos]:
            H[pos] = H[c_pos]  # swap
            H[c_pos] = x
            pos = c_pos  # update current position
        else:
            break  # break
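# For comparison with the insert routine above, here is a self-contained, corrected sketch of a bubble-up insert (the helper name `heap_insert` and the sample values are my own, chosen for illustration):

```python
def heap_insert(H, x):
    # append x at the next free leaf, then bubble it up while it beats its parent
    H.append(x)
    pos = len(H) - 1
    while pos > 0:
        parent = (pos - 1) // 2
        if H[parent] < H[pos]:
            H[parent], H[pos] = H[pos], H[parent]  # swap with parent
            pos = parent
        else:
            break  # heap property restored
    return H

H = []
for x in [3, 9, 2, 7, 5]:
    heap_insert(H, x)
# H[0] now holds the maximum, 9
```

Note the two details that matter: the loop stops at the root (`pos > 0`), and the bubble-up position variable is used consistently.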
# + [markdown] colab_type="text" id="NfG6AGxO-st8"
# *your description of the problem goes here*
#
#
# + colab={} colab_type="code" id="bsxuuFMd-6St"
# your code correction goes here
# + [markdown] colab_type="text" id="gWtNntjZJJxO"
# ### **Q2. Finding the minimum element in a Max Heap**
#
# In a text cell, give an abstract description of an algorithm that takes as input a Max Heap, and finds the minimum in it. The algorithm should perform the **minimum possible number of comparisons**. In a code cell, give an implementation of your algorithm. Your code should take as input an array representing a Max Heap and return the minimum element.
#
#
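# One possible line of attack (a sketch under my own assumptions, not necessarily the comparison-optimal answer the question asks for): in a max heap every parent dominates its children, so the minimum must live in a leaf, and the leaves of an n-element array heap occupy indices n//2 through n-1.

```python
def heap_min(H):
    # scan only the leaves (indices n//2 .. n-1); H is assumed non-empty
    n = len(H)
    return min(H[n // 2:])

# a small hypothetical max heap for illustration
print(heap_min([9, 5, 8, 1, 3, 6, 7]))  # prints 1
```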
# + [markdown] colab_type="text" id="Sn5GvWFsA2uv"
# *your algorithm description goes here*
# + colab={} colab_type="code" id="Vl_FTA95A1J7"
# your code goes here
# + [markdown] colab_type="text" id="SVvBSS47ElLP"
# ### **Q3. Sorting with original position memory**
#
# The function mergeSort($A$) we discussed in the first lecture returns an array $B$ which is the sorted version of the input array $A$. Give a modification of mergeSort so that it in addition to $B$ it returns an array $P$, such that $P[j]$ contains the position of element $B[j]$ in array $A$.
#
# **example**: <br>
# A = [10, 3, 5] <br>
# B = [3, 5, 10] <br>
# P = [2, 3, 1]
#
#
#
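# One way to sketch this (my own illustrative implementation, not the required submission): run a standard mergeSort on (value, original 1-based position) pairs, then split the sorted pairs into B and P.

```python
def merge_sort_with_positions(A):
    # sort pairs (value, 1-based original index) by value with mergesort
    def sort(pairs):
        if len(pairs) <= 1:
            return pairs
        mid = len(pairs) // 2
        left, right = sort(pairs[:mid]), sort(pairs[mid:])
        merged, i, j = [], 0, 0
        while i < len(left) and j < len(right):
            if left[i][0] <= right[j][0]:
                merged.append(left[i]); i += 1
            else:
                merged.append(right[j]); j += 1
        merged.extend(left[i:])
        merged.extend(right[j:])
        return merged

    pairs = sort([(v, idx + 1) for idx, v in enumerate(A)])
    B = [v for v, _ in pairs]
    P = [p for _, p in pairs]
    return B, P

print(merge_sort_with_positions([10, 3, 5]))  # ([3, 5, 10], [2, 3, 1])
```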
# + colab={} colab_type="code" id="WmJpG_qHDbM5"
# your code goes here
# + [markdown] colab_type="text" id="BieVTa-EKUCi"
# ### **Q4. A theoretical question**
#
# Suppose we start from number $n>2$, and we keep hitting the $\sqrt{\cdot }$ (square-root) button in a scientific calculator. How many times (as a function of $n$) do we need to push the button before we see a number smaller than $2$ in the output? Can you give a justification?
#
#
#
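# To build intuition before writing a justification, one can simply simulate the button presses (an illustrative experiment of my own, not the asked-for proof):

```python
import math

def presses_until_below_two(n):
    # count square-root presses until the display drops below 2
    count = 0
    x = float(n)
    while x >= 2:
        x = math.sqrt(x)
        count += 1
    return count

# try a few values of n and look for a pattern
for n in [4, 16, 256, 65536]:
    print(n, presses_until_below_two(n))
```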
# + [markdown] colab_type="text" id="PZv6UZpCOT2f"
# *your answer goes here*
# + colab={} colab_type="code" id="Hnl1gIh-NMRL"
notebooks/work/prior/Assignment-2.ipynb
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# This lesson introduces software optimization (SO). We will see when and why we should use software optimization, and survey the broad classes of SO algorithms.
# ## Instructor Introduction
# [Youtube Video](https://youtu.be/7tRirMjZWDU)
iot-icin-intel-edge-ai/bolum-4-optimizasyon-teknikleri-ve-araclari/ders-1-yazilim-optimizasyonu-tanitimi.ipynb
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # Salmon Mapping
# ## Prepare mapping commands
#
import pandas as pd
import pathlib
fastq_meta = pd.read_csv('./metadata/trimmed_fastq_metadata.csv', index_col=0)
fastq_meta
# ## Prepare salmon command for each sample
# +
# output dir
output_dir = pathlib.Path('quant/').absolute()
output_dir.mkdir(exist_ok=True)
# set all the directories
index_dir = pathlib.Path('../ref/Salmon/salmon_index/').absolute()
# -
# I use my own server; change this number to 4 if running on a laptop.
# Also, because salmon parallelizes internally, we just run the salmon commands one by one
threads = 45
# make command for each RNA-seq sample based on the metadata
commands = {}
for (tissue, time, rep), sub_df in fastq_meta.groupby(['tissue', 'dev_time', 'replicate']):
    fastq_paths_str = ' '.join(sub_df['file_name'])
    output_name = output_dir / f'{tissue}_{time}_{rep}.quant'
    # assemble the final command
    command = f'salmon quant -i {index_dir} -l A -r {fastq_paths_str} --threads {threads} --validateMappings -o {output_name}'
    commands[f'{tissue}_{time}_{rep}'] = command
# an example command
command
# ## Run salmon
import subprocess
for name, command in commands.items():
    # once a command finishes, keep a physical record (flag file) so you know for sure it completed;
    # the flag also prevents rerunning the command if execution stopped somewhere
    if pathlib.Path(output_dir / name).exists():
        print('EXISTS', name)
        continue
    subprocess.run(command, shell=True, check=True,
                   stdout=subprocess.PIPE, stderr=subprocess.PIPE, encoding='utf8')
    print('FINISH', name)
    with open(output_dir / name, 'w') as f:
        f.write('Oh Yeah')
# ## Clean up the flag
for name in commands.keys():
    subprocess.run(f'rm {output_dir / name}', shell=True)
# ## Make a metadata for salmon output
# find all the salmon quant.sf outputs, then build a metadata table
fastq_list = list(output_dir.glob('**/quant.sf'))
fastq_list[:5]
pd.read_csv(fastq_list[0], nrows=10, sep='\t', index_col=0)
# +
records = []
for path in fastq_list:
    tissue, time, rep = path.parent.name.split('_')
    records.append([tissue, time, rep, str(path)])
salmon_metadata = pd.DataFrame(records, columns=['tissue', 'dev_time', 'replicate', 'salmon_count_path'])
salmon_metadata.to_csv('metadata/salmon_metadata.csv')
# -
data/DevFB/3.mapping.ipynb
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# +
## 3.1 MNIST
import os
from scipy.io import loadmat
mnist_path = "./mnist-original.mat"
mnist_raw = loadmat(mnist_path)
mnist = {
    "data": mnist_raw["data"].T,
    "target": mnist_raw["label"][0],
    "COL_NAMES": ["label", "data"],
    "DESCR": "mldata.org dataset: mnist-original",
}
print("Done!")
# -
mnist
X, y = mnist["data"], mnist["target"]
X.shape
y.shape
# +
# using matplotlib to display the images
# %matplotlib inline
import matplotlib
import matplotlib.pyplot as plt
some_digit = X[36000]
some_digit_image = some_digit.reshape(28, 28)
plt.imshow(some_digit_image, cmap=matplotlib.cm.binary, interpolation="nearest")
plt.axis("off")
plt.show()
y[36000]
# +
# mnist is already split into training and test set: first 60 000 and last 10 000
X_train, X_test, y_train, y_test = X[:60000], X[60000:], y[:60000], y[60000:]
# We can shuffle the training set
import numpy as np
shuffle_index = np.random.permutation(60000)
X_train, y_train = X_train[shuffle_index], y_train[shuffle_index]
# +
## 3.2 Training a Binary Classifier
# simplify the problem and try to identify number "5"
y_train_5 = (y_train == 5)
y_train_5
# -
y_test_5 = (y_test == 5)
# +
# SGD classifier: capable of handling large data sets
from sklearn.linear_model import SGDClassifier
sgd_clf = SGDClassifier(random_state=42)
sgd_clf.fit(X_train, y_train_5)
# -
sgd_clf.predict([some_digit])
## 3.3 Performance Measure
# 3.3.1 Measure accuracy using Cross Validation
from sklearn.model_selection import cross_val_score
cross_val_score(sgd_clf, X_train, y_train_5, cv=3, scoring="accuracy")
# +
from sklearn.base import BaseEstimator
class Never5Classifier(BaseEstimator):
    def fit(self, X, y=None):
        pass
    def predict(self, X):
        return np.zeros((len(X), 1), dtype=bool)
never_5_clf = Never5Classifier()
cross_val_score(never_5_clf, X_train, y_train_5, cv=3, scoring="accuracy")
# +
# 3.3.2 Confusion Matrix
# use cross_val_predict() function
from sklearn.model_selection import cross_val_predict
y_train_pred = cross_val_predict(sgd_clf, X_train, y_train_5, cv=3)
# Use the confusion_matrix() function:
from sklearn.metrics import confusion_matrix
confusion_matrix(y_train_5, y_train_pred)
# +
# Each row in the confusion matrix represents an actual class
# Each column is a predicted class
# Precision: accuracy of the positive predictions
# Precision = TP / (TP + FP)
# Recall: TP/(TP + FN)
from sklearn.metrics import precision_score, recall_score
precision_score(y_train_5, y_train_pred)
recall_score(y_train_5, y_train_pred)
# +
# We usually combine precision and recall into one metric called F1 score
# It is a harmonic mean of precision and recall
# F1 = 2 / (1 / precision + 1 / recall)
from sklearn.metrics import f1_score
f1_score(y_train_5, y_train_pred)
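# As a quick check of the F1 formula above (with made-up precision/recall values), the harmonic-mean form and the equivalent product form agree:

```python
# hypothetical precision/recall values for illustration
precision, recall = 0.8, 0.6
f1_harmonic = 2 / (1 / precision + 1 / recall)
f1_product = 2 * precision * recall / (precision + recall)
print(f1_harmonic, f1_product)  # both are approximately 0.6857
```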
# +
# F1 score tends to be large when both precision and recall are good
# but it may not always be what you want
# To catch shoplifters, you will want high recall even if you get many false alarms
# but we cannot always get high precisions and recalls, there is a tradeoff
# 3.3.4 Precision Recall Tradeoff
# sklearn does not let you set the threshold (score) directly, but it does give you access to the decision
# scores it uses to make predictions: instead of calling the predict() method, you can call its
# decision_function() method, which returns a score for each instance, and then make predictions
# based on those scores using any threshold you want
y_scores = sgd_clf.decision_function([some_digit])
y_scores
# -
threshold = -80000
y_some_digit_pred = (y_scores > threshold)
y_some_digit_pred
# +
# How can you decide which threshold to use?
# First you need to get scores of all instances in the training set using cross_val_predict() function
# but specifying that you want it to return decision scores instead of predictions
y_scores = cross_val_predict(sgd_clf, X_train, y_train_5, cv=3, method="decision_function")
# compute precision and recall for all possible thresholds using precision_recall_curve()
from sklearn.metrics import precision_recall_curve
precisions, recalls, thresholds = precision_recall_curve(y_train_5, y_scores)
def plot_precision_recall_vs_threshold(precisions, recalls, thresholds):
    plt.plot(thresholds, precisions[:-1], "b--", label="Precision")
    plt.plot(thresholds, recalls[:-1], "g-", label="Recall")
    plt.xlabel("Threshold")
    plt.legend(loc="upper left")
    plt.ylim([0, 1])
plot_precision_recall_vs_threshold(precisions, recalls, thresholds)
plt.show()
# -
# Now you decide to aim for 90% precision; the corresponding threshold is about 70,000
y_train_pred_90 = (y_scores > 70000)
precision_score(y_train_5, y_train_pred_90)
recall_score(y_train_5, y_train_pred_90)
# +
# Precision can sometimes dip as the threshold rises (e.g. from 4/5 down to 3/4), even though it trends upward overall
# 3.3.5 ROC Curve
# ROC curve: plots TPR against FPR
from sklearn.metrics import roc_curve
fpr, tpr, thresholds = roc_curve(y_train_5, y_scores)
def plot_roc_curve(fpr, tpr, label=None):
    plt.plot(fpr, tpr, linewidth=2, label=label)
    plt.plot([0, 1], [0, 1], 'k--')
    plt.axis([0, 1, 0, 1])
    plt.xlabel('False Positive Rate')
    plt.ylabel('True Positive Rate')
plot_roc_curve(fpr, tpr)
plt.show()
# +
# There is also a tradeoff: the higher TPR, also higher FPR
# One way to compare classifiers is to use ROC AUC (area under curve)
# A perfect classifier will have AUC = 1, and a purely random classifier will have an AUC = 0.5
from sklearn.metrics import roc_auc_score
roc_auc_score(y_train_5, y_scores)
# +
# You should choose the precision/recall curve whenever the positive class is rare, or when
# you care more about false positives than false negatives; use the ROC curve otherwise.
# In this case, because we have many more negatives (non-5s), the ROC curve looks fairly good,
# but the PR curve shows there is room for improvement
# Let's train a RandomForestClassifier, but it does not have decision_function(),
# instead it has predict_proba() method which returns an array containing a row per instance and
# a column per class, each containing the probability that the given instance belongs to each class
from sklearn.ensemble import RandomForestClassifier
forest_clf = RandomForestClassifier(random_state=42)
y_probas_forest = cross_val_predict(forest_clf, X_train, y_train_5, cv=3, method="predict_proba")
y_scores_forest = y_probas_forest[:, 1] # Score = probas of positive class in our case
fpr_forest, tpr_forest, threshold_forest = roc_curve(y_train_5, y_scores_forest)
plt.plot(fpr, tpr, "b:", label="SGD")
plot_roc_curve(fpr_forest, tpr_forest, "Random Forest")
plt.legend(loc="lower right")
plt.show()
# -
roc_auc_score(y_train_5, y_scores_forest)
## 3.4 Multiclass Classification
# For algorithms which cannot do multiclass classification, we can adopt strategies of OvA and OvO
# Scikit-learn detects when you try to use a binary classification algorithm for a multiclass classification task
# and it automatically runs OvA (except for SVM it runs OvO). Let's try this with SGDClassifier
sgd_clf.fit(X_train, y_train) # y_train not y_train_5
sgd_clf.predict([some_digit])
# Scores from the 10 binary classifiers under the hood
some_digit_scores = sgd_clf.decision_function([some_digit])
some_digit_scores
np.argmax(some_digit_scores)
sgd_clf.classes_
sgd_clf.classes_[5]
# +
# If you want to force Scikit-learn to use one-versus-one or one-versus-rest, you
# can use the OneVsOneClassifier or OneVsRestClassifier classes; pass a binary classifier to the constructor
from sklearn.multiclass import OneVsOneClassifier
ovo_clf = OneVsOneClassifier(SGDClassifier(random_state=42))
ovo_clf.fit(X_train, y_train)
ovo_clf.predict([some_digit])
# -
# There are 10 * (10 - 1) / 2 = 45 OvO classifiers,
# one for each pair of classes
len(ovo_clf.estimators_)
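As a quick sanity check on that pair count, the combinatorics can be verified directly (a small sketch; `n_classes` is just the 10 MNIST digits):

```python
from itertools import combinations

n_classes = 10
# one binary classifier per unordered pair of classes
pairs = list(combinations(range(n_classes), 2))
print(len(pairs))                        # 45
print(n_classes * (n_classes - 1) // 2)  # 45
```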
# Training a RandomForestClassifier
# RandomForestClassifiers can classify instances into multiple classes directly
forest_clf.fit(X_train, y_train)
forest_clf.predict([some_digit])
forest_clf.predict_proba([some_digit])
# Now we use cross-validation to evaluate sgd_clf
cross_val_score(sgd_clf, X_train, y_train, cv=3, scoring="accuracy")
# Simply scaling the input will increase the accuracy to above 90%
from sklearn.preprocessing import StandardScaler
scaler = StandardScaler()
print(scaler.fit(X_train))
print(scaler.mean_)
X_train_scaled = scaler.transform(X_train)
cross_val_score(sgd_clf, X_train_scaled, y_train, cv=3, scoring="accuracy")
# +
## 3.5 Error Analysis
# We need to find ways to improve the model:
# analyze the types of errors it makes
# First you can look at the confusion matrix
y_train_pred = cross_val_predict(sgd_clf, X_train_scaled, y_train, cv=3)
conf_mx = confusion_matrix(y_train, y_train_pred)
conf_mx
# -
plt.matshow(conf_mx, cmap=plt.cm.gray)
plt.show()
# The cell for 5s is a bit darker, which could mean there are fewer 5s in the dataset
# or that the classifier does not perform as well on 5s as on other digits
# Focus on errors, divide each value in the confusion matrix by the number of images in the corresponding class
# so you can compare error rates instead of absolute number of errors
row_sums = conf_mx.sum(axis=1, keepdims=True)
norm_conf_mx = conf_mx / row_sums
norm_conf_mx
np.fill_diagonal(norm_conf_mx, 0)
plt.matshow(norm_conf_mx, cmap=plt.cm.gray)
plt.show()
# +
# Columns 8 and 9 are quite bright, which means many images get misclassified as 8s or 9s
# Rows 8 and 9 are bright too, which means 8s and 9s are often confused with other digits
# You could try to gather more data for the 5 and 3 digits (they are easily confused with each other)
# or you could engineer more features to help the classifier, e.g. write an algorithm to count the number of
# closed loops
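As a rough sketch of that closed-loop feature idea: one possible approach (not the only one) is to binarize the image and count background regions that do not touch the border. The `count_closed_loops` helper below is hypothetical, not from the original text, and assumes `scipy` is available:

```python
import numpy as np
from scipy import ndimage

def count_closed_loops(img, threshold=0.5):
    """Count enclosed background regions ("holes") in a binarized digit image."""
    fg = img > threshold
    bg_labels, n_regions = ndimage.label(~fg)  # label connected background regions
    border = np.concatenate([bg_labels[0, :], bg_labels[-1, :],
                             bg_labels[:, 0], bg_labels[:, -1]])
    outside = {lab for lab in border.tolist() if lab > 0}  # regions touching the border
    return n_regions - len(outside)            # the rest are enclosed loops

# A 1-pixel-thick square ring has exactly one closed loop
ring = np.zeros((8, 8))
ring[1:7, 1:7] = 1.0
ring[2:6, 2:6] = 0.0
print(count_closed_loops(ring))  # 1
```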
# We can also investigate individual errors
def plot_digit(data):
image = data.reshape(28, 28)
plt.imshow(image, cmap = matplotlib.cm.binary,
interpolation="nearest")
plt.axis("off")
# EXTRA
def plot_digits(instances, images_per_row=10, **options):
size = 28
images_per_row = min(len(instances), images_per_row)
images = [instance.reshape(size,size) for instance in instances]
n_rows = (len(instances) - 1) // images_per_row + 1
row_images = []
n_empty = n_rows * images_per_row - len(instances)
images.append(np.zeros((size, size * n_empty)))
for row in range(n_rows):
rimages = images[row * images_per_row : (row + 1) * images_per_row]
row_images.append(np.concatenate(rimages, axis=1))
image = np.concatenate(row_images, axis=0)
plt.imshow(image, cmap = matplotlib.cm.binary, **options)
plt.axis("off")
cl_a, cl_b = 3, 5
X_aa = X_train[(y_train == cl_a) & (y_train_pred == cl_a)]
X_ab = X_train[(y_train == cl_a) & (y_train_pred == cl_b)]
X_ba = X_train[(y_train == cl_b) & (y_train_pred == cl_a)]
X_bb = X_train[(y_train == cl_b) & (y_train_pred == cl_b)]
plt.figure(figsize=(8,8))
plt.subplot(221); plot_digits(X_aa[:25], images_per_row=5)
plt.subplot(222); plot_digits(X_ab[:25], images_per_row=5)
plt.subplot(223); plot_digits(X_ba[:25], images_per_row=5)
plt.subplot(224); plot_digits(X_bb[:25], images_per_row=5)
plt.show()
# +
# One way to improve is to ensure the digits are all centered and not too rotated
# +
## 3.6 MultiLabel Classification
# Sometimes you may want your classifier to output multiple labels for an instance
# Say it is trained to recognize Alice, Bob, and Charlie; when a picture of Alice and Charlie pops up, the classifier
# should recognize both of them
# This kind of system is called a multilabel classification system
from sklearn.neighbors import KNeighborsClassifier
y_train_large = (y_train >= 7)
y_train_odd = (y_train % 2 == 1)
y_multilabel = np.c_[y_train_large, y_train_odd]
y_multilabel
# -
# KNeighborsClassifier supports multilabel classification, but not all classifiers do
knn_clf = KNeighborsClassifier()
knn_clf.fit(X_train, y_multilabel)
knn_clf.predict([some_digit])
# +
# Measuring a multilabel classifier:
# one approach is to compute the F1 score for each individual label, then average the scores
#y_train_knn_pred = cross_val_predict(knn_clf, X_train, y_multilabel, cv=3)
#f1_score(y_multilabel, y_train_knn_pred, average="macro")
# +
# but if you want to give more weight to the classifier's score on pictures of Alice,
# one option is to give each label a weight equal to its support (the number of instances with that target label):
# set average="weighted"
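To make the macro vs. weighted distinction concrete, here is a small hand-made multilabel example (the label arrays below are invented for illustration only):

```python
import numpy as np
from sklearn.metrics import f1_score

# 4 instances, 2 labels; label 0 has support 4, label 1 has support 2
y_true = np.array([[1, 0], [1, 0], [1, 1], [1, 1]])
y_pred = np.array([[1, 0], [1, 0], [1, 1], [0, 1]])

macro = f1_score(y_true, y_pred, average="macro")        # unweighted mean of per-label F1
weighted = f1_score(y_true, y_pred, average="weighted")  # weighted by label support
print(round(macro, 4), round(weighted, 4))  # 0.9286 0.9048
```

Here label 0 (the higher-support label) has the lower F1 score (6/7 vs. 1), so the weighted average comes out below the macro average.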
# +
## 3.7 Multioutput classification
# a generalization of multilabel classification, where each label can have multiple classes
# Start by creating the training and test sets: take the MNIST images and add noise to their pixels
# using NumPy's randint() function; the target images will be the original (clean) images
import numpy.random as rnd
noise = rnd.randint(0, 100, (len(X_train), 784))
X_train_mod = X_train + noise
noise = rnd.randint(0, 100, (len(X_test), 784))
X_test_mod = X_test + noise
y_train_mod = X_train
y_test_mod = X_test
# -
some_index = 5500
plt.subplot(121); plot_digit(X_test_mod[some_index])
plt.subplot(122); plot_digit(y_test_mod[some_index])
plt.show()
knn_clf.fit(X_train_mod, y_train_mod)
clean_digit = knn_clf.predict([X_test_mod[some_index]])
plot_digit(clean_digit)
ml/ageron_tutorial/classification/ageron_ch3.ipynb
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
import dxchange
import matplotlib.pyplot as plt
from xlearn.transform import train
from xlearn.transform import model
batch_size = 800
nb_epoch = 10
dim_img = 20
nb_filters = 32
nb_conv = 3
patch_step = 1
patch_size = (dim_img, dim_img)
img_x = dxchange.read_tiff('../../test/test_data/training_input.tiff')
img_y = dxchange.read_tiff('../../test/test_data/training_output.tiff')
plt.imshow(img_x, cmap='Greys_r')
plt.show()
plt.imshow(img_y, cmap='Greys_r')
plt.show()
mdl = train(img_x, img_y, patch_size, patch_step, dim_img, nb_filters, nb_conv, batch_size, nb_epoch)
mdl.save_weights('training_weights.h5')
doc/demo/transform_train.ipynb
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# ### You are not alone! There are many resources for `Jupyter Notebooks` and `Python`:
# ---
# #### `Project Jupyter`
#
# - `Project Jupyter` [Homepage](http://jupyter.org/)
# - `Project Jupyter` [Google group](https://groups.google.com/forum/#!forum/jupyter)
# - `Jupyter` [documentation](https://jupyter.readthedocs.io/en/latest/)
# - [GitHub](https://github.com/jupyter/help)
# - Free `Project Jupyter` tutorials:
# - [Readthedocs](https://jupyter-notebook-beginner-guide.readthedocs.io/en/latest/)
# - [YouTube](https://www.youtube.com/watch?v=Rc4JQWowG5I)
# #### `Python`
#
# - [`Python`](https://www.python.org/)
# - `Python` [documentation](https://docs.python.org/3/)
# - `Python` [Google group](https://groups.google.com/forum/#!forum/comp.lang.python) - Note, there are many!
# - [Stack Overflow](http://stackoverflow.com/questions/tagged/python)
# - `Python` [Help](https://www.python.org/about/help/)
# - Free `Python` tutorials:
# - [Google's Python tutorial](https://developers.google.com/edu/python/)
# - [Data Camp](https://www.datacamp.com/)
# - [Berkeley Institute for Data Science Python Boot Camp](https://www.youtube.com/playlist?list=PLKW2Azk23ZtSeBcvJi0JnL7PapedOvwz9)
notebooks/Resources.ipynb
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # How to use this website and prepare for each week
# Before going further, make sure that you have read the [Getting Started](./getting_started.ipynb) instructions.
# ## Organizing your notebooks
# The notebooks are central to this lecture. We won't use anything other than notebooks to write code (+ text!), and you will have to submit notebooks as your final project report.
# With the exception of Week 01 (for which I recommend using MyBinder), **all notebooks should be downloaded and run on your machine**. You will also be asked to download data files ([instructions to be added]()). I recommend organizing your code and data so that you have easy access to both. For example, my notebooks are organized in weekly folders, and alongside my weekly folders there is a `data` folder where my data is stored. This way, all my paths in notebooks look like:
#
# ```
# ds = xr.open_dataset(r'../data/CERES_EBAF-TOA_Ed4.1_Clim-2005-2015.nc')
# ```
#
# The `../` is to navigate to the parent folder, and from there I can find the data file.
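If you want to check where such a relative path actually lands, the standard library can resolve it (a small illustration; the folder names are just this example's layout, not part of the course material):

```python
import os

# pretend the notebook lives in a weekly folder next to `data`
notebook_dir = 'week_01'
data_path = os.path.join(notebook_dir, '..', 'data', 'CERES_EBAF-TOA_Ed4.1_Clim-2005-2015.nc')
print(os.path.normpath(data_path))  # the '..' collapses to the sibling data folder
```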
# ## Preparing for each week
# Each week is organized around two notebooks:
# - a "lesson", where you will learn new tools and which is mostly "click through" code that I wrote (and that you'll re-use later) with some understanding questions in between.
# - an "assignment", where you'll apply the tools you've learned before.
# ```{note}
# The **main objective of the practicals is to learn about the climate system from the plots and analyses**. Learning to use the notebooks and xarray is important as well, but only secondary!
# ```
# ## Tips for the presenting group
# Each week, a volunteer group will present their results to the rest of the class. This part is not graded and will have no negative effect whatsoever on the rest of the class. The main objective of the group presentations is for me to focus on what is *really* creating problems for students (as opposed to me *assuming* what will).
#
# **Here are a few suggestions to prepare your contributions**:
# - Make sure that you have managed to do parts of the requested analyses. It doesn't have to be all of them or be 100% perfect (nothing is!), but if you have substantial problems with one exercise please contact me beforehand.
# - In your presentations, focus on the "assignment" notebook. If one of the points or questions in the "lesson" notebook was unclear for you, feel free to discuss as well!
# - Focus on the main plots / analyses and what you didn't understand well: there are no stupid questions or comments! If you don't understand something, feel free to address this in your presentation: it is very likely that all of us have the same issues.
book/howto.ipynb
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # Matplotlib 3D Plot
#
# Are you tired of the same old 2D plots? Do you want to take your plots to the next level? Well, look no further: it's time to learn how to make 3D plots in matplotlib.
#
# In addition to `import matplotlib.pyplot as plt` and calling `plt.show()`, to create a 3D plot in matplotlib, you need to:
# 1. Import the `Axes3D` object
# 2. Initialize your `Figure` and `Axes3D` objects
# 3. Get some 3D data
# 4. Plot it using `Axes` notation and standard function calls
#
# +
# Standard import
import matplotlib.pyplot as plt
# Import 3D Axes
from mpl_toolkits.mplot3d import axes3d
# Set up Figure and 3D Axes
fig = plt.figure()
ax = fig.add_subplot(111, projection='3d')
# Get some 3D data
X = [0, 1, 2, 3, 4, 5, 6, 7, 8, 9]
Y = [2, 5, 8, 2, 10, 1, 10, 5, 7, 8]
Z = [6, 3, 9, 6, 3, 2, 3, 10, 2, 4]
# Plot using Axes notation and standard function calls
ax.plot(X, Y, Z)
plt.show()
# -
# Awesome! You've just created your first 3D plot! Don't worry if that was a bit fast, let's dive into a more detailed example.
#
# ## Matplotlib 3D Plot Example
#
# If you are used to plotting with `Figure` and `Axes` notation, making 3D plots in matplotlib is almost identical to creating 2D ones. If you are not comfortable with `Figure` and `Axes` plotting notation, check out [this](https://blog.finxter.com/matplotlib-subplots/#Matplotlib_Figures_and_Axes) article to help you.
#
# Besides the standard `import matplotlib.pyplot as plt`, you must also run `from mpl_toolkits.mplot3d import axes3d`. This imports a 3D `Axes` object on which you can plot 3D data and with respect to which you will make all your plot calls.
#
# You set up your `Figure` in the standard way
#
# ```python
# fig = plt.figure()
# ```
#
# And add a subplot to that figure using the standard `fig.add_subplot()` method. If you just want a single `Axes`, pass `111` to indicate 1 row, 1 column, and that you are selecting the 1st one. Then you need to pass `projection='3d'`, which tells matplotlib it is a 3D plot.
#
# From now on everything is (almost) the same as 2D plotting. All the functions you know and love such as `ax.plot()` and `ax.scatter()` accept the same keyword arguments but they now also accept three positional arguments - `X`,`Y` and `Z`.
#
# In some ways 3D plots are more natural for us to work with since we live in a 3D world. On the other hand, they are more complicated since we are so used to 2D plots. One amazing feature of Jupyter Notebooks is the magic command `%matplotlib notebook` which, if run at the top of your notebook, draws all your plots in an interactive window. You can change the orientation by clicking and dragging (right-click and drag to zoom in), which can really help you understand your data.
#
# As this is a static blog post, all of my plots will be static but I encourage you to play around in your own Jupyter or IPython environment.
#
# ## Matplotlib 3D Plot Line Plot
#
# Here's an example of the power of 3D line plots utilizing all the info above.
# +
# Standard imports
import matplotlib.pyplot as plt
import numpy as np
# Import 3D Axes
from mpl_toolkits.mplot3d import axes3d
# Set up Figure and 3D Axes
fig = plt.figure()
ax = fig.add_subplot(111, projection='3d')
# Create space of numbers for cos and sin to be applied to
theta = np.linspace(-12, 12, 200)
x = np.sin(theta)
y = np.cos(theta)
# Create z space the same size as theta
z = np.linspace(-2, 2, 200)
ax.plot(x, y, z)
plt.show()
# -
# To avoid repetition, I won't explain the points I have already made above about imports and setting up the `Figure` and `Axes` objects.
#
# I created the variable `theta` using [`np.linspace`](https://blog.finxter.com/np-linspace/) which returns an array of 200 numbers between -12 and 12 that are equally spaced out i.e. there is a linear distance between them all. I passed this to `np.sin()` and `np.cos()` and saved them in variables `x` and `y`.
#
# If you just plotted `x` and `y` now, you would get a circle. To get some up/down movement, you need to modify the z-axis. So, I used `np.linspace` again to create a list of 200 numbers equally spaced out between -2 and 2 which can be seen by looking at the z-axis (the vertical one).
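A quick way to convince yourself that `x` and `y` alone trace a circle: every (sin θ, cos θ) pair lies on the unit circle, since sin²θ + cos²θ = 1.

```python
import numpy as np

theta = np.linspace(-12, 12, 200)
x, y = np.sin(theta), np.cos(theta)
# sin^2 + cos^2 = 1 for every point, so the 2D projection is the unit circle
print(np.allclose(x**2 + y**2, 1.0))  # True
```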
#
# Note: if you choose a smaller number of values for `np.linspace` the plot is not as smooth.
# <div>
# <img src='figures/spring.png' align='left' width=400 />
# </div>
# For this plot, I set the third argument of `np.linspace` to 25 instead of 200. Clearly, this plot is much less smooth than the original and hopefully gives you an understanding of what is happening under the hood with these plots. 3D plots can seem daunting at first so my best advice is to go through the code line by line.
#
# ## Matplotlib 3D Plot Scatter
#
# Creating a scatter plot is exactly the same as making a line plot but you call `ax.scatter` instead.
#
# Here's a cool plot that I adapted from [this](https://www.youtube.com/watch?v=wJQIGXSq504) video. If you sample a normal distribution and create a 3D plot from it, you get a ball of points with the majority focused around the center and less and less the further from the center you go.
# +
import random
random.seed(1)
# Create 3 samples from normal distribution with mean and standard deviation of 1
x = [random.normalvariate(1, 1) for _ in range(400)]
y = [random.normalvariate(1, 1) for _ in range(400)]
z = [random.normalvariate(1, 1) for _ in range(400)]
# Set up Figure and Axes
fig = plt.figure()
ax = fig.add_subplot(111, projection='3d')
# Plot
ax.scatter(x, y, z)
plt.show()
# -
# First, I imported the [python random module](https://blog.finxter.com/python-random-module/) and set the seed so that you can reproduce my results. Next, I used three list comprehensions to create 3 x 400 samples of a normal distribution using the `random.normalvariate()` function. Then I set up the `Figure` and `Axes` as normal and made my plot by calling `ax.scatter()`.
fig = plt.figure()
ax = fig.add_subplot(111, projection='3d')
ax.scatter(X, Y, Z)
plt.show()
# In this example, I plotted the same `X`, `Y` and `Z` lists as in the very first example. I want to highlight to you that some of the points are darker and some are more transparent - this indicates depth. The ones that are darker in color are in the foreground and those further back are more see-through.
#
# If you plot this in IPython or an interactive Jupyter Notebook window and you rotate the plot, you will see that the transparency of each point changes as you rotate.
#
# ## Matplotlib 3D Plot Rotate
#
# The easiest way to rotate 3D plots is to have them appear in an interactive window by using the Jupyter magic command `%matplotlib notebook` or using IPython (which always displays plots in interactive windows). This lets you manually rotate them by clicking and dragging. If you right-click and move the mouse, you will zoom in and out of the plot. To save a static version of the plot, click the save icon.
#
# It is possible to rotate plots and even create animations via code but that is out of the scope of this article.
#
# ## Matplotlib 3D Plot Axis Labels
#
# Setting axis labels for 3D plots is identical for 2D plots except now there is a third axis - the z-axis - you can label.
#
# You have 2 options:
# 1. Use the `ax.set_xlabel()`, `ax.set_ylabel()` and `ax.set_zlabel()` methods, or
# 2. Use the `ax.set()` method and pass it the keyword arguments `xlabel`, `ylabel` and `zlabel`.
#
# Here is an example using the first method.
# +
fig = plt.figure()
ax = fig.add_subplot(111, projection='3d')
ax.scatter(X, Y, Z)
# Method 1
ax.set_xlabel('X axis')
ax.set_ylabel('Y axis')
ax.set_zlabel('Z axis')
plt.show()
# -
# Now each axis is labeled as expected.
#
# You may notice that the axis labels are not particularly visible using the default settings. You can solve this by manually increasing the size of the `Figure` with the `figsize` argument in your `plt.figure()` call.
#
# One thing I don't like about method 1 is that it takes up 3 lines of code and they are boring to type. So, I much prefer method 2.
# +
# Set Figure to be 8 inches wide and 6 inches tall
fig = plt.figure(figsize=(8, 6))
ax = fig.add_subplot(111, projection='3d')
ax.scatter(X, Y, Z)
# Method 2 - set all labels in one line of code!
ax.set(xlabel='X axis', ylabel='Y axis', zlabel='Z axis')
plt.show()
# -
# Much better! Firstly, because you increased the size of the `Figure`, all the axis labels are clearly visible. Plus, it only took you one line of code to label them all. In general, if you ever use a `ax.set_<something>()` method in matplotlib, it can be written as `ax.set(<something>=)` instead. This saves you space and is nicer to type, especially if you want to make numerous modifications to the graph such as also adding a title.
#
# ## Matplotlib 3D Plot Legend
#
# You add legends to 3D plots in the exact same way you add legends to any other plots. Use the `label` keyword argument and then call `ax.legend()` at the end.
# +
import random
random.seed(1)
fig = plt.figure()
ax = fig.add_subplot(111, projection='3d')
# Plot and label original data
ax.scatter(X, Y, Z, label='First Plot')
# Randomly re-order the data
for data in [X, Y, Z]:
random.shuffle(data)
# Plot and label re-ordered data
ax.scatter(X, Y, Z, label='Second Plot')
ax.legend(loc='upper left')
plt.show()
# -
# In this example, I first set the random seed to 1 so that you can reproduce the same results as me. I set up the `Figure` and `Axes` as expected, made my first 3D plot using `X`, `Y` and `Z` and labeled it with the `label` keyword argument and an appropriate string.
#
# To save myself from manually creating a brand new dataset, I thought it would be a good idea to reuse the data I already had. So, I applied the `random.shuffle()` function to each of `X`, `Y` and `Z`, which mixes the values of the lists in place. Calling `ax.scatter()` the second time therefore plotted the same numbers in a different order, producing a different looking plot. Finally, I labeled the second plot and called `ax.legend(loc='upper left')` to display a legend in the upper left corner of the plot.
#
# All the usual things you can do with legends are still possible for 3D plots. If you want to learn more than these basic steps, check out my [comprehensive guide to legends in matplotlib](https://blog.finxter.com/matplotlib-legend/).
#
# Note: If you run the above code again, you will get a different looking plot. This is because you will start with the shuffled `X`, `Y` and `Z` lists rather than the originals you created further up in the post.
#
# ## Matplotlib 3D Plot Background Color
#
# There are two backgrounds you can modify in matplotlib - the `Figure` and the `Axes` background. Both can be set using either the `.set_facecolor('color')` or the `.set(facecolor='color')` methods. Hopefully, you know by now that I much prefer the second method over the first!
#
# Here's an example where I set the `Figure` background color to green and the `Axes` background color to red.
# +
fig = plt.figure(figsize=(8, 6))
ax = fig.add_subplot(111, projection='3d')
ax.plot(X, Y, Z)
# Axes color is red
ax.set(facecolor='r')
# Figure color is green
fig.set(facecolor='g')
plt.show()
# -
# The first three lines are the same as a simple line plot. Then I called `ax.set(facecolor='r')` to set the `Axes` color to red and `fig.set(facecolor='g')` to set the `Figure` color to green.
#
# In an example with one `Axes`, it looks a bit odd to set the `Figure` and `Axes` colors separately. If you have more than one `Axes` object, it looks much better.
# +
# Set up Figure and Axes in one function call
fig, axes = plt.subplots(nrows=2, ncols=2, figsize=(8, 6),
subplot_kw=dict(projection='3d'))
colors = ['r', 'g', 'y', 'b']
# iterate over colors and all Axes objects
for c, ax in zip(colors, axes.flat):
ax.plot(X, Y, Z)
# Set Axes color
ax.set(facecolor=c)
# Set Figure color
fig.set(facecolor='pink')
plt.show()
# -
# In this example, I used `plt.subplots()` to set up an 8x6 inch `Figure` containing four 3D `Axes` objects in a 2x2 grid. The `subplot_kw` argument accepts a dictionary of values and these are passed to `add_subplot` to make each `Axes` object. For more info on using `plt.subplots()` check out [my article](https://blog.finxter.com/matplotlib-subplots/).
#
# Then I created the list `colors` containing 4 matplotlib color strings. After that, I used a for loop to iterate over `colors` and `axes.flat`. In order to iterate over `colors` and `axes` together, they need to be the same shape. There are several ways to do this but using the `.flat` attribute works well in this case.
#
# Finally, I made the same plot on each `Axes` and set the facecolors. It is clear now why setting a `Figure` color can be more useful if you create subplots - there is more space for the color to shine through.
#
# ## Conclusion
#
# That's it, you now know the basics of creating 3D plots in matplotlib!
#
# You've learned the necessary imports you need and also how to set up your `Figure` and `Axes` objects to be 3D. You've looked at examples of line and scatter plots. Plus, you can modify these by rotating them, adding axis labels, adding legends and changing the background color.
#
# There is still more to be learned about 3D plots such as surface plots, wireframe plots, animating them and changing the aspect ratio but I'll leave those for another article.
#
# ## Where To Go From Here?
#
# Do you wish you could be a programmer full-time but don’t know how to start?
#
# Check out the pure value-packed webinar where Chris – creator of Finxter.com – teaches you to become a Python freelancer in 60 days or your money back!
#
# https://tinyurl.com/become-a-python-freelancer
#
# It doesn’t matter if you’re a Python novice or Python pro. If you are not making six figures/year with Python right now, you will learn something from this webinar.
#
# These are proven, no-BS methods that get you results fast.
#
# This webinar won’t be online forever. Click the link below before the seats fill up and learn how to become a Python freelancer, guaranteed.
#
# https://tinyurl.com/become-a-python-freelancer
3dplot/3dplot.ipynb
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# + [markdown] id="PbQo5QPSWTLU"
# The notebook below shows how to use the free GPU on Google Colab to compute predictions from a BERT model. Before running the notebook, make sure the GPU is enabled in Colab by going to `Runtime`, `Change runtime type`, selecting `GPU` and clicking the `Save` button.
# + [markdown] id="2cW5QmuhUMxZ"
# # Parameters
# + [markdown] id="vo-umXqyVDy6"
# First we specify the details of the BERT model we want to use (there are several BERT variants, which differ in architectural complexity). The bigger the BERT, the more computing power (and time) is needed to calculate predictions.
# Details can be found in the [BERT repo on GitHub](https://github.com/google-research/bert). Right-clicking a chosen model lets you copy a link which contains the model's date (`date`) and name (`name`).
# + id="NxeY_4rBWD9w"
bert_model_name = 'uncased_L-12_H-128_A-2'
bert_model_date = '2018_10_18'
bert_model_path = f'https://storage.googleapis.com/bert_models/{bert_model_date}/{bert_model_name}.zip'
uncased = True  # the 'uncased_*' models expect lowercased input, so enable lowercasing
# + [markdown] id="UKAhfA-cVwc2"
# Next, define the paths that will be used in the notebook:
# - the path to the model parameters (model data),
# - the path to the input data (input),
# - the path where prediction results will be saved (output)
#
# The path `./drive/My Drive/` points to your Google Drive, where you can create the appropriate folders for storing input and output data.
# + id="cKjR19LvUwj8"
model_dir = f'./{bert_model_name}'
data_dir = './drive/My Drive/Colab Notebooks/input'
train_dir = f'{data_dir}/train_fake.csv'
test_dir = f'{data_dir}/test_fake.csv'
out_dir = './drive/My Drive/Colab Notebooks/output'
# + [markdown] id="4OM5OIHTTysB"
# # Setup
# + [markdown] id="mU2klIOaT7ha"
# ## Mounting Google Drive
# + [markdown] id="2l2sEh2RX9j-"
# To access the folders inside it, you need to mount your Google Drive. After running this cell you will see a link that lets you authorize access.
# + colab={"base_uri": "https://localhost:8080/"} id="f9htjq7LWQeo" outputId="fe4e2cc9-8c03-4197-c475-aa2106d0830c"
from google.colab import drive
drive.mount('/content/drive')
# + [markdown] id="xAFXdMQVYMXN"
# ## Downloading the model parameter files
# + colab={"base_uri": "https://localhost:8080/"} id="kjBSyiQlWMNn" outputId="50c4f88f-4bc5-4c0b-c28a-517e38d98d3f"
# !wget {bert_model_path}
# unzip the downloaded archive
# !unzip {bert_model_name}.zip
# + [markdown] id="J6qugUpOYnx_"
# ## Installing the required Python libraries
# + colab={"base_uri": "https://localhost:8080/"} id="o_WqPYNaXBYv" outputId="a995d5a5-2ebe-4968-e95c-419050882ba3"
# !pip install keras_bert
# !pip install transformers
# + [markdown] id="JmaKZU00WD9i"
# # Importing libraries
# + id="DEmx5yddWD9j"
import numpy as np
import pandas as pd
# + id="QAMw1OeNZFFw"
from keras_bert import load_trained_model_from_checkpoint
from transformers import BertTokenizer
# + [markdown] id="fID5q14RWD9s"
# # Helper functions
# + id="B9ITU0kiWD9s"
# helper to display a full dataframe without truncation
def display_all(df, num_rows=10000, num_cols=10000, col_width=None):
    with pd.option_context('display.max_rows', num_rows, 'display.max_columns', num_cols, 'display.max_colwidth', col_width):
        display(df)
# initialize the tokenizer and load the BERT model
def init_tokenizer_and_load_bert_model(model_dir, model_name, model_trainable=True, lowercase=True):
vocab_path = f'{model_dir}/vocab.txt'
config_path = f'{model_dir}/bert_config.json'
checkpoint_path = f'{model_dir}/bert_model.ckpt'
tokenizer = BertTokenizer(vocab_path, do_lower_case=lowercase)
model = load_trained_model_from_checkpoint(config_path, checkpoint_path, trainable=model_trainable)
print('vocab_size:', len(tokenizer.vocab))
print('loaded model: ', model_name)
return tokenizer, model
# compute the input_ids, token_type_ids and attention_mask vectors required by BERT (tokenization)
# the function returns the vectors as a dictionary
def get_bert_vectors(tokenizer, df, col_name, vector_length=512):
tokenize = lambda sentence: tokenizer.encode_plus(sentence, max_length=vector_length, padding='max_length', truncation=True)
df[f'{col_name}_tokens'] = df[col_name].map(tokenize)
df[f'{col_name}_input_ids'] = df[f'{col_name}_tokens'].map(lambda x: x['input_ids'])
df[f'{col_name}_token_type_ids'] = df[f'{col_name}_tokens'].map(lambda x: x['token_type_ids'])
df[f'{col_name}_attention_mask'] = df[f'{col_name}_tokens'].map(lambda x: x['attention_mask'])
input_ids = np.stack(df[f'{col_name}_input_ids'])
token_type_ids = np.stack(df[f'{col_name}_token_type_ids'])
attention_mask = np.stack(df[f'{col_name}_attention_mask'])
vectors = {'input_ids': input_ids, 'token_type_ids': token_type_ids, 'attention_mask': attention_mask}
return vectors
# compute the BERT classification predictions in batches so as not to exhaust RAM
def bert_predict_in_batches(vectors, num_batches, output_shape):
vector_input_ids_batches = np.array_split(vectors['input_ids'], num_batches)
vector_token_type_ids_batches = np.array_split(vectors['token_type_ids'], num_batches)
vector_attention_mask_batches = np.array_split(vectors['attention_mask'], num_batches)
X = np.array([]).reshape((0, output_shape))
input_vectors = zip(vector_input_ids_batches, vector_token_type_ids_batches, vector_attention_mask_batches)
for input_ids, token_type_ids, attention_mask in input_vectors:
all_vectors = (input_ids, token_type_ids, attention_mask)
predictions = bert_model.predict(all_vectors, verbose=1)
X_batch = predictions[:, 0, :]
print('current predictions shape: ', X_batch.shape)
X = np.concatenate([X, X_batch])
print('all predictions shape: ', X.shape)
return X
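The batching logic above can be checked without a real BERT by substituting a dummy predict function that follows the same output shape convention; `fake_predict` below is purely illustrative and not part of the original notebook:

```python
import numpy as np

def fake_predict(batch_ids, seq_len=4, hidden=8):
    # stands in for bert_model.predict: returns (batch, seq_len, hidden)
    return np.zeros((len(batch_ids), seq_len, hidden))

input_ids = np.arange(10 * 4).reshape(10, 4)
batches = np.array_split(input_ids, 3)  # uneven splits of size 4, 3, 3
X = np.concatenate([fake_predict(b)[:, 0, :] for b in batches])  # keep the position-0 vector
print(X.shape)  # (10, 8)
```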
# + [markdown] id="ircroN0wWD91"
# # Loading the training and test data
# + colab={"base_uri": "https://localhost:8080/"} id="QBfpIHf6WD92" outputId="80d6f22b-2b93-4bc8-f86d-8d6545267c1e"
train_fake = pd.read_csv(train_dir)
train_fake['is_fake'] = train_fake['is_fake'].astype('int8')
test_fake = pd.read_csv(test_dir)
train_fake.shape, test_fake.shape
# + [markdown] id="xvKBcS0_C8Lk"
# Some values are missing (NaN), so we replace them with the string 'unknown' so that predictions can be computed
# + id="NOUbrk1T7lTv"
train_fake.fillna('unknown', inplace=True)
test_fake.fillna('unknown', inplace=True)
# + [markdown] id="oeVDToVpWD95"
# # Model BERT
# + colab={"base_uri": "https://localhost:8080/"} id="1EPmFeS4WD95" outputId="caf7c3e1-327b-4090-d147-8682d90b99bc"
tokenizer, bert_model = init_tokenizer_and_load_bert_model(model_dir, bert_model_name, model_trainable=True, lowercase=uncased)
# + [markdown] id="dMh8y0DMWD98"
# ## Tokenization
# + colab={"base_uri": "https://localhost:8080/"} id="NtPy55ueL3Xf" outputId="29e3cf99-458e-483c-e9b3-78cae8789af5"
# %%time
train_title_vectors = get_bert_vectors(tokenizer, train_fake, 'title', vector_length=512)
[vec.shape for vec in train_title_vectors.values()]
# + colab={"base_uri": "https://localhost:8080/"} id="0nS_5uGmWD-F" outputId="5e1c8956-feb4-4e22-af71-00f98d96f709"
# %%time
train_text_vectors = get_bert_vectors(tokenizer, train_fake, 'text', vector_length=512)
[vec.shape for vec in train_text_vectors.values()]
# + colab={"base_uri": "https://localhost:8080/"} id="sTEThvwh-5Ps" outputId="ec457cde-b1fc-457f-86c5-80849f5951b6"
# %%time
test_title_vectors = get_bert_vectors(tokenizer, test_fake, 'title', vector_length=512)
[vec.shape for vec in test_title_vectors.values()]
# + colab={"base_uri": "https://localhost:8080/"} id="G_q1RirB-4-4" outputId="67c341f9-a383-49c7-b99c-d5ae8845ca9d"
# %%time
test_text_vectors = get_bert_vectors(tokenizer, test_fake, 'text', vector_length=512)
[vec.shape for vec in test_text_vectors.values()]
# + [markdown] id="sEwRH4bQWD-N"
# ## Computing the predictions and saving the results
# -
bert_output_shape = bert_model.layers[-1].output_shape[2]
# + id="e4WauoSt-RNO"
train_X_title = bert_predict_in_batches(train_title_vectors, 5, bert_output_shape)
np.save(f'{out_dir}/train_X_title_{bert_model_name}.npy', train_X_title)
# + id="HLrb6UkD-nQn"
train_X_text = bert_predict_in_batches(train_text_vectors, 15, bert_output_shape)
np.save(f'{out_dir}/train_X_text_{bert_model_name}.npy', train_X_text)
# + id="vAkVrODw_I1-"
test_X_title = bert_predict_in_batches(test_title_vectors, 5, bert_output_shape)
np.save(f'{out_dir}/test_X_title_{bert_model_name}.npy', test_X_title)
# + id="E8Y2A0Sq_Ihv"
test_X_text = bert_predict_in_batches(test_text_vectors, 15, bert_output_shape)
np.save(f'{out_dir}/test_X_text_{bert_model_name}.npy', test_X_text)
|
Jupyter Notebooks/Notebook for BERT predicts in Colab.ipynb
|
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# name: python3
# ---
# + [markdown] id="view-in-github" colab_type="text"
# <a href="https://colab.research.google.com/github/Meghababu1999/sserd/blob/main/Untitled2.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a>
# + colab={"base_uri": "https://localhost:8080/", "height": 334} id="pziQxHNNxgnu" outputId="08d004a0-a4c4-4f45-e871-a86b7a0cae74"
import pandas as pd
import numpy as np
import math as m
G={'Name of pulsar':['J0205+6449','J0218+4232','J0437-4715','J0534+2200','J1105-6107','J1124-5916','J1617-5055','J1930+1852','J2124-3358','J2229+6114'],'po':[0.06571592849324,0.00232309053151224,0.005757451936712637,0.0333924123,0.0632021309179,0.13547685441,0.069356847,0.136855046957,0.00493111494309662,0.05162357393],'pdot':[1.93754256e-13,7.73955e-20,5.729215e-20,4.20972e-13,1.584462e-14,7.52566e-13,1.351e-13,7.5057e-13,2.05705e-20,7.827e-14],'D in Kpc':[3.200,3.150,0.157,2.000,2.360, 5.000, 4.743, 7.000, 0.410, 3.000],'Age':[5.37e+03,4.76e+08,1.59e+09,1.26e+03,6.32e+04,2.85e+03, 8.13e+03, 2.89e+03, 3.8e+09, 1.05e+04],'B_s':[3.61e+12,4.29e+08,4.29e+08,3.79e+12,1.01e+12,1.02e+13,3.1e+12,1.03e+13,3.22e+08,2.03e+12],
'Edot':[2.7e+37,2.4e+35,1.2e+34,4.5e+38,2.5e+36,1.2e+37,1.6e+37,1.2e+37,6.8e+33,2.2e+37],
'Edot2':[2.6e+36,2.5e+34,4.8e+35,1.1e+38,4.4e+35,4.8e+35,7.1e+35,2.4e+35,2.4e+35,2.5e+36],
'B_Lc':[1.19e+05,3.21e+05,2.85e+04,9.55e+05,3.76e+04,3.85e+04, 8.70e+04, 3.75e+04, 2.52e+04, 1.39e+05]}
l=pd.DataFrame(G)
l.to_csv('High_energy_pulsar.csv')
l
# + colab={"base_uri": "https://localhost:8080/", "height": 334} id="sFFol9gGxl3y" outputId="5c00ffe0-40b0-4a29-daa6-d7d745b579c9"
R={'Name of pulsar':['J0100-7211','J0525-6607','J1708-4008','J1808-2024','J1809-1943','J1841-0456','J1907+0919','J2301+5852','J1745-2900','J0525-6607'],'po':[8.020392, 0.35443759451370,11.0062624,7.55592,5.540742829,11.7889784, 5.198346,6.9790709703,3.763733080,8.0470],'pdot':[1.88e-11, 7.36052e-17,1.960e-11,5.49e-10,2.8281e-12,4.092e-11,9.2e-11,4.7123e-13,1.756e-11,6.5e-11],'D in Kpc':[59.700, 1.841,3.800,13.000,3.600,9.600,np.nan,3.300,8.300,np.nan],'Age':[6.76e+03,7.63e+07,8.9e+03,218,3.1e+04,4.57e+03,895,2.35e+05,3.4e+03,1.96e+03],'B_s':[3.93e+14,1.63e+11,4.7e+14,2.06e+15,1.27e+14,7.03e+14,7e+14,5.8e+13,2.6e+14,7.32e+14],'Edot':[1.4e+33,6.5e+31,5.8e+32,5.0e+34,6.6e+32,9.9e+32,2.6e+34,5.5e+31,1.3e+34,4.9e+33],'Edot2':[4.0e+29,1.9e+31,4.0e+31,3.0e+32,5.1e+31,1.1e+31,np.nan,5.0e+30,1.9e+32,np.nan],'B_Lc':[7.14e+00,3.44e+01,3.30e+00,4.48e+01,6.98e+00,4.02e+00,4.67e+01,1.60e+00,4.57e+01,1.32e+01]}  # np.nan (not the string 'NaN') keeps the columns numeric
c=pd.DataFrame(R)
c.to_csv('magnetar_pulsar.csv')
c
# + colab={"base_uri": "https://localhost:8080/", "height": 334} id="Ymzk3k9sxncc" outputId="0a040331-20b8-4262-e1db-86b128f9c2d3"
A={'Name of pulsar':['J0537-6910','J0633+1746','J0543+2329','J1811-1925','J1846-0258','J0628+0909','J0633+1746','J0636-4549','J1811-4930','J1812-1718'],'po':[0.0161222220245,0.2370994416923,0.245983683333,0.06466700,0.32657128834,3.763733080,0.2370994416923,1.98459736713,1.4327041968,1.20537444137],'pdot':[5.1784338e-14,1.097087e-14,1.541956e-14,4.40e-14,7.107450e-12,0.5479e-15,1.097087e-14,3.1722e-15,2.254e-15,1.9077e-14],'D in Kpc':[49.700,0.190,1.565,5.000,5.800,1.771,0.190,0.383,1.447,3.678],'Age':[4.93e+03,3.42e+05,2.53e+05,2.33e+04,728,3.59e+07,3.42e+05,9.91e+06,1.01e+07,1e+06],'B_s':[9.25e+11,1.63e+12,1.97e+12,1.71e+12,4.88e+13,8.35e+11,1.63e+12,2.54e+12,1.82e+12,4.85e+12],'Edot':[4.9e+38,3.2e+34,4.1e+34,6.4e+36,8.1e+36,1.1e+31,3.2e+34,1.6e+31,3.0e+31,4.3e+32],'Edot2':[2.0e+35,9.0e+35,1.7e+34,2.6e+35,2.4e+35,3.6e+30,9.0e+35,1.1e+32,1.4e+31,3.2e+31],'B_Lc':[2.07e+06,1.15e+03,1.24e+03,5.92e+04,1.31e+04,4.09e+00,1.15e+03,3.05e+00,5.80e+00,2.60e+01]}
p=pd.DataFrame(A)
p.to_csv('Non_Radio_pulsar.csv')
p
# + colab={"base_uri": "https://localhost:8080/", "height": 910} id="o5ylL1rSxqDj" outputId="9a8da415-5fe7-4fd5-ed37-c2ff089bf6ee"
o=pd.concat([l,p,c],ignore_index=True)
o.to_csv('Combined_data.csv')
o
# + id="H8Uk5Drqxsd4"
age_comb = o['Age'] # characteristic age (yr) # comb - combined data
dist_comb = o['D in Kpc'] # distance in kpc
p_0_comb = o['po'] # period of rotation (s)
pdot_comb = o['pdot'] # time derivative of period
# radio luminosity at 400 MHz (mJy kpc**2)
b_s_comb = o['B_s'] # surface dipole magnetic field (Gauss)
e_dot_comb = o['Edot'] # spin down energy loss rate (erg s**-1)
e_dot2_comb = o['Edot2'] # energy flux at sun (ergs s**-1 kpc**-2)
# surface magnetic dipole from P_1_i (period derivative corrected for the Shklovskii effect) (Gauss)
b_lc_comb = o['B_Lc'] # Magnetic field at light cylinder (Gauss)
#Radio high energy pulsars
age_r = l['Age'] # characteristic age (yr) # r - radio
dist_r = l['D in Kpc'] # distance in kpc
p_0_r = l['po'] # period of rotation (s)
pdot_r = l['pdot'] # time derivative of period # radio luminosity at 400 MHz (mJy kpc**2)
b_s_r = l['B_s'] # surface dipole magnetic field (Gauss)
e_dot_r = l['Edot'] # spin down energy loss rate (erg s**-1)
e_dot2_r = l['Edot2'] # energy flux at sun (ergs s**-1 kpc**-2)
# surface magnetic dipole from P_1_i (period derivative corrected for the Shklovskii effect) (Gauss)
b_lc_r = l['B_Lc'] # Magnetic field at light cylinder (Gauss)
# Non radio Pulsars
age_nr = p['Age'] # characteristic age (yr) # nr - non radio
dist_nr = p['D in Kpc'] # distance in kpc
p_0_nr = p['po'] # period of rotation (s)
pdot_nr = p['pdot'] # time derivative of period
# radio luminosity at 400 MHz (mJy kpc**2)
b_s_nr = p['B_s'] # surface dipole magnetic field (Gauss)
e_dot_nr = p['Edot'] # spin down energy loss rate (erg s**-1)
e_dot2_nr = p['Edot2'] # energy flux at sun (ergs s**-1 kpc**-2)
# surface magnetic dipole from P_1_i (period derivative corrected for the Shklovskii effect) (Gauss)
b_lc_nr = p['B_Lc'] # Magnetic field at light cylinder (Gauss)
# Magnetars
age_m = c['Age'] # characteristic age (yr) # m - magnetars
dist_m = c['D in Kpc'] # distance in kpc
p_0_m = c['po'] # period of rotation (s)
pdot_m = c['pdot'] # time derivative of period
# radio luminosity at 400 MHz (mJy kpc**2)
b_s_m = c['B_s'] # surface dipole magnetic field (Gauss)
e_dot_m = c['Edot'] # spin down energy loss rate (erg s**-1)
e_dot2_m = c['Edot2'] # energy flux at sun (ergs s**-1 kpc**-2)
# surface magnetic dipole from P_1_i (period derivative corrected for the Shklovskii effect) (Gauss)
b_lc_m = c['B_Lc'] # Magnetic field at light cylinder (Gauss)
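The moment-of-inertia plots below rely on the spin-down relation Edot = 4\*pi^2 \* I \* Pdot / P^3, inverted as I = Edot \* P^3 / (4\*pi^2 \* Pdot). As a quick sanity check (a sketch using the Crab pulsar J0534+2200 values copied from the `G` table above), the result should land near the canonical 1e45 g cm^2:

```python
import numpy as np

# Spin-down relation Edot = 4*pi**2 * I * Pdot / P**3, solved for I.
# Values for the Crab pulsar J0534+2200, copied from the G table above.
P = 0.0333924123    # rotation period (s)
Pdot = 4.20972e-13  # period derivative (s/s)
Edot = 4.5e+38      # spin-down luminosity (erg s^-1)

I = Edot * P**3 / (4 * np.pi**2 * Pdot)
print(f'I = {I:.2e} g cm^2')  # close to the canonical 1e45 g cm^2
```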
# + colab={"base_uri": "https://localhost:8080/", "height": 1000} id="A7Hp1M71xyQe" outputId="fdc9eb36-07a5-4471-a7b0-e75ed3e09754"
import seaborn as sns
sns.pairplot(o)
# + colab={"base_uri": "https://localhost:8080/", "height": 498} id="wjmhyTGPx5eO" outputId="1128ce67-37a2-4ac1-ff1c-25f43bb69716"
import matplotlib.pyplot as plt
Ir=e_dot_r*p_0_r**3/(4*np.pi**2*pdot_r)
print(Ir)
plt.scatter(e_dot_r,Ir)
plt.title('Radio high energy pulsar')
plt.xlabel('spin down luminosity e_dot (ergs s^-1)')
plt.ylabel('moment of inertia I (g.cm^2)')
# + colab={"base_uri": "https://localhost:8080/", "height": 481} id="bEzM-7Ilx9EL" outputId="93341e3d-a8d0-422b-cf59-23b6be2b34bb"
import matplotlib.pyplot as plt
Inr=e_dot_nr*p_0_nr**3/(4*np.pi**2*pdot_nr)
plt.scatter(e_dot_nr,Inr,marker='d')
plt.title('Non Radio Pulsars')
plt.xlabel('spin down luminosity e_dot (ergs s^-1)')
plt.ylabel('moment of inertia I (g.cm^2)')
Inr
# + colab={"base_uri": "https://localhost:8080/", "height": 481} id="x90e24t4yOLa" outputId="91981ec7-bc5e-448a-bdca-971392176dce"
import matplotlib.pyplot as plt
Im=e_dot_m*p_0_m**3/(4*np.pi**2*pdot_m)
plt.scatter(e_dot_m,Im,marker='d')
plt.title('MAGNETARS')
plt.xlabel('spin down luminosity e_dot (ergs s^-1)')
plt.ylabel('moment of inertia I (g.cm^2)')
Im
# + colab={"base_uri": "https://localhost:8080/", "height": 817} id="8_qnBr7hyReM" outputId="26861ad3-62f0-4d9b-8c43-ce06550904ed"
import matplotlib.pyplot as plt
Ic=e_dot_comb*p_0_comb**3/(4*np.pi**2*pdot_comb)
plt.scatter(e_dot_comb,Ic,marker='d')
plt.title('Combined data')
plt.xlabel('spin down luminosity e_dot (ergs s^-1)')
plt.ylabel('moment of inertia I (g.cm^2)')
Ic
# + colab={"base_uri": "https://localhost:8080/", "height": 313} id="EMume7fQyWC0" outputId="a0b1700e-f504-4c6a-8f64-d49ecacc788a"
plt.title('combined data')
plt.scatter(e_dot_m,Im,marker='s')
plt.scatter(e_dot_nr,Inr,marker='d')
plt.scatter(e_dot_r,Ir)
plt.legend(['magnetars','non radio','radio'])
plt.xlabel('spin down luminosity e_dot (ergs s^-1)')
plt.ylabel('moment of inertia I (g.cm^2)')
# + id="SpQ0dZ4iybMM" colab={"base_uri": "https://localhost:8080/", "height": 692} outputId="264bd007-def6-4c95-e91d-ad28e854229f"
plt.figure(figsize = (16,11))
plt.subplot(231)
plt.hist(Ir)
plt.title('Radio Pulsars - High Energy')
plt.xlabel('Moment of inertia I (g.cm^2)')
plt.ylabel('Number of Pulsars in that range')
plt.subplot(232)
plt.hist(Inr, color='k')
plt.title('Non Radio Pulsars')
plt.xlabel('Moment of inertia I (g.cm^2)')
plt.ylabel('Number of Pulsars in that range')
plt.subplot(233)
plt.hist(Im, color='r')
plt.title('Magnetars')
plt.xlabel('Moment of inertia I (g.cm^2)')
plt.ylabel('Number of Pulsars in that range')
plt.subplot(235)
plt.hist([Ir, Inr, Im])
plt.legend(['Radio', 'Non Radio', 'Magnetars'])
plt.title('Combined data')
plt.xlabel('Moment of inertia I (g.cm^2)')
plt.ylabel('Number of Pulsars in that range')
# + id="bSdK7JL5ye4T"
|
Untitled2.ipynb
|
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python [default]
# language: python
# name: python2
# ---
# # Parameterized inference from multidimensional data
#
# <NAME>, <NAME>, <NAME>, March 2016.
#
# For the sake of the illustration, we will assume 5-dimensional feature $\mathbf{x}$ generated
# from the following process $p_0$:
#
# - $\mathbf{z} := (z_0, z_1, z_2, z_3, z_4)$, such that
# $z_0 \sim {\cal N}(\mu=\alpha, \sigma=1)$,
# $z_1 \sim {\cal N}(\mu=\beta, \sigma=3)$,
# $z_2 \sim {\text{Mixture}}(\frac{1}{2}\,{\cal N}(\mu=-2, \sigma=1), \frac{1}{2}\,{\cal N}(\mu=2, \sigma=0.5))$,
# $z_3 \sim {\text{Exponential}(\lambda=3)}$, and
# $z_4 \sim {\text{Exponential}(\lambda=0.5)}$;
#
# - $\mathbf{x} := R \mathbf{z}$, where $R$ is a fixed semi-positive definite $5 \times 5$ matrix defining a fixed projection of $\mathbf{z}$ into the observed space.
#
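As a plain-numpy sketch of this generative process (independent of the `carl` distributions built below; the projection `R` here is an arbitrary symmetric positive definite matrix drawn for illustration, not the `make_sparse_spd_matrix` one used later):

```python
import numpy as np

rng = np.random.RandomState(0)
n = 1000
alpha, beta = 1.0, -1.0

z = np.column_stack([
    rng.normal(alpha, 1, n),              # z0 ~ N(mu=alpha, sigma=1)
    rng.normal(beta, 3, n),               # z1 ~ N(mu=beta, sigma=3)
    np.where(rng.rand(n) < 0.5,           # z2 ~ equal-weight mixture of two normals
             rng.normal(-2, 1, n),
             rng.normal(2, 0.5, n)),
    rng.exponential(1 / 3.0, n),          # z3 ~ Exponential(lambda=3)
    rng.exponential(1 / 0.5, n)])         # z4 ~ Exponential(lambda=0.5)

R = rng.rand(5, 5)
R = R @ R.T + 5 * np.eye(5)  # illustrative symmetric positive definite projection
x = z @ R.T                  # x := R z, applied row-wise
print(x.shape)
```

Note that `numpy.random.exponential` takes a scale (1/lambda), whereas `carl`'s `Exponential` below takes `inverse_scale` (lambda) directly.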
# +
# %matplotlib inline
import matplotlib.pyplot as plt
plt.set_cmap("viridis")
import numpy as np
import theano
from scipy.stats import chi2
# +
from carl.distributions import Join
from carl.distributions import Mixture
from carl.distributions import Normal
from carl.distributions import Exponential
from carl.distributions import LinearTransform
from sklearn.datasets import make_sparse_spd_matrix
# Parameters
true_A = 1.
true_B = -1.
A = theano.shared(true_A, name="A")
B = theano.shared(true_B, name="B")
# Build simulator
R = make_sparse_spd_matrix(5, alpha=0.5, random_state=7)
p0 = LinearTransform(Join(components=[
Normal(mu=A, sigma=1),
Normal(mu=B, sigma=3),
Mixture(components=[Normal(mu=-2, sigma=1),
Normal(mu=2, sigma=0.5)]),
Exponential(inverse_scale=3.0),
Exponential(inverse_scale=0.5)]), R)
# Define p1 at fixed arbitrary value theta1 := 0,0
p1 = LinearTransform(Join(components=[
Normal(mu=0, sigma=1),
Normal(mu=0, sigma=3),
Mixture(components=[Normal(mu=-2, sigma=1),
Normal(mu=2, sigma=0.5)]),
Exponential(inverse_scale=3.0),
Exponential(inverse_scale=0.5)]), R)
# Draw data
X_true = p0.rvs(500, random_state=314)
# -
# Projection operator
print(R)
# Plot the data
import corner
fig = corner.corner(X_true, bins=20, smooth=0.85, labels=["X0", "X1", "X2", "X3", "X4"])
#plt.savefig("fig3.pdf")
# ## Exact likelihood setup
# +
# Minimize the exact LR
from scipy.optimize import minimize
def nll_exact(theta, X):
A.set_value(theta[0])
B.set_value(theta[1])
return (p0.nll(X) - p1.nll(X)).sum()
r = minimize(nll_exact, x0=[0, 0], args=(X_true,))
exact_MLE = r.x
print("Exact MLE =", exact_MLE)
# +
# Exact contours
A.set_value(true_A)
B.set_value(true_B)
bounds = [(exact_MLE[0] - 0.16, exact_MLE[0] + 0.16),
(exact_MLE[1] - 0.5, exact_MLE[1] + 0.5)]
As = np.linspace(exact_MLE[0] - 0.16, exact_MLE[0] + 0.16, 100)
Bs = np.linspace(exact_MLE[1] - 0.5, exact_MLE[1] + 0.5, 100)
AA, BB = np.meshgrid(As, Bs)
X = np.hstack((AA.reshape(-1, 1), BB.reshape(-1, 1)))
exact_contours = np.zeros(len(X))
i = 0
for a in As:
for b in Bs:
exact_contours[i] = nll_exact([a, b], X_true)
i += 1
exact_contours = 2. * (exact_contours - r.fun)
# +
plt.contour(As, Bs, exact_contours.reshape(AA.shape).T,
levels=[chi2.ppf(0.683, df=2),
chi2.ppf(0.9545, df=2),
chi2.ppf(0.9973, df=2)], colors=["w"])
plt.contourf(As, Bs, exact_contours.reshape(AA.shape).T, 50, vmin=0, vmax=30)
cb = plt.colorbar()
plt.plot([true_A], [true_B], "r.", markersize=8)
plt.plot([exact_MLE[0]], [exact_MLE[1]], "g.", markersize=8)
#plt.plot([gp_MLE[0]], [gp_MLE[1]], "b.", markersize=8)
plt.axis((*bounds[0], *bounds[1]))
plt.xlabel(r"$\alpha$")
plt.ylabel(r"$\beta$")
#plt.savefig("fig4a.pdf")
plt.show()
# -
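The contour levels used above follow from Wilks' theorem: twice the difference in negative log-likelihood is asymptotically chi-squared distributed with degrees of freedom equal to the number of fitted parameters (two here). The 68.3%, 95.45%, and 99.73% thresholds can be checked directly; for df=2 the quantile even has a closed form:

```python
import numpy as np
from scipy.stats import chi2

levels = {cl: chi2.ppf(cl, df=2) for cl in (0.683, 0.9545, 0.9973)}
for cl, lvl in levels.items():
    print(f'{cl:>6}: {lvl:.2f}')  # roughly 2.30, 6.18, 11.83

# A chi-squared with df=2 is exponential, so the quantile is -2*ln(1 - cl)
closed_form_ok = all(np.isclose(levels[cl], -2 * np.log(1 - cl)) for cl in levels)
print(closed_form_ok)  # True
```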
# ## Likelihood-free setup
#
# In this example we will build a parametrized classifier $s(x; \theta_0, \theta_1)$ with $\theta_1$ fixed to $(\alpha=0, \beta=0)$.
# +
# Build classification data
from carl.learning import make_parameterized_classification
bounds = [(-3, 3), (-3, 3)]
X, y = make_parameterized_classification(
p0, p1,
1000000,
[(A, np.linspace(*bounds[0], num=30)),
(B, np.linspace(*bounds[1], num=30))],
random_state=1)
# +
# Train parameterized classifier
from carl.learning import as_classifier
from carl.learning import make_parameterized_classification
from carl.learning import ParameterizedClassifier
from sklearn.neural_network import MLPRegressor
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.model_selection import RandomizedSearchCV
clf = ParameterizedClassifier(
make_pipeline(StandardScaler(),
as_classifier(MLPRegressor(learning_rate="adaptive",
hidden_layer_sizes=(40, 40),
tol=1e-6,
random_state=0))),
[A, B])
clf.fit(X, y)
# -
# For the scans and Bayesian optimization we construct two helper functions.
# +
from carl.learning import CalibratedClassifierCV
from carl.ratios import ClassifierRatio
def vectorize(func):
def wrapper(X):
v = np.zeros(len(X))
for i, x_i in enumerate(X):
v[i] = func(x_i)
return v.reshape(-1, 1)
return wrapper
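`vectorize` adapts a scalar objective to the batch interface the Bayesian optimizer expects: an (n, d) array of parameter points in, an (n, 1) column of values out. A quick standalone check with a toy quadratic in place of the real `objective` (which needs the trained classifier), with `vectorize` reproduced so the snippet runs on its own:

```python
import numpy as np

def vectorize(func):
    # Reproduced from the cell above so this snippet runs standalone
    def wrapper(X):
        v = np.zeros(len(X))
        for i, x_i in enumerate(X):
            v[i] = func(x_i)
        return v.reshape(-1, 1)
    return wrapper

toy = vectorize(lambda theta: np.sum(theta ** 2))
out = toy(np.array([[0., 0.], [1., 2.], [3., 4.]]))
print(out.shape)    # (3, 1)
print(out.ravel())  # [ 0.  5. 25.]
```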
def objective(theta, random_state=0):
print(theta)
# Set parameter values
A.set_value(theta[0])
B.set_value(theta[1])
# Fit ratio
ratio = ClassifierRatio(CalibratedClassifierCV(
base_estimator=clf,
cv="prefit", # keep the pre-trained classifier
method="histogram", bins=50))
X0 = p0.rvs(n_samples=250000)
X1 = p1.rvs(n_samples=250000, random_state=random_state)
X = np.vstack((X0, X1))
y = np.zeros(len(X))
y[len(X0):] = 1
ratio.fit(X, y)
# Evaluate log-likelihood ratio
r = ratio.predict(X_true, log=True)
value = -np.mean(r[np.isfinite(r)]) # optimization is more stable using mean
# this will need to be rescaled by len(X_true)
return value
# -
from GPyOpt.methods import BayesianOptimization
bounds = [(-3, 3), (-3, 3)]
solver = BayesianOptimization(vectorize(objective), bounds)
solver.run_optimization(max_iter=50, true_gradients=False)
approx_MLE = solver.x_opt
print("Approx. MLE =", approx_MLE)
solver.plot_acquisition()
solver.plot_convergence()
# +
# Minimize the surrogate GP approximate of the approximate LR
def gp_objective(theta):
theta = theta.reshape(1, -1)
return solver.model.predict(theta)[0][0]
r = minimize(gp_objective, x0=[0, 0])
gp_MLE = r.x
print("GP MLE =", gp_MLE)
# -
# Here we plot the posterior mean of the Gaussian Process surrogate learned by the Bayesian Optimization algorithm.
# +
# Plot GP contours
A.set_value(true_A)
B.set_value(true_B)
bounds = [(exact_MLE[0] - 0.16, exact_MLE[0] + 0.16),
(exact_MLE[1] - 0.5, exact_MLE[1] + 0.5)]
As = np.linspace(*bounds[0], 100)
Bs = np.linspace(*bounds[1], 100)
AA, BB = np.meshgrid(As, Bs)
X = np.hstack((AA.reshape(-1, 1), BB.reshape(-1, 1)))
# +
from scipy.stats import chi2
gp_contours, _ = solver.model.predict(X)
gp_contours = 2. * (gp_contours - r.fun) * len(X_true) # Rescale
cs = plt.contour(As, Bs, gp_contours.reshape(AA.shape),
levels=[chi2.ppf(0.683, df=2),
chi2.ppf(0.9545, df=2),
chi2.ppf(0.9973, df=2)], colors=["w"])
plt.contourf(As, Bs, gp_contours.reshape(AA.shape), 50, vmin=0, vmax=30)
cb = plt.colorbar()
plt.plot(solver.X[:, 0], solver.X[:, 1], 'w.', markersize=8)
plt.plot([true_A], [true_B], "r.", markersize=8)
plt.plot([exact_MLE[0]], [exact_MLE[1]], "g.", markersize=8)
plt.plot([gp_MLE[0]], [gp_MLE[1]], "b.", markersize=8)
plt.axis((*bounds[0], *bounds[1]))
plt.xlabel(r"$\alpha$")
plt.ylabel(r"$\beta$")
#plt.savefig("fig4b.pdf")
plt.show()
# -
# Finally, we plot the approximate likelihood from a grid scan. Statistical fluctuations in the calibration lead to some noise in the scan. The Gaussian Process surrogate above smooths out this noise, providing a smoother approximate likelihood.
# +
# Contours of the approximated LR
A.set_value(true_A)
B.set_value(true_B)
bounds = [(exact_MLE[0] - 0.16, exact_MLE[0] + 0.16),
(exact_MLE[1] - 0.5, exact_MLE[1] + 0.5)]
As = np.linspace(*bounds[0], 16)
Bs = np.linspace(*bounds[1], 16)
AA, BB = np.meshgrid(As, Bs)
X = np.hstack((AA.reshape(-1, 1),
BB.reshape(-1, 1)))
# +
approx_contours = np.zeros(len(X))
i = 0
for a in As:
for b in Bs:
approx_contours[i] = objective([a, b])
i += 1
approx_contours = 2. * (approx_contours - approx_contours.min()) * len(X_true)
# +
plt.contour(As, Bs, approx_contours.reshape(AA.shape).T,
levels=[chi2.ppf(0.683, df=2),
chi2.ppf(0.9545, df=2),
chi2.ppf(0.9973, df=2)], colors=["w"])
plt.contourf(As, Bs, approx_contours.reshape(AA.shape).T, 50, vmin=0, vmax=30)
plt.colorbar()
plt.plot([true_A], [true_B], "r.", markersize=8)
plt.plot([exact_MLE[0]], [exact_MLE[1]], "g.", markersize=8)
plt.plot([gp_MLE[0]], [gp_MLE[1]], "b.", markersize=8)
plt.axis((*bounds[0], *bounds[1]))
plt.xlabel(r"$\alpha$")
plt.ylabel(r"$\beta$")
#plt.savefig("fig4c.pdf")
plt.show()
|
examples/Parameterized inference from multidimensional data.ipynb
|
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# + [markdown] deletable=true editable=true
# In this notebook, we'll learn how to use GANs to do semi-supervised learning.
#
# In supervised learning, we have a training set of inputs $x$ and class labels $y$. We train a model that takes $x$ as input and gives $y$ as output.
#
# In semi-supervised learning, our goal is still to train a model that takes $x$ as input and generates $y$ as output. However, not all of our training examples have a label $y$. We need to develop an algorithm that is able to get better at classification by studying both labeled $(x, y)$ pairs and unlabeled $x$ examples.
#
# To do this for the SVHN dataset, we'll turn the GAN discriminator into an 11 class discriminator. It will recognize the 10 different classes of real SVHN digits, as well as an 11th class of fake images that come from the generator. The discriminator will get to train on real labeled images, real unlabeled images, and fake images. By drawing on three sources of data instead of just one, it will generalize to the test set much better than a traditional classifier trained on only one source of data.
# + deletable=true editable=true
# %matplotlib inline
import pickle as pkl
import time
import matplotlib.pyplot as plt
import numpy as np
from scipy.io import loadmat
import tensorflow as tf
# There are two ways of solving this problem.
# One is to have the matmul at the last layer output all 11 classes.
# The other is to output just 10 classes, and use a constant value of 0 for
# the logit for the last class. This still works because the softmax only needs
# n independent logits to specify a probability distribution over n + 1 categories.
# We implemented both solutions here.
extra_class = 0
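The claim in the comment above — that n free logits plus a constant 0 suffice to represent any softmax distribution over n + 1 classes — can be verified numerically: setting the free logits to log(p_i / p_fake) recovers an arbitrary target distribution exactly. A small numpy sketch:

```python
import numpy as np

def softmax(a):
    a = a - a.max()  # shift for numerical stability; softmax is shift-invariant
    e = np.exp(a)
    return e / e.sum()

# Arbitrary target distribution over 11 classes (10 real digits + 1 fake)
target = np.arange(1, 12, dtype=float)
target /= target.sum()

# 10 free logits; the fake-class logit is pinned at 0
free_logits = np.log(target[:10] / target[10])
probs = softmax(np.append(free_logits, 0.0))
print(np.allclose(probs, target))  # True
```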
# + deletable=true editable=true
# !mkdir data
# + deletable=true editable=true
from urllib.request import urlretrieve
from os.path import isfile, isdir
from tqdm import tqdm
data_dir = 'data/'
if not isdir(data_dir):
raise Exception("Data directory doesn't exist!")
class DLProgress(tqdm):
last_block = 0
def hook(self, block_num=1, block_size=1, total_size=None):
self.total = total_size
self.update((block_num - self.last_block) * block_size)
self.last_block = block_num
if not isfile(data_dir + "train_32x32.mat"):
with DLProgress(unit='B', unit_scale=True, miniters=1, desc='SVHN Training Set') as pbar:
urlretrieve(
'http://ufldl.stanford.edu/housenumbers/train_32x32.mat',
data_dir + 'train_32x32.mat',
pbar.hook)
if not isfile(data_dir + "test_32x32.mat"):
with DLProgress(unit='B', unit_scale=True, miniters=1, desc='SVHN Test Set') as pbar:
urlretrieve(
'http://ufldl.stanford.edu/housenumbers/test_32x32.mat',
data_dir + 'test_32x32.mat',
pbar.hook)
# + deletable=true editable=true
trainset = loadmat(data_dir + 'train_32x32.mat')
testset = loadmat(data_dir + 'test_32x32.mat')
# + deletable=true editable=true
idx = np.random.randint(0, trainset['X'].shape[3], size=36)
fig, axes = plt.subplots(6, 6, sharex=True, sharey=True, figsize=(5,5),)
for ii, ax in zip(idx, axes.flatten()):
ax.imshow(trainset['X'][:,:,:,ii], aspect='equal')
ax.xaxis.set_visible(False)
ax.yaxis.set_visible(False)
plt.subplots_adjust(wspace=0, hspace=0)
# + deletable=true editable=true
def scale(x, feature_range=(-1, 1)):
# scale to (0, 1)
x = ((x - x.min())/(255 - x.min()))
# scale to feature_range
min, max = feature_range
x = x * (max - min) + min
return x
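A quick standalone check (with `scale` reproduced here, using `mn`/`mx` names instead of shadowing the `min`/`max` builtins) that raw pixel values in [0, 255] are mapped onto the (-1, 1) range matched by the generator's tanh output:

```python
import numpy as np

def scale(x, feature_range=(-1, 1)):
    # Reproduced from the cell above so this snippet runs standalone
    x = (x - x.min()) / (255 - x.min())  # scale to (0, 1)
    mn, mx = feature_range
    return x * (mx - mn) + mn

pixels = np.array([0., 128., 255.])
scaled = scale(pixels)
print(scaled)  # endpoints map to -1 and 1
```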
# + deletable=true editable=true
class Dataset:
def __init__(self, train, test, val_frac=0.5, shuffle=True, scale_func=None):
split_idx = int(len(test['y'])*(1 - val_frac))
self.test_x, self.valid_x = test['X'][:,:,:,:split_idx], test['X'][:,:,:,split_idx:]
self.test_y, self.valid_y = test['y'][:split_idx], test['y'][split_idx:]
self.train_x, self.train_y = train['X'], train['y']
# The SVHN dataset comes with lots of labels, but for the purpose of this exercise,
# we will pretend that there are only 1000.
# We use this mask to say which labels we will allow ourselves to use.
self.label_mask = np.zeros_like(self.train_y)
self.label_mask[0:1000] = 1
self.train_x = np.rollaxis(self.train_x, 3)
self.valid_x = np.rollaxis(self.valid_x, 3)
self.test_x = np.rollaxis(self.test_x, 3)
if scale_func is None:
self.scaler = scale
else:
self.scaler = scale_func
self.train_x = self.scaler(self.train_x)
self.valid_x = self.scaler(self.valid_x)
self.test_x = self.scaler(self.test_x)
self.shuffle = shuffle
def batches(self, batch_size, which_set="train"):
x_name = which_set + "_x"
y_name = which_set + "_y"
num_examples = len(getattr(self, y_name))
if self.shuffle:
idx = np.arange(num_examples)
np.random.shuffle(idx)
setattr(self, x_name, getattr(self, x_name)[idx])
setattr(self, y_name, getattr(self, y_name)[idx])
if which_set == "train":
self.label_mask = self.label_mask[idx]
dataset_x = getattr(self, x_name)
dataset_y = getattr(self, y_name)
for ii in range(0, num_examples, batch_size):
x = dataset_x[ii:ii+batch_size]
y = dataset_y[ii:ii+batch_size]
if which_set == "train":
# When we use the data for training, we need to include
# the label mask, so we can pretend we don't have access
# to some of the labels, as an exercise of our semi-supervised
# learning ability
yield x, y, self.label_mask[ii:ii+batch_size]
else:
yield x, y
# + deletable=true editable=true
def model_inputs(real_dim, z_dim):
inputs_real = tf.placeholder(tf.float32, (None, *real_dim), name='input_real')
inputs_z = tf.placeholder(tf.float32, (None, z_dim), name='input_z')
y = tf.placeholder(tf.int32, (None), name='y')
label_mask = tf.placeholder(tf.int32, (None), name='label_mask')
return inputs_real, inputs_z, y, label_mask
# + deletable=true editable=true
def generator(z, output_dim, reuse=False, alpha=0.2, training=True, size_mult=128):
with tf.variable_scope('generator', reuse=reuse):
# First fully connected layer
x1 = tf.layers.dense(z, 4 * 4 * size_mult * 4)
# Reshape it to start the convolutional stack
x1 = tf.reshape(x1, (-1, 4, 4, size_mult * 4))
x1 = tf.layers.batch_normalization(x1, training=training)
x1 = tf.maximum(alpha * x1, x1)
x2 = tf.layers.conv2d_transpose(x1, size_mult * 2, 5, strides=2, padding='same')
x2 = tf.layers.batch_normalization(x2, training=training)
x2 = tf.maximum(alpha * x2, x2)
x3 = tf.layers.conv2d_transpose(x2, size_mult, 5, strides=2, padding='same')
x3 = tf.layers.batch_normalization(x3, training=training)
x3 = tf.maximum(alpha * x3, x3)
# Output layer
logits = tf.layers.conv2d_transpose(x3, output_dim, 5, strides=2, padding='same')
out = tf.tanh(logits)
return out
# + deletable=true editable=true
def discriminator(x, reuse=False, alpha=0.2, drop_rate=0., num_classes=10, size_mult=64):
with tf.variable_scope('discriminator', reuse=reuse):
x = tf.layers.dropout(x, rate=drop_rate/2.5)
# Input layer is 32x32x3
x1 = tf.layers.conv2d(x, size_mult, 3, strides=2, padding='same')
relu1 = tf.maximum(alpha * x1, x1)
relu1 = tf.layers.dropout(relu1, rate=drop_rate)
x2 = tf.layers.conv2d(relu1, size_mult, 3, strides=2, padding='same')
bn2 = tf.layers.batch_normalization(x2, training=True)
relu2 = tf.maximum(alpha * bn2, bn2)
x3 = tf.layers.conv2d(relu2, size_mult, 3, strides=2, padding='same')
bn3 = tf.layers.batch_normalization(x3, training=True)
relu3 = tf.maximum(alpha * bn3, bn3)
relu3 = tf.layers.dropout(relu3, rate=drop_rate)
x4 = tf.layers.conv2d(relu3, 2 * size_mult, 3, strides=1, padding='same')
bn4 = tf.layers.batch_normalization(x4, training=True)
relu4 = tf.maximum(alpha * bn4, bn4)
x5 = tf.layers.conv2d(relu4, 2 * size_mult, 3, strides=1, padding='same')
bn5 = tf.layers.batch_normalization(x5, training=True)
relu5 = tf.maximum(alpha * bn5, bn5)
x6 = tf.layers.conv2d(relu5, 2 * size_mult, 3, strides=2, padding='same')
bn6 = tf.layers.batch_normalization(x6, training=True)
relu6 = tf.maximum(alpha * bn6, bn6)
relu6 = tf.layers.dropout(relu6, rate=drop_rate)
x7 = tf.layers.conv2d(relu6, 2 * size_mult, 3, strides=1, padding='valid')
# Don't use bn on this layer, because bn would set the mean of each feature
# to the bn mu parameter.
# This layer is used for the feature matching loss, which only works if
# the means can be different when the discriminator is run on the data than
# when the discriminator is run on the generator samples.
relu7 = tf.maximum(alpha * x7, x7)
# Flatten it by global average pooling
features = tf.reduce_mean(relu7, (1, 2))
# Set class_logits to be the inputs to a softmax distribution over the different classes
class_logits = tf.layers.dense(features, num_classes + extra_class)
# Set gan_logits such that P(input is real | input) = sigmoid(gan_logits).
# Keep in mind that class_logits gives you the probability distribution over all the real
# classes and the fake class. You need to work out how to transform this multiclass softmax
# distribution into a binary real-vs-fake decision that can be described with a sigmoid.
# Numerical stability is very important.
# You'll probably need to use this numerical stability trick:
# log sum_i exp a_i = m + log sum_i exp(a_i - m).
# This is numerically stable when m = max_i a_i.
# (It helps to think about what goes wrong when...
# 1. One value of a_i is very large
# 2. All the values of a_i are very negative
# This trick and this value of m fix both those cases, but the naive implementation and
# other values of m encounter various problems)
if extra_class:
real_class_logits, fake_class_logits = tf.split(class_logits, [num_classes, 1], 1)
assert fake_class_logits.get_shape()[1] == 1, fake_class_logits.get_shape()
fake_class_logits = tf.squeeze(fake_class_logits)
else:
real_class_logits = class_logits
fake_class_logits = 0.
mx = tf.reduce_max(real_class_logits, 1, keep_dims=True)
stable_real_class_logits = real_class_logits - mx
gan_logits = tf.log(tf.reduce_sum(tf.exp(stable_real_class_logits), 1)) + tf.squeeze(mx) - fake_class_logits
out = tf.nn.softmax(class_logits)
return out, class_logits, gan_logits, features
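The stabilized `gan_logits` computed above is the log-sum-exp trick from the comment: log sum_i exp(a_i) = m + log sum_i exp(a_i - m) with m = max_i a_i. A numpy sketch showing that the stable form agrees with the naive one where both are representable, and stays finite where the naive exp would overflow:

```python
import numpy as np

real_class_logits = np.array([2.0, -1.0, 0.5, 3.0])
fake_class_logit = 1.0

# Naive version: exponentiate directly (overflows once the logits get large)
naive = np.log(np.exp(real_class_logits).sum()) - fake_class_logit

# Stable version: subtract the max inside the exp, add it back outside the log
m = real_class_logits.max()
stable = np.log(np.exp(real_class_logits - m).sum()) + m - fake_class_logit
print(np.isclose(naive, stable))  # True

# With the logits shifted by +1000 the naive exp() overflows to inf,
# while the stabilized form stays finite
big = real_class_logits + 1000.0
m_big = big.max()
stable_big = np.log(np.exp(big - m_big).sum()) + m_big - fake_class_logit
print(np.isfinite(stable_big))  # True
```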
# + deletable=true editable=true
def model_loss(input_real, input_z, output_dim, y, num_classes, label_mask, alpha=0.2, drop_rate=0.):
"""
Get the loss for the discriminator and generator
:param input_real: Images from the real dataset
:param input_z: Z input
:param output_dim: The number of channels in the output image
:param y: Integer class labels
:param num_classes: The number of classes
:param alpha: The slope of the left half of leaky ReLU activation
:param drop_rate: The probability of dropping a hidden unit
:return: A tuple of (discriminator loss, generator loss)
"""
# These numbers multiply the size of each layer of the generator and the discriminator,
# respectively. You can reduce them to run your code faster for debugging purposes.
g_size_mult = 32
d_size_mult = 64
# Here we run the generator and the discriminator
g_model = generator(input_z, output_dim, alpha=alpha, size_mult=g_size_mult)
d_on_data = discriminator(input_real, alpha=alpha, drop_rate=drop_rate, size_mult=d_size_mult)
d_model_real, class_logits_on_data, gan_logits_on_data, data_features = d_on_data
d_on_samples = discriminator(g_model, reuse=True, alpha=alpha, drop_rate=drop_rate, size_mult=d_size_mult)
d_model_fake, class_logits_on_samples, gan_logits_on_samples, sample_features = d_on_samples
# Here we compute `d_loss`, the loss for the discriminator.
# This should combine two different losses:
# 1. The loss for the GAN problem, where we minimize the cross-entropy for the binary
# real-vs-fake classification problem.
# 2. The loss for the SVHN digit classification problem, where we minimize the cross-entropy
# for the multi-class softmax. For this one we use the labels. Don't forget to
# use `label_mask` to ignore the examples that we are pretending are unlabeled for the
# semi-supervised learning problem.
d_loss_real = tf.reduce_mean(
tf.nn.sigmoid_cross_entropy_with_logits(logits=gan_logits_on_data,
labels=tf.ones_like(gan_logits_on_data)))
d_loss_fake = tf.reduce_mean(
tf.nn.sigmoid_cross_entropy_with_logits(logits=gan_logits_on_samples,
labels=tf.zeros_like(gan_logits_on_samples)))
y = tf.squeeze(y)
class_cross_entropy = tf.nn.softmax_cross_entropy_with_logits(logits=class_logits_on_data,
labels=tf.one_hot(y, num_classes + extra_class,
dtype=tf.float32))
class_cross_entropy = tf.squeeze(class_cross_entropy)
label_mask = tf.squeeze(tf.to_float(label_mask))
d_loss_class = tf.reduce_sum(label_mask * class_cross_entropy) / tf.maximum(1., tf.reduce_sum(label_mask))
d_loss = d_loss_class + d_loss_real + d_loss_fake
# Here we set `g_loss` to the "feature matching" loss invented by <NAME> at OpenAI.
# This loss consists of minimizing the absolute difference between the expected features
# on the data and the expected features on the generated samples.
# This loss works better for semi-supervised learning than the traditional GAN losses.
data_moments = tf.reduce_mean(data_features, axis=0)
sample_moments = tf.reduce_mean(sample_features, axis=0)
g_loss = tf.reduce_mean(tf.abs(data_moments - sample_moments))
pred_class = tf.cast(tf.argmax(class_logits_on_data, 1), tf.int32)
eq = tf.equal(tf.squeeze(y), pred_class)
correct = tf.reduce_sum(tf.to_float(eq))
masked_correct = tf.reduce_sum(label_mask * tf.to_float(eq))
return d_loss, g_loss, correct, masked_correct, g_model
# + deletable=true editable=true
def model_opt(d_loss, g_loss, learning_rate, beta1):
"""
Get optimization operations
:param d_loss: Discriminator loss Tensor
:param g_loss: Generator loss Tensor
:param learning_rate: Learning Rate Placeholder
:param beta1: The exponential decay rate for the 1st moment in the optimizer
:return: A tuple of (discriminator training operation, generator training operation)
"""
# Get weights and biases to update. Get them separately for the discriminator and the generator
t_vars = tf.trainable_variables()
d_vars = [var for var in t_vars if var.name.startswith('discriminator')]
g_vars = [var for var in t_vars if var.name.startswith('generator')]
for t in t_vars:
assert t in d_vars or t in g_vars
# Minimize both players' costs simultaneously
d_train_opt = tf.train.AdamOptimizer(learning_rate, beta1=beta1).minimize(d_loss, var_list=d_vars)
g_train_opt = tf.train.AdamOptimizer(learning_rate, beta1=beta1).minimize(g_loss, var_list=g_vars)
shrink_lr = tf.assign(learning_rate, learning_rate * 0.9)
return d_train_opt, g_train_opt, shrink_lr
# + deletable=true editable=true
class GAN:
"""
A GAN model.
:param real_size: The shape of the real data.
:param z_size: The number of entries in the z code vector.
:param learning_rate: The learning rate to use for Adam.
:param num_classes: The number of classes to recognize.
:param alpha: The slope of the left half of the leaky ReLU activation
:param beta1: The beta1 parameter for Adam.
"""
def __init__(self, real_size, z_size, learning_rate, num_classes=10, alpha=0.2, beta1=0.5):
tf.reset_default_graph()
self.learning_rate = tf.Variable(learning_rate, trainable=False)
self.input_real, self.input_z, self.y, self.label_mask = model_inputs(real_size, z_size)
self.drop_rate = tf.placeholder_with_default(.5, (), "drop_rate")
loss_results = model_loss(self.input_real, self.input_z,
real_size[2], self.y, num_classes, label_mask=self.label_mask,
alpha=alpha,
drop_rate=self.drop_rate)
self.d_loss, self.g_loss, self.correct, self.masked_correct, self.samples = loss_results
self.d_opt, self.g_opt, self.shrink_lr = model_opt(self.d_loss, self.g_loss, self.learning_rate, beta1)
# + deletable=true editable=true
def view_samples(epoch, samples, nrows, ncols, figsize=(5,5)):
fig, axes = plt.subplots(figsize=figsize, nrows=nrows, ncols=ncols,
sharey=True, sharex=True)
for ax, img in zip(axes.flatten(), samples[epoch]):
ax.axis('off')
img = ((img - img.min())*255 / (img.max() - img.min())).astype(np.uint8)
ax.set_adjustable('box-forced')
im = ax.imshow(img)
plt.subplots_adjust(wspace=0, hspace=0)
return fig, axes
# + deletable=true editable=true
def train(net, dataset, epochs, batch_size, figsize=(5,5)):
saver = tf.train.Saver()
sample_z = np.random.normal(0, 1, size=(50, z_size))
samples, train_accuracies, test_accuracies = [], [], []
steps = 0
with tf.Session() as sess:
sess.run(tf.global_variables_initializer())
for e in range(epochs):
print("Epoch",e)
t1e = time.time()
num_examples = 0
num_correct = 0
for x, y, label_mask in dataset.batches(batch_size):
assert 'int' in str(y.dtype)
steps += 1
num_examples += label_mask.sum()
# Sample random noise for G
batch_z = np.random.normal(0, 1, size=(batch_size, z_size))
# Run optimizers
t1 = time.time()
_, _, correct = sess.run([net.d_opt, net.g_opt, net.masked_correct],
feed_dict={net.input_real: x, net.input_z: batch_z,
net.y : y, net.label_mask : label_mask})
t2 = time.time()
num_correct += correct
sess.run([net.shrink_lr])
train_accuracy = num_correct / float(num_examples)
print("\t\tClassifier train accuracy: ", train_accuracy)
num_examples = 0
num_correct = 0
for x, y in dataset.batches(batch_size, which_set="test"):
assert 'int' in str(y.dtype)
num_examples += x.shape[0]
correct, = sess.run([net.correct], feed_dict={net.input_real: x,
net.y : y,
net.drop_rate: 0.})
num_correct += correct
test_accuracy = num_correct / float(num_examples)
print("\t\tClassifier test accuracy", test_accuracy)
print("\t\tStep time: ", t2 - t1)
t2e = time.time()
print("\t\tEpoch time: ", t2e - t1e)
gen_samples = sess.run(
net.samples,
feed_dict={net.input_z: sample_z})
samples.append(gen_samples)
_ = view_samples(-1, samples, 5, 10, figsize=figsize)
plt.show()
# Save history of accuracies to view after training
train_accuracies.append(train_accuracy)
test_accuracies.append(test_accuracy)
saver.save(sess, './checkpoints/generator.ckpt')
with open('samples.pkl', 'wb') as f:
pkl.dump(samples, f)
return train_accuracies, test_accuracies, samples
# + deletable=true editable=true
# !mkdir checkpoints
# + deletable=true editable=true
real_size = (32,32,3)
z_size = 100
learning_rate = 0.0003
net = GAN(real_size, z_size, learning_rate)
# + deletable=true editable=true
dataset = Dataset(trainset, testset)
batch_size = 128
epochs = 25
train_accuracies, test_accuracies, samples = train(net, dataset, epochs, batch_size, figsize=(10,5))
# + deletable=true editable=true
fig, ax = plt.subplots()
plt.plot(train_accuracies, label='Train', alpha=0.5)
plt.plot(test_accuracies, label='Test', alpha=0.5)
plt.title("Accuracy")
plt.legend()
# + [markdown] deletable=true editable=true
# When you run the fully implemented semi-supervised GAN, you should usually find that the test accuracy peaks a little above 71%. It should definitely stay above 70% fairly consistently throughout the last several epochs of training.
#
# This is a little bit better than a [NIPS 2014 paper](https://arxiv.org/pdf/1406.5298.pdf) that got 64% accuracy on 1000-label SVHN with variational methods. However, we still have lost something by not using all the labels. If you re-run with all the labels included, you should obtain over 80% accuracy using this architecture (and other architectures that take longer to run can do much better).
# + deletable=true editable=true
_ = view_samples(-1, samples, 5, 10, figsize=(10,5))
# + deletable=true editable=true
# !mkdir images
# + deletable=true editable=true
for ii in range(len(samples)):
fig, ax = view_samples(ii, samples, 5, 10, figsize=(10,5))
fig.savefig('images/samples_{:03d}.png'.format(ii))
plt.close()
# + [markdown] deletable=true editable=true
# Congratulations! You now know how to train a semi-supervised GAN. This exercise is stripped down to make it run faster and to make it simpler to implement. In the original work by <NAME> at OpenAI, a GAN using [more tricks and more runtime](https://arxiv.org/pdf/1606.03498.pdf) reaches over 94% accuracy using only 1,000 labeled examples.
# + deletable=true editable=true
|
semi-supervised/semi-supervised_learning_2_solution.ipynb
|
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# +
import logging
import os
import sys
import time
import numpy as np
import matplotlib.pyplot as plt
import pykeen
from pykeen.kge_models import TransD
# -
# %matplotlib inline
logging.basicConfig(level=logging.INFO)
logging.getLogger('pykeen').setLevel(logging.INFO)
print(sys.version)
print(time.asctime())
print(pykeen.get_version())
# Check which hyper-parameters are required by TransD:
TransD.hyper_params
# Define output directory:
output_directory = os.path.join(
os.path.expanduser('~'),
'Desktop',
'pykeen_test'
)
# Define hyper-parameters:
config = dict(
training_set_path = '../../tests/resources/data/rdf.nt',
execution_mode = 'Training_mode',
random_seed = 0,
kg_embedding_model_name = 'TransD',
embedding_dim = 50,
relation_embedding_dim = 20,
scoring_function = 2, # corresponds to L2
margin_loss = 0.05,
learning_rate = 0.01,
num_epochs = 10,
batch_size = 64,
preferred_device = 'cpu'
)
# Train TransD:
results = pykeen.run(
config=config,
output_directory=output_directory,
)
# Check result entries:
results.results.keys()
# Access trained model:
results.results['trained_model']
# Visualize loss values:
losses = results.results['losses']
epochs = np.arange(len(losses))
plt.title(r'Loss Per Epoch')
plt.xlabel('epoch')
plt.ylabel('loss')
plt.plot(epochs, losses)
plt.show()
|
notebooks/training_of_kge_models/Train TransD on RDF.ipynb
|
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # Classifying Names with a Character-Level RNN
#
# [](https://gitee.com/mindspore/docs/blob/master/tutorials/source_zh_cn/intermediate/text/rnn_classification.ipynb) [](https://mindspore-website.obs.cn-north-4.myhuaweicloud.com/notebook/master/tutorials/zh_cn/mindspore_rnn_classification.ipynb) [](https://authoring-modelarts-cnnorth4.huaweicloud.com/console/lab?share-url-b64=aHR0cHM6Ly9taW5kc3BvcmUtd2Vic2l0ZS5vYnMuY24tbm9ydGgtNC5teWh1YXdlaWNsb3VkLmNvbS9ub3RlYm9vay9tYXN0ZXIvdHV0b3JpYWxzL3poX2NuL21pbmRzcG9yZV9ybm5fY2xhc3NpZmljYXRpb24uaXB5bmI=&imageid=65f636a0-56cf-49df-b941-7d2a07ba8c8c)
# ## Overview
# A recurrent neural network (RNN) is a class of network that takes sequence data as input and recurses along the direction in which the sequence evolves, with all nodes (recurrent units) connected in a chain. RNNs are commonly used in NLP to model sequential data.
#
# In this tutorial we build and train a basic character-level RNN to classify words, to help illustrate how recurrent neural networks work. We train on several thousand surnames from 18 languages and predict a name's language of origin from its spelling.
# ## Setup
# ### Environment
# We run the experiment in `PyNative` mode on an Ascend device.
# +
from mindspore import context
context.set_context(mode=context.PYNATIVE_MODE, device_target="Ascend")
# -
# ### Preparing the data
#
# The dataset contains several thousand surnames from 18 languages. Download the data [here](https://mindspore-website.obs.cn-north-4.myhuaweicloud.com/notebook/datasets/intermediate/data.zip) and extract it into the current directory.
#
# The dataset directory is `data/names`, which contains 18 text files named `[Language].txt`. Each file holds a list of names, one per line. Most of the data is romanized, but it still needs to be converted from Unicode to ASCII.
#
# Run the following commands in Jupyter Notebook to download and extract the dataset.
# !wget -N https://mindspore-website.obs.cn-north-4.myhuaweicloud.com/notebook/datasets/intermediate/data.zip
# !unzip -n data.zip
# ## Data processing
# - Import the required modules.
from io import open
import glob
import os
import unicodedata
import string
# - Define the `find_files` function, which finds the files matching a wildcard pattern.
# +
def find_files(path):
return glob.glob(path)
print(find_files('data/names/*.txt'))
# -
# - Define the `unicode_to_ascii` function, which converts Unicode to ASCII.
# +
all_letters = string.ascii_letters + " .,;'"
n_letters = len(all_letters)
def unicode_to_ascii(s):
return ''.join(
c for c in unicodedata.normalize('NFD', s)
if unicodedata.category(c) != 'Mn'
and c in all_letters
)
print(unicode_to_ascii('Bélanger'))
# -
# - Define the `read_lines` function, which reads a file and converts each line to ASCII.
def read_lines(filename):
lines = open(filename, encoding='utf-8').read().strip().split('\n')
return [unicode_to_ascii(line) for line in lines]
# Define the `category_lines` dictionary and the `all_categories` list.
# - `category_lines`: keys are language categories, values are lists of names.
# - `all_categories`: the list of all languages.
# +
category_lines = {}
all_categories = []
for filename in find_files('data/names/*.txt'):
category = os.path.splitext(os.path.basename(filename))[0]
all_categories.append(category)
lines = read_lines(filename)
category_lines[category] = lines
n_categories = len(all_categories)
# -
# - Print the first five names from the French data.
print(category_lines['French'][:5])
# ## Turning names into vectors
# Since we cannot do math on raw characters, each name must be converted into a vector.
#
# To represent a single letter, we use a one-hot vector of size `<1 x n_letters>`; one-hot encoding discrete features makes distance computations between features more meaningful.
#
# > A one-hot vector is filled with 0s, except for a 1 at the index of the current letter, e.g. `"b" = <0 1 0 0 0 ...>`.
#
# To make a word, we join a number of these vectors into a 2D matrix `<line_length x 1 x n_letters>`.
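As a framework-agnostic sketch, the encoding described above can be written in plain NumPy before wrapping it in MindSpore Tensors (the function names here are illustrative, not from the tutorial):

```python
import string
import numpy as np

all_letters = string.ascii_letters + " .,;'"
n_letters = len(all_letters)  # 52 letters + 5 punctuation marks = 57

def letter_to_onehot(letter):
    # <1 x n_letters> row vector: all zeros except a 1 at the letter's index
    vec = np.zeros((1, n_letters), dtype=np.float32)
    vec[0, all_letters.find(letter)] = 1.0
    return vec

def line_to_onehot(line):
    # stack the per-letter vectors into a <line_length x 1 x n_letters> array
    return np.stack([letter_to_onehot(c) for c in line])
```

The same shapes are produced by the MindSpore `letter_to_tensor` and `line_to_tensor` functions defined below.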
# - Import the modules.
# +
import numpy as np
from mindspore import Tensor
from mindspore import dtype as mstype
# -
# - Define the `letter_to_index` function, which looks up a letter's index in the `all_letters` list.
def letter_to_index(letter):
return all_letters.find(letter)
# - Define the `letter_to_tensor` function, which turns a letter into a `<1 x n_letters>` one-hot vector.
def letter_to_tensor(letter):
tensor = Tensor(np.zeros((1, n_letters)),mstype.float32)
tensor[0,letter_to_index(letter)] = 1.0
return tensor
# - Define the `line_to_tensor` function, which turns a line into a `<line_length x 1 x n_letters>` one-hot tensor.
def line_to_tensor(line):
tensor = Tensor(np.zeros((len(line), 1, n_letters)),mstype.float32)
for li, letter in enumerate(line):
tensor[li,0,letter_to_index(letter)] = 1.0
return tensor
# - Convert the letter A and the word Alex to one-hot vectors and print the results.
print(letter_to_tensor('A'))
print(line_to_tensor('Alex').shape)
# ## Creating the network
#
# The RNN we create has just two dense layers, `i2h` and `i2o`, which operate on the input `input` and the hidden state `hidden`; a `LogSoftmax` layer follows the output of the `i2o` layer. The network structure is shown below.
#
# 
# +
from mindspore import nn, ops
class RNN(nn.Cell):
def __init__(self, input_size, hidden_size, output_size):
super(RNN, self).__init__()
self.hidden_size = hidden_size
self.i2h = nn.Dense(input_size + hidden_size, hidden_size)
self.i2o = nn.Dense(input_size + hidden_size, output_size)
self.softmax = nn.LogSoftmax(axis=1)
def construct(self, input, hidden):
op = ops.Concat(axis=1)
combined = op((input, hidden))
hidden = self.i2h(combined)
output = self.i2o(combined)
output = self.softmax(output)
return output, hidden
def initHidden(self):
return Tensor(np.zeros((1, self.hidden_size)),mstype.float32)
n_hidden = 128
rnn = RNN(n_letters, n_hidden, n_categories)
# -
# To run this network, we feed it the one-hot vector for the current letter together with the hidden state output by the previous letter (initialized to zeros). The network outputs the probability of each language and the hidden state to feed in with the next letter.
input = letter_to_tensor('A')
hidden = Tensor(np.zeros((1, n_hidden)), mstype.float32)
output, next_hidden = rnn(input, hidden)
# For efficiency, and to avoid creating a new tensor at every step, we use `line_to_tensor` instead of `letter_to_tensor`, together with slicing.
input = line_to_tensor('Albert')
hidden = Tensor(np.zeros((1, n_hidden)), mstype.float32)
output, next_hidden = rnn(input[0], hidden)
print(output)
# As you can see, the output is a vector of shape `<1 x n_categories>`, where each number represents the likelihood of that category.
# ## Training
#
# ### Preparing for training
#
# - Define the `category_from_output` function, which returns the category with the highest score in the network's output.
# +
def category_from_output(output):
topk = ops.TopK(sorted=True)
top_n, top_i = topk(output, 1)
category_i = top_i.asnumpy().item(0)
return all_categories[category_i], category_i
print(category_from_output(output))
# -
# - The `random_training` function randomly picks a language and one of its names as a training example.
# +
import random
# Randomly pick an element
def random_choice(l):
return l[random.randint(0, len(l) - 1)]
# Randomly pick a language and a name
def random_training():
category = random_choice(all_categories)
line = random_choice(category_lines[category])
category_tensor = Tensor([all_categories.index(category)], mstype.int32)
line_tensor = line_to_tensor(line)
return category, line, category_tensor, line_tensor
# Pick 10 random examples
for i in range(10):
category, line, category_tensor, line_tensor = random_training()
print('category =', category, '/ line =', line)
# -
# ### Training the network
# - Define the `NLLLoss` loss function.
# +
from mindspore.ops import functional as F
class NLLLoss(nn.LossBase):
def __init__(self, reduction='mean'):
super(NLLLoss, self).__init__(reduction)
self.one_hot = ops.OneHot()
self.reduce_sum = ops.ReduceSum()
def construct(self, logits, label):
label_one_hot = self.one_hot(label, F.shape(logits)[-1], F.scalar_to_array(1.0), F.scalar_to_array(0.0))
loss = self.reduce_sum(-1.0 * logits * label_one_hot, (1,))
return self.get_loss(loss)
# -
criterion = NLLLoss()
# Each training loop performs the following steps:
#
# - Create the input and target tensors
# - Initialize the hidden state
# - Read each letter in turn and pass the hidden state on to the next letter
# - Compare the final output with the target
# - Back-propagate the gradients
# - Return the output and the loss value
# - MindSpore wraps the loss function, optimizer, and similar operations in Cells, but the RNN in this tutorial has to loop over a whole sequence before computing the loss, so we define a custom `WithLossCellRnn` class that connects the network to the loss.
class WithLossCellRnn(nn.Cell):
def __init__(self, backbone, loss_fn):
super(WithLossCellRnn, self).__init__(auto_prefix=True)
self._backbone = backbone
self._loss_fn = loss_fn
def construct(self, line_tensor, hidden, category_tensor):
for i in range(line_tensor.shape[0]):
output, hidden = self._backbone(line_tensor[i], hidden)
return self._loss_fn(output, category_tensor)
# - Create the optimizer, a `WithLossCellRnn` instance, and the `TrainOneStepCell` training network.
# +
rnn_cf = RNN(n_letters, n_hidden, n_categories)
optimizer = nn.Momentum(filter(lambda x: x.requires_grad, rnn_cf.get_parameters()), 0.001, 0.9)
net_with_criterion = WithLossCellRnn(rnn_cf, criterion)
net = nn.TrainOneStepCell(net_with_criterion, optimizer)
net.set_train()
# Train the network
def train(category_tensor, line_tensor):
hidden = rnn_cf.initHidden()
loss = net(line_tensor, hidden, category_tensor)
for i in range(line_tensor.shape[0]):
output, hidden = rnn_cf(line_tensor[i], hidden)
return output, loss
# -
# - To keep track of how long training takes, define the `time_since` function, which reports the elapsed training time so we can follow the whole run.
# +
import time
import math
n_iters = 10000
print_every = 500
plot_every = 100
current_loss = 0
all_losses = []
def time_since(since):
now = time.time()
s = now - since
m = math.floor(s / 60)
s -= m * 60
return '%dm %ds' % (m, s)
# -
# - Every `print_every` (500) iterations, print the iteration number, progress, elapsed time, loss, the name, the predicted language, and whether the guess was correct (marked ✓ or ✗). Every `plot_every` iterations, append the average loss to the `all_losses` list so the training loss can be plotted later.
# +
start = time.time()
for iter in range(1, n_iters + 1):
category, line, category_tensor, line_tensor = random_training()
output, loss = train(category_tensor, line_tensor)
current_loss += loss
# Print iteration number, progress, elapsed time, loss, name, predicted language, and correctness
if iter % print_every == 0:
guess, guess_i = category_from_output(output)
correct = '✓' if guess == category else '✗ (%s)' % category
print('%d %d%% (%s) %s %s / %s %s' % (iter, iter / n_iters * 100, time_since(start), loss.asnumpy(), line, guess, correct))
# Append the average loss to all_losses
if iter % plot_every == 0:
all_losses.append((current_loss / plot_every).asnumpy())
current_loss = 0
# -
# ### Plotting the results
#
# Plotting the loss values collected in `all_losses` shows how the network learned over the course of training.
# +
import matplotlib.pyplot as plt
plt.figure()
plt.plot(all_losses)
# -
# ## Evaluating the results
#
# - To see how well the network performs on the different categories, we build a confusion matrix whose rows are the actual languages and whose columns are the predicted languages. To fill it, we run model inference with the `evaluate()` function.
# +
# Record correct predictions in a confusion matrix
confusion = Tensor(np.zeros((n_categories, n_categories)), mstype.float32)
n_confusion = 1000
# Model inference
def evaluate(line_tensor):
hidden = rnn_cf.initHidden()
for i in range(line_tensor.shape[0]):
output, hidden = rnn_cf(line_tensor[i], hidden)
return output
# Run samples and record the correct predictions
for i in range(n_confusion):
category, line, category_tensor, line_tensor = random_training()
output = evaluate(line_tensor)
guess, guess_i = category_from_output(output)
category_i = all_categories.index(category)
confusion[category_i, guess_i] += 1
# Normalize each row by its sum
for i in range(n_categories):
confusion[i] = confusion[i] / Tensor(np.sum(confusion[i].asnumpy()), mstype.float32)
# -
# - Plot the confusion matrix with `matplotlib`.
# +
# Draw the plot
fig = plt.figure()
ax = fig.add_subplot(111)
cax = ax.matshow(confusion.asnumpy())
fig.colorbar(cax)
# Set up the axes
ax.set_xticklabels([''] + all_categories, rotation=90)
ax.set_yticklabels([''] + all_categories)
# Force a label at each tick
import matplotlib.ticker as ticker
ax.xaxis.set_major_locator(ticker.MultipleLocator(1))
ax.yaxis.set_major_locator(ticker.MultipleLocator(1))
plt.show()
|
tutorials/source_zh_cn/intermediate/text/rnn_classification.ipynb
|
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # Use the right encoding (how to know)
# - Determine the encoding of the file (correctly detecting the encoding every time is impossible; see this [stackoverflow post](https://stackoverflow.com/questions/436220/how-to-determine-the-encoding-of-text))
# - Python libraries exist for this: [chardet](https://pypi.org/project/chardet/), [python-magic](https://pypi.org/project/python-magic/), [UnicodeDammit](https://www.crummy.com/software/BeautifulSoup/bs4/doc/#unicode-dammit)
# - There can be gotchas: e.g. libmagic (an OS-level dependency) must be installed for python-magic to work (on macOS, `brew install libmagic`)
# - The standard encodings that pandas' `encoding` parameter accepts are listed in the [Python codecs documentation](https://docs.python.org/3/library/codecs.html#standard-encodings)
import pandas as pd
df_clicks = pd.read_csv("./data/Clicks.csv",
sep="|", error_bad_lines=True)
df_clicks
# # Use python-magic - To determine encoding
# - `brew install libmagic` on Mac OSX upfront
# !pip install python-magic==0.4.18
# +
import magic
blob = open('./data/Clicks.csv', 'rb').read()
m = magic.Magic(mime_encoding=True)
encoding = m.from_buffer(blob)
encoding
# -
import pandas as pd
df_clicks = pd.read_csv("./data/Clicks.csv",
sep="|", error_bad_lines=True, encoding = "ISO-8859-1")
df_clicks
# # Use chardet
# - https://pypi.org/project/chardet/
# - https://chardet.readthedocs.io/en/latest/usage.html#basic-usage
# ! pip install chardet==3.0.4
import chardet
rawdata = open("./data/Clicks.csv", 'rb').read()
result = chardet.detect(rawdata)
charenc = result['encoding']
charenc
# # Use UnicodeDammit
# - https://www.crummy.com/software/BeautifulSoup/bs4/doc/#unicode-dammit
# ! pip install beautifulsoup4==4.9.1
from bs4 import UnicodeDammit
rawdata = open("./data/Clicks.csv", 'rb').read()
dammit = UnicodeDammit(rawdata)
print(dammit.unicode_markup)
dammit.original_encoding
|
class1_explore/encoding_errors.ipynb
|
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# + [markdown] slideshow={"slide_type": "slide"}
# # Geochronology Calculations
# + hide_input=true slideshow={"slide_type": "notes"} tags=["hide-input"]
import matplotlib.pyplot as plt
from bokeh.plotting import figure, output_notebook, show
from bokeh.layouts import column
from bokeh.models import Range1d, LinearAxis, ColumnDataSource, LabelSet, Span, Slope, Label, Legend
from scipy.interpolate import CubicSpline
import pandas as pd
import numpy as np
from IPython.core.display import display, HTML
pd.set_option('display.max_rows', 500)
pd.set_option('display.max_columns', 500)
pd.set_option('display.width', 1000)
output_notebook()
import geochron_apps as gc
# + [markdown] slideshow={"slide_type": "fragment"}
# <center><img src="images/geochronology.png" align="center">
# https://www.explainxkcd.com/wiki/index.php/1829:_Geochronology
# </center>
# + [markdown] slideshow={"slide_type": "notes"}
# The following presentation shows some of the geochronology calculations learned in the Advanced Geochronology class at University of Saskatchewan taught by <NAME> and <NAME>, 2021. Some of the images in this presentation are taken from lectures given by the instructor.
# + [markdown] slideshow={"slide_type": "slide"}
# This notebook contains sample calculations typically used in geochronology. It can be obtained at https://git.cs.usask.ca/msv275/advanced-geochronology.
# It can be cloned through the git command:
# * git clone https://git.cs.usask.ca/msv275/advanced-geochronology.git
#
# + [markdown] slideshow={"slide_type": "slide"}
# # Lu-Hf Calculations
# + [markdown] slideshow={"slide_type": "subslide"}
# Start with an appropriate value for depleted mantle at 4570 Ma and calculate and graph the curve for depleted mantle.
# + [markdown] slideshow={"slide_type": "fragment"}
# Our variables:
# * Decay Constant = 1.867 x 10<sup>-11</sup> (Scherer, Munker, and Mezger 2001)
# * <sup>176</sup>Lu/<sup>177</sup>Hf<sub>(depleted mantle)</sub> = 0.0384 (Chauvel and Blichert-Toft 2001)
# * <sup>176</sup>Hf/<sup>177</sup>Hf<sub>(depleted mantle)</sub> = 0.283250 (Chauvel and Blichert-Toft 2001)
#
# Isochron Equation:
#
# <sup>176</sup>Hf/<sup>177</sup>Hf<sub>(present day)</sub> = <sup>176</sup>Hf/<sup>177</sup>Hf<sub>(initial)</sub> + <sup>176</sup>Lu/<sup>177</sup>Hf<sub>(present day)</sub> * (*e*<sup>λ x t</sup> - 1)
#
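The isochron equation above can be rearranged to recover the initial ratio from a present-day measurement. A minimal standalone sketch (the helper name `hf_initial` is illustrative; the notebook's `gc.calc_initial` is assumed to do something equivalent, with time in Ma and λ in yr⁻¹):

```python
import math

LAMBDA_LU176 = 1.867e-11  # 176Lu decay constant, per year (Scherer, Munker, and Mezger 2001)

def hf_initial(hf_present, lu_hf_present, t1_ma, t2_ma=0):
    """Solve the isochron equation for the initial ratio at t1, measured at t2:

    (176Hf/177Hf)_t2 = (176Hf/177Hf)_t1 + (176Lu/177Hf)_t2 * (e^(lambda * (t1 - t2)) - 1)
    """
    dt_years = (t1_ma - t2_ma) * 1e6  # Ma -> years
    return hf_present - lu_hf_present * math.expm1(LAMBDA_LU176 * dt_years)

# e.g. depleted-mantle composition back-calculated to 3000 Ma
hf_dm_3000 = hf_initial(0.283250, 0.0384, 3000)
```

`math.expm1(x)` computes e^x − 1 with better precision than `math.exp(x) - 1` for the small λt values involved here.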
# + slideshow={"slide_type": "slide"}
decay_const_177 = 1.867 * 10 ** -11
Lu_DM = 0.0384
Hf_DM = 0.283250
Lu_CHUR = 0.0336
Hf_CHUR = 0.282785
d = []
t1, t2 = 4570, 0
while t1 > 0:
d.append({'t1': t1,
't2': t2,
'176Lu/177Hf': Lu_DM,
'176Hf/177Hf_DM': gc.calc_initial(Hf_DM, Lu_DM, decay_const_177, t1, t2),
'176Hf/177Hf_CHUR': gc.calc_initial(Hf_CHUR, Lu_CHUR, decay_const_177, t1, t2),
})
t1 = t1 - 1
LuHf_df = pd.DataFrame(d)
LuHf_df.head()
# + slideshow={"slide_type": "slide"}
figure11 = gc.get_figure("176Hf/177Hf", "176Hf/177Hf", "Age (Ma)", [0,4570], [0.279,0.29])
figure11.line(LuHf_df['t1'], LuHf_df['176Hf/177Hf_DM'], color="darkred", legend_label="Depleted Mantle")
figure11.legend.location = "top_right"
figure11.legend.click_policy="hide"
# + slideshow={"slide_type": "slide"}
show(figure11)
# + [markdown] slideshow={"slide_type": "subslide"}
# Assume a crust generation event at 3000 Ma and another at 500 Ma, each starting from the depleted mantle curve. Assume these produce felsic crust with 176Lu/177Hf values of 0.15. Calculate and graph these two curves plus the curve for CHUR and for depleted mantle.
# + slideshow={"slide_type": "fragment"}
c_event1 = 3000
c_event2 = 500
LuHf_event = 0.15
Hf_DM_3000 = LuHf_df[LuHf_df['t1'] == 3000].values[0][-2]
Hf_DM_500 = LuHf_df[LuHf_df['t1'] == 500].values[0][-2]
Hf_CHUR_3000 = LuHf_df[LuHf_df['t1'] == 3000].values[0][-1]
Hf_CHUR_500 = LuHf_df[LuHf_df['t1'] == 500].values[0][-1]
d = []
t1, t2 = 3000, 0
while t2 < 3000:
d.append({'t1': t1,
't2': t2,
'176Lu/177Hf': Lu_DM,
'176Hf/177Hf_DM': gc.calc_t2_daughter(Hf_DM_3000, LuHf_event, decay_const_177, t1, t2),
'176Hf/177Hf_CHUR': gc.calc_t2_daughter(Hf_CHUR_3000, LuHf_event, decay_const_177, t1, t2),
})
t2 = t2 + 1
LuHf_3000_df = pd.DataFrame(d)
d = []
t1, t2 = 500, 0
while t2 < 500:
d.append({'t1': t1,
't2': t2,
'176Lu/177Hf': Lu_DM,
'176Hf/177Hf_DM': gc.calc_t2_daughter(Hf_DM_500, LuHf_event, decay_const_177, t1, t2),
'176Hf/177Hf_CHUR': gc.calc_t2_daughter(Hf_CHUR_500, LuHf_event, decay_const_177, t1, t2),
})
t2 = t2 + 1
LuHf_500_df = pd.DataFrame(d)
# + slideshow={"slide_type": "slide"}
figure12 = gc.get_figure("176Hf/177Hf", "176Hf/177Hf", "Age (Ma)", [0,4570], [0.279,0.29])
figure12.line(LuHf_df['t1'], LuHf_df['176Hf/177Hf_DM'], color="darkred", legend_label="Depleted Mantle")
figure12.line(LuHf_df['t1'], LuHf_df['176Hf/177Hf_CHUR'], color="darkblue", legend_label="CHUR")
figure12.line(LuHf_3000_df['t2'], LuHf_3000_df['176Hf/177Hf_CHUR'], color="lightblue", legend_label="3000 Ma Event (CHUR)")
figure12.line(LuHf_3000_df['t2'], LuHf_3000_df['176Hf/177Hf_DM'], color="pink", legend_label="3000 Ma Event (DM)")
figure12.line(LuHf_500_df['t2'], LuHf_500_df['176Hf/177Hf_CHUR'], color="blue", legend_label="500 Ma Event (CHUR)")
figure12.line(LuHf_500_df['t2'], LuHf_500_df['176Hf/177Hf_DM'], color="red", legend_label="500 Ma Event (DM)")
# + slideshow={"slide_type": "slide"}
show(figure12)
# + [markdown] slideshow={"slide_type": "subslide"}
# Assume that the 3000 Ma crust melts at 1000 Ma to produce a felsic igneous rock which crystallizes to form zircon with a 176Lu/177Hf value of 0.00001. Calculate and graph the evolution of this zircon to the present day.
# + slideshow={"slide_type": "fragment"}
c_event3 = 1000
LuHf_event = 0.00001
Hf_DM_1000 = LuHf_3000_df[LuHf_3000_df['t2'] == 1000].values[0][-2]
Hf_CHUR_1000 = LuHf_3000_df[LuHf_3000_df['t2'] == 1000].values[0][-1]
d = []
t1, t2 = 1000, 0
while t2 < 1000:
d.append({'t1': t1,
't2': t2,
'176Hf/177Hf_DM': gc.calc_t2_daughter(Hf_DM_1000, LuHf_event, decay_const_177, t1, t2),
'176Hf/177Hf_CHUR': gc.calc_t2_daughter(Hf_CHUR_1000, LuHf_event, decay_const_177, t1, t2),
})
t2 = t2 + 1
LuHf_1000_df = pd.DataFrame(d)
# + slideshow={"slide_type": "slide"}
figure11.line(LuHf_1000_df['t2'], LuHf_1000_df['176Hf/177Hf_CHUR'], color="lightblue", legend_label="1000 Ma Event (CHUR)")
figure11.line(LuHf_1000_df['t2'], LuHf_1000_df['176Hf/177Hf_DM'], color="pink", legend_label="1000 Ma Event (DM)")
# + slideshow={"slide_type": "slide"}
show(figure11)
# + [markdown] slideshow={"slide_type": "subslide"}
# Assume that zircon from an igneous rock sample is analysed and provides the following composition:
# 176Lu/177Hf= 0.0003
# 176Hf/177Hf= 0.286000
# + slideshow={"slide_type": "fragment"}
# + [markdown] slideshow={"slide_type": "subslide"}
# The igneous unit has previously been dated at 500 Ma by U-Pb
# + slideshow={"slide_type": "fragment"}
# + [markdown] slideshow={"slide_type": "subslide"}
# Calculate the initial composition (ratio and epsilon value). What is its T(DM) and T(2DM)?
# + slideshow={"slide_type": "fragment"}
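One hedged way to start this exercise is to back-calculate the zircon's initial ratio to the 500 Ma crystallization age and compare it with CHUR at that time to get an epsilon value; the T(DM) and T(2DM) model ages are left to the reader. The function names below are illustrative, using the decay constant and CHUR values already defined in this notebook:

```python
import math

LAMBDA_LU176 = 1.867e-11                    # decay constant, per year
LU_HF_CHUR, HF_HF_CHUR = 0.0336, 0.282785   # present-day CHUR values used above

def initial_ratio(hf_present, lu_hf_present, t_ma):
    """Rearranged isochron: (176Hf/177Hf)_i = present - (176Lu/177Hf) * (e^(lambda*t) - 1)."""
    return hf_present - lu_hf_present * math.expm1(LAMBDA_LU176 * t_ma * 1e6)

def epsilon_hf(hf_initial, t_ma):
    """Epsilon-Hf: deviation of the sample's initial ratio from CHUR at time t, in parts per 10^4."""
    chur_t = initial_ratio(HF_HF_CHUR, LU_HF_CHUR, t_ma)
    return (hf_initial / chur_t - 1.0) * 1e4

# zircon composition given above, at the 500 Ma U-Pb age
init = initial_ratio(0.286000, 0.0003, 500)
eps = epsilon_hf(init, 500)
```

Because the zircon's 176Lu/177Hf is tiny, the initial ratio is barely different from the measured one; the (strongly positive) epsilon value then follows directly from the comparison with CHUR at 500 Ma.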
# + [markdown] slideshow={"slide_type": "subslide"}
# Assume the crustal evolution history used in the previous assignment and determine possible scenarios for mixing of multiple end-members to explain the composition of the zircon and host igneous rock sample at its time of formation. Assume that Hf concentrations in mantle and crust are 0.203 and 4.370, respectively and that Lu concentrations are 0.1 and 5.0, respectively.
# + slideshow={"slide_type": "fragment"}
# + [markdown] slideshow={"slide_type": "subslide"}
# Illustrate the scenarios graphically in Excel for both ratio and epsilon situations
# -
|
Geochron_Calculations_LuHf.ipynb
|
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# ### Import packages
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
import seaborn as sns
from sklearn.model_selection import train_test_split
from sklearn import datasets, linear_model
# ### Set-up
# We will read the dataset that we created in the previous exercise.
df = pd.read_csv("DSDPartners_Data.csv", encoding='ISO-8859-1')
df.head()
#also try Operator Adjustments as potential target
target = 'PropOrderQty'
# ## Linear Regression
# ### Set-up X and y
y = np.asarray(df[target])
#y = np.reshape(y,(y.shape[0],1))
X = df.drop(['CustStorItemTriadID','BaseorderID','Createdate','ModelUsed','RecDeliveryDate',
'ConversionFactor','Previous2DelDate','MaxScanDate','MaxShipDate','Reviewed','IncInAnom',
'PrevDeliveryDate'], axis = 1).drop(target, axis=1).fillna(0)
#df.isna().sum()
X.head()
# +
#Establish training and testing data sets
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.5, random_state=314)
len(X_train), len(X_test), len(y_test), len(y_train)
# +
from sklearn import datasets, linear_model
from sklearn.metrics import mean_squared_error, r2_score
# Create linear regression object
regr = linear_model.LinearRegression()
# Train the model using the training sets
regr.fit(X_train, y_train)
# Make predictions using the testing set
y_pred = regr.predict(X_test)
# -
import sklearn.metrics as sm
print("Mean absolute error =", round(sm.mean_absolute_error(y_test, y_pred), 2))
print("Mean squared error =", round(sm.mean_squared_error(y_test, y_pred), 2))
print("Median absolute error =", round(sm.median_absolute_error(y_test, y_pred), 2))
print("Explained variance score =", round(sm.explained_variance_score(y_test, y_pred), 2))
print("R2 score =", round(sm.r2_score(y_test, y_pred), 2))
fig, ax = plt.subplots()
ax.scatter(y_test, y_pred)
ax.plot([y_test.min(), y_test.max()], [y_test.min(), y_test.max()], 'k--', lw=4)
ax.set_xlabel('measured')
ax.set_ylabel('predicted')
plt.show()
# +
#Determine which features are most important to the model
import matplotlib.pyplot as pyplot
importance = regr.coef_
# summarize feature importance
for i,v in enumerate(importance):
print('Feature: %0d, Score: %.5f' % (i,v))
# plot feature importance
pyplot.bar([x for x in range(len(importance))], importance)
pyplot.show()
# -
X.columns
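The loop above reports coefficients by feature index only, which is hard to read. A small self-contained sketch of pairing coefficients with column names and sorting by magnitude (the demo frame and fitted model here are toy stand-ins for `X` and `regr`, not the project data):

```python
import numpy as np
import pandas as pd
from sklearn.linear_model import LinearRegression

# tiny stand-in for the project's feature frame and fitted model
X_demo = pd.DataFrame({"a": [0, 1, 2, 3], "b": [1, 0, 1, 0]})
y_demo = np.array([0.0, 2.0, 4.0, 6.0])
regr_demo = LinearRegression().fit(X_demo, y_demo)

# pair each coefficient with its column name, largest magnitude first
coef_table = (pd.Series(regr_demo.coef_, index=X_demo.columns)
                .sort_values(key=np.abs, ascending=False))
print(coef_table)
```

Applied to the real model, `pd.Series(regr.coef_, index=X.columns)` gives the same readable view.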
# ## Neural Networks
# imports
from keras.models import Sequential
from keras.layers import Dense
from keras.layers import Dropout
from sklearn.metrics import r2_score
import matplotlib.pyplot as plt
import numpy
from keras.optimizers import Adam
import keras
from matplotlib import pyplot
from keras.callbacks import EarlyStopping
import pandas as pd
from sklearn.preprocessing import LabelEncoder
# +
# Create model
model = Sequential()
model.add(Dense(X.shape[1], activation="relu", input_dim=X.shape[1]))
model.add(Dense(int(X.shape[1] * 0.75), activation="relu"))  # units must be an integer
model.add(Dense(1))
# Compile model: The model is initialized with the Adam optimizer and then it is compiled.
model.compile(loss='mean_squared_error', optimizer=Adam(lr=1e-3, decay=1e-3 / 200))
# Patient early stopping (note: patience=200 exceeds the 100 training epochs below, so it never triggers here)
es = EarlyStopping(monitor='val_loss', mode='min', verbose=1, patience=200)
# Fit the model
history = model.fit(X_train, y_train, validation_data=(X_test, y_test), epochs=100, batch_size=10, verbose=2, callbacks=[es])
# Calculate predictions
PredTestSet = model.predict(X_train)
PredValSet = model.predict(X_test)
# -
print("Mean absolute error =", round(sm.mean_absolute_error(y_test, PredValSet), 2))
print("Mean squared error =", round(sm.mean_squared_error(y_test, PredValSet), 2))
print("Median absolute error =", round(sm.median_absolute_error(y_test, PredValSet), 2))
print("Explain variance score =", round(sm.explained_variance_score(y_test, PredValSet), 2))
print("R2 score =", round(sm.r2_score(y_test, PredValSet), 3))
pyplot.plot(history.history['loss'], label='train')
pyplot.plot(history.history['val_loss'], label='test')
pyplot.legend()
pyplot.show()
#Neural Network Model Accuracy
r_squared = r2_score(y_test,PredValSet)
#add RMSE,MSE, MAE
adjusted_r_squared = 1 - (1-r_squared)*(len(y)-1)/(len(y)-X.shape[1]-1)
r_squared, adjusted_r_squared
# +
#Use the below code to see what percent of predictions fall within 3, 4 or 5 units of actual.
#As of Sunday night, 98.6, 98.0 and 96.8 percent of predictions are within +-3, +-4 or +-5 of actual.
#Biggest improvement: the model gets it exactly right 83% of the time with the new features, compared to 70% prior.
y_test_vals =np.reshape(y_test,(y_test.shape[0],))
Preds = np.reshape(PredValSet,(PredValSet.shape[0],))
compare = pd.DataFrame(np.array([y_test_vals, Preds]))
compare = np.transpose(compare)
compare.to_csv(r'compare.csv', index = False)
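The within-±3/±4/±5 percentages described in the comments above can also be computed directly instead of exporting to CSV. A sketch with a hypothetical helper `within_tolerance` (my own name, not part of the project) and toy arrays in place of `y_test_vals` / `Preds`:

```python
import numpy as np

def within_tolerance(actual, predicted, tol):
    """Fraction of predictions within +-tol of the actual value."""
    actual = np.asarray(actual, dtype=float)
    predicted = np.asarray(predicted, dtype=float)
    return float(np.mean(np.abs(actual - predicted) <= tol))

# toy example (not the project data)
actual = np.array([10, 12, 15, 20])
predicted = np.array([11, 15, 15, 30])
for tol in (3, 4, 5):
    print(tol, within_tolerance(actual, predicted, tol))
```

Calling `within_tolerance(y_test_vals, Preds, 3)` and so on reproduces the reported percentages without a spreadsheet round-trip.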
# +
## Determine variable importance in neural network model
from keras.wrappers.scikit_learn import KerasClassifier, KerasRegressor
import eli5
from eli5.sklearn import PermutationImportance
def base_model():
model = Sequential()
model.add(Dense(X.shape[1], activation="relu", input_dim=X.shape[1]))
model.add(Dense(36, activation="relu"))
model.add(Dense(1))
model.compile(loss='mean_squared_error', optimizer=Adam(lr=1e-3, decay=1e-3 / 200))
return model
my_model = KerasRegressor(build_fn=base_model)
my_model.fit(X_train, y_train, validation_data=(X_test, y_test),epochs=100)
perm = PermutationImportance(my_model, random_state=1).fit(X_train, y_train, validation_data=(X_test, y_test),epochs=10)
eli5.show_weights(perm, feature_names = X.columns.tolist())
# -
# ## Gradient Boost Regressor
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
import seaborn as sns
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier
from sklearn.model_selection import GridSearchCV
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import RandomizedSearchCV
from sklearn.metrics import roc_curve, roc_auc_score
# +
from sklearn.ensemble import GradientBoostingRegressor
gbm = GradientBoostingRegressor(random_state=314)
param_grid = {'n_estimators': [100,300,500,1000],
'learning_rate': [.025, 0.05, .25, 0.5],
              'criterion': ['friedman_mse', 'mse', 'mae'],
              'loss': ['ls', 'lad', 'huber', 'quantile']}
gbm_rs = RandomizedSearchCV(gbm, param_grid, cv=3, n_iter=100, n_jobs=-1, random_state=314)
gbm_rs.fit(X_train, y_train)  # fit the search object, not the bare estimator
print ('Best GBM Parameters:', gbm_rs.best_params_)
#gbm_scores_train = gbm_rs.predict_proba(X_train)[:, 1]
#gbm_scores_test = gbm_rs.predict_proba(X_test)[:, 1]
gbm_scores_train = gbm_rs.predict(X_train)
gbm_scores_test = gbm_rs.predict(X_test)
#gbm_fpr_train, gbm_tpr_train, _ = roc_curve(y_train, gbm_scores_train)
#gbm_fpr_test, gbm_tpr_test, _ = roc_curve(y_test, gbm_scores_test)
# +
from sklearn.ensemble import GradientBoostingRegressor
gbrt=GradientBoostingRegressor(n_estimators=100, learning_rate=.025,criterion='friedman_mse')
param_grid = {'n_estimators': [100,300,500,1000],
'learning_rate': [.025,.05,.25,.5],
'criterion': ['friedman_mse','mse','mae'],
'loss': ['ls','lad','huber','quantile']}
gb_rs = RandomizedSearchCV(gbrt,param_grid,n_jobs=-1,random_state=314)
gb_rs.fit(X_train, y_train)
#y_pred=gbrt.predict(X_test)
#gbm_scores_test = gbm_rs.predict(X_test)
# +
from sklearn.ensemble import GradientBoostingRegressor
gbrt=GradientBoostingRegressor(n_estimators=100, learning_rate=.1,criterion='friedman_mse',loss='ls')
gbrt.fit(X_train, y_train)  # fit gbrt itself so it can be used for prediction below
#y_pred=gbrt.predict(X_test)
#gbm_scores_test = gbm_rs.predict(X_test)
# -
y_pred=gbrt.predict(X_test)
#gb_rs.best_params_
#Gradient Boosting Model R2
r2_score(y_test,y_pred)
print("Mean absolute error =", round(sm.mean_absolute_error(y_test, y_pred), 2))
print("Mean squared error =", round(sm.mean_squared_error(y_test, y_pred), 2))
print("Median absolute error =", round(sm.median_absolute_error(y_test, y_pred), 2))
print("Explain variance score =", round(sm.explained_variance_score(y_test, y_pred), 2))
print("R2 score =", round(sm.r2_score(y_test, y_pred), 3))
# ### Keras
# evaluate the model on the full dataset
scores = model.evaluate(X, y, verbose=0)
scores
predictions = model.predict(X)
r2_score(y, predictions)
# # Optimization
# +
#best is batch size of 10 with 100 epochs
import numpy
from sklearn.model_selection import GridSearchCV
from keras.wrappers.scikit_learn import KerasRegressor
def create_model():
model = Sequential()
model.add(Dense(42, activation="relu", input_dim=X.shape[1]))
#model.add(Dense(36, activation="relu"))
model.add(Dense(1))
model.compile(loss='mean_squared_error', optimizer=Adam(lr=1e-3, decay=1e-3 / 200))
return model
seed=314
model = KerasRegressor(build_fn=create_model, verbose=0)
batch_size = [10,100]
epochs = [10,100]
param_grid = dict(batch_size=batch_size, epochs=epochs)
grid = GridSearchCV(estimator=model, param_grid=param_grid, n_jobs=-1, cv=3)
grid_result = grid.fit(X_train, y_train)
print("Best: %f using %s" % (grid_result.best_score_, grid_result.best_params_))
means = grid_result.cv_results_['mean_test_score']
stds = grid_result.cv_results_['std_test_score']
params = grid_result.cv_results_['params']
for mean, stdev, param in zip(means, stds, params):
print("%f (%f) with: %r" % (mean, stdev, param))
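With `refit=True` (the default), `GridSearchCV` automatically retrains the winning configuration on the full training data, so the tuned model is available directly via `best_estimator_`. A minimal self-contained sketch of this behavior, using a plain `Ridge` regressor as a stand-in (no Keras needed, and the data is synthetic):

```python
from sklearn.datasets import make_regression
from sklearn.linear_model import Ridge
from sklearn.model_selection import GridSearchCV

# synthetic regression data standing in for the project's X_train / y_train
X_demo, y_demo = make_regression(n_samples=200, n_features=5, noise=0.1,
                                 random_state=314)

grid = GridSearchCV(Ridge(), {"alpha": [0.1, 1.0, 10.0]}, cv=3)
grid.fit(X_demo, y_demo)

# with refit=True (default) the winning config is already retrained on all of X_demo
best = grid.best_estimator_
print(grid.best_params_, best.score(X_demo, y_demo))
```

The same pattern applies to the `KerasRegressor` grid above: `grid_result.best_estimator_` can be used for prediction without manually rebuilding the model with the best batch size and epoch count.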
|
DSD_finalmodels.ipynb
|
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
import seaborn as sns
df = pd.read_csv("winequalityN.csv")
df.head()
#the quality values only range from 3 to 9
set(df.quality)
# a mapping dictionary that maps the quality values 3-9 to 0-6
quality_mapping = {3:0,
4:1,
5:2,
6:3,
7:4,
8:5,
9:6}
df.loc[:,"quality"] = df.quality.map(quality_mapping)
df.head()
df.shape
df.isna().sum()
df = df.fillna(value=df.mean())
df.isna().sum()
# +
#using sample with frac=1 to shuffle the dataset
#reset the index after shuffling
df = df.sample(frac=1).reset_index(drop= True)
#selecting first 1000 rows for training set
df_train = df.head(1000)
#selecting last 600 rows for testing/validation
df_test = df.tail(600)
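Taking the head and tail of a shuffled frame works, but for an imbalanced target like `quality` a stratified split keeps class proportions equal in both sets. A self-contained sketch using `sklearn.model_selection.train_test_split` on a toy frame (the toy data is illustrative only):

```python
import pandas as pd
from sklearn.model_selection import train_test_split

# toy frame standing in for the wine data
toy = pd.DataFrame({"feat": range(20), "quality": [0, 1] * 10})

# stratify keeps the quality distribution identical in both splits
toy_train, toy_test = train_test_split(
    toy, test_size=0.3, stratify=toy["quality"], random_state=42)
print(toy_train["quality"].value_counts().to_dict())
print(toy_test["quality"].value_counts().to_dict())
```

The same call with `stratify=df["quality"]` would replace the head/tail split above.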
# +
from sklearn import tree
from sklearn import metrics
#initialize decision tree classifier
#with max_depth of 3
clf = tree.DecisionTreeClassifier(max_depth =3)
# -
features = df.columns[1:-1]
print(features)
#Information about the data columns
df.info()
# +
#Here we see that fixed acidity does little to discriminate between quality levels.
fig = plt.figure(figsize = (10,6))
sns.barplot(x = 'quality', y = 'fixed acidity', data = df)
#so should we drop this column??
# -
#train the model on the provided features
# and the quality values mapped earlier
clf.fit(df_train[features],df_train.quality)
# +
# generate prediction on the training set
train_predictions = clf.predict(df_train[features])
# generate prediction on the testing set
test_predictions = clf.predict(df_test[features])
# calculate the accuracy of the predictions on the train data set
train_accuracy = metrics.accuracy_score(df_train.quality,train_predictions)
# calculate the accuracy of the predictions on the test data set
test_accuracy = metrics.accuracy_score(df_test.quality,test_predictions)
# -
print("train accuracy {}".format(train_accuracy))
print("test accuracy {}".format(test_accuracy))
#increasing the depth to 7
clf = tree.DecisionTreeClassifier(max_depth =7)
clf.fit(df_train[features],df_train.quality)
#recompute accuracies for the deeper tree before printing
train_accuracy = metrics.accuracy_score(df_train.quality, clf.predict(df_train[features]))
test_accuracy = metrics.accuracy_score(df_test.quality, clf.predict(df_test[features]))
print("train accuracy {}".format(train_accuracy))
print("test accuracy {}".format(test_accuracy))
#performance drops on the test set: overfitting detected
# +
#initializing the lists to store train and test accuracies
# we start from 50%
test_accuracies = [0.5]
train_accuracies = [0.5]
#iterating over a range of depths
for i in range(1,25):
clf = tree.DecisionTreeClassifier(max_depth = i)
clf.fit(df_train[features],df_train.quality)
# generate prediction on the training set
train_predictions = clf.predict(df_train[features])
# generate prediction on the testing set
test_predictions = clf.predict(df_test[features])
# calculate the accuracy of the predictions on the train data set
train_accuracy = metrics.accuracy_score(df_train.quality,train_predictions)
# calculate the accuracy of the predictions on the test data set
test_accuracy = metrics.accuracy_score(df_test.quality,test_predictions)
    #append accuracies
train_accuracies.append(train_accuracy)
test_accuracies.append(test_accuracy)
#plotting
plt.figure(figsize=(10,5))
sns.set_style("whitegrid")
plt.plot(train_accuracies,label="train accuracy")
plt.plot(test_accuracies,label="test accuracy")
plt.legend(loc ="upper left",prop ={'size':15})
plt.xticks(range(0,26,5))
plt.xlabel("max_depth", size=20)
plt.ylabel("accuracy", size=20)
plt.show()
# -
# <H2>Weeee...Woooo....Weeee...Woooo IT'S OVERFITTING!!!!!</H2>
#
# 
# ## Simple KFold Cross Validation
from sklearn import model_selection
df_new = df_train.copy()  # copy so df_train is not modified
#create a new column called kfold and fill it with -1
df_new["kfold"]=-1
df_new = df_new.sample(frac=1).reset_index(drop=True)
kf = model_selection.KFold(n_splits=6)
#fill the new kfold column
for fold,(trn_,val_) in enumerate(kf.split(X=df_new)):
df_new.loc[val_,'kfold'] =fold
df_new.to_csv("train_folds.csv",index=False)
df_new
pd.set_option("display.max_rows", None, "display.max_columns", None)
print(df_new["kfold"])
# ## Stratified KFold Cross Validation
df_new = df_train.copy()  # copy so df_train is not modified
#create a new column called kfold and fill it with -1
df_new["kfold"]=-1
df_new = df_new.sample(frac=1).reset_index(drop=True)
y = df_new.quality.values  # take labels after shuffling so rows and labels stay aligned
kf = model_selection.StratifiedKFold(n_splits=6)
#fill the new kfold column
for fold,(trn_,val_) in enumerate(kf.split(X=df_new,y=y)):
df_new.loc[val_,'kfold'] =fold
df_new.to_csv("train_stratkfolds.csv",index=False)
df_new
b =sns.countplot(x='quality',data=df_new)
b.set_xlabel("quality",fontsize=20)
b.set_ylabel("count",fontsize=20)
# +
count=0
for i in df.quality:
if(i == 6):
count+=1
print(count)
# +
f, axes = plt.subplots(3, 2)
sns.countplot(x='quality',data=df_new[df_new["kfold"]==0],ax=axes[0,0])
sns.countplot(x='quality',data=df_new[df_new["kfold"]==1],ax=axes[0,1])
sns.countplot(x='quality',data=df_new[df_new["kfold"]==2],ax=axes[1,0])
sns.countplot(x='quality',data=df_new[df_new["kfold"]==3],ax=axes[1,1])
sns.countplot(x='quality',data=df_new[df_new["kfold"]==4],ax=axes[2,0])
sns.countplot(x='quality',data=df_new[df_new["kfold"]==5],ax=axes[2,1])
plt.show()
# -
|
Ch2_Cross_validation/Cross_Valid_WineQuality.ipynb
|
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: 'Python 3.9.7 64-bit (''ml'': conda)'
# language: python
# name: python3
# ---
# +
import numpy as np
from utils import *
# +
def ST(f, M):
h = get_length(f, 0)
w = get_length(f, 1)
g = np.zeros((h, w))
for x in range(h):
for y in range(w):
            u = int(M[0,0] * x + M[0,1] * y + M[0,2])
            v = int(M[1,0] * x + M[1,1] * y + M[1,2])
            # u is a row index (bounded by h), v a column index (bounded by w)
            if 0 <= u < h and 0 <= v < w:
                g[x,y] = f[u,v]
return g
def create_flipv(h):
"""
-1, 0, h - 1
0, 1, 0
"""
M = np.zeros((2, 3))
M[0, 0] = -1
M[0, 2] = h - 1
M[1, 1] = 1
return M
def create_fliph(w):
"""
1, 0, 0
0, -1, w-1
"""
M = np.zeros((2, 3))
M[0, 0] = 1
M[1, 1] = -1
M[1, 2] = w - 1
return M
def create_scale(s):
"""
1/s, 0, 0
0, 1/s, 0
"""
M = np.zeros((2, 3))
M[0, 0] = 1 / s
M[1, 1] = 1 / s
return M
def create_translation(tx, ty):
"""
1, 0, -tx
0, 1, -ty
"""
M = np.zeros((2, 3))
M[0, 0] = 1
M[0, 2] = -tx
M[1, 1] = 1
M[1, 2] = -ty
return M
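Each of these 2x3 matrices can be chained by lifting to 3x3 homogeneous form and multiplying; under the inverse-mapping convention of `ST` (output pixel looked up at `M @ (x, y, 1)`), applying transform A then transform B corresponds to the single matrix `M_A @ M_B`. A self-contained sketch with a `compose` helper of my own (not part of `utils`), using the matrices `create_scale(0.5)` and `create_scale(2)` would produce:

```python
import numpy as np

def compose(M1, M2):
    """Combine two 2x3 inverse-mapping matrices: applying M1's transform
    then M2's equals the single matrix (M1 @ M2) in homogeneous form."""
    H1 = np.vstack([M1, [0, 0, 1]])
    H2 = np.vstack([M2, [0, 0, 1]])
    return (H1 @ H2)[:2, :]

# scale by 0.5 (matrix has 1/s = 2) then by 2 (matrix has 1/s = 0.5)
M_shrink = np.array([[2.0, 0, 0], [0, 2.0, 0]])
M_enlarge = np.array([[0.5, 0, 0], [0, 0.5, 0]])
I = compose(M_shrink, M_enlarge)  # identity: diag(1, 1), zero translation
print(I)
```

This lets `ST` apply an arbitrary chain of flips, scales, and translations in one pass over the image.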
def main(imp):
f = load_img(imp)
show_img("f", f)
h = get_length(f, 0)
w = get_length(f, 1)
show_img("Flip V", ST(f, create_flipv(h)))
show_img("Flip H", ST(f, create_fliph(w)))
show_img("Scale 0.5", ST(f, create_scale(0.5)))
show_img("Scale 2", ST(f, create_scale(2)))
show_img("Tx=30,Ty=50", ST(f, create_translation(30,50)))
imp = "images/lena256.bmp"
main(imp)
|
dip_notes/09_st_matrix.ipynb
|
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# %load_ext nb_black
# %config InlineBackend.figure_format = 'retina'
import pandas as pd
def logs_to_dataframe(logs):
rows = []
for line in logs.split("\n"):
if len(line) == 0:
continue
path, elapsed = line.split(",")
rows.append({"path": path, "elapsed": float(elapsed)})
return pd.DataFrame(rows)
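Beyond plotting, a few summary statistics make the worker configurations easier to compare. A sketch with a hypothetical `summarize_elapsed` helper (my own name), assuming the `path`/`elapsed` frame produced by `logs_to_dataframe` above; the toy frame here is illustrative, not real benchmark output:

```python
import pandas as pd

def summarize_elapsed(df):
    """Basic latency summary for a frame with an 'elapsed' column."""
    return {
        "count": int(df["elapsed"].count()),
        "mean": float(df["elapsed"].mean()),
        "p95": float(df["elapsed"].quantile(0.95)),
        "max": float(df["elapsed"].max()),
    }

toy = pd.DataFrame({"path": ["a", "b", "c", "d"],
                    "elapsed": [0.1, 0.2, 0.3, 0.4]})
print(summarize_elapsed(toy))
```

Running it on each parsed log (`summarize_elapsed(logs_to_dataframe(...))`) gives one comparable row per gunicorn worker type.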
# +
# dont_test
logs_gunicorn_sync_2_workers = """
data/10000000_10_125000000/1,0.13233324800000013
data/10000000_10_125000000/2,0.1391417800000001
data/10000000_10_125000000/4,0.13696566499999996
data/10000000_10_125000000/3,0.20793659200000025
data/10000000_10_125000000/5,0.17613249799999986
data/10000000_10_125000000/6,0.16959439799999965
data/10000000_10_125000000/7,0.16980598999999996
data/10000000_10_125000000/8,0.18768835400000006
data/10000000_10_125000000/9,0.19619635400000002
data/10000000_10_125000000/10,0.12048396400000039
data/10000000_10_125000000/12,0.17553784299999986
data/10000000_10_125000000/11,0.17659414099999982
data/10000000_10_125000000/14,0.15443146900000038
data/10000000_10_125000000/13,0.1564723739999998
data/10000000_10_125000000/15,0.19070393700000032
data/10000000_10_125000000/16,0.18991380699999993
data/10000000_10_125000000/17,0.14951232099999956
data/10000000_10_125000000/18,0.15861024099999987
data/10000000_10_125000000/19,0.16118835099999984
data/10000000_10_125000000/20,0.1799300239999999
data/10000000_10_125000000/21,0.16416603999999957
data/10000000_10_125000000/22,0.20302186599999938
data/10000000_10_125000000/23,0.17241555699999989
data/10000000_10_125000000/24,0.13072230800000018
data/10000000_10_125000000/25,0.1767661389999997
data/10000000_10_125000000/26,0.16966930999999974
data/10000000_10_125000000/27,0.15800878100000038
data/10000000_10_125000000/28,0.15136135599999978
data/10000000_10_125000000/30,0.30392040899999984
data/10000000_10_125000000/29,0.3209907859999994
data/10000000_10_125000000/31,0.15475920699999968
data/10000000_10_125000000/32,0.15288592500000053
data/10000000_10_125000000/33,0.19010990299999975
data/10000000_10_125000000/34,0.1876869869999993
data/10000000_10_125000000/35,0.1483351620000004
data/10000000_10_125000000/36,0.14837874799999984
data/10000000_10_125000000/37,0.19151743499999974
data/10000000_10_125000000/38,0.1915203139999999
data/10000000_10_125000000/39,0.148607159
data/10000000_10_125000000/40,0.150989257
data/10000000_10_125000000/41,0.16669448900000017
data/10000000_10_125000000/42,0.17037444300000004
data/10000000_10_125000000/44,0.13860771500000002
data/10000000_10_125000000/43,0.20704748400000028
data/10000000_10_125000000/45,0.16522986399999962
data/10000000_10_125000000/46,0.17184321800000024
data/10000000_10_125000000/47,0.17469332000000026
data/10000000_10_125000000/48,0.17866594999999919
data/10000000_10_125000000/49,0.16920686300000032
data/10000000_10_125000000/50,0.16285504100000026
data/10000000_10_125000000/51,0.16766404899999987
data/10000000_10_125000000/52,0.17659219699999973
data/10000000_10_125000000/53,0.16599748299999995
data/10000000_10_125000000/54,0.1652587900000002
data/10000000_10_125000000/55,0.17141527999999973
data/10000000_10_125000000/56,0.16835534400000007
data/10000000_10_125000000/57,0.16872550100000083
data/10000000_10_125000000/58,0.1686464660000011
data/10000000_10_125000000/59,0.17021810800000026
data/10000000_10_125000000/60,0.17308598699999855
data/10000000_10_125000000/61,0.168336107
data/10000000_10_125000000/62,0.17003564799999893
data/10000000_10_125000000/63,0.1654905410000005
data/10000000_10_125000000/64,0.16328318599999925
data/10000000_10_125000000/65,0.1690795180000002
data/10000000_10_125000000/66,0.1676081259999993
data/10000000_10_125000000/67,0.17339162999999935
data/10000000_10_125000000/68,0.17365442400000042
data/10000000_10_125000000/69,0.16801133899999954
data/10000000_10_125000000/70,0.16788407700000008
data/10000000_10_125000000/71,0.17107996700000072
data/10000000_10_125000000/72,0.1704846549999992
data/10000000_10_125000000/73,0.16663613599999927
data/10000000_10_125000000/74,0.16240590099999963
data/10000000_10_125000000/75,0.1759734359999996
data/10000000_10_125000000/76,0.16498992900000076
data/10000000_10_125000000/77,0.175823909
data/10000000_10_125000000/78,0.17289628700000037
data/10000000_10_125000000/79,0.15828678699999976
data/10000000_10_125000000/80,0.1652505220000009
data/10000000_10_125000000/81,0.17736960599999918
data/10000000_10_125000000/82,0.1781365749999999
data/10000000_10_125000000/83,0.16241023799999965
data/10000000_10_125000000/84,0.16498699099999925
data/10000000_10_125000000/85,0.17131125999999952
data/10000000_10_125000000/86,0.16925666899999925
data/10000000_10_125000000/87,0.172995534
data/10000000_10_125000000/88,0.17221549699999983
data/10000000_10_125000000/89,0.1894249299999995
data/10000000_10_125000000/90,0.1335127620000005
data/10000000_10_125000000/91,0.17627253800000098
data/10000000_10_125000000/92,0.17055110599999956
data/10000000_10_125000000/93,0.15662888600000002
data/10000000_10_125000000/94,0.16696236900000017
data/10000000_10_125000000/96,0.1794062150000002
data/10000000_10_125000000/95,0.19177810899999947
data/10000000_10_125000000/98,0.15232991099999893
data/10000000_10_125000000/97,0.17174317199999933
data/10000000_10_125000000/99,0.15910907399999985
data/10000000_10_125000000/100,0.150944814999999
data/10000000_10_125000000/101,0.19744660199999942
data/10000000_10_125000000/102,0.19329486500000037
data/10000000_10_125000000/103,0.1507982900000009
data/10000000_10_125000000/104,0.17583517799999981
data/10000000_10_125000000/106,0.052728656000001095
data/10000000_10_125000000/107,0.0027764309999991355
data/10000000_10_125000000/105,0.08686955800000007
data/10000000_10_125000000/108,0.0012759689999999324
data/10000000_10_125000000/109,0.0011289330000003872
data/10000000_10_125000000/111,0.0013970549999999804
data/10000000_10_125000000/110,0.0014068089999987876
data/10000000_10_125000000/112,0.0014829720000015811
data/10000000_10_125000000/113,0.0014840100000004242
data/10000000_10_125000000/114,0.0014055969999997586
data/10000000_10_125000000/115,0.0014337460000000135
data/10000000_10_125000000/116,0.00115705700000035
data/10000000_10_125000000/117,0.0011629440000007207
data/10000000_10_125000000/118,0.001371556999998802
data/10000000_10_125000000/119,0.0013781549999993814
data/10000000_10_125000000/120,0.0012762520000002553
data/10000000_10_125000000/122,0.001363725000000926
data/10000000_10_125000000/123,0.0014017360000000423
data/10000000_10_125000000/121,0.3043763559999988
"""
# -
# dont_test
df = logs_to_dataframe(logs_gunicorn_sync_2_workers)
_ = df.elapsed.plot(figsize=(10, 6))
logs_gunicorn_gevent_worker = """
data/10000000_10_125000000/11,10.049786225000002
data/10000000_10_125000000/19,10.142580827
data/10000000_10_125000000/7,10.146840237
data/10000000_10_125000000/20,10.150704512
data/10000000_10_125000000/120,10.122840609999997
data/10000000_10_125000000/123,10.121602928
data/10000000_10_125000000/122,10.122118580999999
data/10000000_10_125000000/76,10.148149843999999
data/10000000_10_125000000/78,10.147318071
data/10000000_10_125000000/53,10.162819055
data/10000000_10_125000000/80,10.146296087000001
data/10000000_10_125000000/55,10.161437949000002
data/10000000_10_125000000/82,10.145368287
data/10000000_10_125000000/57,10.160199759999998
data/10000000_10_125000000/84,10.144561303
data/10000000_10_125000000/59,10.159090885999998
data/10000000_10_125000000/90,10.140924209999998
data/10000000_10_125000000/63,10.15698979
data/10000000_10_125000000/93,10.139958184000001
data/10000000_10_125000000/65,10.156054337
data/10000000_10_125000000/67,10.155146436999999
data/10000000_10_125000000/101,10.135774621
data/10000000_10_125000000/69,10.154169379
data/10000000_10_125000000/107,10.132057306999998
data/10000000_10_125000000/75,10.151017116000002
data/10000000_10_125000000/109,10.131148374
data/10000000_10_125000000/77,10.150131627
data/10000000_10_125000000/111,10.130328509999998
data/10000000_10_125000000/79,10.149146975
data/10000000_10_125000000/115,10.128527976
data/10000000_10_125000000/81,10.148195848
data/10000000_10_125000000/117,10.127728697999999
data/10000000_10_125000000/83,10.147369818000001
data/10000000_10_125000000/119,10.126898995
data/10000000_10_125000000/85,10.146481678
data/10000000_10_125000000/121,10.126121247
data/10000000_10_125000000/89,10.144125953000001
data/10000000_10_125000000/1,10.196405991
data/10000000_10_125000000/91,10.143193301
data/10000000_10_125000000/2,10.190029192999999
data/10000000_10_125000000/98,10.139254614000002
data/10000000_10_125000000/4,10.189262420999999
data/10000000_10_125000000/104,10.135686237000002
data/10000000_10_125000000/5,10.188641594999998
data/10000000_10_125000000/106,10.134586581
data/10000000_10_125000000/6,10.188061286
data/10000000_10_125000000/108,10.133765302999999
data/10000000_10_125000000/8,10.186739097
data/10000000_10_125000000/116,10.129841413000001
data/10000000_10_125000000/9,10.186163559
data/10000000_10_125000000/3,10.197953694999999
data/10000000_10_125000000/12,10.185583374
data/10000000_10_125000000/14,10.184939351
data/10000000_10_125000000/10,10.193246413
data/10000000_10_125000000/16,10.184358808999999
data/10000000_10_125000000/13,10.191344963000002
data/10000000_10_125000000/18,10.183583919000002
data/10000000_10_125000000/15,10.190677460000002
data/10000000_10_125000000/21,10.182984772000001
data/10000000_10_125000000/17,10.190088161
data/10000000_10_125000000/23,10.182389167
data/10000000_10_125000000/22,10.186476271
data/10000000_10_125000000/26,10.181791039
data/10000000_10_125000000/24,10.185824207000001
data/10000000_10_125000000/29,10.181200036
data/10000000_10_125000000/25,10.185252228
data/10000000_10_125000000/32,10.180613176999998
data/10000000_10_125000000/27,10.184676020000001
data/10000000_10_125000000/36,10.194895873
data/10000000_10_125000000/28,10.184101595000001
data/10000000_10_125000000/38,10.180104017000001
data/10000000_10_125000000/30,10.183538796
data/10000000_10_125000000/40,10.178259745000002
data/10000000_10_125000000/31,10.182969694
data/10000000_10_125000000/42,10.177541994999999
data/10000000_10_125000000/33,10.182395246
data/10000000_10_125000000/44,10.176756977
data/10000000_10_125000000/34,10.181812781000001
data/10000000_10_125000000/46,10.175866665999997
data/10000000_10_125000000/35,10.181170844
data/10000000_10_125000000/50,10.172830751
data/10000000_10_125000000/37,10.190080036000001
data/10000000_10_125000000/52,10.171157024
data/10000000_10_125000000/39,10.180427206
data/10000000_10_125000000/54,10.170058332
data/10000000_10_125000000/41,10.179056383999999
data/10000000_10_125000000/56,10.169087178
data/10000000_10_125000000/43,10.178144235000001
data/10000000_10_125000000/58,10.168071731000001
data/10000000_10_125000000/47,10.176564986999999
data/10000000_10_125000000/60,10.167033150999998
data/10000000_10_125000000/49,10.175300491999998
data/10000000_10_125000000/62,10.166064699
data/10000000_10_125000000/64,10.165158504999997
data/10000000_10_125000000/66,10.164190174
data/10000000_10_125000000/68,10.163336655000002
data/10000000_10_125000000/70,10.162265987000001
data/10000000_10_125000000/72,10.161002029999999
data/10000000_10_125000000/73,10.169775734999998
data/10000000_10_125000000/88,10.163308793000002
data/10000000_10_125000000/87,10.165899461000002
data/10000000_10_125000000/45,10.192329575999999
data/10000000_10_125000000/96,10.161089714000001
data/10000000_10_125000000/94,10.162418325
data/10000000_10_125000000/110,10.153332905000001
data/10000000_10_125000000/71,10.17709614
data/10000000_10_125000000/99,10.161127931
data/10000000_10_125000000/100,10.159951314999999
data/10000000_10_125000000/86,10.168621941000001
data/10000000_10_125000000/103,10.159180141
data/10000000_10_125000000/95,10.163916566000001
data/10000000_10_125000000/114,10.152976403
data/10000000_10_125000000/74,10.176083288000001
data/10000000_10_125000000/51,10.191375425
data/10000000_10_125000000/112,10.154442626999998
data/10000000_10_125000000/118,10.151580177
data/10000000_10_125000000/61,10.462965026
data/10000000_10_125000000/97,10.442389549000001
data/10000000_10_125000000/92,10.44519714
data/10000000_10_125000000/48,10.473605571
data/10000000_10_125000000/102,10.841431194
data/10000000_10_125000000/105,11.4475363
data/10000000_10_125000000/113,12.446504518000001
"""
# dont_test
df = logs_to_dataframe(logs_gunicorn_gevent_worker)
_ = df.elapsed.plot(figsize=(10, 6))
logs_gunicorn_eventlet = """
data/10000000_10_125000000/10,10.184719939
data/10000000_10_125000000/12,10.184644241
data/10000000_10_125000000/2,10.189455096
data/10000000_10_125000000/7,10.188595104
data/10000000_10_125000000/3,10.18903791
data/10000000_10_125000000/14,10.186163384
data/10000000_10_125000000/1,10.19606213
data/10000000_10_125000000/11,10.188827164
data/10000000_10_125000000/13,10.188504778
data/10000000_10_125000000/8,10.189711949
data/10000000_10_125000000/28,10.188679491000002
data/10000000_10_125000000/6,10.19747775
data/10000000_10_125000000/4,10.202771965
data/10000000_10_125000000/5,10.197630879
data/10000000_10_125000000/27,10.198612735000001
data/10000000_10_125000000/29,10.198348465999999
data/10000000_10_125000000/32,10.201460553
data/10000000_10_125000000/31,10.207318116
data/10000000_10_125000000/33,10.215371223000002
data/10000000_10_125000000/36,10.214723688
data/10000000_10_125000000/35,10.216656426
data/10000000_10_125000000/38,10.21710555
data/10000000_10_125000000/34,10.224452201
data/10000000_10_125000000/39,10.221920728
data/10000000_10_125000000/41,10.222048285
data/10000000_10_125000000/37,10.224276120999999
data/10000000_10_125000000/43,10.225547443
data/10000000_10_125000000/30,10.233102602
data/10000000_10_125000000/46,10.23923362
data/10000000_10_125000000/49,10.238331024
data/10000000_10_125000000/52,10.239438189
data/10000000_10_125000000/53,10.242015596
data/10000000_10_125000000/54,10.244025660000002
data/10000000_10_125000000/55,10.243423893
data/10000000_10_125000000/57,10.244574366
data/10000000_10_125000000/59,10.247448022
data/10000000_10_125000000/61,10.247466688
data/10000000_10_125000000/64,10.248910354
data/10000000_10_125000000/51,10.254479862
data/10000000_10_125000000/44,10.258631225000002
data/10000000_10_125000000/66,10.250132134000001
data/10000000_10_125000000/62,10.252318133
data/10000000_10_125000000/67,10.249673432999998
data/10000000_10_125000000/65,10.252885931
data/10000000_10_125000000/72,10.250956043999999
data/10000000_10_125000000/68,10.253180968999999
data/10000000_10_125000000/70,10.253000127
data/10000000_10_125000000/71,10.254830627
data/10000000_10_125000000/73,10.256339651000001
data/10000000_10_125000000/78,10.258200193
data/10000000_10_125000000/80,10.257791053999998
data/10000000_10_125000000/84,10.257923050999999
data/10000000_10_125000000/85,10.257961045999998
data/10000000_10_125000000/86,10.258829788
data/10000000_10_125000000/81,10.260892003999999
data/10000000_10_125000000/88,10.259086181
data/10000000_10_125000000/87,10.260039227
data/10000000_10_125000000/90,10.260308308
data/10000000_10_125000000/94,10.261200858999999
data/10000000_10_125000000/89,10.263648383
data/10000000_10_125000000/92,10.263308989
data/10000000_10_125000000/95,10.262204155000001
data/10000000_10_125000000/97,10.261563119
data/10000000_10_125000000/93,10.264244075
data/10000000_10_125000000/96,10.26348924
data/10000000_10_125000000/102,10.263946232
data/10000000_10_125000000/9,10.301802314
data/10000000_10_125000000/100,10.265266968999999
data/10000000_10_125000000/99,10.266049734
data/10000000_10_125000000/103,10.26539377
data/10000000_10_125000000/104,10.266776699000001
data/10000000_10_125000000/107,10.264800517000001
data/10000000_10_125000000/108,10.26602768
data/10000000_10_125000000/109,10.266278726
data/10000000_10_125000000/112,10.265707696
data/10000000_10_125000000/110,10.266982455
data/10000000_10_125000000/114,10.265863616
data/10000000_10_125000000/118,10.265107922999999
data/10000000_10_125000000/111,10.267253321999998
data/10000000_10_125000000/79,10.283389566
data/10000000_10_125000000/105,10.271341671
data/10000000_10_125000000/116,10.267321940999999
data/10000000_10_125000000/101,10.273624182999999
data/10000000_10_125000000/115,10.267311622000001
data/10000000_10_125000000/120,10.266694706
data/10000000_10_125000000/113,10.268713996
data/10000000_10_125000000/122,10.266908873999999
data/10000000_10_125000000/117,10.26799192
data/10000000_10_125000000/77,10.288387241
data/10000000_10_125000000/121,10.267407265
data/10000000_10_125000000/119,10.268824593
data/10000000_10_125000000/15,10.270173398
data/10000000_10_125000000/123,10.268984085
data/10000000_10_125000000/20,10.268106880000001
data/10000000_10_125000000/16,10.269731499999999
data/10000000_10_125000000/18,10.267971663
data/10000000_10_125000000/17,10.269288911
data/10000000_10_125000000/19,10.26812912
data/10000000_10_125000000/24,10.269486987
data/10000000_10_125000000/22,10.270921141
data/10000000_10_125000000/106,10.283296733
data/10000000_10_125000000/83,10.2940824
data/10000000_10_125000000/98,10.287201846
data/10000000_10_125000000/45,10.311855003000002
data/10000000_10_125000000/60,10.305778208
data/10000000_10_125000000/58,10.306894049
data/10000000_10_125000000/25,10.26960571
data/10000000_10_125000000/23,10.270902884000002
data/10000000_10_125000000/21,10.272020047000002
data/10000000_10_125000000/26,10.272069007999999
data/10000000_10_125000000/91,10.307479687999999
data/10000000_10_125000000/56,10.323989161
data/10000000_10_125000000/42,10.332979603
data/10000000_10_125000000/40,10.336185529
data/10000000_10_125000000/48,10.332800759
data/10000000_10_125000000/76,10.320437415
data/10000000_10_125000000/74,10.321928754
data/10000000_10_125000000/50,10.332989415
data/10000000_10_125000000/75,10.611571835
data/10000000_10_125000000/82,10.609119367
data/10000000_10_125000000/69,10.614836658000002
data/10000000_10_125000000/63,11.021881586
data/10000000_10_125000000/47,11.029639393
"""
# dont_test
df = logs_to_dataframe(logs_gunicorn_eventlet)
_ = df.elapsed.plot(figsize=(10, 6))
logs_gunicorn_120_threads = """
data/10000000_10_125000000/5,9.7534306
data/10000000_10_125000000/18,9.879486786000001
data/10000000_10_125000000/7,10.041890682
data/10000000_10_125000000/12,10.042334445000002
data/10000000_10_125000000/19,10.050457308
data/10000000_10_125000000/21,10.014103077000001
data/10000000_10_125000000/23,10.051826471
data/10000000_10_125000000/29,10.008919143
data/10000000_10_125000000/14,10.056218547999999
data/10000000_10_125000000/20,10.008920907
data/10000000_10_125000000/27,10.006882749999999
data/10000000_10_125000000/16,10.008426328999999
data/10000000_10_125000000/40,10.007380451
data/10000000_10_125000000/6,10.042293367000001
data/10000000_10_125000000/25,10.051188795999998
data/10000000_10_125000000/4,10.046511581999999
data/10000000_10_125000000/11,10.055740641
data/10000000_10_125000000/10,10.058752593
data/10000000_10_125000000/28,10.008557094
data/10000000_10_125000000/34,10.008137601
data/10000000_10_125000000/41,10.006055724
data/10000000_10_125000000/55,10.00475077
data/10000000_10_125000000/37,10.006095978
data/10000000_10_125000000/38,10.005419175999998
data/10000000_10_125000000/42,10.007430688000001
data/10000000_10_125000000/15,10.053646464000002
data/10000000_10_125000000/26,10.01057804
data/10000000_10_125000000/47,10.006250781000002
data/10000000_10_125000000/32,10.010841858000001
data/10000000_10_125000000/46,10.007213391
data/10000000_10_125000000/52,10.008139648999999
data/10000000_10_125000000/33,10.006741323
data/10000000_10_125000000/44,10.011018695999999
data/10000000_10_125000000/59,10.010344053999999
data/10000000_10_125000000/35,10.01243399
data/10000000_10_125000000/50,10.013958135
data/10000000_10_125000000/48,10.013802143000001
data/10000000_10_125000000/60,10.012354956
data/10000000_10_125000000/62,10.013773900999999
data/10000000_10_125000000/45,10.018343383000001
data/10000000_10_125000000/81,10.014606542
data/10000000_10_125000000/51,10.015721236000001
data/10000000_10_125000000/66,10.012982461
data/10000000_10_125000000/49,10.017521703
data/10000000_10_125000000/63,10.015465663
data/10000000_10_125000000/67,10.013837776999999
data/10000000_10_125000000/65,10.01594185
data/10000000_10_125000000/58,10.015853510000001
data/10000000_10_125000000/61,10.015682561999999
data/10000000_10_125000000/56,10.017132151999999
data/10000000_10_125000000/64,10.015442654000001
data/10000000_10_125000000/69,10.015416098000001
data/10000000_10_125000000/73,10.016928575
data/10000000_10_125000000/79,10.015193088999998
data/10000000_10_125000000/54,10.020262691000001
data/10000000_10_125000000/77,10.017853008000001
data/10000000_10_125000000/78,10.016912833
data/10000000_10_125000000/71,10.019507098
data/10000000_10_125000000/83,10.018484311000002
data/10000000_10_125000000/36,10.019772856000001
data/10000000_10_125000000/75,10.019658429000001
data/10000000_10_125000000/68,10.019277795999999
data/10000000_10_125000000/84,10.018474561
data/10000000_10_125000000/92,10.018489557
data/10000000_10_125000000/76,10.018322705000001
data/10000000_10_125000000/86,10.018484549000002
data/10000000_10_125000000/80,10.017787655000001
data/10000000_10_125000000/82,10.020203316
data/10000000_10_125000000/90,10.018032199000002
data/10000000_10_125000000/89,10.018942144999999
data/10000000_10_125000000/87,10.019289304
data/10000000_10_125000000/93,10.017621980000001
data/10000000_10_125000000/88,10.020627963
data/10000000_10_125000000/103,10.019131022000002
data/10000000_10_125000000/91,10.019489048
data/10000000_10_125000000/106,10.018594000000002
data/10000000_10_125000000/72,10.020299679999999
data/10000000_10_125000000/109,10.018566252000001
data/10000000_10_125000000/100,10.017950242
data/10000000_10_125000000/85,10.019773755
data/10000000_10_125000000/104,10.021095475000001
data/10000000_10_125000000/101,10.019118107999999
data/10000000_10_125000000/97,10.020531699
data/10000000_10_125000000/98,10.020273037000003
data/10000000_10_125000000/111,10.018147682999999
data/10000000_10_125000000/113,10.019450411
data/10000000_10_125000000/102,10.019812152
data/10000000_10_125000000/95,10.022530427
data/10000000_10_125000000/94,10.019893824999999
data/10000000_10_125000000/112,10.018966217
data/10000000_10_125000000/107,10.020450199999999
data/10000000_10_125000000/119,10.019118357
data/10000000_10_125000000/117,10.019432602999998
data/10000000_10_125000000/118,10.018596075999998
data/10000000_10_125000000/105,10.020632086
data/10000000_10_125000000/115,10.019250611999999
data/10000000_10_125000000/99,10.021887215999998
data/10000000_10_125000000/116,10.021272626000002
data/10000000_10_125000000/120,10.020416862
data/10000000_10_125000000/108,10.021727752999999
data/10000000_10_125000000/70,10.044299126999999
data/10000000_10_125000000/74,10.038137322
data/10000000_10_125000000/110,10.039624448000001
data/10000000_10_125000000/114,10.041261818999999
data/10000000_10_125000000/30,10.136347072000001
data/10000000_10_125000000/53,10.120595234000001
data/10000000_10_125000000/17,10.151263598000002
data/10000000_10_125000000/57,10.122831130000002
data/10000000_10_125000000/22,10.142321575
data/10000000_10_125000000/122,0.1677977129999988
data/10000000_10_125000000/31,10.137989476000001
data/10000000_10_125000000/2,10.180985981000001
data/10000000_10_125000000/9,10.178264232
data/10000000_10_125000000/123,0.007332185999999297
data/10000000_10_125000000/13,10.180813425999999
data/10000000_10_125000000/43,10.145055623000001
data/10000000_10_125000000/96,10.133915430000002
data/10000000_10_125000000/1,10.489431428
data/10000000_10_125000000/39,10.435408501000001
data/10000000_10_125000000/24,10.464666993
data/10000000_10_125000000/8,10.470107651
data/10000000_10_125000000/3,10.491772646
data/10000000_10_125000000/121,0.9838803550000002
"""
# dont_test
df = logs_to_dataframe(logs_gunicorn_120_threads)
_ = df.elapsed.plot(figsize=(10, 6))
logs_uvicorn_async = """
data/10000000_10_125000000/76,6.833490975
data/10000000_10_125000000/4,6.8391029670000005
data/10000000_10_125000000/104,6.838003313
data/10000000_10_125000000/80,6.8480928599999995
data/10000000_10_125000000/12,6.852861995
data/10000000_10_125000000/106,6.846324348
data/10000000_10_125000000/96,6.853801959
data/10000000_10_125000000/88,6.855233322
data/10000000_10_125000000/10,6.861634785
data/10000000_10_125000000/71,6.859930076000001
data/10000000_10_125000000/94,6.857132744000001
data/10000000_10_125000000/86,6.858719097
data/10000000_10_125000000/78,6.861408690999999
data/10000000_10_125000000/17,6.864899255999999
data/10000000_10_125000000/100,6.859420549
data/10000000_10_125000000/110,6.858577593
data/10000000_10_125000000/112,6.858554137
data/10000000_10_125000000/116,6.8583735720000005
data/10000000_10_125000000/8,6.868179236
data/10000000_10_125000000/98,6.861395097999999
data/10000000_10_125000000/20,6.867319949
data/10000000_10_125000000/23,6.867669544999999
data/10000000_10_125000000/3,6.873879709000001
data/10000000_10_125000000/7,6.87126375
data/10000000_10_125000000/120,6.861684735999999
data/10000000_10_125000000/90,6.866986024999999
data/10000000_10_125000000/118,6.863732507999999
data/10000000_10_125000000/69,6.872146911
data/10000000_10_125000000/114,6.868118553
data/10000000_10_125000000/102,6.871459867
data/10000000_10_125000000/82,6.878120139
data/10000000_10_125000000/108,6.877958173
data/10000000_10_125000000/73,6.884065569999999
data/10000000_10_125000000/84,6.885973236000001
data/10000000_10_125000000/92,6.885979599000001
data/10000000_10_125000000/121,0.5001234769999989
data/10000000_10_125000000/122,0.49906346100000043
data/10000000_10_125000000/123,0.5011086139999996
data/10000000_10_125000000/89,11.032744597
data/10000000_10_125000000/5,11.051084642000001
data/10000000_10_125000000/113,11.030516426
data/10000000_10_125000000/83,11.034537851
data/10000000_10_125000000/119,11.032404708
data/10000000_10_125000000/29,11.048712144
data/10000000_10_125000000/91,11.037720751
data/10000000_10_125000000/67,11.04127821
data/10000000_10_125000000/93,11.038579559999999
data/10000000_10_125000000/40,11.048697408999999
data/10000000_10_125000000/74,11.042011558
data/10000000_10_125000000/57,11.046417315000001
data/10000000_10_125000000/48,11.050267618
data/10000000_10_125000000/18,11.058596911
data/10000000_10_125000000/49,11.050924267
data/10000000_10_125000000/46,11.05186287
data/10000000_10_125000000/9,11.062712834
data/10000000_10_125000000/38,11.055433218
data/10000000_10_125000000/26,11.05973431
data/10000000_10_125000000/32,11.058744868000002
data/10000000_10_125000000/95,11.047404809
data/10000000_10_125000000/62,11.05281331
data/10000000_10_125000000/59,11.054625497
data/10000000_10_125000000/28,11.062536805999999
data/10000000_10_125000000/56,11.056182651
data/10000000_10_125000000/55,11.056770561
data/10000000_10_125000000/36,11.061455025
data/10000000_10_125000000/60,11.056176264
data/10000000_10_125000000/11,11.069853024
data/10000000_10_125000000/24,11.066644812
data/10000000_10_125000000/31,11.064902846
data/10000000_10_125000000/6,11.07166956
data/10000000_10_125000000/50,11.060900398000001
data/10000000_10_125000000/63,11.058165392
data/10000000_10_125000000/53,11.060557161999998
data/10000000_10_125000000/35,11.065279985
data/10000000_10_125000000/66,11.058640195999999
data/10000000_10_125000000/107,11.053706394999999
data/10000000_10_125000000/70,11.058427082000001
data/10000000_10_125000000/64,11.059915993
data/10000000_10_125000000/58,11.061551527
data/10000000_10_125000000/27,11.069412708000002
data/10000000_10_125000000/47,11.064328374999999
data/10000000_10_125000000/54,11.062908982
data/10000000_10_125000000/101,11.056246204999999
data/10000000_10_125000000/111,11.054867281
data/10000000_10_125000000/19,11.072417706
data/10000000_10_125000000/109,11.055482711
data/10000000_10_125000000/44,11.066063286
data/10000000_10_125000000/75,11.059957632
data/10000000_10_125000000/81,11.059499023
data/10000000_10_125000000/2,11.077147819
data/10000000_10_125000000/79,11.060031592
data/10000000_10_125000000/14,11.07562179
data/10000000_10_125000000/1,11.078239976999999
data/10000000_10_125000000/13,11.076195470999998
data/10000000_10_125000000/99,11.058546546999999
data/10000000_10_125000000/15,11.075459068
data/10000000_10_125000000/65,11.062996483
data/10000000_10_125000000/43,11.068114391
data/10000000_10_125000000/34,11.070469448
data/10000000_10_125000000/42,11.068652715
data/10000000_10_125000000/39,11.069661946
data/10000000_10_125000000/68,11.06305115
data/10000000_10_125000000/105,11.058671799
data/10000000_10_125000000/72,11.062973051
data/10000000_10_125000000/16,11.076515701999998
data/10000000_10_125000000/52,11.067420615
data/10000000_10_125000000/61,11.065567389
data/10000000_10_125000000/41,11.07015263
data/10000000_10_125000000/33,11.072266278999999
data/10000000_10_125000000/37,11.071602876
data/10000000_10_125000000/22,11.076253732
data/10000000_10_125000000/97,11.061458265
data/10000000_10_125000000/25,11.076188460000001
data/10000000_10_125000000/45,11.0702033
data/10000000_10_125000000/103,11.060830321
data/10000000_10_125000000/51,11.069254457
data/10000000_10_125000000/115,11.059888521
data/10000000_10_125000000/77,11.064627831
data/10000000_10_125000000/117,11.060062831
data/10000000_10_125000000/30,11.075093915
data/10000000_10_125000000/85,11.064380559
data/10000000_10_125000000/87,11.064333315
data/10000000_10_125000000/21,11.078558584
"""
# dont_test
df = logs_to_dataframe(logs_uvicorn_async)
_ = df.elapsed.plot(figsize=(10, 6))
logs_fastapi = """
data/10000000_10_125000000/122,9.933765433000001
data/10000000_10_125000000/41,9.943492624000001
data/10000000_10_125000000/63,9.946849223000001
data/10000000_10_125000000/9,9.962773567
data/10000000_10_125000000/118,9.959870551
data/10000000_10_125000000/53,9.963995594
data/10000000_10_125000000/90,9.964910438
data/10000000_10_125000000/100,9.979324739
data/10000000_10_125000000/71,9.981060487999999
data/10000000_10_125000000/57,9.981785366
data/10000000_10_125000000/21,9.997121532000001
data/10000000_10_125000000/13,10.003701104000001
data/10000000_10_125000000/75,10.004152799
data/10000000_10_125000000/20,10.104847233
data/10000000_10_125000000/12,10.111484621999999
data/10000000_10_125000000/6,10.121463404
data/10000000_10_125000000/7,10.127957686
data/10000000_10_125000000/8,10.129199069
data/10000000_10_125000000/61,10.128894684999999
data/10000000_10_125000000/43,10.13089011
data/10000000_10_125000000/23,10.137666875999999
data/10000000_10_125000000/96,10.136829996
data/10000000_10_125000000/27,10.140542367
data/10000000_10_125000000/25,10.147355485
data/10000000_10_125000000/19,10.150221776
data/10000000_10_125000000/1,10.152099835
data/10000000_10_125000000/39,10.151208884
data/10000000_10_125000000/29,10.157005554
data/10000000_10_125000000/114,10.156500757
data/10000000_10_125000000/102,10.163977353
data/10000000_10_125000000/49,10.169793219999999
data/10000000_10_125000000/10,10.171598289
data/10000000_10_125000000/88,10.171449753000001
data/10000000_10_125000000/86,10.172172116
data/10000000_10_125000000/35,10.175344432
data/10000000_10_125000000/33,10.178474155
data/10000000_10_125000000/15,10.178805528
data/10000000_10_125000000/92,10.178972404
data/10000000_10_125000000/34,10.187136509
data/10000000_10_125000000/16,10.182664230999999
data/10000000_10_125000000/51,10.184671498
data/10000000_10_125000000/44,10.192229157
data/10000000_10_125000000/50,10.191487468
data/10000000_10_125000000/28,10.19359861
data/10000000_10_125000000/65,10.186523094
data/10000000_10_125000000/37,10.187684434
data/10000000_10_125000000/73,10.187815393
data/10000000_10_125000000/3,10.190067701999999
data/10000000_10_125000000/55,10.189257711000002
data/10000000_10_125000000/2,10.191653348
data/10000000_10_125000000/42,10.196547966
data/10000000_10_125000000/11,10.192346410999999
data/10000000_10_125000000/107,10.193080724
data/10000000_10_125000000/94,10.189392177
data/10000000_10_125000000/79,10.187979312
data/10000000_10_125000000/111,10.193701459000001
data/10000000_10_125000000/81,10.190431308
data/10000000_10_125000000/54,10.197143696000001
data/10000000_10_125000000/77,10.191597804
data/10000000_10_125000000/76,10.197235757
data/10000000_10_125000000/31,10.193320191
data/10000000_10_125000000/4,10.196006129
data/10000000_10_125000000/30,10.200562421999999
data/10000000_10_125000000/101,10.197051785000001
data/10000000_10_125000000/80,10.198384966
data/10000000_10_125000000/5,10.196779710000001
data/10000000_10_125000000/112,10.192615386
data/10000000_10_125000000/67,10.195742501
data/10000000_10_125000000/47,10.195817022
data/10000000_10_125000000/17,10.197556056
data/10000000_10_125000000/110,10.193257883000001
data/10000000_10_125000000/108,10.194445537
data/10000000_10_125000000/59,10.194693101
data/10000000_10_125000000/56,10.197073349
data/10000000_10_125000000/69,10.195686933000001
data/10000000_10_125000000/83,10.196833738999999
data/10000000_10_125000000/106,10.195994832999999
data/10000000_10_125000000/48,10.204802624
data/10000000_10_125000000/116,10.19641522
data/10000000_10_125000000/45,10.198883305
data/10000000_10_125000000/58,10.198545685
data/10000000_10_125000000/97,10.203346065
data/10000000_10_125000000/120,10.196421403
data/10000000_10_125000000/99,10.20331979
data/10000000_10_125000000/14,10.207468200000001
data/10000000_10_125000000/18,10.207972573
data/10000000_10_125000000/40,10.20691203
data/10000000_10_125000000/52,10.207585178
data/10000000_10_125000000/70,10.208279563
data/10000000_10_125000000/46,10.210060307
data/10000000_10_125000000/32,10.211920035
data/10000000_10_125000000/113,10.20913842
data/10000000_10_125000000/22,10.212886702999999
data/10000000_10_125000000/62,10.211756448000001
data/10000000_10_125000000/91,10.210190138
data/10000000_10_125000000/78,10.210368638
data/10000000_10_125000000/89,10.21005216
data/10000000_10_125000000/117,10.210204634
data/10000000_10_125000000/87,10.212714482
data/10000000_10_125000000/103,10.211219121000001
data/10000000_10_125000000/74,10.2120921
data/10000000_10_125000000/93,10.212405
data/10000000_10_125000000/36,10.2141169
data/10000000_10_125000000/119,10.2117151
data/10000000_10_125000000/121,10.212248202
data/10000000_10_125000000/105,10.213146601
data/10000000_10_125000000/26,10.216476612
data/10000000_10_125000000/85,10.215189716000001
data/10000000_10_125000000/38,10.217162803
data/10000000_10_125000000/115,10.213779615
data/10000000_10_125000000/95,10.215831901
data/10000000_10_125000000/72,10.217764303000001
data/10000000_10_125000000/66,10.218596177999999
data/10000000_10_125000000/64,10.219937079
data/10000000_10_125000000/68,10.220424912999999
data/10000000_10_125000000/109,10.218735611
data/10000000_10_125000000/82,10.224988896
data/10000000_10_125000000/24,10.230682018
data/10000000_10_125000000/123,0.2759294560000001
data/10000000_10_125000000/60,10.460096808000001
data/10000000_10_125000000/84,10.860351411
data/10000000_10_125000000/98,11.463377860000001
data/10000000_10_125000000/104,12.458244998000001
"""
# dont_test
df = logs_to_dataframe(logs_fastapi)
_ = df.elapsed.plot(figsize=(10, 6))
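# `logs_to_dataframe` is defined earlier in the notebook, outside this excerpt. A minimal
# stand-in (an assumption, not the original helper) only needs to parse the `path,elapsed`
# lines above into a DataFrame:

```python
import io

import pandas as pd


def logs_to_dataframe_sketch(logs: str) -> pd.DataFrame:
    """Hypothetical stand-in for logs_to_dataframe: parse 'path,elapsed' lines."""
    df = pd.read_csv(io.StringIO(logs.strip()), names=["path", "elapsed"])
    # Sort by elapsed time so the plot shows the latency distribution shape
    return df.sort_values("elapsed").reset_index(drop=True)


sample = """
data/10000000_10_125000000/1,10.489431428
data/10000000_10_125000000/123,0.007332185999999297
"""
df = logs_to_dataframe_sketch(sample)
print(df)
```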
|
53_analyze.ipynb
|
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
import numpy as np
import matplotlib.pyplot as plt
# # Convergence plots for Newton's method
#
# Newton's method is widely used precisely because its convergence is very fast.
# Let's "see" this convergence with plots!
# ### Exercise
#
# Modify Newton's method so that it returns the list of all intermediate points generated.
def newton(f,df,x, prec=1e-8, maxiter=100):
    pass  ### Answer here
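# One possible implementation (a sample answer — try writing your own first), which
# records every iterate in a list:

```python
import numpy as np


def newton(f, df, x, prec=1e-8, maxiter=100):
    """Newton's method, modified to return the list of all iterates."""
    pts = [x]
    for _ in range(maxiter):
        step = f(x) / df(x)
        x = x - step
        pts.append(x)
        if abs(step) < prec:  # stop once the update is tiny
            break
    return pts


demo = newton(np.sin, np.cos, 4)  # the iterates should approach pi
```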
# Let's "compute" pi!
pts = newton(np.sin, np.cos, 4)
pts
# The error plot
plt.plot([p - np.pi for p in pts])
plt.show()
# And now on a log scale
plt.semilogy([abs(p - np.pi) for p in pts])
plt.show()
# And now the actual numbers
for p in pts:
print(p - np.pi)
# ## A "more realistic" example
#
# As we will see shortly, the sine is "too nice" for studying the convergence of Newton's method.
# Let's do a "slightly more general" case, which is also a classic example:
# computing the square root of 2.
# Build the functions
### Answer here
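# An illustrative answer (one natural choice among several): take f(x) = x^2 - 2,
# whose positive root is sqrt(2), and its derivative:

```python
def f(x):
    # f vanishes exactly at x = sqrt(2); works elementwise on numpy arrays too
    return x**2 - 2


def df(x):
    # derivative of f
    return 2 * x
```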
pts = newton(f, df, 1)
pts
# Now, repeat the plots and the numbers:
# +
ans = np.sqrt(2)
_, [ax1, ax2] = plt.subplots(1,2, figsize=(14,4))
ax1.plot([p - ans for p in pts])
ax2.semilogy([abs(p - ans) for p in pts])
plt.show()
for p in pts:
print(p - ans)
# -
# ### Exercise
#
# Write out explicitly the Newton step for finding the square root of a number $x$.
#
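# For reference, with $f(t) = t^2 - a$ the Newton step simplifies to the classic
# Babylonian average, $t_{n+1} = \frac{1}{2}\left(t_n + a/t_n\right)$ — a short
# sketch of that step (as a worked answer, not the only possible form):

```python
def sqrt_newton_step(t, a):
    # Newton step for f(t) = t**2 - a:
    #   t - (t**2 - a) / (2*t)  ==  (t + a/t) / 2
    return 0.5 * (t + a / t)


t = 1.0
for _ in range(6):
    t = sqrt_newton_step(t, 2.0)
print(t)
```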
# ### Exercise
#
# If you want to make an "animation" of Newton's method, you will need:
# - `f` and `df`, `x0` and `prec` to pass to Newton's method
# - To build the segments that leave $(x_n, f(x_n))$ and are tangent to the graph of $f$
#
# With the help of the list of points `pts` returned by your new Newton's method,
# create the tangent lines.
# +
# Plot of f, dashed so it does not hide the tangent lines
ts = np.arange(1,2, 0.01)
plt.plot(ts, f(ts), '--')
### Answer here
plt.legend(loc=0, title='Initial point')
plt.show()
|
comp-cientifica-I-2018-2/semana-5/Semana3-Parte3-GraficosConvergenciaNewton-Copy1.ipynb
|
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # Filings Monthly Stats
# We need to load in these libraries into our notebook in order to query, load, manipulate and view the data
# + pycharm={"is_executing": false, "name": "#%%\n"}
import os
import psycopg2
import pandas as pd
import matplotlib
from datetime import datetime, timedelta
from IPython.core.display import HTML
# %load_ext sql
# %config SqlMagic.displaylimit = 5
# -
# This will create the connection to the database and prep the jupyter magic for SQL
# + pycharm={"is_executing": false, "name": "#%%\n"}
connect_to_db = 'postgresql://' + \
os.getenv('PG_USER', '') + ":" + os.getenv('PG_PASSWORD', '') +'@' + \
os.getenv('PG_HOST', '') + ':' + os.getenv('PG_PORT', '5432') + '/' + os.getenv('PG_DB_NAME', '');
# %sql $connect_to_db
# -
# Simplest query to run to ensure our libraries are loaded and our DB connection is working
# + pycharm={"is_executing": false, "name": "#%%\n"} language="sql"
# select now() AT TIME ZONE 'PST' as current_date
# -
# Monthly totals of completed filings over the past month, up to the day before the run date.
# + pycharm={"is_executing": false, "name": "#%%\n"} magic_args="stat_filings_monthly_completed <<" language="sql"
# SELECT b.identifier AS COOPERATIVE_NUMBER
# , b.legal_name AS COOPERATIVE_NAME
# , COUNT(b.identifier) AS FILINGS_TOTAL_COMPLETED
# , STRING_AGG(f.filing_type, ', ') AS FILING_TYPES_COMPLETED
# FROM businesses b,
# filings f
# WHERE b.id = f.business_id
# AND f.status='COMPLETED'
# AND date(f.completion_date at time zone 'utc' at time zone 'pst') > date(current_date - 1 - interval '1 months')
# GROUP BY b.identifier, b.legal_name
# UNION ALL
# SELECT 'SUM' identifier, null, COUNT(b.identifier) AS count, null
# FROM businesses b,
# filings f
# WHERE b.id = f.business_id
# AND f.status='COMPLETED'
# AND date(f.completion_date at time zone 'utc' at time zone 'pst') > date(current_date - 1 - interval '1 months');
# + pycharm={"is_executing": false, "name": "#%%\n"}
edt = stat_filings_monthly_completed.DataFrame()
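# The query's shape — per-business counts followed by a trailing SUM row from the
# UNION ALL — can be mimicked in plain Python to sanity-check the result (the
# identifiers below are made-up illustration data):

```python
from collections import Counter

# Hypothetical completed filings: (identifier, filing_type) pairs
completed = [
    ("CP0001", "annualReport"),
    ("CP0001", "changeOfAddress"),
    ("CP0002", "annualReport"),
]

per_coop = Counter(ident for ident, _ in completed)
rows = [(ident, n) for ident, n in sorted(per_coop.items())]
rows.append(("SUM", sum(per_coop.values())))  # mirrors the UNION ALL total row
print(rows)
```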
# + pycharm={"is_executing": false, "name": "#%%\n"}
# -
# Save to CSV
# + pycharm={"is_executing": false, "name": "#%%\n"}
filename = 'filings_monthly_stats_till_' + datetime.strftime(datetime.now()-timedelta(1), '%Y-%m-%d') +'.csv'
edt.to_csv(filename, sep=',', encoding='utf-8', index=False)
# + pycharm={"is_executing": false, "name": "#%%\n"}
|
jobs/filings-notebook-report/monthly/filings.ipynb
|
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# +
# Imports assumed from earlier in the notebook: the Matterport Mask R-CNN package
# (mrcnn) and its nucleus sample must be importable
import os
import numpy as np
import tensorflow as tf
import matplotlib.pyplot as plt
from mrcnn import model as modellib, utils, visualize
import nucleus
path = os.getcwd()
model_tar = "nuclei_datasets.tar.gz"
data_path = os.path.join(path, 'nuclei_datasets')
model_path = os.path.join(path, 'logs', 'nucleus')
weights_path = os.path.join(model_path, 'mask_rcnn_nucleus.h5')  # my weights file
DEVICE = "/gpu:0"
config = nucleus.NucleusConfig()
class InferenceConfig(config.__class__):
# Run detection on one image at a time
GPU_COUNT = 1
IMAGES_PER_GPU = 1
config = InferenceConfig()
dataset = nucleus.NucleusDataset()
with tf.device(DEVICE):
model = modellib.MaskRCNN(mode="inference", model_dir=model_path, config=config)
model.load_weights(weights_path, by_name=True)
def compute_batch_ap(image_ids):
APs = []
for image_id in image_ids:
# Load image
image, image_meta, gt_class_id, gt_bbox, gt_mask = modellib.load_image_gt(dataset, config, image_id, use_mini_mask=False)
# Run object detection
results = model.detect([image], verbose=0)
# Compute AP
r = results[0]
AP, precisions, recalls, overlaps = utils.compute_ap(gt_bbox, gt_class_id, gt_mask, r['rois'], r['class_ids'], r['scores'], r['masks'])
        APs.append(AP)
    # Note: precisions and recalls come from the last image in the batch only
    return APs, precisions, recalls
dataset.load_nucleus(data_path, 'val')
dataset.prepare()
print("Images: {}\nClasses: {}".format(len(dataset.image_ids), dataset.class_names))
print("Loading weights ", weights_path)
image_ids = np.random.choice(dataset.image_ids, 25)
APs, precisions, recalls = compute_batch_ap(image_ids)
AP = np.mean(APs)
print("mAP @ IoU=50: ", AP)
visualize.plot_precision_recall(AP, precisions, recalls)
plt.show()
# +
if __name__ == '__main__':
|
mask_rcnn_damage_detection/PR Curve.ipynb
|
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# + [markdown] _cell_guid="0bf81eb9-8749-401f-9a2e-d58447256499" _uuid="e7de522614a7e048e788bc62b8752e95739fc20a"
# ## Basics of TS:
#
# A collection of basic concepts from the traditional time-series models, along with some intuition behind them
#
# ## Objective:
# This kernel was made to serve as a repository of various time-series concepts for beginners, and I hope it will be useful as a refresher for some of the experts too :)
#
# ## Table of contents:
# * Competition and data overview
# * Imports ( data and packages )
# * Basic exploration/EDA
# * Single time-series
# * Stationarity
# * Seasonality , Trend and Remainder
# * AR , MA , ARMA , ARIMA
# * Selecting P and Q using AIC
# * ETS
# * Prophet
# * UCM
# * Hierarchical time-series
# * Bottom-up
# * AHP
# * PHA
# * FP
#
#
# ## Competition and data overview:
#
# In this playground competition, we are provided with the challenge of predicting total sales for every product and store in the next month for Russian Software company-[1c company](http://1c.ru/eng/title.htm).
#
# **What does the 1C company do?**
#
# 1C: Enterprise 8 system of programs is intended for automation of everyday enterprise activities: various business tasks of economic and management activity, such as management accounting, business accounting, HR management, CRM, SRM, MRP, etc.
#
# **Data**:
# We are provided with daily sales data for each store-item combination, but our task is to predict sales at a monthly level.
#
# ## Imports:
#
# + _cell_guid="795bbe4b-51b2-42ec-810a-4f4c18c84f53" _uuid="e4eb15fdb1237ea12fda77b898eb315b00a205ce"
# always start with checking out the files!
# !ls ../input/*
# + _cell_guid="b1076dfc-b9ad-4769-8c92-a6c4dae69d19" _uuid="8f2839f25d086af736a60e9eeb907d3b93b6e0e5"
# Basic packages
import numpy as np # linear algebra
import pandas as pd # data processing, CSV file I/O (e.g. pd.read_csv)
import random as rd # generating random numbers
import datetime # manipulating date formats
# Viz
import matplotlib.pyplot as plt # basic plotting
import seaborn as sns # for prettier plots
# TIME SERIES
from statsmodels.tsa.arima_model import ARIMA
from statsmodels.tsa.statespace.sarimax import SARIMAX
from pandas.plotting import autocorrelation_plot
from statsmodels.tsa.stattools import adfuller, acf, pacf,arma_order_select_ic
import statsmodels.formula.api as smf
import statsmodels.tsa.api as smt
import statsmodels.api as sm
import scipy.stats as scs
# settings
import warnings
warnings.filterwarnings("ignore")
# + _cell_guid="6541e1a6-a353-4709-a1fa-730e0f2a308d" _uuid="debe15ae99f3596923efc37ce2f609920213be54"
# Import all of them
sales=pd.read_csv("../input/sales_train.csv")
item_cat=pd.read_csv("../input/item_categories.csv")
item=pd.read_csv("../input/items.csv")
sub=pd.read_csv("../input/sample_submission.csv")
shops=pd.read_csv("../input/shops.csv")
test=pd.read_csv("../input/test.csv")
# + _cell_guid="dc6fc0f9-45a9-4146-b88d-d4bddcb224b2" _uuid="8e1875bb64b6efc577e8b121217e2ded20ea9ce9"
#formatting the date column correctly
sales.date=sales.date.apply(lambda x:datetime.datetime.strptime(x, '%d.%m.%Y'))
# check
print(sales.info())
# + _cell_guid="dd800a06-41f7-41d2-a402-80ef2cc4ed2d" _uuid="0ca7c39c5544de1888d111db2450010f85f1a099"
# Aggregate to monthly level the required metrics
monthly_sales = sales.groupby(["date_block_num", "shop_id", "item_id"]).agg(
    {"date": ["min", "max"], "item_price": "mean", "item_cnt_day": "sum"})
## Let's break down this line of code:
# aggregate by date-block(month),shop_id and item_id
# select the columns date,item_price and item_cnt(sales)
# Provide a dictionary which says what aggregation to perform on which column
# min and max on the date
# average of the item_price
# sum of the sales
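# The same aggregation pattern on a tiny hand-made frame (toy data, for illustration
# only), so the shape of the result is easy to inspect:

```python
import pandas as pd

toy = pd.DataFrame({
    "date_block_num": [0, 0, 1],
    "shop_id": [5, 5, 5],
    "item_id": [10, 10, 10],
    "date": pd.to_datetime(["2013-01-02", "2013-01-15", "2013-02-03"]),
    "item_price": [100.0, 120.0, 110.0],
    "item_cnt_day": [2, 3, 1],
})

# Group by month/shop/item, then aggregate each column differently,
# producing MultiIndex columns like ('item_cnt_day', 'sum')
agg = toy.groupby(["date_block_num", "shop_id", "item_id"]).agg(
    {"date": ["min", "max"], "item_price": "mean", "item_cnt_day": "sum"})
print(agg)
```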
# + _cell_guid="986b9168-860f-4ae0-8ed7-c42cb65837fb" _uuid="3d689df5658dfa3bfbfe531488844a9fdd31d804"
# take a peek
monthly_sales.head(20)
# + _cell_guid="c8e0a7f3-9a16-46e0-aae3-273fe0f21d0e" _uuid="a051b790a453f6e28632435a6c30efae02538113"
# number of items per cat
x=item.groupby(['item_category_id']).count()
x=x.sort_values(by='item_id',ascending=False)
x=x.iloc[0:10].reset_index()
x
# #plot
plt.figure(figsize=(8,4))
ax= sns.barplot(x.item_category_id, x.item_id, alpha=0.8)
plt.title("Items per Category")
plt.ylabel('# of items', fontsize=12)
plt.xlabel('Category', fontsize=12)
plt.show()
# + [markdown] _cell_guid="68d378e2-2302-4381-8423-ede818fce32e" _uuid="8dadea026ac25a550cb6725894e1117c67e88757"
# Of course, there is a lot more that we can explore in this dataset, but let's dive into the time-series part.
#
# # Single series:
#
# The objective requires us to predict sales for the next month at a store-item combination.
#
# Sales over time of each store-item is a time-series in itself. Before we dive into all the combinations, first let's understand how to forecast for a single series.
#
# I've chosen to predict for the total sales per month for the entire company.
#
# First let's compute the total sales per month and plot that data.
#
# + _cell_guid="a783e367-da29-47fd-97be-f3ff756f32fe" _uuid="95eaf40635366294662b228680cb6e425940c7db"
ts = sales.groupby(["date_block_num"])["item_cnt_day"].sum()
ts = ts.astype('float')
plt.figure(figsize=(16,8))
plt.title('Total Sales of the company')
plt.xlabel('Time')
plt.ylabel('Sales')
plt.plot(ts);
# + _cell_guid="b98fb1f6-f3a2-434f-94c6-af01f3ffdfd4" _uuid="bee64faeaacd2f60ff85ac8d2b61eea4e80afda8"
plt.figure(figsize=(16,6))
plt.plot(ts.rolling(window=12,center=False).mean(),label='Rolling Mean');
plt.plot(ts.rolling(window=12,center=False).std(),label='Rolling sd');
plt.legend();
# + [markdown] _cell_guid="5fe94fac-46c3-43c5-b032-705cdfd43726" _uuid="1a06f1b76571d5d09095148d07ddfa1e4e2002cc"
# **Quick observations:**
# There is an obvious "seasonality" (Eg: peak sales around a time of year) and a decreasing "Trend".
#
# Let's check that with a quick decomposition into Trend, seasonality and residuals.
#
# + _cell_guid="b7c4c5fe-8a25-403d-8bb6-fa4f64699c00" _uuid="611d345c3a3358dd34826c277bd2294247183c0e"
import statsmodels.api as sm
# multiplicative
res = sm.tsa.seasonal_decompose(ts.values,freq=12,model="multiplicative")
#plt.figure(figsize=(16,12))
fig = res.plot()
#fig.show()
# + _cell_guid="68db7d1b-1a74-48d2-96f0-78c8847981bb" _uuid="80b4215987ff52e4e514b97093a54fc55461430a"
# Additive model
res = sm.tsa.seasonal_decompose(ts.values,freq=12,model="additive")
#plt.figure(figsize=(16,12))
fig = res.plot()
#fig.show()
# + _cell_guid="2176681b-44c0-4b11-9a11-f6172ba3d265" _uuid="6261f5b777f4d539e383e6928f151b7db4dbf443"
# R version ported into python
# alas! rpy2 does not exist in Kaggle kernels :(
# from rpy2.robjects import r
# def decompose(series, frequency, s_window, **kwargs):
# df = pd.DataFrame()
# df['date'] = series.index
# s = [x for x in series.values]
# length = len(series)
# s = r.ts(s, frequency=frequency)
# decomposed = [x for x in r.stl(s, s_window, **kwargs).rx2('time.series')]
# df['observed'] = series.values
# df['trend'] = decomposed[length:2*length]
# df['seasonal'] = decomposed[0:length]
# df['residual'] = decomposed[2*length:3*length]
# return df
# + [markdown] _cell_guid="7e6f683b-a27d-4a68-9069-e0c713356339" _uuid="a243f999421ec6d568a781d8a1f9baea720b09db"
# If we assume an additive model, then we can write
#
# > $y_t = S_t + T_t + E_t$
#
# where $y_t$ is the data at period $t$, $S_t$ is the seasonal component, $T_t$ is the trend-cycle component and $E_t$ is the remainder (or irregular, or error) component at period $t$.
#
# Similarly, for a multiplicative model,
#
# > $y_t = S_t \times T_t \times E_t$
#
# ## Stationarity:
#
# 
#
# Stationarity refers to the time-invariance of a series, i.e. two points in a time series are related to each other only by how far apart they are, and not by the direction (forward/backward).
#
# When a time series is stationary, it can be easier to model. Statistical modeling methods assume or require the time series to be stationary.
#
#
# There are multiple tests that can be used to check stationarity.
# * ADF (Augmented Dickey-Fuller test)
# * KPSS
# * PP (Phillips-Perron test)
#
# Let's just perform the ADF which is the most commonly used one.
#
# Note: [Step-by-step guide to performing the Dickey-Fuller test in Excel](http://www.real-statistics.com/time-series-analysis/stochastic-processes/dickey-fuller-test/)
#
# [Another Useful guide](http://www.blackarbs.com/blog/time-series-analysis-in-python-linear-models-to-garch/11/1/2016#AR)
#
# [good reference](https://github.com/ultimatist/ODSC17/blob/master/Time%20Series%20with%20Python%20(ODSC)%20STA.ipynb)
#
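# Before running any formal test, the idea can be illustrated on toy data (a hedged sketch, not part of the original analysis): a stationary series has roughly the same mean in its first and second half, while a series with a trend does not.

```python
import random

random.seed(0)
noise = [random.gauss(0, 1) for _ in range(1000)]      # stationary: white noise
trended = [0.01 * t + e for t, e in enumerate(noise)]  # non-stationary: drifting mean

def half_means(series):
    """Mean of the first half and mean of the second half of a series."""
    n = len(series) // 2
    return sum(series[:n]) / n, sum(series[n:]) / (len(series) - n)

m1, m2 = half_means(noise)
t1, t2 = half_means(trended)
# The halves of the noise agree far better than the halves of the trending series
assert abs(m1 - m2) < abs(t1 - t2)
```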
# + _cell_guid="0172ae25-5173-4645-960a-cedcb2800cb9" _uuid="f98bc8fda199838bfa54b1b406e6c7f5023d16bb"
# Stationarity tests
def test_stationarity(timeseries):
#Perform Dickey-Fuller test:
print('Results of Dickey-Fuller Test:')
dftest = adfuller(timeseries, autolag='AIC')
dfoutput = pd.Series(dftest[0:4], index=['Test Statistic','p-value','#Lags Used','Number of Observations Used'])
for key,value in dftest[4].items():
dfoutput['Critical Value (%s)'%key] = value
print (dfoutput)
test_stationarity(ts)
# + _cell_guid="0374ddff-dc1f-4d9b-82f9-f3eff9c9c4b0" _uuid="a85f4e771a553ff529b46f25c183d33708055378"
# to remove trend
from pandas import Series as Series
# create a differenced series
def difference(dataset, interval=1):
diff = list()
for i in range(interval, len(dataset)):
value = dataset[i] - dataset[i - interval]
diff.append(value)
return Series(diff)
# invert differenced forecast
def inverse_difference(last_ob, value):
return value + last_ob
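# A quick round-trip check of the two helpers above (restated here as plain lists so the sketch is self-contained):

```python
# Same logic as the difference/inverse_difference helpers defined above
def difference(dataset, interval=1):
    return [dataset[i] - dataset[i - interval] for i in range(interval, len(dataset))]

def inverse_difference(last_ob, value):
    return value + last_ob

ts_toy = [10, 12, 15, 14, 18]
d = difference(ts_toy)  # [2, 3, -1, 4]
# Each differenced value plus the previous original observation restores the series
recovered = [inverse_difference(ts_toy[i], d[i]) for i in range(len(d))]
assert recovered == ts_toy[1:]
```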
# + _cell_guid="c97fbab1-a301-46bd-95cb-5ba01cdef568" _uuid="0904a2ab681ac5b3042f5e3d3ba9743955865266"
ts=sales.groupby(["date_block_num"])["item_cnt_day"].sum()
ts = ts.astype('float')  # astype returns a new Series, so assign it back
plt.figure(figsize=(16,16))
plt.subplot(311)
plt.title('Original')
plt.xlabel('Time')
plt.ylabel('Sales')
plt.plot(ts)
plt.subplot(312)
plt.title('After De-trend')
plt.xlabel('Time')
plt.ylabel('Sales')
new_ts=difference(ts)
plt.plot(new_ts)
plt.plot()
plt.subplot(313)
plt.title('After De-seasonalization')
plt.xlabel('Time')
plt.ylabel('Sales')
new_ts=difference(ts,12) # assuming the seasonality is 12 months long
plt.plot(new_ts)
plt.plot()
# + _cell_guid="9227dec3-bed4-4a12-bc69-563bd68cb3ff" _uuid="aab34e83d42ceea015ce2f7fe1ace57a115fcd5f"
# now testing the stationarity again after de-seasonality
test_stationarity(new_ts)
# + [markdown] _cell_guid="66399279-b53f-4c3b-ad30-68353880a5b0" _uuid="f6ba95bc505b6de75f94840eb4b1e1ce6ccc90e5"
# ### After the transformations, the p-value of the DF test is well below 5%, so we can assume stationarity of the series
#
# We can easily get back the original series using the inverse transform function that we have defined above.
#
# Now let's dive into making the forecasts!
#
# # AR, MA and ARMA models:
# TL;DR version of the models:
#
# MA - the next value in the series is a function of the errors (residuals) of the previous n forecasts
# AR - the next value in the series is a linear function of the previous n values
# ARMA - a mixture of both.
#
# Now, how do we find out if our time-series is an AR process or an MA process?
#
# Let's find out!
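# Before the statsmodels simulations, here is the AR(1) recursion written out by hand on a tiny fixed shock sequence (a minimal sketch; the shock values are made up for illustration):

```python
# x_t = 0.6 * x_{t-1} + w_t, seeded with x_0 = w_0
w = [1.0, 0.5, -0.2, 0.3]
x = [w[0]]
for t in range(1, len(w)):
    x.append(0.6 * x[t - 1] + w[t])
# x is approximately [1.0, 1.1, 0.46, 0.576]
```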
# + _cell_guid="85e12639-f2c2-4ce1-a57a-fba013e0c64c" _uuid="30302a2f14d1e9a450672504ed3237e10af33d31"
def tsplot(y, lags=None, figsize=(10, 8), style='bmh',title=''):
if not isinstance(y, pd.Series):
y = pd.Series(y)
with plt.style.context(style):
fig = plt.figure(figsize=figsize)
#mpl.rcParams['font.family'] = 'Ubuntu Mono'
layout = (3, 2)
ts_ax = plt.subplot2grid(layout, (0, 0), colspan=2)
acf_ax = plt.subplot2grid(layout, (1, 0))
pacf_ax = plt.subplot2grid(layout, (1, 1))
qq_ax = plt.subplot2grid(layout, (2, 0))
pp_ax = plt.subplot2grid(layout, (2, 1))
y.plot(ax=ts_ax)
ts_ax.set_title(title)
smt.graphics.plot_acf(y, lags=lags, ax=acf_ax, alpha=0.5)
smt.graphics.plot_pacf(y, lags=lags, ax=pacf_ax, alpha=0.5)
sm.qqplot(y, line='s', ax=qq_ax)
qq_ax.set_title('QQ Plot')
scs.probplot(y, sparams=(y.mean(), y.std()), plot=pp_ax)
plt.tight_layout()
return
# + _cell_guid="98e9a6bf-63af-4de5-bc5b-87a2b53749e6" _uuid="274f0899031c6c8904cc2fc16278210bf60f44cf"
# Simulate an AR(1) process with alpha = 0.6
np.random.seed(1)
n_samples = int(1000)
a = 0.6
x = w = np.random.normal(size=n_samples)
for t in range(1, n_samples):  # start at 1 so x[-1] (the last element) is not used as the seed
    x[t] = a*x[t-1] + w[t]
limit=12
_ = tsplot(x, lags=limit,title="AR(1)process")
# + [markdown] _cell_guid="e737518c-d725-4ed2-a01d-f82986db65af" _uuid="b3bfab2ac67a745c9aa1c1c495a958383ebd4b45"
# ## AR(1) process -- ACF tails off and PACF cuts off at lag=1
# + _cell_guid="c0ae4820-5e6e-4f51-b870-caff9f093a65" _uuid="bfa6b99d581c1a11248254634fb3932bc0de7a0b"
# Simulate an AR(2) process
n = int(1000)
alphas = np.array([.444, .333])
betas = np.array([0.])
# Python requires us to specify the zero-lag value which is 1
# Also note that the alphas for the AR model must be negated
# We also set the betas for the MA equal to 0 for an AR(p) model
# For more information see the examples at statsmodels.org
ar = np.r_[1, -alphas]
ma = np.r_[1, betas]
ar2 = smt.arma_generate_sample(ar=ar, ma=ma, nsample=n)
_ = tsplot(ar2, lags=12,title="AR(2) process")
# + [markdown] _cell_guid="789221b6-4c5f-4e22-b740-abd904310050" _uuid="0e64eb4625e7fed1ea67892cd1ce76f521ed2e43"
# ## AR(2) process -- ACF tails off and PACF cuts off at lag=2
# + _cell_guid="d87cb6df-a332-4ac0-bf2d-df690a4a3510" _uuid="8b6e8e1fb9d5d32e925a3eb5718bbb3fed09c585"
# Simulate an MA(1) process
n = int(1000)
# set the AR(p) alphas equal to 0
alphas = np.array([0.])
betas = np.array([0.8])
# add zero-lag and negate alphas
ar = np.r_[1, -alphas]
ma = np.r_[1, betas]
ma1 = smt.arma_generate_sample(ar=ar, ma=ma, nsample=n)
limit=12
_ = tsplot(ma1, lags=limit,title="MA(1) process")
# + [markdown] _cell_guid="8974f547-b74a-4b01-822b-0512bcfbd428" _uuid="bb9116b36c617672b13e339afd14209c0ea72493"
# ## MA(1) process -- has ACF cut off at lag=1
# + _cell_guid="266ed44d-a2af-40b2-bc70-1f8c92c97cd4" _uuid="50d9e7da3491a1da9c88d2da1038651e4dd18931"
# Simulate MA(2) process with betas 0.6, 0.4
n = int(1000)
alphas = np.array([0.])
betas = np.array([0.6, 0.4])
ar = np.r_[1, -alphas]
ma = np.r_[1, betas]
ma3 = smt.arma_generate_sample(ar=ar, ma=ma, nsample=n)
_ = tsplot(ma3, lags=12,title="MA(2) process")
# + [markdown] _cell_guid="cc105523-c043-41f2-8c33-0e73c2b5eef0" _uuid="1e3b61a68f1d1840e2d136087ed2daa3991c5e18"
# ## MA(2) process -- has ACF cut off at lag=2
# + _cell_guid="c9c8d060-8572-426f-87d9-e786d82ad205" _uuid="3bb2c3992a9b0fdbe9bc1a4f1dfcf7153e925c31"
# Simulate an ARMA(2, 2) model with alphas=[0.8, -0.65] and betas=[0.5, -0.7]
max_lag = 12
n = int(5000) # lots of samples to help estimates
burn = int(n/10) # number of samples to discard before fit
alphas = np.array([0.8, -0.65])
betas = np.array([0.5, -0.7])
ar = np.r_[1, -alphas]
ma = np.r_[1, betas]
arma22 = smt.arma_generate_sample(ar=ar, ma=ma, nsample=n, burnin=burn)
_ = tsplot(arma22, lags=max_lag,title="ARMA(2,2) process")
# + [markdown] _cell_guid="50fe1c7f-2524-4fa1-8e30-3f14232b7ac6" _uuid="8bac724eafd54b4e8c2ec85ccf3f54496a61d525"
# ## Now things get a little hazy. It's not very clear/straightforward.
#
# A nifty summary of the above plots:
#
# ACF Shape | Indicated Model |
# -- | -- |
# Exponential, decaying to zero | Autoregressive model. Use the partial autocorrelation plot to identify the order of the autoregressive model |
# Alternating positive and negative, decaying to zero | Autoregressive model. Use the partial autocorrelation plot to help identify the order. |
# One or more spikes, rest are essentially zero | Moving average model, order identified by where plot becomes zero. |
# Decay, starting after a few lags | Mixed autoregressive and moving average (ARMA) model. |
# All zero or close to zero | Data are essentially random. |
# High values at fixed intervals | Include seasonal autoregressive term. |
# No decay to zero | Series is not stationary |
#
#
# ## Let's use a systematic approach to finding the order of AR and MA processes.
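# The selection criterion used below is AIC = 2k - 2*ln(L): goodness of fit penalized by the number of parameters k. A toy sketch of the "smallest AIC wins" bookkeeping (the scores here are made-up numbers, not fitted values):

```python
def aic(k, log_likelihood):
    """Akaike Information Criterion: 2k - 2*ln(L)."""
    return 2 * k - 2 * log_likelihood

# With identical fit, the model with fewer parameters wins
assert aic(2, -100.0) < aic(4, -100.0)

# The same bookkeeping as the grid search below, on hypothetical (p, q) scores
scores = {(0, 0): 310.2, (1, 1): 295.7, (2, 2): 288.1, (3, 3): 291.4}
best_order = min(scores, key=scores.get)
```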
# + _cell_guid="fce4e806-d217-4b2c-9df6-e38c3d03208b" _uuid="67306349432a683c926a812bd071915bf5e23e18"
# pick best order by aic
# smallest aic value wins
best_aic = np.inf
best_order = None
best_mdl = None
rng = range(5)
for i in rng:
for j in rng:
try:
tmp_mdl = smt.ARMA(arma22, order=(i, j)).fit(method='mle', trend='nc')
tmp_aic = tmp_mdl.aic
if tmp_aic < best_aic:
best_aic = tmp_aic
best_order = (i, j)
best_mdl = tmp_mdl
except: continue
print('aic: {:6.5f} | order: {}'.format(best_aic, best_order))
# + [markdown] _cell_guid="f9f28bdd-6b6e-4522-9644-f8d6020d830f" _uuid="e32468dcd2ea44e9477adc212eb7175875dba33b"
# ## We've correctly identified the order of the simulated process as ARMA(2,2).
#
# ### Let's use it for the sales time-series.
#
# + _cell_guid="4adcd9c6-63eb-41c2-82f3-4bde0ce556ef" _uuid="43f731d8b664c9531464d8766f1fc911dd69b2e0"
#
# pick best order by aic
# smallest aic value wins
best_aic = np.inf
best_order = None
best_mdl = None
rng = range(5)
for i in rng:
for j in rng:
try:
tmp_mdl = smt.ARMA(new_ts.values, order=(i, j)).fit(method='mle', trend='nc')
tmp_aic = tmp_mdl.aic
if tmp_aic < best_aic:
best_aic = tmp_aic
best_order = (i, j)
best_mdl = tmp_mdl
except: continue
print('aic: {:6.5f} | order: {}'.format(best_aic, best_order))
# + _cell_guid="62dacf92-a612-4342-812f-8936f45c1dce" _uuid="733861273519695c485dd59e8cb483e0b91802f3"
# Simply use best_mdl.predict() to predict the next values
# + _cell_guid="9f22f870-38b0-44f2-b7cf-90dfc3fefaa6" _uuid="dd7ffaeba28472d4bc2e8a0b4de8b6613b38b83e"
# adding the dates to the Time-series as index
ts=sales.groupby(["date_block_num"])["item_cnt_day"].sum()
ts.index=pd.date_range(start = '2013-01-01',end='2015-10-01', freq = 'MS')
ts=ts.reset_index()
ts.head()
# + [markdown] _cell_guid="5cd6369a-20e7-4586-b9ec-5d804ea64528" _uuid="d8c35e14d08d580907da6ed43e684ab9b89fb6cf"
# # Prophet:
#
# Recently open-sourced by Facebook research. It's a very promising tool that is often a handy and quick solution to the frustrating **flatline** :P
#
# 
#
# Sure, one could argue that with proper pre-processing and carefully tuning the parameters the above graph would not happen.
#
# But the truth is that most of us don't either have the patience or the expertise to make it happen.
#
# Also, in most practical scenarios there are often many time-series that need to be predicted.
# Eg: This competition requires us to predict the next month's sales for the **store-item level combinations**, which could be in the thousands. (ie) predict 1000s of series!
#
# Another neat functionality is that it follows the typical **sklearn** syntax.
#
# At its core, the Prophet procedure is an additive regression model with four main components:
# * A piecewise linear or logistic growth curve trend. Prophet automatically detects changes in trends by selecting changepoints from the data.
# * A yearly seasonal component modeled using Fourier series.
# * A weekly seasonal component using dummy variables.
# * A user-provided list of important holidays.
#
# **Resources for learning more about prophet:**
# * https://www.youtube.com/watch?v=95-HMzxsghY
# * https://facebook.github.io/prophet/docs/quick_start.html#python-api
# * https://research.fb.com/prophet-forecasting-at-scale/
# * https://blog.exploratory.io/is-prophet-better-than-arima-for-forecasting-time-series-fa9ae08a5851
# + _cell_guid="e0f1d568-e74b-4b4d-970f-17ed78ad6c04" _uuid="5515d79d56f071c77c955be1ef36de528f953306"
from fbprophet import Prophet
# Prophet requires a pandas df in the below config
# (date column named 'ds' and the value column named 'y')
ts.columns=['ds','y']
model = Prophet( yearly_seasonality=True) #instantiate Prophet with only yearly seasonality as our data is monthly
model.fit(ts) #fit the model with your dataframe
# + _cell_guid="1dc15d33-ea9c-47b3-9f10-379e8f259606" _uuid="d9377c6f2e7537cfaebc606049977154a4cce49a"
# predict for five months in the furure and MS - month start is the frequency
future = model.make_future_dataframe(periods = 5, freq = 'MS')
# now lets make the forecasts
forecast = model.predict(future)
forecast[['ds', 'yhat', 'yhat_lower', 'yhat_upper']].tail()
# + _cell_guid="c1120a17-8947-42cd-84ee-424f0b60d5d7" _uuid="695836bdeb4e148f08e3f3349e89bf4345781ca1"
model.plot(forecast)
# + _cell_guid="9821912e-76eb-4997-a4cc-cb111998370b" _uuid="d3ea5a00ce7d8e7f568a0c900cacc59d58c2893e"
model.plot_components(forecast)
# + [markdown] _cell_guid="4d72929a-1363-40b1-9394-9d9bc3cbbfcd" _uuid="50aff39e479cc20c9898b3a9e008eae2bc2eb713"
# Awesome. The trend and seasonality from Prophet look similar to the ones that we had earlier using the traditional methods.
#
# ## UCM:
#
# Unobserved Components Model. The intuition here is similar to that of Prophet: the model breaks the time-series into its components (trend, seasonal, cycle), regresses them, predicts the next point for each component, and then combines them.
#
# Unfortunately, I could not find a good package/code that can perform this model in Python :(
#
# R version of UCM: https://bicorner.com/2015/12/28/unobserved-component-models-in-r/
#
# # Hierarchical time series:
#
# The [Forecasting: principles and practice](https://www.otexts.org/fpp/9/4) book by Rob Hyndman is the ultimate reference for forecasting.
#
# He lays out the fundamentals of dealing with grouped or Hierarchical forecasts. Consider the following simple scenario.
#
# 
#
# Hyndman proposes the following methods to estimate the points in this hierarchy. I've tried to simplify the language to make it more intuitive.
#
# ### Bottom up approach:
# * Predict all the base level series using any method, and then just aggregate it to the top.
# * Advantages: Simple , No information is lost due to aggregation.
# * Disadvantages: Lower levels can be noisy
#
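# In miniature, bottom-up is just "forecast every base series, then sum" (the forecasts below are toy numbers, not from the competition data):

```python
# Hypothetical base-level forecasts for store-item combinations
base_forecasts = {("shop1", "itemA"): 12.0,
                  ("shop1", "itemB"): 7.5,
                  ("shop2", "itemA"): 9.0}

# Aggregate upwards: per-shop totals, then the grand total
shop_totals = {}
for (shop, _), f in base_forecasts.items():
    shop_totals[shop] = shop_totals.get(shop, 0.0) + f
total_forecast = sum(shop_totals.values())
assert total_forecast == sum(base_forecasts.values())  # aggregation is consistent
```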
# ### Top down approach:
# * Predict the top level first. (Eg: predict total sales first)
# * Then calculate **weights** that denote the proportion of the total sales that needs to be given to the base level forecast(Eg:) the contribution of the item's sales to the total sales
# * There are different ways of arriving at the "weights".
# * **Average Historical Proportions** - Simple average of the item's contribution to sales in the past months
# * **Proportion of historical averages** - Weight is the ratio of average value of bottom series by the average value of total series (Eg: Weight(item1)= mean(item1)/mean(total_sales))
# * **Forecasted Proportions** - Predict the proportion in the future using changes in the past proportions
# * Use these weights to calculate the base forecasts and the other levels
#
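# A sketch of the "proportion of historical averages" weighting on made-up item histories (the sales lists are hypothetical):

```python
# Hypothetical monthly sales per item
item_sales = {"item1": [10, 12, 14],
              "item2": [30, 28, 32]}

totals = [sum(month) for month in zip(*item_sales.values())]  # total sales per month
mean_total = sum(totals) / len(totals)                        # 42.0

# weight(item) = mean(item) / mean(total_sales)
weights = {k: (sum(v) / len(v)) / mean_total for k, v in item_sales.items()}
assert abs(sum(weights.values()) - 1.0) < 1e-9  # weights split the total exactly

# Split a hypothetical top-level forecast down to the items
top_forecast = 50.0
item_forecasts = {k: w * top_forecast for k, w in weights.items()}
```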
# ### Middle out:
# * Use both bottom up and top down together.
# * Eg: Consider our problem of predicting store-item level forecasts.
# * Take the middle level(Stores) and find forecasts for the stores
# * Use a bottom-up approach to find overall sales
# * Disaggregate store sales using proportions to find the item-level sales with a top-down approach
#
# ### Optimal combination approach:
# * Predict for all the layers independently
# * Since, all the layers are independent, they might not be consistent with hierarchy
# * Eg: Since the items are forecasted independently, the sum of the items sold in the store might not be equal to the forecasted sale of store or as Hyndman puts it “aggregate consistent”
# * Then some matrix calculations and adjustments happen to provide ad-hoc adjustments to the forecast to make them consistent with the hierarchy
#
#
# ### Enough with the theory. Lets start making forecasts! :P
# The problem at hand has 22170 items and 60 stores, which indicates around a **million** individual time-series (item-store combinations) that we need to predict!
#
# Configuring each of them would be nearly impossible. Let's use Prophet which does it for us.
#
# Starting off with the bottom-up approach.
#
# There are some other points to consider here:
# * Not all stores sell all items
# * What happens when a new product is introduced?
# * What if a product is removed off the shelves?
# + _cell_guid="f628232b-2b87-4ecf-98a9-df85b8cfa079" _uuid="c32a2ee89ed90af6aa786af833a27b3b2570117f"
total_sales=sales.groupby(['date_block_num'])["item_cnt_day"].sum()
dates=pd.date_range(start = '2013-01-01',end='2015-10-01', freq = 'MS')
total_sales.index=dates
total_sales.head()
# + _cell_guid="8c62a4c2-c482-417c-ba56-b376584706e7" _uuid="da06ef3cef98055ec146eb21b2ac4cdc580b73c7"
# get the unique combinations of item-store from the sales data at monthly level
monthly_sales=sales.groupby(["shop_id","item_id","date_block_num"])["item_cnt_day"].sum()
# arrange it conveniently to perform the hts
monthly_sales=monthly_sales.unstack(level=-1).fillna(0)
monthly_sales=monthly_sales.T
dates=pd.date_range(start = '2013-01-01',end='2015-10-01', freq = 'MS')
monthly_sales.index=dates
monthly_sales=monthly_sales.reset_index()
monthly_sales.head()
# + _cell_guid="ef4ffa1f-170b-421f-9a87-1798cb7ca885" _uuid="480e0c16e34f95bca30da929861e2c1de14410e4"
import time
start_time=time.time()
# Bottom-up
# Calculating the base forecasts using prophet
# From the HTSprophet package -- https://github.com/CollinRooney12/htsprophet/blob/master/htsprophet/hts.py
forecastsDict = {}
for node in range(len(monthly_sales.columns) - 1):  # one forecast per series column (skip the date column)
# take the date-column and the col to be forecasted
nodeToForecast = pd.concat([monthly_sales.iloc[:,0], monthly_sales.iloc[:, node+1]], axis = 1)
# print(nodeToForecast.head()) # just to check
    # rename for prophet compatibility
nodeToForecast = nodeToForecast.rename(columns = {nodeToForecast.columns[0] : 'ds'})
nodeToForecast = nodeToForecast.rename(columns = {nodeToForecast.columns[1] : 'y'})
growth = 'linear'
m = Prophet(growth, yearly_seasonality=True)
m.fit(nodeToForecast)
future = m.make_future_dataframe(periods = 1, freq = 'MS')
forecastsDict[node] = m.predict(future)
if (node== 10):
end_time=time.time()
print("forecasting for ",node,"th node and took",end_time-start_time,"s")
break
# + [markdown] _cell_guid="3a0487c9-1e58-4d37-859a-23776598eac2" _uuid="e60bf72b1fbbf5c11c1a6e6302a8497ecf2c6dd0"
# ~16s for 10 predictions. We need a million predictions. This would not work out.
#
# # Middle out:
# Let's predict for the store level
# + _cell_guid="458386cd-bd4b-41ad-ac59-a2b0135b89fb" _uuid="0e1e93358ddc83308b5f16910816977750c8ac87"
monthly_shop_sales=sales.groupby(["date_block_num","shop_id"])["item_cnt_day"].sum()
# get the shops to the columns
monthly_shop_sales=monthly_shop_sales.unstack(level=1)
monthly_shop_sales=monthly_shop_sales.fillna(0)
monthly_shop_sales.index=dates
monthly_shop_sales=monthly_shop_sales.reset_index()
monthly_shop_sales.head()
# + _cell_guid="f812b9fc-a079-4f0f-a19d-5618bf499228" _uuid="75e7e20609e23bd676cc9781619940a3febf3cab"
start_time=time.time()
# Calculating the base forecasts using prophet
# From the HTSprophet package -- https://github.com/CollinRooney12/htsprophet/blob/master/htsprophet/hts.py
forecastsDict = {}
for node in range(len(monthly_shop_sales.columns) - 1):  # one forecast per shop column (skip the date column)
# take the date-column and the col to be forecasted
nodeToForecast = pd.concat([monthly_shop_sales.iloc[:,0], monthly_shop_sales.iloc[:, node+1]], axis = 1)
# print(nodeToForecast.head()) # just to check
    # rename for prophet compatibility
nodeToForecast = nodeToForecast.rename(columns = {nodeToForecast.columns[0] : 'ds'})
nodeToForecast = nodeToForecast.rename(columns = {nodeToForecast.columns[1] : 'y'})
growth = 'linear'
m = Prophet(growth, yearly_seasonality=True)
m.fit(nodeToForecast)
future = m.make_future_dataframe(periods = 1, freq = 'MS')
forecastsDict[node] = m.predict(future)
# + _cell_guid="bc342fe1-72cc-46ba-bc52-5bb4fed994fb" _uuid="cc93cc3b4f09a2e5a0bbaf86cc683f168557e004"
#predictions = np.zeros([len(forecastsDict[0].yhat),1])
nCols = len(list(forecastsDict.keys()))+1
for key in range(0, nCols-1):
f1 = np.array(forecastsDict[key].yhat)
f2 = f1[:, np.newaxis]
if key==0:
predictions=f2.copy()
# print(predictions.shape)
else:
predictions = np.concatenate((predictions, f2), axis = 1)
# + _cell_guid="f6f0ea03-3500-4580-8c3a-b024f8f43a6d" _uuid="689180c42779b32ab3ea7cffd9f2889e84b0ba4e"
predictions_unknown=predictions[-1]
predictions_unknown
# + _cell_guid="574d9966-059c-4c5d-babc-aa6e19e4263f" _uuid="35e79b17c6fdc31550458ffdebd622ba06ae5296"
# + [markdown] _cell_guid="6ee15e2b-a2a2-453d-b3ab-dfa292d91bc4" _uuid="c474acb21e3bf7dd4803a5d768283f184d32da5f"
# ## Under construction...........
#
# ### Unconventional techniques: converting TS into a regression problem
#
# ### Dealing with Hierarchy
# ### Codes for top down, optimal ,etc
#
#
# + _cell_guid="15a353b8-18c8-4b0b-977c-0b6931448aaa" _uuid="668928cb0ff4f9a301669621e2b1d060b377c0cf"
# + _cell_guid="556c78f5-f0bf-49d7-8662-f726284e1638" _uuid="19124c1ac7d7d4f4143c4ba8c260f8a737687b56"
# + [markdown] _cell_guid="4b521c08-cd33-442b-b639-2163209b3daf" _uuid="43e42792956ed2c45eb0f650f0e875c73221814f"
# ## Foot-notes:
#
# I'm not a stats major, so please do let me know in the comments if you feel that I've left out any important technique or if there was any mistake in the content.
#
# I plan to add another kernel about Time-series here which would be about adapting the open-source solutions from the recent time-series competitions ( Favorita, Recruit,etc. ) to this playground dataset.
#
# Do leave a comment/upvote :)
|
time-series-basics-exploring-traditional-ts.ipynb
|
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# ## Scenario 1, Simple Http Server
# ### Making the dataset, that is, a load on a pod named "httpdia".
# prerequisites:
#
# minikube start
#
# https://httpd.apache.org/docs/2.4/programs/ab.html
import time
from otumba.utils import manage_pods
from otumba.utils import get_pod_metrics_scenario_1
from otumba.GeneralPod import Pod
ServerPrometheus = "http://192.168.49.2:30000/"
numberpods = 10
request = 1000000
concurrency = 4000
host = "http://httpdia.default.svc.cluster.local/"
filedataset = "scenario01-"+str(request)+"-"+str(concurrency)+".csv"
requestpod = "load-generator-jupyter-c"
requestedpod = "httpdia"
# ### filedataset
lapso="5m"
namespace="default"
respuesta= get_pod_metrics_scenario_1 (server = ServerPrometheus, lapso = lapso,
namespace = namespace, podname = requestedpod,
requestpodname = requestpod)
from datetime import datetime
import csv
respuesta["load"] = '1000'
respuesta["date"] = datetime.now()
header = list(respuesta.keys())
print(respuesta)
f = open(filedataset, "w", newline='')
writer = csv.DictWriter(f, fieldnames = header)
writer.writeheader()
manage_pods(numberpods, "httpd", requestedpod, "500m", "200m", "default")
loadpod = Pod(namepod = requestpod,
dockerimage = "httpd", namespace = "default", shell = "/bin/sh")
loadpod.create()
carga = "ab -n "+str(request)+" -c "+str(concurrency)+" "+host
loadpod.exec_command(carga)
countstdout=0
while loadpod.resp.is_open():
resp_stdout=""
resp_stderr=""
longitud =0
respuesta = get_pod_metrics_scenario_1 (server = ServerPrometheus, lapso = lapso,
namespace = namespace, podname = requestedpod,
requestpodname = requestpod)
respuesta["load"] = str(concurrency)
respuesta["date"] = datetime.now()
writer.writerow(respuesta)
if loadpod.resp.peek_stdout():
resp_stdout= loadpod.resp.read_stdout()
longitud= len(resp_stdout)
countstdout= countstdout+1
if loadpod.resp.peek_stderr():
resp_stderr=loadpod.resp.read_stderr()
print("STDERR: %s" % resp_stderr)
inicioerror = resp_stderr.find("Completed")
if inicioerror == -1:
failedrequest=resp_stderr
break
if (longitud >300):
inicio = resp_stdout.find("Failed requests")
if (inicio > -1):
fin = resp_stdout.find("Total transferred")
failedrequest=resp_stdout[inicio+16:fin-1]
finalresponse = resp_stdout.split("\n")
if countstdout==2:
break
#loadpod.resp.update(timeout=1)
respuesta = failedrequest.strip()
f.close()
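# The stdout parsing in the loop above can be exercised on a canned `ab` output fragment (the numbers below are hypothetical, not real benchmark results):

```python
# A canned fragment of ab's stdout, standing in for resp_stdout
sample = ("Concurrency Level:      4000\n"
          "Failed requests:        17\n"
          "Total transferred:      1024000 bytes\n")

inicio = sample.find("Failed requests")
fin = sample.find("Total transferred")
# len("Failed requests:") == 16, matching the inicio+16 offset used in the loop above
failedrequest = sample[inicio + 16:fin].strip()
assert failedrequest == "17"
```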
# +
import csv
from datetime import datetime
import pandas as pd
import matplotlib.pyplot as plt
import numpy as np
import datetime as dt
import seaborn as sns
dataframe = pd.read_csv(filedataset)
# -
plt.figure(figsize=(20,12))
plt.title("cpu_usage_seconds, pods ="+str(numberpods)+" concurrency = "+ str(concurrency) )
plt.xticks(rotation = 90)
plt.plot(dataframe.dropped_packets, c = "cyan", marker = "o")
#plt.plot(dataframe.memory_usage_bytes, c = "green", marker = "o")
plt.show()
|
Scenario_1/GenerateDataset.ipynb
|
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# +
import warnings
import pandas as pd
from gensim.models.nmf import Nmf
from gensim.models.ldamodel import LdaModel
from gensim.models.hdpmodel import HdpModel
import topic_modelling as tm
# -
# Disable any warnings that appear
warnings.filterwarnings('ignore')
# Define number of topics to analyse
NUM_TOPICS: int = 5
# Retrieve data from the dataset
df = pd.read_csv('resources/bbc-news-data.csv', sep='\t')
data = df.content.values.tolist()
# Perform data preprocessing
data, id2word, corpus = tm.preprocess_data(data)
# Evaluate models
lda = LdaModel(corpus=corpus, id2word=id2word,
num_topics=NUM_TOPICS, per_word_topics=True)
nmf = Nmf(corpus=corpus, id2word=id2word, num_topics=NUM_TOPICS)
hdp = HdpModel(corpus=corpus, id2word=id2word, T=NUM_TOPICS)
# +
# List of excluded terms:
# ['say', 'year', 'people', 'new', 'good', 'time', 'come', 'take']
# +
# Create wordclouds for LDA
lda_wordcounts = {}
for i, topic in lda.show_topics(formatted=False):
tm.form_wordcloud(f'LDA Topic {i}', topic)
for word, _ in topic:
lda_wordcounts.setdefault(word, 0)
lda_wordcounts[word] += 1
print('Common words in LDA:')
for word, num in lda_wordcounts.items():
if num >= 3:
print(f'\t{word}: {num}')
# +
# Create wordclouds for NMF
nmf_wordcounts = {}
for i, topic in nmf.show_topics(formatted=False):
tm.form_wordcloud(f'NMF Topic {i}', topic)
for word, _ in topic:
nmf_wordcounts.setdefault(word, 0)
nmf_wordcounts[word] += 1
print('Common words in NMF:')
for word, num in nmf_wordcounts.items():
if num >= 3:
print(f'\t{word}: {num}')
# +
# Create wordclouds for HDP
hdp_wordcounts = {}
for i, topic in hdp.show_topics(formatted=False):
tm.form_wordcloud(f'HDP Topic {i}', topic)
for word, _ in topic:
hdp_wordcounts.setdefault(word, 0)
hdp_wordcounts[word] += 1
print('Common words in HDP:')
for word, num in hdp_wordcounts.items():
if num >= 3:
print(f'\t{word}: {num}')
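# The three counting loops above share one pattern; a sketch of a shared helper (the `topics` literal below is a made-up stand-in for `show_topics(formatted=False)` output, not real model output):

```python
from collections import Counter

def common_words(topics, min_topics=3):
    """Words that appear in at least `min_topics` of the given topics."""
    counts = Counter(word for _, topic in topics for word, _ in topic)
    return {w: n for w, n in counts.items() if n >= min_topics}

# Hypothetical show_topics(formatted=False)-style output
topics = [(0, [("market", 0.10), ("growth", 0.05)]),
          (1, [("market", 0.20), ("film", 0.07)]),
          (2, [("market", 0.10), ("growth", 0.02)])]
assert common_words(topics) == {"market": 3}
```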
|
test_preprocess_3.ipynb
|
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# +
s = input("Enter a string: ")
count = 0
for k in range(len(s)-1):
for i in range(len(s)-k):
j = i + 1
while j < len(s)-k:
if ''.join(sorted(s[i:i+k+1])) == ''.join(sorted(s[j:j+k+1])):
count += 1
j += 1
print(count)
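# The quadratic comparison above can be replaced by grouping substrings by their sorted signature, a sketch that counts each anagram pair exactly once:

```python
from collections import Counter

def count_anagram_pairs(s):
    sig_counts = Counter()
    for length in range(1, len(s)):  # substring lengths 1 .. len(s)-1
        for i in range(len(s) - length + 1):
            sig_counts[tuple(sorted(s[i:i + length]))] += 1
    # n substrings sharing a signature form n*(n-1)//2 anagram pairs
    return sum(n * (n - 1) // 2 for n in sig_counts.values())

assert count_anagram_pairs("abba") == 4
```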
# +
s = "shakir"
print(s[:1])
# -
|
Problem set 2/8. Sherlock&Anagrams.ipynb
|
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
import pandas as pd
import numpy as np
import pandas_datareader.data as web
import datetime as dt
# +
start = dt.datetime(2009,1,2)
end = dt.datetime.now()
DJI_DF = web.DataReader('^DJI', 'yahoo', start, end)
GSPC_DF = web.DataReader('^GSPC', 'yahoo', start, end)
IXIC_DF = web.DataReader('^IXIC', 'yahoo', start, end)
DJI_DF.head()
# -
GSPC_DF.head()
IXIC_DF.head()
# +
DJI_DF.reset_index(inplace = True)
DJI_DF.set_index('Date', inplace = True)
DJI_DF.to_csv('^DJI.csv')
GSPC_DF.reset_index(inplace = True)
GSPC_DF.set_index('Date', inplace = True)
GSPC_DF.to_csv('^GSPC.csv')
IXIC_DF.reset_index(inplace = True)
IXIC_DF.set_index('Date', inplace = True)
IXIC_DF.to_csv('^IXIC.csv')
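# The reset_index/set_index/to_csv round trip above can be checked on a toy frame (the prices below are made-up placeholders, not real quotes):

```python
import io
import pandas as pd

df = pd.DataFrame({"Date": ["2009-01-02", "2009-01-05"],
                   "Close": [9034.69, 8952.89]}).set_index("Date")

# reset_index followed by set_index('Date') is a round trip back to the same frame
assert df.reset_index().set_index("Date").equals(df)

buf = io.StringIO()
df.to_csv(buf)  # the index label becomes the first CSV column
assert buf.getvalue().splitlines()[0] == "Date,Close"
```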
# -
|
Yahoo Finance Stock Data.ipynb
|
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# name: python3
# ---
# + [markdown] id="view-in-github" colab_type="text"
# <a href="https://colab.research.google.com/github/StephanyLera/Linear-Algebra_2nd-Sem/blob/main/LabRep2.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a>
# + [markdown] id="LX4jIMWoPpqq"
# # Practice Answers
# + [markdown] id="XoCBdyVRQdcv"
# ## Practice Number 1
# + [markdown] id="n7XrX5MgLKOh"
# 1. Given the linear combination below, try to create a corresponding matrix representing it.
# + [markdown] id="23HatMeoLPse"
# $$\theta = 5x + 3y - z$$
# + id="_aVjWxL9PuXX"
import numpy as np
import matplotlib.pyplot as plt
import scipy.linalg as la
# %matplotlib inline
# + colab={"base_uri": "https://localhost:8080/"} id="L7hxvQJmKV_A" outputId="6166a7aa-446d-497e-b587-1376a5244546"
def describe_mat (matrix):
    is_square = True if matrix.shape[0] == matrix.shape[1] else False
    print(f'Matrix:\n{matrix}\n\nShape:\t{matrix.shape}\nRank:\t{matrix.ndim}\nIs square: {is_square}\n')
theta = np.array ([
[5,3,-1]
])
describe_mat(theta)
# + [markdown] id="e5S37s2YPwdD"
# ## Practice Number 2
# + [markdown] id="7nF-LyHtLeQY"
# 2. Given the system of linear combinations below, try to encode it as a matrix. Also describe the matrix.
# + [markdown] id="c7Nb7ECSLnFU"
# $$
# A = \left\{\begin{array}
# 5x_1 + 2x_2 +x_3\\
# 4x_2 - x_3\\
# 10x_3
# \end{array}\right.
# $$
# + id="CpkaLgkgPz9Y"
import numpy as np
import matplotlib.pyplot as plt
import scipy.linalg as la
# %matplotlib inline
# + colab={"base_uri": "https://localhost:8080/"} id="_Ep_pHqRP6wN" outputId="710341f7-bbe2-4dbd-b767-21c9fecab7a2"
def describe_mat(matrix):
is_square = True if matrix.shape[0] == matrix.shape[1] else False
print(f'Matrix:\n{matrix}\n\nShape:\t{matrix.shape}\nRank:\t{matrix.ndim}\nIs square: {is_square}\n')
D = np.array ([
    [5,2,1],
[0,4,-1],
[0,0,10]
])
describe_mat(D)
# + [markdown] id="Ilsf7oPFP_PM"
# ## Practice Number 3
# + [markdown] id="NkEn_tX8L57K"
# 3. Given the matrix below, express it as a linear combination in a markdown.
# + id="jlN6gxCNOu3V"
import numpy as np
import matplotlib.pyplot as plt
import scipy.linalg as la
# %matplotlib inline
# + id="lHuDyvRqO16U"
G = np.array([
[1,7,8],
[2,2,2],
[4,6,7]
])
# + [markdown] id="SAdJWjpkQHCW"
# $$
# G = \left\{
# \begin{array}\
# x + 7y + 8z \\
# 2x + 2y + 2z \\
# 4x + 6y + 7z \\
# \end{array}
# \right. \\
# $$
#
# + [markdown] id="plwU7xo1QIbb"
# ## Practice Number 4
# + [markdown] id="zFj6VfSKMCbc"
# 4. Given the matrix below, display the output as a LaTeX makdown also express it as a system of linear combinations.
# + id="u7BLNVYIPBHK"
import numpy as np
import matplotlib.pyplot as plt
import scipy.linalg as la
# %matplotlib inline
# + id="azuV153NPDyq" outputId="bad98fc7-7507-4bbd-d636-14c4639514a6" colab={"base_uri": "https://localhost:8080/"}
H = np.tril(G)
H
# + [markdown] id="sXz4oUkxPGu3"
# $$
# G = \left\{
# \begin{array}\
# x \\
# 2x + 2y \\
# 4x + 6y +7z \\
# \end{array}
# \right. \\
# $$
#
# + [markdown] id="5LgsaWvlPJzR"
# $$ G=\begin{bmatrix} 1 & 0 & 0 \\ 2 & 2 & 0 \\ 4 & 6 & 7\end{bmatrix}
# $$
# + [markdown] id="SW2kplYHQOwP"
# # Tasks
# + [markdown] id="DVXFVtIpQQfq"
# ##**Task 1**
# + [markdown] id="E4QRm9XNNSc_"
# Create a function named mat_desc() that thoroughly describes a matrix. It should:
#
# 1. Displays the shape, size and rank of the matrix.
# 2. Displays whether the matrix is a square or non-square.
# 3. Displays whether the matrix is an empty matrix.
# 4. Displays if the matrix is an identity, ones or zeros matrix.
#
# Use 3 sample matrices whose shapes are not smaller than **(3,3)**. In your methodology, create a flowchart and discuss the functions and methods you have used. Present your results in the results section, showing the description of each matrix you have declared.
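# As a starting point, here is one possible mat_desc covering all four requirements in a single function (a sketch only, returning a dict rather than printing, so it is easy to check):

```python
import numpy as np

def mat_desc_sketch(matrix):
    """Describe shape/size/rank, squareness, emptiness, and special forms."""
    is_square = matrix.ndim == 2 and matrix.shape[0] == matrix.shape[1]
    return {
        "shape": matrix.shape,
        "size": matrix.size,
        "rank": matrix.ndim,
        "is_square": is_square,
        "is_empty": matrix.size == 0,
        "is_identity": is_square and np.array_equal(matrix, np.eye(matrix.shape[0])),
        "is_ones": matrix.size > 0 and np.array_equal(matrix, np.ones(matrix.shape)),
        "is_zeros": matrix.size > 0 and np.array_equal(matrix, np.zeros(matrix.shape)),
    }

d = mat_desc_sketch(np.eye(3))
assert d["is_square"] and d["is_identity"] and not d["is_ones"]
```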
# + id="WcDUaBzDPmV-"
def describe_mat(matrix):
    print(f'Matrix:\n{matrix}\n\nShape:\t{matrix.shape}\nRank:\t{matrix.ndim}\n')
# + [markdown] id="4I4a5SiRQXfw"
# ###**Shape, Size and Rank**
# + colab={"base_uri": "https://localhost:8080/"} id="AjhLla1rQrcg" outputId="8b4f8347-8ceb-4bf5-d8df-023e6b3d7cc8"
Z = np.array([
[8, 6, 2, 4],
[4, 7, 9, 1],
[10, 9, 8, 7],
[23, 6, 11, 5]
])
describe_mat(Z)
# + colab={"base_uri": "https://localhost:8080/"} id="UFhn4M_KRMLj" outputId="3ec13676-2199-422c-a750-28583db0b8ac"
Y = np.array([
[2, 4, 6, 8, 10],
[3, 6, 9, 12, 15],
[4, 8, 12, 16, 20],
[5, 10, 15, 20, 25]
])
describe_mat(Y)
# + colab={"base_uri": "https://localhost:8080/"} id="Ob8JQQVvRn71" outputId="e0de160a-b980-4095-e57d-447e5d7ffdaa"
X = np.array([
[1, 2, 3, 4,],
[10, 9, 8, 7],
[4, 6, 8, 10],
[14, 7, 9, 1],
[4, 8, 5, 67]
])
describe_mat(X)
# + [markdown] id="yffd8Q-8UDbl"
# ###**Square and Non-Square Matrices**
# + id="0NmOZqm0UXSB"
def mat_desc(matrix):
    is_square = matrix.shape[0] == matrix.shape[1]
    print(f'Matrix:\n{matrix}\n\nShape:\t{matrix.shape}\nRank:\t{matrix.ndim}\nIs Square:\t{is_square}\n')
# + colab={"base_uri": "https://localhost:8080/"} id="3FVrA9STU0Kj" outputId="5649fd67-a965-410a-cb3d-3e460dd1545d"
square_mat = np.array([
[3, 8, 7, 9],
[9, 8, 3, 5],
[6, 2, 7, 5],
[1, 6, 0, 4]
])
non_square_mat = np.array([
[5, 7, 3, 6, 0],
[3, 78, 14, 17, 5],
[56, 1, 3, 90, 4],
[5, 24, 10, 11, 9]
])
mat_desc(square_mat)
mat_desc(non_square_mat)
# + colab={"base_uri": "https://localhost:8080/"} id="rYw1_xwBZORo" outputId="680a3706-78b2-4dfa-f214-5ac5f1e6dfda"
square_mat = np.array([
[4, 8, 12, 91],
[23, 16, 7, 10],
[56, 12, 45, 1],
[9, 41, 34, 7]
])
non_square_mat = np.array([
[23, 78, 14, 56, 89],
[67, 10 , 11, 56, 34],
[43, 92, 12, 0, 4],
[78, 101, 79, 34, 2]
])
mat_desc(square_mat)
mat_desc(non_square_mat)
# + colab={"base_uri": "https://localhost:8080/"} id="_w5HTMdJc5Sz" outputId="54cc943e-88d7-49cb-d4e9-54381f62e9cd"
square_mat = np.array([
[3, 4, 5, 6],
[1, 2, 3, 4],
[9, 8, 7, 6],
[4, 6, 2, 6]
])
non_square_mat = np.array([
[4, 7, 2, 8, 9],
[12, 87, 46, 23, 1],
[23, 67, 12, 40, 1],
[56, 17, 19, 23, 65]
])
mat_desc(square_mat)
mat_desc(non_square_mat)
# + [markdown] id="ludKWRL-eENM"
# ###**Empty Matrix**
# + id="_6nTQ7jVeOE9"
def mat_desc(matrix):
    if matrix.size > 0:
        is_square = matrix.shape[0] == matrix.shape[1]
        print(f'Matrix:\n{matrix}\n\nShape:\t{matrix.shape}\nRank:\t{matrix.ndim}\nIs Square:\t{is_square}\n')
    else:
        print('Matrix is Null')
# + colab={"base_uri": "https://localhost:8080/"} id="csYvHlRFfKzq" outputId="eac5808d-bf24-4457-83f5-6d1fd2b702b5"
null_mat = np.array([])
mat_desc(null_mat)
# + [markdown] id="aVApWp-Vf7LX"
# ###**Identity**
# + colab={"base_uri": "https://localhost:8080/"} id="ybwRX-umgChY" outputId="2bb126ee-6e66-4a20-fc9b-9da91004a3af"
np.eye(5)
# + colab={"base_uri": "https://localhost:8080/"} id="AG7iHzBMgGxg" outputId="05dbfca8-cf18-495e-c0d0-ada24ff01be7"
np.identity(10)
# + colab={"base_uri": "https://localhost:8080/"} id="dgK-owHVgcSn" outputId="8f716ce5-cc5b-422a-fadc-2e5bfa984e8d"
np.eye(16)
# + [markdown] id="PFNdui6Phdm2"
# ###**Ones**
# + colab={"base_uri": "https://localhost:8080/"} id="pzVLfkqJhhau" outputId="233b99ac-243c-49b5-8263-27438b9bdd8c"
ones_mat_row = np.ones((1,5))
ones_mat_sqr = np.ones((5,5))
ones_mat_rct = np.ones((7,4))
print(f'Ones Row Matrix: \n{ones_mat_row}')
print(f'Ones Square Matrix: \n{ones_mat_sqr}')
print(f'Ones Rectangular Matrix: \n{ones_mat_rct}')
# + [markdown] id="pPADSZbVk98l"
# ###**Zeros**
# + colab={"base_uri": "https://localhost:8080/"} id="2gaeai7jlCcW" outputId="cc879a20-6d1d-4c8f-8e40-2c399071892a"
zero_mat_row = np.zeros((1,3))
zero_mat_sqr = np.zeros((4,4))
zero_mat_rct = np.zeros((7,5))
print(f'Zero Row Matrix: \n{zero_mat_row}')
print(f'Zero Square Matrix: \n{zero_mat_sqr}')
print(f'Zero Rectangular Matrix: \n{zero_mat_rct}')
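# The checks above can be combined into a single mat_desc covering all four requirements of Task 1. This is a minimal sketch; the dict return value is an addition for convenience and is not part of the original task.

```python
import numpy as np

def mat_desc(matrix):
    """Describe a matrix: shape, size, rank, squareness, and special forms."""
    matrix = np.array(matrix)
    if matrix.size == 0:
        print('Matrix is Null')
        return {'empty': True}
    is_square = matrix.ndim == 2 and matrix.shape[0] == matrix.shape[1]
    desc = {
        'empty': False,
        'shape': matrix.shape,
        'size': matrix.size,
        'rank': matrix.ndim,
        'square': is_square,
        'identity': is_square and np.array_equal(matrix, np.eye(matrix.shape[0])),
        'ones': bool(np.all(matrix == 1)),
        'zeros': bool(np.all(matrix == 0)),
    }
    print(f'Matrix:\n{matrix}\n')
    for name, value in desc.items():
        print(f'{name}:\t{value}')
    return desc

mat_desc(np.eye(4))
```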
# + [markdown] id="G3C29YdaaR8d"
# ##**Task 2**
# + id="oahri1H7MpD3"
import numpy as np
import matplotlib.pyplot as plt
import scipy.linalg as la
# %matplotlib inline
# + colab={"base_uri": "https://localhost:8080/"} id="_u6L6aqMJgh6" outputId="a8c1272a-9ef4-436d-c12a-51385ad8bf13"
def mat_operations(mat1, mat2):
mat1 = np.array(mat1)
mat2 = np.array(mat2)
print('Matrix 1:', mat1)
print('Matrix 2:', mat2)
    if mat1.shape != mat2.shape:
        print('The shapes of the two matrices are not the same. Could not perform operations.')
return
print('Sum of the given matrices:')
msum = mat1 + mat2
print(msum)
print('Difference of the given matrices:')
mdiff = mat1 - mat2
print(mdiff)
print('Element-wise multiplication of the given matrices:')
mmul = np.multiply(mat1, mat2)
print(mmul)
    print('Element-wise division of the given matrices:')
    mdiv = np.divide(mat1, mat2)
    print(mdiv)
mat_operations([
[5,2,7,4,4,3],
[8,9,9,5,8,6],
[1,2,3,4,5,6],
[2,4,6,8,10,0]],
[[9,8,7,6,5,4],
[4,2,1,1,0,3],
[7,2,5,8,3,6],
[4,1,0,2,5,8]])
# + id="X1IYYH82Qo7q" outputId="75f99289-f31a-4e38-a96b-8fe2070400fa" colab={"base_uri": "https://localhost:8080/"}
mat_operations([
[3,1,2,3,4,5],
[6,5,7,7,6,9],
[2,4,9,3,5,4],
[3,4,6,4,8,9]],
[[1,4,3,3,9,3],
[1,6,2,5,1,4],
[2,9,1,8,1,5],
[3,6,8,6,8,6]])
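# The element-wise operations above can also be wrapped so the four results are returned for reuse in later cells. This is a sketch; returning a tuple and raising ValueError on a shape mismatch are additions, not part of the original cells.

```python
import numpy as np

def mat_operations_v2(mat1, mat2):
    """Return (sum, difference, element-wise product, element-wise quotient)."""
    mat1, mat2 = np.array(mat1), np.array(mat2)
    if mat1.shape != mat2.shape:
        raise ValueError('The shapes of the two matrices must match.')
    # Suppress divide-by-zero warnings; zeros in mat2 yield inf/nan entries.
    with np.errstate(divide='ignore', invalid='ignore'):
        mdiv = np.divide(mat1, mat2)
    return mat1 + mat2, mat1 - mat2, np.multiply(mat1, mat2), mdiv

msum, mdiff, mmul, mdiv = mat_operations_v2([[1, 2], [3, 4]], [[5, 6], [7, 8]])
print(msum, mdiff, mmul, mdiv, sep='\n')
```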
|
LabRep2.ipynb
|
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# ## Embedding a Bokeh server in a Notebook
#
# This notebook shows how a Bokeh server application can be embedded inside a Jupyter notebook.
# +
import yaml
from bokeh.layouts import column
from bokeh.models import ColumnDataSource, Slider
from bokeh.plotting import figure
from bokeh.themes import Theme
from bokeh.io import show, output_notebook
from bokeh.sampledata.sea_surface_temperature import sea_surface_temperature
output_notebook()
# -
# There are various application handlers that can be used to build up Bokeh documents. For example, there is a `ScriptHandler` that uses the code from a `.py` file to produce Bokeh documents. This is the handler that is used when we run `bokeh serve app.py`. Here we are going to use the lesser-known `FunctionHandler`, that gets configured with a plain Python function to build up a document.
#
# Here is the function `modify_doc(doc)` that defines our app:
def modify_doc(doc):
df = sea_surface_temperature.copy()
source = ColumnDataSource(data=df)
plot = figure(x_axis_type='datetime', y_range=(0, 25),
y_axis_label='Temperature (Celsius)',
title="Sea Surface Temperature at 43.18, -70.43")
plot.line('time', 'temperature', source=source)
def callback(attr, old, new):
if new == 0:
data = df
else:
data = df.rolling('{0}D'.format(new)).mean()
source.data = ColumnDataSource(data=data).data
slider = Slider(start=0, end=30, value=0, step=1, title="Smoothing by N Days")
slider.on_change('value', callback)
doc.add_root(column(slider, plot))
doc.theme = Theme(json=yaml.load("""
attrs:
Figure:
background_fill_color: "#DDDDDD"
outline_line_color: white
toolbar_location: above
height: 500
width: 800
Grid:
grid_line_dash: [6, 4]
grid_line_color: white
"""))
# Now we can display our application using ``show``, which will automatically create an ``Application`` that wraps ``modify_doc`` using ``FunctionHandler``. The end result is that the Bokeh server will call ``modify_doc`` to build a new document for every new session that is opened.
show(modify_doc)
|
reviews/Jupyter_Widgets/bokeh_widget_plot_server.ipynb
|
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 2
# language: python
# name: python2
# ---
import re
import collections
import datetime
p = re.compile(r'(?P<word>\b\w+\b)')
m = p.search( '(((( Lots of punctuation )))' )
#dir(m)#.groups(1)
m.span()
substitutions = {'year': '1776'}
_regex = re.compile('|'.join(map(re.escape, substitutions)))
datetime.datetime.strptime('01', '%Y')
# +
specifier = '1776-07/04'
'1996-03/02'
substitutions = collections.OrderedDict((
('year', '1776'),
#('two digit year', '76'),
#('two digit month', '07'),
('month', '7'),
('day', '4'),
#('two digit day', '04'),
))
#iterator = p.finditer('12 drummers drumming, 11 ... 10 ...')
map(re.escape, substitutions)
#re.escape(
#(?P<word>)
#_regex = re.compile('|'.join())
# +
'|'.join(map(re.escape, substitutions))
# +
for match in re.finditer(r'July|1776|76|7|4', 'July 4, 1776'):
print match.span()
print match.groupdict()
def parse(string, specifiers, year=None, month=None, day=None, hour=None, minute=None, second=None,
millisecond=None, microsecond=None, meridian=None, timezone=None):
pass
# -
|
develop/.ipynb_checkpoints/re-checkpoint.ipynb
|
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python [conda env:tf] *
# language: python
# name: conda-env-tf-py
# ---
# +
import datetime
import tensorflow as tf
from tensorflow import keras
from tensorflow.keras.callbacks import TensorBoard, LearningRateScheduler
from tensorflow.keras.layers import GlobalMaxPooling2D, Dense, Dropout
from tensorflow.keras.models import Model
from tensorflow.keras.layers import Input, Conv2D, Dense, Flatten, Dropout
from tensorflow.keras.layers import GlobalMaxPooling2D, MaxPooling2D, BatchNormalization
# -
(x, y), (x_test, y_test) = keras.datasets.cifar10.load_data()
x.shape
x_test.shape
# Wrap the arrays in a TensorFlow Dataset.
train_dataset = tf.data.Dataset.from_tensor_slices((x, y))
HEIGHT = 32
WIDTH = 32
NUM_CHANNELS = 3
NUM_CLASSES = 10
def augmentation(x, y):
x = tf.image.resize_with_crop_or_pad(
x, HEIGHT + 8, WIDTH + 8)
x = tf.image.random_crop(x, [HEIGHT, WIDTH, NUM_CHANNELS])
# x = tf.image.random_flip_left_right(x)
return x, y
train_dataset = train_dataset.map(augmentation)
train_dataset = (train_dataset
.map(augmentation)
.shuffle(buffer_size=50000))
def normalize(x, y):
x = tf.cast(x, tf.float32)
x /= 255.0 # normalize to [0,1] range
return x, y
train_dataset = (train_dataset
.map(augmentation)
.shuffle(buffer_size=50000)
.map(normalize))
# Rebuild the full pipeline from the raw tensors so augmentation and
# normalization are each applied exactly once.
train_dataset = (tf.data.Dataset.from_tensor_slices((x, y))
                 .map(augmentation)
                 .map(normalize)
                 .shuffle(50000)
                 .batch(128, drop_remainder=True))
test_dataset = tf.data.Dataset.from_tensor_slices((x_test, y_test))
test_dataset = (test_dataset.map(normalize).batch(128, drop_remainder=True))
# +
NUM_GPUS = 1
BS_PER_GPU = 128
NUM_EPOCHS = 60
NUM_TRAIN_SAMPLES = 50000
BASE_LEARNING_RATE = 0.1
LR_SCHEDULE = [(0.1, 30), (0.01, 45)]
def schedule(epoch):
initial_learning_rate = BASE_LEARNING_RATE * BS_PER_GPU / 128
learning_rate = initial_learning_rate
for mult, start_epoch in LR_SCHEDULE:
if epoch >= start_epoch:
learning_rate = initial_learning_rate * mult
else:
break
tf.summary.scalar('learning rate', data=learning_rate, step=epoch)
return learning_rate
# +
i = Input(shape=(HEIGHT, WIDTH, NUM_CHANNELS))
x = Conv2D(32, (3, 3), activation='relu')(i)
x = MaxPooling2D((2, 2))(x)
x = Conv2D(64, (3, 3), activation='relu')(x)
x = MaxPooling2D((2, 2))(x)
x = Conv2D(64, (3, 3), activation='relu')(x)
x = Flatten()(x)
x = Dense(64, activation='relu')(x)
# last hidden layer i.e.. output layer
x = Dense(NUM_CLASSES)(x)
model = Model(i, x)
# +
# i = Input(shape=(HEIGHT, WIDTH, NUM_CHANNELS))
# x = Conv2D(32, (3, 3), activation='relu', padding='same')(i)
# x = BatchNormalization()(x)
# x = Conv2D(32, (3, 3), activation='relu', padding='same')(x)
# x = BatchNormalization()(x)
# x = MaxPooling2D((2, 2))(x)
# x = Conv2D(64, (3, 3), activation='relu', padding='same')(x)
# x = BatchNormalization()(x)
# x = Conv2D(64, (3, 3), activation='relu', padding='same')(x)
# x = BatchNormalization()(x)
# x = MaxPooling2D((2, 2))(x)
# x = Conv2D(128, (3, 3), activation='relu', padding='same')(x)
# x = BatchNormalization()(x)
# x = Conv2D(128, (3, 3), activation='relu', padding='same')(x)
# x = BatchNormalization()(x)
# x = MaxPooling2D((2, 2))(x)
# x = Flatten()(x)
# x = Dropout(0.2)(x)
# # Hidden layer
# x = Dense(1024, activation='relu')(x)
# x = Dropout(0.2)(x)
# # last hidden layer i.e.. output layer
# x = Dense(NUM_CLASSES, activation='softmax')(x)
# model = Model(i, x)
# -
model.compile(
loss=tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True),
optimizer=tf.keras.optimizers.Adam(learning_rate=0.001, beta_1=0.9, beta_2=0.999, epsilon=1e-07, amsgrad=False,name='Adam'),
metrics=['accuracy'])
# +
log_dir="logs/fit/" + datetime.datetime.now().strftime("%Y%m%d-%H%M%S")
file_writer = tf.summary.create_file_writer(log_dir + "/metrics")
file_writer.set_as_default()
tensorboard_callback = TensorBoard(
log_dir=log_dir,
update_freq='batch',
histogram_freq=1)
lr_schedule_callback = LearningRateScheduler(schedule)
model.fit(train_dataset,
epochs=NUM_EPOCHS,
validation_data=test_dataset,
validation_freq=1,
callbacks=[tensorboard_callback, lr_schedule_callback])
model.evaluate(test_dataset)
model.save('cifar_model.h5')
# +
new_model = keras.models.load_model('cifar_model.h5')
new_model.evaluate(test_dataset)
|
data_pipeline.ipynb
|
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 2
# language: python
# name: python2
# ---
import numpy as np
import matplotlib.pyplot as plt
import tensorflow as tf
# %matplotlib inline
# +
num_points = 1000
vectors_set = []
for i in xrange(num_points):
x1 = np.random.normal(0.0, 0.55)
y1 = x1 * 0.1 + 0.3 + np.random.normal(0.0, 0.03)
vectors_set.append([x1, y1])
x_data = [v[0] for v in vectors_set]
y_data = [v[1] for v in vectors_set]
plt.plot(x_data, y_data, 'ro')
plt.legend()
plt.show()
# -
W = tf.Variable(tf.random_uniform([1], -1.0, 1.0))
b = tf.Variable(tf.zeros(1))
y = W * x_data + b
# +
# Cost function (error function) = MSE (Mean Squared Error)
loss = tf.reduce_mean(tf.square(y - y_data))
# +
# Apply gradient descent in TensorFlow
optimizer = tf.train.GradientDescentOptimizer(0.5) # 0.5 is the learning_rate
train = optimizer.minimize(loss)
# +
# Create a session and initialize the variables to run the TensorFlow runtime
init = tf.initialize_all_variables()
sess = tf.Session()
sess.run(init)
# -
for step in xrange(8):
sess.run(train)
print "==========================="
print step,"-", sess.run(loss)
plt.plot(x_data, y_data, 'ro')
plt.plot(x_data, sess.run(W) * x_data + sess.run(b))
plt.legend(('W:{w}'.format(w = sess.run(W)), 'b:{b}'.format(b = sess.run(b))), loc='upper left')
plt.xlabel('x')
plt.ylabel('y')
plt.show()
print(sess.run(W), sess.run(b))
print "==========================="
print tf.shape([x_data, y_data])
print tf.size([x_data, y_data])
print tf.rank([x_data, y_data])
[x_data, y_data]
|
02_p50_linear regression.ipynb
|
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# %reload_ext autoreload
# %autoreload 2
# %cd /proj/fastsom
# +
# # !pip install plotly chart-studio
# +
# # !pip install --upgrade fastai
# # !pip uninstall -y fastai_category_encoders && pip install git+https://github.com/kireygroup/fastai-category-encoders
# +
import inspect
from typing import Any

def members(o: Any):
    methods = inspect.getmembers(o, predicate=inspect.ismethod)
    objects = inspect.getmembers(o, predicate=lambda item: not inspect.isfunction(item))
    return {
        'methods': list(dict(methods).keys()),
        'objects': {name: type(value) for name, value in objects},
    }
# -
from typing import *
from fastsom import *
from fastai.tabular.all import *
from fastai_category_encoders import *
path = untar_data(URLs.ADULT_SAMPLE)
df = pd.read_csv(path/'adult.csv')
target = 'salary'
cont_names = [col for col in df.columns if ('float' in str(df[col].dtype) or 'int' in str(df[col].dtype)) and col != target]
cat_names = [col for col in df.columns if col not in cont_names and col != target]
procs = [FillMissing, CategoryEncode('fasttext'), Normalize]
dls = TabularDataLoaders.from_df(df, cat_names=cat_names, cont_names=cont_names, y_names=[target], procs=procs, bs=128)
visualize = []
learn = SomLearner(dls, visualize=visualize, loss_func=codebook_err, lr=0.3)
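# As an aside, the dtype-based column split above can also be written with pandas' select_dtypes, which avoids matching on the dtype's string form. A sketch on a toy frame (the column names here are illustrative, not taken from the ADULT sample):

```python
import pandas as pd

# Toy frame standing in for the real dataset.
df_toy = pd.DataFrame({
    'age': [25, 40],          # int
    'hours': [1.5, 2.5],      # float
    'workclass': ['a', 'b'],  # object -> categorical
    'salary': ['<50k', '>=50k'],
})
target = 'salary'
# select_dtypes('number') keeps int and float columns in one step.
cont_names = [c for c in df_toy.select_dtypes('number').columns if c != target]
cat_names = [c for c in df_toy.columns if c not in cont_names and c != target]
print(cont_names, cat_names)
```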
# + [markdown] heading_collapsed=true
# ## Plotly
# + hidden=true
data = [b for b in learn.dls.train]
# + hidden=true
x = torch.cat([torch.cat([b[0], b[1]], dim=-1) for b in learn.dls.train])
# + hidden=true
from sklearn.decomposition import PCA
pca = PCA(n_components=3)
data_pca = pca.fit_transform(x.cpu().numpy())
data_weights = pca.transform(learn.model.weights.reshape(-1, learn.model.weights.shape[-1]).cpu().numpy())
# + hidden=true
@delegates(go.Scatter.__init__)
def scatter(data: Union[torch.Tensor, np.ndarray], **kwargs):
if data.shape[-1] == 1:
return go.Scatter(x=data[...,0], **kwargs)
elif data.shape[-1] == 2:
return go.Scatter(x=data[...,0], y=data[...,1], **kwargs)
elif data.shape[-1] == 3:
return go.Scatter3d(x=data[...,0], y=data[...,1], z=data[...,2], **kwargs)
else:
raise ValueError('Unable to plot data with more than 3 dimensions')
# + hidden=true
trace_data = scatter(data_pca * 10, name='data', mode='markers', marker_color='#539dcc', marker_size=2)
trace_weights = scatter(data_weights * 10, name='SOM weights', mode='markers', marker_color='#e58368', marker_size=3)
data = [trace_weights, trace_data]
layout = go.Layout(title="SOM Visualization", automargin=True)
fig = go.Figure(data=data, layout=layout)
# + hidden=true
fig.show()
# -
# ## Training
# %matplotlib notebook
learn.fit(5)
# ## Visualization
interp = SomInterpretation.from_learner(learn)
# %matplotlib inline
interp.show_weights()
# %matplotlib inline
interp.show_hitmap()
# %matplotlib inline
interp.show_feature_heatmaps(feature_indices=[0, 1, 2])
# %matplotlib inline
interp.show_preds()
# ## Exporting
df_exp = learn.codebook_to_df(recategorize=True, denorm=True)
df_exp.head()
learn.export('example_som.pkl')
|
nbs/develop.ipynb
|
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# name: python3
# ---
# + [markdown] id="hq9XMl8_c_45"
# # 3. Splitting the Train and Test Data
# + [markdown] id="47PsJJZmtKFl"
# ## Library
# Declare the minimally required libraries.
# + id="SRsCpDfyc_46"
import numpy as np
from tensorflow import keras
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Dense
# + [markdown] id="YGvGcL2BrgMN"
# ## Loading the dataset
# The iris data.
# + id="OBhMVXj6nZJl"
from sklearn import datasets
iris = datasets.load_iris()
x=iris.data
y= keras.utils.to_categorical(iris.target, 3)
# + [markdown] id="2izSxxF4tUBV"
# ## Splitting into training and test datasets
# Use the train_test_split function provided by sklearn to split the data into training and evaluation sets.
# + colab={"base_uri": "https://localhost:8080/"} id="EdWHUJTYtS0I" executionInfo={"status": "ok", "timestamp": 1628224208046, "user_tz": -540, "elapsed": 358, "user": {"displayName": "\uae40\uc131\uad6d", "photoUrl": "", "userId": "14454204284131910272"}} outputId="7542f98f-e41f-4086-e45e-fb2b70bb53a7"
from sklearn.model_selection import train_test_split
x_train,x_test,y_train,y_test=train_test_split(x,y,test_size=0.3,random_state=0)
x_train.shape,x_test.shape,y_train.shape,y_test.shape
# + [markdown] id="1HRy1Qo-tzR9"
# ## Building the model
# Construct a simple model.
# + colab={"base_uri": "https://localhost:8080/"} id="YB_yTi48c_48" executionInfo={"status": "ok", "timestamp": 1628224217114, "user_tz": -540, "elapsed": 6275, "user": {"displayName": "\uae40\uc131\uad6d", "photoUrl": "", "userId": "14454204284131910272"}} outputId="7755e6d3-da88-454a-8715-474d701c70aa"
model = Sequential()
model.add(Dense(5, activation='relu', input_shape=(4,)))
model.add(Dense(3, activation='softmax'))
model.summary()
# + [markdown] id="OoIUfM1ht5X8"
# Specify the functions to use.
# + id="yWKDrr_9HmSd"
model.compile(loss='categorical_crossentropy',
optimizer='adam',
metrics=['accuracy'])
# + [markdown] id="j6Uc86Vht8j0"
# ## Validation
# Train on the training data and pass the test data in as validation data, so we can evaluate how well the model predicts and guard against overfitting.
# + colab={"base_uri": "https://localhost:8080/"} id="LGKpoNvVc_48" executionInfo={"status": "ok", "timestamp": 1628224286542, "user_tz": -540, "elapsed": 11088, "user": {"displayName": "\uae40\uc131\uad6d", "photoUrl": "", "userId": "14454204284131910272"}} outputId="f21f5b29-e1b0-4e56-9413-7b6ccfaf5406"
model.fit(x_train, y_train,
batch_size=10,
epochs=100,
verbose=1,
validation_data=(x_test, y_test))
# + [markdown] id="azaV5yHCu2X9"
# ## Checking the score
# + colab={"base_uri": "https://localhost:8080/"} id="or20WrrBvMX-" executionInfo={"status": "ok", "timestamp": 1628224340457, "user_tz": -540, "elapsed": 361, "user": {"displayName": "\uae40\uc131\uad6d", "photoUrl": "", "userId": "14454204284131910272"}} outputId="74814fe6-cba4-4c05-f90c-ee94256980a6"
score = model.evaluate(x_test, y_test, verbose=0)
print('Test loss:', score[0])
print('Test accuracy:', score[1])
# + [markdown] id="QxvB9z1_u9_d"
# ## Predicting with a decoder
# + id="A_0bSXQmsZBP"
decoder = {k:v for k,v in enumerate( iris.target_names )}
# + colab={"base_uri": "https://localhost:8080/"} id="YevdLG68XyLg" executionInfo={"status": "ok", "timestamp": 1628224377950, "user_tz": -540, "elapsed": 35, "user": {"displayName": "\uae40\uc131\uad6d", "photoUrl": "", "userId": "14454204284131910272"}} outputId="a8f96867-a78d-4875-b770-42b1a06e0733"
r=np.argmax(model.predict(x_test[:10,:]), axis=-1)
[decoder[i] for i in r]
# + [markdown] id="rITDyK7XpdbH"
# # Because the model's training process is viewed as text, it is hard to take in at a glance.
|
03.MLP/03.MLP_train_test.ipynb
|
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3 (ipykernel)
# language: python
# name: python3
# ---
# + [markdown] id="15B6ZpW8pzxX"
# # FastText Inference Process
#
# In this notebook we will analyze the inner workings of a FastText supervised model when making inferences. We will replicate what the trained model does in plain Python so it's easier to understand. The process can be split into the following steps:
#
# 1. Getting an embedding for the given sentence.
# 2. Computing the inferred class given this embedding.
#
# First we train a simple model.
# + id="s0cQuf1mWrc7"
# %%capture
# ! pip install fasttext
# ! wget https://dl.fbaipublicfiles.com/fasttext/data/cooking.stackexchange.tar.gz && tar xvzf cooking.stackexchange.tar.gz
# ! head -n 12404 cooking.stackexchange.txt > cooking.train
# ! tail -n 3000 cooking.stackexchange.txt > cooking.valid
# + colab={"base_uri": "https://localhost:8080/"} id="uwz2HIXdXdNZ" outputId="ff23935f-21d9-44bd-f874-a0489a4c4c0f"
# ! ls
# + id="3aead067"
import numpy as np
import fasttext
from typing import List
# + id="Dnhwa7AHYJt9"
LR=1.0
EPOCH=30
WORD_NGRAMS=2
BUCKET=200000
DIM=16
MINN=3
MAXN=6
# + id="977db69c"
model = fasttext.train_supervised(input="cooking.train", lr=LR, epoch=EPOCH, wordNgrams=WORD_NGRAMS,
bucket=BUCKET, dim=DIM, minn=MINN, maxn=MAXN)
# + [markdown] id="6d2df131"
# # Get FastText parameters
#
# Now we get the parameters and hyperparameters from the FastText binary model.
# + id="227a193c" outputId="c0af27a3-5568-4b74-f6b4-e3c3429ae6fd" colab={"base_uri": "https://localhost:8080/"}
vocabulary = model.get_words()
labels = model.labels
embedding_dim = model.get_dimension()
bucket_size = model.f.getArgs().bucket
minn = model.f.getArgs().minn
maxn = model.f.getArgs().maxn
wordNgrams = model.f.getArgs().wordNgrams
input_matrix = model.get_input_matrix()
output_matrix = model.get_output_matrix()
print("Vocabulary size:", len(vocabulary))
print("Number of labels:", len(labels))
print("Embedding dimension:", embedding_dim)
print("Bucket size:", bucket_size)
print("Minn:", minn)
print("Maxn:", maxn)
print("wordNgrams max length:", wordNgrams)
print("Input matrix shape:", input_matrix.shape)
print("Output matrix shape:", output_matrix.shape)
# + [markdown] id="082761ef"
# # 1. Compute full word vectors
#
# Full word vectors are stored in the first chunk of the input_matrix. These are words that show up in the training data at least **minCount** times. If there are no character n-grams then this vector will be the final word vector. If there are character n-grams then the final word vector is the mean of the word vector and the subword vectors.
# + id="972ef6e1" outputId="f12c571f-5dca-4904-c26d-54663595bdbf" colab={"base_uri": "https://localhost:8080/"}
word = "meatballs"
idx_a = model.get_words().index(word)
idx_a
# + id="c6dd56fa" outputId="edee3684-1ca7-4550-e9a8-9aac5cb1f34b" colab={"base_uri": "https://localhost:8080/"}
idx_b = model.get_word_id(word)
idx_b
# + id="b41f26ad" outputId="afac875f-1d83-4bbc-9b3b-61687027d2b1" colab={"base_uri": "https://localhost:8080/"}
input_matrix[idx_a]
# + [markdown] id="82a10d57"
# # 2. Compute all possible subwords
#
# FastText adds the "Beginning of Word" and "End of Word" characters (< and >) to the words before computing character n-grams.
# + id="aa43b41f"
def get_subwords(word: str, minn: int, maxn: int) -> List[str]:
word = '<' + word + '>'
subwords = set(
[
word[i:i+size]
for size in range(minn, maxn+1)
for i in range(len(word)-size+1)
]
)
return list(subwords)
# + id="a41f46d7" outputId="37e38f92-20e1-430f-83b6-41674aa043d1" colab={"base_uri": "https://localhost:8080/"}
get_subwords("meatballs", minn=minn, maxn=maxn)
# + [markdown] id="2b1a32e8"
# # 3. Find subword vectors in input matrix
#
# The input matrix is of shape **(n_vocabulary + bucket_size, dim)**. For every unique word in the vocabulary there is a corresponding vector in the first chunk of the matrix. After this the remaining chunk (of bucket_size) holds vectors for subword and word n-grams, which might collide since the bucket size is fixed, as opposed to word vectors which do not collide.
#
# To find the n-gram index in the matrix we have to do the following procedure:
#
# 1. Hash the subword string characters into a single integer.
# 2. Modulo this number with the bucket size, which will map the number in one of the bucket slots.
# 3. Add the vocabulary length to the previous result to get the index of the subword in the input_matrix.
# + id="62587aa3"
def get_hash(subword: str) -> int:
h = 2166136261
for c in subword:
c = ord(c) % 2**8
h = (h ^ c) % 2**32
h = (h * 16777619) % 2**32
return h
def get_subword_index(subword, bucket, nb_words):
return (get_hash(subword) % bucket) + nb_words
# + id="6cf2cd08"
# Equivalent implementation
def get_hash(subword: str) -> np.uint32:
h = np.uint32(2166136261)
for c in subword:
c = np.uint32(np.int8(ord(c)))
h = np.uint32(h ^ c)
h = np.uint32(h * 16777619)
return h
# + id="674f05c5" outputId="421ae160-aeeb-4a48-cb94-cd86c76977c6" colab={"base_uri": "https://localhost:8080/"}
idx_a = get_subword_index('<me', bucket_size, len(vocabulary))
idx_a
# + id="295f4d2c" outputId="a6c594d9-3dca-46d8-d5f5-b58aa9712788" colab={"base_uri": "https://localhost:8080/"}
idx_b = model.get_subword_id('<me')
idx_b
# + id="0ff1e0cb" outputId="b45726fe-d3ff-4bc5-95b3-dba27f72ac79" colab={"base_uri": "https://localhost:8080/"}
input_matrix[idx_a]
# + [markdown] id="14758fc8"
# # 4. Find wordNgram features in the input matrix
# + [markdown] id="e_rVnLU1yEQv"
# To compute the wordNgrams for a particular sentence you need to:
# 1. Get the hashes for the words in the sentence, including the EOS token.
# 2. Make ngrams from size 2 up to the max ngram size (**wordNgrams** parameter)
# 3. Get a single index (feature) per ngram by recursively hashing (again, but with a different hashing function) each word hash integer into a single integer.
#
# Suppose we want to get the wordNgram features for the sentence: **what is pizza?**
# + [markdown] id="K-NYEPFY9IVf"
# ### 4.1. Get hashes for words in the sentence
# In this case we will get the hashes for the following words:
# 1. what
# 2. is
# 3. pizza?
# 4. <\/s>
#
# To do this we will use the previously defined `get_hash` function for characters.
# + colab={"base_uri": "https://localhost:8080/"} id="6Z1RxtbR8ZYv" outputId="37d2ee1c-6372-4908-a6c6-17db4273ae6c"
sentence = "what is pizza?"
# Append the EOS token
sentence += " </s>"
# Get hashes for each word
words = sentence.split()
hashes = [np.int32(get_hash(word)) for word in words]
hashes, words
# + [markdown] id="biHWbJgi-Ij5"
# ### 4.2. Make ngrams from size 2 up to the max ngram size (wordNgrams parameter)
#
# Now we have to generate the word ngrams (only if wordNgrams > 1). Word ngrams are generated for each length from 2 up to the wordNgrams parameter, i.e. every size in `range(2, wordNgrams + 1)`.
#
# As we already have a list of hashes instead of a sentence with words, we generate the ngrams using the hashes directly instead of doing it with words and then hashing.
#
# + colab={"base_uri": "https://localhost:8080/"} id="-XBBmTTzp-W4" outputId="fe0c24d6-59f2-4457-c849-d68c9ce563fd"
wordngrams = [
words[i:i+size]
for size in range(2, wordNgrams+1)
for i in range(len(words)-size+1)
]
wordngrams
# + colab={"base_uri": "https://localhost:8080/"} id="QhARvlVEpuFS" outputId="a83fd0af-3a63-4409-d209-60e08178d48a"
wordngrams = [
hashes[i:i+size]
for size in range(2, wordNgrams+1)
for i in range(len(hashes)-size+1)
]
wordngrams
# + [markdown] id="xuX6Q1UIqevF"
# ### 4.3 Get a single index per ngram by recursively hashing each word hash integer into a single integer.
#
# We want to combine each ngram's hashes (words) into a single index for the input_matrix. A different recursive hashing function is used than the one for hashing string characters. After the hashes are obtained we take them modulo the bucket size and add the vocabulary length to find the actual index in the input matrix.
# + colab={"base_uri": "https://localhost:8080/"} id="vygWdZ5KqY9F" outputId="35432f58-2a5e-4d41-89eb-25657a1d019f"
wordngrams_indexes = list()
for ngram in wordngrams:
h = ngram[0]
for word_hash in ngram[1:]:
h = np.uint64(h * 116049371 + word_hash)
ngram_index = len(vocabulary) + int(h) % bucket_size
wordngrams_indexes.append(ngram_index)
wordngrams_indexes
# + [markdown] id="kNxf4IXTtqsK"
# We end up with the indices corresponding to the following 3 wordNgram features.
#
# 1. "what is"
# 2. "is pizza"
# 3. "pizza <\/s>"
#
# Verifying the right index for a given wordngram with the Python model is not as easy as it is for words and subwords: there are **model.get_word_id()** and **model.get_subword_id()**, but no **model.get_wordngram_id()**. To check that these are in fact the right indexes we would have to modify the original source to print them, which is not included in this notebook.
#
# Once we have the wordNgram indexes we can look them up in the input matrix to retrieve the embeddings.
# + colab={"base_uri": "https://localhost:8080/"} id="YuJjgt6utrZq" outputId="e71c1854-2757-43b6-89b0-1536b2d93e45"
features = [input_matrix[word_ngram_id] for word_ngram_id in wordngrams_indexes]
features
# + [markdown] id="k2UH7Bqtsete"
# ### Actual FastText implementation
#
# The previous implementation makes it easier to understand the steps involved; however, in the original FastText C++ source, getting the wordNgram indexes for a given sentence is computed by generating and hashing the ngrams into a single list in one go. The following definition is a closer replica of the FastText process.
# + id="rGTRkR7w-HSl"
def add_word_ngram(line: list, hashes: list, n: int, bucket: int, nwords: int) -> List[int]:
for i in range(len(hashes)):
h = np.int32(hashes[i])
j = i + 1
while j < len(hashes) and j < i + n:
h = np.uint64(h * 116049371 + np.int32(hashes[j]))
line.append(nwords + (int(h) % bucket))
j += 1
return line
# + colab={"base_uri": "https://localhost:8080/"} id="D3948YIC-7-u" outputId="f668dd6d-62be-4f6a-963b-a88adef5a1d1"
word_ngrams_idxs = add_word_ngram(line=list(), hashes=hashes, n=wordNgrams, bucket=bucket_size, nwords=len(vocabulary))
word_ngrams_idxs
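As a quick self-contained sanity check (the hash values, bucket size, and vocabulary size below are toy numbers, not taken from the real model), the one-pass routine should agree with combining each bigram's hashes pairwise:

```python
import numpy as np

def add_word_ngram(line, hashes, n, bucket, nwords):
    # One-pass variant mirroring the C++ source: extend each n-gram hash incrementally
    for i in range(len(hashes)):
        h = np.int32(hashes[i])
        j = i + 1
        while j < len(hashes) and j < i + n:
            h = np.uint64(h * 116049371 + np.int32(hashes[j]))
            line.append(nwords + (int(h) % bucket))
            j += 1
    return line

# Toy inputs (hypothetical, for illustration only)
toy_hashes = [7, 11, 13]
bucket, nwords = 2_000_000, 5

# Pairwise bigram computation, as in the step-by-step version above
expected = [nwords + int(np.uint64(np.int32(a) * 116049371 + np.int32(b))) % bucket
            for a, b in zip(toy_hashes, toy_hashes[1:])]

result = add_word_ngram([], toy_hashes, n=2, bucket=bucket, nwords=nwords)
assert result == expected and len(result) == 2
```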
# + [markdown] id="6d019149"
# # 5. Replicating FastText vector functions
#
# ## 5.1 model.get_word_vector(string)
#
# When minn and maxn are both zero (no character n-grams), the embedding at the full word's index is the final word vector. When character n-grams are used, the final word vector is the mean of the full-word vector and its constituent subword vectors. Note that we do not reuse model.get_word_vector() when computing sentence vectors: averaging all features at once (subwords, words, wordNgrams) can produce a different result than averaging at the word level first and then averaging those word vectors over the sentence.
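A toy illustration (made-up 2-dimensional vectors, not real embeddings) of why the two averaging orders can differ when words contribute different numbers of features:

```python
import numpy as np

# Hypothetical feature vectors: word A yields 1 feature, word B yields 3 (e.g. subwords)
word_a = [np.array([1.0, 1.0])]
word_b = [np.array([0.0, 0.0]), np.array([0.0, 0.0]), np.array([4.0, 4.0])]

# Average every feature at once (what the sentence-vector code does)
flat_mean = np.mean(word_a + word_b, axis=0)

# Average within each word first, then average the word vectors
per_word_mean = np.mean([np.mean(word_a, axis=0), np.mean(word_b, axis=0)], axis=0)

assert not np.allclose(flat_mean, per_word_mean)  # 1.25 vs ~1.17 per dimension
```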
# + id="6c4a6207"
def get_word_vector(word: str) -> np.ndarray:
subwords = get_subwords(word, minn, maxn)
subword_idxs = [get_subword_index(subword, bucket_size, len(vocabulary)) for subword in subwords]
embeddings = [input_matrix[idx] for idx in subword_idxs]
if word in vocabulary:
word_idx = vocabulary.index(word)
embeddings += [input_matrix[word_idx]]
return np.mean(embeddings, axis=0)
# + id="5c12c816" outputId="42bec295-4e90-4d44-8e5d-9e9571d22506" colab={"base_uri": "https://localhost:8080/"}
get_word_vector("meatballs")
# + id="a0c6a3d0" outputId="0acdbf1c-4df7-406c-92a6-106c805ffcbf" colab={"base_uri": "https://localhost:8080/"}
model.get_word_vector("meatballs")
# + id="4bd74d1c" outputId="5014ed85-ce4f-4fb5-915e-7c060a35d610" colab={"base_uri": "https://localhost:8080/"}
np.isclose(
get_word_vector("meatballs"),
model.get_word_vector("meatballs")
).all()
# + [markdown] id="333da734"
# ## 5.2 model.get_sentence_vector(string)
#
# The sentence vector is the mean of all feature embeddings: every individual full-word embedding, every subword embedding, and every wordNgram embedding. The End Of Sentence token is included as if it were an ordinary word.
# + id="FSjPU0SEb3zF"
def get_word_features(words: List[str], input_matrix: np.ndarray, vocabulary: List[str]):
"""
Returns a list of all individual word features, 1 per word if word is in vocabulary
"""
features = [input_matrix[vocabulary.index(word)] for word in words if word in vocabulary]
return features
def get_subword_features(words: List[str], input_matrix: np.ndarray, vocabulary: List[str], minn: int, maxn: int, bucket_size: int) -> List[np.ndarray]:
"""
Returns a list of all subword features, all the variable number of subword features
from each word are flattened together in a single list
"""
def get_subwords(word: str, minn: int, maxn: int) -> List[str]:
word = '<' + word + '>'
subwords = set(
[
word[i:i+size]
for size in range(minn, maxn+1)
for i in range(len(word)-size+1)
]
)
return list(subwords)
def get_hash(subword: str) -> int:
h = 2166136261
for c in subword:
c = ord(c) % 2**8
h = (h ^ c) % 2**32
h = (h * 16777619) % 2**32
return h
def get_subword_index(subword: str, bucket: int, nb_words: int):
return (get_hash(subword) % bucket) + nb_words
features = list()
for word in words:
if word == "</s>":
continue
subwords = get_subwords(word, minn, maxn)
subword_idxs = [get_subword_index(subword, bucket_size, len(vocabulary)) for subword in subwords]
subword_embeddings = [input_matrix[idx] for idx in subword_idxs]
features += subword_embeddings
return features
def get_wordNgram_features(words: List[str], input_matrix: np.ndarray, nwords: int, max_ngram_length: int, bucket: int) -> List[np.ndarray]:
"""
Returns a list of all wordNgram features according to size n
"""
def get_hash(subword: str) -> int:
h = 2166136261
for c in subword:
c = ord(c) % 2**8
h = (h ^ c) % 2**32
h = (h * 16777619) % 2**32
return h
def get_wordngram_index(hashes: list, bucket: int, nwords: int) -> int:
h = hashes[0]
for word_hash in hashes[1:]:
h = np.uint64(h * 116049371 + word_hash)
return (nwords + int(h) % bucket)
def get_wordngrams(words: List[str], max_ngram_length: int) -> List[np.ndarray]:
hashes = [np.int32(get_hash(word)) for word in words]
wordngrams = [
hashes[i:i+size]
for size in range(2, max_ngram_length+1)
for i in range(len(hashes)-size+1)
]
return wordngrams
ngrams = get_wordngrams(words, max_ngram_length)
word_ngrams_idxs = [get_wordngram_index(ngram, bucket, nwords) for ngram in ngrams]
features = [input_matrix[word_ngram_id] for word_ngram_id in word_ngrams_idxs]
return features
# + id="E7-WsYfYbzTt"
def get_sentence_vector(sentence: str, minn: int, maxn: int, wordNgrams: int, bucket_size: int, vocabulary: List[str], input_matrix: np.ndarray) -> np.ndarray:
"""
Equivalent to model.get_sentence_vector()
"""
sentence += " </s>"
words = sentence.split()
feature_embeddings = []
# Get word features
feature_embeddings += get_word_features(words, input_matrix, vocabulary)
# Get subword features
if minn > 0 and maxn > 0:
feature_embeddings += get_subword_features(words, input_matrix, vocabulary, minn, maxn, bucket_size)
# Get wordNgram features
if wordNgrams > 1:
feature_embeddings += get_wordNgram_features(words, input_matrix, len(vocabulary),
wordNgrams, bucket_size)
# Compute mean of all features
sentence_vec = np.mean(feature_embeddings, axis=0)
return sentence_vec
# + id="293f555f" outputId="0895869c-3c18-4150-fae0-fef82662dce0" colab={"base_uri": "https://localhost:8080/"}
# Compare against model generated sentence vector
sentence = "How to make a pepperoni pizza?"
np.isclose(
get_sentence_vector(sentence, minn, maxn, wordNgrams, bucket_size, vocabulary, input_matrix),
model.get_sentence_vector(sentence)
).all()
# + colab={"base_uri": "https://localhost:8080/"} id="cVOncL3UeiWb" outputId="519879ed-ed6f-4453-83a8-b0434643c0b5"
get_sentence_vector(sentence, minn, maxn, wordNgrams, bucket_size, vocabulary, input_matrix)
# + colab={"base_uri": "https://localhost:8080/"} id="DrOUTW6KeksM" outputId="b9827170-c55f-4204-b180-e0a469bde830"
model.get_sentence_vector(sentence)
# + [markdown] id="85beeac5"
# # 6. Compute predictions using output_matrix
#
# **\*Assumes model uses a softmax loss**
#
# The supervised model FastText uses is a single-layer neural network. Once we have a sentence vector, we obtain the class scores by multiplying the output_matrix (n_classes, dim) with the sentence vector (dim,), yielding an output vector of shape (n_classes,) with one score per class. These scores are passed through a softmax to convert them into probabilities, and the highest one is selected as the prediction.
#
# In this notebook we are assuming that we are using the default softmax loss when training the supervised model. To implement a different loss function we need to define it as a function and apply it to the output vector instead of **softmax** before selecting the prediction.
#
#
# + colab={"base_uri": "https://localhost:8080/"} id="E0SKjv7YBp6t" outputId="c364eebb-b0d0-4bd4-e558-55572ff71084"
output_matrix.shape
# + id="d5f68e3e"
def softmax(x):
f_x = np.exp(x) / np.sum(np.exp(x))
return f_x
# + id="48xIhYyML9ai"
def exp_normalize(x):
b = x.max()
y = np.exp(x - b)
return y / y.sum()
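The second definition is the numerically stable form of the first: subtracting the maximum before exponentiating leaves the result mathematically unchanged but avoids overflow. A quick check with illustrative values:

```python
import numpy as np

def softmax(x):
    return np.exp(x) / np.sum(np.exp(x))

def exp_normalize(x):
    b = x.max()
    y = np.exp(x - b)  # exponents are <= 0, so np.exp cannot overflow
    return y / y.sum()

x_small = np.array([1.0, 2.0, 3.0])
assert np.allclose(softmax(x_small), exp_normalize(x_small))

# For large scores the naive version overflows (inf/inf -> nan); the shifted one stays finite
x_big = np.array([1000.0, 1001.0])
assert np.isfinite(exp_normalize(x_big)).all()
assert np.isclose(exp_normalize(x_big).sum(), 1.0)
```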
# + id="521c1b38"
def get_prediction(sentence: str, minn: int, maxn: int, wordNgrams: int, bucket_size: int, vocabulary: List[str], input_matrix: np.ndarray):
sentence_vec = get_sentence_vector(sentence, minn, maxn, wordNgrams, bucket_size, vocabulary, input_matrix)
pred_idx = np.argmax(softmax(np.matmul(output_matrix, sentence_vec)))
pred_proba = np.max(softmax(np.matmul(output_matrix, sentence_vec)))
pred_label = labels[pred_idx]
return pred_label, pred_proba
# + id="8f51d3bd" outputId="7132ea7c-17fa-4625-eb11-1c921b7ea5ac" colab={"base_uri": "https://localhost:8080/"}
get_prediction("Italian food recipes", minn, maxn, wordNgrams, bucket_size, vocabulary, input_matrix)
# + id="aab26456" outputId="bb0c07a5-2f65-4693-c561-98d3193e6683" colab={"base_uri": "https://localhost:8080/"}
model.predict("Italian food recipes")
# + id="2f163cbe" colab={"base_uri": "https://localhost:8080/"} outputId="53257ad6-f6f0-4c55-be10-7a0a442d2982"
np.isclose(get_sentence_vector(sentence, minn, maxn, wordNgrams, bucket_size, vocabulary, input_matrix), model.get_sentence_vector(sentence)).all()
# + id="G7Al2ZxHvqxm"
|
fasttext_inference_reverse_engineering.ipynb
|
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# +
# import common packages
import itertools
import logging
import numpy as np
from qiskit import Aer
from qiskit_aqua import Operator, set_aqua_logging, QuantumInstance
from qiskit_aqua.algorithms.adaptive import VQE
from qiskit_aqua.algorithms.classical import ExactEigensolver
from qiskit_aqua.components.optimizers import COBYLA
from qiskit_chemistry.drivers import PySCFDriver, UnitsType
from qiskit_chemistry.core import Hamiltonian, TransformationType, QubitMappingType
from qiskit_chemistry.aqua_extensions.components.variational_forms import UCCSD
from qiskit_chemistry.aqua_extensions.components.initial_states import HartreeFock
# set_aqua_logging(logging.INFO)
# -
# using driver to get fermionic Hamiltonian
driver = PySCFDriver(atom='Li .0 .0 .0; H .0 .0 1.6', unit=UnitsType.ANGSTROM,
charge=0, spin=0, basis='sto3g')
molecule = driver.run()
# +
core = Hamiltonian(transformation=TransformationType.FULL, qubit_mapping=QubitMappingType.PARITY,
two_qubit_reduction=True, freeze_core=True)
algo_input = core.run(molecule)
qubit_op = algo_input.qubit_op
print("Originally requires {} qubits".format(qubit_op.num_qubits))
print(qubit_op)
# -
# Find the symmetries of the qubit operator
[symmetries, sq_paulis, cliffords, sq_list] = qubit_op.find_Z2_symmetries()
print('Z2 symmetries found:')
for symm in symmetries:
print(symm.to_label())
print('single qubit operators found:')
for sq in sq_paulis:
print(sq.to_label())
print('cliffords found:')
for clifford in cliffords:
print(clifford.print_operators())
print('single-qubit list: {}'.format(sq_list))
# Use the found symmetries, single qubit operators, and cliffords to taper qubits from the original qubit operator. For each Z2 symmetry one can taper one qubit. However, different tapered operators can be built, corresponding to different symmetry sectors.
tapered_ops = []
for coeff in itertools.product([1, -1], repeat=len(sq_list)):
tapered_op = Operator.qubit_tapering(qubit_op, cliffords, sq_list, list(coeff))
tapered_ops.append((list(coeff), tapered_op))
print("Number of qubits of tapered qubit operator: {}".format(tapered_op.num_qubits))
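The `itertools.product` call above enumerates every sign assignment of the symmetries, so k independent Z2 symmetries give 2**k candidate symmetry sectors. For example, with a hypothetical k = 2:

```python
import itertools

# Each sector is a tuple of +/-1 coefficients, one per Z2 symmetry
sectors = list(itertools.product([1, -1], repeat=2))
assert len(sectors) == 2 ** 2
assert sectors == [(1, 1), (1, -1), (-1, 1), (-1, -1)]
```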
# The user has to specify the symmetry sector they are interested in. Since we are interested in finding the ground state here, let us get the original ground state energy as a reference.
ee = ExactEigensolver(qubit_op, k=1)
result = core.process_algorithm_result(ee.run())
for line in result[0]:
print(line)
# Now, let us iterate through all tapered qubit operators to find out the one whose ground state energy matches the original (un-tapered) one.
# +
smallest_eig_value = 99999999999999
smallest_idx = -1
for idx in range(len(tapered_ops)):
ee = ExactEigensolver(tapered_ops[idx][1], k=1)
curr_value = ee.run()['energy']
if curr_value < smallest_eig_value:
smallest_eig_value = curr_value
smallest_idx = idx
print("Lowest eigenvalue of the {}-th tapered operator (computed part) is {:.12f}".format(idx, curr_value))
the_tapered_op = tapered_ops[smallest_idx][1]
the_coeff = tapered_ops[smallest_idx][0]
print("The {}-th tapered operator matches original ground state energy, with corresponding symmetry sector of {}".format(smallest_idx, the_coeff))
# -
# Alternatively, one can run multiple VQE instances to find the lowest eigenvalue sector.
# Here we just validate that `the_tapered_op` reaches the smallest eigenvalue in one VQE execution with the UCCSD variational form, modified to take the tapered symmetries into account.
# +
# setup initial state
init_state = HartreeFock(num_qubits=the_tapered_op.num_qubits, num_orbitals=core._molecule_info['num_orbitals'],
qubit_mapping=core._qubit_mapping, two_qubit_reduction=core._two_qubit_reduction,
num_particles=core._molecule_info['num_particles'], sq_list=sq_list)
# setup variational form
var_form = UCCSD(num_qubits=the_tapered_op.num_qubits, depth=1,
num_orbitals=core._molecule_info['num_orbitals'],
num_particles=core._molecule_info['num_particles'],
active_occupied=None, active_unoccupied=None, initial_state=init_state,
qubit_mapping=core._qubit_mapping, two_qubit_reduction=core._two_qubit_reduction,
num_time_slices=1,
cliffords=cliffords, sq_list=sq_list, tapering_values=the_coeff, symmetries=symmetries)
# setup optimizer
optimizer = COBYLA(maxiter=1000)
# set vqe
algo = VQE(the_tapered_op, var_form, optimizer, 'matrix')
# setup backend
backend = Aer.get_backend('statevector_simulator')
quantum_instance = QuantumInstance(backend=backend)
# -
algo_result = algo.run(quantum_instance)
# +
result = core.process_algorithm_result(algo_result)
for line in result[0]:
print(line)
print("The parameters for UCCSD are:\n{}".format(algo_result['opt_params']))
# -
|
community/aqua/chemistry/LiH_with_qubit_tapering_and_uccsd.ipynb
|
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # Classifier comparison
# https://scikit-learn.org/stable/auto_examples/classification/plot_classifier_comparison.html
# To try to separate bots from real users, this part of the project experiments with several data mining techniques.
# +
from sqlalchemy import create_engine
import pandas as pd
import mysql.connector
# conda install pymysql
import time
import numpy as np
import seaborn as sn
import pathlib
import os
import matplotlib.pyplot as plt
from scipy.stats.stats import pearsonr
from numpy import cov
import plotly.graph_objects as go
from sklearn.linear_model import LinearRegression
import time
import plotly.offline as pyo
from matplotlib import pyplot as plt
from sklearn import datasets
from sklearn.tree import DecisionTreeClassifier
from sklearn import tree
from sklearn.tree import DecisionTreeClassifier # Import Decision Tree Classifier
from sklearn.model_selection import train_test_split # Import train_test_split function
from sklearn import metrics #Import scikit-learn metrics module for accuracy calculation
# +
query_verbose: bool = False
mydb = mysql.connector.connect(host="localhost", user="root", password="<PASSWORD>", database="sql1238724_5")
db_connection_str = 'mysql+pymysql://root:admin@127.0.0.1/sql1238724_5'
# Query the DB. The result is returned as a dataframe
def query_db(sql_query: str):
db_connection = create_engine(db_connection_str)
data = pd.read_sql(sql_query, con=db_connection)
db_connection.dispose()
return data.copy(deep=True)
def save_dataset(dataset, table_name):
# Create SQLAlchemy engine to connect to MySQL Database
engine = create_engine(db_connection_str)
# Convert dataframe to sql table
dataset.to_sql(table_name, engine, index=False)
def update_db(sql_query: str) -> int:
mycursor = mydb.cursor()
if query_verbose:
print(sql_query)
mycursor.execute(sql_query)
mydb.commit()
mycursor.close()
mydb.close()
return mycursor.rowcount
# -
# First, the dataset is loaded (see the previous notebook).
sql = 'SELECT * FROM ese_analytics_classifier_comparison'
data_result = query_db(sql)
del data_result['time_in_page']
print(data_result)
# max_page_visit
# ### Computation of descriptive statistics for the dependent and the independent variables
# Decision Tree Classifier Building
# <br>
# https://mljar.com/blog/visualize-decision-tree/<br>
# https://www.datacamp.com/community/tutorials/decision-tree-classification-python<br>
#
#
# The decision tree classifier is a supervised algorithm, but we use it anyway even though this project has no ground truth in the data.<br>
# In fact, if we "set the ground truth" to the authenticated-user column, we can discover which features are the most significant.
data_result.columns
data_result["user_device"] = data_result["user_device"].astype(str).astype(int)
data_result["is_cookie_accept"] = data_result["is_cookie_accept"].astype(str).astype(int)
# +
# The is_cookie_accept feature must not be taken into account
# feature_cols = ['count_page', 'max_page_visit', 'average_time_between_page','count_days', 'user_device', 'is_cookie_accept']
feature_cols = ['count_page', 'max_page_visit', 'average_time_between_page','count_days', 'user_device']
X = data_result[feature_cols] # Features
y = data_result.is_user_signup # Target variable
# Split dataset into training set and test set
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3, random_state=1) # 70% training and 30% test
# 2 class - just bug and not bug
y_train[y_train > 0 ] = 1
y_test[y_test > 0 ] = 1
# Create Decision Tree classifer object
clf = DecisionTreeClassifier(criterion="entropy", max_depth=3)
# Train Decision Tree Classifer
clf = clf.fit(X_train,y_train)
#Predict the response for test dataset
y_pred = clf.predict(X_test)
# Model Accuracy, how often is the classifier correct?
print("Accuracy:",metrics.accuracy_score(y_test, y_pred))
fig = plt.figure(figsize=(50,50))
_ = tree.plot_tree(clf,feature_names=feature_cols, class_names=["BOT","HUMAN"],filled=True,rounded=True,)
# -
# As we can see from the plot:
# - the registered user allows JavaScript and cookies, so we can detect the screen size
# - they have been on the site for fewer than 20 days
# - they have visited fewer than 250 pages
# ### Naive Bayes
#
# https://www.edureka.co/blog/naive-bayes-tutorial/ <br>
# https://www.aionlinecourse.com/tutorial/machine-learning/bayes-theorem
#
#
# Another supervised algorithm is Naive Bayes. It can help us better understand whether there are other relationships between authenticated and non-authenticated users.
# +
from sklearn.naive_bayes import GaussianNB
from sklearn.metrics import plot_confusion_matrix
from sklearn.preprocessing import StandardScaler
# feature_cols = ['count_page', 'max_page_visit', 'average_time_between_page','count_days', 'user_device', 'is_cookie_accept']
data_naive_bayes = data_result.copy(deep=True)
# the first 2 features are the x and y of our plot
# the third is the ground truth
features_list = ['average_time_between_page','max_page_visit','is_user_signup']
drop_list = []
for current_features in data_naive_bayes.columns:
if not current_features in features_list:
drop_list.append(current_features)
data_naive_bayes = data_naive_bayes.drop(drop_list, axis=1)
data_naive_bayes.shape
sn.relplot(x=features_list[0], y=features_list[1], hue='is_user_signup',data=data_naive_bayes)
# +
# Making the Feature matris and dependent vector
X = data_naive_bayes.iloc[:, [0, 1]].values
y = data_naive_bayes.iloc[:, 2].values
y[y > 0 ] = 1
# Split dataset into training set and test set
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3, random_state=1) # 70% training and 30% test
sc = StandardScaler()
X_train = sc.fit_transform(X_train)
X_test = sc.transform(X_test)
model = GaussianNB()
model.fit(X_train, y_train)
expected = y_test
predicted = model.predict(X_test)
# Visualising the Training set results
from matplotlib.colors import ListedColormap
X_set, y_set = X_train, y_train
# numpy.meshgrid -> Return coordinate matrices from coordinate vectors.
X1, X2 = np.meshgrid(np.arange(start = X_set[:, 0].min() - 1, stop = X_set[:, 0].max() + 1, step = 0.01), np.arange(start = X_set[:, 1].min() - 1, stop = X_set[:, 1].max() + 1, step = 0.01))
# contour and contourf draw contour lines and filled contours, respectively.
# Except as noted, function signatures and return values are the same for both versions.
plt.contourf(X1, X2, model.predict(np.array([X1.ravel(), X2.ravel()]).T).reshape(X1.shape), alpha = 0.75, cmap = ListedColormap(('green', 'white')))
plt.xlim(X1.min(), X1.max())
plt.ylim(X2.min(), X2.max())
colors = ['red','blue']
labels = ['Bot','Human' ]
print(X1.ravel())
for i, j in enumerate(np.unique(y_set)):
plt.scatter(X_set[y_set == j, 0], X_set[y_set == j, 1], c = colors[i], label = labels[j])
plt.title('Naive Bayes (Training set)')
plt.xlabel(features_list[0])
plt.ylabel(features_list[1])
plt.legend()
plt.show()
print(metrics.classification_report(expected, predicted))
disp = plot_confusion_matrix(model, X_test, y_test)
# -
# Even when changing the features, the results are similar:
# +
# # features_list = ['max_page_visit','user_device','is_user_signup']
# precision recall f1-score support
#
# 0 0.98 0.81 0.88 24540
# 1 0.51 0.91 0.65 5423
#
# accuracy 0.83 29963
# macro avg 0.74 0.86 0.77 29963
# weighted avg 0.89 0.83 0.84 29963
# features_list = ['average_time_between_page','count_days','is_user_signup']
# precision recall f1-score support
#
# 0 1.00 0.67 0.80 24540
# 1 0.40 0.99 0.57 5423
#
# accuracy 0.73 29963
# macro avg 0.70 0.83 0.69 29963
# weighted avg 0.89 0.73 0.76 29963
# -
# # Creating Cluster using Kmeans Algorithm.
# https://blog.floydhub.com/introduction-to-k-means-clustering-in-python-with-scikit-learn/
# KMeans is an unsupervised algorithm. In this part of the project we try to find out whether the data can be split so as to create 2 groups:
#
# - one that resembles logged-in users
# - one labeled as visits made by a bot
# +
## use KMeans to make 2 cluster groups
from sklearn.cluster import KMeans
# all features = ['average_time_between_page','count_page','max_page_visit','count_days','is_user_signup']
# Load the dataset
data_k_means = data_result.copy(deep=True)
# Swap the column order (for the plot)
# https://stackoverflow.com/questions/53141240/pandas-how-to-swap-or-reorder-columns
cols = list(data_k_means.columns)
a, b = cols.index('max_page_visit'), cols.index('average_time_between_page')
cols[b], cols[a] = cols[a], cols[b]
data_k_means = data_k_means[cols]
# Copy the data into this other dataset to show is_user_signup later
data_k_means_all = data_k_means.copy(deep=True)
# Keep only 2 features; drop the others
del data_k_means['ip']
#del data_k_means['count_page']
#del data_k_means['count_days']
#del data_k_means['is_user_signup']
#del data_k_means['is_cookie_accept']
#del data_k_means['user_device']
# Save for later
data_k_means_3 = data_k_means.copy(deep=True)
# Initialize KMeans with 2 clusters
kmeans = KMeans(n_clusters=2)
y_pred = kmeans.fit_predict(data_k_means)
# Save the result in the "cluster" column
data_k_means['cluster'] = y_pred
print(data_k_means)
# +
# Get the cluster centroids
# print(kmeans.cluster_centers_)
# -
## For plotting the graph of Cluster
dataset_bot = data_k_means[data_k_means.cluster == 0]
# print(dataset_bot)
dataset_human = data_k_means[data_k_means.cluster == 1]
# print(dataset_human)
# +
plt.scatter(dataset_bot.average_time_between_page,
dataset_bot.max_page_visit,
color='green')
plt.scatter(dataset_human.average_time_between_page,
dataset_human.max_page_visit,
color='pink')
dataset_is_user_signup = data_k_means_all[data_k_means_all.is_user_signup == 1]
plt.scatter(dataset_is_user_signup.average_time_between_page,dataset_is_user_signup.max_page_visit,color='yellow')
plt.scatter(kmeans.cluster_centers_[:,0],kmeans.cluster_centers_[:,1],color='red', marker='x')
plt.title("Kmeans Algorithm")
plt.xlabel("average_time_between_page")
plt.ylabel("max_page_visit")
print("Green = BOT")
print("Pink = HUMAN")
print("Yellow = is_user_signup")
print("Red = centroid")
# -
# To visualize how the algorithm has split the data, we plot a graph.
#
# In the yellow part we see the users who are actually logged in.
# Green and pink are the 2 groups created by the algorithm.
# KMeans managed (roughly) to separate the 2 groups.
# <br><br>
# The next experiment is to increase the number of clusters, to see whether a more homogeneous group of logged-in users can be created.
# +
# Initialize KMeans with 3 clusters
kmeans = KMeans(n_clusters=3)
y_pred = kmeans.fit_predict(data_k_means_3)
# Save the result in the "cluster" column
data_k_means['cluster'] = y_pred
print(data_k_means)
# +
dataset_group_1 = data_k_means[data_k_means.cluster == 0]
dataset_group_2 = data_k_means[data_k_means.cluster == 1]
dataset_group_3 = data_k_means[data_k_means.cluster == 2]
plt.scatter(dataset_group_1.average_time_between_page,
dataset_group_1.max_page_visit,
color='green')
plt.scatter(dataset_group_2.average_time_between_page,
dataset_group_2.max_page_visit,
color='orange')
plt.scatter(dataset_group_3.average_time_between_page,
dataset_group_3.max_page_visit,
color='blue')
dataset_is_user_signup = data_k_means_all[data_k_means_all.is_user_signup == 1]
plt.scatter(dataset_is_user_signup.average_time_between_page,dataset_is_user_signup.max_page_visit,color='yellow')
plt.scatter(kmeans.cluster_centers_[:,0],kmeans.cluster_centers_[:,1],color='red', marker='x')
plt.title("Kmeans Algorithm")
plt.xlabel("average_time_between_page")
plt.ylabel("max_page_visit")
print("Green = dataset_group_1")
print("Orange = dataset_group_2")
print("Blue = dataset_group_3")
print("Yellow = is_user_signup")
print("Red = centroid")
# -
# Increasing the number of clusters created a new centroid that groups the sessions with a high average time on the site.
# These visits were most likely made by a bot.
# <br>
# We also tried increasing k to 4, but the results are not satisfactory.
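One way to choose k less by trial and error is the elbow method on the KMeans inertia. A sketch on synthetic stand-in data (the real `data_k_means` table is not reproduced here, so two artificial blobs play the role of the session features):

```python
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
# Two synthetic blobs standing in for (average_time_between_page, max_page_visit)
X = np.vstack([rng.normal(0.0, 1.0, (50, 2)),
               rng.normal(6.0, 1.0, (50, 2))])

inertias = {k: KMeans(n_clusters=k, n_init=10, random_state=0).fit(X).inertia_
            for k in (1, 2, 3, 4)}

# Going from 1 to 2 clusters gives a large drop; past the "elbow" the gains shrink
assert inertias[2] < inertias[1]
assert inertias[1] - inertias[2] > inertias[2] - inertias[3]
```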
|
BotDetectionClustering.ipynb
|
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: tf
# language: python
# name: tf
# ---
import plotnine as p9
import matplotlib.pyplot as plt
import numpy as np
import pandas as pd
x = np.array([1,2,2,3,4,4,4,4,5,6,7,8])
n = x.shape[0]
q = np.array([np.sum(x <= x_i) for x_i in x])
q = q / n
q
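The `q` computed above is the empirical CDF evaluated at each sample. A quick property check (same toy `x` as above, recomputed so the snippet is self-contained):

```python
import numpy as np

x = np.array([1, 2, 2, 3, 4, 4, 4, 4, 5, 6, 7, 8])
n = x.shape[0]
q = np.array([np.sum(x <= x_i) for x_i in x]) / n

# F-hat is non-decreasing in x, starts at 1/n at the minimum, and reaches 1 at the maximum
order = np.argsort(x)
assert (np.diff(q[order]) >= 0).all()
assert q.min() == 1 / n
assert q.max() == 1.0
```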
def plot_qt(x, q):
df = pd.DataFrame({"x_val": x, "q": q})
p = p9.ggplot(df) + p9.ylab(r"$\hat F(x)$")
df_quiver = pd.DataFrame({"x_val": x.astype("float"), "y_begin": 0, "y_end": q})
x_unique, x_count = np.unique(x, return_counts=True)
for x_i, count_i in zip(x_unique[x_count > 1], x_count[x_count > 1]):
df_quiver.loc[df_quiver["x_val"] == x_i, "x_val"] += np.linspace(-0.05 * count_i, 0.05 * count_i, count_i)
df_quiver_long = pd.wide_to_long(df_quiver, ["y"], i="x_val", j="Arrow_part", sep='_', suffix="(begin|end)").reset_index()
p += p9.geom_line(data=df_quiver_long, mapping=p9.aes(x="x_val", y="y", group="x_val"),
color="grey", linetype="-.",
arrow=p9.arrow(angle=15, type="closed"))
p += p9.geom_step(mapping=p9.aes(x = "x_val", y = "q"))
p.draw()
plot_qt(x, q)
df_quiver.loc[df_quiver["x_val"] == 2.057763]
df_quiver = pd.DataFrame({"x_val": x + np.random.uniform(-0.1, 0.1, size=x.shape[0]), "y_begin": 0, "y_end": q})
df_quiver_long = pd.wide_to_long(df_quiver, ["y"], i="x_val", j="Arrow_part", sep='_', suffix="(begin|end)").reset_index()
df_quiver_long
|
scripts/plots_for_master_presentation.ipynb
|
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# + [markdown] slideshow={"slide_type": "slide"}
# ##### CSCI 303
# # Introduction to Data Science
# <p/>
# ### 16 - Support Vector Machines
#
# 
# + [markdown] slideshow={"slide_type": "slide"}
# ## This Lecture
# ---
# - Classification via Support Vector Machine
# + [markdown] slideshow={"slide_type": "slide"}
# ## Setup
# ---
# +
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
import sklearn as sk
from scipy.stats import norm
from pandas import Series, DataFrame
from matplotlib.colors import ListedColormap
plt.style.use('ggplot')
# %matplotlib inline
# %config InlineBackend.figure_format = 'retina'
# + [markdown] slideshow={"slide_type": "slide"}
# ## Example Problem
# ---
# This synthetic problem creates two clusters of points which are normally distributed in two dimensions. We're going to start with data which is linearly separable in order to explain SVM better.
# +
# ensure repeatability of this notebook
# (comment out for new results each run)
np.random.seed(12345)
# Get some normally distributed samples
def sample_cluster(n, x, y, sigma):
x = np.random.randn(n) * sigma + x;
y = np.random.randn(n) * sigma + y;
return np.array([x, y]).T
c1 = sample_cluster(25, 1, 0, 0.3)
c2 = sample_cluster(25, 0, 1, 0.2)
d1 = DataFrame(c1, columns=['x','y'])
d2 = DataFrame(c2, columns=['x','y'])
d1['class'] = 'a'
d2['class'] = 'b'
data = pd.concat([d1, d2])  # DataFrame.append was removed in pandas 2.0
data.index = pd.RangeIndex(50)
# + slideshow={"slide_type": "subslide"}
plt.plot(c1[:,0], c1[:,1], 'bs', label='a')
plt.plot(c2[:,0], c2[:,1], 'r^', label='b')
plt.title('The Data')
plt.show()
# + [markdown] slideshow={"slide_type": "subslide"}
# Note that this data is well separated, and there are many possible linear separators.
#
# Which one is best?
# -
plt.plot(c1[:,0], c1[:,1], 'bs', label='a')
plt.plot(c2[:,0], c2[:,1], 'r^', label='b')
plt.plot([-0.5, 2.0], [0, 1.5], 'k:')
plt.plot([-0.5, 2.0], [0.5, 1.0], 'g:')
plt.plot([-0.5, 2.0], [-0.35, 1.65], 'm:')
plt.title('Linear Separators')
plt.show()
# + [markdown] slideshow={"slide_type": "slide"}
# ## Maximum Margin Classifier
# ---
# One answer to the question is: where can we draw the line such that the nearest exemplar(s) in each class are equidistant from it, and that distance is as large as possible?
#
# It turns out this produces a neat little convex quadratic program, which we can feed to any QP solver.
#
# Downside: it can be expensive with lots of data!
#
# (Math omitted - beyond the scope of this course)
# + [markdown] slideshow={"slide_type": "subslide"}
# To show what the maximum margin classifier looks like on our data, we're going to create a linear SVM classifier, and fit it using all of the data.
# -
from sklearn import svm
model = svm.SVC(kernel='linear', C=10)
print(data)
model.fit(data[['x','y']], data['class'])
# + [markdown] slideshow={"slide_type": "subslide"}
# The `plot_predicted` function below is what we've been using to visualize our data points (correctly and incorrectly classified), together with lines that show us the decision boundary and the support vectors.
# -
def plot_predicted(model, data):
predicted = model.predict(data[['x','y']])
correct = data[data['class'] == predicted]
correcta = correct[correct['class'] == 'a']
correctb = correct[correct['class'] == 'b']
incorrect = data[data['class'] != predicted]
incorrecta = incorrect[incorrect['class'] == 'b']
incorrectb = incorrect[incorrect['class'] == 'a']
plt.plot(correcta['x'], correcta['y'], 'bs', label='a')
plt.plot(correctb['x'], correctb['y'], 'r^', label='b')
plt.plot(incorrecta['x'], incorrecta['y'], 'bs', markerfacecolor='w', label='a (misclassified)')
plt.plot(incorrectb['x'], incorrectb['y'], 'r^', markerfacecolor='w', label='b (misclassified)')
plt.legend(ncol=2)
# + [markdown] slideshow={"slide_type": "subslide"}
# The rather complicated `plot_linear_separator` function below extracts the relevant data from the model to plot the linear decision function and the parallel "maximum margin" that was found.
# -
def plot_linear_separator(model, data):
# This code modified from Scikit-learn documentation on SVM
plt.figure(figsize=(8,6))
# get the separating hyperplane as ax + y + c = 0
w = model.coef_[0]
a = w[0] / w[1]
c = (model.intercept_[0]) / w[1]
xx = np.linspace(data['x'].min(), data['x'].max())
yy = -a * xx - c
# find the support vectors that define the maximal separation
# there ought to be a better way...
spos = 0
sneg = 0
sposdist = 0
snegdist = 0
for s in model.support_vectors_:
# find the orthogonal point
ox = (s[0] - a * s[1] - a * c) / (a * a + 1)
oy = (a * (a * s[1] - s[0]) - c) / (a * a + 1)
# find the squared distance
d = (s[0] - ox)**2 + (s[1] - oy)**2
if s[1] > oy and d > sposdist:
spos = s
sposdist = d
if s[1] < oy and d > snegdist:
sneg = s
snegdist = d
# plot the parallels to the separating hyperplane that pass through the
# support vectors
yy_pos = -a * xx + (spos[1] + a * spos[0])
yy_neg = -a * xx + (sneg[1] + a * sneg[0])
# plot the separator and the maximum margin lines
plt.plot(xx, yy, 'k-', label='Boundary')
plt.plot(xx, yy_pos, 'k:')
plt.plot(xx, yy_neg, 'k:')
plt.plot(model.support_vectors_[:, 0], model.support_vectors_[:, 1], 'ko', markerfacecolor='#00000044', markersize=10, label='Support')
# plot the points
plot_predicted(model, data)
plt.legend(ncol=3)
# + [markdown] slideshow={"slide_type": "slide"}
# And finally, here's the plot, showing our data:
# -
plot_linear_separator(model, data)
plt.title('Maximum Margin Classifier')
plt.show()
# + [markdown] slideshow={"slide_type": "subslide"}
# The solid line in the plot above is the decision boundary.
#
# The dotted lines show the margin area between the classifier and the nearest points in the two clusters.
#
# Note that the dotted lines pass through points in the clusters; these points are called the *support vectors* of the classifier.
#
# + [markdown] slideshow={"slide_type": "slide"}
# ## Non-Separable Data
# ---
# So what happens when the data is not linearly separable?
#
# Our QP will break, because there is no feasible solution.
#
# So, the clever fix is to relax the QP to allow points to be *misclassified*; but to get the best classifier possible, a penalty is attached to each misclassified point.
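To make the penalty concrete, here is a small sketch (the helper name and toy data are mine, not part of the lecture code) of the soft-margin objective the relaxed QP minimizes: one half the squared norm of the weights, plus C times the total "slack" (hinge loss) paid by points that violate the margin.

```python
import numpy as np

def soft_margin_objective(w, b, X, y, C):
    """Soft-margin SVM objective: 0.5*||w||^2 + C * sum of hinge-loss slacks."""
    margins = y * (X @ w + b)            # signed margin of each point
    slacks = np.maximum(0, 1 - margins)  # zero for points safely outside the margin
    return 0.5 * np.dot(w, w) + C * np.sum(slacks)

# toy data with labels encoded as +1/-1; the last point violates the margin
X = np.array([[0.0, 1.0], [1.0, 2.0], [2.0, -1.0], [1.5, 0.2]])
y = np.array([1.0, 1.0, -1.0, -1.0])
w = np.array([0.0, 1.0])  # candidate separator: classify by the sign of the y-coordinate

print(soft_margin_objective(w, 0.0, X, y, C=10))  # 0.5 + 10 * 1.2 = 12.5
```

A larger C makes the slack term dominate, which is exactly the behavior explored below.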
# + slideshow={"slide_type": "subslide"}
c1 = sample_cluster(25, 1, 0, 0.5)
c2 = sample_cluster(25, 0, 1, 0.4)
d1 = DataFrame(c1, columns=['x','y'])
d2 = DataFrame(c2, columns=['x','y'])
d1['class'] = 'a'
d2['class'] = 'b'
data = d1.append(d2)
data.index = pd.RangeIndex(50)
plt.figure(figsize=(8,6))
plt.plot(c1[:,0], c1[:,1], 'bs', label='a')
plt.plot(c2[:,0], c2[:,1], 'r^', label='b')
plt.title('The (Non-Separable) Data')
plt.show()
# + [markdown] slideshow={"slide_type": "subslide"}
# Let's see what the support vector classifier does with this data:
# + slideshow={"slide_type": "-"}
model = svm.SVC(kernel='linear', C=10)
model.fit(data[['x','y']], data['class'])
plot_linear_separator(model, data)
# + [markdown] slideshow={"slide_type": "slide"}
# ## Effect of C parameter
# ---
# The strength of the penalty term for the QP is controlled by a new parameter: C.
#
# A larger C means a stronger penalty, i.e., gives the QP incentive to reduce misclassifications.
#
# Above we used C = 10.
#
# Let's see the effects of different choices for C:
# + [markdown] slideshow={"slide_type": "subslide"}
# C = 1
# -
model = svm.SVC(kernel='linear', C=1)
model.fit(data[['x','y']], data['class'])
plot_linear_separator(model, data)
# + [markdown] slideshow={"slide_type": "subslide"}
# C = 0.1
# -
model = svm.SVC(kernel='linear', C=0.1)
model.fit(data[['x','y']], data['class'])
plot_linear_separator(model, data)
# + [markdown] slideshow={"slide_type": "subslide"}
# C = 100
# -
model = svm.SVC(kernel='linear', C=100)
model.fit(data[['x','y']], data['class'])
plot_linear_separator(model, data)
# + [markdown] slideshow={"slide_type": "slide"}
# ## Non-Linear Data
# ---
# It turns out that a quirk in the QP formulation for SVMs allows us to efficiently replace the linear separator model with a non-linear model.
#
# This quirk is known as the "kernel trick".
#
# It lets us use different *kernels* without significant added expense.
#
# The most popular kernels are linear, polynomial, and radial basis function (RBF). Radial basis functions are basically Gaussian surfaces centered on the data points.
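As a quick illustration of what the RBF kernel actually computes (this snippet is a sketch, not part of the lecture code): the kernel value for a pair of points is $K(x_i, x_j) = \exp(-\gamma \lVert x_i - x_j \rVert^2)$, which we can verify against scikit-learn's own implementation.

```python
import numpy as np
from sklearn.metrics.pairwise import rbf_kernel

rng = np.random.default_rng(0)
X = rng.normal(size=(5, 2))
gamma = 0.5

# manual computation: K[i, j] = exp(-gamma * ||x_i - x_j||^2)
sq_dists = ((X[:, None, :] - X[None, :, :]) ** 2).sum(axis=-1)
K_manual = np.exp(-gamma * sq_dists)

# should agree with scikit-learn's implementation
print(np.allclose(K_manual, rbf_kernel(X, X, gamma=gamma)))
```

The kernel trick means the QP only ever needs these pairwise kernel values, never explicit coordinates in the (possibly infinite-dimensional) feature space.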
# + [markdown] slideshow={"slide_type": "subslide"}
# Let's see how this works on our example problem from before:
# + slideshow={"slide_type": "-"}
def f(X):
return 3 + 0.5 * X - X**2 + 0.15 * X**3
# convenience function for generating samples
def sample(n, fn, limits, sigma):
width = limits[1] - limits[0]
height = limits[3] - limits[2]
x = np.random.random(n) * width + limits[0]
y = np.random.random(n) * height + limits[2]
s = y > fn(x)
p = norm.cdf(np.abs(y - fn(x)), scale = sigma)
r = np.random.random(n)
def assign(sign, prob, rnum):
if sign:
if rnum > prob:
return 'b'
else:
return 'a'
else:
if rnum > prob:
return 'a'
else:
return 'b'
c = [assign(s[i], p[i], r[i]) for i in range(n)]
return DataFrame({'x' : x, 'y' : y, 'class' : c})
# + slideshow={"slide_type": "subslide"}
data = sample(100, f, [-5, 5, -25, 25], 5)
plt.figure(figsize=(8,6))
dataa = data[data['class']=='a']
datab = data[data['class']=='b']
plt.plot(dataa['x'], dataa['y'],'bs', label='class a')
plt.plot(datab['x'], datab['y'],'r^', label='class b')
plt.legend()
plt.title('The Data')
plt.show()
# + [markdown] slideshow={"slide_type": "subslide"}
# The "out of the box" default kernel for the Scikit-learn SVC is 'rbf':
# + slideshow={"slide_type": "-"}
model = svm.SVC()
print(data)
model.fit(data[['x','y']], data['class'])
# + [markdown] slideshow={"slide_type": "subslide"}
# As before, we can visualize the decision boundary by simply plotting all the points in our plane:
# + slideshow={"slide_type": "-"}
def plot_boundary(model, data):
cmap = ListedColormap(['#8888FF','#FF8888'])
xmin, xmax, ymin, ymax = -5, 5, -25, 25
grid_size = 0.2
xx, yy = np.meshgrid(np.arange(xmin, xmax, grid_size),
np.arange(ymin, ymax, grid_size))
pp = model.predict(np.c_[xx.ravel(), yy.ravel()])
zz = np.array([{'a':0,'b':1}[ab] for ab in pp])
zz = zz.reshape(xx.shape)
plt.figure()
plt.pcolormesh(xx, yy, zz, cmap = cmap)
plot_predicted(model, data)
plt.legend(loc='upper left', ncol=2)
# + slideshow={"slide_type": "subslide"}
plot_boundary(model, data)
# + [markdown] slideshow={"slide_type": "subslide"}
# We can also plot the decision function in the plane:
# -
def plot_decision(model, data):
cmap = 'RdBu_r'
xmin, xmax, ymin, ymax = -5, 5, -25, 25
grid_size = 0.2
xx, yy = np.meshgrid(np.arange(xmin, xmax, grid_size),
np.arange(ymin, ymax, grid_size))
pp = model.decision_function(np.c_[xx.ravel(), yy.ravel()])
zz = pp.reshape(xx.shape)
plt.figure()
plt.pcolormesh(xx, yy, zz, cmap = cmap)
plt.colorbar()
plot_predicted(model, data)
plt.legend(loc='upper left', ncol=2)
# + slideshow={"slide_type": "subslide"}
plot_decision(model, data)
plt.show()
# + [markdown] slideshow={"slide_type": "subslide"}
# ## Effects of $\gamma$ and C
# -
# The RBF kernel has an additional parameter, called `gamma`. A smaller gamma results in a shallower, more spread out Gaussian function, and therefore a smoother result:
model = svm.SVC(gamma=0.1, C = 1)
model.fit(data[['x','y']], data['class'])
plot_boundary(model, data)
# Conversely, a large gamma results in very narrow, spiky Gaussians.
model = svm.SVC(gamma=2, C = 1)
model.fit(data[['x','y']], data['class'])
plot_boundary(model, data)
# As before, the C parameter plays the part of penalizing misclassifications. It is a bit harder to think about what this means in an RBF context, though.
model = svm.SVC(gamma=0.1, C = 0.1)
model.fit(data[['x','y']], data['class'])
plot_boundary(model, data)
model = svm.SVC(gamma=0.1, C = 10)
model.fit(data[['x','y']], data['class'])
plot_boundary(model, data)
# ## Polynomial Kernel
# ---
# There is also a polynomial kernel, which computes separators based on polynomial functions of the inputs. It has an additional parameter, `coef0`, which is typically set to 1 so that the kernel includes the lower-degree terms. The `degree` parameter sets the degree of the polynomial; a degree of 2 or 3 usually works best.
# + slideshow={"slide_type": "-"}
model = svm.SVC(kernel='poly', degree=3, coef0 = 1, C = 0.1)
model.fit(data[['x','y']], data['class'])
plot_boundary(model, data)
plt.show()
# -
# ## Next Time
# ---
# Model selection
# - Cross validation
# - Parameter search
|
python/examples/16-svm.ipynb
|
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# ## Logarithmic Returns
import numpy as np
from pandas_datareader import data as wb
import matplotlib.pyplot as plt
MSFT = wb.DataReader('MSFT', data_source='iex', start='2015-1-1')
MSFT
# ### Log Returns
# $$
# ln(\frac{P_t}{P_{t-1}})
# $$
# - Calculate the log returns for Microsoft.
# - Plot the results on a graph.
# - Estimate the daily and the annual mean of the obtained log returns.
# - Print the result in a presentable form.
# ****
# Repeat this exercise for any stock of interest to you. :)
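One possible solution sketch, using a few hypothetical closing prices in place of the IEX download (the prices below are made up, and annualizing by 250 trading days is a common convention, not the only one):

```python
import numpy as np
import pandas as pd

# hypothetical closing prices standing in for MSFT['close']
prices = pd.Series([100.0, 101.0, 99.5, 102.0, 103.5])

log_returns = np.log(prices / prices.shift(1))  # ln(P_t / P_{t-1})
daily_mean = log_returns.mean()
annual_mean = daily_mean * 250  # ~250 trading days per year

print(str(round(daily_mean * 100, 3)) + ' %')
print(str(round(annual_mean * 100, 3)) + ' %')
```

With real data, replace `prices` by the `close` column of the DataFrame returned by `wb.DataReader`.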
|
Python for Finance - Code Files/65 Logarithmic Returns/Online Financial Data (APIs)/Python 3 APIs/Logarithmic Returns - Exercise_IEX.ipynb
|
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # Canary Rollout with Seldon and Ambassador
#
# ## Setup Seldon Core
#
# Use the setup notebook to [Setup Cluster](https://docs.seldon.io/projects/seldon-core/en/latest/examples/seldon_core_setup.html#Setup-Cluster) with [Ambassador Ingress](https://docs.seldon.io/projects/seldon-core/en/latest/examples/seldon_core_setup.html#Ambassador) and [Install Seldon Core](https://docs.seldon.io/projects/seldon-core/en/latest/examples/seldon_core_setup.html#Install-Seldon-Core). Instructions [also online](https://docs.seldon.io/projects/seldon-core/en/latest/examples/seldon_core_setup.html).
# !kubectl create namespace seldon
# !kubectl config set-context $(kubectl config current-context) --namespace=seldon
# +
from IPython.core.magic import register_line_cell_magic
@register_line_cell_magic
def writetemplate(line, cell):
with open(line, "w") as f:
f.write(cell.format(**globals()))
# -
# VERSION = !cat ../../../version.txt
VERSION = VERSION[0]
VERSION
# ## Launch main model
#
# We will create a very simple Seldon Deployment with a dummy model image, `seldonio/mock_classifier` (tagged with the version read from `version.txt` above). This deployment is named `example`.
# %%writetemplate model.yaml
apiVersion: machinelearning.seldon.io/v1alpha2
kind: SeldonDeployment
metadata:
labels:
app: seldon
name: example
spec:
name: canary-example
predictors:
- componentSpecs:
- spec:
containers:
- image: seldonio/mock_classifier:{VERSION}
imagePullPolicy: IfNotPresent
name: classifier
terminationGracePeriodSeconds: 1
graph:
children: []
endpoint:
type: REST
name: classifier
type: MODEL
name: main
replicas: 1
# !kubectl create -f model.yaml
# !kubectl rollout status deploy/$(kubectl get deploy -l seldon-deployment-id=example -o jsonpath='{.items[0].metadata.name}')
# ### Get predictions
# +
from seldon_core.seldon_client import SeldonClient
sc = SeldonClient(deployment_name="example", namespace="seldon")
# -
# #### REST Request
r = sc.predict(gateway="ambassador", transport="rest")
assert r.success == True
print(r)
# ## Launch Canary
#
# We will now extend the existing graph and add a second predictor as a canary (in this example it runs the same `seldonio/mock_classifier:{VERSION}` image). We will add traffic values to split traffic 75/25 between the main and canary predictors.
# %%writetemplate canary.yaml
apiVersion: machinelearning.seldon.io/v1alpha2
kind: SeldonDeployment
metadata:
labels:
app: seldon
name: example
spec:
name: canary-example
predictors:
- componentSpecs:
- spec:
containers:
- image: seldonio/mock_classifier:{VERSION}
imagePullPolicy: IfNotPresent
name: classifier
terminationGracePeriodSeconds: 1
graph:
children: []
endpoint:
type: REST
name: classifier
type: MODEL
name: main
replicas: 1
traffic: 75
- componentSpecs:
- spec:
containers:
- image: seldonio/mock_classifier:{VERSION}
imagePullPolicy: IfNotPresent
name: classifier
terminationGracePeriodSeconds: 1
graph:
children: []
endpoint:
type: REST
name: classifier
type: MODEL
name: canary
replicas: 1
traffic: 25
# !kubectl apply -f canary.yaml
# !kubectl rollout status deploy/$(kubectl get deploy -l seldon-deployment-id=example -o jsonpath='{.items[0].metadata.name}')
# !kubectl rollout status deploy/$(kubectl get deploy -l seldon-deployment-id=example -o jsonpath='{.items[1].metadata.name}')
# Show our REST requests are now split with roughly 25% going to the canary.
sc.predict(gateway="ambassador", transport="rest")
# +
from collections import defaultdict
counts = defaultdict(int)
n = 100
for i in range(n):
r = sc.predict(gateway="ambassador", transport="rest")
# -
# The following checks the number of prediction requests processed by the default and canary predictors respectively.
# default_count = !kubectl logs $(kubectl get pod -lseldon-app=example-main -o jsonpath='{.items[0].metadata.name}') classifier | grep "root:predict" | wc -l
# canary_count = !kubectl logs $(kubectl get pod -lseldon-app=example-canary -o jsonpath='{.items[0].metadata.name}') classifier | grep "root:predict" | wc -l
canary_percentage = float(canary_count[0]) / float(default_count[0])  # ratio of canary to main requests; expect ~25/75
print(canary_percentage)
assert canary_percentage > 0.1 and canary_percentage < 0.5
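As a sanity check on the expected ratio (a standalone simulation, no cluster required): with a 75/25 traffic split, the expected canary-to-main count ratio is 25/75 ≈ 0.33, which is why the bounds above are 0.1 and 0.5.

```python
import random

random.seed(0)
n = 10_000
# each request independently goes to the canary with probability 0.25
canary_hits = sum(1 for _ in range(n) if random.random() < 0.25)
ratio = canary_hits / (n - canary_hits)  # canary count / main count
print(ratio)
assert 0.1 < ratio < 0.5
```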
# !kubectl delete -f canary.yaml
|
doc/jupyter_execute/examples/ambassador/canary/ambassador_canary.ipynb
|
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# name: python3
# ---
# + id="6q8yZl1z9AIT" colab={"base_uri": "https://localhost:8080/", "height": 1000} outputId="1c5a246c-848b-418a-9a26-692e8a14e23c"
# !pip install basedosdados
# + id="ys3X7WK19aT5"
import basedosdados as bd
import pandas as pd
import numpy as np
# + id="3IwHS6SA9e38"
query = """
SELECT id_regiao, id_escola, media_lp_leitura, media_lp_escrita, media_mt
FROM `basedosdados.br_inep_ana.escola`
WHERE ano = 2016
"""
# + colab={"base_uri": "https://localhost:8080/"} id="Q_gWqjbK9oNp" outputId="b7128d5a-05cf-4cdb-939e-b4f144cd2ec5"
escola = bd.read_sql(query, billing_project_id='basedosdados-input')
# + id="xLUOHaKfgt0K"
value_vars = ['media_lp_leitura', 'media_lp_escrita', 'media_mt']
notas_melt = pd.melt(escola, id_vars=['id_regiao', 'id_escola'], value_vars=value_vars, var_name='prova', value_name='valor')
notas_melt['valor'] = notas_melt['valor'].astype('float64')
notas_melt.dropna(inplace=True)
# + colab={"base_uri": "https://localhost:8080/", "height": 612} id="y6IevC0594OT" outputId="44c5e4ff-c50e-4bac-ac36-97a0c6b4e763"
# libraries & dataset
import seaborn as sns
import matplotlib.pyplot as plt
# set a grey background (use sns.set_theme() if seaborn version 0.11.0 or above)
sns.set()
# Grouped boxplot
fig = plt.figure(figsize=(15,10))
sns.color_palette("hls", 8)
sns.boxplot(x="id_regiao", y="valor", hue="prova", data=notas_melt, palette='muted')
plt.legend(loc='lower left')
plt.show()
fig.savefig('correlacao.svg', transparent=True, dpi=300)
|
redes_sociais/br_inep_ana_20211105.ipynb
|
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# ## Advanced Lane Finding Project
#
# The goals / steps of this project are the following:
#
# * Compute the camera calibration matrix and distortion coefficients given a set of chessboard images.
# * Apply a distortion correction to raw images.
# * Use color transforms, gradients, etc., to create a thresholded binary image.
# * Apply a perspective transform to rectify binary image ("birds-eye view").
# * Detect lane pixels and fit to find the lane boundary.
# * Determine the curvature of the lane and vehicle position with respect to center.
# * Warp the detected lane boundaries back onto the original image.
# * Output visual display of the lane boundaries and numerical estimation of lane curvature and vehicle position.
# +
# import modules
import glob
from IPython.display import HTML
import cv2
import matplotlib.pyplot as plt
from moviepy.editor import VideoFileClip
import numpy as np
# %matplotlib qt
# -
# ## Previous Project
# +
def grayscale(img):
"""Applies the Grayscale transform
This will return an image with only one color channel
but NOTE: to see the returned image as grayscale
(assuming your grayscaled image is called 'gray')
you should call plt.imshow(gray, cmap='gray')"""
return cv2.cvtColor(img, cv2.COLOR_RGB2GRAY)
# Or use BGR2GRAY if you read an image with cv2.imread()
# return cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
def canny(img, low_threshold, high_threshold):
"""Applies the Canny transform"""
return cv2.Canny(img, low_threshold, high_threshold)
def gaussian_blur(img, kernel_size):
"""Applies a Gaussian Noise kernel"""
return cv2.GaussianBlur(img, (kernel_size, kernel_size), 0)
def region_of_interest(img, vertices):
"""
Applies an image mask.
Only keeps the region of the image defined by the polygon
formed from `vertices`. The rest of the image is set to black.
`vertices` should be a numpy array of integer points.
"""
#defining a blank mask to start with
mask = np.zeros_like(img)
#defining a 3 channel or 1 channel color to fill the mask with depending on the input image
if len(img.shape) > 2:
channel_count = img.shape[2] # i.e. 3 or 4 depending on your image
ignore_mask_color = (255,) * channel_count
else:
ignore_mask_color = 255
#filling pixels inside the polygon defined by "vertices" with the fill color
cv2.fillPoly(mask, vertices, ignore_mask_color)
#returning the image only where mask pixels are nonzero
masked_image = cv2.bitwise_and(img, mask)
return masked_image
def draw_lines(img, lines, color=[255, 0, 0], thickness=2):
slope_threshold = 0.4
imshape = img.shape
# Find out left lanes and right lanes
left_lines, right_lines = [], []
slope_left_lines, slope_right_lines = [], []
b_left_lines, b_right_lines = [], []
for ind, line in enumerate(lines):
for x1, y1, x2, y2 in line:
dx = x2 - x1
if dx == 0: # to avoid infinite slope
dx = 1
dy = y2 - y1
slope = dy/dx
b = y1 - slope*x1
# Filter out lines with wrong inclination
if -slope_threshold < slope < slope_threshold:
continue
# Filter out lines with wrong bottom intersection
xb = (imshape[0] - b)/slope
if xb < 0 or xb > imshape[1]:
continue
# Group left and right lines
if slope < 0:
left_lines.append(line[0].tolist())
slope_left_lines.append(slope)
b_left_lines.append(b)
else:
right_lines.append(line[0].tolist())
slope_right_lines.append(slope)
b_right_lines.append(b)
# Remove outliers based on slope and b values
def remove_outliers(lines, slope_list, b_list):
if len(lines) == 0:
return []
avg_slope = np.mean(slope_list)
std_slope = np.std(slope_list)
avg_b = np.mean(b_list)
std_b = np.std(b_list)
new_lines = []
for slope, b, line in zip(slope_list, b_list, lines):
if slope < avg_slope - std_slope or slope > avg_slope + std_slope:
continue
if b < avg_b - std_b or b > avg_b + std_b:
continue
new_lines.append(line)
return new_lines
left_lines = remove_outliers(left_lines, slope_left_lines, b_left_lines)
right_lines = remove_outliers(right_lines, slope_right_lines, b_right_lines)
# Form one line
def form_one_line(lines, slope_list, b_list):
if len(lines) == 0:
return []
y_list = [y1 for x1, y1, x2, y2 in lines]
y_list.extend([y2 for x1, y1, x2, y2 in lines])
top_y = min(y_list)
avg_slope = np.mean(slope_list)
avg_b = np.mean(b_list)
top_x = int(round((top_y - avg_b)/avg_slope))
bottom_x = int(round((imshape[0] - avg_b)/avg_slope))
return [[bottom_x, imshape[0], top_x, top_y]]
left_lines = form_one_line(left_lines, slope_left_lines, b_left_lines)
right_lines = form_one_line(right_lines, slope_right_lines, b_right_lines)
lines = left_lines
lines.extend(right_lines)
len_lines = len(lines)
lines = np.array(lines).reshape(len_lines, 1, 4)
for line in lines:
for x1,y1,x2,y2 in line:
cv2.line(img, (x1, y1), (x2, y2), color, thickness)
def hough_lines(img, rho, theta, threshold, min_line_len, max_line_gap):
"""
`img` should be the output of a Canny transform.
Returns an image with hough lines drawn.
"""
lines = cv2.HoughLinesP(img, rho, theta, threshold, np.array([]), minLineLength=min_line_len, maxLineGap=max_line_gap)
line_img = np.zeros((img.shape[0], img.shape[1], 3), dtype=np.uint8)
draw_lines(line_img, lines, thickness=10)
return line_img
# Python 3 has support for cool math symbols.
def weighted_img(img, initial_img, α=0.8, β=1., γ=0.):
"""
`img` is the output of the hough_lines(), An image with lines drawn on it.
Should be a blank image (all black) with lines drawn on it.
`initial_img` should be the image before any processing.
The result image is computed as follows:
initial_img * α + img * β + γ
NOTE: initial_img and img must be the same shape!
"""
return cv2.addWeighted(initial_img, α, img, β, γ)
# -
def process_image(image):
# NOTE: The output you return should be a color image (3 channel) for processing video below
# TODO: put your pipeline here,
# you should return the final output (image where lines are drawn on lanes)
# Step 1: convert the image to gray scale
gray_image = grayscale(image)
# Step 2: apply gaussian_blur to smooth the image
blur_image = gaussian_blur(gray_image, 5)
# Step 3: apply Canny edge detection algorithm
edge_image = canny(blur_image, 50, 150)
# Step 4: apply mask filter
imshape = image.shape
vertices = np.array([[(20,imshape[0]),(imshape[1]/2-50, imshape[0]/2+60), (imshape[1]/2+50, imshape[0]/2+60), (imshape[1]-20,imshape[0])]], dtype=np.int32)
mask_image = region_of_interest(edge_image, vertices)
# Step 5: find lines using Hough transform
hough_image = hough_lines(mask_image, 2, np.pi/180, 20, 50, 30)
# Merge detected lines to the original image
merg_image = weighted_img(hough_image, image)
result = merg_image
return result
#clip1 = VideoFileClip("test_videos/project_video.mp4")
clip1 = VideoFileClip("test_videos/project_video.mp4").subclip(1,5)
white_output = 'test_videos/project_video_pre.mp4'
white_clip = clip1.fl_image(process_image) #NOTE: this function expects color images!!
# %time white_clip.write_videofile(white_output, audio=False)
HTML("""
<video width="960" height="540" controls>
<source src="{0}">
</video>
""".format(white_output))
# ## Step 1: Camera Calibration Using Chessboard Images
# +
# prepare object points, like (0,0,0), (1,0,0), (2,0,0) ....,(6,5,0)
objp = np.zeros((6*9,3), np.float32)
objp[:,:2] = np.mgrid[0:9,0:6].T.reshape(-1,2)
# Arrays to store object points and image points from all the images.
objpoints = [] # 3d points in real world space
imgpoints = [] # 2d points in image plane.
# Make a list of calibration images
images = glob.glob('camera_cal/calibration??.jpg')
images.sort()
# Step through the list and search for chessboard corners
for fname in images[:]:
print(fname)
img = cv2.imread(fname)
gray = cv2.cvtColor(img,cv2.COLOR_BGR2GRAY)
# Find the chessboard corners
ret, corners = cv2.findChessboardCorners(gray, (9,6),None)
# If found, add object points, image points
if not ret:
continue
objpoints.append(objp)
imgpoints.append(corners)
# Draw and display the corners
img = cv2.drawChessboardCorners(img, (9,6), corners, ret)
fname_out = fname[:-4] + "_pts.jpg"
print(fname_out)
cv2.imwrite(fname_out, img)
# -
# ## Step 2: Undistort the images
# +
# Make a list of original images
images = glob.glob('camera_cal/calibration??.jpg')
images.sort()
def get_calibrate(objpoints, imgpoints, img_shape):
ret, mtx, dist, rvecs, tvecs = cv2.calibrateCamera(objpoints, imgpoints, img_shape, None, None)
return mtx, dist
def get_undistort(img, mtx, dist):
dst = cv2.undistort(img, mtx, dist, None, mtx)
return dst
img = cv2.imread(images[0])
mtx, dist = get_calibrate(objpoints, imgpoints, img.shape[1::-1])
for fname in images[:]:
print(fname)
img = cv2.imread(fname)
#dst_img, dist_mtx, dist_coef = cal_undistort(img, objpoints, imgpoints)
dst_img = get_undistort(img, mtx, dist)
fname_out = fname[:-4] + "_und.jpg"
cv2.imwrite(fname_out, dst_img)
fname = "test_images/straight_lines1.jpg"
print(fname)
img = cv2.imread(fname)
dst_img = get_undistort(img, mtx, dist)
fname_out = fname[:-4] + "_und.jpg"
cv2.imwrite(fname_out, dst_img)
# -
# ## Step 3 : Create Thresholded Binary Image
# +
def convert_thresh(img, sx_thresh=(20, 100), s_thresh=(170, 255), color="gray"):
# Convert to HLS color space
hls_img = cv2.cvtColor(img, cv2.COLOR_BGR2HLS)
# Sobel x
l_channel = hls_img[:, :, 1]
sobelx = cv2.Sobel(l_channel, cv2.CV_64F, 1, 0)
abs_sobelx = np.absolute(sobelx)
scaled_sobel = np.uint8(255 * abs_sobelx / np.max(abs_sobelx))
# Threshold x gradient
sxbinary = np.zeros_like(scaled_sobel)
sxbinary[(scaled_sobel >= sx_thresh[0]) & (scaled_sobel < sx_thresh[1])] = 1
# Threshold color channel
s_channel = hls_img[:, :, 2]
s_binary = np.zeros_like(s_channel)
s_binary[(s_channel >= s_thresh[0]) & (s_channel < s_thresh[1])] = 1
# Stack each channel
if color == "color":
combined_binary = np.dstack((np.zeros_like(sxbinary), sxbinary, s_binary)) * 255
else:
tmp_binary = np.zeros_like(sxbinary)
tmp_binary[(sxbinary == 1) | (s_binary == 1)] = 1
tmp_binary = np.dstack((tmp_binary, tmp_binary, tmp_binary)) * 255
combined_binary = cv2.cvtColor(tmp_binary, cv2.COLOR_BGR2GRAY)
return combined_binary
fnameL = glob.glob("test_images/test?.jpg")
fnameL.extend(glob.glob("test_images/straight_lines?.jpg"))
fnameL.sort()
for fname in fnameL:
print(fname)
img = cv2.imread(fname)
und_img = get_undistort(img, mtx, dist)
thr_img = convert_thresh(und_img, color="gray")
fname_out = fname[:-4] + "_thres.jpg"
cv2.imwrite(fname_out, thr_img)
# -
# ## Step 4 : Perspective Transform
# +
# Get list of undistorted images
fname = "test_images/straight_lines1.jpg"
# Read image
img = cv2.imread(fname)
und_img = get_undistort(img, mtx, dist)
thr_img = convert_thresh(und_img, color="gray")
img_size = thr_img.shape[1::-1]
# Find corners
corner1 = (200, img_size[1])
corner2 = (img_size[0]//2 - 57, img_size[1]//2+100)
corner3 = (img_size[0]//2 + 60, img_size[1]//2+100)
corner4 = (img_size[0] - 180, img_size[1])
src_pts = np.float32([corner1, corner2, corner3, corner4])
# Plot corner lines
ori_thr_img = cv2.cvtColor(thr_img, cv2.COLOR_GRAY2BGR)
cv2.line(ori_thr_img, corner1, corner2, (0, 0, 255), 3)
cv2.line(ori_thr_img, corner2, corner3, (0, 0, 255), 3)
cv2.line(ori_thr_img, corner3, corner4, (0, 0, 255), 3)
fname_out = fname[:-4] + "_corn.jpg"
cv2.imwrite(fname_out, ori_thr_img)
# Compute transformation matrix
offsetx = 300
dst_pts = np.float32([[offsetx, img_size[1]],
[offsetx, 0],
[img_size[0] - offsetx, 0],
[img_size[0] - offsetx, img_size[1]]])
M = cv2.getPerspectiveTransform(src_pts, dst_pts)
# Transform the original image
wrp_img = cv2.warpPerspective(thr_img, M, img_size)
fname_out = fname[:-4] + "_warp.jpg"
cv2.imwrite(fname_out, wrp_img)
# -
# ## Step 5 : Find The Lane Boundary
# +
def find_lane_boundary(img):
img_shape = img.shape[1::-1]
histogram = np.sum(img[img_shape[1]//2:, :], axis=0)
midpoint = img_shape[0]//2
leftx_base = np.argmax(histogram[:midpoint])
rightx_base = np.argmax(histogram[midpoint:]) + midpoint
out_img = np.dstack((img, img, img))
# Hyperparameters
nwindows = 9
margin = 100
minpix = 50
window_height = int(img_shape[1]//nwindows)
# Identify the x and y positions of all nonzero pixels in the image
nonzero = img.nonzero()
nonzeroy = np.array(nonzero[0])
nonzerox = np.array(nonzero[1])
leftx_current = leftx_base
rightx_current = rightx_base
left_lane_inds = []
right_lane_inds = []
for window in range(nwindows):
# Identify window boundaries in x and y
win_y_low = img_shape[1] - (window + 1) * window_height
win_y_high = img_shape[1] - window * window_height
win_xleft_low = leftx_current - margin
win_xleft_high = leftx_current + margin
win_xright_low = rightx_current - margin
win_xright_high = rightx_current + margin
# Draw the windows
cv2.rectangle(out_img, (win_xleft_low, win_y_low), (win_xleft_high, win_y_high), (0, 255, 0), 2)
cv2.rectangle(out_img, (win_xright_low, win_y_low), (win_xright_high, win_y_high), (0, 255, 0), 2)
# Identify nonzero pixels within window
good_left_inds = ((nonzeroy >= win_y_low) & (nonzeroy < win_y_high) &
(nonzerox >= win_xleft_low) & (nonzerox < win_xleft_high)).nonzero()[0]
good_right_inds = ((nonzeroy >= win_y_low) & (nonzeroy < win_y_high) &
(nonzerox >= win_xright_low) & (nonzerox < win_xright_high)).nonzero()[0]
# Append these indices to the lists
left_lane_inds.append(good_left_inds)
right_lane_inds.append(good_right_inds)
# If there are more than minpix pixels, recenter the next window
if len(good_left_inds) > minpix:
leftx_current = int(np.mean(nonzerox[good_left_inds]))
if len(good_right_inds) > minpix:
rightx_current = int(np.mean(nonzerox[good_right_inds]))
left_lane_inds = np.concatenate(left_lane_inds)
right_lane_inds = np.concatenate(right_lane_inds)
# Extract left and right line pixel positions
leftx = nonzerox[left_lane_inds]
lefty = nonzeroy[left_lane_inds]
rightx = nonzerox[right_lane_inds]
righty = nonzeroy[right_lane_inds]
# Fit a second order polynomial
left_fit = np.polyfit(lefty, leftx, 2)
right_fit = np.polyfit(righty, rightx, 2)
# Generate x and y for plotting
ploty = np.linspace(0, img_shape[1] - 1, img_shape[1])
left_fitx = left_fit[0] * ploty**2 + left_fit[1] * ploty + left_fit[2]
right_fitx = right_fit[0] * ploty**2 + right_fit[1] * ploty + right_fit[2]
# Plot left and right polynomial
for y, x in zip(ploty, left_fitx):
cv2.circle(out_img, (int(x), int(y)), 3, [0, 0, 255])
for y, x in zip(ploty, right_fitx):
cv2.circle(out_img, (int(x), int(y)), 3, [0, 0, 255])
return left_fit, right_fit, out_img
fname = "test_images/straight_lines1_warp.jpg"
img = cv2.imread(fname, cv2.IMREAD_GRAYSCALE)
img_shape = img.shape[1::-1]
left_fit, right_fit, out_img = find_lane_boundary(img)
#out_img[lefty, leftx] = [255, 0, 0]
#out_img[righty, rightx] = [0, 0, 255]
fname_out = fname[:-4] + "_wind.jpg"
cv2.imwrite(fname_out, out_img)
# +
def find_lane_boundary_with_line(img, left_fit, right_fit):
img_shape = img.shape[1::-1]
out_img = np.dstack((img, img, img))
# Hyperparameters
margin = 30
# Identify the x and y positions of all nonzero pixels in the image
nonzero = img.nonzero()
nonzeroy = np.array(nonzero[0])
nonzerox = np.array(nonzero[1])
left_lane_inds = ((nonzerox > (left_fit[0]*(nonzeroy**2) + left_fit[1]*nonzeroy +
left_fit[2] - margin)) & (nonzerox < (left_fit[0]*(nonzeroy**2) +
left_fit[1]*nonzeroy + left_fit[2] + margin)))
right_lane_inds = ((nonzerox > (right_fit[0]*(nonzeroy**2) + right_fit[1]*nonzeroy +
right_fit[2] - margin)) & (nonzerox < (right_fit[0]*(nonzeroy**2) +
right_fit[1]*nonzeroy + right_fit[2] + margin)))
# Extract left and right line pixel positions
leftx = nonzerox[left_lane_inds]
lefty = nonzeroy[left_lane_inds]
rightx = nonzerox[right_lane_inds]
righty = nonzeroy[right_lane_inds]
# Fit a second order polynomial
left_fit = np.polyfit(lefty, leftx, 2)
right_fit = np.polyfit(righty, rightx, 2)
# Generate x and y for plotting
ploty = np.linspace(0, img_shape[1] - 1, img_shape[1])
left_fitx = left_fit[0] * ploty**2 + left_fit[1] * ploty + left_fit[2]
right_fitx = right_fit[0] * ploty**2 + right_fit[1] * ploty + right_fit[2]
# Plot left and right polynomial
for y, x in zip(ploty, left_fitx):
cv2.circle(out_img, (int(x), int(y)), 3, [0, 0, 255])
for y, x in zip(ploty, right_fitx):
cv2.circle(out_img, (int(x), int(y)), 3, [0, 0, 255])
return left_fit, right_fit, out_img
fname = "test_images/straight_lines1_warp.jpg"
img = cv2.imread(fname, cv2.IMREAD_GRAYSCALE)
img_shape = img.shape[1::-1]
print(left_fit)
print(right_fit)
left_fit, right_fit, out_img = find_lane_boundary_with_line(img, left_fit, right_fit)
print(left_fit)
print(right_fit)
#out_img[lefty, leftx] = [255, 0, 0]
#out_img[righty, rightx] = [0, 0, 255]
fname_out = fname[:-4] + "_line.jpg"
cv2.imwrite(fname_out, out_img)
# -
# ## Step 6 : Determine the curvature of the lane and vehicle position with respect to center
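The curvature code below evaluates the standard radius-of-curvature formula for a second-order fit $x = Ay^2 + By + C$ (with the pixel-space coefficients first rescaled to meters via `mx` and `my`):

```latex
R_{curve} = \frac{\left(1 + (2Ay + B)^2\right)^{3/2}}{\lvert 2A \rvert}
```

In `calculate_curvature`, `fit0` and `fit1` play the roles of the rescaled $A$ and $B$.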
# +
mx = 3.7/690
my = 30/720
def calculate_curvature(curve_fit, y, mx, my):
fit0 = mx/(my**2)*curve_fit[0]
fit1 = mx/my*curve_fit[1]
fit2 = mx*curve_fit[2]
curvature = ((1 + (2 * fit0 * y + fit1)**2)**1.5)/abs(2 * fit0)
return curvature
def calculate_vehicle_position(left_fit, right_fit, plotx, ploty, mx):
left_fitx = left_fit[0] * ploty**2 + left_fit[1] * ploty + left_fit[2]
right_fitx = right_fit[0] * ploty**2 + right_fit[1] * ploty + right_fit[2]
position = (left_fitx + right_fitx)/2.0
center = plotx/2.0
position_from_center = position - center
return position_from_center*mx
print(calculate_curvature(left_fit, img_shape[1], mx, my))
print(calculate_curvature(right_fit, img_shape[1], mx, my))
print(calculate_vehicle_position(left_fit, right_fit, img_shape[0], img_shape[1]-1, mx))
# -
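# The coefficient rescaling inside `calculate_curvature` converts the pixel-space fit x = A*y**2 + B*y + C into meters before applying the radius-of-curvature formula R = (1 + (2*A*y + B)**2)**1.5 / |2*A|. A quick sanity check of that formula, restated standalone with unit scaling: for x = A*y**2 the radius at the vertex (y = 0) reduces to 1/(2*|A|).

```python
# Standalone restatement of the curvature formula with unit scaling (mx = my = 1).
def curvature_px(fit, y):
    A, B, _ = fit
    return (1 + (2 * A * y + B) ** 2) ** 1.5 / abs(2 * A)

# For x = A*y**2 the radius of curvature at the vertex (y = 0) is 1/(2*A).
A = 0.001
print(curvature_px([A, 0.0, 0.0], 0))  # 500.0
```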
# ## Step 7 : Warp the detected lane boundaries back onto the original image.
# +
def draw_lanes_my(img, left_fit, right_fit, M):
lane_wrp_img = np.zeros_like(img)
img_size = lane_wrp_img.shape[1::-1]
# Generate x and y for plotting
ploty = np.linspace(0, img_size[1] - 1, img_size[1])
left_fitx = left_fit[0] * ploty**2 + left_fit[1] * ploty + left_fit[2]
right_fitx = right_fit[0] * ploty**2 + right_fit[1] * ploty + right_fit[2]
# Plot left and right polynomial
alpha_mask = np.zeros_like(lane_wrp_img)
for y, x in zip(ploty, left_fitx):
cv2.circle(lane_wrp_img, (int(x), int(y)), 10, [255, 0, 0])
cv2.circle(alpha_mask, (int(x), int(y)), 10, [255, 255, 255])
for y, x in zip(ploty, right_fitx):
cv2.circle(lane_wrp_img, (int(x), int(y)), 10, [255, 0, 0])
cv2.circle(alpha_mask, (int(x), int(y)), 10, [255, 255, 255])
lane_unw_img = cv2.warpPerspective(lane_wrp_img, M, img_size, flags=cv2.WARP_INVERSE_MAP)
alpha_mask = cv2.warpPerspective(alpha_mask, M, img_size, flags=cv2.WARP_INVERSE_MAP)
# now use transparent mask (alpha blending)
# https://www.learnopencv.com/alpha-blending-using-opencv-cpp-python/
alpha_mask = alpha_mask.astype(float)/255
foreground = cv2.multiply(alpha_mask, lane_unw_img.astype(float))
background = cv2.multiply(1.0 - alpha_mask, img.astype(float))
out_img = cv2.add(foreground, background)
return out_img
# -
def draw_lanes(img, warped, left_fit, right_fit, M):
# Create an image to draw the lines on
img_shape = warped.shape[1::-1]
warp_zero = np.zeros_like(warped).astype(np.uint8)
color_warp = np.dstack((warp_zero, warp_zero, warp_zero))
# Generate x and y for plotting
ploty = np.linspace(0, img_shape[1] - 1, img_shape[1])
left_fitx = left_fit[0] * ploty**2 + left_fit[1] * ploty + left_fit[2]
right_fitx = right_fit[0] * ploty**2 + right_fit[1] * ploty + right_fit[2]
# Plot left and right polynomial
for y, x in zip(ploty, left_fitx):
cv2.circle(color_warp, (int(x), int(y)), 10, [0, 0, 255])
for y, x in zip(ploty, right_fitx):
cv2.circle(color_warp, (int(x), int(y)), 10, [0, 0, 255])
# Recast the x and y points into usable format for cv2.fillPoly()
pts_left = np.array([np.transpose(np.vstack([left_fitx, ploty]))])
pts_right = np.array([np.flipud(np.transpose(np.vstack([right_fitx, ploty])))])
pts = np.hstack((pts_left, pts_right))
# Draw the lane onto the warped blank image
cv2.fillPoly(color_warp, np.int_([pts]), (0, 255, 0))
# Warp the blank back to original image space using inverse perspective matrix (Minv)
newwarp = cv2.warpPerspective(color_warp, np.linalg.inv(M), (img_shape[0], img_shape[1]))
# Combine the result with the original image
result = cv2.addWeighted(img, 1, newwarp, 0.5, 0)
return result
# +
fname = "test_images/straight_lines1.jpg"
img = cv2.imread(fname)
und_img = get_undistort(img, mtx, dist)
thr_img = convert_thresh(und_img)
wrp_img = cv2.warpPerspective(thr_img, M, thr_img.shape[1::-1])
left_fit, right_fit, out_img = find_lane_boundary(wrp_img)
out_img = draw_lanes(img, wrp_img, left_fit, right_fit, M)
#out_img = draw_lanes_my(img, left_fit, right_fit, M)
cv2.imwrite("test_images/straight_lines1_final.jpg", out_img)
# -
# ## Step 8 : Output visual display of the lane boundaries and numerical estimation of lane curvature and vehicle position.
# ## Put everything together
def process_image_updated(img, left_fit=[None], right_fit=[None], count=[0], curvature_list=[[]], pcurvature=[0],
position_list=[[]], pposition=[0]):
count[0] += 1 # frame count
und_img = get_undistort(img, mtx, dist)
thr_img = convert_thresh(und_img)
wrp_img = cv2.warpPerspective(thr_img, M, thr_img.shape[1::-1])
wrp_img_shape = wrp_img.shape[1::-1]
if left_fit == [None]:
left_fit[0], right_fit[0], out_img = find_lane_boundary(wrp_img)
else:
left_fit[0], right_fit[0], out_img = find_lane_boundary_with_line(wrp_img, left_fit[0], right_fit[0])
out_img = draw_lanes(img, wrp_img, left_fit[0], right_fit[0], M)
#out_img = draw_lanes_my(img, left_fit[0], right_fit[0], M)
# Write curvature
curvature_left = calculate_curvature(left_fit[0], wrp_img_shape[1], mx, my)
curvature_right = calculate_curvature(right_fit[0], wrp_img_shape[1], mx, my)
ccurvature = (curvature_left + curvature_right)/2.0
if len(curvature_list[0]) < 10:
curvature_list[0].append(ccurvature)
curvature = pcurvature[0]
else:
curvature_list[0].append(ccurvature)
curvature = np.mean(curvature_list[0])
pcurvature[0] = curvature
curvature_list[0] = []
out_img = cv2.putText(out_img, f"Curvature={curvature:6.0f}m", (10, 100),
cv2.FONT_HERSHEY_SIMPLEX, 1, (255, 255, 255), 2)
# Write position
cposition = calculate_vehicle_position(left_fit[0], right_fit[0], wrp_img_shape[0], wrp_img_shape[1]-1, mx)
if len(position_list[0]) < 10:
position_list[0].append(cposition)
position = pposition[0]
else:
position_list[0].append(cposition)
position = np.mean(position_list[0])
pposition[0] = position
position_list[0] = []
out_img = cv2.putText(out_img, f"Position from center={position:6.2f}m", (10, 150),
cv2.FONT_HERSHEY_SIMPLEX, 1, (255, 255, 255), 2)
return out_img
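# Note that `process_image_updated` keeps state across video frames (previous fits, rolling curvature and position averages) through its mutable default arguments: the default lists are created once at function definition and the same objects persist between calls, which matters because `fl_image` passes only the frame. A minimal illustration of that trick:

```python
# The default list is created once, so its contents survive across calls.
def counter(count=[0]):
    count[0] += 1
    return count[0]

results = [counter(), counter(), counter()]
print(results)  # [1, 2, 3]
```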
clip = VideoFileClip("test_videos/project_video.mp4")
#clip = VideoFileClip("test_videos/project_video.mp4").subclip(22, 29)
white_output = 'test_videos/project_video_out_updated.mp4'
white_clip = clip.fl_image(process_image_updated) #NOTE: this function expects color images!!
# %time white_clip.write_videofile(white_output, audio=False)
HTML("""
<video width="960" height="540" controls>
<source src="{0}">
</video>
""".format(white_output))
clip = VideoFileClip("test_videos/challenge_video.mp4")
white_output = 'test_videos/challenge_video_out_updated.mp4'
white_clip = clip.fl_image(process_image_updated) #NOTE: this function expects color images!!
# %time white_clip.write_videofile(white_output, audio=False)
HTML("""
<video width="960" height="540" controls>
<source src="{0}">
</video>
""".format(white_output))
# Source notebook: P2.ipynb
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Pyspark 2
# language: python
# name: pyspark2
# ---
#
# ## Boolean Operators
#
# Let us understand details about boolean operators while filtering data in Spark Data Frames.
# + tags=["remove-cell"]
# %%HTML
<iframe width="560" height="315" src="https://www.youtube.com/embed/u3No8zZivpo?rel=0&controls=1&showinfo=0" frameborder="0" allowfullscreen></iframe>
# -
# * If we have to validate against multiple columns, then we need to use boolean operators such as `AND` and `OR`, or both.
# * Here are some of the examples where we end up using Boolean Operators.
# * Get count of flights which are departed late at origin and reach destination early or on time.
# * Get count of flights which are departed early or on time but arrive late by at least 15 minutes.
# * Get number of flights which are departed late on Saturdays as well as on Sundays.
# Let us start spark context for this Notebook so that we can execute the code provided. You can sign up for our [10 node state of the art cluster/labs](https://labs.itversity.com/plans) to learn Spark SQL using our unique integrated LMS.
# +
from pyspark.sql import SparkSession
import getpass
username = getpass.getuser()
spark = SparkSession. \
builder. \
config('spark.ui.port', '0'). \
config("spark.sql.warehouse.dir", f"/user/{username}/warehouse"). \
enableHiveSupport(). \
appName(f'{username} | Python - Basic Transformations'). \
master('yarn'). \
getOrCreate()
# -
# If you are going to use CLIs, you can use Spark SQL using one of the 3 approaches.
#
# **Using Spark SQL**
#
# ```
# spark2-sql \
# --master yarn \
# --conf spark.ui.port=0 \
# --conf spark.sql.warehouse.dir=/user/${USER}/warehouse
# ```
#
# **Using Scala**
#
# ```
# spark2-shell \
# --master yarn \
# --conf spark.ui.port=0 \
# --conf spark.sql.warehouse.dir=/user/${USER}/warehouse
# ```
#
# **Using Pyspark**
#
# ```
# pyspark2 \
# --master yarn \
# --conf spark.ui.port=0 \
# --conf spark.sql.warehouse.dir=/user/${USER}/warehouse
# ```
# ### Tasks
#
# Let us perform some tasks to understand filtering in detail. Solve all the problems by passing conditions using both SQL Style as well as API Style.
#
# * Read the data for the month of January 2008.
# + pycharm={"name": "#%%\n"}
airtraffic_path = "/public/airtraffic_all/airtraffic-part/flightmonth=200801"
# + pycharm={"name": "#%%\n"}
airtraffic = spark. \
read. \
parquet(airtraffic_path)
# + pycharm={"name": "#%%\n"}
airtraffic.printSchema()
# -
# * Get count of flights which are departed late at origin and reach destination early or on time.
#
airtraffic. \
select('IsDepDelayed', 'IsArrDelayed', 'Cancelled'). \
distinct(). \
show()
airtraffic. \
filter("IsDepDelayed = 'YES' AND IsArrDelayed = 'NO' AND Cancelled = 0"). \
show()
# + pycharm={"name": "#%%\n"}
airtraffic. \
filter("IsDepDelayed = 'YES' AND IsArrDelayed = 'NO' AND Cancelled = 0"). \
count()
# -
# * API Style
# + pycharm={"name": "#%%\n"}
from pyspark.sql.functions import col
# + pycharm={"name": "#%%\n"}
airtraffic. \
filter((col("IsDepDelayed") == "YES") &
(col("IsArrDelayed") == "NO") &
(col("Cancelled") == 0)
). \
count()
# -
airtraffic. \
filter((airtraffic["IsDepDelayed"] == "YES") &
(airtraffic.IsArrDelayed == "NO") &
(airtraffic.Cancelled == 0)
). \
count()
# * Get count of flights which are departed early or on time but arrive late by at least 15 minutes.
#
airtraffic. \
select('IsDepDelayed', 'IsArrDelayed', 'Cancelled'). \
distinct(). \
show()
# + pycharm={"name": "#%%\n"}
# Cancelled is always 0 when there is no delay related to departure
# We can ignore check against Cancelled
airtraffic. \
filter("IsDepDelayed = 'NO' AND ArrDelay >= 15"). \
count()
# -
airtraffic. \
filter("IsDepDelayed = 'NO' AND ArrDelay >= 15 AND cancelled = 0"). \
count()
# * API Style
# + pycharm={"name": "#%%\n"}
from pyspark.sql.functions import col
airtraffic. \
filter((col("IsDepDelayed") == "NO") &
(col("ArrDelay") >= 15)
). \
count()
# -
# * Get the number of flights which departed late on Sundays as well as on Saturdays. We can also solve such problems using the `IN` operator.
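# The `IN` operator corresponds to `Column.isin` in the API style. As a quick, Spark-free illustration of the same membership semantics (the rows and values below are made up for illustration):

```python
# Plain-Python analogue of filtering with IN / isin on a day-of-week column.
rows = [
    {"IsDepDelayed": "YES", "DayName": "Saturday"},
    {"IsDepDelayed": "YES", "DayName": "Monday"},
    {"IsDepDelayed": "NO",  "DayName": "Sunday"},
    {"IsDepDelayed": "YES", "DayName": "Sunday"},
]
weekend = {"Saturday", "Sunday"}  # the IN list
delayed_weekend = [r for r in rows
                   if r["IsDepDelayed"] == "YES" and r["DayName"] in weekend]
print(len(delayed_weekend))  # 2
```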
# + pycharm={"name": "#%%\n"}
from pyspark.sql.functions import col, concat, lpad
airtraffic. \
withColumn("FlightDate",
concat(col("Year"),
lpad(col("Month"), 2, "0"),
lpad(col("DayOfMonth"), 2, "0")
)
). \
show()
# -
l = [('X',)]
df = spark.createDataFrame(l, "dummy STRING")
from pyspark.sql.functions import current_date
df.select(current_date()).show()
# +
from pyspark.sql.functions import date_format
df.select(current_date(), date_format(current_date(), 'EE').alias('day_name')).show()
# +
from pyspark.sql.functions import date_format
df.select(current_date(), date_format(current_date(), 'EEEE').alias('day_name')).show()
# + pycharm={"name": "#%%\n"}
from pyspark.sql.functions import col, concat, lpad
airtraffic. \
withColumn("FlightDate",
concat(col("Year"),
lpad(col("Month"), 2, "0"),
lpad(col("DayOfMonth"), 2, "0")
)
). \
filter("""
IsDepDelayed = 'YES' AND Cancelled = 0 AND
(date_format(to_date(FlightDate, 'yyyyMMdd'), 'EEEE') = 'Saturday'
OR date_format(to_date(FlightDate, 'yyyyMMdd'), 'EEEE') = 'Sunday'
)
"""). \
count()
# -
# * API Style
# + pycharm={"name": "#%%\n"}
from pyspark.sql.functions import col, concat, lpad, date_format, to_date
airtraffic. \
withColumn("FlightDate",
concat(col("Year"),
lpad(col("Month"), 2, "0"),
lpad(col("DayOfMonth"), 2, "0")
)
). \
filter((col("IsDepDelayed") == "YES") & (col("Cancelled") == 0) &
((date_format(
to_date("FlightDate", "yyyyMMdd"), "EEEE"
) == "Saturday") |
(date_format(
to_date("FlightDate", "yyyyMMdd"), "EEEE"
) == "Sunday")
)
). \
count()
# -
# Source notebook: 05_basic_transformations/06_boolean_operators.ipynb
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # <font color = blue> Real-Time Driver Distraction Detection using TimeDistributed Convolutional LSTM Network for Mobile Platforms </font>
# ### <font color = green> Author: <NAME> </font>
# ## <font color =blue> Import the required libraries </font>
# +
import numpy as np
import os
from imageio import imread
import cv2 as cv
from cv2 import resize
import datetime
import matplotlib
import matplotlib.pyplot as plt
# %matplotlib inline
import random as rn
from keras import backend as K
import tensorflow as tf
from tensorflow.keras.models import Sequential, Model
from tensorflow.keras.layers import Dense, GRU, Flatten, TimeDistributed, BatchNormalization, Activation, Dropout, LSTM
from tensorflow.keras.layers import Conv2D, Conv3D, MaxPooling2D, MaxPooling3D, GlobalAveragePooling2D
from tensorflow.keras.callbacks import ModelCheckpoint, ReduceLROnPlateau
from tensorflow.keras import optimizers
from tensorflow.keras.optimizers import Adam, SGD
from tensorflow.keras.applications import ResNet50, mobilenet_v2
from tensorflow.keras.preprocessing import image
from tensorflow.keras import initializers
from tensorflow.keras.models import load_model, Model
from sklearn.metrics import confusion_matrix, f1_score, matthews_corrcoef, precision_score, recall_score
import seaborn as sns
# -
# Set the random seed
np.random.seed(30)
rn.seed(30)
tf.random.set_seed(30)
# ## <font color = blue> Load the Training and Validation CSV files </font>
train_doc = np.random.permutation(open('./Project_Data/train_cam_1_2_mv2_t6.csv').readlines())
val_doc = np.random.permutation(open('./Project_Data/val_cam_1_2_mv2.csv').readlines())
batch_size = 10
# +
# Set the date and time variable
curr_dt_time = datetime.datetime.now()
train_path = 'Project_Data/train_cam_1_2_mv2_t6'
val_path = 'Project_Data/val_cam_1_2_mv2'
num_train_sequences = len(train_doc)
print('# training sequences =', num_train_sequences)
num_val_sequences = len(val_doc)
print('# validation sequences =', num_val_sequences)
# choose the number of epochs
num_epochs = 25
print ('# epochs =', num_epochs)
# -
# ## <font color = blue> Custom video data generator for training and validation datasets </font>
def video_generator(source_path, folder_list, batch_size):
print( 'Source path = ', source_path, '; batch size =', batch_size)
# Read every alternate frame of the video, so 10 frames will be used per video
img_idx = [x for x in range(0,20,2) ] #create a list of image numbers you want to use for a particular video
while True:
t = np.random.permutation(folder_list)
num_batches = len(folder_list) // batch_size # calculate the number of batches
for batch in range(num_batches): # we iterate over the number of batches
# We will use image size of (224 x 224) with 3 channels
batch_data = np.zeros((batch_size,len(img_idx),224,224,3)) # (batch, 10 frames per video, 224 x 224 image size, 3 RGB channels)
batch_labels = np.zeros((batch_size,10)) # batch_labels is the one hot representation of the output
for folder in range(batch_size): # iterate over the batch_size
imgs = os.listdir(source_path+'/'+ t[folder + (batch*batch_size)].split(';')[0]) # read all the images in the folder
for idx,item in enumerate(img_idx): # Iterate over the frames/images of a folder to read them in
img = image.load_img(source_path+'/'+ t[folder + (batch*batch_size)].strip().split(';')[0]+'/'+imgs[item], target_size=(224, 224))
img_array = image.img_to_array(img)
#Pre-process using mobilenetv2 pre-processor
image_norm = tf.keras.applications.mobilenet_v2.preprocess_input(img_array)
# Batch of 3 channel images normalized and fed into the batch_data array
batch_data[folder,idx,:,:,0] = image_norm[:,:,0] #normalise and feed in the image
batch_data[folder,idx,:,:,1] = image_norm[:,:,1] #normalise and feed in the image
batch_data[folder,idx,:,:,2] = image_norm[:,:,2] #normalise and feed in the image
batch_labels[folder, int(t[folder + (batch*batch_size)].strip().split(';')[1])] = 1
yield batch_data, batch_labels # yield the batch_data and the batch_labels; remember what yield does
# write the code for the remaining data points which are left after full batches
batch_size_rem = (len(folder_list) % batch_size)
if batch_size_rem != 0:
batch_data = np.zeros((batch_size_rem,len(img_idx),224,224,3))
batch_labels = np.zeros((batch_size_rem,10))
for folder in range(batch_size_rem):
imgs = os.listdir(source_path+'/'+ t[folder + (num_batches*batch_size)].split(';')[0]) # read all the images in the folder
# For each video, iterate through the images
for idx,item in enumerate(img_idx): # Iterate over the frames/images of a folder to read them in
img = image.load_img(source_path+'/'+ t[folder + (num_batches*batch_size)].strip().split(';')[0]+'/'+imgs[item], target_size=(224, 224))
img_array = image.img_to_array(img)
#Pre-process using mobilenetv2 pre-processor
image_norm = tf.keras.applications.mobilenet_v2.preprocess_input(img_array)
# Batch of 3 channel images normalized and fed into the batch_data array
batch_data[folder,idx,:,:,0] = image_norm[:,:,0] #normalise and feed in the image
batch_data[folder,idx,:,:,1] = image_norm[:,:,1] #normalise and feed in the image
batch_data[folder,idx,:,:,2] = image_norm[:,:,2] #normalise and feed in the image
batch_labels[folder, int(t[folder + (num_batches*batch_size)].strip().split(';')[1])] = 1
yield batch_data, batch_labels # yield the batch_data and the batch_labels; remember what yield does
# ## <font color = blue> Define the deep learning model </font>
# Define the base model MobileNetV2
def build_mobilenet(shape):
model = tf.keras.applications.MobileNetV2(
include_top=False,
input_shape=shape,
weights='imagenet')
return model
def distr_detect_model(shape, n_classes):
# Create the base model
mobinet = build_mobilenet(shape[1:])
# Start creating the time-distribted model with Sequential
model = tf.keras.Sequential()
# Add base model and pooling layers
model.add(TimeDistributed(mobinet, input_shape=shape))
model.add(TimeDistributed(GlobalAveragePooling2D()))
model.add(BatchNormalization())
# Decision making layers
# Fully Connected layer with 1024 neurons with Relu activation. Batch Normalized and Dropout at 0.25
model.add(Dense(1024, activation='relu', kernel_initializer="he_normal", bias_initializer='zeros'))
model.add(BatchNormalization())
model.add(Dropout(0.25))
# Add a fully connected layer with 512 neurons and 25% connections dropped out
model.add(TimeDistributed(Dense(512, activation='relu')))
model.add(BatchNormalization())
model.add(Dropout(0.25))
# Add a fully connected layer with 256 neurons and 25% connections dropped out
model.add(TimeDistributed(Dense(256, activation='relu')))
model.add(BatchNormalization())
model.add(Dropout(0.25))
# Add a fully connected layer with 128 neurons and 25% connections dropped out
model.add(TimeDistributed(Dense(128, activation='relu')))
model.add(BatchNormalization())
model.add(Dropout(0.25))
# Add a LSTM layer with 512 neurons and 50% dropout
model.add(LSTM(512, return_sequences=False, dropout=0.5))
# Softmax layer for final 10-class output
model.add(Dense(n_classes, activation='softmax'))
return model
# +
# Define the model input shape of 10 images with size (224,224,3)
INSHAPE = (10, 224, 224, 3)
# No. of target classes = 10
n_classes = 10
# Call the model function to load the model
model = distr_detect_model(INSHAPE, n_classes)
# Adam optimizer with initial learning rate = 0.0005
optimiser = Adam(learning_rate=5e-4)
# Compile the model with the optimiser and loss parameters
model.compile(optimizer=optimiser, loss='categorical_crossentropy', metrics=['categorical_accuracy'])
# Print Model Summary
print (model.summary())
# -
# ## <font color = blue> Load the training and validation data using the generator </font>
train_generator = video_generator(train_path, train_doc, batch_size)
val_generator = video_generator(val_path, val_doc, batch_size)
# ## <font color = blue> Set the model training parameters, checkpoint and callbacks </font>
# +
model_name = 'model_init' + '_' + str(curr_dt_time).replace(' ','').replace(':','_') + '/'
if not os.path.exists(model_name):
os.mkdir(model_name)
filepath = model_name + 'model-{epoch:05d}-{loss:.5f}-{categorical_accuracy:.5f}-{val_loss:.5f}-{val_categorical_accuracy:.5f}.h5'
checkpoint = ModelCheckpoint(filepath, monitor='val_categorical_accuracy', verbose=1, save_best_only=True, save_weights_only=False, mode='auto', save_freq='epoch')
# ReduceLROnplateau
LR = ReduceLROnPlateau(monitor='val_loss', factor=0.5, patience=2, cooldown=1, verbose=1)
callbacks_list = [checkpoint, LR]
# -
# ## <font color = blue> Set the steps per epoch for both training and validation </font>
# +
if (num_train_sequences%batch_size) == 0:
steps_per_epoch = int(num_train_sequences/batch_size)
else:
steps_per_epoch = (num_train_sequences//batch_size) + 1
if (num_val_sequences%batch_size) == 0:
validation_steps = int(num_val_sequences/batch_size)
else:
validation_steps = (num_val_sequences//batch_size) + 1
# -
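# The two if/else blocks above are just ceiling division: one extra, partial step when the sequence count does not divide evenly by the batch size. `math.ceil` expresses the same thing:

```python
import math

# Number of batches needed to cover all sequences (ceiling division).
def steps_for(num_sequences, batch_size):
    return math.ceil(num_sequences / batch_size)

print(steps_for(100, 10), steps_for(101, 10))  # 10 11
```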
H5 = model.fit(train_generator, steps_per_epoch=steps_per_epoch, epochs=num_epochs, verbose=1,
callbacks=callbacks_list, validation_data=val_generator,
validation_steps=validation_steps, class_weight=None, workers=-1, initial_epoch=0)
# ## <font color = blue> Plot the model training and validation accuracy & loss over 25 epochs </font>
# +
# plot the training loss and accuracy
N = num_epochs
plt.style.use("ggplot")
plt.figure(figsize=(15,5))
# summarize history for Accuracy
plt.subplot(121)
plt.plot(np.arange(1, N+1), H5.history["categorical_accuracy"], label="train_acc")
plt.plot(np.arange(1, N+1), H5.history["val_categorical_accuracy"], label="val_acc")
plt.title("MobileNet V2:Training and Validation Accuracy on Dataset")
plt.xlabel("Epoch #")
plt.ylabel("Accuracy")
plt.legend(loc="lower right")
# summarize history for loss
plt.subplot(122)
plt.plot(np.arange(1, N+1), H5.history['loss'], label="train_loss")
plt.plot(np.arange(1, N+1), H5.history['val_loss'], label="validation_loss")
plt.title("MobileNet V2:Training and Validation Loss on Dataset")
plt.xlabel('Epoch #')
plt.ylabel('Loss')
plt.legend(loc='upper right')
plt.show()
# -
# ## <font color = blue> Create the test data video generator </font>
def test_video_generator(source_path, folder_list, batch_size):
print( 'Source path = ', source_path, '; batch size =', batch_size)
# Read first 10 frames
img_idx = [x for x in range(0,10) ] #create a list of image numbers you want to use for a particular video
# Read next 10 frames
img_idx_2 = [y for y in range(10,20) ]
while True:
t = np.random.permutation(folder_list)
num_batches = len(folder_list)*2 // batch_size # calculate the number of batches
for batch in range(num_batches): # we iterate over the number of batches
# We will use image size of (224 x 224) with 3 channels
batch_data = np.zeros((batch_size,len(img_idx),224,224,3)) # (batch, 10 frames per video, 224 x 224 image size, 3 RGB channels)
batch_labels = np.zeros((batch_size,10)) # batch_labels is the one hot representation of the output
for folder in range(batch_size//2): # iterate over the batch_size
imgs = os.listdir(source_path+'/'+ t[folder + (batch*batch_size//2)].split(';')[0]) # read all the images in the folder
for idx,item in enumerate(img_idx): # Iterate over the frames/images of a folder to read them in
img = image.load_img(source_path+'/'+ t[folder + (batch*batch_size//2)].strip().split(';')[0]+'/'+imgs[item], target_size=(224, 224))
img_array = image.img_to_array(img)
#Pre-process using mobilenetv2 pre-processor
image_norm = tf.keras.applications.mobilenet_v2.preprocess_input(img_array)
# Batch of 3 channel images normalized and fed into the batch_data array
batch_data[folder*2,idx,:,:,0] = image_norm[:,:,0] #normalise and feed in the image
batch_data[folder*2,idx,:,:,1] = image_norm[:,:,1] #normalise and feed in the image
batch_data[folder*2,idx,:,:,2] = image_norm[:,:,2] #normalise and feed in the image
batch_labels[folder*2, int(t[folder + (batch*batch_size//2)].strip().split(';')[1])] = 1
for idx,item in enumerate(img_idx_2): # Iterate over the frames/images of a folder to read them in
img = image.load_img(source_path+'/'+ t[folder + (batch*batch_size//2)].strip().split(';')[0]+'/'+imgs[item], target_size=(224, 224))
img_array = image.img_to_array(img)
#Pre-process using mobilenetv2 pre-processor
image_norm = tf.keras.applications.mobilenet_v2.preprocess_input(img_array)
# Batch of 3 channel images normalized and fed into the batch_data array
batch_data[folder*2 + 1,idx,:,:,0] = image_norm[:,:,0] #normalise and feed in the image
batch_data[folder*2 + 1,idx,:,:,1] = image_norm[:,:,1] #normalise and feed in the image
batch_data[folder*2 + 1,idx,:,:,2] = image_norm[:,:,2] #normalise and feed in the image
batch_labels[folder*2 + 1, int(t[folder + (batch*batch_size//2)].strip().split(';')[1])] = 1
yield batch_data, batch_labels # yield the batch_data and the batch_labels; remember what yield does
# write the code for the remaining data points which are left after full batches
batch_size_rem = (len(folder_list)*2 % batch_size)
if batch_size_rem != 0:
batch_data = np.zeros((batch_size_rem,len(img_idx),224,224,3))
batch_labels = np.zeros((batch_size_rem,10))
for folder in range(batch_size_rem//2):
imgs = os.listdir(source_path+'/'+ t[folder + (num_batches*batch_size//2)].split(';')[0]) # read all the images in the folder
# For each video, iterate through the images
for idx,item in enumerate(img_idx): # Iterate over the frames/images of a folder to read them in
img = image.load_img(source_path+'/'+ t[folder + (num_batches*batch_size//2)].strip().split(';')[0]+'/'+imgs[item], target_size=(224, 224))
img_array = image.img_to_array(img)
#Pre-process using mobilenetv2 pre-processor
image_norm = tf.keras.applications.mobilenet_v2.preprocess_input(img_array)
# Batch of 3 channel images normalized and fed into the batch_data array
batch_data[folder*2,idx,:,:,0] = image_norm[:,:,0] #normalise and feed in the image
batch_data[folder*2,idx,:,:,1] = image_norm[:,:,1] #normalise and feed in the image
batch_data[folder*2,idx,:,:,2] = image_norm[:,:,2] #normalise and feed in the image
batch_labels[folder*2, int(t[folder + (num_batches*batch_size//2)].strip().split(';')[1])] = 1
for idx,item in enumerate(img_idx_2): # Iterate over the frames/images of a folder to read them in
img = image.load_img(source_path+'/'+ t[folder + (num_batches*batch_size//2)].strip().split(';')[0]+'/'+imgs[item], target_size=(224, 224))
img_array = image.img_to_array(img)
#Pre-process using mobilenetv2 pre-processor
image_norm = tf.keras.applications.mobilenet_v2.preprocess_input(img_array)
# Batch of 3 channel images normalized and fed into the batch_data array
batch_data[folder*2 + 1,idx,:,:,0] = image_norm[:,:,0] #normalise and feed in the image
batch_data[folder*2 + 1,idx,:,:,1] = image_norm[:,:,1] #normalise and feed in the image
batch_data[folder*2 + 1,idx,:,:,2] = image_norm[:,:,2] #normalise and feed in the image
batch_labels[folder*2 + 1, int(t[folder + (num_batches*batch_size//2)].strip().split(';')[1])] = 1
yield batch_data, batch_labels # yield the batch_data and the batch_labels; remember what yield does
# ## <font color = blue> Start loading the best model ".h5" file and evaluate the model performance on test data </font>
# +
#######
model_name = 'model_init_2021-02-1605_09_47.319278/model-00020-0.01655-0.99655-1.01868-0.85000.h5'
#######
test_doc = open('Project_Data/test_cam_1_2_t1_mv2.csv').readlines()
test_path = 'Project_Data/test_cam_1_2_t1_mv2'
num_test_sequences = len(test_doc)*2
print ('# Testing sequences =', num_test_sequences)
test_datagen = test_video_generator(test_path, test_doc, batch_size)
model = load_model(model_name)
print("Model loaded.")
model_func = Model(inputs=[model.input], outputs=[model.output])
acc = 0
num_batches = int(num_test_sequences/batch_size)
print('Number of batches =',num_batches)
# -
# ## <font color = blue> Start making batch predictions on the test data </font>
# +
acc = 0
actual_list = []
pred_list = []
for i in range(num_batches):
x,true_labels = test_datagen.__next__()
print ("shape of x:", x.shape, "and shape of true_labels:", true_labels.shape)
pred_idx = np.argmax(model_func.predict_on_batch(x), axis=1)
actual_list.append(np.where(true_labels==1)[1])
pred_list.append(pred_idx)
for j,k in enumerate(pred_idx):
if true_labels[j,k] == 1:
acc += 1
if (num_test_sequences%batch_size) != 0:
x,true_labels = test_datagen.__next__()
print ("shape of x:", x.shape, "and shape of true_labels:", true_labels.shape)
pred_idx = np.argmax(model_func.predict_on_batch(x), axis=1)
actual_list.append(np.where(true_labels==1)[1])
pred_list.append(pred_idx)
for j,k in enumerate(pred_idx):
if true_labels[j,k] == 1:
acc += 1
# -
# ## <font color = blue> Check Test Accuracy </font>
print('Test Accuracy =', round((acc/num_test_sequences)*100,2),'%')
# ## <font color = blue> Check Test F1-Score </font>
# +
test_weighted_f1 = f1_score(
np.concatenate(actual_list),
np.concatenate(pred_list), average='weighted'
)
print("Weighted F1-score on test data = ", round(test_weighted_f1*100,2),'%')
# +
test_micro_f1 = f1_score(
np.concatenate(actual_list),
np.concatenate(pred_list), average='micro'
)
print("Micro F1-score on test data = ", round(test_micro_f1*100,2),'%')
# +
test_macro_f1 = f1_score(
np.concatenate(actual_list),
np.concatenate(pred_list), average='macro'
)
print("Macro F1-score on test data = ", round(test_macro_f1*100,2),'%')
# -
# ## <font color = blue> Check Test Precision and Recall Scores </font>
# +
test_precision_score = precision_score(
np.concatenate(actual_list),
np.concatenate(pred_list), average='macro'
)
print("Precision score on test data = ", round(test_precision_score*100,2),'%')
# +
test_recall_score = recall_score(
np.concatenate(actual_list),
np.concatenate(pred_list), average='macro'
)
print("Recall score on test data = ", round(test_recall_score*100,2),'%')
# -
# ## <font color = blue> Create Confusion Matrix of 10-class accuracy </font>
# +
cf_matrix = confusion_matrix(
np.concatenate(actual_list),
np.concatenate(pred_list)
)
categories= [ 'Drive_Safe','Text_Right','Talk_Right','Text_Left','Talk_Left','Adjust_Radio','Drink','Reach_Behind','Hair_Makeup','Talk_Passenger' ]
sns.set(rc={'figure.figsize':(12,8)})
sns.heatmap(cf_matrix, annot=True, linewidths=.5, cmap="YlGnBu",cbar=False,xticklabels=categories,yticklabels=categories)
plt.ylabel("Actual Label")
plt.xlabel("Predicted Label")
plt.show()
# -
# ## <font color = Blue> Display accuracy percentages in confusion matrix </font>
sns.set(rc={'figure.figsize':(12,8)})
sns.heatmap(cf_matrix / np.sum(cf_matrix, axis=1).reshape(10, -1), annot=True,
fmt='.2%', linewidths=.5, cmap="YlGnBu",cbar=False,xticklabels=categories,yticklabels=categories)
plt.ylabel("Actual Label")
plt.xlabel("Predicted Label")
plt.show()
# # <font color = blue> End of Notebook </font>
# Source notebook: Distraction_Detection_MobileNetV2_LSTM_Main.ipynb
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # Breast Cancer Proteome
#
# Data
# https://www.kaggle.com/piotrgrabo/breastcancerproteomes
#
#
# ## Description
#
# ### Context:
# This data set contains published iTRAQ proteome profiling of 77 breast cancer samples generated by the Clinical Proteomic Tumor Analysis Consortium (NCI/NIH). It contains expression values for ~12,000 proteins for each sample, with missing values present when a given protein could not be quantified in a given sample.
#
# ### Content:
#
# #### File: 77_cancer_proteomes_CPTAC_itraq.csv
#
# - RefSeq_accession_number: RefSeq protein ID (each protein has a unique ID in a RefSeq database)
# - gene_symbol: a symbol unique to each gene (every protein is encoded by some gene)
# - gene_name: a full name of that gene
# - Remaining columns:
# - each column is a person indicated by ID number
# - __log2 iTRAQ ratios for each sample__ (protein expression data, most important!), three last columns are from healthy individuals
#
# #### File: clinical_data_breast_cancer.csv
#
# First column "Complete TCGA ID" lists 105 people (note: 28 were dropped)
# - use to match the sample IDs in the cancer proteomes file (see example script).
# All other columns have self-explanatory names; they contain data about the cancer classification of a given sample using different methods. The 'PAM50 mRNA' classification is used in the example script.
#
# #### File: PAM50_proteins.csv
#
# Contains the list of 100 genes / proteins used by the PAM50 classification system.
# - The column RefSeqProteinID contains the protein IDs that can be matched with the IDs in the cancer proteomes data set.
#
# ### Past Research:
# The original study: http://www.nature.com/nature/journal/v534/n7605/full/nature18003.html (paywall warning)
#
# In brief: the data were used to assess how the mutations in the DNA are affecting the protein expression landscape in breast cancer. Genes in our DNA are first transcribed into RNA molecules which then are translated into proteins. Changing the information content of DNA has impact on the behavior of the proteome, which is the main functional unit of cells, taking care of cell division, DNA repair, enzymatic reactions and signaling etc. They performed K-means clustering on the protein data to divide the breast cancer patients into sub-types, each having unique protein expression signature. They found that the best clustering was achieved using 3 clusters (original PAM50 gene set yields four different subtypes using RNA data).
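# As a rough illustration of the K-means idea used in the study (grouping samples by expression signature), here is a toy one-dimensional k-means on made-up log2-ratio-like values; this is a sketch of the algorithm only, not the paper's analysis or the CPTAC data:

```python
# Toy 1-D k-means: assign each value to its nearest center, then move each
# center to the mean of its cluster, and repeat.
def kmeans_1d(values, k, iters=20):
    # crude deterministic init: spread initial centers over the sorted data
    centers = sorted(values)[::max(1, len(values) // k)][:k]
    for _ in range(iters):
        clusters = [[] for _ in range(k)]
        for v in values:
            i = min(range(k), key=lambda c: abs(v - centers[c]))
            clusters[i].append(v)
        centers = [sum(c) / len(c) if c else centers[i]
                   for i, c in enumerate(clusters)]
    return centers, clusters

expr = [-2.1, -1.9, -2.0, 0.1, 0.0, 0.2, 1.8, 2.2, 2.0]  # hypothetical values
centers, clusters = kmeans_1d(expr, k=3)
print(sorted(round(c, 2) for c in centers))  # [-2.0, 0.1, 2.0]
```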
#
# ### Inspiration:
#
# This is an interesting study and I myself wanted to use this breast cancer proteome data set for other types of analyses using machine learning that I am performing as a part of my PhD. However, I thought that the Kaggle community (or at least the part with biomedical interests) would enjoy playing with it. I added a simple K-means clustering example for the data with some comments, using the same approach as the original paper. One thing is that there is a panel of genes, the PAM50, which is used to classify breast cancers into subtypes. This panel was originally based on RNA expression data, which is (in my opinion) not as robust as measuring mRNA's final product, the protein. Perhaps using this data set, someone could find a different set of proteins (they all have unique NP_/XP_ identifiers) that would divide the data set even more robustly? Perhaps into a higher number of clusters with very distinct protein expression signatures?
#
# ### Example K-means analysis script:
# http://pastebin.com/A0Wj41DP
# - uses K-means cluster (kajot, Berlin)
# My Notes
# - This data was generated by the Clinical Proteomic Tumor Analysis Consortium (CPTAC) (NCI/NIH)
# - N=77 patients
# - this is a very small sample size
# - with flu I had 45 and could just about tell the difference between 15 patients and 30 patients with ~7 factors
# - rule of thumb, 10-fold more samples than factors, so only use 7 factors if analyzing this data
# - 100 proteins detected by iTRAQ proteome profiling
# - 12,500 proteins detected with relative expression values for each sample
# - when a protein was not detected there will be NaN
# - what are these values normalized to? how are there negatives?
#
# - What is PAM50 classification?
# - there are Basal-like, HER2-enriched, Luminal A, & Luminal B classes
# - PAM50 only uses 100 proteins
# - PAM50 was based on mRNA
#
#
# - I should run an unsupervised clustering on this before I look at the types of proteins that were detected
#
#
# - What are the following columns
# - Converted stage
# -
# - ER status*
# - estrogen receptor
# - PR status*
# - progesterone receptor
# - HER2*
# - A central goal in breast cancer research has been the identification of druggable kinases beyond HER2
# -
# - Tumor
#         - The variable Tumor tells something about the size of the tumor and whether it is growing into the chest wall or skin. For 15 individuals the breast tumor was at stage T1 when diagnosed, for 65 at stage T2, for 19 at stage T3, and for 6 at stage T4.
# - Tumor: T other vs T1
# - Node: 0-3 & pos/neg
#         - The variable Node indicates the degree of spread to regional lymph nodes. For 53 individuals, tumor cells are absent from regional lymph nodes. In 29 individuals, regional lymph node metastasis is present. In 14 individuals, metastases are present in 4 to 9 lymph nodes. And in 9 individuals, metastases are present in more than 9 lymph nodes.
# - AJCC Stage & converted stage
# - Staging determines whether the cancer has spread & if so how far
# - this is determined by using Tumor, node, & metastasis
# - more information: http://www.cancer.org/cancer/breastcancer/detailedguide/breast-cancer-staging
# - OS event & Time
# - RPPA clusters*
# - CN clusters
#
# - Delete patients
# - Take out the two people with metastasis there are not enough to do anything with
# -
#
# - What can I do with age?
# - what is the spread of age?
# - drop a few patients who are outside of a 10-20 year range?
#
# - Is there a reason to drop deceased (i.e. cancer was further progressed when sample was taken?)
#
# - Ignore columns
# - Days to Date of Last Contact because it does not give health related info (individuals seek medical help at different stages, stage at time of sampling is more important)
#
#
# Questions to answer using the data
# - which features (proteins) drive any found classification? (this is useful because it can elucidate potential therapeutical targets or shed light into novel cancer biology)
# - Which features are at the root of correlation
# - perform Principal Component Analysis (PCA) prior to performing hierarchical clustering in order to reduce the dimensionality of our data
# - requires no missing values
# - omit patients with missing values or proteins with missing values?
# - look at distribution of missing values
# - are there patients with loads missing or proteins with loads missing?
#
# #### Kernels with good lessons
#
# - Hierarchical Clustering (<NAME>, Detroit, Software engineer @ Plex Systems)
#     - https://www.kaggle.com/anthonyleo/hierarchical-clustering
#     - objective: to analyze the proteome data set to determine if there is any underlying structure, and to practice implementing Principal Component Analysis and Hierarchical Clustering
#     - use the R prcomp function to perform Principal Component Analysis
# - determine the number of principal components to use
# - Each principal component captures a percentage of the variance within our data, and we can see exactly how much variance each principal component accounts for by creating a scree plot
# - 8 principal components (PC) account for 52.6% of the variance, which is rather low
# - pass the principal component vectors into the hclust function
# - focus on 4 clusters
# - plot PC1 v PC2
# - Conclude: It appears that there is not a lot of underlying structure to this dataset. Since our principal component analysis was not able to effectively reduce the dimensionality of our data, we were not able to create well defined clusters using hierarchical clustering, and it is likely that most clustering methods would also fail on this dataset due to the curse of dimensionality.
# - other pages by <NAME>
# - https://www.kaggle.com/anthonyleo/linear-regression
# - https://www.kaggle.com/anthonyleo/hypothesis-testing
#
# - Breast Cancer Sub-Characterization w/ Genomic Data (<NAME>, Antarctica)
# - https://www.kaggle.com/jasontsmith2718/breast-cancer-sub-characterization-w-genomic-data
# - Linear Discriminant Analysis (LDA) was used to reduce dimensionality
#     - graph dimension x by dimension y (similar to Hierarchical clustering above)
# - decision tree & x-fold cross validation
# - ROC curves: A receiver operating characteristic curve, i.e. ROC curve, is a graphical plot that illustrates the diagnostic ability of a binary classifier system as its discrimination threshold is varied
# - higher Area Under an ROC curve = better
# - what is a multi-class AUROC metric?
# - see sklearn example at 'http://scikit-learn.org/stable/auto_examples/model_selection/plot_roc.html'
#
# - Data Visualization of Molecular Subtype (<NAME>, Data science student, Penn)
# - https://www.kaggle.com/jwlzdh1/data-visualization-of-molecular-subtype
# - This guy even has intro slides of prevalence (very impressive!)
#     - It is common for genetic datasets to exhibit cases where p, the number of predictors, is larger than n, the number of observations. When p > n there is no longer a unique least-squares coefficient estimate, and high variance and overfitting become major concerns. Thus simple, highly regularized approaches often become the methods of choice. There are many techniques (__subset selection__, __Lasso and Ridge__, and __dimension reduction__) to exclude irrelevant variables from a regression or a dataset.
# - Use Lasso (R) to shrink dataset from 12553 predictors to 57 relevant variables
# - Look for correlations between the 57 protein variables and specific tumor types (but what if tumor types aren't good classifications to determine root cause of disease or cures?)
#     - Plot variable importance to identify the 5 most "influential" genes that will be used for visualization
# - Graphs
# - Pie chart to show 4 tumor types: Luminal A/B, Basal, HER2
# - ...
# - Basal can be differentiated from the other 3 by Mical & HNF3g (hepatocyte nuclear factor)
# - box plot & violin plot of mical by the 4 cancer types
#
#
# - Ensemble Clustering (<NAME>, Italy)
# - https://www.kaggle.com/noise42/ensemble-clustering
# - good at inspecting information in spreadsheets
# - create random set of proteins
# - nice heatmap
# - ensemble clustering
# - import itertools
# - from sklearn.cluster import DBSCAN
# - visualize with K==3
# - construct a cooccurrence (consensus) matrix
# - sns.clustermap(MatrixData)
# - from scipy.cluster.hierarchy import dendrogram
# - used pre-defined categories of tumors
#
#
# - IntroUnsupervisedAnalysis (srhoades10, Pharmacology PhD, Penn)
# - https://www.kaggle.com/srhoades10/introunsupervisedanalysis
# - histogram graph of protein intensity distribution
# - 3D graph of principal components
# - #Plot the clusters and the observations with respect to the cluster boundaries
# - #Some of these commands are adapted from the scikit-learn example for KMeans
# - #... which is found at http://scikit-learn.org/stable/auto_examples/cluster/plot_kmeans_digits.html#sphx-glr-auto-examples-cluster-plot-kmeans-digits-py (thanks!)
# - 5D graph of principal components
# - very few comments, no analysis or conclusion
#
#
# - Breast cancer markers using simple tools ( Efejiro Ashano, Bioinformatics @ NIBDA, Lagos Nigeria)
#     - aims to find early markers of breast cancer in the data
# - compares to the three healthy people, which is no good, because there are only 3 of them! (I will try to differentiate between types of cancers)
# - box plots
#
#
# - Clustering and analysis on Breast Cancer (ML_Enthusiast)
# - graphs miRNA vs methylation clustering for different stages
#
# - Patient characteristics (Anika)
# - https://www.kaggle.com/sinaasappel/patient-characteristics
# - good analysis!
# - delete the males!!!
# - focus on age 40-70
# - focus on tumor T1 & T2, smaller tumors (<2-5cm)
# - focus on node N0, N1 (1-3), & N2(4-9) (spread to 0-9 lymphnodes)
# - ignore staging because it is determined using tumor & node
#
#
# - Clustering proteins (<NAME>)
# - https://www.kaggle.com/petebleackley/clustering-proteins
# - good list of sklearn programs to import
# - identifies 8 protein clusters
# - def ReduceDimensions(data,keep_variance=0.8,tolerance=1.0E-12,max_iter=1024)
# - Previous work on this dataset has involved clustering patients according to their protein activity. For this analysis, I decided to cluster proteins according to their activity in different patients, and then use the activity of the different clusters to predict clinical features.
# - predict Tumor and Node by proteins!
# - First, the dimensionality of dataset is reduced, and then a hierarchical cluster is fitted to the principal components, which is visualised as a dendrogram to select the appropriate number of components
#
#
# - Cancer Proteomes vs Clinical Data Analysis (mbschultz1)
# - https://www.kaggle.com/mbschultz1/cancer-proteomes-vs-clinical-data-analysis
# - similar analysis to Anika (above)
# - the rest is bad
#
#
# - Kmeans-3d-plot (AkhilPrakash)
# - https://www.kaggle.com/prakashakhil/kmeans-3d-plot
# - a 3D plot
#
# #### Plan of attack
#
# Question:
#
# Do clusters determined by using all proteins present in larger & further spread tumors lead to good definition of clusters for smaller & less spread tumors?
# - Then can we compare expression levels of each protein expressed in low-T/N tumors to expression levels in high-T/N tumors to show which kinases will become more over expressed?
# - compute change in expression levels between low & high-TN tumors of same cluster,
# - sort proteins by amount of change
# - greater change means greater potential for increased expression and therefore makes those proteins more likely to be good drug targets
# - make a list of top 10 targets
#
# Can clustering by ALL proteins predict T & N of patients better than clustering by the 35 PAM50 proteins?
#
#
# (1) Drop NaNs
# - drop males
# - drop healthy individuals
# - drop patients who are too old or too young; graph the age distribution, but probably focus on ages 40-70
# - Look at proteins and drop proteins that have a large number of missing values
# - Look at samples and drop samples that have a large number of missing values
# - Look at distribution of missing values and drop more proteins and/or samples (or fill these in with average inferred values)
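# The missing-value audit sketched in step (1) can be illustrated on a toy table, with None standing in for NaN (protein IDs and values below are hypothetical, not the real data):

```python
# Count missing entries per protein (row) and per patient (column),
# then drop proteins missing in more than half the patients.
table = {  # protein -> expression per patient
    'NP_001': [1.2, None, 0.8, None],
    'NP_002': [0.3, 0.1, -0.5, 0.2],
    'NP_003': [None, None, None, 0.9],
}

missing_per_protein = {p: sum(v is None for v in row) for p, row in table.items()}
n_patients = len(next(iter(table.values())))
missing_per_patient = [sum(row[j] is None for row in table.values())
                       for j in range(n_patients)]

kept = {p: row for p, row in table.items()
        if missing_per_protein[p] <= n_patients // 2}
print(sorted(kept))  # ['NP_001', 'NP_002'] -> NP_003 is dropped
```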
#
#
# (2) Use correlation between proteins expressed at later stage of disease (characterized by N, O) and treatment groups (ER, PR, HER2) to predict correlation between earlier disease and treatment group
# - maybe: Drop top 8-ish proteins with most variance - use data to predict which one will be
#
#
# (3) reduce dimensionality
# - "dimension reduction"
# - "ReduceDimensions"
# by/or select a subset of proteins to base the analysis on
# - Lasso and Ridge
# - "Subset selection"
# then cluster and validate:
# - perform Principal Component Analysis (PCA) or Linear Discriminant Analysis (LDA)
# - perform hierarchical clustering
# - decision tree & x-fold cross validation
# - ROC curves
# - "Ensemble Clustering"
# - k-means clustering
#
# __Need to look at variance!!!__ (chromosome 5 genes vary to a much larger extent than others)
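# A quick way to look at that variance is to rank proteins by the spread of their expression values across samples; a stdlib sketch on hypothetical values:

```python
# Rank proteins by population variance of their (made-up) expression values.
from statistics import pvariance

expression = {
    'NP_chr5_a': [3.1, -2.8, 2.9, -3.0],  # high-variance, chr5-like
    'NP_house':  [0.1, 0.0, 0.2, 0.1],    # housekeeping-like, low variance
    'NP_mid':    [1.0, -0.5, 0.7, -0.9],
}

ranked = sorted(expression, key=lambda p: pvariance(expression[p]), reverse=True)
print(ranked)  # ['NP_chr5_a', 'NP_mid', 'NP_house']
```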
#
# - review Kernels below - do they all use scikit-learn?
# - Clustering proteins (<NAME>)
# - https://www.kaggle.com/petebleackley/clustering-proteins
#     - hierarchical clustering into 8 groups
# - beautifully colored bar graphs (need better ordering)
# - Data Visualization of Molecular Subtype (<NAME>)
# - https://www.kaggle.com/jwlzdh1/data-visualization-of-molecular-subtype
#     - Hierarchical Clustering (<NAME>, Detroit, Software engineer @ Plex Systems)
# - https://www.kaggle.com/anthonyleo/hierarchical-clustering
# - Breast Cancer Sub-Characterization w/ Genomic Data (<NAME>, Antarctica)
# - https://www.kaggle.com/jasontsmith2718/breast-cancer-sub-characterization-w-genomic-data
#
#
#
# (4) Show correlation between PAM50(35) clusters and T&N
# Show correlation between all-protein clusters and T&N
# (see one of the kernels with nice bar graphs)
# which clusters predict low T&N, which clusters predict high T&N
#
# pick out new kinases to target in non HER2 mutants
#
# XPredict
# - AJCC stage/score (includes tumor size (T) and spread to LNs (N))
# - ER, PR, Her2, basal...
# - delete the top 8 then determine which clusters determine the 8
# - identify small molecule drug targets: proteins that have the string 'ase'
# - identify antibody targets: proteins expressed on the surface of cells
# - identify CRISPR targets: proteins that are excessively expressed that could be targeted for knockdown
#
#
#
# +
import numpy as np
# import numpy.linalg
import pandas as pd
import matplotlib  # optional: importing matplotlib.pyplot below also imports the matplotlib package
import matplotlib.pyplot as plt
# %matplotlib inline
import seaborn as sns
# import re # could import with a comma ', re' after sklearn
# import itertools
# import sklearn
# from sklearn import metrics
# from sklearn import preprocessing
# from sklearn.preprocessing import Imputer # Beware: imputer.transform returns a numpy array, not a dataframe
# from sklearn.preprocessing import StandardScaler
# from sklearn.preprocessing import LabelEncoder
# from sklearn.decomposition import PCA
# from sklearn.cluster import KMeans
# from sklearn.cluster import DBSCAN
# from sklearn.cluster import SpectralClustering
# from sklearn.cluster import MeanShift
# from sklearn.metrics import adjusted_rand_score as rn
# from sklearn.metrics import adjusted_mutual_info_score as mi
# from scipy.cluster.hierarchy import dendrogram # do I need to import scipy?
# from collections import defaultdict
# from collections import OrderedDict
# from mpl_toolkits.mplot3d import Axes3D
# -
# import files as dataframes
ProtExpressDF = pd.read_csv('origdata/77_cancer_proteomes_CPTAC_itraq.csv', header = 0, low_memory = False)
ClinMetaDF = pd.read_csv('origdata/clinical_data_breast_cancer.csv', header = 0, low_memory = False)
PAM50DF = pd.read_csv('origdata/PAM50_proteins.csv', header = 0, low_memory = False)
ProtExpressDF
# change column name
# helped: https://stackoverflow.com/questions/11346283/renaming-columns-in-pandas
ProtExpressDF.rename(columns={'RefSeq_accession_number': 'RefSeq_accession'}, inplace=True)
ProtExpressDF.head(3)
PAM50DF.head(3)
# change column name
PAM50DF.rename(columns={'RefSeqProteinID': 'RefSeq_accession'}, inplace=True)
PAM50DF.head(1)
# count number of rows
# helped: https://stackoverflow.com/questions/15943769/how-do-i-get-the-row-count-of-a-pandas-dataframe
# note df.count()[0] will only return the count of non-NA/NaN rows
PAM50DF.shape[0]
# +
# list3TRUEs = [TRUE]*3
# NameError: Python's boolean literal is True (capitalized, no quotes), not TRUE
# -
# make a list with 100 True values
# helped: https://stackoverflow.com/questions/3459098/create-list-of-single-item-repeated-n-times-in-python
list5TRUEs = [True]*5
list5TRUEs
list100TRUEs = [True]*100
# make a series
# helped: https://pandas.pydata.org/pandas-docs/stable/dsintro.html
# note .Series must be capitalized!
TRUEseries = pd.Series(list100TRUEs)
TRUEseries.head()
# append series to PAM50DF with header = PAM50included
PAM50DF['PAM50included'] = TRUEseries
PAM50DF.head(3)
PAM50DF.tail(3)
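# The `PAM50included` flag added above is a membership marker; the eventual goal is to flag which rows of ProtExpressDF appear in the PAM50 list via the shared `RefSeq_accession` IDs. A plain-Python sketch of that membership test, with hypothetical IDs:

```python
# Membership flag via a set lookup, the same idea as the PAM50included column.
pam50_ids = {'NP_000917', 'NP_001116', 'NP_004439'}       # hypothetical subset
expression_ids = ['NP_000917', 'NP_999999', 'NP_004439']  # hypothetical rows

pam50_included = [acc in pam50_ids for acc in expression_ids]
print(pam50_included)  # [True, False, True]
```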
|
BreastCancerProteome_20180813_start.ipynb
|
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# +
from PyQt5 import QtWidgets, QtGui, QtCore
from mainwindows import Ui_MainWindow
import sys
class mywindow(QtWidgets.QMainWindow):
def __init__(self):
super(mywindow, self).__init__()
self.ui = Ui_MainWindow()
self.ui.setupUi(self)
# Change the font
self.ui.label.setFont(
QtGui.QFont('SanSerif', 30)
)
# Change the label geometry
self.ui.label.setGeometry(
QtCore.QRect(10, 10, 200, 200)
)
# Change the label text
self.ui.label.setText("Wow, look at you, silly!")
# Change the lineEdit text
self.ui.lineEdit.setText("Hey, how do you like that, Elon Musk???????")
# Set the maximum input length
self.ui.lineEdit_2.setMaxLength(10)
# Password input mode (characters are masked)
self.ui.lineEdit_3.setEchoMode(QtWidgets.QLineEdit.Password)
# Read-only line; the text cannot be edited
self.ui.lineEdit_4.setReadOnly(True)
# Change the color of the entered text
self.ui.lineEdit_5.setStyleSheet("color: rgb(28, 43, 255);")
app = QtWidgets.QApplication([])
application = mywindow()
application.show()
sys.exit(app.exec())
# -
|
PyQt.ipynb
|
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# +
import random
import os
import shutil
import torch
import torch.nn as nn
import torch.nn.parallel
import torch.optim as optim
import torchvision.transforms as transforms
import torch.nn.functional as F
from torch.autograd import Variable
import numpy as np
import matplotlib.pyplot as plt
from torch.utils.data import Dataset, DataLoader
from torchvision import transforms, utils
import torchvision.datasets as dsets
import torchvision
from scipy.ndimage.filters import gaussian_filter
import PIL
from PIL import Image
random.seed(42)
# +
class resBlock(nn.Module):
def __init__(self, in_channels=64, out_channels=64, k=3, s=1, p=1):
super(resBlock, self).__init__()
self.conv1 = nn.Conv2d(in_channels, out_channels, k, stride=s, padding=p)
self.bn1 = nn.BatchNorm2d(out_channels)
self.conv2 = nn.Conv2d(out_channels, out_channels, k, stride=s, padding=p)
self.bn2 = nn.BatchNorm2d(out_channels)
def forward(self, x):
y = F.relu(self.bn1(self.conv1(x)))
return self.bn2(self.conv2(y)) + x
class resTransposeBlock(nn.Module):
def __init__(self, in_channels=64, out_channels=64, k=3, s=1, p=1):
super(resTransposeBlock, self).__init__()
self.conv1 = nn.ConvTranspose2d(in_channels, out_channels, k, stride=s, padding=p)
self.bn1 = nn.BatchNorm2d(out_channels)
self.conv2 = nn.ConvTranspose2d(out_channels, out_channels, k, stride=s, padding=p)
self.bn2 = nn.BatchNorm2d(out_channels)
def forward(self, x):
y = F.relu(self.bn1(self.conv1(x)))
return self.bn2(self.conv2(y)) + x
class VGG19_extractor(nn.Module):
def __init__(self, cnn):
super(VGG19_extractor, self).__init__()
self.features1 = nn.Sequential(*list(cnn.features.children())[:3])
self.features2 = nn.Sequential(*list(cnn.features.children())[:5])
self.features3 = nn.Sequential(*list(cnn.features.children())[:12])
def forward(self, x):
return self.features1(x), self.features2(x), self.features3(x)
# -
vgg19_exc = VGG19_extractor(torchvision.models.vgg19(pretrained=True))
vgg19_exc = vgg19_exc.cuda()
# ### Designing Encoder (E)
# +
class Encoder(nn.Module):
def __init__(self, n_res_blocks=5):
super(Encoder, self).__init__()
self.n_res_blocks = n_res_blocks
self.conv1 = nn.Conv2d(3, 64, 3, stride=2, padding=1)
for i in range(n_res_blocks):
self.add_module('residual_block_1' + str(i+1), resBlock(in_channels=64, out_channels=64, k=3, s=1, p=1))
self.conv2 = nn.Conv2d(64, 32, 3, stride=2, padding=1)
for i in range(n_res_blocks):
self.add_module('residual_block_2' + str(i+1), resBlock(in_channels=32, out_channels=32, k=3, s=1, p=1))
self.conv3 = nn.Conv2d(32, 8, 3, stride=1, padding=1)
for i in range(n_res_blocks):
self.add_module('residual_block_3' + str(i+1), resBlock(in_channels=8, out_channels=8, k=3, s=1, p=1))
self.conv4 = nn.Conv2d(8, 1, 3, stride=1, padding=1)
def forward(self, x):
y = F.relu(self.conv1(x))
for i in range(self.n_res_blocks):
y = F.relu(self.__getattr__('residual_block_1'+str(i+1))(y))
y = F.relu(self.conv2(y))
for i in range(self.n_res_blocks):
y = F.relu(self.__getattr__('residual_block_2'+str(i+1))(y))
y = F.relu(self.conv3(y))
for i in range(self.n_res_blocks):
y = F.relu(self.__getattr__('residual_block_3'+str(i+1))(y))
y = self.conv4(y)
return y
E1 = Encoder(n_res_blocks=10)
# -
# ### Designing Decoder (D)
# +
class Decoder(nn.Module):
def __init__(self, n_res_blocks=5):
super(Decoder, self).__init__()
self.n_res_blocks = n_res_blocks
self.conv1 = nn.ConvTranspose2d(1, 8, 3, stride=1, padding=1)
for i in range(n_res_blocks):
self.add_module('residual_block_1' + str(i+1), resTransposeBlock(in_channels=8, out_channels=8, k=3, s=1, p=1))
self.conv2 = nn.ConvTranspose2d(8, 32, 3, stride=1, padding=1)
for i in range(n_res_blocks):
self.add_module('residual_block_2' + str(i+1), resTransposeBlock(in_channels=32, out_channels=32, k=3, s=1, p=1))
self.conv3 = nn.ConvTranspose2d(32, 64, 3, stride=2, padding=1)
for i in range(n_res_blocks):
self.add_module('residual_block_3' + str(i+1), resTransposeBlock(in_channels=64, out_channels=64, k=3, s=1, p=1))
self.conv4 = nn.ConvTranspose2d(64, 3, 3, stride=2, padding=1)
def forward(self, x):
y = F.relu(self.conv1(x))
for i in range(self.n_res_blocks):
y = F.relu(self.__getattr__('residual_block_1'+str(i+1))(y))
y = F.relu(self.conv2(y))
for i in range(self.n_res_blocks):
y = F.relu(self.__getattr__('residual_block_2'+str(i+1))(y))
y = F.relu(self.conv3(y))
for i in range(self.n_res_blocks):
y = F.relu(self.__getattr__('residual_block_3'+str(i+1))(y))
y = self.conv4(y)
return y
D1 = Decoder(n_res_blocks=10)
# -
# ### Putting it in box, AE
class AE(nn.Module):
def __init__(self, encoder, decoder):
super(AE, self).__init__()
self.E = encoder
self.D = decoder
def forward(self, x):
h_enc = self.E(x)
# print('encoder out checking for nan ', np.isnan(h_enc.data.cpu()).any())
y = self.D(h_enc)
# print('decoder out checking for nan ', np.isnan(y.data.cpu()).any())
return y
A = AE(E1, D1)
A = A.cuda()
# ### Dataloading and stuff
# ##### The autoencoder accepts a 181x181 input and produces a 181x181 output; however, the bottleneck output, i.e. the encoder's output, is much smaller
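# The bottleneck size follows from the usual Conv2d size arithmetic, out = floor((n + 2p - k)/s) + 1. Tracing the Encoder's stride pattern (2, 2, 1, 1 for conv1..conv4, with size-preserving resBlocks in between) turns 181 into 46; this is a stdlib check of that arithmetic, not a torch run:

```python
# Conv2d output size for k=3, p=1 (as used throughout the Encoder):
# out = (n + 2*p - k) // s + 1
def conv_out(n, k=3, s=1, p=1):
    return (n + 2 * p - k) // s + 1

n = 181
for stride in (2, 2, 1, 1):    # strides of conv1..conv4
    n = conv_out(n, s=stride)  # resBlocks (k=3, s=1, p=1) preserve the size
print(n)  # 46 -> the bottleneck is 1 x 46 x 46
```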
# +
def mynorm2(x):
m1 = torch.min(x)
m2 = torch.max(x)
if m2-m1 < 1e-6:
return x-m1
else:
# return x-m1
return (x-m1)/(m2-m1)
mytransform2 = transforms.Compose(
[transforms.RandomCrop((181,181)),
# transforms.Lambda( lambda x : Image.fromarray(gaussian_filter(x, sigma=(10,10,0)) )),
# transforms.Resize((41,41)),
transforms.ToTensor(),
transforms.Lambda( lambda x : mynorm2(x) )])
# ])
trainset = dsets.ImageFolder(root='../sample_dataset/train/',transform=mytransform2)
trainloader = torch.utils.data.DataLoader(trainset, batch_size=4, shuffle=True, num_workers=2)
testset = dsets.ImageFolder(root='../sample_dataset/test/',transform=mytransform2)
testloader = torch.utils.data.DataLoader(testset, batch_size=4, shuffle=True, num_workers=2)
# functions to show an image
def imshow(img):
#img = img / 2 + 0.5
npimg = img.numpy()
plt.imshow(np.transpose(npimg, (1, 2, 0)))
def imshow2(img):
m1 = torch.min(img)
m2 = torch.max(img)
# img = img/m2
if m2-m1 < 1e-6:
img = img/m2
else:
img = (img-m1)/(m2-m1)
npimg = img.numpy()
plt.imshow(np.transpose(npimg, (1, 2, 0)))
# get some random training images
dataiter = iter(trainloader)
images, labels = next(dataiter) #all the images under the same 'unlabeled' folder
# print(labels)
# show images
print('a training batch looks like ...')
imshow(torchvision.utils.make_grid(images))
# -
# ### training thingy
def save_model(model, model_name):
try:
os.makedirs('../saved_models')
except OSError:
pass
torch.save(model.state_dict(), '../saved_models/'+model_name)
print('model saved at '+'../saved_models/'+model_name)
# dataloader = iter(trainloader)
testiter = iter(testloader)
testX, _ = next(testiter)
def eval_model(model):
testX, _ = next(testiter)
model.cpu()
X = testX
print('input looks like ...')
plt.figure()
imshow(torchvision.utils.make_grid(X))
X = Variable(X)
Y = model(X)
print('output looks like ...')
plt.figure()
imshow2(torchvision.utils.make_grid(Y.data.cpu()))
# +
def train_ae(model, rec_interval=2, disp_interval=20, eval_interval=1):
nepoch = 10
Criterion2 = nn.MSELoss()
Criterion1 = nn.L1Loss()
optimizer = optim.Adam(model.parameters(), lr=1e-5)
vgg_in_trf = transforms.Compose([
transforms.ToPILImage(),
transforms.Resize((224,224)),
transforms.ToTensor()
])
loss_track = []
loss_L2_track = []
loss_vl3_track = []
model.cuda()
for eph in range(nepoch):
dataloader = iter(trainloader)
print('starting epoch {} ...'.format(eph))
for i, (X, _) in enumerate(dataloader):
X = Variable(X).cuda()
optimizer.zero_grad()
reconX = model(X)
l2 = Criterion2(reconX, X)
# l1 = Criterion1(reconX, X)
X1 = torch.zeros(X.shape[0], X.shape[1], 224, 224)
reconX1 = torch.zeros(reconX.shape[0], reconX.shape[1], 224, 224)
batch_n = X.shape[0]
for bi in range(batch_n):
X1[bi,:,:,:] = vgg_in_trf(X[bi,:,:,:].data.cpu())
reconX1[bi,:,:,:] = vgg_in_trf(reconX[bi,:,:,:].data.cpu())
# print('yoyoyoy', X1.shape, reconX1.shape)
X1 = Variable(X1).cuda()
reconX1 = Variable(reconX1).cuda()
t1, t2, t3 = vgg19_exc(X1)
rt1, rt2, rt3 = vgg19_exc(reconX1)
# t1 = Variable(t1.data)
# rt1 = Variable(rt1.data)
# t2 = Variable(t2.data)
# rt2 = Variable(rt2.data)
# print('hooray', t3, rt3)
t3 = Variable(t3.data).cuda()
rt3 = Variable(rt3.data).cuda()
# print('did cuda ')
# vl1 = Criterion2(rt1, t1)
# vl2 = Criterion2(rt2, t2)
vl3 = Criterion2(rt3, t3)
reconTerm = 30*l2 + vl3
loss = reconTerm
loss.backward()
optimizer.step()
if i%rec_interval == 0:
loss_track.append(loss.data[0])
loss_L2_track.append(l2.data[0])
loss_vl3_track.append(vl3.data[0])
if i%disp_interval == 0:
print('epoch: {}, iter: {}, L2term: {}, vl3: {}, totalLoss: {}'.format(
eph, i, l2.data[0], vl3.data[0], loss.data[0]))
#saving the last model
save_model(model, 'camelyon16_AE_181_last.pth')
return loss_track, loss_L2_track, loss_vl3_track
# -
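# The objective in `train_ae` combines the pixel-space MSE and the VGG feature-space MSE with a fixed weight of 30 on the pixel term. As plain arithmetic (the numbers below are stand-ins, not real loss values):

```python
# reconTerm = 30*l2 + vl3 is a scalar-weighted sum of the two criteria;
# the weight trades pixel fidelity against feature-space fidelity.
def total_loss(l2, vl3, w=30):
    return w * l2 + vl3

print(round(total_loss(0.01, 0.5), 3))  # 0.8
```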
# #### Notes on training
# It seems that the combination of L1 and L2 loss is not helping, and that features from the deeper layers of VGG19 are more effective than features at the shallow levels
loss_track, loss_L2_track, loss_vl3_track = train_ae(A, disp_interval=50)
import pickle
def save_train_log(val_arr, model_name, fname):
try:
os.makedirs('../train_logs')
except OSError:
pass
try:
os.makedirs('../train_logs/'+model_name)
except OSError:
pass
filehandler = open('../train_logs/{}/{}.pkl'.format(model_name, fname),'wb')
pickle.dump(val_arr,filehandler)
filehandler.close()
print('log saved at '+'../train_logs/{}/{}.pkl'.format(model_name, fname))
loss_track = np.array(loss_track)
loss_L2_track = np.array(loss_L2_track)
loss_vl3_track = np.array(loss_vl3_track)
plt.plot(loss_track)
plt.plot(30*loss_L2_track)
plt.plot(loss_vl3_track)
save_train_log(loss_track, 'camelyon16_AE_181', 'loss_track')
save_train_log(loss_L2_track, 'camelyon16_AE_181', 'loss_L2_track')
save_train_log(loss_vl3_track, 'camelyon16_AE_181', 'loss_vl3_track')
testiter = iter(testloader)
testX, _ = next(testiter)
tx = A(Variable(testX).cuda())
tx.shape
eval_model(A)
# #### Encoded space is shown below, encoded space is 1X46X46
testX, _ = next(testiter)
plt.figure()
imshow(torchvision.utils.make_grid(testX))
Y1 = A.E(Variable(testX))
plt.figure()
imshow2(torchvision.utils.make_grid(Y1.data))
Z1 = A.D(Y1)
plt.figure()
imshow2(torchvision.utils.make_grid(Z1.data))
|
notebooks/AE_testbed.ipynb
|
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# + [markdown] _cell_guid="ac76a406-f2bd-4983-8aaa-9f35cc130bbf" _uuid="98dd4df3bde2484278a6bc3e76f4e2f3738fa917"
# Source: https://www.kaggle.com/ahassaine/pure-image-processing-lb-0-274/code
#
# -
# # Libraries and Global Parameters
# + _cell_guid="963e73bf-7e69-4a68-845f-e4a4202cfff2" _uuid="241c639d46630418b994f941241f75440a8f0f9e"
import os
import cv2
import pandas as pd
import matplotlib.pyplot as plt
import numpy as np
from skimage.io import imread, imshow, imread_collection, concatenate_images
from skimage.util import img_as_bool, img_as_uint, img_as_ubyte
from skimage.transform import resize
#import skimage
#import glob
import random
from random import randint #, shuffle
from skimage.morphology import label
from keras import regularizers
from keras.models import Model, load_model
from keras.optimizers import Adam, SGD, RMSprop
from keras.layers import Input, concatenate, Conv2D, MaxPooling2D, Activation, Dense, \
UpSampling2D, BatchNormalization, add, Dropout
from keras.callbacks import EarlyStopping, ModelCheckpoint, ReduceLROnPlateau, LearningRateScheduler
from keras import backend as K
from keras.losses import binary_crossentropy, sparse_categorical_crossentropy
model_checkpoint_file='meshnet_v4.h5'
# Root folders for test and training data
train_root = "./stage1_train"
test_root = "./stage1_test"
# Size we resize all images to
#image_size = (128,128)
img_height = 128
img_width = 128
import warnings
warnings.filterwarnings('ignore', category=UserWarning, module='skimage')
# + [markdown] _cell_guid="4ae021eb-5807-47b6-9e7d-51735cdfb314" _uuid="1b7f0851ca37aafdf1e3cb813f90d7f5eb417f11"
# # Preparing the Data
# +
## Import Training Data Images
train_dirs = os.listdir(train_root)
train_filenames=[os.path.join(train_root,file_id) + "/images/"+file_id+".png" for file_id in train_dirs]
# Convert to B&W inline
#train_images=[cv2.cvtColor(cv2.imread(imagefile),cv2.COLOR_BGR2GRAY) for imagefile in train_filenames]
train_images=[imread(imagefile,as_grey=True) for imagefile in train_filenames]
# Use this instead if you want color images
#train_images=[imread(imagefile) for imagefile in train_filenames]
# +
## Import Training Masks
# this takes longer than the training images because we have to
# combine a lot of mask files
# This function creates a single combined mask image
# when given a list of masks
# Probably a computationally faster way to do this...
def collapse_masks(mask_list):
for i, mask_file in enumerate(mask_list):
if i != 0:
# combine mask with previous mask in list
mask = np.maximum(mask, imread(os.path.join(train_root,mask_file)))
else:
# read first mask in
mask = imread(os.path.join(train_root,mask_file))
return mask
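# As the comment above suspects, there is a faster way: once the mask files are read,
# `np.maximum.reduce` collapses a whole list of masks in one vectorized call. A minimal
# sketch on synthetic arrays (the 4×4 masks below are stand-ins, not real data):

```python
import numpy as np

def collapse_masks_fast(masks):
    # elementwise max across the stack of per-nucleus masks
    return np.maximum.reduce(masks)

# synthetic 4x4 binary masks standing in for the per-nucleus mask files
m1 = np.array([[255, 0, 0, 0],
               [255, 0, 0, 0],
               [  0, 0, 0, 0],
               [  0, 0, 0, 0]], dtype=np.uint8)
m2 = np.array([[0, 0,   0, 0],
               [0, 0, 255, 0],
               [0, 0, 255, 0],
               [0, 0,   0, 0]], dtype=np.uint8)

combined = collapse_masks_fast([m1, m2])
```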
# Import all the masks
train_mask_dirs = [ os.path.join(path, 'masks') for path in os.listdir(train_root) ]
train_mask_files = [ [os.path.join(dir,file) for file in os.listdir(os.path.join(train_root,dir)) ] for dir in train_mask_dirs]
#def collapse_masks(mask_list):
# for i, mask_file in enumerate(mask_list):
# print(i)
# print(mask_file)
#testing = [collapse_masks(mask_files) for mask_files in train_mask_files]
# Combine all the mask files for each image into a single mask
train_masks = [ collapse_masks(mask_files) for mask_files in train_mask_files ]
# -
# # Computer Vision Technique
def comp_viz_mask(img):
    #the green channel happens to produce slightly better results
    #than the grayscale image and the other channels
# img_gray=img_rgb[:,:,1]#cv2.cvtColor(img_rgb, cv2.COLOR_BGR2GRAY)
#morphological opening (size tuned on training data)
circle7=cv2.getStructuringElement(cv2.MORPH_ELLIPSE,(7,7))
img_open=cv2.morphologyEx(img, cv2.MORPH_OPEN, circle7)
#Otsu thresholding
img_th=cv2.threshold(img_open,0,255,cv2.THRESH_OTSU)[1]
#Invert the image in case the objects of interest are in the dark side
if(np.sum(img_th==255)>np.sum(img_th==0)):
img_th=cv2.bitwise_not(img_th)
#second morphological opening (on binary image this time)
bin_open=cv2.morphologyEx(img_th, cv2.MORPH_OPEN, circle7)
#connected components
cc=cv2.connectedComponents(bin_open)[1]
#cc=segment_on_dt(bin_open,20)
    return cc # labelled connected components (0 = background)
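# For intuition about what `cv2.THRESH_OTSU` computes above, Otsu's threshold can be
# reimplemented in plain NumPy as an exhaustive search over the 256 gray levels for the
# cut that maximizes between-class variance. This sketch is illustrative only, not a
# replacement for the OpenCV call:

```python
import numpy as np

def otsu_threshold(img):
    """Exhaustive Otsu: choose the threshold that maximizes between-class variance."""
    hist, _ = np.histogram(img.ravel(), bins=256, range=(0, 256))
    total = hist.sum()
    best_t, best_var = 0, -1.0
    for t in range(1, 256):
        w0 = hist[:t].sum()   # pixels below the threshold
        w1 = total - w0       # pixels at or above it
        if w0 == 0 or w1 == 0:
            continue
        mu0 = (np.arange(t) * hist[:t]).sum() / w0
        mu1 = (np.arange(t, 256) * hist[t:]).sum() / w1
        var_between = w0 * w1 * (mu0 - mu1) ** 2
        if var_between > best_var:
            best_var, best_t = var_between, t
    return best_t

# synthetic bimodal image: two flat populations at 10 and 200
bimodal = np.array([10] * 50 + [200] * 50, dtype=np.uint8)
t = otsu_threshold(bimodal)
```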
# +
# Plot images side by side for a list of datasets
def plot_side_by_side(ds_list,image_num):
#print('Image #: ' + str(image_num) + '. Image Sizes: ' + str(image_ds[image_num].shape) + ' ' + str(mask_ds[image_num].shape))
fig = plt.figure(figsize=(20,10))
for i in range(len(ds_list)):
ax1 = fig.add_subplot(1,len(ds_list),i+1)
ax1.imshow(ds_list[i][image_num])
plt.show()
# Plots random corresponding images and masks
def plot_check(ds_list,rand_imgs=None,img_nums=None):
    if rand_imgs is not None:
for i in range(rand_imgs):
plot_side_by_side(ds_list, randint(0,len(ds_list[0])-1))
    if img_nums is not None:
for i in range(len(img_nums)):
plot_side_by_side(ds_list,img_nums[i])
#plot_side_by_side(train_images,train_mask_images,38)
# Plot a few random images
#plot_check(train_images,train_masks,rand_imgs=3)
plot_check([train_images,train_masks],rand_imgs=1)
#plot_check(train_images,cv2_masks,rand_imgs=3)
#plot_check(train_images,train_mask_images,img_nums=[309])
# +
# Resize everything
# Also do dtype conversions
# Scaling
resized_train_images = [ img_as_ubyte(resize(image,(img_width,img_height))) for image in train_images]
resized_train_masks = [ img_as_bool(resize(image,(img_width,img_height))) for image in train_masks]
#resized_train_cv2_masks = [ img_as_bool(resize(image,(img_width,img_height))) for image in train_cv2_masks]
#Croping
#crop_size=64
#resized_train_images = [ image[int(0.5*(image.shape[0]-crop_size)):int(0.5*(image.shape[0]+crop_size)),
# int(0.5*(image.shape[1]-crop_size)):int(0.5*(image.shape[1]+crop_size))] for image in train_images]
#resized_train_mask_images = [ image[int(0.5*(image.shape[0]-crop_size)):int(0.5*(image.shape[0]+crop_size)),
# int(0.5*(image.shape[1]-crop_size)):int(0.5*(image.shape[1]+crop_size))] for image in train_mask_images]
# -
# check max pixel values
print(resized_train_images[309].max())
print(resized_train_images[16].max())
# +
# Reshape model inputs
train_X = np.reshape(np.array(resized_train_images),(len(resized_train_images),img_height,img_width,1))
# Stack cv2 masks on top of images as a channel
#train_X = np.reshape(np.stack((np.array(resized_train_images),np.array(resized_train_cv2_masks)),axis=3), \
# (len(resized_train_images),img_height,img_width,2))
train_Y = np.reshape(np.array(resized_train_masks),(len(resized_train_masks),img_height,img_width,1))
# Check size of arrays we are inputting to model
# This is important! We need the datasets to be as
# small as possible to reduce computation time
print(train_X.shape)
print(train_Y.shape)
print(train_X.nbytes)
print(train_Y.nbytes)
# +
# Check datatypes
print(train_Y.dtype)
print(train_X.dtype)
#train_X[0]
# -
# # Now Let's Build the Model
# +
# Loss and metric functions for the neural net
def dice_coef(y_true, y_pred):
y_true_f = K.flatten(y_true)
y_pred = K.cast(y_pred, 'float32')
y_pred_f = K.cast(K.greater(K.flatten(y_pred), 0.5), 'float32')
intersection = y_true_f * y_pred_f
score = 2. * K.sum(intersection) / (K.sum(y_true_f) + K.sum(y_pred_f))
return score
def dice_loss(y_true, y_pred):
smooth = 1.
y_true_f = K.flatten(y_true)
y_pred_f = K.flatten(y_pred)
intersection = y_true_f * y_pred_f
score = (2. * K.sum(intersection) + smooth) / (K.sum(y_true_f) + K.sum(y_pred_f) + smooth)
return 1. - score
def bce_dice_loss(y_true, y_pred):
return binary_crossentropy(y_true, y_pred) + dice_loss(y_true, y_pred)
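# The smoothed Dice score used in `dice_loss` can be sanity-checked outside Keras with a
# plain-NumPy version (same `smooth` term as above): identical masks score exactly 1, and
# disjoint masks score near 0, kept positive only by the smoothing term.

```python
import numpy as np

def dice_np(y_true, y_pred, smooth=1.0):
    y_true = y_true.ravel().astype(np.float64)
    y_pred = y_pred.ravel().astype(np.float64)
    intersection = np.sum(y_true * y_pred)
    return (2.0 * intersection + smooth) / (np.sum(y_true) + np.sum(y_pred) + smooth)

perfect = dice_np(np.ones(10), np.ones(10))                          # (20+1)/(20+1) = 1.0
disjoint = dice_np(np.array([1, 1, 0, 0]), np.array([0, 0, 1, 1]))   # (0+1)/(4+1) = 0.2
```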
def create_block(x, filters=20, filter_size=(3, 3), activation='relu',dil_rate=1,dropout_rate=0.1):
# for i in range(n_block):
    x = Conv2D(filters, filter_size, padding='same', activation=activation, dilation_rate = dil_rate) (x)
x = BatchNormalization() (x)
x = Dropout(dropout_rate) (x)
return x
## master function for creating a net
def get_net(
input_shape=(img_height, img_width,1),
loss=binary_crossentropy,
n_class=1
):
inputs = Input(input_shape)
# Create layers
net_body = create_block(inputs)
net_body = create_block(net_body)
net_body = create_block(net_body)
net_body = create_block(net_body,dil_rate=2)
net_body = create_block(net_body,dil_rate=4)
net_body = create_block(net_body,dil_rate=8)
net_body = create_block(net_body)
classify = Conv2D(n_class,(1,1),activation='sigmoid') (net_body)
#classify = Activation(activation='sigmoid') (net_body)
#classify = Dense(1,activation='sigmoid') (net_body)
model = Model(inputs=inputs, outputs=classify)
model.compile(optimizer=Adam(0.001), loss=loss, metrics=[bce_dice_loss, dice_coef])
return model
# -
my_model = get_net()
print(my_model.summary())
# +
# Fit model
earlystopper = EarlyStopping(patience=12, verbose=1)
checkpointer = ModelCheckpoint(model_checkpoint_file, verbose=1, save_best_only=True)
reduce_plateau = ReduceLROnPlateau(monitor='val_loss',
factor=0.2,
patience=4,
verbose=1,
# min_lr=0.00001,
epsilon=0.001,
mode='auto')
results = my_model.fit(train_X, train_Y, validation_split=0.1, batch_size=20, epochs=100, verbose=1,
shuffle=True, callbacks=[ earlystopper, checkpointer, reduce_plateau])
# +
## Import Test Data and Make Predictions with Model
# Import images (either test or training)
# Decolorize, resize, store in array, and save filenames, etc.
def import_images(root):
dirs = os.listdir(root)
filenames=[os.path.join(root,file_id) + "/images/"+file_id+".png" for file_id in dirs]
images=[imread(imagefile,as_grey=True) for imagefile in filenames]
resized_images = [ img_as_ubyte(resize(image,(img_width,img_height))) for image in images]
Array = np.reshape(np.array(resized_images),(len(resized_images),img_height,img_width,1))
return Array, resized_images, images, filenames, dirs
test_dirs = os.listdir(test_root)
test_filenames=[os.path.join(test_root,file_id) + "/images/"+file_id+".png" for file_id in test_dirs]
test_images=[imread(imagefile,as_grey=True) for imagefile in test_filenames]
resized_test_images = [ img_as_ubyte(resize(image,(img_width,img_height))) for image in test_images]
test_X = np.reshape(np.array(resized_test_images),(len(resized_test_images),img_height,img_width,1))
final_model = load_model(model_checkpoint_file, custom_objects={'dice_coef': dice_coef, 'bce_dice_loss':bce_dice_loss})
preds_test = final_model.predict(test_X, verbose=1)
preds_test_t = (preds_test > 0.5)
# Create list of upsampled test masks
preds_test_upsampled = []
for i in range(len(preds_test)):
preds_test_upsampled.append(resize(np.squeeze(preds_test[i]),
(test_images[i].shape[0], test_images[i].shape[1]),
mode='constant', preserve_range=True))
preds_test_upsampled_bool = [ (mask > 0.5).astype(bool) for mask in preds_test_upsampled ]
# -
plot_check([test_images,preds_test_upsampled,preds_test_upsampled_bool],rand_imgs=2)
# +
# Run-length encoding stolen from https://www.kaggle.com/rakhlin/fast-run-length-encoding-python
def rle_encoding(x):
dots = np.where(x.T.flatten() == 1)[0]
run_lengths = []
prev = -2
for b in dots:
if (b>prev+1): run_lengths.extend((b + 1, 0))
run_lengths[-1] += 1
prev = b
return run_lengths
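# On a tiny mask the encoder's output is easy to verify by hand; the function is restated
# here so the snippet is self-contained. Note the 1-based pixel indices and the
# column-major scan (via the transpose) expected by the competition format.

```python
import numpy as np

def rle_encoding(x):
    # column-major scan, 1-based pixel indices, (start, length) pairs flattened
    dots = np.where(x.T.flatten() == 1)[0]
    run_lengths = []
    prev = -2
    for b in dots:
        if b > prev + 1:
            run_lengths.extend((b + 1, 0))
        run_lengths[-1] += 1
        prev = b
    return run_lengths

mask = np.array([[1, 0],
                 [1, 0]])
# column-major flatten -> [1, 1, 0, 0]: one run starting at pixel 1 with length 2
encoded = rle_encoding(mask)
```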
def prob_to_rles(x, cutoff=0.5):
lab_img = label(x > cutoff)
for i in range(1, lab_img.max() + 1):
yield rle_encoding(lab_img == i)
def generate_prediction_file(image_names,predictions,filename):
new_test_ids = []
rles = []
for n, id_ in enumerate(image_names):
rle = list(prob_to_rles(predictions[n]))
rles.extend(rle)
new_test_ids.extend([id_] * len(rle))
sub = pd.DataFrame()
sub['ImageId'] = new_test_ids
sub['EncodedPixels'] = pd.Series(rles).apply(lambda x: ' '.join(str(y) for y in x))
sub.to_csv(filename, index=False)
# -
generate_prediction_file(test_dirs,preds_test_upsampled_bool,'meshnetv4_mesh_pred.csv')
# Ideas
# - Experiment with compression of training data. Am I preserving as much detail
# as I can in dtype np.uint8 (values of 0 to 255) ?
# - Color vs B&W?
# - Combine mask and prediction images to show false positives and negatives
# - What is the best resizing method? Reflect??
# - Put computer vision / threshold method output as an input to neural net
# - Output intermediate layers for inspection
# - Crop images to train networks faster for testing ??
# - Take random crops of images to create, and then combine outputs in the end
# - Is combining the masks really the best thing to do? Should I be keeping the individual cells separate?
# - Pseudo-labelled data
|
Archive/MeshNet V4.ipynb
|
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
import numpy as np
import pandas as pd
from rdkit import Chem
from rdkit.Chem import AllChem
from rdkit.Chem import Draw
from rdkit.Chem import PandasTools
from rdkit.Chem.Draw import IPythonConsole
import matplotlib.pyplot as plt
# %matplotlib inline
import seaborn as sns; sns.set()
prediction_40_top = []
with open("prediction_40.txt", "r") as fp:
lines = fp.readlines()
for line in lines:
prediction_40_top.append(int(line.rstrip("\n")))
prediction_40_top
prediction_40_prob = []
with open("prediction_40_prob.txt", "r") as fp:
lines = fp.readlines()
for line in lines:
prediction_40_prob.append(float(line.rstrip("\n")))
prediction_40_prob
percentages = [0] * 40
for i in range(len(prediction_40_top)):
percentages[prediction_40_top[i]] += 1
dict_mols = {}
for i in range(len(prediction_40_top)):
if prediction_40_top[i] not in dict_mols.keys():
dict_mols[prediction_40_top[i]] = []
dict_mols[prediction_40_top[i]].append((i, prediction_40_prob[i]))
percentages[39]
len(dict_mols[39])
for key in dict_mols.keys():
dict_mols[key] = sorted(dict_mols[key], key=lambda x: x[1], reverse=True)
dict_mols[0][:15]
labels = pd.read_csv('./vectos_prop_20k/meltPt_prop2.csv')
labels.head()
aa_smis = labels['smiles'].tolist()[:10]
aa_codes = labels['mf'].tolist()[:10]
labels.set_index('smiles', inplace=True)
labels.drop(columns=["Unnamed: 0"], inplace=True)
aas = [Chem.MolFromSmiles(x) for x in aa_smis]
Draw.MolsToGridImage(aas, molsPerRow=5, useSVG=False, legends=aa_codes)
properties = labels.columns.tolist()
# ### Topic 0
sample = [val[0] for val in dict_mols[0][:15]]
df = labels.iloc[sample]
df.head()
aa_smis = df.index.tolist()
aa_codes = df['mf'].tolist()
df.drop(columns=["mf"], inplace=True)
aas = [Chem.MolFromSmiles(x) for x in aa_smis]
Draw.MolsToGridImage(aas, molsPerRow=5, useSVG=False, legends=aa_codes)
variances = []
hists = []
bin_edges_list = []
for col in properties:
variances.append(df[col].std())
fig, ax = plt.subplots(figsize = (6,6))
plt.scatter(properties, variances)
ax.set_xlabel('properties')
ax.set_ylabel('variances')
ax.set_xticks(np.arange(len(properties)))
ax.set_xticklabels(properties)
plt.setp(ax.get_xticklabels(), rotation=45, ha="right", rotation_mode="anchor")
ax.set_title('Variance of Different Properties')
plt.show()
fig = plt.figure(figsize = (12,12))
fig.subplots_adjust(hspace=0.6, wspace=0.6)
ax = fig.gca()
df.hist(ax=ax)
plt.show()
# ### Topic 5
sample = [val[0] for val in dict_mols[5][:15]]
df = labels.iloc[sample]
df.head()
aa_smis = df.index.tolist()
aa_codes = df['mf'].tolist()
df.drop(columns=["mf"], inplace=True)
aas = [Chem.MolFromSmiles(x) for x in aa_smis]
Draw.MolsToGridImage(aas, molsPerRow=5, useSVG=False, legends=aa_codes)
variances = []
hists = []
bin_edges_list = []
for col in properties:
variances.append(df[col].std())
fig, ax = plt.subplots(figsize = (6,6))
plt.scatter(properties, variances)
ax.set_xlabel('properties')
ax.set_ylabel('variances')
ax.set_xticks(np.arange(len(properties)))
ax.set_xticklabels(properties)
plt.setp(ax.get_xticklabels(), rotation=45, ha="right", rotation_mode="anchor")
ax.set_title('Variance of Different Properties')
plt.show()
fig = plt.figure(figsize = (12,12))
fig.subplots_adjust(hspace=0.6, wspace=0.6)
ax = fig.gca()
df.hist(ax=ax)
plt.show()
# ### Topic 10
sample = [val[0] for val in dict_mols[10][:15]]
df = labels.iloc[sample]
df.head()
aa_smis = df.index.tolist()
aa_codes = df['mf'].tolist()
df.drop(columns=["mf"], inplace=True)
aas = [Chem.MolFromSmiles(x) for x in aa_smis]
Draw.MolsToGridImage(aas, molsPerRow=5, useSVG=False, legends=aa_codes)
variances = []
hists = []
bin_edges_list = []
for col in properties:
variances.append(df[col].std())
fig, ax = plt.subplots(figsize = (6,6))
plt.scatter(properties, variances)
ax.set_xlabel('properties')
ax.set_ylabel('variances')
ax.set_xticks(np.arange(len(properties)))
ax.set_xticklabels(properties)
plt.setp(ax.get_xticklabels(), rotation=45, ha="right", rotation_mode="anchor")
ax.set_title('Variance of Different Properties')
plt.show()
fig = plt.figure(figsize = (12,12))
fig.subplots_adjust(hspace=0.6, wspace=0.6)
ax = fig.gca()
df.hist(ax=ax)
plt.show()
# ### Topic 29
sample = [val[0] for val in dict_mols[29][:15]]
df = labels.iloc[sample]
df.head()
aa_smis = df.index.tolist()
aa_codes = df['mf'].tolist()
df.drop(columns=["mf"], inplace=True)
aas = [Chem.MolFromSmiles(x) for x in aa_smis]
Draw.MolsToGridImage(aas, molsPerRow=5, useSVG=False, legends=aa_codes)
variances = []
hists = []
bin_edges_list = []
for col in properties:
variances.append(df[col].std())
fig, ax = plt.subplots(figsize = (6,6))
plt.scatter(properties, variances)
ax.set_xlabel('properties')
ax.set_ylabel('variances')
ax.set_xticks(np.arange(len(properties)))
ax.set_xticklabels(properties)
plt.setp(ax.get_xticklabels(), rotation=45, ha="right", rotation_mode="anchor")
ax.set_title('Variance of Different Properties')
plt.show()
fig = plt.figure(figsize = (12,12))
fig.subplots_adjust(hspace=0.6, wspace=0.6)
ax = fig.gca()
df.hist(ax=ax)
plt.show()
|
topic_extraction.ipynb
|
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # MISCELLANEOUS PROBLEMS
# <h3>1.</h3>
# Write a function that loads n students. For each student, ask for their full name and allow entering 3 grades. The grades must be between 0 and 10. Return the list of students.
# +
#1. Declare an empty list
lista_alumnos = []
#2. Define the function that loads n students
def alumnos(lista_alumnos, cantidad):
    for i in range(cantidad):
        n = 0
        alumno = {}
        nombre = input(f"Enter the full name of student {len(lista_alumnos) + 1}: ")
        alumno['nombre'] = nombre
        while n < 3:
            try:
                nota = float(input(f"Enter grade {n + 1}: "))
                if nota >= 0 and nota <= 10:
                    alumno[f'nota{n+1}'] = nota
                    n = n + 1
                else:
                    print("The grade must be between 0 and 10")
            except ValueError:
                print("Enter a valid grade.")
        lista_alumnos.append(alumno)
#3. Enter the data
while True:
    try:
        cantidad = int(input("Enter the number of students to add: "))
        if cantidad <= 0:
            print("You must register more than 0 students")
        else:
            break
    except ValueError:
        print("Please enter a valid number: ")
alumnos(lista_alumnos, cantidad)
#4. Print the data
lista_alumnos
# +
#------------- INSTRUCTOR'S SOLUTION -----------------
#cantidad = int(input('How many students do you want to enter?'))
#cantidad
# +
#lista_alumnos = []
#for i in range(cantidad):
#    alumno = {}
#    # name input
#    nombre = input(f'Enter the name of student {i+1}: ')
#    alumno['nombre'] = nombre
#    # grade input
#    alumno['notas'] = []
#    for n in range(3):
#        nota = float(input(f'Enter grade {n+1} for the student: '))
#        alumno['notas'].append(nota)
#    # gather the data into the list
#    lista_alumnos.append(alumno)
# +
#lista_alumnos
# +
#alumno
#-----------------------------------------------------
# -
# ### 2.
# Define a function that, given a list of students, counts how many passed and how many failed, where a 4 is required to pass. Each student's grade is the average of their 3 grades.
# +
def promedio(lista_alumnos):
    for alumno in lista_alumnos:
        promedio = (alumno['nota1'] + alumno['nota2'] + alumno['nota3']) / 3
        alumno['promedio'] = promedio
def evaluar(lista_alumnos):
    aprobados = 0
    desaprobados = 0
    # compute each student's average first
    promedio(lista_alumnos)
    for alumno in lista_alumnos:
        if alumno['promedio'] >= 4:
            alumno['estado'] = 'Passed'
            aprobados += 1
        else:
            alumno['estado'] = 'Failed'
            desaprobados += 1
    print(f'Number of students who passed: {aprobados}')
    print(f'Number of students who failed: {desaprobados}')
evaluar(lista_alumnos)
# -
# ### 3.
# Report the average grade of the whole class.
# +
def promedio_curso(lista_alumnos):
    promedio = 0
    for alumno in lista_alumnos:
        promedio += alumno['promedio']
    return promedio / len(lista_alumnos)
print(f"The overall class average is: {promedio_curso(lista_alumnos)}")
# -
# ### 4.
# Write a function that reports which student had the highest average and which had the lowest.
# +
def puesto_promedio(lista_alumnos):
    palto = 0
    pbajo = 10
    for alumno in lista_alumnos:
        if alumno['promedio'] >= palto:
            alumno_alto = alumno['nombre']
            palto = alumno['promedio']
        if alumno['promedio'] <= pbajo:
            alumno_bajo = alumno['nombre']
            pbajo = alumno['promedio']
    print(f"The student with the highest average is: {alumno_alto}")
    print(f"The student with the lowest average is: {alumno_bajo}")
puesto_promedio(lista_alumnos)
# -
# ### 5.
# Write a function that searches for a student by name, with a full or partial match, and returns a list of the n students matching that name along with all their data, including their grade average.
# +
def buscar_alumno(nombre, lista_alumnos):
    # partial match: keep every student whose name contains the search text
    encontrados = [alumno for alumno in lista_alumnos if nombre.lower() in alumno['nombre'].lower()]
    for alumno in encontrados:
        print(alumno)
    return encontrados
nombre = input("Enter the name (full or partial) of the student(s) to search for: ")
buscar_alumno(nombre, lista_alumnos)
|
Modulo2/Ejercicios/Problemas Diversos.ipynb
|
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# ## Import the required packages
# !git log
from autox import AutoX
import pandas as pd
import numpy as np
import os
from tqdm import tqdm
# ## Configure the dataset information
# select the dataset
data_name = 'ventilator'
path = f'./data/{data_name}'
feature_type = {
'train.csv': {
'id': 'cat',
'breath_id': 'cat',
'R': 'num',
'C': 'num',
'time_step': 'num',
'u_in': 'num',
'u_out': 'num',
'pressure': 'num'
},
'test.csv': {
'id': 'cat',
'breath_id': 'cat',
'R': 'num',
'C': 'num',
'time_step': 'num',
'u_in': 'num',
'u_out': 'num'
}
}
autox = AutoX(target = 'pressure', train_name = 'train.csv', test_name = 'test.csv',
id = ['id'], path = path, feature_type = feature_type, metric = 'mae')
sub = autox.get_submit()
sub.to_csv("autox_1018_kaggle_ventilator_oneclick.csv", index = False)
# !zip -r autox_1018_kaggle_ventilator_oneclick.csv.zip autox_1018_kaggle_ventilator_oneclick.csv
|
demo/ventilator/autox_kaggle_ventilator.ipynb
|
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 2
# language: python
# name: python2
# ---
import os
import pickle
import numpy as np
import pandas as pd
def atisfold(fold):
assert fold in range(5)
filename = os.path.join('data','atis.fold'+str(fold)+'.pkl')
f = open(filename, 'rb')
try:
train_set, valid_set, test_set, dicts = pickle.load(f, encoding='latin1')
except:
train_set, valid_set, test_set, dicts = pickle.load(f)
return train_set, valid_set, test_set, dicts
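# The try/except above covers both interpreters: Python 3's `pickle.load` accepts
# `encoding='latin1'` for unpickling Python 2 data, while Python 2's raises `TypeError`
# for the unknown keyword. A self-contained round-trip illustrating the pattern (the
# payload below is synthetic, not real ATIS data):

```python
import io
import pickle

payload = ({'words2idx': {'flight': 0}}, [1, 2, 3])
buf = io.BytesIO()
pickle.dump(payload, buf, protocol=2)  # protocol 2 is readable by both Python 2 and 3
buf.seek(0)

try:
    # Python 3 path: decode Python 2 str objects as latin-1
    restored = pickle.load(buf, encoding='latin1')
except TypeError:
    # Python 2 path: pickle.load has no encoding argument
    buf.seek(0)
    restored = pickle.load(buf)
```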
train_set, valid_set, test_set, dic = atisfold(0)
# +
def rev_map(d):
return dict((d[i],i) for i in d)
labels2idx = rev_map(dic['labels2idx'])
tables2idx = rev_map(dic['tables2idx'])
words2idx = rev_map(dic['words2idx'])
word_indices = train_set[0]
name_entities = train_set[1]
labels = train_set[2]
print ("Number of sentences : ",len(word_indices))
def display(n):
sense = []
for i in range(len(word_indices[n])):
# sense.append({"word_index":word_indices[0][i],"word":words2idx[word_indices[0][i]],"entity_index":name_entities[0][i],"entity":tables2idx[name_entities[0][i]],"label_index":labels[0][i],"label":labels2idx[labels[0][i]]})
sense.append({"word":words2idx[word_indices[n][i]],"entity":tables2idx[name_entities[n][i]],"label":labels2idx[labels[n][i]]})
return pd.DataFrame(sense)
# -
display(2)
# # Reading IOB files from data2/
# +
def get_data(filename):
df = pd.read_csv(filename,delim_whitespace=True,names=['word','label'])
beg_indices = list(df[df['word'] == 'BOS'].index)+[df.shape[0]]
sents,labels,intents = [],[],[]
for i in range(len(beg_indices[:-1])):
sents.append(df[beg_indices[i]+1:beg_indices[i+1]-1]['word'].values)
labels.append(df[beg_indices[i]+1:beg_indices[i+1]-1]['label'].values)
intents.append(df.loc[beg_indices[i+1]-1]['label'])
return np.array(sents),np.array(labels),np.array(intents)
def get_data2(filename):
with open(filename) as f:
contents = f.read()
sents,labels,intents = [],[],[]
for line in contents.strip().split('\n'):
words,labs = [i.split(' ') for i in line.split('\t')]
sents.append(words[1:-1])
labels.append(labs[1:-1])
intents.append(labs[-1])
return np.array(sents),np.array(labels),np.array(intents)
read_method = {'data2/atis-2.dev.w-intent.iob':get_data,
'data2/atis.train.w-intent.iob':get_data2,
'data2/atis.test.w-intent.iob':get_data,
'data2/atis-2.train.w-intent.iob':get_data2}
def fetch_data(fname):
func = read_method[fname]
return func(fname)
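# The tab-split logic in `get_data2` is easiest to see on a single line: the word side is
# bracketed by BOS/EOS, and the label side carries one slot label per word plus a trailing
# intent. The sample line below is illustrative, not taken from the dataset files:

```python
# one illustrative ATIS-style IOB line: words \t labels+intent
line = "BOS flights to boston EOS\tO O O B-toloc.city_name atis_flight"

words, labs = [part.split(' ') for part in line.split('\t')]
tokens = words[1:-1]   # drop the BOS / EOS sentinels
slots = labs[1:-1]     # slot labels aligned with the tokens
intent = labs[-1]      # the last label is the sentence intent
```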
# +
# Example
sents,labels,intents = fetch_data('data2/atis.train.w-intent.iob')
def display(n,intents):
sense = []
print ("INTENT : ",intents[n])
for i in range(len(sents[n])):
# sense.append({"word_index":word_indices[0][i],"word":words2idx[word_indices[0][i]],"entity_index":name_entities[0][i],"entity":tables2idx[name_entities[0][i]],"label_index":labels[0][i],"label":labels2idx[labels[0][i]]})
sense.append({"word":sents[n][i],"label":labels[n][i]})
return pd.DataFrame(sense)
display(1, intents)
|
.ipynb_checkpoints/atis_experiments-checkpoint.ipynb
|
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
import pandas as pd
from simpledbf import Dbf5
import os
# Let's start this exploration by looking at documents from 2001. Let's open one manually.
dbf = Dbf5('original_data/2001/NFIRS 2001 PDR 031415/hazmat.dbf')
dbf
# What's in the structure?
dbf.columns
# Can we Pandas Dataframe this?
df = dbf.to_dataframe()
df
# +
# Note, when I tried to run this again, it sorta crashed in the same session... Will figure out why later.
# -
dbf_2001 = {}
for file in os.listdir('original_data/2001/NFIRS 2001 PDR 031415/'):
if file[-3:] == "dbf":
dbf_2001[file[:-4]] = Dbf5('original_data/2001/NFIRS 2001 PDR 031415/' + file)
dbf_2001.keys()
incident01 = dbf_2001["fireincident"].to_dataframe()
# +
# It appears converting the same Dbf5 object to a dataframe fails if you try to do it multiple times. So that's not a good thing.
# +
dbf_2001 = {}
for file in os.listdir('original_data/2001/NFIRS 2001 PDR 031415/'):
if file[-3:] == "dbf":
dbf_2001[file[:-4]] = Dbf5('original_data/2001/NFIRS 2001 PDR 031415/' + file)
incident01 = dbf_2001["fireincident"].to_dataframe()
# -
incident01
incident01.describe()
|
.ipynb_checkpoints/first-exploration-checkpoint.ipynb
|
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
import sympy as sp
import numpy as np
# + pycharm={"name": "#%%\n"}
x, y = [sp.IndexedBase(e) for e in ['x', 'y']]
m = sp.symbols('m', integer=True)
a, b = sp.symbols('a b', real=True)
i = sp.Idx('i', m)
# + pycharm={"name": "#%%\n"}
loss = (y[i] - (a*x[i] + b))**2
# + pycharm={"name": "#%%\n"}
loss
# + [markdown] pycharm={"name": "#%% md\n"}
# Having defined the loss function using indexed variables, we might hope that the
# implicit summation over repeated indices would carry through to the derivative,
# but it looks like this isn't the case.
#
# Below we see that the derivative with respect to the fit parameters is applied
# to each point individually rather than to the whole sum, which is incorrect.
# + pycharm={"name": "#%%\n"}
sp.solve(loss.diff(a), a)
# + pycharm={"name": "#%%\n"}
sp.solve(loss.diff(b), b)
# + [markdown] pycharm={"name": "#%% md\n"}
# Try adding explicit summation around the loss expression. This gives the
# correct set of equations for derivatives but a solution can't be found.
# + pycharm={"name": "#%%\n"}
sp.diff(sp.Sum(loss, i),a)
# + pycharm={"name": "#%%\n"}
sp.diff(sp.Sum(loss, i), b)
# + pycharm={"name": "#%%\n"}
sp.solve([sp.diff(sp.Sum(loss, i),a), sp.diff(sp.Sum(loss, i),b)], [a, b])
# + pycharm={"name": "#%%\n"}
sp.solve([loss.expand().diff(a), loss.expand().diff(b)], [a,b])
# + [markdown] pycharm={"name": "#%% md\n"}
# MatrixSymbol seems to be the trick
# + pycharm={"name": "#%%\n"}
x_2 = sp.MatrixSymbol('x', m, 1)
y_2 = sp.MatrixSymbol('y', m, 1)
a_2 = sp.MatrixSymbol('a', 1, 1)
b_2 = b*sp.OneMatrix(m, 1)
# + pycharm={"name": "#%%\n"}
err = y_2 - (x_2*a_2 + b_2)
err
# + pycharm={"name": "#%%\n"}
objective = (err.T * err)
objective
# + pycharm={"name": "#%%\n"}
objective.diff(a_2)
# + pycharm={"name": "#%%\n"}
objective.diff(b)
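# Setting these matrix derivatives to zero yields the ordinary least-squares normal
# equations, so a quick numeric cross-check with `np.polyfit` on noiseless synthetic data
# should recover the line exactly (the data below is made up for the check):

```python
import numpy as np

x = np.linspace(0, 1, 50)
y = 2.0 * x + 0.5  # exact line, no noise, so the fit should recover a=2, b=0.5

a_hat, b_hat = np.polyfit(x, y, 1)  # degree-1 fit: returns (slope, intercept)
```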
# + [markdown] pycharm={"name": "#%% md\n"}
# Functions of Matrices e.g. generator of rotations
# + pycharm={"name": "#%%\n"}
t = sp.symbols('t', real=True)
g = sp.Matrix([[0, -t], [t, 0]])
# + pycharm={"name": "#%%\n"}
g
# + pycharm={"name": "#%%\n"}
sp.exp(g)
# + pycharm={"name": "#%%\n"}
|
scripts/indexed_expressions_20211107.ipynb
|
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# +
import torch
import torchvision
import torchvision.transforms as transforms
import torch.nn as nn
import torch.optim as optim
import torch.nn.functional as f
from time import time
import numpy as np
from matplotlib import pyplot
import matplotlib as mpl
# %matplotlib inline
# -
use_cuda = torch.cuda.is_available()
print('cuda', use_cuda)
# +
image_size = 28 * 28
layer1 = 300
output_size = 10
learning_rate = 0.1
iterations = 15000
dropout = 0.5
batch_size = 128
# -
# download the MNIST dataset if it is not already present
train_set = torchvision.datasets.MNIST(root='/tmp', train=True, transform=transforms.ToTensor(), download=True)
test_set = torchvision.datasets.MNIST(root='/tmp', train=False, transform=transforms.ToTensor(), download=True)
train_loader = torch.utils.data.DataLoader(dataset=train_set, batch_size=batch_size, shuffle=False)
test_loader = torch.utils.data.DataLoader(dataset=test_set, batch_size=batch_size, shuffle=False)
print('total training batches: {}'.format(len(train_loader)))
print('total testing batches: {}'.format(len(test_loader)))
def show(image):
image = image.view(28,28).numpy()
print(image.shape)
fig = pyplot.figure(figsize=(1,1))
ax = fig.add_subplot(1,1,1)
imgplot = ax.imshow(image, cmap=mpl.cm.Greys)
pyplot.show()
show(train_set[2][0])
print(train_set[2][1])
show(test_set[3][0])
print(test_set[3][1])
# +
class DenseClassifier(nn.Module):
def __init__(self):
super().__init__()
self.dense1 = nn.Linear(image_size, layer1)
self.relu = nn.ReLU()
self.drop = nn.Dropout(dropout)
self.dense2 = nn.Linear(layer1, output_size)
def forward(self, x):
x = x.view(-1, image_size)
x = self.dense1(x)
x = self.relu(x)
x = self.drop(x)
y = self.dense2(x)
return y
def count_parameters(self, module=None):
if module is None:
module = self
return sum(p.numel() for p in module.parameters() if p.requires_grad)
model = DenseClassifier()
if use_cuda:
model = model.cuda()
print(model.count_parameters())
# -
optimizer = optim.SGD(model.parameters(), lr=learning_rate)
criterion = nn.CrossEntropyLoss()
# +
def test(model, test_loader):
with torch.no_grad():
test_loss = []
accs = []
errs = 0
for x,target in test_loader:
if use_cuda:
x, target = x.cuda(), target.cuda()
out = model(x)
_, pred = f.softmax(out, dim=-1).max(dim=-1)
match = target == pred
acc = match.sum().float()/len(match)
errs += len(match)-match.sum().cpu().item()
loss = criterion(out, target)
test_loss.append(loss.cpu().item())
accs.append(acc.cpu().item())
print('Errs', errs)
return np.mean(test_loss), np.mean(accs)
def acc(out, target):
_, pred = f.softmax(out, dim=-1).max(dim=-1)
match = target == pred
return match.sum().float()/len(match)
# -
# %%time
i=0
losses = []
accs = []
for _ in range(iterations):
dt = time()
train_loss = []
model.train()
for batch_idx, (x, target) in enumerate(train_loader):
i+=1
optimizer.zero_grad()
if use_cuda:
x, target = x.cuda(), target.cuda()
out = model(x)
loss = criterion(out, target)
loss.backward()
optimizer.step()
train_loss.append(loss.cpu().item())
losses.append(loss.cpu().item())
accs.append(acc(out, target))
if i >= iterations:
break
test_loss, accuracy = test(model, test_loader)
train_loss = np.mean(train_loss)
    print('iter %d, train loss %.4f, test loss %.4f, acc %.4f, time %3.1fs' % (i, train_loss, test_loss, accuracy, time()-dt))
fig, ax = pyplot.subplots(2,1, figsize=(15, 6))
ax[0].plot(losses, color='b', label="loss")
ax[1].plot(accs, color='g', label="accuracy")
legend = ax[0].legend(loc='best')
legend = ax[1].legend(loc='best')
pyplot.show()
test_loss, test_acc = test(model, test_loader)  # avoid shadowing the acc() helper
print('Test accuracy %.3f' % test_acc)
|
pytorch_mnist.ipynb
|
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# + [markdown] id="THN-pFmXLq_x"
# # Stock Price Prediction
#
# In this notebook, we demonstrate a reference use case where we use historical stock price data to predict the future price. The dataset we use is the daily stock price of S&P500 stocks during 2013-2018 ([data source](https://www.kaggle.com/camnugent/sandp500/)). We demonstrate univariate forecasting, using the first 80% of the days' MMM closing prices to predict the daily prices of the remaining 20% of the days.
#
# Reference: https://github.com/jwkanggist/tf-keras-stock-pred
#
#
# + [markdown] id="2LLu44mMwCTN"
# ## Get Data
#
# We will use the close prices of MMM stock for our experiment. We will:
# 1. Download the raw dataset and load it into a dataframe.
# 2. Extract the close prices of MMM stock from the dataframe into a numpy array.
# + id="IwF7ovI0Lq_-"
import numpy as np
import pandas as pd
import os
# + colab={"base_uri": "https://localhost:8080/"} id="wUYapsflLq__" outputId="c5385ba6-6ece-448d-a5a1-c51fb8d71f06"
# S&P 500
FILE_NAME = 'all_stocks_5yr.csv'
SOURCE_URL = 'https://github.com/CNuge/kaggle-code/raw/master/stock_data/'
filepath = os.path.join('data', FILE_NAME)
print(filepath)
# + id="mJ44xd7nLrAA"
# download data
# !if ! [ -d "data" ]; then mkdir data; cd data; wget https://github.com/CNuge/kaggle-code/raw/master/stock_data/individual_stocks_5yr.zip; wget https://raw.githubusercontent.com/CNuge/kaggle-code/master/stock_data/merge.sh; chmod +x merge.sh; unzip individual_stocks_5yr.zip; ./merge.sh; fi
# + colab={"base_uri": "https://localhost:8080/"} id="dMefLEwXLrAB" outputId="0007508f-de13-46ab-849a-235fb4b4d404"
# read data
data = pd.read_csv(filepath)
print(data[:10])
target_rows = data[data['Name']=='MMM']
print(target_rows[:10])
# + colab={"base_uri": "https://localhost:8080/"} id="j6OfgfNDLrAC" outputId="23fd8f13-d920-4217-f85b-1b4f90a7503c"
# extract close value
close_val = target_rows[['close']].values
print(close_val[:10])
# +
# Visualize data
import matplotlib.pyplot as plt
plt.plot(close_val, color='blue', label='MMM daily price Raw')
plt.xlabel("Time Period")
plt.ylabel("Stock Price")
plt.legend()
plt.show()
# + [markdown] id="_dD-qd0Z8JGC"
# ## Data Pre-processing
# Now we need to do data cleaning and preprocessing on the raw data. Note that this part could vary for different datasets.
#
# For the stock price data we're using, the processing contains 2 parts:
#
# 1. Data normalization such that the normalized stock prices fall in the range of 0 to 1
# 2. Extract time series of given window size
#
# We use the built-in TSDataset to complete the whole processing.
#
#
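# The windowing step described above (step 2) can be sketched in plain numpy.
# This hypothetical `roll_windows` helper (not part of Chronos) shows what
# rolling a series with a lookback window and a horizon produces:

```python
import numpy as np

def roll_windows(series, lookback, horizon):
    """Split a 1-D series into (lookback, horizon) input/target windows."""
    X, y = [], []
    for i in range(len(series) - lookback - horizon + 1):
        X.append(series[i:i + lookback])                        # model input window
        y.append(series[i + lookback:i + lookback + horizon])   # target window
    return np.array(X), np.array(y)

prices = np.arange(10, dtype=float)  # stand-in for the scaled close prices
X, y = roll_windows(prices, lookback=4, horizon=1)
print(X.shape, y.shape)  # -> (6, 4) (6, 1)
```

# With `lookback=50` and `horizon=1`, as in the notebook, each training sample
# is the previous 50 scaled prices and the target is the next day's price.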
# +
from zoo.chronos.data import TSDataset
from sklearn.preprocessing import MinMaxScaler
df = target_rows[['date', 'close']]
tsdata_train, _, tsdata_test = TSDataset.from_pandas(df, dt_col="date", target_col="close", with_split=True, test_ratio=0.2)
minmax_scaler = MinMaxScaler()
for tsdata in [tsdata_train, tsdata_test]:
tsdata.scale(minmax_scaler, fit=(tsdata is tsdata_train))\
.roll(lookback=50, horizon=1)
X_train, y_train = tsdata_train.to_numpy()
X_test, y_test = tsdata_test.to_numpy()
X_train.shape, y_train.shape, X_test.shape, y_test.shape
# + [markdown] id="6fT-WKaB8Q5N"
# ## Time series forecasting
#
# We use LSTMForecaster for forecasting.
# + id="vrf48pWH_Vaf"
from zoo.chronos.forecast.lstm_forecaster import LSTMForecaster
# + [markdown] id="eCgIOQQK_YDS"
# First we initialize an LSTMForecaster.
#
#
# * `feature_dim` should match the feature dimension of the input data, so we use the last dimension of the training input's shape
# * `target_dim` equals the dimension of the output data; here we set `target_dim=1` for univariate forecasting.
#
#
# + id="QR04YjGiLrAE"
# Hyperparameters
feature_dim = X_train.shape[-1]
target_dim = 1
hidden_dim = 10
learning_rate = 0.01
batch_size = 16
epochs = 50
# + colab={"base_uri": "https://localhost:8080/"} id="2DE_F4ltLrAF" outputId="7448bb5b-78a7-4e20-d0b7-a944483297ce"
# build model
forecaster = LSTMForecaster(past_seq_len=X_train.shape[1],
input_feature_num=feature_dim,
output_feature_num=target_dim,
                            hidden_dim=hidden_dim,
lr=learning_rate,
)
# + [markdown] id="tGhlYHGKA1Jw"
#
# Then we call fit to train the model. This may take some time to finish.
# + id="pYknEGOMAziH"
# %%time
forecaster.fit(data=(X_train, y_train), batch_size=batch_size, epochs=epochs)
# + [markdown] id="pW40LT8KBJe6"
#
# After training is finished, you can use the forecaster to do prediction and evaluation.
# + id="NA8Wuuo7BPno"
# make prediction
y_pred = forecaster.predict(X_test)
# -
# Since we used a min-max scaler to scale the input data (including the target values), we need to inverse the scaling on the predicted values as well.
y_pred_unscale = tsdata_test.unscale_numpy(y_pred)
y_test_unscale = tsdata_test.unscale_numpy(y_test)
# + [markdown] id="5STThUzKBclt"
# Calculate the mean squared error.
# + id="KTMc01PnBjBT"
# evaluate with mean_squared_error
from zoo.orca.automl.metrics import Evaluator
print("mean squared error is", Evaluator.evaluate("mse", y_test_unscale, y_pred_unscale, multioutput='uniform_average'))
# + [markdown] id="C-hVQsz7BtXP"
# Visualize the prediction.
# + id="gGYlj52oBs4N"
# Plot predictions
plt.plot(y_test_unscale[:, :, 0], color='blue', label="MMM daily price Raw")
plt.plot(y_pred_unscale[:, :, 0], color='red', label="MMM daily price Predicted")
plt.xlabel("Time Period")
plt.ylabel("Stock Price")
plt.legend()
plt.show()
|
pyzoo/zoo/chronos/use-case/fsi/stock_prediction.ipynb
|
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
import pandas as pd
dataset = pd.read_csv("./../../splunk_data_180918_telenor.txt")
dataset.values.shape
df = dataset.tail(200)
df
|
data_analysis/recsys - telenor data set.ipynb
|
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # Determining Optimal Match Number for FDR Calculations (library peak max = 31)
import pandas as pd
from matplotlib import pyplot
# ## Loading Results
overall = pd.read_csv('FDRGraph_lib31_overall.csv')
overall
peptide = pd.read_csv('FDRGraph_lib31_peptide.csv')
peptide
protein = pd.read_csv('FDRGraph_lib31_protein.csv')
protein
# ## Optimal FDR Cutoff
# ### Overall (no filtering)
overall.plot.line(x="matches",y="FDRCutoff",title="Optimal FDR Cutoff (Overall)")
# ### Filtered for Top Peptides (no duplicate peptides)
peptide.plot.line(x="matches",y="FDRCutoff",title="Optimal FDR Cutoff (Peptide)")
# ### Filtered for Top Proteins (no duplicate proteins)
protein.plot.line(x="matches",y="FDRCutoff",title="Optimal FDR Cutoff (Protein)")
# ## Percentage Total Results Represented by Optimal FDR Cutoff
# ### Overall (no filtering)
overall['%'] = overall['FDRCutoff'] / overall['total']
overall.plot.line(x="matches",y="%",title="Percentage Total Results Represented by Optimal FDR Cutoff (Overall)")
# ### Filtered for Top Peptides (no duplicate peptides)
peptide['%'] = peptide['FDRCutoff'] / peptide['total']
peptide.plot.line(x="matches",y="%",title="Percentage Total Results Represented by Optimal FDR Cutoff (Peptide)")
# ### Filtered for Top Proteins (no duplicate proteins)
protein['%'] = protein['FDRCutoff'] / protein['total']
protein.plot.line(x="matches",y="%",title="Percentage Total Results Represented by Optimal FDR Cutoff (Protein)")
|
.ipynb_checkpoints/Optimal_FDR_Calculations_lib31-checkpoint.ipynb
|
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# + _uuid="8f2839f25d086af736a60e9eeb907d3b93b6e0e5" _cell_guid="b1076dfc-b9ad-4769-8c92-a6c4dae69d19"
# This Python 3 environment comes with many helpful analytics libraries installed
# It is defined by the kaggle/python Docker image: https://github.com/kaggle/docker-python
# For example, here's several helpful packages to load
import numpy as np # linear algebra
import pandas as pd # data processing, CSV file I/O (e.g. pd.read_csv)
import seaborn as sns
import matplotlib.pyplot as plt
from sklearn.preprocessing import StandardScaler
from sklearn.linear_model import LogisticRegression
from sklearn.naive_bayes import MultinomialNB
from sklearn.metrics import classification_report, confusion_matrix, accuracy_score
from sklearn.naive_bayes import GaussianNB
from statsmodels.stats.outliers_influence import variance_inflation_factor
from sklearn.neighbors import KNeighborsClassifier
from imblearn.combine import SMOTETomek
from sklearn.ensemble import RandomForestClassifier, AdaBoostClassifier, BaggingClassifier, GradientBoostingClassifier
from sklearn.tree import DecisionTreeClassifier
from xgboost import XGBClassifier
from catboost import CatBoostClassifier
from lightgbm import LGBMClassifier
from sklearn.model_selection import train_test_split
import warnings
warnings.filterwarnings('ignore')
# Input data files are available in the read-only "../input/" directory
# For example, running this (by clicking run or pressing Shift+Enter) will list all files under the input directory
import os
for dirname, _, filenames in os.walk('/kaggle/input'):
for filename in filenames:
print(os.path.join(dirname, filename))
# You can write up to 20GB to the current directory (/kaggle/working/) that gets preserved as output when you create a version using "Save & Run All"
# You can also write temporary files to /kaggle/temp/, but they won't be saved outside of the current session
# +
#### Import Data
# -
df = pd.read_csv('/kaggle/input/red-wine-quality-cortez-et-al-2009/winequality-red.csv')
df.head()
# #### EDA
df.info()
plt.figure(figsize=(10,6))
sns.kdeplot(df['fixed acidity'], hue=df['quality'])
# Fixed acidity appears approximately normally distributed
plt.figure(figsize=(10,6))
sns.distplot(df['volatile acidity'])
# Volatile acidity appears approximately normally distributed.
df['citric acid'].value_counts()
plt.figure(figsize=(10,6))
sns.distplot(df['citric acid'])
df['residual sugar'].value_counts()
df.groupby('quality').mean()
# **Observations:**
# * Fixed acidity falls in the same range for all quality levels.
# * The lower the volatile acidity, the higher the wine quality.
# * The higher the citric acid level, the higher the wine quality.
# * There is no significant difference in residual sugar between quality levels.
# * The lower the chloride level, the higher the wine quality.
# * There is a relationship between free sulfur dioxide and total sulfur dioxide.
# * The lower the pH value, the higher the wine quality.
# * The higher the sulphate level, the higher the wine quality.
# * Alcohol values fall in the same range for all quality levels.
df.describe()
corr = df.corr()
corr
plt.figure(figsize=(16,10))
sns.heatmap(corr, linewidths=3, annot=True)
# **Observation:**
# * There is collinearity between multiple independent variables.
# ### Scaling
# +
sc = StandardScaler()
df_scaled = pd.DataFrame(sc.fit_transform(df.drop('quality',axis=1)),columns=df.columns[:-1])
df_scaled.head()
# -
# ### Train Test Split
X_train, X_test, y_train, y_test = train_test_split(df_scaled, df['quality'], test_size=0.3, random_state=100)
y_train.value_counts()
# ### Modelling
# +
def metrics(y_true, y_pred):
print('Confusion Matrix:\n', confusion_matrix(y_true, y_pred))
print('\n\nAccuracy Score:\n', accuracy_score(y_true, y_pred))
print('\n\nClassification Report: \n', classification_report(y_true, y_pred))
def predictions(model,X_train=X_train, X_test=X_test, y_train=y_train, y_test=y_test):
model.fit(X_train, y_train)
#predictions
train_pred = model.predict(X_train)
test_pred = model.predict(X_test)
actual = [y_train, y_test]
pred = [train_pred, test_pred]
for i in range(0,2):
if i==0:
print('----Train Metrics----')
else:
print('----Test Metrics----')
metrics(actual[i], pred[i])
# -
# ### Logistic Regression
# +
lg = LogisticRegression(multi_class='ovr')
# -
predictions(lg)
# ### KNN
knn = KNeighborsClassifier()
predictions(knn)
# #### Naive Bayes
# +
nb = GaussianNB()
predictions(nb)
# -
# #### Decision Tree
# +
dtree = DecisionTreeClassifier()
predictions(dtree)
# -
# #### Bagging
bag = BaggingClassifier()
predictions(bag)
# #### Random Forest
# +
rf = RandomForestClassifier()
predictions(rf)
# -
# #### Gradient Boosting
# +
gb = GradientBoostingClassifier()
predictions(gb)
# -
# #### XG Boost
# +
xgb = XGBClassifier()
predictions(xgb)
# -
# #### Light GBM
lgbm = LGBMClassifier()
predictions(lgbm)
# #### CAT Boost
# +
cat = CatBoostClassifier()
predictions(cat)
# -
|
Classification Problems/08-wine-quality-multi-classification.ipynb
|
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# name: python3
# ---
# +
# default_exp evaluation.metrics
# -
# # Metrics
# > Metrics.
#hide
from nbdev.showdoc import *
from fastcore.nb_imports import *
from fastcore.test import *
# +
#export
from typing import List, Tuple
import torch
import numpy as np
import pandas as pd
import math
from recohut.utils.common_utils import remove_duplicates, count_a_in_b_unique
# +
#export
def NDCG(true, pred):
    match = pred.eq(true).nonzero(as_tuple=True)[1]
    ndcg = torch.log(torch.Tensor([2])).div(torch.log(match + 2))
    ndcg = ndcg.sum().div(pred.shape[0]).item()
    return ndcg
def APAK(true, pred):
k = pred.shape[1]
apak = pred.eq(true).div(torch.arange(k) + 1)
apak = apak.sum().div(pred.shape[0]).item()
return apak
def HR(true, pred):
hr = pred.eq(true).sum().div(pred.shape[0]).item()
return hr
def get_eval_metrics(scores, true, k=10):
test_items = [torch.LongTensor(list(item_scores.keys())) for item_scores in scores]
test_scores = [torch.Tensor(list(item_scores.values())) for item_scores in scores]
topk_indices = [s.topk(k).indices for s in test_scores]
topk_items = [item[idx] for item, idx in zip(test_items, topk_indices)]
pred = torch.vstack(topk_items)
    ndcg = NDCG(true, pred)
    apak = APAK(true, pred)
    hr = HR(true, pred)
    return ndcg, apak, hr
# +
scores = [{1: 0.2, 2: 0.3, 3: 0.4, 4: 0.5, 9: 0.1},
{1: 0.2, 2: 0.3, 3: 0.4, 4: 0.5, 9: 0.1},
{1: 0.2, 2: 0.3, 3: 0.4, 4: 0.5, 9: 0.1},
{1: 0.2, 2: 0.3, 3: 0.4, 4: 0.5, 9: 0.1},
{1: 0.2, 2: 0.3, 3: 0.4, 4: 0.5, 9: 0.1}]
true = torch.tensor([[1],[1],[2],[3],[4]])
metric = get_eval_metrics(scores, true, k=3)
metric
# -
# All metrics should be 1, because every relevant item is within the top k=3
true = torch.tensor([[4],[4],[4],[4],[4]])
metric = get_eval_metrics(scores, true, k=3)
metric
# All metrics should be 0, because no relevant item is within the top k=3
true = torch.tensor([[9],[1],[9],[1],[1]])
metric = get_eval_metrics(scores, true, k=3)
metric
#export
def get_eval_metrics_v2(pred_list, topk=10):
NDCG = 0.0
HIT = 0.0
MRR = 0.0
for rank in pred_list:
if rank < topk:
MRR += 1.0 / (rank + 1.0)
NDCG += 1.0 / np.log2(rank + 2.0)
HIT += 1.0
return HIT /len(pred_list), NDCG /len(pred_list), MRR /len(pred_list)
test_eq(np.round(get_eval_metrics_v2(pred_list = [1,3,2], topk=3), 2),
np.array([0.67, 0.38, 0.28]))
test_eq(np.round(get_eval_metrics_v2(pred_list = [1,3,2], topk=2), 2),
np.array([0.33, 0.21, 0.17]))
test_eq(np.round(get_eval_metrics_v2(pred_list = [0,0,0], topk=2), 2),
np.array([1., 1., 1.]))
test_eq(np.round(get_eval_metrics_v2(pred_list = [3,3,3], topk=2), 2),
np.array([0., 0., 0.]))
#export
def precision_at_k_per_sample(actual, predicted, topk):
num_hits = 0
for place in predicted:
if place in actual:
num_hits += 1
return num_hits / (topk + 0.0)
predicted = [0,1,4]
actual = [0,1,2,3]
test_eq(np.round(precision_at_k_per_sample(actual, predicted, topk=2), 2),
np.array([1.]))
test_eq(np.round(precision_at_k_per_sample(actual, predicted, topk=3), 2),
np.array([0.67]))
#export
def precision_at_k(actual, predicted, topk):
sum_precision = 0.0
num_users = len(predicted)
for i in range(num_users):
act_set = set(actual[i])
pred_set = set(predicted[i][:topk])
sum_precision += len(act_set & pred_set) / float(topk)
return sum_precision / num_users
predicted = [[0,1,4], [1,3]]
actual = [[0,1,2,3], [0,1,2]]
test_eq(np.round(precision_at_k(actual, predicted, topk=2), 2),
np.array([0.75]))
test_eq(np.round(precision_at_k(actual, predicted, topk=3), 2),
np.array([0.5]))
#export
def ap_at_k(actual, predicted, topk=10):
"""
Computes the average precision at topk.
This function computes the average precision at topk between two lists of
items.
Parameters
----------
actual : list
A list of elements that are to be predicted (order doesn't matter)
predicted : list
A list of predicted elements (order does matter)
topk : int, optional
The maximum number of predicted elements
Returns
-------
score : double
The average precision at topk over the input lists
"""
if len(predicted)>topk:
predicted = predicted[:topk]
score = 0.0
num_hits = 0.0
for i,p in enumerate(predicted):
if p in actual and p not in predicted[:i]:
num_hits += 1.0
score += num_hits / (i+1.0)
if not actual:
return 0.0
return score / min(len(actual), topk)
predicted = [0,1,4]
actual = [0,1,2,3]
test_eq(np.round(ap_at_k(actual, predicted, topk=2), 2),
np.array([1.]))
test_eq(np.round(ap_at_k(actual, predicted, topk=3), 2),
np.array([0.67]))
#export
def map_at_k(actual, predicted, topk=10):
"""
Computes the mean average precision at topk.
    This function computes the mean average precision at topk between two lists
of lists of items.
Parameters
----------
actual : list
A list of lists of elements that are to be predicted
(order doesn't matter in the lists)
predicted : list
A list of lists of predicted elements
(order matters in the lists)
topk : int, optional
The maximum number of predicted elements
Returns
-------
score : double
The mean average precision at topk over the input lists
"""
return np.mean([ap_at_k(a, p, topk) for a, p in zip(actual, predicted)])
predicted = [[0,1,4], [1,3]]
actual = [[0,1,2,3], [0,1,2]]
test_eq(np.round(map_at_k(actual, predicted, topk=2), 2),
np.array([0.75]))
test_eq(np.round(map_at_k(actual, predicted, topk=3), 2),
np.array([0.5]))
#export
def recall_at_k(actual, predicted, topk):
sum_recall = 0.0
num_users = len(predicted)
true_users = 0
recall_dict = {}
for i in range(num_users):
act_set = set(actual[i])
pred_set = set(predicted[i][:topk])
if len(act_set) != 0:
#sum_recall += len(act_set & pred_set) / float(len(act_set))
one_user_recall = len(act_set & pred_set) / float(len(act_set))
recall_dict[i] = one_user_recall
sum_recall += one_user_recall
true_users += 1
return sum_recall / true_users, recall_dict
predicted = [[0,1,4], [1,3]]
actual = [[0,1,2,3], [0,1,2]]
test_eq(np.round(recall_at_k(actual, predicted, topk=2)[0], 2),
np.array([0.42]))
test_eq(np.round(recall_at_k(actual, predicted, topk=3)[0], 2),
np.array([0.42]))
#export
def cal_mrr(actual, predicted):
sum_mrr = 0.
true_users = 0
num_users = len(predicted)
mrr_dict = {}
for i in range(num_users):
r = []
act_set = set(actual[i])
pred_list = predicted[i]
for item in pred_list:
if item in act_set:
r.append(1)
else:
r.append(0)
r = np.array(r)
if np.sum(r) > 0:
#sum_mrr += np.reciprocal(np.where(r==1)[0]+1, dtype=np.float)[0]
            one_user_mrr = np.reciprocal(np.where(r==1)[0]+1, dtype=float)[0]
sum_mrr += one_user_mrr
true_users += 1
mrr_dict[i] = one_user_mrr
else:
mrr_dict[i] = 0.
return sum_mrr / len(predicted), mrr_dict
predicted = [[0,1,4], [1,3]]
actual = [[0,1], [0,1]]
test_eq(np.round(cal_mrr(actual, predicted)[0], 2),
np.array([1.]))
#export
def ndcg_at_k(actual, predicted, topk):
    res = 0
    ndcg_dict = {}
    for user_id in range(len(actual)):
        k = min(topk, len(actual[user_id]))
        # ideal DCG: every one of the first k positions holds a relevant item
        idcg = sum([1.0/math.log(i+2, 2) for i in range(k)]) or 1.0
        dcg_k = sum([int(predicted[user_id][j] in
                    set(actual[user_id])) / math.log(j+2, 2) for j in range(topk)])
        res += dcg_k / idcg
        ndcg_dict[user_id] = dcg_k / idcg
    return res / float(len(actual)), ndcg_dict
predicted = [[0,1,4]]
actual = [[0,1,2,3]]
test_eq(np.round(ndcg_at_k(actual, predicted, topk=2)[0], 2),
        np.array([1.]))
test_eq(np.round(ndcg_at_k(actual, predicted, topk=3)[0], 2),
        np.array([0.77]))
# ## precision
#export
def precision(ground_truth, prediction):
"""
Compute Precision metric
:param ground_truth: the ground truth set or sequence
:param prediction: the predicted set or sequence
:return: the value of the metric
"""
ground_truth = remove_duplicates(ground_truth)
prediction = remove_duplicates(prediction)
precision_score = count_a_in_b_unique(prediction, ground_truth) / float(len(prediction))
assert 0 <= precision_score <= 1
return precision_score
# +
ground_truth = [[1],[3],[4],[8],[9]]
prediction = [[1],[4],[5],[9]]
test_eq(precision(ground_truth, prediction), 0.75)
# -
# ## recall
#export
def recall(ground_truth, prediction):
"""
Compute Recall metric
:param ground_truth: the ground truth set or sequence
:param prediction: the predicted set or sequence
:return: the value of the metric
"""
ground_truth = remove_duplicates(ground_truth)
prediction = remove_duplicates(prediction)
recall_score = 0 if len(prediction) == 0 else count_a_in_b_unique(prediction, ground_truth) / float(
len(ground_truth))
assert 0 <= recall_score <= 1
return recall_score
# +
ground_truth = [[1],[3],[4],[8],[9]]
prediction = [[1],[4],[5],[9]]
test_eq(recall(ground_truth, prediction), 0.6)
# -
# ## mrr
#export
def mrr(ground_truth, prediction):
"""
    Compute Mean Reciprocal Rank metric. Reciprocal Rank is set to 0 if no predicted item is contained in the ground truth.
:param ground_truth: the ground truth set or sequence
:param prediction: the predicted set or sequence
:return: the value of the metric
"""
rr = 0.
for rank, p in enumerate(prediction):
if p in ground_truth:
rr = 1. / (rank + 1)
break
return rr
# +
ground_truth = [[1],[3],[4],[8],[9]]
prediction = [[1],[4],[5],[9]]
test_eq(mrr(ground_truth, prediction), 1.)
prediction = [[5],[1],[4],[9]]
test_eq(mrr(ground_truth, prediction), 0.5)
# -
# ## novelty
#export
def novelty(predictions: List[list],
train_df: pd.DataFrame,
user_col: str = 'user_id',
item_col: str = 'item_id') -> Tuple[float, List[Tuple[float, float]]]:
pop = train_df[item_col].value_counts().to_dict()
u = train_df[user_col].nunique() # number of users in the training data
n = max(map(len, predictions)) # length of recommended lists per user
mean_self_information = []
k = 0
for sublist in predictions:
self_information = 0
k += 1
for i in sublist:
self_information += np.sum(-np.log2(pop[i]/u))
mean_self_information.append(self_information/n)
novelty = sum(mean_self_information)/k
return novelty, mean_self_information
# Example
_df = pd.DataFrame({
'song_id': {0: '16', 1: '17', 2: '18', 3: '60', 4: '61'},
'user_id': {0: '4', 1: '4', 2: '4', 3: '10', 4: '10'}
})
_df
predictions = [['16','17','18'],['16','60']]
print(novelty(predictions, _df, item_col='song_id'))
test_eq(novelty(predictions, _df, item_col='song_id')[0].round(2), 0.83)
# ## coverage
#export
def coverage(predictions: List[list],
train_df: pd.DataFrame,
item_col: str = 'item_id') -> float:
catalog = train_df[item_col].unique().tolist() # list of items in the training data
predictions_flattened = [p for sublist in predictions for p in sublist]
unique_predictions = len(set(predictions_flattened))
prediction_coverage = round(unique_predictions/(len(catalog)* 1.0)*100,2)
return prediction_coverage
# Example
predictions = [['16','17','18'],['16','60']]
test_eq(coverage(predictions, _df, item_col='song_id'), 80.0)
# > **References:-**
# - https://github.com/massquantity/DBRL/blob/master/dbrl/evaluate/metrics.py
# - [https://github.com/NVIDIA-Merlin/Transformers4Rec/blob/main/transformers4rec/torch/ranking_metric.py](https://github.com/NVIDIA-Merlin/Transformers4Rec/blob/main/transformers4rec/torch/ranking_metric.py)
# - [https://github.com/karlhigley/ranking-metrics-torch](https://github.com/karlhigley/ranking-metrics-torch)
# - [https://github.com/mquad/sars_tutorial/blob/master/util/metrics.py](https://github.com/mquad/sars_tutorial/blob/master/util/metrics.py)
#hide
# %reload_ext watermark
# %watermark -a "Sparsh A." -m -iv -u -t -d -p recohut
|
nbs/evaluation/evaluation.metrics.ipynb
|
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # Lesson 6 - File IO
# ---
#
# One of the most useful capabilities for our scripts is reading from and writing to files. With it, we can persist information and/or read it back for processing. Let's learn how to do this with Python!
# We will use Python's `open()` function. It expects a file name and the mode in which to open it. For now, we will use `r` for reading and `w` for writing.
#
# The algorithm for creating a file in Python is:
#
#     create a variable that holds the file
#     write one or more string(s)
#     close the file
# +
file_name = 'meu_arquivo.txt'
file = open(file_name, "w")
file.write('This is a line\n')
# +
# Write one more line to file here!
# -
file.close() # Closing the file
# Another way to do this is with the `with` keyword. In that case, Python takes care of calling `close()` when we are done with the file. See how it looks below:
with open("meu_arquivo_2.txt", 'w') as f:
    # In this case, f is our file variable
    f.write('Writing to the file')
    f.write('Writing to the file again')
# To read, just switch to mode `r` and use a `for` loop to iterate over all the lines of the file.
with open("meu_arquivo.txt", 'r') as f:
for line in f:
print(line)
# ---
#
# ## CSV Files
#
# CSV (Comma-Separated Values) files are useful for reading and saving information in a structured way, like a table. They are widely used for preprocessing and for building databases for transfers.
#
# Python gives us a `csv` library for working with this type of file, making reading and writing easier. Just use the command `import csv`.
# +
import csv
with open('cadastros.csv', mode='w') as cadastro_file:
cadastro_writer = csv.writer(cadastro_file, delimiter=',', quotechar='"', quoting=csv.QUOTE_MINIMAL)
cadastro_writer.writerow(['Name', 'Class', 'Birthday'])
cadastro_writer.writerow(['<NAME>', 'Jedi', '22/10/1988'])
cadastro_writer.writerow(['<NAME>', 'Jedi Master', '21/09/2540'])
# -
# An explanation of the `quotechar` parameter:
#
# The quotechar optional parameter tells the writer which character to use to quote fields when writing. Whether quoting is used or not, however, is determined by the quoting optional parameter:
#
# - If quoting is set to csv.QUOTE_MINIMAL, then .writerow() will quote fields only if they contain the delimiter or the quotechar. This is the default case.
# - If quoting is set to csv.QUOTE_ALL, then .writerow() will quote all fields.
# - If quoting is set to csv.QUOTE_NONNUMERIC, then .writerow() will quote all fields containing text data and convert all numeric fields to the float data type.
# - If quoting is set to csv.QUOTE_NONE, then .writerow() will escape delimiters instead of quoting them. In this case, you also must provide a value for the escapechar optional parameter.
#
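# The quoting modes above can be seen directly by writing the same row with
# different `quoting` settings into an in-memory buffer (a small sketch; the
# row values are made up):

```python
import csv
import io

row = ['Luke', 'Jedi, Master', '42']  # middle field contains the delimiter

outputs = {}
for quoting, name in [(csv.QUOTE_MINIMAL, 'QUOTE_MINIMAL'),
                      (csv.QUOTE_ALL, 'QUOTE_ALL')]:
    buf = io.StringIO()
    writer = csv.writer(buf, delimiter=',', quotechar='"', quoting=quoting)
    writer.writerow(row)
    outputs[name] = buf.getvalue().strip()
    print(name, '->', outputs[name])
# QUOTE_MINIMAL -> Luke,"Jedi, Master",42
# QUOTE_ALL -> "Luke","Jedi, Master","42"
```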
with open('cadastros.csv') as csv_file:
csv_reader = csv.reader(csv_file, delimiter=',')
for row in csv_reader:
print(row)
''' How can we print each value of each row on a separate line, skipping the first row?
<NAME>
Jedi
22/10/1988
<NAME>
Jedi Master
21/09/2540
'''
pass
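# One way to answer the question in the docstring above: skip the header row
# with `next()` and print each field on its own line. This sketch recreates a
# small `cadastros.csv` with hypothetical names so it runs on its own:

```python
import csv

# Recreate a sample file (made-up names) like the earlier writer cell
with open('cadastros.csv', mode='w', newline='') as f:
    writer = csv.writer(f, delimiter=',', quotechar='"', quoting=csv.QUOTE_MINIMAL)
    writer.writerow(['Name', 'Class', 'Birthday'])
    writer.writerow(['Luke', 'Jedi', '22/10/1988'])
    writer.writerow(['Yoda', 'Jedi Master', '21/09/2540'])

values = []
with open('cadastros.csv') as csv_file:
    csv_reader = csv.reader(csv_file, delimiter=',')
    next(csv_reader)  # skip the header row
    for row in csv_reader:
        for value in row:
            print(value)
            values.append(value)
```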
# ---
#
# ## Exercises - Part 1
#
# In the next exercises we will use the file `hp1_2.txt`.
# 1. How many times does the word `harry` appear? *hint: you can use the String `count()` function*
# 2. On how many lines does the word `harry` appear?
# 3. Considering the characters `Hagrid`, `Hermione`, `Harry`, `Rony`, `Draco` and `Snape`, count how many times each one is mentioned in the book and write that to a csv, in the format:
#
# `Personagem,Citacoes
# Hagrid,10
# Hermione,12
# Harry,23
# Rony,30
# Draco,50
# Snape,10`
#
#
# 4. Write a function that receives a new name for `harry` and replaces every occurrence found with the new name received as a parameter. Your function should be used as follows:
# `novo_livro = troca_harry('mickey')`
#
# 5. Make the function above generic, i.e., have it receive a character name and a new name for the replacement. The function returns the book with the names swapped. Example: `novo_livro = reescreve_livro('Hermione', 'Mafalda')`
# ## Exercises - Part 2
#
# In the next exercises we will use the file `movies.csv`.
#
# 1. How many movies are in this dataset?
# 2. How many comedy movies are there?
# 3. How many animations are there? How many of those are musicals?
# 4. Consider the years `1990`, `1991`, `1992`, `1993`, `1994` and `1995`. Read the csv file and write a new one containing the number of movies per year, one per line. For example:
#
# `ano,filmes
# 1990,10
# 1991,40
# 1992,30
# 1993,50
# 1994,10
# 1995,50`
# +
import csv
new_genres = []
with open('movies.csv', encoding = 'utf-8') as movies:
movie_reader = csv.DictReader(movies)
for linha in movie_reader:
new_genres = (linha['genres'].split('|'))
del linha['genres']
linha['genres'] = new_genres
print(linha['genres'])
# -
|
Aula 6/Aula 6 - File IO.ipynb
|
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# For anyone curious about how to generate a submission identical to gender_submission, this is the code.
import pandas as pd
data = pd.read_csv("test.csv")
data.head()
e_feminino = (data['Sex'] == 'female').astype(int)
e_feminino.head()
e_feminino.index = data['PassengerId']
e_feminino.head()
e_feminino.name = 'Survived'
e_feminino.head()
e_feminino.to_csv('gender_submission.csv', header=True)
# !head -n10 gender_submission.csv
|
Titanic/titanic_video1.ipynb
|
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# + id="7qBwPxPuNCNM" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 34} outputId="52e5ee2d-7e88-4311-b2b7-0bcdac7a9bc9" executionInfo={"status": "ok", "timestamp": 1557667273321, "user_tz": -480, "elapsed": 807, "user": {"displayName": "<NAME>", "photoUrl": "https://lh6.googleusercontent.com/-uQkngopOb5g/AAAAAAAAAAI/AAAAAAAADHE/eZZS-2UZ99w/s64/photo.jpg", "userId": "18251189279685617739"}}
# %cd /gdrive/My\ Drive/bert/bert-cn/
# + id="r_XaT6ZUNCNS" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 68} outputId="a5afd4d2-99b2-4a5c-b3f9-ec040cf57b55" executionInfo={"status": "ok", "timestamp": 1557667336793, "user_tz": -480, "elapsed": 64276, "user": {"displayName": "<NAME>", "photoUrl": "https://lh6.googleusercontent.com/-uQkngopOb5g/AAAAAAAAAAI/AAAAAAAADHE/eZZS-2UZ99w/s64/photo.jpg", "userId": "18251189279685617739"}}
# !git add .
# + id="7aR1W1CfNCNV" colab_type="code" colab={}
# !git config --global user.email "<EMAIL>"
# !git config --global user.name "<NAME>"
# + id="Zp_2O_bQNCNX" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 357} outputId="4c11313e-8d4b-484c-9995-0e3524a2246a" executionInfo={"status": "ok", "timestamp": 1557667345208, "user_tz": -480, "elapsed": 70431, "user": {"displayName": "<NAME>", "photoUrl": "https://lh6.googleusercontent.com/-uQkngopOb5g/AAAAAAAAAAI/AAAAAAAADHE/eZZS-2UZ99w/s64/photo.jpg", "userId": "18251189279685617739"}}
# !git commit -m "commit"
# + id="2RVpw5VfNCNa" colab_type="code" colab={} outputId="e327507e-0e52-4c1f-d2a1-ce4ebd6026fa"
# Run via the command line
# !git push
# + id="FUY1n5ARNCNd" colab_type="code" colab={} outputId="7313d950-c98a-4558-9761-5ed469b0e336"
# # %cd .git
# + id="B3ZaryURNCNh" colab_type="code" colab={} outputId="de05b524-ea60-45b1-a112-c6162ea7bbf4"
# !ls
# + id="ycReoMwVNCNk" colab_type="code" colab={} outputId="7cc95a32-5a8a-49a4-8a20-d02c5065d2c4"
# !cat config
# + id="mmwbpFVYNCNn" colab_type="code" colab={}
|
tools/git.ipynb
|
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# name: python3
# ---
# + [markdown] id="view-in-github" colab_type="text"
# <a href="https://colab.research.google.com/github/EvenSol/NeqSim-Colab/blob/master/notebooks/thermodynamics/thermodynamicsOfWax.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a>
# + id="5tAs8wri2Z-K"
#@markdown This document is part of the module ["Introduction to Gas Processing using NeqSim in Colab"](https://colab.research.google.com/github/EvenSol/NeqSim-Colab/blob/master/notebooks/examples_of_NeqSim_in_Colab.ipynb#scrollTo=_eRtkQnHpL70).
# %%capture
# !pip install neqsim
# !pip install wget
import wget
url = "https://github.com/equinor/neqsim/releases/download/v2.0.1/neqsimthermodatabase.zip"
wget.download(url, 'neqsimdatabase.zip')
# !unzip neqsimdatabase.zip -d neqsimdatabase
from neqsim import setDatabase
setDatabase("jdbc:derby:neqsimdatabase/neqsimthermodatabase")
# + id="tsEAPIJ0G7PG" cellView="form" outputId="1f7a10d8-471e-44d0-a129-3cb3e43f1cd9" colab={"base_uri": "https://localhost:8080/", "height": 422}
#@title Introduction to Wax
#@markdown This video gives an introduction to the behaviour of wax in oil and gas production
from IPython.display import YouTubeVideo
YouTubeVideo('9vne-gWFQBw', width=600, height=400)
# + [markdown] id="BUAuhJM2G14s"
# # Demonstration of a wax calculation in neqsim
# + id="Z_VvMJF9FTUc" outputId="f151bcc0-36e5-4d8a-9081-759d2a860d69" colab={"base_uri": "https://localhost:8080/"}
from neqsim.thermo import *
# Start by creating a fluid in neqsim
fluid1 = fluid("srk") # create a fluid using the SRK-EoS
fluid1.addComponent("nitrogen", 1.0, "mol/sec")
fluid1.addComponent("CO2", 2.3, "mol/sec")
fluid1.addComponent("methane", 80.0, "mol/sec")
fluid1.addComponent("ethane", 6.0, "mol/sec")
fluid1.addComponent("propane", 3.0, "mol/sec")
fluid1.addComponent("i-butane", 1.0, "mol/sec")
fluid1.addComponent("n-butane", 1.0, "mol/sec")
fluid1.addPlusFraction("C11", 2.95, 217.0 / 1000.0, 0.8331);
fluid1.getCharacterization().characterisePlusFraction();
fluid1.getWaxModel().addTBPWax();
fluid1.createDatabase(True);
fluid1.setMixingRule(2);
fluid1.addSolidComplexPhase("wax");
fluid1.setMultiphaseWaxCheck(True);
fluid1.setTemperature(10.112, "C")
fluid1.setPressure(10.0, "bara")
TPflash(fluid1)
printFrame(fluid1)
fluid1.setTemperature(40.112, "C")
fluid1.setPressure(10.0, "bara")
waxTemp = WAT(fluid1)-273.15
#printFrame(fluid1)
print("WAT ", waxTemp, " °C")
# + [markdown] id="DlfUFnOvmkYU"
# # Tuning to wax PVT data
#
# Wax PVT studies are typically done by measuring the weight fraction of wax formed as a function of temperature and pressure. In the following example we illustrate how the neqsim wax model can be tuned to fit experimental data.
#
# + id="oUXdPdDmnCI1" colab={"base_uri": "https://localhost:8080/"} outputId="70fb9274-f752-4a9f-e33f-5c380e1e02e0"
from neqsim.thermo import tunewaxmodel
experimentaldata = {'temperature': [22.0, 20.0, 10.0],
'pressure': [10.0, 10.0, 10.0],
'experiment': [0.02, 0.04, 0.06]
}
waxfitresults = tunewaxmodel(fluid1, experimentaldata,maxiterations=5) # try reducing maxiterations if convergence problems
print(waxfitresults)
|
notebooks/thermodynamics/thermodynamicsOfWax.ipynb
|
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# <p><font size="6"><b> CASE - Bacterial resistance experiment</b></font></p>
#
#
# > *DS Data manipulation, analysis and visualisation in Python*
# > *December, 2017*
#
# > *© 2017, <NAME> and <NAME> (<mailto:<EMAIL>>, <mailto:<EMAIL>>). Licensed under [CC BY 4.0 Creative Commons](http://creativecommons.org/licenses/by/4.0/)*
#
# ---
# In this case study, we will make use of the open data, affiliated to the following [journal article](http://rsbl.royalsocietypublishing.org/content/12/5/20160064):
#
# >Arias-Sánchez FI, Hall A (2016) Effects of antibiotic resistance alleles on bacterial evolutionary responses to viral parasites. Biology Letters 12(5): 20160064. https://doi.org/10.1098/rsbl.2016.0064
#
#
# <img src="http://blogs.discovermagazine.com/notrocketscience/files/2011/05/Bacteriophage.jpg">
# Check the full paper on the [web version](http://rsbl.royalsocietypublishing.org/content/12/5/20160064). The study handles:
# > Antibiotic resistance has wide-ranging effects on bacterial phenotypes and evolution. However, the influence of antibiotic resistance on bacterial responses to parasitic viruses remains unclear, despite the ubiquity of such viruses in nature and current interest in therapeutic applications. We experimentally investigated this by exposing various Escherichia coli genotypes, including eight antibiotic-resistant genotypes and a mutator, to different viruses (lytic bacteriophages). Across 960 populations, we measured changes in population density and sensitivity to viruses, and tested whether variation among bacterial genotypes was explained by their relative growth in the absence of parasites, or mutation rate towards phage resistance measured by fluctuation tests for each phage
# +
# %matplotlib inline
import pandas as pd
import matplotlib.pyplot as plt
import plotnine as pn
# -
# ## Reading and processing the data
# The data is available on [Dryad](http://www.datadryad.org/resource/doi:10.5061/dryad.90qb7.3), a general purpose data repository providing all kinds of data sets linked to journal papers. The downloaded data is available in this repository in the `data` folder as an excel-file called `Dryad_Arias_Hall_v3.xlsx`.
#
# For the exercises, two sheets of the excel file will be used:
# * `Main experiment`:
#
#
# | Variable name | Description |
# |---------------:|:-------------|
# |**AB_r**  | Antibiotic resistance |
# |**Bacterial_genotype** | Bacterial genotype |
# |**Phage_t** | Phage treatment |
# |**OD_0h** | Optical density at the start of the experiment (0h) |
# |**OD_20h** | Optical density after 20h |
# |**OD_72h** | Optical density at the end of the experiment (72h) |
# |**Survival_72h** | Population survival at 72h (1=survived, 0=extinct) |
# |**PhageR_72h** | Bacterial sensitivity to the phage they were exposed to (0=no bacterial growth, 1= colony formation in the presence of phage) |
#
# * `Falcor`: we focus on a subset of the columns:
#
# | Variable name | Description |
# |---------------:|:-------------|
# | **Phage** | Bacteriophage used in the fluctuation test (T4, T7 and lambda) |
# | **Bacterial_genotype** | Bacterial genotype. |
# | **log10 Mc** | Log 10 of corrected mutation rate |
# | **log10 UBc** | Log 10 of corrected upper bound |
# | **log10 LBc** | Log 10 of corrected lower bound |
# Reading the `main experiment` data set from the corresponding sheet:
main_experiment = pd.read_excel("../data/Dryad_Arias_Hall_v3.xlsx", sheet_name="Main experiment")
main_experiment.head()
# Read the `Falcor` data and subset the columns of interest:
falcor = pd.read_excel("../data/Dryad_Arias_Hall_v3.xlsx", sheet_name="Falcor",
skiprows=1)
falcor = falcor[["Phage", "Bacterial_genotype", "log10 Mc", "log10 UBc", "log10 LBc"]]
falcor.head()
# ## Tidy the `main_experiment` data
# *(If you're wondering what `tidy` data representations are, check again the `visualization_02_plotnine.ipynb` notebook)*
# Actually, the columns `OD_0h`, `OD_20h` and `OD_72h` represent the same variable (i.e. `optical_density`), and the column names themselves encode another variable, i.e. `experiment_time_h`. Hence, the table is stored in *wide* format, and we can *tidy* these columns by converting them into 2 columns: `experiment_time_h` and `optical_density`.
# <div class="alert alert-success">
#
# <b>EXERCISE</b>:
#
# <ul>
# <li>Convert the columns `OD_0h`, `OD_20h` and `OD_72h` to a long format with the values stored in a column `optical_density` and the time in the experiment as `experiment_time_h`. Save the variable as `tidy_experiment`</li>
#
# </ul>
# </div>
# + clear_cell=true
tidy_experiment = main_experiment.melt(id_vars=['AB_r', 'Bacterial_genotype', 'Phage_t',
'Survival_72h', 'PhageR_72h'],
value_vars=['OD_0h', 'OD_20h', 'OD_72h'],
var_name='experiment_time_h',
value_name='optical_density', )
tidy_experiment.head()
# -
# ## Visual data exploration
tidy_experiment.head()
# <div class="alert alert-success">
#
# <b>EXERCISE</b>:
#
# <ul>
# <li>Make a histogram to check the distribution of the `optical_density`</li>
# <li>Change the border color of the bars to `white` and the fill color to `lightgrey`</li>
# <li>Change the overall theme to any of the available themes</li>
#
# </ul>
# </div>
# + clear_cell=true
(pn.ggplot(tidy_experiment, pn.aes(x='optical_density'))
+ pn.geom_histogram(bins=30, color='white', fill='lightgrey')
+ pn.theme_bw()
)
# -
# <div class="alert alert-success">
#
# <b>EXERCISE</b>:
#
# <ul>
# <li>Use a *violin plot* to check the distribution of the `optical_density` in each of the experiment time phases (`experiment_time_h`)</li>
#
# </ul>
# </div>
# + clear_cell=true
(pn.ggplot(tidy_experiment, pn.aes(x='experiment_time_h',
y='optical_density'))
+ pn.geom_violin()
)
# -
# <div class="alert alert-success">
#
# <b>EXERCISE</b>:
#
# <ul>
# <li>For each `Phage_t` in an individual subplot, use a *violin plot* to check the distribution of the `optical_density` in each of the experiment time phases (`experiment_time_h`)</li>
#
# </ul>
# </div>
#
#
# + clear_cell=true
(pn.ggplot(tidy_experiment, pn.aes(x='experiment_time_h',
y='optical_density'))
+ pn.geom_violin()
+ pn.facet_wrap('Phage_t')
)
# -
# <div class="alert alert-success">
#
# <b>EXERCISE</b>:
#
# <ul>
# <li>Create a summary table of the average `optical_density` with the `Bacterial_genotype` in the rows and the `experiment_time_h` in the columns</li>
# </ul>
# </div>
#
#
# + clear_cell=true
pd.pivot_table(tidy_experiment, values='optical_density',
index='Bacterial_genotype',
columns='experiment_time_h',
aggfunc='mean')
# + clear_cell=true
tidy_experiment.groupby(['Bacterial_genotype', 'experiment_time_h'])['optical_density'].mean().unstack()
# -
# <div class="alert alert-success">
#
# <b>EXERCISE</b>:
#
# <ul>
# <li>Calculate for each combination of `Bacterial_genotype`, `Phage_t` and `experiment_time_h` the *mean* `optical_density` and store the result as a dataframe called `density_mean`</li>
# <li>Based on `density_mean`, make a *barplot* of the mean values for each `Bacterial_genotype`, with for each Bacterial_genotype an individual bar per `Phage_t` in a different color (grouped bar chart).</li>
# <li>Use the `experiment_time_h` to split into subplots. As we mainly want to compare the values within each subplot, make sure the scales in each of the subplots are adapted to the data range, and put the subplots on different rows.</li>
# <li>(OPTIONAL) change the color scale of the bars to a color scheme provided by [colorbrewer](http://colorbrewer2.org/#type=sequential&scheme=BuGn&n=3)</li>
#
# </ul>
# </div>
#
#
# + clear_cell=true
density_mean = (tidy_experiment
.groupby(['Bacterial_genotype','Phage_t', 'experiment_time_h'])
                           .mean().reset_index())
# -
density_mean.head()
# + clear_cell=true
(pn.ggplot(density_mean, pn.aes(x='Bacterial_genotype',
y='optical_density',
fill='Phage_t'))
+ pn.geom_bar(stat='identity', position='dodge')
+ pn.facet_wrap('experiment_time_h', dir='v', scales='free')
+ pn.scale_fill_brewer(type='qual', palette=8)
)
# -
# ## Reproduce the graphs of the original paper
# <div class="alert alert-success">
#
# <b>EXERCISE</b>:
#
# <ul>
# <li>Check Figure 2 of the original journal paper in the *correction* part of the pdf: http://rsbl.royalsocietypublishing.org/content/roybiolett/12/5/20160064.full.pdf</li>
# <img src="http://rsbl.royalsocietypublishing.org/content/roybiolett/12/10/20160759/F1.large.jpg" width="500">
# <li>Reproduce the graph using the `falcor` data and the plotnine package (don't bother yet about the style or the order on the x axis). The "log10 mutation rate" on the figure corresponds to the `log10 Mc` column.</li>
# <li>Check the [documentation](http://plotnine.readthedocs.io/en/stable/generated/plotnine.geoms.geom_errorbar.html#plotnine.geoms.geom_errorbar) to find out how to add errorbars to the graph. The upper and lower bound for the error bars are given in the `log10 UBc` and `log10 LBc` columns.</li>
# <li>Make sure `WT(2)` and `MUT(2)` are relabelled as `WT` and `MUT`, respectively.</li>
# </ul>
# </div>
#
# + clear_cell=true
falcor["Bacterial_genotype"] = falcor["Bacterial_genotype"].replace({'WT(2)': 'WT',
'MUT(2)': 'MUT'})
# + clear_cell=true
(pn.ggplot(falcor, pn.aes(x='Bacterial_genotype', y='log10 Mc'))
+ pn.geom_point()
+ pn.facet_wrap('Phage', dir='v')
+ pn.geom_errorbar(pn.aes(ymin='log10 LBc', ymax='log10 UBc'), width=.2)
+ pn.theme_bw()
)
# -
# <div class="alert alert-success">
#
# <b>EXERCISE (OPTIONAL)</b>:
#
# <ul>
# <li>Check Figure 1 of the original journal paper: http://rsbl.royalsocietypublishing.org/content/12/5/20160064#F1: <img src="http://rsbl.royalsocietypublishing.org/content/roybiolett/12/5/20160064/F1.large.jpg" width="500"> </li>
# <li>Reproduce the graph using the `tidy_experiment` data and the plotnine package. Notice that the plot shows the optical density at the end of the experiment (72h).</li>
# <li>Take the `geom_` that most closely represents the original.</li>
# <li>Check the [documentation](http://plotnine.readthedocs.io/en/stable/api.html) for further tuning, e.g. [`as_labeller`](http://plotnine.readthedocs.io/en/stable/generated/plotnine.facets.labelling.as_labeller.html#plotnine.facets.labelling.as_labeller)...</li>
# </ul>
# </div>
#
#
# + clear_cell=true
end_of_experiment = tidy_experiment[tidy_experiment["experiment_time_h"] == "OD_72h"].copy()
# + clear_cell=true
# The NaN values of PhageR_72h correspond to populations that were not exposed to phage; encode them as 0.
end_of_experiment["PhageR_72h"] = end_of_experiment["PhageR_72h"].fillna(0.)
# + clear_cell=true
# precalculate the median value
end_of_experiment["Phage_median"] = end_of_experiment.groupby(["Phage_t", "Bacterial_genotype"])['optical_density'].transform('median')
pn.options.figure_size = (8, 10)
(pn.ggplot(end_of_experiment, pn.aes(x='Bacterial_genotype',
y='optical_density'))
+ pn.geom_jitter(mapping=pn.aes(color='factor(PhageR_72h)'),
width=0.2, height=0., size=2, fill='white')
+ pn.facet_wrap("Phage_t", nrow=4,
labeller=pn.as_labeller({'C_noPhage' : '(a) no phage', 'L' : '(b) phage $\lambda$',
'T4' : '(c) phage T4', 'T7': '(d) phage T7'}))
+ pn.theme_bw()
+ pn.xlab("Bacterial genotype")
+ pn.ylab("Bacterial density (OD)")
+ pn.theme(strip_text=pn.element_text(size=11))
+ pn.geom_crossbar(inherit_aes=False, alpha=0.5,
mapping=pn.aes(x='Bacterial_genotype', y='Phage_median',
ymin='Phage_median', ymax='Phage_median'))
+ pn.scale_color_manual(values=["black", "red"], guide=False)
)
|
_solved/case3_bacterial_resistance_lab_experiment.ipynb
|
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .jl
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Julia 1.8.0-DEV
# language: julia
# name: julia-1.8
# ---
# +
using LinearAlgebra
function sarrus(A)
@assert size(A) == (3, 3)
a, b, c, d, e, f, g, h, k = A
    a*e*k + b*f*g + c*d*h - a*f*h - b*d*k - c*e*g
end
# +
a, b, c = 1e15, 1e15+1, 1e15+2
x, y = 3, 6
z = -1
A = [
a b c
a+x b+x c+x
a+y b+y c+y+z
]
# -
B = [
a-b b c
0 x x
0 0 z
]
det(B)
sarrus(B)
(a - b)*x*z
det(A)
sarrus(A)
Q, R = qr(A)
det(R)
|
0017/sarrus.ipynb
|
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
from __future__ import print_function, division
from sympy import *
init_printing(use_unicode=True)
# ## Spong
m1, l1, lc1, I1, q1, q1dot = symbols('m1 l1 lc1 I1 q1 qdot1')
m2, lc2, I2, q2, q2dot = symbols('m2 lc2 I2 q2 qdot2')
M = Matrix([[m1*lc1**2+m2*(l1**2+lc2**2+2*l1*lc2*cos(q2))+I1+I2,
m2*(lc2**2+l1*lc2*cos(q2))+I2],
[m2*(lc2**2+l1*lc2*cos(q2))+I2,
m2*lc2**2+I2]])
M
C = Matrix([[-2*m2*l1*lc2*sin(q2)*q2dot,
-m2*l1*lc2*sin(q2)*q2dot],
[m2*l1*lc2*sin(q2)*q1dot,
0]])
C
g = symbols('g')
G = g*Matrix([[(m1*lc1+m2*l1)*cos(q1)+m2*lc2*cos(q1+q2)],
[m2*lc2*cos(q1+q2)]])
G
u= symbols('u')
qdotdot = -M**(-1)*(C*Matrix([[q1dot],[q2dot]])+G
-Matrix([[0],[1]])*u)
collect(factor(simplify(cancel(qdotdot[0]))),q1dot)
# ### Make sure the system is at rest in fixed points
qdotdot.subs([(q1,pi/2),(q2,0), (u,0),(q1dot,0), (q2dot,0)])
qdotdot.subs([(q1,-pi/2),(q2,0), (u,0),(q1dot,0), (q2dot,0)])
diff(qdotdot[0],q1dot).subs([(q1,pi/2),(q2,0), (u,0), (q1dot,0), (q2dot,0)])
A20=diff(qdotdot[0],q1).subs([(q1,pi/2),(q2,0),
(u,0), (q1dot,0), (q2dot,0)])
A20.subs([(m1,1), (m2,1), (l1,1), (lc1,0.5), (lc2,1),
(I1,0.083), (I2,0.33), (g,9.8)])
A21=diff(qdotdot[0],q2).subs([(q1,pi/2),(q2,0),
(u,0), (q1dot,0), (q2dot,0)])
A21.subs([(m1,1), (m2,1), (l1,1), (lc1,0.5), (lc2,1),
(I1,0.083), (I2,0.33), (g,9.8)])
A30=diff(qdotdot[1],q1).subs([(q1,pi/2),(q2,0),
(u,0), (q1dot,0), (q2dot,0)])
A30.subs([(m1,1), (m2,1), (l1,1), (lc1,0.5), (lc2,1),
(I1,0.083), (I2,0.33), (g,9.8)])
A31=diff(qdotdot[1],q2).subs([(q1,pi/2),(q2,0),
(u,0), (q1dot,0), (q2dot,0)])
A31.subs([(m1,1), (m2,1), (l1,1), (lc1,0.5), (lc2,1),
(I1,0.083), (I2,0.33), (g,9.8)])
B20=diff(qdotdot[0],u).subs([(q1,pi/2),(q2,0),
(u,0), (q1dot,0), (q2dot,0)])
B20.subs([(m1,1), (m2,1), (l1,1), (lc1,0.5), (lc2,1),
(I1,0.083), (I2,0.33), (g,9.8)])
B30=diff(qdotdot[1],u).subs([(q1,pi/2),(q2,0),
(u,0), (q1dot,0), (q2dot,0)])
B30.subs([(m1,1), (m2,1), (l1,1), (lc1,0.5), (lc2,1),
(I1,0.083), (I2,0.33), (g,9.8)])
|
src/2.1_LQR/Spong-Acrobot.ipynb
|
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # Dropout
# Dropout [1] is a technique for regularizing neural networks by randomly setting some features to zero during the forward pass. In this exercise you will implement a dropout layer and modify your fully-connected network to optionally use dropout.
#
# [1] <NAME> et al, "Improving neural networks by preventing co-adaptation of feature detectors", arXiv 2012
# +
# As usual, a bit of setup
from __future__ import print_function
import time
import numpy as np
import matplotlib.pyplot as plt
from cs231n.classifiers.fc_net import *
from cs231n.data_utils import get_CIFAR10_data
from cs231n.gradient_check import eval_numerical_gradient, eval_numerical_gradient_array
from cs231n.solver import Solver
# %matplotlib inline
plt.rcParams['figure.figsize'] = (10.0, 8.0) # set default size of plots
plt.rcParams['image.interpolation'] = 'nearest'
plt.rcParams['image.cmap'] = 'gray'
# for auto-reloading external modules
# see http://stackoverflow.com/questions/1907993/autoreload-of-modules-in-ipython
# %load_ext autoreload
# %autoreload 2
def rel_error(x, y):
""" returns relative error """
return np.max(np.abs(x - y) / (np.maximum(1e-8, np.abs(x) + np.abs(y))))
# +
# Load the (preprocessed) CIFAR10 data.
data = get_CIFAR10_data()
for k, v in data.items():
print('%s: ' % k, v.shape)
# -
# # Dropout forward pass
# In the file `cs231n/layers.py`, implement the forward pass for dropout. Since dropout behaves differently during training and testing, make sure to implement the operation for both modes.
#
# Once you have done so, run the cell below to test your implementation.
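# The layer itself lives in `cs231n/layers.py` and is not reproduced here. As a hedged reference point only, a minimal NumPy sketch of *inverted* dropout (assuming `p` is the keep probability — the assignment's starter code may interpret the parameter differently) could look like:

```python
import numpy as np

def dropout_forward_sketch(x, p, mode, rng=np.random):
    """Inverted dropout: scale at train time so test time is the identity.

    Assumption: `p` is the probability of *keeping* a unit; the cs231n
    starter code may define the parameter differently.
    """
    if mode == 'train':
        mask = (rng.rand(*x.shape) < p) / p  # zero out units, rescale survivors
        return x * mask, mask
    # Test mode: no-op, because the rescaling already happened at train time.
    return x, None
```

# With this convention the mean of the train-time output stays close to the mean of the input, which is what the test cell checks.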
# +
np.random.seed(231)
x = np.random.randn(500, 500) + 10
for p in [0.3, 0.6, 0.75]:
out, _ = dropout_forward(x, {'mode': 'train', 'p': p})
out_test, _ = dropout_forward(x, {'mode': 'test', 'p': p})
print('Running tests with p = ', p)
print('Mean of input: ', x.mean())
print('Mean of train-time output: ', out.mean())
print('Mean of test-time output: ', out_test.mean())
print('Fraction of train-time output set to zero: ', (out == 0).mean())
print('Fraction of test-time output set to zero: ', (out_test == 0).mean())
print()
# -
# # Dropout backward pass
# In the file `cs231n/layers.py`, implement the backward pass for dropout. After doing so, run the following cell to numerically gradient-check your implementation.
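# Again, the real implementation belongs in `cs231n/layers.py`. Conceptually, the backward pass just reuses the mask cached by the forward pass — a sketch, with an assumed cache layout, is:

```python
import numpy as np

def dropout_backward_sketch(dout, mask):
    """Backward pass for inverted dropout in train mode.

    The forward pass computed out = x * mask, so dx = dout * mask.
    The `mask` argument stands in for whatever cache your forward
    pass actually returns.
    """
    return dout * mask
```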
# +
np.random.seed(231)
x = np.random.randn(10, 10) + 10
dout = np.random.randn(*x.shape)
dropout_param = {'mode': 'train', 'p': 0.8, 'seed': 123}
out, cache = dropout_forward(x, dropout_param)
dx = dropout_backward(dout, cache)
dx_num = eval_numerical_gradient_array(lambda xx: dropout_forward(xx, dropout_param)[0], x, dout)
print('dx relative error: ', rel_error(dx, dx_num))
# -
# # Fully-connected nets with Dropout
# In the file `cs231n/classifiers/fc_net.py`, modify your implementation to use dropout. Specifically, if the constructor of the net receives a nonzero value for the `dropout` parameter, then the net should add dropout immediately after every ReLU nonlinearity. After doing so, run the following to numerically gradient-check your implementation.
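# The modification itself is left to `fc_net.py`. The idea — dropout applied immediately after each ReLU — can be sketched as follows, with hypothetical helper names rather than the actual cs231n API:

```python
import numpy as np

def affine_relu_dropout(x, w, b, p_keep, train=True, rng=np.random):
    """One hidden-layer step: affine -> ReLU -> (inverted) dropout.

    Assumption: `p_keep` is the keep probability; adapt to however
    your fc_net.py encodes the `dropout` parameter.
    """
    h = np.maximum(0.0, x @ w + b)  # affine transform followed by ReLU
    if train:
        mask = (rng.rand(*h.shape) < p_keep) / p_keep
        h = h * mask  # dropout right after the nonlinearity
    return h
```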
# +
np.random.seed(231)
N, D, H1, H2, C = 2, 15, 20, 30, 10
X = np.random.randn(N, D)
y = np.random.randint(C, size=(N,))
for dropout in [0, 0.25, 0.5]:
print('Running check with dropout = ', dropout)
model = FullyConnectedNet([H1, H2], input_dim=D, num_classes=C,
weight_scale=5e-2, dtype=np.float64,
dropout=dropout, seed=123)
loss, grads = model.loss(X, y)
print('Initial loss: ', loss)
for name in sorted(grads):
f = lambda _: model.loss(X, y)[0]
grad_num = eval_numerical_gradient(f, model.params[name], verbose=False, h=1e-5)
print('%s relative error: %.2e' % (name, rel_error(grad_num, grads[name])))
print()
# -
# # Regularization experiment
# As an experiment, we will train a pair of two-layer networks on 500 training examples: one will use no dropout, and one will use a dropout probability of 0.75. We will then visualize the training and validation accuracies of the two networks over time.
# +
# Train two identical nets, one with dropout and one without
np.random.seed(231)
num_train = 500
small_data = {
'X_train': data['X_train'][:num_train],
'y_train': data['y_train'][:num_train],
'X_val': data['X_val'],
'y_val': data['y_val'],
}
solvers = {}
dropout_choices = [0, 0.75]
for dropout in dropout_choices:
model = FullyConnectedNet([500], dropout=dropout)
print(dropout)
solver = Solver(model, small_data,
num_epochs=25, batch_size=100,
update_rule='adam',
optim_config={
'learning_rate': 5e-4,
},
verbose=True, print_every=100)
solver.train()
solvers[dropout] = solver
# +
# Plot train and validation accuracies of the two models
train_accs = []
val_accs = []
for dropout in dropout_choices:
solver = solvers[dropout]
train_accs.append(solver.train_acc_history[-1])
val_accs.append(solver.val_acc_history[-1])
plt.subplot(3, 1, 1)
for dropout in dropout_choices:
plt.plot(solvers[dropout].train_acc_history, 'o', label='%.2f dropout' % dropout)
plt.title('Train accuracy')
plt.xlabel('Epoch')
plt.ylabel('Accuracy')
plt.legend(ncol=2, loc='lower right')
plt.subplot(3, 1, 2)
for dropout in dropout_choices:
plt.plot(solvers[dropout].val_acc_history, 'o', label='%.2f dropout' % dropout)
plt.title('Val accuracy')
plt.xlabel('Epoch')
plt.ylabel('Accuracy')
plt.legend(ncol=2, loc='lower right')
plt.gcf().set_size_inches(15, 15)
plt.show()
# -
# # Question
# Explain what you see in this experiment. What does it suggest about dropout?
# # Answer
# We can see that the training accuracy with dropout is worse than without, while the validation accuracy is more or less the same, which shows that dropout can prevent overfitting of the training data.
|
PA2/Dropout.ipynb
|
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# name: python3
# ---
# + id="xvgNUMckU7Na"
from __future__ import absolute_import, division, print_function
import tensorflow as tf
import numpy as np
# + [markdown] id="1PI1PqciVVT3"
# The MNIST data set is a collection of hand-written digits that contains 60,000 examples for training and 10,000 examples for testing. The digits have been size-normalized and centered in a fixed-size image (28x28 pixels) with values from 0 to 255.
#
# Next, for each image we will:
#
# 1) convert it to float32,
#
# 2) normalize it to [0, 1],
#
# 3) flatten it to a 1-D array of 784 features (28*28).
#
#
# + [markdown] id="li4iPhZvWG-1"
# #Step 2: Loading and Preparing the MNIST Data Set
# + id="2L3GHvPxVfKH" colab={"base_uri": "https://localhost:8080/"} outputId="65e71ab8-7963-4d85-bd3c-6dad33367677"
from tensorflow.keras.datasets import mnist
(x_train, y_train), (x_test, y_test) = mnist.load_data()
# Convert to float32.
x_train, x_test = np.array(x_train, np.float32), np.array(x_test, np.float32)
# Flatten images to 1-D vector of 784 features (28*28).
num_features=784
x_train, x_test = x_train.reshape([-1, num_features]), x_test.reshape([-1, num_features])
# Normalize images value from [0, 255] to [0, 1].
x_train, x_test = x_train / 255., x_test / 255.
# + [markdown] id="pRmiTr7hV43k"
# #Step 3: Setting Up Hyperparameters and Data Set Parameters
#
# Initialize the model parameters.
#
# num_classes denotes the number of outputs, which is 10, as we have digits from 0 to 9 in the data set.
#
# num_features defines the number of input parameters, and we store 784 since each image contains 784 pixels.
# + id="l5wqQDxJWDi1"
# MNIST dataset parameters.
num_classes = 10 # 0 to 9 digits
num_features = 784 # 28*28
# Training parameters.
learning_rate = 0.01
training_steps = 1000
batch_size = 256
display_step = 50
# + [markdown] id="iuEixbb8WX3f"
# #Step 4: Shuffling and Batching the Data
#
# We need to shuffle and batch the data before we start the actual training, to prevent the model from becoming biased by the order of the examples. Randomizing the order helps the model generalize better and reach higher accuracy on the test data.
#
# With the help of tf.data.Dataset.from_tensor_slices, we can get the slices of an array in the form of objects.
#
# The function shuffle(5000) randomizes the order of the data set's examples.
#
# Here, 5000 is the shuffle buffer size: each example is drawn at random from a buffer holding 5000 samples.
#
# After an example is drawn, only 4999 samples remain in the buffer, so sample 5001 is added to refill it.
# + id="_oj9bVJqWhDv"
# Use tf.data API to shuffle and batch data.
train_data=tf.data.Dataset.from_tensor_slices((x_train,y_train))
train_data=train_data.repeat().shuffle(5000).batch(batch_size).prefetch(1)
# + [markdown] id="6vXwmbU7WpZf"
# #Step 5: Initializing Weights and Biases
#
# We now initialize the weights vector and bias vector with ones and zeros.
# + id="guK098RvWrek"
# Weight of shape [784, 10], the 28*28 image features, and a total number of classes.
W = tf.Variable(tf.ones([num_features, num_classes]), name="weight")
# Bias of shape [10], the total number of classes.
b = tf.Variable(tf.zeros([num_classes]), name="bias")
# + [markdown] id="zixB5WB1WzpA"
# #Step 6: Defining Logistic Regression and Cost Function
#
# We define the logistic_regression function below, which converts the inputs into a probability distribution proportional to the exponentials of the inputs using the softmax function. The softmax function, implemented via tf.nn.softmax, also makes sure that the outputs sum to one.
# + id="oJEBWTjDW24p"
# Logistic regression (Wx + b).
def logistic_regression(x):
# Apply softmax to normalize the logits to a probability distribution.
return tf.nn.softmax(tf.matmul(x, W) + b)
# Cross-Entropy loss function.
def cross_entropy(y_pred, y_true):
# Encode label to a one hot vector.
y_true = tf.one_hot(y_true, depth=num_classes)
# Clip prediction values to avoid log(0) error.
y_pred = tf.clip_by_value(y_pred, 1e-9, 1.)
# Compute cross-entropy.
return tf.reduce_mean(-tf.reduce_sum(y_true * tf.math.log(y_pred)))
# + [markdown] id="FhOVijJOW_mu"
# #Step 7: Defining Optimizers and Accuracy Metrics
# When we compute the output, it gives us the probability of the given data to fit a particular class of output.
#
# We take the class with the highest probability as the model's prediction.
#
# We compute this using the function tf.argmax.
#
# We also define the stochastic gradient descent as the optimizer from several optimizers present in TensorFlow. We do this using the function tf.optimizers.SGD.
#
# This function takes in the learning rate as its input, which defines how fast the model should reach its minimum loss or gain the highest accuracy.
# + id="Updp5rlyXGf9"
# Accuracy metric.
def accuracy(y_pred, y_true):
# Predicted class is the index of the highest score in prediction vector (i.e. argmax).
correct_prediction = tf.equal(tf.argmax(y_pred, 1), tf.cast(y_true, tf.int64))
return tf.reduce_mean(tf.cast(correct_prediction, tf.float32))
# + [markdown] id="5Kh8AXIeXJ2Q"
# #Step 8: Optimization Process and Updating Weights and Biases
# Now we define the run_optimization() method, where we update the weights of our model. We calculate the predictions using the logistic_regression(x) method and find the loss by comparing the predicted values with the true labels from the data set. Next, we compute the gradients with tf.GradientTape and update the weights of the model with our stochastic gradient descent optimizer.
# + id="zh9qjQZNXSp0"
# Optimization process.
def run_optimization(x, y):
# Wrap computation inside a GradientTape for automatic differentiation.
with tf.GradientTape() as g:
pred = logistic_regression(x)
loss = cross_entropy(pred, y)
# Compute gradients.
gradients = g.gradient(loss, [W, b])
optimizer = tf.optimizers.SGD(learning_rate)
# Update W and b following gradients.
optimizer.apply_gradients(zip(gradients, [W, b]))
# + [markdown] id="fLAj8042XWT9"
# #Step 9: The Training Loop
# + id="D0XEQd0f7QYj" colab={"base_uri": "https://localhost:8080/"} outputId="3514a233-9fac-46e3-a64b-f58b96c2b5a7"
# Run training for the given number of steps.
for step, (batch_x, batch_y) in enumerate(train_data.take(training_steps), 1):
# Run the optimization to update W and b values.
run_optimization(batch_x, batch_y)
if step % display_step == 0:
pred = logistic_regression(batch_x)
loss = cross_entropy(pred, batch_y)
acc = accuracy(pred, batch_y)
print("step: %i, loss: %f, accuracy: %f" % (step, loss, acc))
# + [markdown] id="lGXSf6nAX0yF"
# #Step 10: Testing Model Accuracy Using the Test Data
#
# Finally, we check the model accuracy by sending the test data set into our model and compute the accuracy using the accuracy function that we defined earlier.
# + id="ZNfKDFBY9NPl" colab={"base_uri": "https://localhost:8080/"} outputId="83141687-388f-48e9-dc4f-793b97603878"
# Test model on validation set.
pred = logistic_regression(x_test)
print("Test Accuracy: %f" % accuracy(pred, y_test))
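# As a quick illustration (a sketch, not part of the original training run), argmax over each row of a prediction matrix recovers the predicted class exactly as the accuracy() function above does:

```python
import numpy as np

# Each row is a score vector over classes; argmax along axis 1
# picks the highest-scoring class, mirroring tf.argmax(y_pred, 1).
scores = np.array([[0.1, 0.7, 0.2],
                   [0.8, 0.1, 0.1]])
predicted_classes = scores.argmax(axis=1)
print(predicted_classes)  # [1 0]
```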
|
Lab6/Lab6_Logistic_Reg_Tensorflow.ipynb
|
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # GAT
# %cd /4tb/nabarun/nlp/GAT
# !python3 train.py cora
# %cd /4tb/nabarun/nlp/DLNLP/dgl_gat
# !../../gcn_text_categorization/venv/bin/python3 train.py --dataset=pubmed
# !../../gcn_text_categorization/venv/bin/python3 train.py --dataset=citeseer
# # GCN
# %cd /4tb/nabarun/nlp/DLNLP/gcn
# !../../gcn_text_categorization/venv/bin/python3 train.py cora
# !../../gcn_text_categorization/venv/bin/python3 train.py pubmed
# !../../gcn_text_categorization/venv/bin/python3 train.py citeseer
# # SGC
# %cd /4tb/nabarun/nlp/SGC
# !../gcn_text_categorization/venv/bin/python3 citation.py --dataset cora
# !../gcn_text_categorization/venv/bin/python3 citation.py --dataset pubmed
# !../gcn_text_categorization/venv/bin/python3 citation.py --dataset citeseer
|
GCNvsGATvsSGC.ipynb
|
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
import numpy as np
from random import randint
from sklearn.utils import shuffle
from sklearn.preprocessing import MinMaxScaler
train_labels = [] # one means side effect experienced, zero means no side effect experienced
train_samples = []
# Example data:
#
# • An experimental drug was tested on individuals aged 13 to 100 in a clinical trial.
# • The trial had 2100 participants. Half were under 65 years old; half were 65 or older.
# • 95% of patients 65 years or older experienced side effects.
# • 95% of patients under 65 experienced no side effects.
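# A quick arithmetic check that the generation loops below reproduce these proportions (toy numbers, matching the loop counts):

```python
# Two age groups, each with 50 "atypical" and 1000 "typical" outcomes:
# 2 * (50 + 1000) = 2100 participants, and 50/1050 of each group (~5%)
# carries the atypical label.
n_atypical, n_typical = 50, 1000
total_participants = 2 * (n_atypical + n_typical)
atypical_share = n_atypical / (n_atypical + n_typical)
print(total_participants, round(atypical_share, 3))  # 2100 0.048
```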
# +
for i in range(50):
# The 5% of younger individuals who did experience side effects
random_younger = randint(13, 64)
train_samples.append(random_younger)
train_labels.append(1)
# The 5% of older individuals who did not experience side effects
random_older = randint(65, 100)
train_samples.append(random_older)
train_labels.append(0)
for i in range(1000):
# The 95% of younger individuals who did not experience side effects
random_younger = randint(13, 64)
train_samples.append(random_younger)
train_labels.append(0)
# The 95% of older individuals who did experience side effects
random_older = randint(65, 100)
train_samples.append(random_older)
train_labels.append(1)
# -
for i in train_samples:
print(i)
train_labels = np.array(train_labels)
train_samples = np.array(train_samples)
train_labels, train_samples = shuffle(train_labels, train_samples) # shuffles both arrays with the same permutation, removing any order imposed on the data set during the creation process
scaler = MinMaxScaler(feature_range = (0, 1)) # specifying scale (range: 0 to 1)
scaled_train_samples = scaler.fit_transform(train_samples.reshape(-1,1)) # transforms our data from its original range (13 to 100) into the one specified above (0 to 1); we reshape because fit_transform does not accept 1-D data by default
# print scaled data
for i in scaled_train_samples:
print(i)
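# For reference, MinMaxScaler implements x' = (x - min) / (max - min); a minimal sketch of the same mapping on a few ages:

```python
import numpy as np

# Min-max scaling maps the smallest value to 0 and the largest to 1,
# so ages 13..100 land in the 0..1 range used above.
ages = np.array([13.0, 56.0, 100.0])
scaled = (ages - ages.min()) / (ages.max() - ages.min())
print(scaled)
```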
import tensorflow as tf
from tensorflow import keras
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Activation, Dense
from tensorflow.keras.optimizers import Adam
from tensorflow.keras.metrics import categorical_crossentropy
model = Sequential([
Dense(units = 16, input_shape = (1,), activation = 'relu'),
Dense(units = 32, activation = 'relu'),
Dense(units = 2, activation = 'softmax')
])
model.summary()
model.compile(optimizer = Adam(learning_rate = 0.0001), loss = 'sparse_categorical_crossentropy', metrics = ['accuracy'])
model.fit(x = scaled_train_samples, y = train_labels, validation_split = 0.1, batch_size = 10, epochs = 30, shuffle = True, verbose = 2)
# ## Preprocess Test Data
test_labels = []
test_samples = []
# +
for i in range(10):
# The 5% of younger individuals who did experience side effects
random_younger = randint(13, 64)
test_samples.append(random_younger)
test_labels.append(1)
# The 5% of older individuals who did not experience side effects
random_older = randint(65, 100)
test_samples.append(random_older)
test_labels.append(0)
for i in range(200):
# The 95% of younger individuals who did not experience side effects
random_younger = randint(13, 64)
test_samples.append(random_younger)
test_labels.append(0)
# The 95% of older individuals who did experience side effects
random_older = randint(65, 100)
test_samples.append(random_older)
test_labels.append(1)
# -
test_labels = np.array(test_labels)
test_samples = np.array(test_samples)
test_labels, test_samples = shuffle(test_labels, test_samples)
scaled_test_samples = scaler.transform(test_samples.reshape(-1,1)) # reuse the scale fitted on the training data; transform (not fit_transform) avoids refitting on test statistics
# ## Predict
predictions = model.predict(x = scaled_test_samples, batch_size = 10, verbose = 0)
for i in predictions: # for each [x1, x2]: x1 = P(no side effect), x2 = P(side effect)
print(i)
rounded_predictions = np.argmax(predictions, axis = -1)
for i in rounded_predictions:
print(i)
# ## Confusion Matrix
from sklearn.metrics import confusion_matrix
import itertools
import matplotlib.pyplot as plt
cm = confusion_matrix(y_true = test_labels, y_pred = rounded_predictions)
# This function has been taken from the scikit-learn website. Link: https://scikit-learn.org/0.18/auto_examples/model_selection/plot_confusion_matrix.html
def plot_confusion_matrix(cm, classes,
normalize=False,
title='Confusion matrix',
cmap=plt.cm.Blues):
"""
This function prints and plots the confusion matrix.
Normalization can be applied by setting `normalize=True`.
"""
plt.imshow(cm, interpolation='nearest', cmap=cmap)
plt.title(title)
plt.colorbar()
tick_marks = np.arange(len(classes))
plt.xticks(tick_marks, classes, rotation=45)
plt.yticks(tick_marks, classes)
if normalize:
cm = cm.astype('float') / cm.sum(axis=1)[:, np.newaxis]
print("Normalized confusion matrix")
else:
print('Confusion matrix, without normalization')
print(cm)
thresh = cm.max() / 2.
for i, j in itertools.product(range(cm.shape[0]), range(cm.shape[1])):
plt.text(j, i, cm[i, j],
horizontalalignment="center",
color="white" if cm[i, j] > thresh else "black")
plt.tight_layout()
plt.ylabel('True label')
plt.xlabel('Predicted label')
cm_plot_labels = ['no_side_effects', 'had_side_effects']
plot_confusion_matrix(cm = cm, classes = cm_plot_labels, title = 'Confusion Matrix')
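# As a sanity check (with made-up counts, not this run's actual results), the overall accuracy can be recovered from the confusion matrix diagonal:

```python
import numpy as np

# Correct predictions sit on the diagonal, so accuracy = trace / total.
cm_example = np.array([[190, 20],
                       [10, 200]])
overall_accuracy = np.trace(cm_example) / cm_example.sum()
print(overall_accuracy)  # 390/420 ~ 0.93
```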
|
Simple Sequential Model.ipynb
|
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# name: python3
# ---
# + [markdown] id="view-in-github" colab_type="text"
# <a href="https://colab.research.google.com/github/chrismarkella/Kaggle-access-from-Google-Colab/blob/master/sql_vs_pandas.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a>
# + id="ign3Eoitcmuf" colab_type="code" colab={}
import os
import numpy as np
import pandas as pd
from getpass import getpass
# + id="hBE93hbhcohi" colab_type="code" outputId="b4fd2b91-6845-492e-bef7-7e25a6cce224" colab={"base_uri": "https://localhost:8080/", "height": 69}
def access_kaggle():
"""
Access Kaggle from Google Colab.
If the /root/.kaggle does not exist then prompt for
the username and for the Kaggle API key.
Creates the kaggle.json access file in the /root/.kaggle/ folder.
"""
KAGGLE_ROOT = os.path.join('/root', '.kaggle')
KAGGLE_PATH = os.path.join(KAGGLE_ROOT, 'kaggle.json')
if '.kaggle' not in os.listdir(path='/root'):
user = getpass(prompt='Kaggle username: ')
key = getpass(prompt='Kaggle API key: ')
# !mkdir $KAGGLE_ROOT
# !touch $KAGGLE_PATH
# !chmod 666 $KAGGLE_PATH
        with open(KAGGLE_PATH, mode='w') as f:
            f.write('{"username":"%s", "key":"%s"}' % (user, key))
# !chmod 600 $KAGGLE_PATH
del user
del key
success_msg = "Kaggle is successfully set up. Good to go."
print(f'{success_msg}')
access_kaggle()
# + id="0U1OaNsrcrsE" colab_type="code" outputId="354e956b-e4e6-4ebc-e217-8131f144ed9c" colab={"base_uri": "https://localhost:8080/", "height": 447}
# !kaggle datasets download fernandol/countries-of-the-world --unzip
# !mv countries\ of\ the\ world.csv countries_of_the_world.csv
df = pd.read_csv('countries_of_the_world.csv', sep=',')
df.head()
# + id="dx_TVWddeFW8" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 35} outputId="8267a2dc-4989-490c-9b44-ab5304ac35ba"
df.shape
# + id="5WF8ALmpetch" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 159} outputId="96c73946-6625-44e1-a3ec-07d2672d9355"
df.columns
# + id="Gc4nK4q5ewdA" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 159} outputId="6df9d5cb-b7b8-4597-e214-e5ef9bd65d93"
df.columns = df.columns.str.strip().str.lower()
df.columns
# + id="xdnIIukte73A" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 124} outputId="70c1d267-cf1c-47a5-c88d-7d6e96134c7c"
column_name_dict = {
'area (sq. mi.)':'area',
'pop. density (per sq. mi.)':'pop_density',
'coastline (coast/area ratio)':'coastline',
'infant mortality (per 1000 births)':'infant_mortality',
'gdp ($ per capita)':'gdp_per_capita',
'literacy (%)':'literacy',
'phones (per 1000)':'phones',
'arable (%)':'arable',
'crops (%)':'crops',
'other (%)':'other',
}
df.rename(columns=column_name_dict, inplace=True)
df.columns
# + id="EchjtlHRgOmJ" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 124} outputId="7a92ae23-e21f-4ae0-9def-fa878852e2db"
df.columns = df.columns.map(lambda c: '_'.join(c.split()))
df.columns
# + [markdown] id="CStFWPhxkQZn" colab_type="text"
# ##SELECT
# + [markdown] id="Gz95-8yMhLPj" colab_type="text"
#
# ###SELECT 1
# ```SQL
# SELECT population FROM world
# WHERE name = 'Germany'
# ```
#
#
# + id="MML5vj1zgkj4" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 35} outputId="b56f2903-19e6-4b3c-fcd7-b65c3b6214f4"
filt_germany = (df.country == 'Germany')
df.loc[filt_germany, 'population']
# + id="iZsfc7lehmXb" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 69} outputId="d10101e8-920a-4446-f46b-bcc4f4fb1aae"
df.country.unique()[:10]
# + id="dzqV0D9JhsgC" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 69} outputId="aa6eada0-d2b4-45f4-8913-537f72613b9f"
df.country.str.strip().unique()[:10]
# + id="pPzCnASAiEoi" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 52} outputId="46210879-ddcc-48ca-f7a0-1951a040a109"
df.country = df.country.str.strip()
df.country.unique()[:5]
# + id="SMBQvIGUiV3v" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 52} outputId="c915302f-9fb5-4e57-f10b-8098ab4020f0"
filt_germany = (df.country == 'Germany')
df.loc[filt_germany, 'population']
# + [markdown] id="4_hf3u68ixG2" colab_type="text"
# ###SELECT 2
#
#
# ```SQL
# SELECT name, population FROM world
# WHERE name IN ('Brazil', 'Russia', 'India', 'China');
# ```
#
#
# + id="6efQB3hyihX_" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 171} outputId="79eefd9d-f9fd-4d81-b1b5-558aca349c11"
selected_countries = [
'Brazil',
'Russia',
'India',
'China',
]
filt_country = (df.country.isin(selected_countries))
df.loc[filt_country, ['country', 'population']]
# + [markdown] id="stBP5SqRjkEi" colab_type="text"
# ###SELECT 3
#
#
# ```SQL
# SELECT name, area FROM world
# WHERE area BETWEEN 250000 AND 300000
# ```
#
#
# + id="XmwFAJ5zjchI" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 233} outputId="d80f82b9-63b8-44e0-a839-b50a1d1775f1"
filt_area = df.area.between(250*10**3, 300*10**3)
selected_columns = [
'country',
'area',
]
df.loc[filt_area, selected_columns]
# + [markdown] id="QNvQxyRFktb3" colab_type="text"
# ###SELECT 4
#
#
# ```SQL
# SELECT name FROM world
# WHERE name LIKE 'F%'
# ```
#
#
# + id="mBMY2BcGkK0d" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 139} outputId="953f051e-c050-44ca-b427-1f28d1f1fd18"
filt_country_f = (df.country.map(lambda c: c[0] == 'F'))
df.loc[filt_country_f, 'country']
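# An equivalent, arguably more idiomatic filter uses pandas' vectorized string methods instead of a lambda (sketched here on a toy frame):

```python
import pandas as pd

# str.startswith expresses LIKE 'F%' directly and handles the
# whole column in one vectorized call.
toy = pd.DataFrame({'country': ['France', 'Finland', 'Germany', 'Fiji']})
filt = toy.country.str.startswith('F')
print(toy.loc[filt, 'country'].tolist())  # ['France', 'Finland', 'Fiji']
```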
# + [markdown] id="-DFcz7WIpLQo" colab_type="text"
# ###SELECT 5
#
#
# ```SQL
# SELECT name FROM world
# WHERE name LIKE '%Y'
# ```
#
#
# + id="stXJTj9ppG4v" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 191} outputId="5c2467b9-2c32-4167-b177-386bd690eec2"
filt_country_end_y = (df.country.map(lambda c: c[-1] == 'y'))
df.loc[filt_country_end_y, 'country']
# + [markdown] id="eUIy62mXpxbk" colab_type="text"
# ###SELECT 6
#
#
# ```SQL
# SELECT name FROM world
# WHERE name LIKE '%x%'
# ```
#
#
# + id="QMfJ8u7QpvOW" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 69} outputId="400c9ec2-e74f-42f4-8330-67d60bca7a93"
filt_country_contains = (df.country.map(lambda c: 'x' in c))
df.loc[filt_country_contains, 'country']
# + [markdown] id="nGqH7Z0ZqX2k" colab_type="text"
# ###SELECT 7
#
#
#
# ```SQL
# SELECT name FROM world
# WHERE name LIKE '%land'
# ```
#
#
# + id="VpnUGYBuqWAC" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 191} outputId="dee55ef6-9220-4035-de14-7e49863806ad"
filt_land = (df.country.map(lambda c: c[-4:] == 'land'))
df.loc[filt_land, 'country']
# + [markdown] id="EAzJIME9rSvT" colab_type="text"
# ###SELECT 8
#
#
# ```SQL
# SELECT name FROM world
# WHERE name LIKE 'C%ia'
# ```
#
#
# + id="idguvkgTq_bU" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 87} outputId="efc848f6-73d4-4055-efe6-ed74806b2860"
filt_C_ia = (df.country.map(lambda c: c[0]=='C' and c[-2:]=='ia'))
df.loc[filt_C_ia, 'country']
# + [markdown] id="E3xOUBs4r2Nb" colab_type="text"
# ###SELECT 9
#
#
# ```SQL
# SELECT name FROM world
# WHERE name LIKE '%oo%'
# ```
#
#
# + id="u5SIjIhCrz8K" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 69} outputId="6414e346-81c9-4869-e53a-0135e75923c1"
filt_oo = df.country.str.contains('oo')
df.loc[filt_oo, 'country']
# + id="4tiDNZiLsqBH" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 433} outputId="2c32b58b-5265-4b0f-d14b-186d0ed4f447"
filt_three_a = df.country.str.contains('.*a.*a.*a.*')
df.loc[filt_three_a, 'country']
# + [markdown] id="N2sWGw0xuoN0" colab_type="text"
# ###SELECT 10
#
#
# ```SQL
# SELECT name FROM world
# WHERE name LIKE '_t%'
# ```
#
#
# + id="APl0euWNtWJJ" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 87} outputId="1cf6ba13-a3ad-4e1d-9bcf-5cb7e2c05e21"
filt_second_t = (df.country.str.contains('^.t.*'))
df.loc[filt_second_t, 'country']
# + [markdown] id="ELN2osyQvaEB" colab_type="text"
# ###SELECT 11
#
#
# ```SQL
# SELECT name FROM world
# WHERE name LIKE '%o__o%'
# ```
#
#
# + id="soWQazdSu_Og" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 156} outputId="b2e2e7d6-446e-477b-c213-6b8b5c9fd3c7"
filt_o__o = (df.country.str.contains('.*o..o.*'))
df.loc[filt_o__o, 'country']
# + [markdown] id="i6mIRyvPwRVP" colab_type="text"
# ###SELECT 12
#
#
# ```
# SELECT name FROM world
# WHERE LEN(name) = 4
# ```
#
#
# + id="BNXlImT0vxlo" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 225} outputId="6eb27d22-ad69-428c-d467-516da5ec0452"
filt_len = (df.country.str.len() == 4)
df.loc[filt_len, 'country']
# + id="SNcYRFMcwpvi" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 69} outputId="9fce0ccb-8ff9-4743-8aae-eec5ccb09a68"
# !ls -lh
# + id="UjQjLvNq3aNx" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 121} outputId="a33f79fe-9d9c-444d-c0fd-de647013ebde"
# !unzip nobel-laureates.zip
# !ls -lh
# + id="RtInqfAq4QH-" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 386} outputId="3caa9c1f-29e9-4833-9058-4d3a518166b7"
df = pd.read_csv('archive.csv', sep=',')
df.head(3)
# + id="42eSNYcc4jLG" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 124} outputId="949cba39-93ab-4c07-8064-525b5a2c2bad"
df.columns
# + id="6mWFWSEU4oBN" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 124} outputId="35810eed-68cc-4df7-a475-17186f5dc278"
df.columns = df.columns.map(lambda c: '_'.join(c.lower().split()))
df.columns
# + [markdown] id="hM2C6d4m5OlD" colab_type="text"
# ###SELECT 13
#
#
# ```SQL
# SELECT yr, subject, winner
# FROM nobel
# WHERE yr = 1960
# ```
#
#
# + id="E-JTG_wl410d" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 233} outputId="3cc08c30-bbdf-41c2-a7be-75afab944a45"
filt_year = (df.year == 1960)
selected_columns = [
'year',
'category',
'full_name',
]
df.loc[filt_year, selected_columns]
# + id="7W-EbXrw5vuV" colab_type="code" colab={}
|
sql_vs_pandas.ipynb
|
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Environment (conda_tensorflow_p36)
# language: python
# name: conda_tensorflow_p36
# ---
# +
import os
os.environ["CUDA_DEVICE_ORDER"] = "PCI_BUS_ID"
os.environ["CUDA_VISIBLE_DEVICES"] = "3"
model_id = 'DNN-hpsearch-full'
from sklearn.preprocessing import LabelBinarizer
from sklearn.model_selection import StratifiedKFold
import tensorflow as tf
import pickle
import numpy as np
import pandas as pd
from annsa.template_sampling import *
from annsa.load_pretrained_network import save_features
# -
from hyperparameter_models import make_dense_model as make_model
import tensorflow.contrib.eager as tfe
tf.enable_eager_execution()
# #### Import model, training function
from annsa.model_classes import (dnn_model_features,
DNN,
save_model,
train_earlystop)
# ## Load testing dataset
dataset = np.load('../dataset_generation/hyperparametersearch_dataset_100_full.npy')
# +
all_spectra = np.float64(np.add(dataset.item()['sources'], dataset.item()['backgrounds']))
all_keys = dataset.item()['keys']
mlb=LabelBinarizer()
all_keys_binarized = mlb.fit_transform(all_keys)
# -
# # Train network
# ### Define hyperparameters
number_hyperparameters_to_search = 256
earlystop_errors_test = []
# ### Search hyperparameters
# +
skf = StratifiedKFold(n_splits=5, random_state=5)
testing_errors = []
all_kf_errors = []
for network_id in range(number_hyperparameters_to_search):
print(network_id)
model, model_features = make_model(all_keys_binarized)
    filename = os.path.join('hyperparameter-search-results',
                            model_id + '-' + str(network_id))
save_features(model_features, filename)
k_folds_errors = []
for train_index, test_index in skf.split(all_spectra, all_keys):
# reset model on each iteration
model = DNN(model_features)
optimizer = tf.train.AdamOptimizer(model_features.learining_rate)
costfunction_errors_tmp, earlystop_errors_tmp = train_earlystop(
training_data=all_spectra[train_index],
training_keys=all_keys_binarized[train_index],
testing_data=all_spectra[test_index],
testing_keys=all_keys_binarized[test_index],
model=model,
optimizer=optimizer,
num_epochs=200,
obj_cost=model.cross_entropy,
earlystop_cost_fn=model.f1_error,
earlystop_patience=10,
not_learning_patience=10,
not_learning_threshold=0.9,
verbose=True,
fit_batch_verbose=10,
data_augmentation=model.default_data_augmentation)
k_folds_errors.append(earlystop_errors_tmp)
all_kf_errors.append(earlystop_errors_tmp)
testing_errors.append(np.average(k_folds_errors))
np.save('./final-models/final_test_errors_'+model_id, testing_errors)
np.save('./final-models/final_kf_errors_'+model_id, all_kf_errors)
# -
|
examples/source-interdiction/hyperparameter-search/DNN-Hyperparameter-Search-Full.ipynb
|
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# <h1><center> Barbell Graph </center> </h1>
#
#
# This is the code for the Barbell Graph experiment described in figure 3 of the paper.
#
# ## I. Create the graph and visualize
#
# (Unfortunately, nx.draw(G) does not yield a very clean picture, but we basically have two cliques of densely connected nodes linked by a chain.)
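# For reference, networkx ships a generator for exactly this structure (shown with small, arbitrary parameters, not the paper's):

```python
import networkx as nx

# Two complete graphs of m1 nodes each, joined by a path of m2 extra nodes.
G_demo = nx.barbell_graph(5, 3)
print(G_demo.number_of_nodes())  # 2*5 + 3 = 13
```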
# +
# %matplotlib inline
#### Tests like paper
import matplotlib.pyplot as plt
import networkx as nx
import numpy as np
import pandas as pd
import pickle
import seaborn as sb
import sklearn as sk
from sklearn.cluster import KMeans
from sklearn.manifold import TSNE
import pygsp  # needed below for pygsp.graphs.Graph
import sys
sys.path.append('../')
import graphwave as gw
from shapes.shapes import *
from distances.distances_signature import *
from characteristic_functions import *
name_graph='barbell'
sb.set_style('white')
G , colors = barbel_graph(0, 8, 5,plot=True)
N=nx.number_of_nodes(G)
Gg = pygsp.graphs.Graph(nx.adjacency_matrix(G))
Gg.create_laplacian("normalized")
Gg.lap_type="normalized"
Gg.compute_fourier_basis(force_recompute=True)
eigenvec=Gg.e
plt.figure()
plt.plot(eigenvec)
plt.title('Eigenvalues of the Laplacian')
# -
from graphwave import graphwave_alg
chi,heat_print, taus = graphwave_alg(G, np.linspace(0,100,25), taus=range(19,21), verbose=True)
# +
from sklearn.decomposition import PCA
from sklearn.preprocessing import StandardScaler
from sklearn.cluster import KMeans
nb_clust=len(np.unique(colors))
pca=PCA(n_components=5)
trans_data=pca.fit_transform(StandardScaler().fit_transform(chi))
km=sk.cluster.KMeans(n_clusters=nb_clust)
km.fit(trans_data)
labels_pred=km.labels_
######## Params for plotting
cmapx=plt.get_cmap('rainbow')
x=np.linspace(0,1,np.max(labels_pred)+1)
col=[cmapx(xx) for xx in x ]
markers = {0:'*',1: '.', 2:',',3: 'o',4: 'v',5: '^',6: '<',7: '>',8: 3 ,9:'d',10: '+',11:'x',12:'D',13: '|',14: '_',15:4,16:0,17:1,18:2,19:6,20:7}
########
for c in np.unique(colors):
indc=[i for i,x in enumerate(colors) if x==c]
#print indc
plt.scatter(trans_data[indc,0], trans_data[indc,1],c=np.array(col)[list(np.array(labels_pred)[indc])] ,marker=markers[c%len(markers)],s=500)
labels = colors
for label,c, x, y in zip(labels,labels_pred, trans_data[:, 0], trans_data[:, 1]):
plt.annotate(label,xy=(x, y), xytext=(0, 0), textcoords='offset points')
# -
|
graphwave/tests/.ipynb_checkpoints/Barbell Graph Example-checkpoint.ipynb
|
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# ## Converting the Cornell Movie-Dialogs Corpus into ConvoKit format
#
# This notebook is a demonstration of how custom datasets can be converted into Corpus with ConvoKit
from tqdm import tqdm
from convokit import Corpus, User, Utterance
# ### The Cornell Movie-Dialogs Corpus
#
# The original version of the Cornell Movie-Dialogs Corpus can be downloaded from: https://www.cs.cornell.edu/~cristian/Cornell_Movie-Dialogs_Corpus.html. It contains the following files:
#
# * __movie_characters_metadata.txt__ contains information about each movie character
# * __movie_lines.txt contains__ the actual text of each utterance
# * __movie_conversations.txt__ contains the structure of the conversations
# * __movie_titles_metadata.txt__ contains information about each movie title
# ### Constructing the Corpus from a list of Utterances
# Corpus can be constructed from a list of utterances with:
#
#     corpus = Corpus(utterances=custom_utterance_list)
#
# Our goal is to convert the original dataset into this "custom_utterance_list", and let ConvoKit do the rest of the conversion for us.
# #### Creating users
#
# Each character in a movie is considered a user, and there are 9,035 characters in total in this dataset. We will read off metadata for each user from __movie_characters_metadata.txt__.
#
# In general, we would use the character's name directly as the username. However, in our case, since only the first name of the movie character is given, these names may not uniquely map to a character. We will instead use the user_id provided in the original dataset as the username, and save the actual character name in the user metadata.
#
# For each user, metadata include the following information:
# * name of the character
# * idx and name of the movie this character is from
# * gender (available for 3,774 characters)
# * position on movie credits (available for 3,321 characters)
# Replace the directory below with the location of your downloaded Cornell Movie-Dialogs Corpus
data_dir = "../../data_collection/cornell_movie_dialogs_corpus/"
with open(data_dir + "movie_characters_metadata.txt", "r", encoding='utf-8', errors='ignore') as f:
user_data = f.readlines()
user_meta = {}
for user in user_data:
user_info = [info.strip() for info in user.split("+++$+++")]
user_meta[user_info[0]] = {"character_name": user_info[1],
"movie_idx": user_info[2],
"movie_name": user_info[3],
"gender": user_info[4],
"credit_pos": user_info[5]}
# We will now create a User object for each unique character in the dataset; these will be used to create Utterance objects later.
corpus_users = {k: User(name = k, meta = v) for k,v in user_meta.items()}
# Sanity checking user-level data:
print("number of users in the data = {0}".format(len(corpus_users)))
corpus_users['u0'].meta
# #### Creating utterance objects
# Utterances can be found in __movie_lines.txt__. There are 304,713 utterances in total.
#
# An utterance object normally expects at least:
# - id: the unique id of the utterance.
# - user: the user giving the utterance.
# - root: the id of the root utterance of the conversation.
# - reply_to: id of the utterance this was a reply to.
# - timestamp: timestamp of the utterance.
# - text: text of the utterance.
#
# Additional information associated with the utterance, e.g., in this case, the movie this utterance is coming from, may be saved as utterance level metadata.
with open(data_dir + "movie_lines.txt", "r", encoding='utf-8', errors='ignore') as f:
utterance_data = f.readlines()
# +
utterance_corpus = {}
for utterance in tqdm(utterance_data):
    utterance_info = [info.strip() for info in utterance.split("+++$+++")]
    # ignoring character name since the User object already has this information
    idx, user, movie_id, text = utterance_info[0], utterance_info[1], utterance_info[2], utterance_info[4]
    meta = {'movie_id': movie_id}
# root & reply_to will be updated later, timestamp is not applicable
utterance_corpus[idx] = Utterance(idx, corpus_users[user], None, None, None, text, meta=meta)
# -
len(utterance_corpus)
# Sanity checking the utterance objects: they should now contain an id, the user who said them, the actual text, and the movie id as metadata:
utterance_corpus['L1044']
# #### Updating root and reply_to information to utterances
# __movie_conversations.txt__ provides the structure of conversations that organizes the above utterances. This will allow us to add the missing root and reply_to information to individual utterances.
with open(data_dir + "movie_conversations.txt", "r", encoding='utf-8', errors='ignore') as f:
convo_data = f.readlines()
import ast
for info in tqdm(convo_data):
    user1, user2, m, convo = [field.strip() for field in info.split("+++$+++")]
convo_seq = ast.literal_eval(convo)
# update utterance
root = convo_seq[0]
# convo_seq is a list of utterances ids, arranged in conversational order
for i, line in enumerate(convo_seq):
# sanity checking: user giving the utterance is indeed in the pair of characters provided
if utterance_corpus[line].user.name not in [user1, user2]:
print("user mismatch in line {0}".format(i))
utterance_corpus[line].root = root
if i == 0:
utterance_corpus[line].reply_to = None
else:
utterance_corpus[line].reply_to = convo_seq[i-1]
# Sanity checking on the status of utterances. After updating root and reply_to information, they should now contain all mandatory fields:
utterance_corpus['L666499']
# #### Creating corpus from list of utterances
# We are now ready to create the movie-corpus. Note that we can specify a version number for a corpus, making it easier for us to keep track of which corpus we are working with.
utterance_list = [utterance for k,utterance in utterance_corpus.items()]
movie_corpus = Corpus(utterances=utterance_list, version=1)
# ConvoKit will automatically help us create conversations based on the information about the utterances we provide.
print("number of conversations in the dataset = {}".format(len(movie_corpus.get_conversation_ids())))
convo_ids = movie_corpus.get_conversation_ids()
for i, convo_idx in enumerate(convo_ids[0:5]):
print("sample conversation {}:".format(i))
print(movie_corpus.get_conversation(convo_idx).get_utterance_ids())
# #### Adding parses for utterances
# We can also "annotate" the utterances, e.g., getting dependency parses for them, and save the parsed versions as utterance-level metadata. Here is an example of how this can be done:
from convokit import Parser
annotator = Parser()
movie_corpus = annotator.fit_transform(movie_corpus)
# #### Updating Corpus level metadata:
# In this dataset, there are a few sets of additional information about a total of 617 movies from which these conversations are drawn. For instance, genres, release year, url from which the raw sources are retrieved are included in the original dataset. These may be saved as Corpus level metadata.
# Adding urls information:
with open(data_dir + "raw_script_urls.txt", "r", encoding='utf-8', errors='ignore') as f:
urls = f.readlines()
movie_meta = {}
for movie in urls:
movie_id, title, url = [info.strip() for info in movie.split("+++$+++")]
movie_meta[movie_id] = {'title': title, "url": url}
len(movie_meta)
# Adding more movie meta from movie_titles_metadata.txt:
with open(data_dir + "movie_titles_metadata.txt", "r", encoding='utf-8', errors='ignore') as f:
movie_extra = f.readlines()
for movie in movie_extra:
movie_id, title, year, rating, votes, genre = [info.strip() for info in movie.split("+++$+++")]
movie_meta[movie_id]['release_year'] = year
movie_meta[movie_id]['rating'] = rating
movie_meta[movie_id]['votes'] = votes
movie_meta[movie_id]['genre'] = genre
# Sanity checking for a random movie in the dataset:
movie_meta['m23']
movie_corpus.meta['movie_metadata'] = movie_meta
# Optionally, we can also record the original name of the dataset:
movie_corpus.meta['name'] = "Cornell Movie-Dialogs Corpus"
# #### Saving created datasets
# To complete the final step of dataset conversion, we want to save the dataset so that it can be loaded later for reuse. You may want to specify a name. The default location for saved datasets is __./convokit/saved-corpora__ in your home directory, but you can also specify where you want the saved corpora to be.
# movie_corpus.dump("movie-corpus", base_path = <specify where you prefer to save it to>)
# the following would save the Corpus to the default location
movie_corpus.dump("movie-corpus")
# After saving, the available info from the dataset can be checked directly, without loading it:
from convokit import meta_index
meta_index(filename = "movie-corpus")
# ### Other ways of conversion
#
# The above method is only one way to convert the dataset. Alternatively, one may follow the specifications of the expected data format described [here](https://github.com/CornellNLP/Cornell-Conversational-Analysis-Toolkit/blob/master/doc/source/data_format.rst) strictly and write out the component files directly.
|
examples/converting_movie_corpus.ipynb
|
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# Reading the csv file
import pandas as pd
data = pd.read_csv("2_class_data.csv")
# Splitting the data into X and y
import numpy as np
X = np.array(data[['x1', 'x2']])
y = np.array(data['y'])
# +
# Import statement for train_test_split
from sklearn.model_selection import train_test_split
# TODO: Use the train_test_split function to split the data into
# training and testing sets.
# The size of the testing set should be 20% of the total size of the data.
# Your output should contain 4 objects.
X_train, X_test, y_train, y_test = train_test_split(
X,y,test_size=0.2)
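# For intuition, here is a minimal sketch of what `train_test_split` does under the hood (shuffle the indices, then cut at the requested fraction), using synthetic stand-in data rather than the CSV above:

```python
import numpy as np

rng = np.random.default_rng(0)
X_demo = np.arange(20).reshape(10, 2)   # 10 samples, 2 features
y_demo = np.arange(10)

# Emulate train_test_split: shuffle the indices, then cut at the test fraction
idx = rng.permutation(len(X_demo))
n_test = int(len(X_demo) * 0.2)
test_idx, train_idx = idx[:n_test], idx[n_test:]
X_train_demo, X_test_demo = X_demo[train_idx], X_demo[test_idx]
y_train_demo, y_test_demo = y_demo[train_idx], y_demo[test_idx]

# 20% of 10 samples -> 2 test rows, 8 training rows
assert len(X_test_demo) == 2 and len(X_train_demo) == 8
```

# The real function also supports `random_state` for reproducibility and `stratify` to preserve class proportions.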
|
Training and Testing Models/testing_in_sklearn.ipynb
|
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
import pandas as pd
import matplotlib.pyplot as plt
import seaborn as sns
import numpy as np
from scipy import stats
from scipy.optimize import curve_fit
import sklearn.metrics as metrics
# ## Read bidding log for advertiser 2821
df_2821 = pd.read_csv("2821/train.log.txt", delimiter="\t")
df_2821.head()
# ### Inspect bid price
#
# Price unit: RMB/CPM
#
# `CPM`: cost per thousand impressions
#
# Prices are linearly scaled for confidentiality.
df_2821['bidprice'].unique()
# > This advertiser only bids on two different prices: 294 or 277
# ### Bid price by ad exchange
df_2821[df_2821['bidprice'] == 277]['adexchange'].unique()
df_2821[df_2821['bidprice'] == 294]['adexchange'].unique()
# > This advertiser always sends a bid price of 277 on ad exchange 2 (Google), and 294 on exchanges 1 (Alibaba), 3 (Tencent), and 4 (Baidu)
# ## Distribution of payprice, by ad exchange
# **Ad Exchange**
#
# - 1 - Tanx (Alibaba)
# - 2 - Adx (Google DoubleClick AdX)
# - 3 - Tencent (Tencent)
# - 4 - Baidu (Baidu)
# - 5 - Youku (Youku) - N/A for this advertiser
# - 6 - Amx (Google Mobile) - N/A for this advertiser
plt.subplots(figsize=(16,5))
ax = sns.violinplot(x="adexchange", y="payprice", data=df_2821)
plt.show()
# ## Explore bidding on Ad Exchange 2 (Google Adx)
adx = df_2821['adexchange'] == 2
tc = df_2821['adexchange'] == 3
# sort by payprice
sorted_adx = df_2821[adx].sort_values(by=['payprice'])
# sorted_adx.head()
sorted_tc = df_2821[tc].sort_values(by=['payprice'])
sorted_tc.head()
# ### Count number of bids by payprice
counts = sorted_adx['payprice'].reset_index()
counts = counts.groupby(by="payprice").count()[['index']]
counts.rename(columns={'index':'count'}, inplace=True)
counts.head(10)
# ### Calculate cumulative stats
counts['cumsum_count'] = counts['count'].cumsum()
counts['bracket_cost'] = counts['count'] * counts.index
counts['cumsum_cost'] = counts['bracket_cost'].cumsum()
counts['cummean_cost'] = counts['cumsum_cost'] / counts['cumsum_count']
counts.head(10)
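# To make the derived columns concrete, here is the same recipe on a tiny invented bid landscape (payprice levels and counts are made up for illustration):

```python
import pandas as pd

# Toy bid landscape: payprice -> number of impressions won at that price
toy = pd.DataFrame({'count': [2, 3, 5]},
                   index=pd.Index([10, 20, 30], name='payprice'))
toy['cumsum_count'] = toy['count'].cumsum()             # impressions won up to this price
toy['bracket_cost'] = toy['count'] * toy.index          # cost contributed by this price bracket
toy['cumsum_cost'] = toy['bracket_cost'].cumsum()       # total cost up to this price
toy['cummean_cost'] = toy['cumsum_cost'] / toy['cumsum_count']  # average pay price so far

# At payprice 30: 10 impressions, cost 2*10 + 3*20 + 5*30 = 230, mean 23
assert toy.loc[30, 'cumsum_count'] == 10
```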
# # Plots
#
# ## Cumulative Impressions as bid price increases
# $KPI_i(x_i)$
#
# where KPI = Impressions, i = 2(Google adx)
plt.plot(counts['cumsum_count'])
plt.show()
# ## Cumulative Cost as bid price increases
# $KPI_i(x_i) * AP_i(x_i)$
#
# where KPI = Impressions, i = 2(Google adx)
plt.plot(counts['cumsum_cost'])
plt.show()
# ## Cumulative Average pay price as bid price increases
# $AP_i(x_i)$
#
# where KPI = Impressions, i = 2(Google adx)
plt.plot(counts['cummean_cost'])
plt.show()
# ## Model the budget_i as decision variable directly
#
# $\Sigma_i y_i \le Budget$
#
# where $y_i$ is the total budget allocated for website i
# +
sorted_adx = sorted_adx[1:]
y = sorted_adx[['payprice']].cumsum()
y["impressions"] = np.arange(len(y))
y.columns = ['y', 'impressions']
# -
# ## Scatter Plot
plt.subplots(figsize=(12,6))
plt.scatter(y['y'], y['impressions'], marker='.', c='b', alpha=0.002)
plt.show()
# ## Fit curve 1
# +
def func(x, a, b, c):
return a * np.exp(-b * x) + c
popt, pcov = curve_fit(func, y['y'], y['impressions'])
plt.subplots(figsize=(12,6))
plt.plot(y['y'], y['impressions'], 'b.', label="Original Noised Data")
plt.plot(y['y'], func(y['y'], *popt), 'r-', label="Fitted Curve")
plt.legend()
plt.show()
# -
# ## Fit curve 2
# +
def func(x, a, b, c):
return a * np.power( 1 + x/b, c) - a
# def func(x, a, b, c, d):
# return a * np.power(x/b, c) + d
popt, pcov = curve_fit(func, y['y'], y['impressions'], p0=[400,10,0.4], maxfev=2000)
plt.subplots(figsize=(12,8))
plt.plot(y['y'], y['impressions'], 'b.', label="Original Noised Data")
plt.plot(y['y'], func(y['y'], *popt), 'r-', label="Fitted Curve")
plt.legend(loc="upper left")
plt.show()
# -
print(popt)
print("MAE =",metrics.mean_absolute_error(y['impressions'], func(y['y'], *popt)))
# **fitted curve:**
#
# $18525 * (1 + x/15257)^{0.3746} - 18525$
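# A quick check of the fitted curve's qualitative behavior, using the rounded parameters shown above (the exact values come from `popt`):

```python
import numpy as np

def fitted_impressions(x, a=18525, b=15257, c=0.3746):
    # Fitted form: a * (1 + x/b)**c - a
    return a * np.power(1 + x / b, c) - a

# The curve passes through the origin by construction
assert fitted_impressions(0) == 0
# Diminishing returns: doubling the budget less than doubles impressions (c < 1)
assert fitted_impressions(2e5) < 2 * fitted_impressions(1e5)
```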
func(y['y'], *popt)
counts = sorted_tc['payprice'].reset_index()
counts = counts.groupby(by="payprice").count()[['index']]
counts.rename(columns={'index':'count'}, inplace=True)
counts.head(10)
counts['cumsum_count'] = counts['count'].cumsum()
counts['bracket_cost'] = counts['count'] * counts.index
counts['cumsum_cost'] = counts['bracket_cost'].cumsum()
counts['cummean_cost'] = counts['cumsum_cost'] / counts['cumsum_count']
counts.head(10)
sorted_tc
# +
# sorted_tc = sorted_adx[1:]
y = sorted_tc[['payprice']].cumsum()
y["impressions"] = np.arange(len(y))
y.columns = ['y', 'impressions']
# +
def func(x, a, b, c):
return a * np.power( 1 + x/b, c) - a
popt, pcov = curve_fit(func, y['y'], y['impressions'], p0=[400,10,0.4], maxfev=2000)
plt.subplots(figsize=(12,8))
plt.plot(y['y'], y['impressions'], 'b.', label="Original Noised Data")
plt.plot(y['y'], func(y['y'], *popt), 'r-', label="Fitted Curve")
plt.legend(loc="upper left")
plt.show()
# -
print(popt)
print("MAE =",metrics.mean_absolute_error(y['impressions'], func(y['y'], *popt)))
|
explore_data.ipynb
|
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: 'Python 3.7.6 64-bit (''base'': conda)'
# name: python3
# ---
# ### Try out the VortexaSDK
# First let's import our requirements
# + tags=[]
from datetime import datetime
import vortexasdk as v
# -
# Utilisation of laden vessels carrying Crude/Condensate between the Middle East and China over the last 7 days, broken down by vessel_class.
# You'll need to enter your Vortexa API key when prompted.
# + tags=[]
df = v.FleetUtilisationTimeseries().search(
filter_time_min=datetime(2021, 1, 11),
filter_time_max=datetime(2021, 1, 18),
filter_vessel_status="vessel_status_laden_known",
filter_products="54af755a090118dcf9b0724c9a4e9f14745c26165385ffa7f1445bc768f06f11",
filter_origins="80aa9e4f3014c3d96559c8e642157edbb2b684ea0144ed76cd20b3af75110877",
filter_destinations="934c47f36c16a58d68ef5e007e62a23f5f036ee3f3d1f5f85a48c572b90ad8b2",
timeseries_property="vessel_class",
timeseries_frequency="day",
).to_df()
# + tags=[]
df.head()
# -
# That's it! You've successfully loaded data using the Vortexa SDK. Check out https://vortechsa.github.io/python-sdk/ for more examples
|
docs/examples/try_me_out/utilisation_timeseries.ipynb
|
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: 'Python 3.8.11 64-bit (''base'': conda)'
# name: python3
# ---
# + [markdown] id="njqZ9ZjiZhOi"
# # Validation
#
# This notebook contains examples of some of the simulations that have been used to validate Disimpy's functionality by comparing the simulated signals to analytical solutions and signals generated by other simulators. Here, we simulate free diffusion and restricted diffusion inside cylinders and spheres.
# + id="URtKY7GhZYoA"
# Import the required packages and modules
import os
import pickle
import numpy as np
import matplotlib.pyplot as plt
from disimpy import gradients, simulations, substrates, utils
from disimpy.gradients import GAMMA
# + id="Wr1C78aBZi7c"
# Define the simulation parameters
n_walkers = int(1e6) # Number of random walkers
n_t = int(1e3) # Number of time points
diffusivity = 2e-9 # In SI units (m^2/s)
# + [markdown] id="chrnW_CtZjTj"
# ## Free diffusion
#
# In the case of free diffusion, the analytical expression for the signal is $S = S_0 \exp(-bD)$, where $S_0$ is the signal without diffusion-weighting, $b$ is the b-value, and $D$ is the diffusivity.
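# As a quick numerical check of this expression (using the diffusivity defined above; b-values in SI units):

```python
import numpy as np

D = 2e-9                             # diffusivity (m^2/s), as defined above
b_vals = np.array([0.0, 1e9, 3e9])   # b-values in SI units (s/m^2)
S = np.exp(-b_vals * D)              # normalized signal S/S0

# b = 1e9 s/m^2 (i.e. 1 ms/um^2) with D = 2e-9 m^2/s gives exp(-2) ~ 0.135
assert abs(S[1] - np.exp(-2)) < 1e-12
```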
# +
# Create a Stejskal-Tanner gradient array with ∆ = 40 ms and δ = 30 ms
gradient = np.zeros((1, 700, 3))
gradient[0, 1:300, 0] = 1
gradient[0, -300:-1, 0] = -1
T = 70e-3
dt = T / (gradient.shape[1] - 1)
gradient, dt = gradients.interpolate_gradient(gradient, dt, n_t)
bs = np.linspace(0, 3e9, 100)
gradient = np.concatenate([gradient for _ in bs], axis=0)
gradient = gradients.set_b(gradient, dt, bs)
# Show the waveform of the measurement with the highest b-value
fig, ax = plt.subplots(1, figsize=(7, 4))
for i in range(3):
ax.plot(np.linspace(0, T, n_t), gradient[-1, :, i])
ax.legend(['G$_x$', 'G$_y$', 'G$_z$'])
ax.set_xlabel('Time (s)')
ax.set_ylabel('Gradient magnitude (T/m)')
plt.show()
# + colab={"base_uri": "https://localhost:8080/", "height": 651} id="xDuPyAbuZkN-" outputId="427f4f65-9a78-420f-98f8-d4160277a608"
# Run the simulation
substrate = substrates.free()
signals = simulations.simulation(
n_walkers, diffusivity, gradient, dt, substrate)
# Plot the results
fig, ax = plt.subplots(1, figsize=(7, 4))
ax.plot(bs, np.exp(-bs * diffusivity), color='tab:orange')
ax.scatter(bs, signals / n_walkers, s=10, marker='o')
ax.legend(['Analytical signal', 'Simulated signal'])
ax.set_xlabel('b (ms/μm$^2$)')
ax.set_ylabel('S/S$_0$')
ax.set_yscale('log')
plt.show()
# + [markdown] id="M9uMkHEha4d0"
# ## Restricted diffusion and comparison to MISST
#
# Here, diffusion inside cylinders and spheres is simulated and the signals are compared to those calculated with [MISST](http://mig.cs.ucl.ac.uk/index.php?n=Tutorial.MISST) that uses matrix operators to calculate the signal from simple geometries. The cylinder is simulated using a triangular mesh and the sphere as an analytically defined surface.
# + colab={"base_uri": "https://localhost:8080/", "height": 297} id="31_UbNxC6rhS" outputId="7c4b3de6-19aa-46a0-90f8-013fc6b892d6"
# Load and show the cylinder mesh used in the simulations
mesh_path = os.path.join(
os.path.dirname(simulations.__file__), 'tests', 'cylinder_mesh_closed.pkl')
with open(mesh_path, 'rb') as f:
example_mesh = pickle.load(f)
faces = example_mesh['faces']
vertices = example_mesh['vertices']
cylinder_substrate = substrates.mesh(
vertices, faces, periodic=True, init_pos='intra')
utils.show_mesh(cylinder_substrate)
# + colab={"base_uri": "https://localhost:8080/", "height": 1000} id="GOfXcVd7lq9F" outputId="0f271fcf-0a2e-494d-da0c-50d2c85add62"
# Run the simulation
signals = simulations.simulation(
n_walkers, diffusivity, gradient, dt, cylinder_substrate)
# Load MISST signals
tests_dir = os.path.join(os.path.dirname(gradients.__file__), 'tests')
misst_signals = np.loadtxt(os.path.join(tests_dir,
'misst_cylinder_signal_smalldelta_30ms_bigdelta_40ms_radius_5um.txt'))
# Plot the results
fig, ax = plt.subplots(1, figsize=(7, 4))
ax.scatter(bs, signals / n_walkers, s=10, marker='o')
ax.scatter(bs, misst_signals, s=10, marker='.')
ax.set_xlabel('b (ms/μm$^2$)')
ax.set_ylabel('S/S$_0$')
ax.legend(['Disimpy', 'MISST'])
ax.set_title('Diffusion in a cylinder')
ax.set_yscale('log')
plt.show()
# +
# Run the simulation
sphere_substrate = substrates.sphere(5e-6)
signals = simulations.simulation(
n_walkers, diffusivity, gradient, dt, sphere_substrate)
# Load MISST signals
tests_dir = os.path.join(os.path.dirname(gradients.__file__), 'tests')
misst_signals = np.loadtxt(os.path.join(tests_dir,
'misst_sphere_signal_smalldelta_30ms_bigdelta_40ms_radius_5um.txt'))
# Plot the results
fig, ax = plt.subplots(1, figsize=(7, 4))
ax.scatter(bs, signals / n_walkers, s=10, marker='o')
ax.scatter(bs, misst_signals, s=10, marker='.')
ax.set_xlabel('b (ms/μm$^2$)')
ax.set_ylabel('S/S$_0$')
ax.legend(['Disimpy', 'MISST'])
ax.set_title('Diffusion in a sphere')
ax.set_yscale('log')
plt.show()
# + [markdown] id="IDKCFF17Zlvl"
# ## Signal diffraction pattern
#
# In the case of restricted diffusion in a cylinder perpendicular to the direction of the diffusion encoding gradient with short pulses and long diffusion time, the signal minimum occurs at $0.61 · 2 · \pi/r$, where $r$ is the cylinder radius. Details are provided by [Avram et al](https://doi.org/10.1002/nbm.1277), for example.
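# The expected location of the minimum can be computed directly for the radius used in the simulation below (a sanity check, not a simulation):

```python
import numpy as np

radius = 10e-6                       # cylinder radius (m)
q_min = 0.61 * 2 * np.pi / radius    # first diffraction minimum (1/m)

# For r = 10 um the minimum falls at ~0.383 um^-1
assert abs(q_min * 1e-6 - 0.3833) < 1e-3
```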
# + colab={"base_uri": "https://localhost:8080/", "height": 657} id="-Zr5LU8_Zopl" outputId="01492a0e-bb9e-4650-cd3c-68fdef7dae9a"
# Create a Stejskal-Tanner gradient array with ∆ = 0.5 s and δ = 0.1 ms
T = 501e-3
gradient = np.zeros((1, n_t, 3))
gradient[0, 1:2, 0] = 1
gradient[0, -2:-1, 0] = -1
dt = T / (gradient.shape[1] - 1)
bs = np.linspace(1, 1e11, 250)
gradient = np.concatenate([gradient for _ in bs], axis=0)
gradient = gradients.set_b(gradient, dt, bs)
q = gradients.calc_q(gradient, dt)
qs = np.max(np.linalg.norm(q, axis=2), axis=1)
# Show the waveform of the measurement with the highest b-value
fig, ax = plt.subplots(1, figsize=(7, 4))
for i in range(3):
ax.plot(np.linspace(0, T, n_t), gradient[-1, :, i])
ax.legend(['G$_x$', 'G$_y$', 'G$_z$'])
ax.set_xlabel('Time (s)')
ax.set_ylabel('Gradient magnitude (T/m)')
plt.show()
# Run the simulation
radius = 10e-6
substrate = substrates.cylinder(
radius=radius, orientation=np.array([0., 0., 1.]))
signals = simulations.simulation(
n_walkers, diffusivity, gradient, dt, substrate)
# Plot the results
fig, ax = plt.subplots(1, figsize=(7, 4))
ax.scatter(1e-6 * qs, signals / n_walkers, s=10, marker='o')
minimum = 1e-6 * .61 * 2 * np.pi / radius
ax.plot([minimum, minimum], [0, 1], ls='--', lw=2, color='tab:orange')
ax.legend(['Analytical minimum', 'Simulated signal'])
ax.set_xlabel('q (μm$^{-1}$)')
ax.set_ylabel('S/S$_0$')
ax.set_yscale('log')
ax.set_ylim([1e-4, 1])
ax.set_xlim([0, max(1e-6 * qs)])
plt.show()
|
docs/source/validation.ipynb
|
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3.9 (tensorflow)
# language: python
# name: tensorflow
# ---
# <a href="https://colab.research.google.com/github/jeffheaton/t81_558_deep_learning/blob/master/t81_558_class_14_05_new_tech.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a>
# # T81-558: Applications of Deep Neural Networks
# **Module 14: Other Neural Network Techniques**
# * Instructor: [<NAME>](https://sites.wustl.edu/jeffheaton/), McKelvey School of Engineering, [Washington University in St. Louis](https://engineering.wustl.edu/Programs/Pages/default.aspx)
# * For more information visit the [class website](https://sites.wustl.edu/jeffheaton/t81-558/).
# # Module 14 Video Material
#
# * Part 14.1: What is AutoML [[Video]](https://www.youtube.com/watch?v=1mB_5iurqzw&list=PLjy4p-07OYzulelvJ5KVaT2pDlxivl_BN) [[Notebook]](https://github.com/jeffheaton/t81_558_deep_learning/blob/master/t81_558_class_14_01_automl.ipynb)
# * Part 14.2: Using Denoising AutoEncoders in Keras [[Video]](https://www.youtube.com/watch?v=4bTSu6_fucc&list=PLjy4p-07OYzulelvJ5KVaT2pDlxivl_BN) [[Notebook]](https://github.com/jeffheaton/t81_558_deep_learning/blob/master/t81_558_class_14_02_auto_encode.ipynb)
# * Part 14.3: Training an Intrusion Detection System with KDD99 [[Video]](https://www.youtube.com/watch?v=1ySn6h2A68I&list=PLjy4p-07OYzulelvJ5KVaT2pDlxivl_BN) [[Notebook]](https://github.com/jeffheaton/t81_558_deep_learning/blob/master/t81_558_class_14_03_anomaly.ipynb)
# * Part 14.4: Anomaly Detection in Keras [[Video]](https://www.youtube.com/watch?v=VgyKQ5MTDFc&list=PLjy4p-07OYzulelvJ5KVaT2pDlxivl_BN) [[Notebook]](https://github.com/jeffheaton/t81_558_deep_learning/blob/master/t81_558_class_14_04_ids_kdd99.ipynb)
# * **Part 14.5: The Deep Learning Technologies I am Excited About** [[Video]]() [[Notebook]](https://github.com/jeffheaton/t81_558_deep_learning/blob/master/t81_558_class_14_05_new_tech.ipynb)
#
#
# # Part 14.5: New Technologies
#
# This course changes often to keep up with the rapidly evolving landscape that is deep learning. If you would like to continue to monitor this class, I suggest following me on the following:
#
# * [GitHub](https://github.com/jeffheaton) - I post all changes to GitHub.
# * [<NAME>'s YouTube Channel](https://www.youtube.com/user/HeatonResearch) - I add new videos for this class at my channel.
#
# ## New Technology Radar
#
# Currently, these new technologies are on my radar for possible future inclusion in this course:
#
# * Transformers
# * More Advanced Transfer Learning
# * Augmentation
# * Reinforcement Learning beyond TF-Agents
#
# This section provides only a high-level overview of these emerging technologies; I link to supplemental material and code in each subsection.
#
# Transformers are a relatively new technology that I will soon add to this course. They have powered many NLP applications. Projects such as Bidirectional Encoder Representations from Transformers (BERT) and the Generative Pre-trained Transformer (GPT-1, 2, 3) received much attention from practitioners. Transformers enable sequence-to-sequence machine learning, allowing a model to consume variable-length, potentially textual, input; the output is also a variable-length sequence. This feature lets a transformer learn tasks such as translation between human languages or even complicated NLP-based classification. Considerable compute power is needed to take advantage of transformers; thus, you should rely on transfer learning to train and fine-tune them.
#
# Complex models can require considerable training time. It is not unusual to see GPU clusters train for days to achieve state-of-the-art results, which entails a substantial monetary cost. Because of this cost, you must consider transfer learning. Services such as Hugging Face and NVIDIA GPU Cloud (NGC) contain many advanced pretrained neural networks for you to use.
#
# Augmentation is a technique in which algorithms expand the training data with new items that are modified versions of the original training examples. It has seen many applications in computer vision. In its most basic form, an algorithm can flip images vertically and horizontally to quadruple the size of the training set. Projects such as NVIDIA StyleGAN2-ADA have used augmentation to substantially decrease the amount of training data an algorithm needs.
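# The flip example can be sketched in a few lines of NumPy (an illustration of the basic idea only, not the StyleGAN2-ADA pipeline):

```python
import numpy as np

def flip_augment(img):
    # Return the original image plus its horizontal, vertical,
    # and combined flips -- quadrupling the training set
    return [img, img[:, ::-1], img[::-1, :], img[::-1, ::-1]]

img = np.arange(6).reshape(2, 3)   # tiny stand-in "image"
augmented = flip_augment(img)
assert len(augmented) == 4
```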
#
# Currently, this course uses TF-Agents to implement reinforcement learning. TF-Agents is convenient because it is based on TensorFlow. However, TF-Agents has been slow to update compared to other frameworks. Additionally, when TF-Agents is updated, internal errors are often introduced that can take months for the TF-Agents team to fix. When I compare simple "Hello World"-type examples for Atari games on platforms like Stable Baselines to their TF-Agents equivalents, I am left wanting more from TF-Agents.
#
# ## Programming Language Radar
#
# As a machine learning programming language, Python has an absolute lock on the industry, and it is not going anywhere any time soon. My main issue with Python is end-to-end deployment. For Jupyter notebooks and training/pipeline scripts, Python will be your go-to language. However, to create edge applications, such as web pages and mobile apps, you will certainly need to utilize other languages. I do not suggest replacing Python with any of the following languages; rather, these are some alternatives and the domains where you might choose to use them.
#
# * **IOS Application Development** - Swift
# * **Android Development** - Kotlin and Java
# * **Web Development** - NodeJS and JavaScript
# * **Mac Application Development** - Swift or JavaScript with Electron or React Native
# * **Windows Application Development** - C# or JavaScript with Electron or React Native
# * **Linux Application Development** - C/C++ with Tcl/Tk or JavaScript with Electron or React Native
#
#
# ## What About PyTorch?
#
# Technical folks love debates that can reach levels of fervor generally reserved for religion or politics. PyTorch and TensorFlow are approaching this level of spirited competition. There is no clear winner, at least at this point. Why did I base this class on Keras/TensorFlow, as opposed to PyTorch? There are two primary reasons. The first reason is a fact; the second is my opinion.
#
# 1. PyTorch was not available in early 2016 when I introduced/developed this course.
# 2. PyTorch exposes lower-level details that would be distracting for a course on applications of deep learning.
#
# I recommend being familiar with core deep learning techniques and staying adaptable enough to switch between these two frameworks.
#
# ## Where to From Here?
#
#
# So what's next? Here are some ideas.
#
# * [Google CoLab Pro](https://colab.research.google.com/signup) - If you need more GPU power; but are not yet ready to buy a GPU of your own.
# * [TensorFlow Certification](https://www.tensorflow.org/certificate)
# * [Coursera](https://www.coursera.org/)
#
# I really hope that you have enjoyed this course. If you have any suggestions for improvement or technology suggestions, please contact me. This course is always evolving, and I invite you to subscribe to my [YouTube channel](https://www.youtube.com/user/HeatonResearch) for my latest updates. I also frequently post videos beyond the scope of this course, so the channel itself is a good next step. Thank you very much for your interest and focus on this course. Other social media links for me include:
#
# * [<NAME> GitHub](https://github.com/jeffheaton)
# * [<NAME> Twitter](https://twitter.com/jeffheaton)
# * [<NAME> Medium](https://medium.com/@heatonresearch)
#
#
#
|
t81_558_class_14_05_new_tech.ipynb
|
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: 'Python 3.8.8 64-bit (''base'': conda)'
# name: python3
# ---
# +
from datasets import *
from run_helpers import *
import numpy as np
np.random.seed(42)
# -
# usps
data, label, n_class = usps()
run_RVFL(data, label, n_class)
run_dRVFL(data, label, n_class)
# +
run_edRVFL(data, label, n_class)
# -
run_BRVFL(data, label, n_class)
run_BdRVFL(data, label, n_class)
run_BedRVFL(data, label, n_class)
run_LapRVFL(data, label, n_class)
run_LapdRVFL(data, label, n_class)
run_LapedRVFL(data, label, n_class)
|
old/main_run_usps.ipynb
|
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# cd ../data/external
# ## Lecture audio
# [Japanese lecture audio contents corpus (CJLC)](http://www.slp.cs.tut.ac.jp/CJLC/)
# !wget http://www.slp.cs.tut.ac.jp/CJLC/CJLC-0.1.tar.bz2
# !unar CJLC-0.1.tar.bz2
# rm CJLC-0.1.tar.bz2
# mv CJLC-0.1/ CJLC-0.1_v
# mkdir CJLC-0.1
# + language="bash"
# mv CJLC-0.1_v/data/L11M0010.wav CJLC-0.1/1.wav
# mv CJLC-0.1_v/data/L11M0011.wav CJLC-0.1/2.wav
# mv CJLC-0.1_v/data/L11M0013.wav CJLC-0.1/3.wav
# mv CJLC-0.1_v/data/L11M0030.wav CJLC-0.1/4.wav
# mv CJLC-0.1_v/data/L11M0040.wav CJLC-0.1/5.wav
# mv CJLC-0.1_v/data/L11M0062.wav CJLC-0.1/6.wav
# mv CJLC-0.1_v/data/L11M0071.wav CJLC-0.1/7.wav
#
# mv CJLC-0.1_v/data/L11M0010.xml CJLC-0.1/1.xml
# mv CJLC-0.1_v/data/L11M0011.xml CJLC-0.1/2.xml
# mv CJLC-0.1_v/data/L11M0013.xml CJLC-0.1/3.xml
# mv CJLC-0.1_v/data/L11M0030.xml CJLC-0.1/4.xml
# mv CJLC-0.1_v/data/L11M0040.xml CJLC-0.1/5.xml
# mv CJLC-0.1_v/data/L11M0062.xml CJLC-0.1/6.xml
# mv CJLC-0.1_v/data/L11M0071.xml CJLC-0.1/7.xml
# -
# rm -r CJLC-0.1_v/
# + language="bash"
#
# for i in `seq 7`
# do
# python ../../src/data/xml_to_plain.py ../../data/external/CJLC-0.1/$i.xml \
# > ../../data/external/CJLC-0.1/$i.txt
# done
# -
# !head CJLC-0.1/1.txt
# rm CJLC-0.1/*.xml
# ls CJLC-0.1/
# ## Word frequency list
# [N-gram corpus - Japanese Web Corpus 2010](http://s-yata.jp/corpus/nwc2010/ngrams/)
# !wget http://dist.s-yata.jp/corpus/nwc2010/ngrams/word/over999/filelist
# cat filelist
# !wget $(sed -n 1p filelist)
# !unar ./1gm-0000.xz
# rm ./1gm-0000.xz
# !head ./1gm-0000
# ## Dictation kit
# !wget https://osdn.net/projects/julius/downloads/66544/dictation-kit-v4.4.zip/
# !unar index.html
# mv ./dictation-kit-v4.4 ../../lib/dictation-kit
# ## Word2Vec
#
# - [Pretrained Japanese word2vec model](http://aial.shiroyagi.co.jp/2017/02/japanese-word2vec-model-builder/)
# - [download link](http://public.shiroyagi.s3.amazonaws.com/latest-ja-word2vec-gensim-model.zip)
# - [Japanese Wikipedia entity vectors](http://www.cl.ecei.tohoku.ac.jp/~m-suzuki/jawiki_vector/)
# - [download link](http://www.cl.ecei.tohoku.ac.jp/~m-suzuki/jawiki_vector/data/20170201.tar.bz2)
# !wget http://public.shiroyagi.s3.amazonaws.com/latest-ja-word2vec-gensim-model.zip
# !unar ./latest-ja-word2vec-gensim-model.zip
# mv ./latest-ja-word2vec-gensim-model/ ./w2v50
# !wget http://www.cl.ecei.tohoku.ac.jp/~m-suzuki/jawiki_vector/data/20170201.tar.bz2
# !unar ./20170201.tar.bz2
# rm ./entity_vector/entity_vector.model.bin
# rm ./20170201.tar.bz2
# rm ./latest-ja-word2vec-gensim-model.zip
# mv ./entity_vector/entity_vector.model.txt ./
# rm -r ./entity_vector/
# mv entity_vector.model.txt w2v200.txt
# rm filelist
# rm index.html
# ls
# mv ./w2v200.txt ./w2v200.tsv
# ### kakasi
#
# https://qiita.com/mountcedar/items/f140837393a1697a8433
# !wget http://kakasi.namazu.org/stable/kakasi-2.3.6.tar.xz
# !unar ./kakasi-2.3.6.tar.xz
# rm kakasi-2.3.6.tar.xz
# cd ./kakasi-2.3.6/
# !./configure -host=powerpc-apple-bsd --prefix=$HOME/usr
# !make
# !make install
# cd ../
# rm -r kakasi-2.3.6/
# !git clone https://github.com/neologd/mecab-ipadic-neologd.git
# cd mecab-ipadic-neologd
# !sudo bin/install-mecab-ipadic-neologd
# #### Install with sudo
#
# **If you run this on multiple machines, take care to match the versions**
# ```bash
# # cd ./data/external/mecab-ipadic-neologd
# sudo bin/install-mecab-ipadic-neologd
# ```
# ls
# mv ./CJLC-0.1 ./corpus
# ## Installing Julius
# !wget https://github.com/julius-speech/julius/archive/v4.4.2.1.tar.gz
# !unar ./v4.4.2.1.tar.gz
# cd ./julius-4.4.2.1/
# !./configure --prefix=$HOME/usr
# !make
# !make install
|
notebooks/00-download-data.ipynb
|
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# +
# Load the libraries
import numpy as np
import matplotlib.pyplot as plt
from sklearn.svm import SVR
from sklearn.model_selection import train_test_split
import mlflow
import mlflow.sklearn
import os
import tempfile
# -
# Set the URL of the MLflow tracking server
mlflow.set_tracking_uri("http://mlflow:5050")
mlflow.set_experiment("tutorial")
temp_dir = tempfile.TemporaryDirectory()
# +
# Number of sample points
m = 200
# Fix the random seed so the results are reproducible
np.random.seed(seed=2018)
# Create m evenly spaced points between -3 and 3
X = np.linspace(-3, 3, m)
# Prepare a 10x finer grid for plotting later
X_plot = np.linspace(-3, 3, m*10)
# Build the target: a periodic sinc term (1st term) plus an upward-sloping trend (2nd term) and random noise (3rd term)
y = np.sinc(X) + 0.2 * X + 0.3 * np.random.randn(m)
# For plotting and fitting, reshape each series into a single-column matrix
X = X.reshape(-1, 1)
y = y.reshape(-1, 1)
X_plot = X_plot.reshape(-1,1)
# Show the plot
plt.title("sample data")
plt.xlabel("$x$")
plt.ylabel("$y$")
plt.scatter(X, y, color="black")
plt.show()
# +
# Split into training and test data
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.30, random_state=2019)
# Plot the training and test data
plt.scatter(X_test, y_test, label="test data", edgecolor='k',facecolor='w')
plt.scatter(X_train, y_train, label="training data", color='c')
plt.title("sample data")
plt.xlabel("$x$")
plt.ylabel("$y$")
plt.legend()
plt.show()
# Save the plot to a file
filename = os.path.join(temp_dir.name, "train_and_test_data.png")
plt.savefig(filename)
# +
# Dataset before splitting
np.savetxt(os.path.join(temp_dir.name, "dataset_X.csv"), X, delimiter=',')
np.savetxt(os.path.join(temp_dir.name, "dataset_y.csv"), y, delimiter=',')
# Dataset after splitting into training and test data
np.savetxt(os.path.join(temp_dir.name, "X_train.csv"), X_train, delimiter=',')
np.savetxt(os.path.join(temp_dir.name, "y_train.csv"), y_train, delimiter=',')
np.savetxt(os.path.join(temp_dir.name, "X_test.csv"), X_test, delimiter=',')
np.savetxt(os.path.join(temp_dir.name, "y_test.csv"), y_test, delimiter=',')
# -
# Define a function that plots the prediction results
def plot_result(model, X_train, y_train, score, filename=None):
# Compute predictions on the sorted test inputs (unused below; the curve is drawn from a fresh grid instead)
p = model.predict(np.sort(X_test))
# Draw the plot
plt.clf()
plt.scatter(X_test, y_test, label="test data", edgecolor='k',facecolor='w')
plt.scatter(X_train, y_train, label="Other training data", facecolor="r", marker='x')
plt.scatter(X_train[model.support_], y_train[model.support_], label="Support vectors", color='c')
plt.title("predicted results")
plt.xlabel("$x$")
plt.ylabel("$y$")
x = np.reshape(np.arange(-3,3,0.01), (-1, 1))
plt.plot(x, model.predict(x), label="model ($R^2=%1.3f$)" % (score), color='b')
plt.legend()
plt.show()
# Save the plot
if filename is not None:
plt.savefig(filename)
for C in (0.01, 1, 5, 10, 100, 1000):
# Record the following steps as a single MLflow run
with mlflow.start_run(run_name="Search C"):
# Train the model
model = SVR(kernel='rbf', C=C, epsilon=0.1, gamma='auto').fit(X_train, np.ravel(y_train))
# Send the saved dataset files to MLflow
mlflow.log_artifacts(temp_dir.name, artifact_path="dataset")
# Set a tag
mlflow.set_tag("algorithm", "SVR")
# Log the hyperparameter
mlflow.log_param("C", C)
# Compute the accuracy metric and send it to MLflow
score = model.score(X_test, y_test)
mlflow.log_metric("R2 score", score)
# Send the serialized model file to MLflow
mlflow.sklearn.log_model(model, "model", serialization_format='cloudpickle')
# Save the prediction plot to a temporary directory and send it to MLflow
with tempfile.TemporaryDirectory() as tmp:
filename = os.path.join(tmp, "predict_results.png")
plot_result(model, X_train, y_train, model.score(X_test, y_test), filename)
mlflow.log_artifact(filename, artifact_path="results")
|
example/ml_flow_tutorial.ipynb
|
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 2
# language: python
# name: python2
# ---
# # List_5_3+4
# ## List_5_3
import tensorflow as tf
import numpy as np
from bregman.suite import *
# +
filenames = tf.train.match_filenames_once('./audio_dataset/*.wav')
count_num_files = tf.size(filenames)
filename_queue = tf.train.string_input_producer(filenames)
reader = tf.WholeFileReader()
filename, file_contents = reader.read(filename_queue)
chroma = tf.placeholder(tf.float32)
max_freqs = tf.argmax(chroma, 0)
# +
def get_next_chromagram(sess):
audio_file = sess.run(filename)
F = Chromagram(audio_file, nfft=16384, wfft=8192, nhop=2205)
return F.X
def extract_feature_vector(sess, chroma_data):
num_features, num_samples = np.shape(chroma_data)
freq_vals = sess.run(max_freqs, feed_dict={chroma: chroma_data})
hist, bins = np.histogram(freq_vals, bins=range(num_features + 1))
return hist.astype(float) / num_samples
def get_dataset(sess):
num_files = sess.run(count_num_files)
coord = tf.train.Coordinator()
threads = tf.train.start_queue_runners(coord=coord)
xs = []
for _ in range(num_files):
chroma_data = get_next_chromagram(sess)
x = [extract_feature_vector(sess, chroma_data)]
x = np.matrix(x)
if len(xs) == 0:
xs = x
else:
xs = np.vstack((xs, x))
return xs
# -
# ## List_5_4
k = 2
max_iterations = 100
# +
def initial_cluster_centroids(X, k):
return X[0:k, :]
def assign_cluster(X, centroids):
expanded_vectors = tf.expand_dims(X, 0)
expanded_centroids = tf.expand_dims(centroids, 1)
distances = tf.reduce_sum(tf.square(tf.subtract(expanded_vectors,
expanded_centroids)), 2)
mins = tf.argmin(distances, 0)
return mins
def recompute_centroids(X, Y):
sums = tf.unsorted_segment_sum(X, Y, k)
counts = tf.unsorted_segment_sum(tf.ones_like(X), Y, k)
return sums / counts
# -
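# For intuition, the `tf.unsorted_segment_sum` call in `recompute_centroids` sums the rows of `X` grouped by cluster id, one output row per cluster. Below is a minimal NumPy sketch of that behavior (illustrative only; `segment_sum` is a hypothetical helper, not part of this pipeline):

```python
import numpy as np

def segment_sum(X, Y, k):
    """NumPy sketch of tf.unsorted_segment_sum: sum the rows of X
    grouped by the cluster id stored in Y, one output row per cluster."""
    out = np.zeros((k, X.shape[1]))
    for row, cid in zip(X, Y):
        out[cid] += row
    return out

X = np.array([[1.0, 2.0], [3.0, 4.0], [5.0, 6.0]])
Y = np.array([0, 0, 1])          # rows 0 and 1 belong to cluster 0
sums = segment_sum(X, Y, 2)
counts = segment_sum(np.ones_like(X), Y, 2)
centroids = sums / counts        # per-cluster mean, as in recompute_centroids
```
# Dividing the per-cluster sums by the per-cluster counts yields the new centroids, which is exactly what `recompute_centroids` does with the two segment sums.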
with tf.Session() as sess:
sess.run(tf.global_variables_initializer())
sess.run(tf.local_variables_initializer())
X = get_dataset(sess)
centroids = initial_cluster_centroids(X, k)
    i, converged = 0, False
    prev_centroids = None
    while not converged and i < max_iterations:
        i += 1
        Y = assign_cluster(X, centroids)
        centroids = sess.run(recompute_centroids(X, Y))
        # Stop early once the centroids stop moving (the original loop
        # never updated `converged`, so it always ran max_iterations)
        converged = prev_centroids is not None and np.allclose(centroids, prev_centroids)
        prev_centroids = centroids
print("centroids: ")
print(centroids)
|
ch05_clustering/List_5_3+4.ipynb
|
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3.7
# language: python
# name: python3.7
# ---
# # Single-sample GSEA projection (ssGSEA)
# ## Background
# Traditional gene set enrichment analysis assesses the differential coordinate up- or down-regulation of a biological process or pathway between groups of samples belonging to two phenotypes. ssGSEA is designed to assess enrichment in individual samples, independently of pre-assigned phenotype labels. This provides the opportunity to analyze transcription data at a higher level, by using gene sets/pathways, resulting in a much more biologically interpretable set of features.
# **ssGSEA projects a single sample’s full gene expression profile into the space of gene sets**. It does this via *enrichment scores*, which represent the degree to which the genes in each particular gene set are coordinately up- or down-regulated within that sample.
#
# Any supervised or unsupervised machine learning technique or other statistical analysis can then be applied to the resulting projected dataset. The benefit is that the **ssGSEA projection transforms the data to a higher-level (pathways instead of genes) space representing a more biologically interpretable set of features on which analytic methods can be applied.**
#
# Another benefit of ssGSEA projection is **dimensionality reduction**. Typically the number of gene sets employed in the enrichment analysis is substantially smaller than the number of genes targeted by a gene expression assay, and they are more robust and less noisy, resulting in significant benefits for downstream analysis.
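# For intuition, a single-sample enrichment score can be sketched as a running-sum statistic over a sample's ranked gene list. The function below is a simplified illustration, not the exact GenePattern implementation; the gene names and expression values are made up:

```python
import numpy as np

def ssgsea_score_sketch(expr, gene_set, alpha=0.75):
    """Simplified single-sample enrichment score (illustrative sketch).

    expr: dict mapping gene name -> expression value for ONE sample.
    gene_set: collection of gene names (assumed to overlap expr).
    alpha: rank-weighting exponent (the module's default is 0.75).
    """
    # Rank genes by expression, highest first
    ranked = sorted(expr, key=expr.get, reverse=True)
    n = len(ranked)
    in_set = np.array([g in gene_set for g in ranked])
    # In-set genes are weighted by their rank (top gene = n) raised to alpha
    ranks = np.arange(n, 0, -1, dtype=float)
    weights = np.where(in_set, ranks ** alpha, 0.0)
    p_hit = np.cumsum(weights) / weights.sum()        # weighted CDF of hits
    p_miss = np.cumsum(~in_set) / (n - in_set.sum())  # CDF of misses
    # The enrichment score integrates the difference of the two CDFs
    return float(np.sum(p_hit - p_miss))

# Toy sample: the set genes sit at the top of the ranking, so the score is positive
expr = {"TP53": 5.2, "EGFR": 4.4, "KRAS": 3.1, "GAPDH": 1.0, "ACTB": 0.5}
score = ssgsea_score_sketch(expr, {"TP53", "EGFR"})
```
# A gene set concentrated at the bottom of the ranking would instead give a negative score, indicating coordinate down-regulation in that sample.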
# ## Before you begin
# You must log in to a GenePattern server. In this notebook we will use **GenePattern Cloud**
#
# Note: if you are not familiar with GenePattern Notebook features, you can review them here: <a href="https://notebook.genepattern.org/services/sharing/notebooks/361/preview/">GenePattern Notebook Tutorial</a>.
# <div class="alert alert-info">
# <p class="lead"> Instructions <i class="fa fa-info-circle"></i></p>
#
# Sign in to GenePattern by clicking **Login as [username]**.
# </div>
# + genepattern={"name": "Login", "server": "https://cloud.genepattern.org/gp", "type": "auth"}
# Requires GenePattern Notebook: pip install genepattern-notebook
import gp
import genepattern
# Username and password removed for security reasons.
genepattern.display(genepattern.session.register("https://cloud.genepattern.org/gp", "", ""))
# -
# ## Project gene expression dataset into the space of oncogenic gene sets
# ssGSEA transforms a gene expression dataset into a dataset where each row corresponds to a pathway from the [MSigDB oncogenic gene sets collection](http://software.broadinstitute.org/gsea/msigdb/genesets.jsp?collection=C6), and each column is a sample. Each value in the new dataset will therefore represent the up- or downregulation of a pathway (row) within a sample (column).
# <div class="alert alert-info">
# <p class="lead"> Instructions <i class="fa fa-info-circle"></i></p>
#
# Provide the required parameters for the ssGSEA module below.
#
# - For the **input gct file** parameter, provide a file in the [GCT format](https://genepattern.org/file-formats-guide#GCT).
# - For example: <a href="https://datasets.genepattern.org/data/ccmi_tutorial/2017-12-15/BRCA_HUGO_symbols.preprocessed.gct" target="_blank">BRCA_HUGO_symbols.preprocessed.gct</a>
# - For a detailed description of the parameters you can read the <a href='https://gsea-msigdb.github.io/ssGSEA-gpmodule/v10/index.html'>parameter documentation</a>.
# - For a description of the <strong>gene sets database files</strong> parameter options, visit <a href="https://www.gsea-msigdb.org/gsea/msigdb/index.jsp">the MSigDB webpage</a>.
# - Click <strong>Run</strong>.
# </div>
# + genepattern={"description": "Performs single sample GSEA. NOTE: with the release of v10.0.1, this module was renamed from \"ssGSEAProjection\" to just \"ssGSEA\"", "name": "ssGSEA", "param_values": {"combine.mode": null, "gene.set.selection": null, "gene.sets.database.files": null, "gene.symbol.column": null, "input.gct.file": null, "job.cpuCount": null, "job.memory": null, "job.queue": null, "job.walltime": null, "min.gene.set.size": null, "output.file.prefix": null, "sample.normalization.method": null, "weighting.exponent": null}, "type": "task"}
ssgsea_task = gp.GPTask(genepattern.session.get(0), 'urn:lsid:broad.mit.edu:cancer.software.genepattern.module.analysis:00270')
ssgsea_job_spec = ssgsea_task.make_job_spec()
ssgsea_job_spec.set_parameter("input.gct.file", "")
ssgsea_job_spec.set_parameter("output.file.prefix", "")
ssgsea_job_spec.set_parameter("gene.sets.database.files", "")
ssgsea_job_spec.set_parameter("gene.symbol.column", "Name")
ssgsea_job_spec.set_parameter("gene.set.selection", "ALL")
ssgsea_job_spec.set_parameter("sample.normalization.method", "none")
ssgsea_job_spec.set_parameter("weighting.exponent", "0.75")
ssgsea_job_spec.set_parameter("min.gene.set.size", "10")
ssgsea_job_spec.set_parameter("combine.mode", "combine.add")
ssgsea_job_spec.set_parameter("job.memory", "2 Gb")
ssgsea_job_spec.set_parameter("job.queue", "gp-cloud-default")
ssgsea_job_spec.set_parameter("job.cpuCount", "1")
ssgsea_job_spec.set_parameter("job.walltime", "02:00:00")
genepattern.display(ssgsea_task)
# -
# ## Visualize projected pathways as a heat map
# HeatMapViewer visualizes the resulting projection of genes into pathways.
# <div class="alert alert-info">
# <p class="lead"> Instructions <i class="fa fa-info-circle"></i></p>
#
# - In the **dataset** parameter below, click on the dropdown and select output of the ssGSEA module (it typically ends with `.PROJ.gct`).
# - Click **Run**.
# </div>
# + genepattern={"description": "A configurable heat map viewer that provides users with several options for manipulating and visualizing array-based data", "name": "HeatMapViewer", "param_values": {"dataset": null, "job.cpuCount": null, "job.memory": null, "job.queue": null, "job.walltime": null}, "show_code": false, "type": "task"}
heatmapviewer_task = gp.GPTask(genepattern.session.get(0), 'urn:lsid:broad.mit.edu:cancer.software.genepattern.module.visualizer:00010')
heatmapviewer_job_spec = heatmapviewer_task.make_job_spec()
heatmapviewer_job_spec.set_parameter("dataset", "")
genepattern.display(heatmapviewer_task)
# -
# # References
# - <NAME>, <NAME>, <NAME>, <NAME>, <NAME>, <NAME>, <NAME>, Pomeroy SL, Golub TR, <NAME>, Mesirov JP. Gene set enrichment analysis: A knowledge-based approach for interpreting genome-wide expression profiles. PNAS. 2005;102(43):15545-15550. http://www.pnas.org/content/102/43/15545.abstract
# - <NAME>, <NAME>, et al. Systematic RNA interference reveals that oncogenic KRAS-driven cancers require TBK1. Nature. 2009;462:108-112. https://pubmed.ncbi.nlm.nih.gov/19847166/
# - MSigDB website (https://www.gsea-msigdb.org/gsea/msigdb/index.jsp)
# - GSEA website (https://www.gsea-msigdb.org/gsea/index.jsp)
|
Single-sample GSEA projection (ssGSEA).ipynb
|
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# %matplotlib inline
#
# # Spectrum Representations
#
#
# The plots show different spectrum representations of a sine signal with
# additive noise. A (frequency) spectrum of a discrete-time signal is calculated
# by utilizing the fast Fourier transform (FFT).
#
# +
import matplotlib.pyplot as plt
import numpy as np
np.random.seed(0)
dt = 0.01 # sampling interval
Fs = 1 / dt # sampling frequency
t = np.arange(0, 10, dt)
# generate noise:
nse = np.random.randn(len(t))
r = np.exp(-t / 0.05)
cnse = np.convolve(nse, r) * dt
cnse = cnse[:len(t)]
s = 0.1 * np.sin(4 * np.pi * t) + cnse # the signal
fig, axs = plt.subplots(nrows=3, ncols=2, figsize=(7, 7))
# plot time signal:
axs[0, 0].set_title("Signal")
axs[0, 0].plot(t, s, color='C0')
axs[0, 0].set_xlabel("Time")
axs[0, 0].set_ylabel("Amplitude")
# plot different spectrum types:
axs[1, 0].set_title("Magnitude Spectrum")
axs[1, 0].magnitude_spectrum(s, Fs=Fs, color='C1')
axs[1, 1].set_title("Log. Magnitude Spectrum")
axs[1, 1].magnitude_spectrum(s, Fs=Fs, scale='dB', color='C1')
axs[2, 0].set_title("Phase Spectrum ")
axs[2, 0].phase_spectrum(s, Fs=Fs, color='C2')
axs[2, 1].set_title("Angle Spectrum")
axs[2, 1].angle_spectrum(s, Fs=Fs, color='C2')
axs[0, 1].remove() # don't display empty ax
fig.tight_layout()
plt.show()
|
matplotlib/gallery_jupyter/lines_bars_and_markers/spectrum_demo.ipynb
|
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 2
# language: python
# name: python2
# ---
# + [markdown] deletable=true editable=true
# <h1> Explore and create ML datasets </h1>
#
# In this notebook, we will explore data corresponding to taxi rides in New York City to build a Machine Learning model in support of a fare-estimation tool. The idea is to suggest a likely fare to taxi riders so that they are not surprised, and so that they can protest if the charge is much higher than expected.
#
# <div id="toc"></div>
#
# Let's start off with the Python imports that we need.
# + deletable=true editable=true
import google.datalab.bigquery as bq
import seaborn as sns
import pandas as pd
import numpy as np
import shutil
# + [markdown] deletable=true editable=true
# <h3> Extract sample data from BigQuery </h3>
#
# The dataset that we will use is <a href="https://bigquery.cloud.google.com/table/nyc-tlc:yellow.trips">a BigQuery public dataset</a>. Click on the link, and look at the column names. Switch to the Details tab to verify that the number of records is one billion, and then switch to the Preview tab to look at a few rows.
#
# Write a SQL query to pick up the following fields
# <pre>
# pickup_datetime,
# pickup_longitude, pickup_latitude,
# dropoff_longitude, dropoff_latitude,
# passenger_count,
# trip_distance,
# tolls_amount,
# fare_amount,
# total_amount
# </pre>
# from the dataset and explore a small part of the data. Make sure to pick a repeatable subset of the data so that if someone reruns this notebook, they will get the same results.
# + deletable=true editable=true
rawdata = """
SELECT
pickup_datetime,
pickup_longitude, pickup_latitude,
dropoff_longitude, dropoff_latitude,
passenger_count,
trip_distance,
tolls_amount,
fare_amount,
total_amount
FROM
`nyc-tlc.yellow.trips`
WHERE
MOD(ABS(FARM_FINGERPRINT(CAST(pickup_datetime AS STRING))),EVERY_N) = 1
"""
# + deletable=true editable=true
query = rawdata.replace("EVERY_N", "100000")
print query
trips = bq.Query(query).execute().result().to_dataframe()
print "Total dataset is {} taxi rides".format(len(trips))
trips[:10]
# + [markdown] deletable=true editable=true
# <h3> Exploring data </h3>
#
# Let's explore this dataset and clean it up as necessary. We'll use the Python Seaborn package to visualize graphs and Pandas to do the slicing and filtering.
# + deletable=true editable=true
ax = sns.regplot(x = "trip_distance", y = "fare_amount", ci = None, truncate = True, data = trips)
# + [markdown] deletable=true editable=true
# Hmm ... do you see something wrong with the data that needs addressing?
#
# It appears that we have a lot of invalid data that is being coded as zero distance and some fare amounts that are definitely illegitimate. Let's remove them from our analysis. We can do this by modifying the BigQuery query to keep only trips longer than zero miles and fare amounts that are at least the minimum cab fare ($2.50).
#
# What's up with the streaks at \$45 and \$50? Those are fixed-amount rides from JFK and La Guardia airports into anywhere in Manhattan, i.e. to be expected. Let's list the data to make sure the values look reasonable.
#
# Let's examine whether the toll amount is captured in the total amount.
# + deletable=true editable=true
tollrides = trips[trips['tolls_amount'] > 0]
tollrides[tollrides['pickup_datetime'] == '2014-05-20 23:09:00']
# + [markdown] deletable=true editable=true
# Looking a few samples above, it should be clear that the total amount reflects fare amount, toll and tip somewhat arbitrarily -- this is because when customers pay cash, the tip is not known. So, we'll use the sum of fare_amount + tolls_amount as what needs to be predicted. Tips are discretionary and do not have to be included in our fare estimation tool.
#
# Let's also look at the distribution of values within the columns.
# + deletable=true editable=true
trips.describe()
# + [markdown] deletable=true editable=true
# Hmm ... The min, max of longitude look strange.
#
# Finally, let's actually look at the start and end of a few of the trips.
# + deletable=true editable=true
def showrides(df, numlines):
import matplotlib.pyplot as plt
lats = []
lons = []
goodrows = df[df['pickup_longitude'] < -70]
for iter, row in goodrows[:numlines].iterrows():
lons.append(row['pickup_longitude'])
lons.append(row['dropoff_longitude'])
lons.append(None)
lats.append(row['pickup_latitude'])
lats.append(row['dropoff_latitude'])
lats.append(None)
sns.set_style("darkgrid")
plt.plot(lons, lats)
showrides(trips, 10)
# + deletable=true editable=true
showrides(tollrides, 10)
# + [markdown] deletable=true editable=true
# As you'd expect, rides that involve a toll are longer than the typical ride.
# + [markdown] deletable=true editable=true
# <h3> Quality control and other preprocessing </h3>
#
# We need to do some clean-up of the data:
# <ol>
# <li>New York city longitudes are around -74 and latitudes are around 41.</li>
# <li>We shouldn't have zero passengers.</li>
# <li>Clean up the total_amount column to reflect only fare_amount and tolls_amount, and then remove those two columns.</li>
# <li>Before the ride starts, we'll know the pickup and dropoff locations, but not the trip distance (that depends on the route taken), so remove it from the ML dataset</li>
# <li>Discard the timestamp</li>
# </ol>
#
# Let's change the BigQuery query appropriately. In production, we'll have to carry out the same preprocessing on the real-time input data.
# + deletable=true editable=true
def sample_between(a, b):
basequery = """
SELECT
(tolls_amount + fare_amount) AS fare_amount,
pickup_longitude AS pickuplon,
pickup_latitude AS pickuplat,
dropoff_longitude AS dropofflon,
dropoff_latitude AS dropofflat,
passenger_count*1.0 AS passengers
FROM
`nyc-tlc.yellow.trips`
WHERE
trip_distance > 0
AND fare_amount >= 2.5
AND pickup_longitude > -78
AND pickup_longitude < -70
AND dropoff_longitude > -78
AND dropoff_longitude < -70
AND pickup_latitude > 37
AND pickup_latitude < 45
AND dropoff_latitude > 37
AND dropoff_latitude < 45
AND passenger_count > 0
"""
sampler = "AND MOD(ABS(FARM_FINGERPRINT(CAST(pickup_datetime AS STRING))), EVERY_N) = 1"
sampler2 = "AND {0} >= {1}\n AND {0} < {2}".format(
"MOD(ABS(FARM_FINGERPRINT(CAST(pickup_datetime AS STRING))), EVERY_N * 100)",
"(EVERY_N * {})".format(a), "(EVERY_N * {})".format(b)
)
return "{}\n{}\n{}".format(basequery, sampler, sampler2)
def create_query(phase, EVERY_N):
"""Phase: train (70%) valid (15%) or test (15%)"""
query = ""
if phase == 'train':
# Training
query = sample_between(0, 70)
elif phase == 'valid':
# Validation
query = sample_between(70, 85)
else:
# Test
query = sample_between(85, 100)
return query.replace("EVERY_N", str(EVERY_N))
print create_query('train', 100000)
# + deletable=true editable=true
def to_csv(df, filename):
outdf = df.copy(deep = False)
outdf.loc[:, 'key'] = np.arange(0, len(outdf)) # rownumber as key
# Reorder columns so that target is first column
cols = outdf.columns.tolist()
cols.remove('fare_amount')
cols.insert(0, 'fare_amount')
print cols # new order of columns
outdf = outdf[cols]
outdf.to_csv(filename, header = False, index_label = False, index = False)
print "Wrote {} to {}".format(len(outdf), filename)
for phase in ['train', 'valid', 'test']:
query = create_query(phase, 100000)
df = bq.Query(query).execute().result().to_dataframe()
to_csv(df, 'taxi-{}.csv'.format(phase))
# + [markdown] deletable=true editable=true
# <h3> Verify that datasets exist </h3>
# + deletable=true editable=true
# !ls -l *.csv
# + [markdown] deletable=true editable=true
# We have 3 .csv files corresponding to train, valid, and test. The ratio of the file sizes corresponds to our split of the data.
# + deletable=true editable=true
# %bash
head taxi-train.csv
# + [markdown] deletable=true editable=true
# Looks good! We now have our ML datasets and are ready to train ML models, validate them and evaluate them.
# + [markdown] deletable=true editable=true
# <h3> Benchmark </h3>
#
# Before we start building complex ML models, it is a good idea to come up with a very simple model and use that as a benchmark.
#
# My model is going to be to simply divide the mean fare_amount by the mean trip_distance to come up with a rate and use that to predict. Let's compute the RMSE of such a model.
# + deletable=true editable=true
import datalab.bigquery as bq
import pandas as pd
import numpy as np
import shutil
def distance_between(lat1, lon1, lat2, lon2):
# Haversine formula to compute distance "as the crow flies". Taxis can't fly of course.
dist = np.degrees(np.arccos(np.sin(np.radians(lat1)) * np.sin(np.radians(lat2)) + np.cos(np.radians(lat1)) * np.cos(np.radians(lat2)) * np.cos(np.radians(lon2 - lon1)))) * 60 * 1.515 * 1.609344
return dist
def estimate_distance(df):
return distance_between(df['pickuplat'], df['pickuplon'], df['dropofflat'], df['dropofflon'])
def compute_rmse(actual, predicted):
return np.sqrt(np.mean((actual - predicted)**2))
def print_rmse(df, rate, name):
print "{1} RMSE = {0}".format(compute_rmse(df['fare_amount'], rate * estimate_distance(df)), name)
FEATURES = ['pickuplon','pickuplat','dropofflon','dropofflat','passengers']
TARGET = 'fare_amount'
columns = list([TARGET])
columns.extend(FEATURES) # in CSV, target is the first column, after the features
columns.append('key')
df_train = pd.read_csv('taxi-train.csv', header = None, names = columns)
df_valid = pd.read_csv('taxi-valid.csv', header = None, names = columns)
df_test = pd.read_csv('taxi-test.csv', header = None, names = columns)
rate = df_train['fare_amount'].mean() / estimate_distance(df_train).mean()
print "Rate = ${0}/km".format(rate)
print_rmse(df_train, rate, 'Train')
print_rmse(df_valid, rate, 'Valid')
print_rmse(df_test, rate, 'Test')
# + [markdown] deletable=true editable=true
# The simple distance-based rule gives us an RMSE of <b>$9.35</b> on the validation dataset. We have to beat this, of course, but you will find that simple rules of thumb like this can be surprisingly difficult to beat. You don't want to set a goal on the test dataset, because you will change the architecture of the network etc. to get the best validation error. Then, you can evaluate ONCE on the test data.
# + [markdown] deletable=true editable=true
# ## Challenge Exercise
#
# Let's say that you want to predict whether a Stackoverflow question will be acceptably answered. Using this [public dataset of questions](https://bigquery.cloud.google.com/table/bigquery-public-data:stackoverflow.posts_questions), create a machine learning dataset that you can use for classification.
# <p>
# What is a reasonable benchmark for this problem?
# What features might be useful?
# <p>
# If you got the above easily, try this harder problem: you want to predict whether a question will be acceptably answered within 2 days. How would you create the dataset?
# <p>
# Hint (highlight to see):
# <p style='color:white' linkstyle='color:white'>
# You will need to do a SQL join with the table of [answers]( https://bigquery.cloud.google.com/table/bigquery-public-data:stackoverflow.posts_answers) to determine whether the answer was within 2 days.
# </p>
# + [markdown] deletable=true editable=true
# Copyright 2018 Google Inc.
# Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at
# http://www.apache.org/licenses/LICENSE-2.0
# Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License.
|
courses/machine_learning/deepdive/02_generalization/create_datasets.ipynb
|
# # nbconvert latex test
# **Lorem ipsum** dolor sit amet, consectetur adipiscing elit. Nunc luctus bibendum felis dictum sodales. Ut suscipit, orci ut interdum imperdiet, purus ligula mollis *justo*, non malesuada nisl augue eget lorem. Donec bibendum, erat sit amet porttitor aliquam, urna lorem ornare libero, in vehicula diam diam ut ante. Nam non urna rhoncus, accumsan elit sit amet, mollis tellus. Vestibulum nec tellus metus. Vestibulum tempor, ligula et vehicula rhoncus, sapien turpis faucibus lorem, id dapibus turpis mauris ac orci. Sed volutpat vestibulum venenatis.
# ## Printed Using Python
print("hello")
# ## Pyout
from IPython.display import HTML
HTML("""
<script>
console.log("hello");
</script>
<b>HTML</b>
""")
# + language="javascript"
# console.log("hi");
# -
# ### Image
from IPython.display import Image
Image("http://ipython.org/_static/IPy_header.png")
|
notebooks/4.1/test4.ipynb
|
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# name: python3
# ---
# + [markdown] id="view-in-github" colab_type="text"
# <a href="https://colab.research.google.com/github/pbcquoc/vietocr/blob/master/vietocr_gettingstart.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a>
# + [markdown] id="uPgu4i1yvhub" colab_type="text"
#
# # Introduction
# <p align="center">
# <img src="https://raw.githubusercontent.com/pbcquoc/vietocr/master/image/vietocr.jpg" width="512" height="512">
# </p>
# This notebook describes how you can use VietOcr to train an OCR model
#
#
#
# + id="xEBHav_aljVN" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 88} outputId="9e9bee75-8f32-40a6-9abe-c673fdde0596"
# ! pip install --quiet vietocr==0.1.5
# + [markdown] id="O9zjgHwN2vuC" colab_type="text"
# # Download sample dataset
# + id="rLT1LDXOnL1s" colab_type="code" colab={}
import gdown
# + id="kMMAabbInNe3" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 88} outputId="3c6b25b1-49c2-44f0-86f0-a01332861657"
# ! gdown https://drive.google.com/uc?id=1W2PZC94sjpA1lS7FN33VoIVleSnnWOaA
# + id="ZEBWugoTnvy0" colab_type="code" colab={}
# ! unzip -qq -o /content/data.zip
# + [markdown] id="F1lxSkEj20y0" colab_type="text"
# # Train model
# + [markdown] id="-MWgUSotv1sN" colab_type="text"
#
#
# 1. Load your config
# 2. Train model using your dataset above
#
#
# + [markdown] id="BuzRB0rxwC3m" colab_type="text"
# Load the default config; we adopt VGG for image feature extraction
# + id="jMwREzEvm_jd" colab_type="code" colab={}
from vietocr.tool.config import Cfg
from vietocr.model.trainer import Trainer
# + [markdown] id="7oKRCu2ewNE4" colab_type="text"
# # Change the config
#
# * *data_root*: the folder containing all your images
# * *train_annotation*: path to the train annotation file
# * *valid_annotation*: path to the valid annotation file
# * *print_every*: print the train loss every n steps
# * *valid_every*: compute the validation loss every n steps
# * *epochs*: number of epochs to train your model
# * *export*: folder to export the weights to, for later inference
# * *metrics*: number of samples from the validation annotation used for computing full_sequence_accuracy; for a large dataset this takes too long, so you can reduce this number
#
#
# + id="56VBD-Xy_ztj" colab_type="code" colab={}
config = Cfg.load_config_from_name('vgg_transformer')
# + id="ceKcT5eOnJ1G" colab_type="code" colab={}
params = {'data_root':'./data/',
'train_annotation':'train_annotation.txt',
'valid_annotation':'test_annotation.txt',
'print_every':200,
'valid_every':15*200,
'epochs':5,
'checkpoint':'./checkpoint/transformerocr_checkpoint.pth',
'export':'./weights/transformerocr.pth',
'metrics': 10000
}
config['trainer'].update(params)
config['weights'] = './weights/transformerocr.pth'
config['device'] = 'cuda:0'
# + id="AwHpqqQEnHv1" colab_type="code" colab={}
# + [markdown] id="8dxTXpqa3Hd3" colab_type="text"
# You should train the model from our pretrained weights
# + id="xtOyT8Cpo1gl" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 209, "referenced_widgets": ["bc66a911864f492c880f40cbf49aee6c", "95595ec14bb041d992d0a154d162dc72", "f11c961d673646b6a3d481bd3156e9ef", "24883c6c35af46a39e47dad722ac5c1f", "adf206e3c65a4d23b4a7878efa323643", "a0453da0d70d4576ba8fbb810d23e863", "30e43c3b86734f3da032fea34306a72e", "c14d8f0718b94b40840ac3fd4aba79fa"]} outputId="cf459aa4-0531-43a1-b97a-6cae4ecb0e03"
trainer = Trainer(config, pretrained=True)
# + id="fpZEz_DPpV6y" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 1000} outputId="ec2427a3-b9a5-4d29-8d33-32b9f47b6a2e"
trainer.train()
# + [markdown] id="Iig6gpyb3jtz" colab_type="text"
# Visualize predictions from our trained model
#
# + id="Zeai5W02qXA9" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 1000} outputId="f5777d5f-3e20-4fc1-e816-80a3a5ac57c9"
trainer.visualize()
# + [markdown] id="-bAXlHJv3ryW" colab_type="text"
# Compute full-sequence accuracy on the full validation dataset
# + id="eM806O42q_aT" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 35} outputId="e69f2023-5e3d-4c27-8d79-1404338ddc09"
trainer.precision()
# + [markdown] id="oj1IDUg831OO" colab_type="text"
# # Inference
# + id="427tT8CD32Mj" colab_type="code" colab={}
from vietocr.tool.predictor import Predictor
from vietocr.tool.config import Cfg
from PIL import Image
# + id="ZKwrNzXz4aHx" colab_type="code" colab={}
config = Cfg.load_config_from_name('vgg_transformer')
# + [markdown] id="z0MfubTJ53eu" colab_type="text"
# Change the weights to your own, or use the default weights from our pretrained model
# + id="CTGR7tVJ9DLG" colab_type="code" colab={}
config['weights'] = './weights/transformerocr.pth'
# + id="mVjXbnyL4caz" colab_type="code" colab={}
detector = Predictor(config)
# + id="HjVcEvTC4ecI" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 35} outputId="c262ef1e-c81d-4acb-fc49-d6b61515095a"
img = './data/InkData_line_processed/20151208_0146_7105_2_tg_5_3.png'
img = Image.open(img)
s = detector.predict(img)
s
# + id="IFmE99BO81zb" colab_type="code" colab={}
|
vietocr_gettingstart.ipynb
|
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# **Machine Learning Basic Principles 2018 - Data Analysis Project Report**
# ## Machine Learning Algorithms comparison: Support Vector Machine vs Random Forest
# ## Warning
# Please do not use the option to run all cells at once, because some cells contain the training methods for the algorithms and require quite a long time to compute (usually more than 2 hours each). The results of the training are always included after these cells, and each cell of this type is preceded by this warning:
#
# "The following cell contains the parameters tuning algorithm: do not execute it!"
# ### Abstract
# This notebook analyzes and compares two different machine learning algorithms.
# 1. SVM - Support Vector Machine
# 2. RF - Random Forest
#
# After a brief introduction, the given dataset will be presented and some methods to analyze and clean up the data will be introduced. Both algorithms use the same dataset and the same training approach. The model is, indeed, trained using the training data and their correspondent labels: the accuracy of the training is then calculated using the predicted labels and the training labels. Finally, the definitive predictions are exported to a CSV file in order to be uploaded to Kaggle.
# ### 1. Introduction
# #### Support Vector Machine
# Support Vector Machine is a supervised model used for classification and regression analysis. Each data point is viewed as a p-dimensional vector (in our case, 264 features). The objective of the algorithm is to separate the points of the two classes with a (p-1)-dimensional hyperplane. There are many possible hyperplanes, but the best, or most reasonable, choice is the one that guarantees the largest margin between the classes; in other words, the one that best separates them.
# \[ Source: Wikipedia, https://en.wikipedia.org/wiki/Support_vector_machine \]
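# A minimal sketch of this idea with scikit-learn (toy, made-up data, unrelated to the project dataset): `SVC` with a linear kernel finds the maximum-margin hyperplane between the two classes.

```python
import numpy as np
from sklearn import svm

# Two linearly separable point clouds in 2-D (made-up data)
X = np.array([[0, 0], [1, 1], [1, 0], [8, 8], [9, 9], [8, 9]], dtype=float)
y = np.array([0, 0, 0, 1, 1, 1])

clf = svm.SVC(kernel="linear", C=1.0).fit(X, y)

# The separating hyperplane satisfies w . x + b = 0; the support
# vectors are the training points closest to it, and they fix the margin
w, b = clf.coef_[0], clf.intercept_[0]
support_points = clf.support_vectors_
```
# The support vectors are the only training points that determine the decision boundary; all other points could be removed without changing it.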
# #### Random Forest
# The Random Forest technique is an ensemble learning method for classification and regression. An ensemble method is a method that uses multiple learning algorithms to improve its performance. The algorithm creates a multitude of decision trees during the training phase and outputs the class that is the *mode* of all the classes (in classification) or mean prediction (in regression) of the individual trees.
# \[ Source: Wikipedia, https://en.wikipedia.org/wiki/Random_forest ]
#
#
# *mode*: the mode of a set of data is the value that appears most often and is therefore the most likely to be sampled.
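# As a minimal illustration of the ensemble idea (toy, made-up data; not part of the project pipeline), a random forest's class prediction is the mode of the individual trees' votes:

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

# Toy two-class dataset: two Gaussian blobs (made-up data)
rng = np.random.RandomState(0)
X = np.vstack([rng.normal(0, 1, (20, 2)), rng.normal(3, 1, (20, 2))])
y = np.array([0] * 20 + [1] * 20)

forest = RandomForestClassifier(n_estimators=25, random_state=0).fit(X, y)

# Each tree votes; the forest's class prediction matches the mode of the
# votes here (scikit-learn actually averages per-tree class probabilities,
# which agrees with the majority vote on this clear-cut point)
point = np.array([[3.0, 3.0]])
votes = [int(tree.predict(point)[0]) for tree in forest.estimators_]
mode_vote = max(set(votes), key=votes.count)
```
# Because each tree is trained on a bootstrap sample with random feature subsets, individual trees may disagree, but the aggregated vote is more stable than any single tree.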
# ### 2. Data analysis
# The provided data is split into two datasets, the training one and the testing one. In this case, only the labels corresponding to the training dataset are given.
# This section will be focused on displaying and analyzing the provided data. In particular, there are plots of the datasets and the histogram that displays the distribution of the labels (the number of samples that correspond to each class).
# The following cell imports the libraries needed throughout the notebook and loads the data sets (stored in CSV files).
# + deletable=false nbgrader={"cell_type": "code", "checksum": "014a593ce82d342a60d749c7a2c46b7c", "grade": true, "grade_id": "cell-c3ef844c17cf4a1e", "locked": false, "points": 1, "schema_version": 2, "solution": true}
# Import libraries
import matplotlib.pyplot as plt
import numpy as np
import pandas as pd
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import GridSearchCV
from sklearn.model_selection import RandomizedSearchCV
from sklearn import preprocessing
from sklearn import svm
import time
## Load files
test_data_accuracy = pd.read_csv('./accuracy/test_data.csv', header=None)
train_data_accuracy = pd.read_csv('./accuracy/train_data.csv', header=None)
train_labels_accuracy = pd.read_csv('./accuracy/train_labels.csv', header=None)
## Parse loaded content
test_data_accuracy = test_data_accuracy.values
train_data_accuracy = train_data_accuracy.values
train_labels_accuracy = train_labels_accuracy.values
## Check that the data was correctly loaded:
## according to the document the correct shapes are 4363 x 264 for training data and 6544 x 264 for test data
assert train_data_accuracy.shape == (4363, 264)
assert test_data_accuracy.shape == (6544, 264)
## Obviously the train label should have a shape of 4363 x 1 (one label for each song in the training data set)
assert train_labels_accuracy.shape == (train_data_accuracy.shape[0], 1)
# -
# ### Accuracy
# This notebook starts with the accuracy challenge.
#
# #### Table of contents
# 1. Data visualization
# 2. Data manipulation
# 3. ML Algorithm
# ### 1. Data visualization
# *The commented code is valid; feel free to use it. It was commented out because its output is long to read and does not provide aggregate results. Moreover, this way the notebook runs faster (less computation to do).*
## Visualizes the training_data
plt.plot(train_data_accuracy)
plt.title('Train data distribution')
plt.show()
# mean = np.mean(train_data_accuracy, axis=0)
# std = np.std(train_data_accuracy, axis=0)
# for i in range(len(mean)):
# print(f'{i}: the mean of the row is {mean[i]}, the variance is {std[i]}')
## Visualizes the test_data
plt.plot(test_data_accuracy)
plt.title('Test data distribution')
plt.show()
# mean = np.mean(test_data_accuracy, axis=0)
# std = np.std(test_data_accuracy, axis=0)
# for i in range(len(mean)):
# print(f'{i}: the mean of the row is {mean[i]}, the variance is {std[i]}')
# #### Observations
# As we can see, neither the training data nor the test data are normalized. In addition to it, labels are not equally distributed, as almost 50% of the samples belong to class 1 (Pop_Rock). The figure below represents the occurrence (in %) of each label.
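# As a compact alternative to the per-feature printout commented out above, the feature statistics can be summarized in aggregate. This sketch uses a small random matrix as a hypothetical stand-in for the loaded dataset:

```python
import numpy as np

# Hypothetical stand-in for the real feature matrix
rng = np.random.default_rng(0)
data = rng.normal(loc=5.0, scale=2.0, size=(100, 8))

# One line per statistic instead of one line per feature
feature_mean = data.mean(axis=0)
feature_std = data.std(axis=0)
print(f'feature means span [{feature_mean.min():.2f}, {feature_mean.max():.2f}]')
print(f'feature stds span  [{feature_std.min():.2f}, {feature_std.max():.2f}]')
```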
# +
## Visualizes the labels
## Store the music genres that will be printed out
class_names = ['Pop_Rock', 'Electronic', 'Rap', 'Jazz', 'Latin', 'RnB', 'International', 'Country', 'Reggae', 'Blues']
data = np.empty((11, 2))
## Since arrays are 0-indexed but the first label is 1, duplicate the value of
## the first label (1) in position 0. This is just to create a nice plot and
## has no effect on the algorithm.
data[0, 0] = 0
data[0, 1] = round((len(np.where(train_labels_accuracy == 1)[0]) / len(train_labels_accuracy)) * 100, 2)
for i in range(1, 11):
    pct = round((len(np.where(train_labels_accuracy == i)[0]) / len(train_labels_accuracy)) * 100, 2)
    print(f'Label {i} ({class_names[i - 1]}): {pct}%')
    data[i, 0] = i
    data[i, 1] = pct
fig, axs = plt.subplots(nrows=1, ncols=2, sharex=True, figsize=(14, 7))
ax = axs[0]
ax.hist(train_labels_accuracy)
ax.set_title('Labels distribution')
ax.set_xlabel('Label number')
ax.set_ylabel('Number of occurrences')
ax.grid(True)
ax = axs[1]
ax.plot(data[:, 1])
ax.set_title('Labels distribution (in %)')
ax.set_xlabel('Label number')
ax.set_ylabel('Percentage of occurrence')
ax.grid(True)
fig.suptitle('Label distribution analysis: labels\' range is from 1 to 10')
plt.show()
# -
# ### 2. Data manipulation
# #### Data Cleanup
# From the plots in the previous section, it can be noticed that some columns (205 to 220) contain features whose values are not aligned with the others. Moreover, they appear to be identical across all samples. Checking their exact values confirms that they are constant throughout the dataset, so we can eliminate them from the training data, as they provide no useful information for classification.
# *Now each sample, or song, has 248 features instead of 264.*
#
# The result of this procedure can be seen at the comparison between the old dataset and the cleaned up dataset at the figures below.
#
# **Note**: after various tests, it was discovered that eliminating the misaligned features had no practical effect on the whole algorithm. Indeed, even though the overall accuracy of the model slightly decreased (from 75.83% to 75.43%), the accuracy on the new data (test data) did not change. For this reason, we have decided to use the complete dataset with all the features instead of the modified one.
# This is probably related to the approach we took to classify the samples. For example, when using a Neural Network, if the classifier detects that some features are not meaningful for deciding which class the samples belong to, the weights corresponding to those features will be driven to zero, so they have no impact on the decision.
#
# *Feel free to transform the following cell to a code cell to try the algorithms using the resized data set.*
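# As a side note, the constant columns could also be detected programmatically instead of hard-coding their indices. A sketch on synthetic data (the two constant columns here are hypothetical stand-ins for the real ones):

```python
import numpy as np

rng = np.random.default_rng(1)
X = rng.normal(size=(50, 6))
X[:, 2] = 7.0   # constant column, mimicking the real misaligned features
X[:, 4] = -1.5  # another constant column

# A column carries no information if its standard deviation is (near) zero
constant_cols = np.where(np.isclose(X.std(axis=0), 0.0))[0]
X_clean = np.delete(X, constant_cols, axis=1)
print(constant_cols.tolist())  # [2, 4]
print(X_clean.shape)           # (50, 4)
```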
# + active=""
# # Plot the original data
# fig, axs = plt.subplots(nrows=1, ncols=2, sharex=True, figsize=(14, 7))
# ax = axs[0]
# ax.plot(train_data_accuracy)
# ax.set_title('Train data plot before')
# ax.grid(True)
#
# ax = axs[1]
# ax.plot(test_data_accuracy)
# ax.set_title('Test data plot before')
# ax.grid(True)
# # Eliminates the data
# print(f'Train data shape before resize is: {train_data_accuracy.shape}, Test data shape is: {test_data_accuracy.shape}')
# train_data_accuracy = np.delete(train_data_accuracy, [204,205,206,207,208,209,210,211,212,213,214,215,216,217,218,219,220], 1)
# test_data_accuracy = np.delete(test_data_accuracy, [204,205,206,207,208,209,210,211,212,213,214,215,216,217,218,219,220], 1)
# # Let's check that the elimination process succeded
# print(f'Train data shape after resize is: {train_data_accuracy.shape}, Test data shape is: {test_data_accuracy.shape}')
# # Plot the original data
# fig, axs = plt.subplots(nrows=1, ncols=2, sharex=True, figsize=(14, 7))
# ax = axs[0]
# ax.plot(train_data_accuracy)
# ax.set_title('Train data plot after')
# ax.grid(True)
#
# ax = axs[1]
# ax.plot(test_data_accuracy)
# ax.set_title('Test data plot after')
# ax.grid(True)
# -
# **Note**: since this notebook uses the scikit-learn library, it is suggested by the documentation to scale the data in order to obtain better results.
# Scaled data has two particular characteristics:
# - The mean value is almost equal to 0
# - The variance is equal to 1
#
# Thus, the data are centered around 0. The differences between the raw training data and the centered training data are visible in the plots below.
# By using the StandardScaler() function it is possible to apply the same transformation to both datasets: training and testing.
# +
## Scales the data before feeding them into ML algorithms
scaler = preprocessing.StandardScaler().fit(train_data_accuracy)
train_data_scaled = scaler.transform(train_data_accuracy)
test_data_scaled = scaler.transform(test_data_accuracy)
print(f'The mean of the train data is: {np.mean(train_data_scaled)}, the variance is: {np.std(train_data_scaled)}')
print(f'The mean of the test data is: {np.mean(test_data_scaled)}, the variance is: {np.std(test_data_scaled)}')
fig, axs = plt.subplots(nrows=1, ncols=2, sharex=True, figsize=(14, 7))
ax = axs[0]
ax.plot(train_data_accuracy)
ax.set_title('Train data plot before scaling')
ax.grid(True)
ax = axs[1]
ax.plot(train_data_scaled)
ax.set_title('Train data plot after scaling')
ax.grid(True)
# -
# ### 3. ML Algorithm
# After cleaning up the data, it is time to implement the ML algorithms to predict the labels. For this purpose, this notebook uses *SVM - Support Vector Machine* and *RF - Random Forest*.
# The notebook first starts with the SVM approach and ends with the implementation of the RF method.
# It is important to notice that both algorithms use the same data.
# #### Parameters choice and hyperparameters tuning
# In order to get the best results with SVM, the library allows the user to define several hyperparameters to fine tune the classifier. In order to find the best combination of those values, the following code has been used.
# Mainly, the code creates a grid that will try all the combinations of the different parameters defined inside it. In our case, we have decided to tune the classifier trying different combinations between 'C', 'kernel' and 'gamma'.
# The code will try all of the combinations and output the hyperparameter values of the best classifier found.
# **Important**: running the following cell requires a lot of time. The best set of parameters is reported in the next cell, so it is not necessary to run this cell again.
# The following cell contains the algorithm that tunes the classifier: do not execute it!
# + active=""
# ## Calculate the time needed for the following cell
# start_time = time.time()
# ## Determine the best parameters for the Support Vector Algorithm
# parameters = {'kernel':('linear', 'poly', 'rbf', 'sigmoid'), 'C':[1, 2, 3, 4, 5, 6, 7, 8, 9, 10], 'gamma':('scale', 'auto', 0.00001, 0.001, 0.0001)}
# svc = svm.SVC()
# clf = GridSearchCV(svc, parameters, cv=5, return_train_score=True, n_jobs = -1)
# clf.fit(train_data_scaled, np.ravel(train_labels_accuracy))
# end_time = time.time()
# print(f'Total time needed for hyperparameters tuning: {end_time - start_time}')
# -
# #### Grid search results
#
# In the following cell only a part of the data is displayed. The file accuracy-svm-tuning.csv contains all the data.
# + active=""
# print('Grid search results:')
# df = pd.DataFrame(data=clf.cv_results_)
# display(df)
# df.to_csv('accuracy-svm-tuning.csv')
# print('The best estimator is:')
# print(clf.best_estimator_)
# -
# #### Best estimator:
# The best classifier found by combining the specified hyperparameter values is:
# `SVC(C=3, cache_size=200, class_weight=None, coef0=0.0, decision_function_shape='ovr', degree=3, gamma='scale', kernel='rbf',
# max_iter=-1, probability=False, random_state=None, shrinking=True, tol=0.001, verbose=True)`
# #### Best parameters
# The following cell contains the ML algorithm that uses the best set of parameters.
## Once we have the best parameters for the estimator, we can start predictions
clf = svm.SVC(C=3, cache_size=200, class_weight=None, coef0=0.0,
decision_function_shape='ovr', degree=3, gamma='scale', kernel='rbf',
max_iter=-1, probability=False, random_state=None, shrinking=True,
tol=0.001)
clf.fit(train_data_scaled, np.ravel(train_labels_accuracy))
# Once we had a good idea of the optimal values for the classifier, we manually fine-tuned the SVM, obtaining the classifier below, which performed even better than the previous one.
clf = svm.SVC(C=11, cache_size=400, class_weight=None, coef0=0.0,
decision_function_shape='ovr', gamma=0.001, kernel='rbf',
max_iter=-1, probability=False, random_state=None, shrinking=True,
tol=0.00001)
clf.fit(train_data_scaled, np.ravel(train_labels_accuracy))
# Once we have our classifier, we can compute its accuracy on the training data.
score = clf.score(train_data_scaled, train_labels_accuracy)
print(f'Training accuracy is {np.round(score * 100, 2)}%')
# The following cell displays the results obtained with the Support Vector Machine algorithm. The blue line represents the training labels and the red line the predicted ones. They should have a similar pattern, although a perfect alignment could be a sign of overfitting. When this happens, we might obtain great results (up to 100% accuracy) on the training data, but the algorithm has trouble generalizing to new data and the accuracy falls considerably.
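# One lightweight way to detect such overfitting without touching the test set is a hold-out validation split. This is a sketch on synthetic data, not the procedure used in this report (which relies on the cross-validation inside GridSearchCV):

```python
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC

# Hypothetical data: 300 samples, 10 features, labels 1-3
rng = np.random.default_rng(2)
X = rng.normal(size=(300, 10))
y = rng.integers(1, 4, size=300)

# Hold out 20% of the training data to estimate generalization
X_tr, X_val, y_tr, y_val = train_test_split(X, y, test_size=0.2, random_state=0)
clf = SVC(C=3, kernel='rbf', gamma='scale').fit(X_tr, y_tr)
print(f'train accuracy:      {clf.score(X_tr, y_tr):.2f}')
print(f'validation accuracy: {clf.score(X_val, y_val):.2f}')
```

# With purely random labels, the gap between the two scores is exactly the overfitting symptom described above.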
# +
predictions = clf.predict(test_data_scaled)
data_p = np.empty((11, 2))
# Since arrays are 0-indexed but the first label is 1, position 0 duplicates
# label 1. This is just to create a nice plot and has no effect on the algorithm.
data_p[0, 0] = 0
data_p[0, 1] = round((len(np.where(predictions == 1)[0]) / len(predictions)) * 100, 2)
for i in range(1, 11):
    pct = round((len(np.where(predictions == i)[0]) / len(predictions)) * 100, 2)
    print(f'Label {i} ({class_names[i - 1]}): {pct}%')
    data_p[i, 0] = i
    data_p[i, 1] = pct
plt.plot(data_p[:, 1], c='red')
plt.plot(data[:, 1], c='blue')
plt.grid(True)
plt.title('Comparison between train labels and predicted labels')
plt.xlabel('Label number')
plt.ylabel('Percentage of occurrence')
plt.show()
# -
# The following cell is just a small utility function to save the predicted labels (contained in the 'pred' array) to a csv file that follows the specifications to be uploaded to Kaggle.
# +
def save_prediction(pred, filename):
    prediction = pd.DataFrame(pred, columns=['Sample_label'])
    prediction.index += 1
    prediction.to_csv(f'{filename}.csv', index_label="Sample_id", index=True)

save_prediction(predictions, 'SVM')
# -
# ### RF - Random Forest
# #### Tuning
#
# In the same way as before with the SVM, the RandomForestClassifier object has many hyperparameters that allow the classifier to be optimized for each dataset. In the following cell, we perform the same grid search method in order to get an idea of the optimal values for the most important hyperparameters.
#
# The best estimator found is presented in the next cell.
# *The following cell takes about 93 minutes on jupyterhub to compute.*
# The following cell contains the algorithm that tunes the classifier: do not execute it!
# + active=""
# ## Calculate the time needed for the following cell
# start_time = time.time()
# n_estimators = [int(x) for x in np.linspace(start = 10, stop = 1000, num = 8)]
# max_features = ['auto', 'sqrt', None]
# max_depth = [int(x) for x in np.linspace(10, 110, num = 9)]
# max_depth.append(None)
# min_samples_split = [2, 5, 10]
# min_samples_leaf = [1, 2, 4]
# bootstrap = [True]
# random_grid = {'n_estimators': n_estimators,
# 'max_features': max_features,
# 'max_depth': max_depth,
# 'min_samples_split': min_samples_split,
# 'min_samples_leaf': min_samples_leaf,
# 'bootstrap': bootstrap}
# display(random_grid)
# rtf = RandomForestClassifier()
# rf_random = RandomizedSearchCV(estimator = rtf, param_distributions = random_grid, n_iter = 10, cv = 5, verbose=2, random_state=169, n_jobs = -1, return_train_score=True)
# rf_random.fit(train_data_accuracy, np.ravel(train_labels_accuracy))
# end_time = time.time()
# print(f'Total time needed for hyperparameters tuning: {end_time - start_time}')
# + active=""
# print('Grid search results:')
# df = pd.DataFrame(data=rf_random.cv_results_)
# display(df)
# df.to_csv('output.csv')
# print('The best estimator is:')
# print(rf_random.best_params_)
# -
# The best classifier found is:
# `{'n_estimators': 717, 'min_samples_split': 5, 'min_samples_leaf': 4, 'max_features': None, 'max_depth': 97, 'bootstrap': True}`
# The Random Forest Classifier takes a lot of time to be computed.
#
# Replace n_estimators=717 with n_estimators=100 to speed up the process (RandomForestClassifier with no tuning)
## Plug in the best parameters found
rtf = RandomForestClassifier(criterion='gini', n_estimators=717, min_samples_split=5, min_samples_leaf=4, max_features=None, max_depth=97, bootstrap=True)
rtf.fit(train_data_accuracy, np.ravel(train_labels_accuracy))
score = rtf.score(train_data_accuracy, train_labels_accuracy)
print(f'Training accuracy is {np.round(score * 100, 2)}%')
# The following cell displays the results obtained with the RF algorithm. As before, the blue line represents the training labels and the red line the predicted ones.
# +
predictions = rtf.predict(test_data_accuracy)
data_p = np.empty((11, 2))
data_p[0, 0] = 0
data_p[0, 1] = round((len(np.where(predictions == 1)[0]) / len(predictions)) * 100, 2)
for i in range(1, 11):
    pct = round((len(np.where(predictions == i)[0]) / len(predictions)) * 100, 2)
    print(f'Label {i} ({class_names[i - 1]}): {pct}%')
    data_p[i, 0] = i
    data_p[i, 1] = pct
plt.plot(data_p[:, 1], c='red')
plt.plot(data[:, 1], c='blue')
plt.grid(True)
plt.title('Comparison between train labels and predicted labels')
plt.xlabel('Label number')
plt.ylabel('Percentage of occurrence')
plt.show()
## Export the calculated predictions
save_prediction(predictions, 'Tree')
# -
# ### Bonus content: Neural Network
# In this last section related to accuracy, the notebook contains the implementation of a Neural Network.
# The Neural Network uses the MLPClassifier class from the scikit-learn library. Besides the input and output layers, it has three hidden layers (of sizes 264, 264 and 110, as configured below). The input layer has N neurons, where N is the number of features of each sample, and the output layer has Z neurons, where Z is the number of different classes.
# In order to prevent the classifier from overfitting, we use the early-stopping technique.
# The algorithm tries to optimize the loss function at each epoch (maximum 1000 iterations) and stops early when the improvement is smaller than 0.000001 for ten consecutive epochs.
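# The early-stop behaviour described above maps onto MLPClassifier's `early_stopping`, `tol` and `n_iter_no_change` parameters. A sketch on synthetic data (note: with `early_stopping=True`, scikit-learn monitors a held-out validation score rather than the training loss):

```python
import numpy as np
from sklearn.neural_network import MLPClassifier

# Hypothetical binary problem standing in for the real features
rng = np.random.default_rng(3)
X = rng.normal(size=(200, 5))
y = (X[:, 0] + X[:, 1] > 0).astype(int)

mlp = MLPClassifier(hidden_layer_sizes=(10, 10),
                    max_iter=1000,
                    early_stopping=True,   # hold out a validation fraction
                    tol=1e-6,              # minimum improvement that counts
                    n_iter_no_change=10,   # patience, in epochs
                    random_state=0)
mlp.fit(X, y)
print(f'stopped after {mlp.n_iter_} epochs')
```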
from sklearn.neural_network import MLPClassifier
# #### Parameters tuning
# Again, the following cells evaluate which are the best parameters for the NN.
#
# The best combination can be found below.
#
# *Note*: running this cell takes a lot of time (more than 2 hours on our machine), so it is not advisable to run it. The correct parameters are already plugged in.
# The following cell contains the parameters tuning algorithm: do not execute it!
# + active=""
# ## Instantiate the NN
# mlp = MLPClassifier(max_iter=1000)
# parameters = {
#     'hidden_layer_sizes': [(50, 50, 50), (50, 100, 50), (100,)],
#     'activation': ['tanh', 'relu'],
#     'solver': ['sgd', 'adam'],
#     'alpha': [0.0001, 0.05],
#     'learning_rate': ['constant', 'adaptive'],
#     'shuffle': [True, False],
#     'early_stopping': [True, False]
# }
# clf = GridSearchCV(mlp, parameters, n_jobs=-1, cv=3)
# clf.fit(train_data_scaled, np.ravel(train_labels_accuracy))
# + active=""
# print('Best parameters found:\n', clf.best_params_)
# -
# Best parameters found:
# `{
# 'activation': 'relu',
# 'alpha': 0.0001,
# 'early_stopping': True,
# 'learning_rate': 'adaptive',
# 'shuffle': False,
# 'solver': 'adam'
# }`
mlp = MLPClassifier(activation='relu',
alpha=0.0001,
hidden_layer_sizes=(264, 264, 110),
learning_rate='constant',
max_iter=1000,
shuffle=False,
solver='adam',
tol=0.000001,
verbose=True,
early_stopping=True
)
mlp.fit(train_data_scaled, np.ravel(train_labels_accuracy))
score = mlp.score(train_data_scaled, train_labels_accuracy)
print(f'Training accuracy is {np.round(score * 100, 2)}%')
# The following cell displays the results achieved with the Neural Network algorithm. As always, the blue line represents the training labels and the red line the predicted ones.
# +
predictions = mlp.predict(test_data_scaled)
data_p = np.empty((11, 2))
data_p[0, 0] = 0
data_p[0, 1] = round((len(np.where(predictions == 1)[0]) / len(predictions)) * 100, 2)
for i in range(1, 11):
    pct = round((len(np.where(predictions == i)[0]) / len(predictions)) * 100, 2)
    print(f'Label {i} ({class_names[i - 1]}): {pct}%')
    data_p[i, 0] = i
    data_p[i, 1] = pct
plt.plot(data_p[:, 1], c='red')
plt.plot(data[:, 1], c='blue')
plt.grid(True)
plt.title('Comparison between train labels and predicted labels')
plt.xlabel('Label number')
plt.ylabel('Percentage of occurrence')
plt.show()
save_prediction(predictions, 'NN')
# -
# ## Log-loss
# After the accuracy challenge, this section is focused on the other challenge, the log-loss.
# The data visualization and manipulation sections are the same as above, so they will not be commented again.
#
# For the log-loss challenge, only the SVM algorithm has been used, since the Neural Network did not perform well during tests (its loss on Kaggle was 0.68353, roughly four times worse than the SVM's). However, as in the previous challenge, the two versions of SVM are compared (with and without hyperparameter tuning).
#
# ### Contents
# 1. Data visualization: how are the raw data provided?
# 2. Data manipulation
# 3. ML Algorithm
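# For reference, the metric itself is straightforward to compute with scikit-learn (a small worked example, not taken from the report):

```python
import numpy as np
from sklearn.metrics import log_loss

# Two samples, three classes: true labels and predicted probabilities
y_true = [1, 3]
y_prob = np.array([[0.8, 0.1, 0.1],
                   [0.2, 0.2, 0.6]])

# log-loss = mean over samples of -log(probability of the true class)
loss = log_loss(y_true, y_prob, labels=[1, 2, 3])
print(round(loss, 3))  # -(log 0.8 + log 0.6) / 2 ≈ 0.367
```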
# +
# Libraries are already imported
# Loads the files
test_data_logloss = pd.read_csv('./log-loss/test_data.csv', header=None)
train_data_logloss = pd.read_csv('./log-loss/train_data.csv', header=None)
train_labels_logloss = pd.read_csv('./log-loss/train_labels.csv', header=None)
# Parse loaded content
test_data_logloss = test_data_logloss.values
train_data_logloss = train_data_logloss.values
train_labels_logloss = train_labels_logloss.values
# -
# ### 1. Data visualization
## Visualizes the training_data
plt.plot(train_data_logloss)
plt.title('Train data distribution')
plt.show()
# mean = np.mean(train_data_logloss, axis=0)
# std = np.std(train_data_logloss, axis=0)
# for i in range(len(mean)):
# print(f'{i}: the mean of the row is {mean[i]}, the variance is {std[i]}')
## Visualizes the test_data
plt.plot(test_data_logloss)
plt.title('Test data distribution')
plt.show()
# mean = np.mean(test_data_logloss, axis=0)
# std = np.std(test_data_logloss, axis=0)
# for i in range(len(mean)):
# print(f'{i}: the mean of the row is {mean[i]}, the variance is {std[i]}')
# #### Observations
# As we can see, neither the training data nor the test data are normalized. In addition, the labels are not equally distributed: almost 50% of the samples belong to class 1 (Pop_Rock), more than any other class, as the cell below shows. The figure below represents the occurrence (in %) of each label.
# +
# Visualizes the labels
class_names = ['Pop_Rock', 'Electronic', 'Rap', 'Jazz', 'Latin', 'RnB', 'International', 'Country', 'Reggae', 'Blues']
# Since labels start at 1, the value of the first label is inserted again at position 0 of the data array, purely to obtain a decent plot
data = np.empty((11, 2))
data[0, 0] = 0
data[0, 1] = round((len(np.where(train_labels_logloss == 1)[0]) / len(train_labels_logloss)) * 100, 2)
for i in range(1, 11):
    pct = round((len(np.where(train_labels_logloss == i)[0]) / len(train_labels_logloss)) * 100, 2)
    print(f'Label {i} ({class_names[i - 1]}): {pct}%')
    data[i, 0] = i
    data[i, 1] = pct
fig, axs = plt.subplots(nrows=1, ncols=2, sharex=True, figsize=(14, 7))
ax = axs[0]
ax.hist(train_labels_logloss)
ax.set_title('Labels distribution')
ax.grid(True)
ax = axs[1]
ax.plot(data[:, 1])
ax.set_title('Labels distribution in %')
ax.grid(True)
fig.suptitle('Label distribution analysis: labels\' range is from 1 to 10')
plt.show()
# -
# ### 2. Data manipulation
# +
# Scales the data before feeding them into ML algorithms
scaler = preprocessing.StandardScaler().fit(train_data_logloss)
train_data_scaled = scaler.transform(train_data_logloss)
test_data_scaled = scaler.transform(test_data_logloss)
print(f'The mean of the train data is: {np.mean(train_data_scaled)}, the variance is {np.std(train_data_scaled)}')
print(f'The mean of the test data is: {np.mean(test_data_scaled)}, the variance is {np.std(test_data_scaled)}')
fig, axs = plt.subplots(nrows=1, ncols=2, sharex=True, figsize=(14, 7))
ax = axs[0]
ax.plot(train_data_logloss)
ax.set_title('Train data plot before scaling')
ax.grid(True)
ax = axs[1]
ax.plot(train_data_scaled)
ax.set_title('Train data plot after scaling')
ax.grid(True)
# -
# ### 3. ML Algorithm
# After data cleanup, the code used to predict the labels with the Support Vector Machine classifier can be found in the cells below.
# The following cell contains the parameters tuning algorithm: do not execute it!
# + active=""
# ## Calculate the time needed for the following cell
# start_time = time.time()
# ## Determine the best parameters for the Support Vector Algorithm
# parameters = {'kernel':('linear', 'poly', 'rbf', 'sigmoid'), 'C':[1, 2, 3, 4, 5, 6, 7, 8, 9, 10], 'gamma':('scale', 'auto')}
# svc = svm.SVC(probability=True)
# clf = GridSearchCV(svc, parameters, cv=5, return_train_score=True, n_jobs = -1)
# clf.fit(train_data_scaled, np.ravel(train_labels_logloss))
# end_time = time.time()
# print(f'Total time needed for hyperparameters tuning: {end_time - start_time}')
# -
# #### Grid search results
#
# In the following cell only a part of the data is displayed. The file log-loss-svm-tuning.csv contains all the data.
# + active=""
# print('Grid search results:')
# df = pd.DataFrame(data=clf.cv_results_)
# display(df)
# df.to_csv('log-loss-svm-tuning.csv')
# print('The best estimator is:')
# print(clf.best_estimator_)
# -
# The best estimator is:
# `SVC(
# C=3,
# cache_size=200,
# class_weight=None,
# coef0=0.0,
# decision_function_shape='ovr',
# degree=3,
# gamma='auto',
# kernel='rbf',
# max_iter=-1,
# probability=True,
# random_state=None,
# shrinking=True,
# tol=0.001,
# verbose=True
# )`
clf = svm.SVC(
C=3,
cache_size=200,
class_weight=None,
coef0=0.0,
decision_function_shape='ovr',
degree=3,
gamma='auto',
kernel='rbf',
max_iter=-1,
probability=True,
random_state=None,
shrinking=True,
tol=0.001,
verbose=True
)
clf.fit(train_data_scaled, np.ravel(train_labels_logloss))
# As was done previously, once we have a starting point for the optimal values, we can manually fine-tune the classifier, obtaining:
clf = svm.SVC(C=10, cache_size=400, class_weight=None, coef0=0.0,
decision_function_shape='ovr', gamma=0.001, kernel='rbf',
max_iter=-1, probability=True, random_state=None, shrinking=True,
tol=0.00001)
clf.fit(train_data_scaled, np.ravel(train_labels_logloss))
score = clf.score(train_data_scaled, train_labels_logloss)
print(f'Training accuracy is {np.round(score * 100, 2)}%')
# +
predictions = clf.predict_proba(test_data_scaled)
assert predictions.shape == (6544, 10)
data_l = np.empty((1, 10))
for i in range(predictions.shape[1]):
data_l[0, i] = np.mean(predictions[:, i]) * 100
print(f'Average probability for label {i + 1} ({class_names[i]}): {np.round(data_l[0, i], 2)}%')
plt.plot(data_l[0, :], c='red')
plt.plot(data[:, 1], c='blue')
plt.grid(True)
plt.title('Comparison between train labels and predicted labels')
plt.xlabel('Label number')
plt.ylabel('Percentage of occurrence')
plt.show()
# +
def save_prediction_loss(loss):
    prediction = pd.DataFrame(loss, columns=['Class_1', 'Class_2', 'Class_3', 'Class_4', 'Class_5',
                                             'Class_6', 'Class_7', 'Class_8', 'Class_9', 'Class_10'])
    prediction.index += 1
    prediction.to_csv('loss_kaggle.csv', index_label="Sample_id", index=True)

save_prediction_loss(predictions)
# -
# ## 4. Results
# As an overview, these are the results we obtained on Kaggle:
# ### Accuracy challenge
# - SVM with hyperparameter tuning: 0.66123
# - SVM without hyperparameter tuning: 0.64034
# - RF with hyperparameter tuning: 0.61691
# - RF without hyperparameter tuning: 0.64034
# - NN with hyperparameter tuning: 0.62608
# - NN without hyperparameter tuning: 0.63728
#
# ### Logloss challenge
# - SVM with hyperparameter tuning: 0.16761
# - SVM without hyperparameter tuning: 0.16922
# ## 5. Discussion/Conclusions
# This paper analyzed different approaches in order to solve the same problem: music genre classification.
# The most accurate algorithm was the Support Vector Machine with hyperparameters tuning: it was possible to achieve an accuracy of 66.123% over the test data.
# On the other hand, the Random Forest algorithm also performed quite well without any tuning (64.03%), but became less accurate when we tried to optimize its parameters.
# In order to optimize the parameters, two different approaches were used:
# 1. GridSearchCV
# 2. RandomizedSearchCV
#
# These two strategies tackle the same task in two different ways:
# GridSearchCV evaluates the accuracy and other statistics of a ML method by trying all the possible combinations between the parameters and the values specified in the grid. For this reason, it is computationally expensive and takes a long time to complete.
#
# RandomizedSearchCV, on the other hand, operates by choosing random combinations of parameters and picking the best one. This method is faster, since the number of iterations can be limited, but is sometimes less accurate. It is useful when one has no idea which values might be optimal, whereas GridSearchCV is more useful when we already have an idea of where the optimal values lie.
#
# GridSearchCV was used to tune the SVM estimator since it has fewer parameters and it was possible to compute the best estimator in a relatively small amount of time (less than 2 hours).
#
# RandomizedSearchCV was instead used to tune the parameters of the RF because it has more parameters and thus more possible combinations. Indeed, applying GridSearchCV would have required evaluating more than 25 thousand scores, while the randomized search let us optimize it with only 50 fits.
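# The gap between the two strategies can be quantified directly from the axis sizes of the RF grid defined earlier:

```python
# Number of values per hyperparameter axis, copied from the RF grid above
grid_sizes = {'n_estimators': 8, 'max_features': 3, 'max_depth': 10,
              'min_samples_split': 3, 'min_samples_leaf': 3, 'bootstrap': 1}

combos = 1
for size in grid_sizes.values():
    combos *= size
print(f'{combos} combinations -> {combos * 5} fits under 5-fold CV')
# RandomizedSearchCV with n_iter=10 and cv=5 needs only 10 * 5 = 50 fits
```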
#
# Finally, the Neural Network did not benefit from the hyperparameters tuning.
#
# The ineffectiveness of the tuning for the Random Forest and the Neural Network might be explained by the fact that, while for SVM it is possible to test a few parameters with well-known typical values (for example, gamma is usually a value below 1, or 'auto'/'scale'), both the RF and the Neural Network rely heavily on the number of estimators (for the RF) or on the layer architecture (e.g. perceptrons per layer) for the Neural Network. This makes finding the best combination harder and would require significantly more time.
# ## 6. References
# All the references used throughout the notebook can be found reported at the bottom of this cell.
# However, instead of using the standard IEEE reference style, the references were put inline in the text for an easier and faster access.
#
# Wikipedia, https://en.wikipedia.org/wiki/Support_vector_machine
# Wikipedia, https://en.wikipedia.org/wiki/Random_forest
#
MLBP2018 Project Report [Group 169].ipynb
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# ### LOAD DATA
import csv # for csv file import
import numpy as np
import os
import cv2
# +
def get_file_data(file_path, header=False):
    # read in data from driving_log.csv
    lines = []
    with open(file_path + '/driving_log.csv') as csvfile:
        reader = csv.reader(csvfile)
        # if header is set to true then skip the first line of the csv
        if header:
            # iterate to the next item in the reader; returns -1 if exhausted
            next(reader, -1)
        for line in reader:
            # append each line of the csv to the lines array
            lines.append(line)
    return lines
centre_camera = 0
left_camera = 1
right_camera = 2
steering_angle = 3
new_x = get_file_data('./my_driving')
udacity_x = get_file_data('./data')
# +
steering_angles = []
camera_images = []
source_path = './my_driving/IMG/'
for line in new_x:
    # get the steering angle (4th element of the CSV) and cast it as a float
    steering_centre = float(line[steering_angle])
    # create adjusted steering measurements for the side camera images
    correction = 0.25  # this is a parameter to tune
    steering_left = steering_centre + correction
    steering_right = steering_centre - correction
    img_centre = cv2.imread(source_path + line[centre_camera].split('/')[-1])
    img_left = cv2.imread(source_path + line[left_camera].split('/')[-1])
    img_right = cv2.imread(source_path + line[right_camera].split('/')[-1])
    # optional colour-space conversions (unused):
    #img_centre_hsv = cv2.cvtColor(img_centre, cv2.COLOR_BGR2HSV)
    #img_left_hsv = cv2.cvtColor(img_left, cv2.COLOR_BGR2HSV)
    #img_right_hsv = cv2.cvtColor(img_right, cv2.COLOR_BGR2HSV)
    #img_centre_yuv = cv2.cvtColor(img_centre, cv2.COLOR_BGR2YUV)
    #img_left_yuv = cv2.cvtColor(img_left, cv2.COLOR_BGR2YUV)
    #img_right_yuv = cv2.cvtColor(img_right, cv2.COLOR_BGR2YUV)
    # add images and angles to the data set
    camera_images.append(img_centre)
    camera_images.append(img_left)
    camera_images.append(img_right)
    steering_angles.append(steering_centre)
    steering_angles.append(steering_left)
    steering_angles.append(steering_right)
# +
# convert arrays to numpy arrays for keras
X_train = np.array(camera_images)
y_train = np.array(steering_angles)
print(X_train[1].shape)
# +
my_timestamps = []
my_ms_timestamp = []
my_steering_angles = []
for y in new_x:
    # np.float was removed in NumPy 1.24; the built-in float works the same here
    my_steering_angles.append(float(y[steering_angle]))
    # add file name from the centre camera image to get the timestamp
    my_timestamps.append(y[centre_camera].split('/')[-1])
    # get the file name from the centre camera image with no file extension
    filename_noext = os.path.splitext(os.path.basename(y[centre_camera]))[0]
    # extract hours, mins, secs and msecs from the filename
    h_m_s_ms = filename_noext.split('_')[-4:]
    # process time to msecs
    mins = float(h_m_s_ms[0]) * 60 + float(h_m_s_ms[1])
    secs = mins * 60 + float(h_m_s_ms[2])
    msecs = secs * 1000 + float(h_m_s_ms[3])
    # append msecs to array
    my_ms_timestamp.append(int(msecs))
# create numpy array from array
np_my_steering_angles = np.array(my_steering_angles)
udacity_steering_angles = []
udacity_timestamps = []
udacity_ms_timestamp = []
for y in udacity_x:
    udacity_steering_angles.append(float(y[steering_angle]))
    # use the centre camera image file name to get the timestamp
    udacity_timestamps.append(y[centre_camera].split('/')[-1])
    # get the centre camera file name with no file extension
    filename_noext = os.path.splitext(os.path.basename(y[centre_camera]))[0]
    # extract hours, mins, secs and msecs from the filename
    h_m_s_ms = filename_noext.split('_')[-4:]
    # convert the time to msecs
    mins = float(h_m_s_ms[0]) * 60 + float(h_m_s_ms[1])
    secs = mins * 60 + float(h_m_s_ms[2])
    msecs = secs * 1000 + float(h_m_s_ms[3])
    # append msecs to the list
    udacity_ms_timestamp.append(int(msecs))
my_ms_timestamp_norm = ([my_ms_timestamp[i+1]- my_ms_timestamp[i] for i in range(len(my_ms_timestamp)-1)])
my_ms_timestamp_norm.insert(0,0)
my_ms_timestamp_norm_cum = np.cumsum(my_ms_timestamp_norm)
udacity_ms_timestamp_norm = ([udacity_ms_timestamp[i+1]- udacity_ms_timestamp[i] for i in range(len(udacity_ms_timestamp)-1)])
udacity_ms_timestamp_norm.insert(0,0)
udacity_ms_timestamp_norm_cum = np.cumsum(udacity_ms_timestamp_norm)
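# The filename-to-milliseconds conversion used in the two loops above can be
# factored into a small helper. This is an illustrative sketch (the helper name
# `filename_to_ms` is ours, not part of the original pipeline), assuming capture
# filenames end in `_HH_MM_SS_mmm` as in the simulator output:

```python
import os

def filename_to_ms(path):
    """Convert a capture filename ending in _HH_MM_SS_mmm to milliseconds."""
    stem = os.path.splitext(os.path.basename(path))[0]
    hours, mins, secs, msecs = (float(p) for p in stem.split('_')[-4:])
    return int(((hours * 60 + mins) * 60 + secs) * 1000 + msecs)

print(filename_to_ms('center_2016_12_01_13_30_48_287.jpg'))  # 48648287
```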
# +
import matplotlib.pyplot as plt
import numpy as np
# %matplotlib inline
fig = plt.figure(figsize=(12,12))
ax1 = plt.subplot(2, 2, 1)
plt.plot(my_ms_timestamp_norm_cum, np_my_steering_angles, label='my_data')
plt.plot(udacity_ms_timestamp_norm_cum, udacity_steering_angles, label='udacity', alpha=0.5)
plt.legend()
plt.xlabel('mSecs')
plt.ylabel('angle (rad)')
plt.title('Steering Wheel Angle Variation')
plt.grid(True)
ax2 = plt.subplot(2, 2, 2)
plt.plot(np_my_steering_angles, label='my_data' )
plt.plot(udacity_steering_angles, label='udacity',alpha=0.5)
plt.legend()
plt.xlabel('instance position')
plt.ylabel('angle (rad)')
plt.title('Steering Wheel Angle Variation')
plt.grid(True)
ax3 = plt.subplot(2, 2, 3)
#plt.scatter(udacity_ms_timestamp_norm_cum, udacity_steering_angles, label='udacity')
plt.scatter(my_ms_timestamp_norm_cum, np_my_steering_angles, label='my_data', alpha=0.5)
plt.legend()
plt.xlabel('mSecs')
plt.ylabel('angle (rad)')
plt.title('Steering Wheel Angle Variation')
plt.grid(True)
ax4 = plt.subplot(2, 2, 4)
#plt.scatter(udacity_ms_timestamp_norm_cum, udacity_steering_angles, label='udacity')
#plt.hist(my_ms_timestamp_norm_cum, my_steering_angles, label='my_data')
plt.hist(np_my_steering_angles, 100, alpha=0.75, label='my_data' )
plt.legend()
plt.xlabel('angle(rad)')
plt.ylabel('Number of instances')
plt.title('Steering Wheel Angle Variation')
plt.grid(True)
#plt.savefig("test.png")
plt.show()
# +
import math
instance_count = len(np_my_steering_angles)
num_zeros = (np_my_steering_angles == 0.0).sum()  # 0.0 == -0.0 for floats, so one comparison suffices
num_near_zero = ((np_my_steering_angles < 0.0174) & (np_my_steering_angles > -0.0174)).sum()
num_left = (np_my_steering_angles < 0.0).sum()
num_right = (np_my_steering_angles > 0.0).sum()
deg = math.degrees(0.0174)
rad = math.radians(1)
print("Total number of steering instances: {0}".format(instance_count))
print("Number of instances with 0 as steering Angle: {0} ({1:.2f}%)".format(num_zeros, (num_zeros/instance_count)*100))
print("Number of instances < +/-1 degree as steering Angle: {0} ({1:.2f}%)".format(num_near_zero, (num_near_zero/instance_count)*100))
print("Number of instances with left steering Angle: {0} ({1:.2f}%)".format(num_left, (num_left/instance_count)*100))
print("Number of instances with right steering Angle: {0} ({1:.2f}%)".format(num_right, (num_right/instance_count)*100))
# +
import math
instance_count = len(y_train)
image_count = len(X_train)
num_zeros = (y_train == 0.0).sum()  # 0.0 == -0.0 for floats, so one comparison suffices
num_near_zero = ((y_train < 0.0174) & (y_train > -0.0174)).sum()
num_left = (y_train < 0.0).sum()
num_right = (y_train > 0.0).sum()
deg = math.degrees(0.0174)
rad = math.radians(1)
print("Total number of steering instances: {0}".format(instance_count))
print("Total number of image instances: {0}".format(image_count))
print("Number of instances with 0 as steering Angle: {0} ({1:.2f}%)".format(num_zeros, (num_zeros/instance_count)*100))
print("Number of instances < +/-1 degree as steering Angle: {0} ({1:.2f}%)".format(num_near_zero, (num_near_zero/instance_count)*100))
print("Number of instances with left steering Angle: {0} ({1:.2f}%)".format(num_left, (num_left/instance_count)*100))
print("Number of instances with right steering Angle: {0} ({1:.2f}%)".format(num_right, (num_right/instance_count)*100))
# -
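# The statistics above typically reveal a heavy bias toward near-zero steering
# angles, which can make a trained model reluctant to turn. One common
# mitigation, sketched here as a hypothetical follow-up (not part of the
# original pipeline, and `downsample_near_zero` is our own name), is to keep
# only a random fraction of the near-straight instances before training:

```python
import numpy as np

def downsample_near_zero(angles, keep_fraction=0.3, threshold=0.0174, seed=0):
    """Return indices keeping all turning samples but only a random fraction
    of near-straight ones (|angle| < threshold, roughly < 1 degree)."""
    angles = np.asarray(angles)
    rng = np.random.default_rng(seed)
    near_zero = np.abs(angles) < threshold
    keep = ~near_zero | (rng.random(angles.shape) < keep_fraction)
    return np.flatnonzero(keep)

angles = np.array([0.0, 0.5, -0.3, 0.001, 0.0, 0.2])
idx = downsample_near_zero(angles, keep_fraction=0.0)
print(idx)  # only the turning samples remain
```

# Applying the returned indices to the arrays built earlier (e.g. `X_train[idx]`,
# `y_train[idx]`) would flatten the spike at zero in the histogram above.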
# ### play video file
# +
from moviepy.editor import ImageSequenceClip
import argparse
import os
IMAGE_EXT = ['jpeg', 'gif', 'png', 'jpg']
def video_maker(image_folder='./data/IMG', set_fps=10):
    # convert the folder contents into a list filtered for image file types
    image_list = sorted([os.path.join(image_folder, image_file)
                         for image_file in os.listdir(image_folder)])
    image_list = [image_file for image_file in image_list
                  if os.path.splitext(image_file)[1][1:].lower() in IMAGE_EXT]
    # two methods of naming the output video to handle varying environments
    video_file_1 = image_folder + '.mp4'
    video_file_2 = image_folder + 'output_video.mp4'
    print("Creating video {}, FPS={}".format(image_folder, set_fps))
    clip = ImageSequenceClip(image_list, fps=set_fps)
    try:
        clip.write_videofile(video_file_1)
    except Exception:
        clip.write_videofile(video_file_2)
# +
#video_maker()
# +
import numpy as np
import matplotlib.pyplot as plt
# evenly sampled time at 200ms intervals
#t = np.arange(0., 5., 0.2)
# red dashes, blue squares and green triangles
#plt.plot(t, t, 'r--', t, t**2, 'bs', t, t**3, 'g^')
#plt.show()
# -
# ### NVIDIA NET FUNCTION
# +
from keras.models import Sequential
from keras.layers import Flatten, Dense, Lambda, Cropping2D, Dropout
from keras.layers.convolutional import Conv2D
from keras.layers.pooling import MaxPooling2D
def net_NVIDIA():
    # NVIDIA convolutional network
    # create a sequential model
    model = Sequential()
    # pre-processing: normalise and mean-centre the data with a lambda layer -
    # divide each element by 255 (max pixel value) to get the range 0 to 1,
    # then subtract 0.5 to shift the mean from 0.5 to 0;
    # training and validation loss should be much smaller as a result
    model.add(Lambda(lambda x: x / 255.0 - 0.5, input_shape=(160, 320, 3)))
    # crop pixels that add no value - the top 70 and bottom 25 rows
    model.add(Cropping2D(cropping=((70, 25), (0, 0))))
    # Keras automatically infers the shape of all layers after the 1st layer
    # 1st layer
    #model.add(Conv2D(24, (5, 5), subsample=(2, 2), activation="relu"))
    model.add(Conv2D(24, (5, 5), activation="relu", strides=(2, 2)))
    # 2nd layer
    #model.add(Conv2D(36, (5, 5), subsample=(2, 2), activation="relu"))
    model.add(Conv2D(36, (5, 5), activation="relu", strides=(2, 2)))
    # 3rd layer
    #model.add(Conv2D(48, (5, 5), subsample=(2, 2), activation="relu"))
    model.add(Conv2D(48, (5, 5), activation="relu", strides=(2, 2)))
    # 4th layer
    model.add(Conv2D(64, (3, 3), activation="relu"))
    # 5th layer
    model.add(Conv2D(64, (3, 3), activation="relu"))
    # 6th layer
    model.add(Flatten())
    # 7th layer - fully connected layer, output of 100
    model.add(Dense(100))
    # 8th layer - fully connected layer, output of 50
    model.add(Dense(50))
    # 9th layer - fully connected layer, output of 10
    model.add(Dense(10))
    # 10th layer - fully connected layer, output of 1
    model.add(Dense(1))
    model.summary()
    return model


def train_model(model, inputs, outputs, model_path, set_epochs=3):
    #model.compile(loss='mse', optimizer='adam')
    model.compile(loss='mse', optimizer='adam', metrics=['mse', 'mae', 'mape', 'cosine', 'acc'])
    history_object = model.fit(inputs, outputs, validation_split=0.2, shuffle=True, epochs=set_epochs, verbose=1)
    model_object = 'Final_' + model_path + str(set_epochs) + '.h5'
    model.save(model_object)
    print("Model saved at " + model_object)
    return history_object
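# To see how the 160x320x3 input shrinks through the layers above, the
# 'valid'-padding output size of each convolution can be computed by hand.
# A small illustrative sketch (`conv_out` is our own helper, not part of the
# model code):

```python
def conv_out(size, kernel, stride=1):
    """Output length of a 'valid' convolution along one dimension."""
    return (size - kernel) // stride + 1

# after cropping 70 top and 25 bottom rows: 160 - 95 = 65 rows, 320 cols
h, w = 65, 320
for kernel, stride in [(5, 2), (5, 2), (5, 2), (3, 1), (3, 1)]:
    h, w = conv_out(h, kernel, stride), conv_out(w, kernel, stride)
    print(h, w)

# the final 1 x 33 x 64 volume flattens to 2112 features feeding Dense(100)
```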
# +
# Create Model
model = net_NVIDIA()
num_epoch = 6
history_object = train_model(model, X_train, y_train, './NVidia_', num_epoch)
# +
### print the keys contained in the history object
print(history_object.history.keys())
### plot the training and validation loss for each epoch
plt.plot(history_object.history['loss'])
plt.plot(history_object.history['val_loss'])
plt.title('model mean squared error loss')
plt.ylabel('mean squared error loss')
plt.xlabel('epoch')
plt.legend(['training set', 'validation set'], loc='upper right')
#plt.savefig("Loss_NVidia_6.png")
plt.savefig("Final_Loss_NVidia_{0}.png".format(num_epoch))
plt.show()
# -
|
.ipynb_checkpoints/BehaviourInvestigations-checkpoint.ipynb
|
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
import numpy as np
import pandas as pd
import seaborn as sns
import matplotlib.pyplot as plt
# %matplotlib inline
ebola = pd.read_csv(r'C:\Users\USER-PC\Documents\Data Science\data set\ebola_2014_2016_clean.csv')
ebola.columns
ebola.describe()
ebola.head()
e_country = ebola.groupby('Country').sum()
e_country
ebola.head()
sns.scatterplot(x= 'Cumulative no. of confirmed, probable and suspected cases', y='Cumulative no. of confirmed, probable and suspected deaths',
hue= 'Country', data= ebola)
plt.legend(loc=(1.1,0.2))
sns.distplot(ebola['Cumulative no. of confirmed, probable and suspected cases'], kde=False)
sns.distplot(ebola['Cumulative no. of confirmed, probable and suspected deaths'], kde=False)
sns.jointplot(x= 'Cumulative no. of confirmed, probable and suspected deaths', y= 'Cumulative no. of confirmed, probable and suspected cases',
data=ebola, kind='reg', color='blue')
e_country.corr().plot(kind='bar')
sns.countplot(y= 'Cumulative no. of confirmed, probable and suspected cases', data=e_country)
e_country.plot()
e_country.head()
sns.scatterplot(x= 'Cumulative no. of confirmed, probable and suspected deaths', y= 'Cumulative no. of confirmed, probable and suspected cases', data= e_country)
sns.lmplot(x= 'Cumulative no. of confirmed, probable and suspected deaths', y= 'Cumulative no. of confirmed, probable and suspected cases',
data=ebola, hue='Country')
# observe the trend in countries with fewer than 500 cases
e_country[e_country['Cumulative no. of confirmed, probable and suspected cases'] < 500]['Cumulative no. of confirmed, probable and suspected deaths']
# +
# Evidently, countries with fewer than 500 cases recorded no deaths
# +
# observe the trend in countries with between 500 and 1000 cases
usa = e_country[(e_country['Cumulative no. of confirmed, probable and suspected cases'] > 500) & (e_country['Cumulative no. of confirmed, probable and suspected cases'] < 1000)]
# -
usa
sns.barplot(data=usa)
usa.corr(method='kendall')
date = ebola.groupby(by='Date').sum()
date.head()
import chart_studio.plotly as cs
import plotly.graph_objs as go
from plotly.offline import download_plotlyjs, init_notebook_mode, plot, iplot
init_notebook_mode(connected = True)
data = dict(type ='choropleth',
locations = ebola['Country'],
locationmode='ISO-3',
colorscale = 'Greens',
text =ebola['Country'],
marker = dict(line = dict(color = 'rgb(12, 12, 12)', width = 2)),
z = ebola['Cumulative no. of confirmed, probable and suspected cases'],
colorbar = {'title': 'Affected Countries With Ebola'})
layout = dict(title = 'Affected Countries With Ebola',
geo = dict(showframe = True,
showlakes = True, lakecolor= 'rgb(85,173,240)',
projection = {'type': 'natural earth'}))
choromap = go.Figure(data = [data], layout = layout)
iplot(choromap)
|
Project Ebola.ipynb
|
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# name: python3
# ---
# + id="hMqWDc_m6rUC" colab_type="code" cellView="form" colab={}
#@title Copyright 2020 Google LLC. Double-click here for license information.
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
# + [markdown] id="4f3CKqFUqL2-" colab_type="text"
# # Validation Sets and Test Sets
#
# The previous Colab exercises evaluated the trained model against the training set, which does not provide a strong signal about the quality of your model. In this Colab, you'll experiment with validation sets and test sets.
#
#
#
#
#
# + [markdown] id="3spZH_kNkWWX" colab_type="text"
# ## Learning objectives
#
# After doing this Colab, you'll know how to do the following:
#
# * Split a [training set](https://developers.google.com/machine-learning/glossary/#training_set) into a smaller training set and a [validation set](https://developers.google.com/machine-learning/glossary/#validation_set).
# * Analyze deltas between training set and validation set results.
# * Test the trained model with a [test set](https://developers.google.com/machine-learning/glossary/#test_set) to determine whether your trained model is [overfitting](https://developers.google.com/machine-learning/glossary/#overfitting).
# * Detect and fix a common training problem.
# + [markdown] id="gV82DJO3kWpk" colab_type="text"
# ## The dataset
#
# As in the previous exercise, this exercise uses the [California Housing dataset](https://developers.google.com/machine-learning/crash-course/california-housing-data-description) to predict the `median_house_value` at the city block level. Like many "famous" datasets, the California Housing Dataset actually consists of two separate datasets, each living in separate .csv files:
#
# * The training set is in `california_housing_train.csv`.
# * The test set is in `california_housing_test.csv`.
#
# You'll create the validation set by dividing the downloaded training set into two parts:
#
# * a smaller training set
# * a validation set
# + [markdown] id="u84mXopntPFZ" colab_type="text"
# ## Use the right version of TensorFlow
#
# The following hidden code cell ensures that the Colab will run on TensorFlow 2.X.
# + id="FBhNIdUatOU6" colab_type="code" cellView="form" colab={}
#@title Run on TensorFlow 2.x
# %tensorflow_version 2.x
# + [markdown] id="S8gm6BpqRRuh" colab_type="text"
# ## Import relevant modules
#
# As before, this first code cell imports the necessary modules and sets a few display options.
# + id="9D8GgUovHbG0" colab_type="code" cellView="form" colab={}
#@title Import modules
import numpy as np
import pandas as pd
import tensorflow as tf
from matplotlib import pyplot as plt
pd.options.display.max_rows = 10
pd.options.display.float_format = "{:.1f}".format
# + [markdown] id="xjvrrClQeAJu" colab_type="text"
# ## Load the datasets from the internet
#
# The following code cell loads the separate .csv files and creates the following two pandas DataFrames:
#
# * `train_df`, which contains the training set.
# * `test_df`, which contains the test set.
#
#
# + id="zUnTc_wfd_o3" colab_type="code" colab={}
train_df = pd.read_csv("https://download.mlcc.google.com/mledu-datasets/california_housing_train.csv")
test_df = pd.read_csv("https://download.mlcc.google.com/mledu-datasets/california_housing_test.csv")
# + [markdown] id="P_KBdj2M_yjM" colab_type="text"
# ## Scale the label values
#
# The following code cell scales the `median_house_value`.
# See the previous Colab exercise for details.
# + id="3hc7QQhaAFXD" colab_type="code" colab={}
scale_factor = 1000.0
# Scale the training set's label.
train_df["median_house_value"] /= scale_factor
# Scale the test set's label
test_df["median_house_value"] /= scale_factor
# + [markdown] id="FhessIIV8VPc" colab_type="text"
# ## Load the functions that build and train a model
#
# The following code cell defines two functions:
#
# * `build_model`, which defines the model's topography.
# * `train_model`, which will ultimately train the model, outputting not only the loss value for the training set but also the loss value for the validation set.
#
# Since you don't need to understand model building code right now, we've hidden this code cell. As always, you must run hidden code cells.
# + id="bvonhK857msj" colab_type="code" cellView="form" colab={}
#@title Define the functions that build and train a model
def build_model(my_learning_rate):
    """Create and compile a simple linear regression model."""
    # Most simple tf.keras models are sequential.
    model = tf.keras.models.Sequential()

    # Add one linear layer to the model to yield a simple linear regressor.
    model.add(tf.keras.layers.Dense(units=1, input_shape=(1,)))

    # Compile the model topography into code that TensorFlow can efficiently
    # execute. Configure training to minimize the model's mean squared error.
    model.compile(optimizer=tf.keras.optimizers.RMSprop(lr=my_learning_rate),
                  loss="mean_squared_error",
                  metrics=[tf.keras.metrics.RootMeanSquaredError()])

    return model
def train_model(model, df, feature, label, my_epochs,
                my_batch_size=None, my_validation_split=0.1):
    """Feed a dataset into the model in order to train it."""
    history = model.fit(x=df[feature],
                        y=df[label],
                        batch_size=my_batch_size,
                        epochs=my_epochs,
                        validation_split=my_validation_split)

    # Gather the model's trained weight and bias.
    trained_weight = model.get_weights()[0]
    trained_bias = model.get_weights()[1]

    # The list of epochs is stored separately from the
    # rest of history.
    epochs = history.epoch

    # Isolate the root mean squared error for each epoch.
    hist = pd.DataFrame(history.history)
    rmse = hist["root_mean_squared_error"]

    return epochs, rmse, history.history
print("Defined the build_model and train_model functions.")
# + [markdown] id="8gRu4Ri0D8tH" colab_type="text"
# ## Define plotting functions
#
# The `plot_the_loss_curve` function plots loss vs. epochs for both the training set and the validation set.
# + id="QA7hsqPZDvVM" colab_type="code" cellView="form" colab={}
#@title Define the plotting function
def plot_the_loss_curve(epochs, mae_training, mae_validation):
    """Plot a curve of loss vs. epoch."""
    plt.figure()
    plt.xlabel("Epoch")
    plt.ylabel("Root Mean Squared Error")

    plt.plot(epochs[1:], mae_training[1:], label="Training Loss")
    plt.plot(epochs[1:], mae_validation[1:], label="Validation Loss")
    plt.legend()

    # We're not going to plot the first epoch, since the loss on the first epoch
    # is often substantially greater than the loss for other epochs.
    merged_mae_lists = mae_training[1:] + mae_validation[1:]
    highest_loss = max(merged_mae_lists)
    lowest_loss = min(merged_mae_lists)
    delta = highest_loss - lowest_loss
    print(delta)

    top_of_y_axis = highest_loss + (delta * 0.05)
    bottom_of_y_axis = lowest_loss - (delta * 0.05)

    plt.ylim([bottom_of_y_axis, top_of_y_axis])
    plt.show()
print("Defined the plot_the_loss_curve function.")
# + [markdown] id="jipBqEQXlsN8" colab_type="text"
# ## Task 1: Experiment with the validation split
#
# In the following code cell, you'll see a variable named `validation_split`, which we've initialized at 0.2. The `validation_split` variable specifies the proportion of the original training set that will serve as the validation set. The original training set contains 17,000 examples. Therefore, a `validation_split` of 0.2 means that:
#
# * 17,000 * 0.2 ~= 3,400 examples will become the validation set.
# * 17,000 * 0.8 ~= 13,600 examples will become the new training set.
#
# The following code builds a model, trains it on the training set, and evaluates the built model on both:
#
# * The training set.
# * And the validation set.
#
# If the data in the training set is similar to the data in the validation set, then the two loss curves and the final loss values should be almost identical. However, the loss curves and final loss values are **not** almost identical. Hmm, that's odd.
#
# Experiment with two or three different values of `validation_split`. Do different values of `validation_split` fix the problem?
#
# + id="knP23Taoa00a" colab_type="code" colab={}
# The following variables are the hyperparameters.
learning_rate = 0.08
epochs = 30
batch_size = 100
# Split the original training set into a reduced training set and a
# validation set.
validation_split = 0.2
# Identify the feature and the label.
my_feature = "median_income" # the median income on a specific city block.
my_label = "median_house_value" # the median house value on a specific city block.
# That is, you're going to create a model that predicts house value based
# solely on the neighborhood's median income.
# Discard any pre-existing version of the model.
my_model = None
# Invoke the functions to build and train the model.
my_model = build_model(learning_rate)
epochs, rmse, history = train_model(my_model, train_df, my_feature,
my_label, epochs, batch_size,
validation_split)
plot_the_loss_curve(epochs, history["root_mean_squared_error"],
history["val_root_mean_squared_error"])
# + [markdown] id="TKa11JK4Pm3f" colab_type="text"
# ## Task 2: Determine **why** the loss curves differ
#
# No matter how you split the training set and the validation set, the loss curves differ significantly. Evidently, the data in the training set isn't similar enough to the data in the validation set. Counterintuitive? Yes, but this problem is actually pretty common in machine learning.
#
# Your task is to determine **why** the loss curves aren't highly similar. As with most issues in machine learning, the problem is rooted in the data itself. To solve this mystery of why the training set and validation set aren't almost identical, write a line or two of [pandas code](https://colab.research.google.com/github/google/eng-edu/blob/main/ml/cc/exercises/pandas_dataframe_ultraquick_tutorial.ipynb?utm_source=validation-colab&utm_medium=colab&utm_campaign=colab-external&utm_content=pandas_tf2-colab&hl=en) in the following code cell. Here are a couple of hints:
#
# * The previous code cell split the original training set into:
# * a reduced training set (the original training set - the validation set)
# * the validation set
# * By default, the pandas [`head`](https://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.DataFrame.head.html) method outputs the *first* 5 rows of the DataFrame. To see more of the training set, specify the `n` argument to `head` and assign a large positive integer to `n`.
# + id="VJQcAZkwJt_p" colab_type="code" colab={}
# Write some code in this code cell.
# + id="EnNvkFwwK8WY" colab_type="code" cellView="form" colab={}
#@title Double-click for a possible solution to Task 2.
# Examine examples 0 through 4 and examples 25 through 29
# of the training set
train_df.head(n=1000)
# The original training set is sorted by longitude.
# Apparently, longitude influences the relationship of
# total_rooms to median_house_value.
# + [markdown] id="rw4xI1ZEckI8" colab_type="text"
# ## Task 3. Fix the problem
#
# To fix the problem, shuffle the examples in the training set before splitting the examples into a training set and validation set. To do so, take the following steps:
#
# 1. Shuffle the data in the training set by adding the following line anywhere before you call `train_model` (in the code cell associated with Task 1):
#
# ```
# shuffled_train_df = train_df.reindex(np.random.permutation(train_df.index))
# ```
#
# 2. Pass `shuffled_train_df` (instead of `train_df`) as the second argument to `train_model` (in the code call associated with Task 1) so that the call becomes as follows:
#
# ```
# epochs, rmse, history = train_model(my_model, shuffled_train_df, my_feature,
# my_label, epochs, batch_size,
# validation_split)
# ```
# + id="ncODhpv0h-LG" colab_type="code" cellView="form" colab={}
#@title Double-click to view the complete implementation.
# The following variables are the hyperparameters.
learning_rate = 0.08
epochs = 70
batch_size = 100
# Split the original training set into a reduced training set and a
# validation set.
validation_split = 0.2
# Identify the feature and the label.
my_feature = "median_income" # the median income on a specific city block.
my_label = "median_house_value" # the median house value on a specific city block.
# That is, you're going to create a model that predicts house value based
# solely on the neighborhood's median income.
# Discard any pre-existing version of the model.
my_model = None
# Shuffle the examples.
shuffled_train_df = train_df.reindex(np.random.permutation(train_df.index))
# Invoke the functions to build and train the model. Train on the shuffled
# training set.
my_model = build_model(learning_rate)
epochs, rmse, history = train_model(my_model, shuffled_train_df, my_feature,
my_label, epochs, batch_size,
validation_split)
plot_the_loss_curve(epochs, history["root_mean_squared_error"],
history["val_root_mean_squared_error"])
# + [markdown] id="tKN239_miW8C" colab_type="text"
# Experiment with `validation_split` to answer the following questions:
#
# * With the training set shuffled, is the final loss for the training set closer to the final loss for the validation set?
# * At what range of values of `validation_split` do the final loss values for the training set and validation set diverge meaningfully? Why?
# + id="-UAJ3Q86iz31" colab_type="code" cellView="form" colab={}
#@title Double-click for the answers to the questions
# Yes, after shuffling the original training set,
# the final loss for the training set and the
# validation set become much closer.
# If validation_split < 0.15,
# the final loss values for the training set and
# validation set diverge meaningfully. Apparently,
# the validation set no longer contains enough examples.
# + [markdown] id="1PP-O8TOZOeo" colab_type="text"
# ## Task 4: Use the Test Dataset to Evaluate Your Model's Performance
#
# The test set usually acts as the ultimate judge of a model's quality. The test set can serve as an impartial judge because its examples haven't been used in training the model. Run the following code cell to evaluate the model with the test set:
# + id="nd_Sw2cygOip" colab_type="code" colab={}
x_test = test_df[my_feature]
y_test = test_df[my_label]
results = my_model.evaluate(x_test, y_test, batch_size=batch_size)
# + [markdown] id="qoyQKvsjmV_A" colab_type="text"
# Compare the root mean squared error of the model when evaluated on each of the three datasets:
#
# * training set: look for `root_mean_squared_error` in the final training epoch.
# * validation set: look for `val_root_mean_squared_error` in the final training epoch.
# * test set: run the preceding code cell and examine the `root_mean_squared_error`.
#
# Ideally, the root mean squared error of all three sets should be similar. Are they?
# + id="FxXtp-aVdIgJ" colab_type="code" cellView="form" colab={}
#@title Double-click for an answer
# In our experiments, yes, the rmse values
# were similar enough.
|
ml/cc/exercises/validation_and_test_sets.ipynb
|
# -*- coding: utf-8 -*-
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .jl
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Julia nodeps 1.6.1
# language: julia
# name: julia-nodeps-1.6
# ---
# In this notebook we implement the quantum annealing algorithm to search for the ground state using the Yao package from Julia with P1 gates.
#import Pkg
using Yao
using Yao.ConstGate
using Plots
using BenchmarkTools
using Printf
using StatsBase
# +
#=
H(t) = Ω(t) ∑_i σ_i^x - δ(t) ∑_i n_i + u ∑_ij n_i n_j
=#
const u = 1.35
const Ω_max = 1.89
const δ_0 = -1.0
const δ_max = 1.0
function get_edges(graph::Vector{NTuple{2, Float64}})
Nv = size(graph)[1]
edges = falses(Nv, Nv)
for i in 1:(Nv-1)
xi, yi = graph[i]
for j in (i+1):Nv
xj, yj = graph[j]
dij = sqrt((xi - xj)^2. + (yi - yj)^2.)
if dij <= 1.0
edges[i,j] = true
end
end
end
return findall(edges)
end
function Ω(t::Float64)
if 0 <= t <= 0.25
return (Ω_max / 0.25) * t
elseif 0.25 < t <= 0.69
return Ω_max
elseif 0.69 < t <= 1
return - Ω_max * t / 0.31 + Ω_max * (1 + 0.69/0.31)
end
end
function δ(t::Float64)
slope = (δ_0 - δ_max)/(0.25 - 0.69)
if 0 <= t <= 0.25
return δ_0
elseif 0.25 < t <= 0.69
return t * slope + (δ_max - slope * 0.69)
elseif 0.69 < t <= 1
return δ_max
end
end
function hamiltonian(graph::Vector{NTuple{2, Float64}}, edges::Vector{CartesianIndex{2}}, t::Float64)
# the UD-MIS Hamiltonian
Nv = size(graph)[1] # number of vertices
interaction_term = map(1:size(edges)[1]) do i
l,m = edges[i][1], edges[i][2]
repeat(Nv,u*P1,(l,m))
end |> sum
interaction_term - δ(t)*sum(map(i->put(Nv,i=>P1), 1:Nv)) + Ω(t)*sum(map(i->put(Nv,i=>X), 1:Nv))
end
function run_annealing(graph::Vector{NTuple{2, Float64}}, edges::Vector{CartesianIndex{2}}, dt::Float64)
psi_t = zero_state(size(graph)[1])
for t in 0:dt:1.0
h = hamiltonian(graph, edges, t)
psi_t = psi_t |> TimeEvolution(h, dt * 100)
end
return psi_t
end
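# As a quick sanity check on the schedules, the piecewise Ω(t) and δ(t)
# functions above can be rendered in Python (an illustrative translation of
# the Julia code, using our own names `omega`/`delta`) to verify their
# endpoint values:

```python
OMEGA_MAX = 1.89
DELTA_0, DELTA_MAX = -1.0, 1.0

def omega(t):
    """Rabi-frequency schedule: ramp up, hold, then ramp down over t in [0, 1]."""
    if t <= 0.25:
        return (OMEGA_MAX / 0.25) * t
    if t <= 0.69:
        return OMEGA_MAX
    return -OMEGA_MAX * t / 0.31 + OMEGA_MAX * (1 + 0.69 / 0.31)

def delta(t):
    """Detuning schedule: hold at DELTA_0, sweep linearly, then hold at DELTA_MAX."""
    slope = (DELTA_0 - DELTA_MAX) / (0.25 - 0.69)
    if t <= 0.25:
        return DELTA_0
    if t <= 0.69:
        return t * slope + (DELTA_MAX - slope * 0.69)
    return DELTA_MAX

print(omega(0.25), delta(0.0), delta(1.0))
```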
# +
graph = [(0.3461717838632017, 1.4984640297338632),
(0.6316400411846113, 2.5754677320579895),
(1.3906262250927481, 2.164978861396621),
(0.66436005100802, 0.6717919819739032),
(0.8663329771713457, 3.3876341010035995),
(1.1643107343501296, 1.0823066243402013)
]
edges = get_edges(graph);
# -
# We notice that the algorithm reaches a different ground state depending on N (the number of measurement shots taken). The graphs clearly show a three-fold degeneracy of the ground state.
# +
N = [1000, 10000, 100000, 1000000]
dt = 0.001
plots = []
for n in N
psi = run_annealing(graph, edges, dt)
samples = measure(psi; nshots=n)
samples_int = [Int(b) for b in samples]
bins_int = unique(samples_int)
s = [string(each, base=2, pad=size(graph)[1]) for each in bins_int]
datamap = countmap(samples)
bins = unique(samples)
optimal_sol = findmax(datamap)
println("Optimal Solution for N = ", n)
println(optimal_sol)
title = @sprintf("Frequency of the states N = %d", n)
b_plot = bar((x -> datamap[x]).(bins),
xticks=(1:size(s)[1],s),
xtickfont = font(3, "Courier"),title=title, legend = false,
xrotation=60)
push!(plots, b_plot)
end
plot(plots...)
plot!(size=(1000,500))
# -
# We have also shown how the time for the algorithm to converge depends on the value of dt (for small values of dt the algorithm takes much longer to converge).
# +
dt = [0.1, 0.01, 0.001, 0.0001, 0.00001]
time = []
for t in dt
push!(time, @elapsed run_annealing(graph, edges, t))
end
# -
title = "Time to Run Function"
scatter(dt,
time,
legend = false,
title = title,
xaxis = "dt",
yaxis = "Time (s)"
)
|
Week2_Rydberg_Atoms/Task2_julia.ipynb
|
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # Project: AI Learns to TikTok
#
# ---
#
# In this notebook, we will use our model to generate dances from music.
#
# Use the links below to navigate the notebook:
# - [Step 1](#step1): Get Data Loader for Test Dataset
# - [Step 2](#step2): Visualise Spectrogram
# - [Step 3](#step3): Load Trained Models
# - [Step 4](#step4): Get Predicted Poses
# - [Step 5](#step5): Generate dance video from outputs
# <a id='step1'></a>
# ## Step 1: Get Data Loader for Test Dataset
#
# Before running the code cell below, define the transform in `transform_test` that you would like to use to pre-process the test images.
#
# Make sure that the transform that you define here agrees with the transform that you used to pre-process the training images (in **2_Training.ipynb**). For instance, if you normalized the training images, you should also apply the same normalization procedure to the test images.
# +
# %load_ext autoreload
# %autoreload 2
import sys
from data_loader import get_loader
from torchvision import transforms
# TODO #1: Define a transform to pre-process the testing images.
transform_test = transforms.Compose([
transforms.Resize(256), # smaller edge of image resized to 256
transforms.ToTensor(), # convert the PIL Image to a tensor
transforms.Normalize((0.485, 0.456, 0.406), # normalize image for pre-trained model
(0.229, 0.224, 0.225))])
#-#-#-# Do NOT modify the code below this line. #-#-#-#
# Create the data loader.
data_loader = get_loader(transform=transform_test,
mode='test')
# -
# <a id='step2'></a>
# ## Step 2: Visualise Spectrogram
#
# +
import numpy as np
import matplotlib.pyplot as plt
# %matplotlib inline
# Obtain sample image before and after pre-processing.
orig_spectrogram, spectrogram = next(iter(data_loader))
# Visualize sample image, before pre-processing.
plt.imshow(np.squeeze(orig_spectrogram))
plt.title('example of spectrogram before transformation')
plt.show()
# -
# Visualize sample image, after pre-processing.
plt.imshow(np.squeeze(spectrogram).permute(1,2,0))
plt.title('example of spectrogram after transformation')
plt.show()
# <a id='step3'></a>
# ## Step 3: Load Trained Models
#
# In the next code cell we define a `device` that you will use to move PyTorch tensors to the GPU (if CUDA is available). Run this code cell before continuing.
# +
import torch
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
# +
# Watch for any changes in model.py, and re-load it automatically.
# %load_ext autoreload
# %autoreload 2
import os
import torch
from model import EncoderCNN, DecoderRNN
# TODO #2: Specify the saved models to load.
encoder_file = "encoder-5.pkl"
decoder_file = "decoder-5.pkl"
# TODO #3: Select appropriate values for the Python variables below.
input_size = 50
hidden_size = 500
# Initialize the encoder and decoder, and set each to inference mode.
encoder = EncoderCNN(input_size)
encoder.eval()
decoder = DecoderRNN(input_size, hidden_size, num_layers=2)
decoder.eval()
# Load the trained weights.
encoder.load_state_dict(torch.load(os.path.join('./models', encoder_file)))
decoder.load_state_dict(torch.load(os.path.join('./models', decoder_file)))
# Move models to GPU if CUDA is available.
encoder.to(device)
decoder.to(device)
# -
# <a id='step4'></a>
# ## Step 4: Get Predicted Poses
# +
# Move image Pytorch Tensor to GPU if CUDA is available.
spectrogram = spectrogram.to(device)
# Obtain the embedded image features.
features = encoder(spectrogram).unsqueeze(1)
# Pass the embedded image features through the model to get predicted poses.
output = decoder.sample(features)
#print('example output:', output)
print('example output:', output[0].shape)
# -
# <a id='step5'></a>
# ## Step 5: Generate dance video from outputs
#
predictions_folder = "predictions"
# +
import subprocess
import glob
def video_from_poses(output):
    # Save one scatter plot per predicted pose, then stitch them into a video.
    for i in range(len(output)):  # 225 poses expected
        pose = output[i]
        plt.scatter(pose[:, 0], pose[:, 1])
        plt.savefig(predictions_folder + "/file%02d.png" % i)
        plt.close()
    os.chdir(predictions_folder)
    subprocess.call([
        'ffmpeg', '-framerate', '15', '-i', 'file%02d.png', '-pix_fmt', 'yuv420p',
        'video_name.mp4'
    ])
    for file_name in glob.glob("*.png"):
        os.remove(file_name)
    os.chdir("..")
# +
# 225 poses represented by 1x50 vectors...
# -
video_from_poses(output)
# watch example video
|
3_Inference.ipynb
|
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: 'Python 3.8.5 64-bit (''base'': conda)'
# name: python3
# ---
# # Common Questions
#
# Here, I answer questions from the book published [here](https://huyenchip.com/ml-interviews-book/contents/) in preparation for my interviews. I also include other questions.
# ## Vectors
#
# 1. Dot product
# 1. [E] What’s the geometric interpretation of the dot product of two vectors?
#     2. [E] Given a vector u, find a vector v of unit length such that the dot product of u and v is maximum.
# 2. Outer product
# 1. [E] Given two vectors a=[3,2,1] and b=[-1,0,1]. Calculate the outer product $a^T b$?
# 2. [M] Give an example of how the outer product can be useful in ML.
# 3. [E] What does it mean for two vectors to be linearly independent?
# 3. [M] Given two sets of vectors $A = a_1, a_2, ..., a_n$ and $B = b_1, b_2, ..., b_n$. How do you check that they share the same basis?
# 4. [M] Given n vectors, each of d dimensions. What is the dimension of their span?
# 5. Norms and metrics
#     1. [E] What's a norm? What is $L_0, L_1, L_2, L_\infty$?
# 2. [M] How do norm and metric differ? Given a norm, make a metric. Given a metric, can we make a norm?
# +
import numpy as np
import matplotlib.pyplot as plt
print("""1.1 Dot product finds the length of the projection of x onto y
""")
num_iter = 3
fig, axs = plt.subplots(1,num_iter)
for seed, ax in zip(range(num_iter), axs):
np.random.seed(seed)
n=2
x = np.random.uniform(0,1,n)
y = np.random.uniform(0,1,n)
# Dot product finds the length of the projection of x onto y
dot = np.sum(x.T*y) # or np.dot(x,y)
x_mag = np.sqrt(np.sum(np.square(x)))
y_mag = np.sqrt(np.sum(np.square(y)))
angle = np.arccos(dot / (x_mag * y_mag)) * 360 / (2 * np.pi)
ax.plot([0,x[0]], [0,x[1]], label='x')
ax.plot([0,y[0]], [0,y[1]], label='y')
ax.set_title(f"Dot:{round(dot,2)}, angle:{round(angle,2)}")
handles, labels = ax.get_legend_handles_labels()
fig.legend(handles, labels, loc='center right')
plt.tight_layout()
plt.show()
print("""1.2 The maximum dot product is found when the lines are parallel.
""")
print("""2.1 Calculate elementwise product (notated with "X⊗Y")
""")
x = np.array([3,2,1])
y = np.array([-1,0,1])
print('x', x), print('y', y)
print('X⊗Y =', np.multiply.outer(x.T,y))
print("""2.2 Cross products can be used to analyze pairwise correlations
""")
print("""3. Linearly independent vectors have dot(x,y)=0 because angle=90. In terms of eigenvectors/eigenvalues, if the eigenvalue of the matrix is zero, the eigenvector is linearly dependent.
""")
import numpy as np
matrix = np.array(
[
[0, 1 ,0 ,0],
[0, 0, 1, 0],
[0, 1, 1, 0],
[1, 0, 0, 1]
])
lambdas, V = np.linalg.eig(matrix.T)
# The linearly dependent row vectors
print("Dependent: ", matrix[lambdas == 0,:])
print("4. Confirm independence.")
print("5. The span is the same dimension as the basis. It is generated from linear combinations of the basis vectors.")
print("6. L0 reports the number of incorrect responses. For instance, if 1 answer is reported incorrect out of 5 questions, then the L0 is 1.")
print(" L1 is manhattan distance and is described as the sum of absolutes.")
print(" L2 is euclidean distance and is described as the square root of the sum of squares.")
print(" L-infinity reports the largest magnitud among each element of a vector. In the analogy of construction, by minimizing the L-infinity, we are reducing the cost of the most expensive building.")
print("""\nMetrics d(u,v) induced by a vector space norm has additional properties that are not true of general metrics, namely:
1. Translation Invariance: d(u+w, v+w) = d(u,v)
2. Scaling property: for any real number t, d(tu,tv) = |t| d(u,v)
Conversely, if a metric has the above properties, then d(u,0) is a norm. In other words, a metric is a function of two variables while a norm is a function of one variable.
""")
# -
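# The two induced-metric properties above are easy to verify numerically; a small NumPy sketch using the metric induced by the L2 norm:

```python
import numpy as np

def d(u, v):
    # Metric induced by the L2 norm: d(u, v) = ||u - v||_2
    return np.linalg.norm(u - v)

rng = np.random.default_rng(0)
u, v, w = rng.normal(size=(3, 4))
t = -2.5

# Translation invariance: d(u + w, v + w) == d(u, v)
assert np.isclose(d(u + w, v + w), d(u, v))

# Scaling property: d(t*u, t*v) == |t| * d(u, v)
assert np.isclose(d(t * u, t * v), abs(t) * d(u, v))
```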
# ## Matrices
#
# **1. Why do we say that matrices are linear transformations?**
#
# Matrices, when multiplied with a vector (for instance) cause a linear transformation on that vector.
#
# $$
# T(\mathbf{v}) = M \mathbf{v} = M \begin{bmatrix}x\\y\\\end{bmatrix} = \begin{bmatrix}a&b\\c&d\\\end{bmatrix} \begin{bmatrix}x\\y\\\end{bmatrix} = \begin{bmatrix}ax+by\\cx+dy\\\end{bmatrix}
# $$
#
# Matrices give us a powerful systematic way to describe a wide variety of transformations: they can describe rotations, reflections, dilations, and much more
#
# **2. What's the inverse of a matrix? Do all matrices have an inverse? Is the inverse of a matrix always unique?**
#
# $A^{-1} A = A A^{-1} = I$ describes a matrix $A$ that, when multiplied by its inverse $A^{-1}$, generates the identity matrix. Matrices are invertible when they have a nonzero determinant, nonzero eigenvalues, trivial nullspace (only zeros), and full rank (rank = dimension). By $A=AI=A(CB)=(AC)B=IB=B$, where $A$ and $B$ are square matrices with the same inverse $C$, the inverse of a matrix is always unique.
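# These invertibility criteria can be checked numerically; a quick sketch (the matrix here is an arbitrary example):

```python
import numpy as np

A = np.array([[2.0, 1.0],
              [1.0, 3.0]])  # det = 5, so A is invertible

A_inv = np.linalg.inv(A)

# A^{-1} A = A A^{-1} = I
assert np.allclose(A @ A_inv, np.eye(2))
assert np.allclose(A_inv @ A, np.eye(2))

# The criteria agree: nonzero determinant, full rank, no zero eigenvalues
assert not np.isclose(np.linalg.det(A), 0.0)
assert np.linalg.matrix_rank(A) == 2
assert not np.any(np.isclose(np.linalg.eigvals(A), 0.0))
```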
#
# **3. What does the determinant of a matrix represent?**
#
# The factor of (signed) volume deformation caused by the transformation. A determinant of zero "squashes" the parallelepiped; in other words, the matrix is singular.
#
# **4. What happens to the determinant of a matrix if we multiply one of its rows by a scalar $t\times R$ ?**
#
# * Multiplying a single row by a scalar $t$ multiplies the determinant by $t$; scaling the whole matrix gives $\det (tA) = t^n \det(A)$ for an $n \times n$ matrix
# * Also, if a matrix $A$ has a row that is all zeros, then $\det A = 0$
#
# **5. A $4 \times 4$ matrix has four eigenvalues $3,3,2,−1$. What can we say about the trace and the determinant of this matrix?**
#
# The trace is the sum of the eigenvalues: $3+3+2-1 = 7$.
# The determinant is the product of the eigenvalues: $3 \cdot 3 \cdot 2 \cdot (-1) = -18$.
#
# **6. Given the following matrix:**
# $$\begin{bmatrix}
# 1&4&-2\\
# -1&3&2\\
# 3&5&-6\\
# \end{bmatrix}$$
# **Without explicitly using the equation for calculating determinants, what can we say about this matrix’s determinant? Hint: rely on a property of this matrix to determine its determinant.**
#
# This matrix has dependent columns, so we know that the determinant is zero. This is true because a matrix whose column vectors are linearly dependent will have a zero row show up in its reduced row echelon form, which means that a parameter in the system can be of any value you like.
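# We can confirm this numerically: the third column is $-2$ times the first, and the determinant is zero.

```python
import numpy as np

M = np.array([[ 1, 4, -2],
              [-1, 3,  2],
              [ 3, 5, -6]], dtype=float)

# Column 3 is -2 times column 1, so the columns are linearly dependent...
assert np.allclose(M[:, 2], -2 * M[:, 0])

# ...and the determinant is (numerically) zero: the matrix is singular
assert np.isclose(np.linalg.det(M), 0.0, atol=1e-9)
```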
#
# **7. What's the difference between the covariance matrix $A^T A$ and the Gram matrix $AA^T$ ? Given $A \in R^{n\times m}$ and $b \in R^n$.**
#
# Given $A \in R^{n \times m}$: $A A^T$ is an $n \times n$ matrix
#
# $A^T A$ is an $m \times m$ matrix and (after centering the columns of $A$) resembles the covariance matrix.
#
# **i. Find $x$ such that: $Ax=b$ .**
#
# $Ax = b$
#
# $A^{-1} A x = A^{-1} b$
#
# $I x = A^{-1} b$
#
# $x = A^{-1} b$
#
# **ii. When does this have a unique solution?**
#
# When A is invertible.
#
# **iii. Why is it when A has more columns than rows, Ax=b has multiple solutions?**
#
# With more columns than rows, the system is underdetermined: $A$ has a nontrivial null space, so if $x_0$ solves $Ax_0=b$, then $x_0+z$ is also a solution for any $z$ with $Az=0$.
#
# **iv. Given a matrix A with no inverse. How would you solve the equation Ax=b ? What is the pseudoinverse and how to calculate it?**
#
# Use the Moore-Penrose pseudoinverse: $x = A^+ b$ gives the minimum-norm least-squares solution. When $A$ has full column rank, $A^+ = (A^T A)^{-1} A^T$; in general it is computed from the SVD, $A^+ = V D^+ U^T$. See also: https://www.omnicalculator.com/math/pseudoinverse
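# A small NumPy sketch of the pseudoinverse route (the data here are made up for illustration):

```python
import numpy as np

# A is 4x2, so it is not square and has no inverse
A = np.array([[1.0, 0.0],
              [1.0, 1.0],
              [1.0, 2.0],
              [1.0, 3.0]])
b = np.array([0.9, 2.1, 2.9, 4.2])

# Moore-Penrose pseudoinverse; for full column rank, A+ = (A^T A)^{-1} A^T
A_pinv = np.linalg.pinv(A)
x = A_pinv @ b  # least-squares solution of Ax = b

# It matches the normal-equations solution...
x_ne = np.linalg.solve(A.T @ A, A.T @ b)
assert np.allclose(x, x_ne)

# ...and NumPy's least-squares solver
x_ls, *_ = np.linalg.lstsq(A, b, rcond=None)
assert np.allclose(x, x_ls)
```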
#
# **8. Derivative is the backbone of gradient descent.**
#
# **i. What does derivative represent?**
#
# The instantaneous rate of change of a function with respect to its input.
#
# **ii. What’s the difference between derivative, gradient, and Jacobian?**
#
# Gradient: multivariate derivatives
#
# $$\triangledown f = \begin{bmatrix}
# \frac{\delta f(x_1, x_2, x_3)}{\delta x_1} & \frac{\delta f(x_1, x_2, x_3)}{\delta x_2} & \frac{\delta f(x_1, x_2, x_3)}{\delta x_3} \\
# \end{bmatrix}$$
#
# Jacobian: vector-valued derivatives
#
# $$J = \begin{bmatrix}
# \frac{d f_1}{d x_1} & ... & \frac{d f_1}{d x_n}\\
# \vdots & \ddots & \vdots\\
# \frac{d f_n}{d x_1} & ... & \frac{d f_n}{d x_n}\\
# \end{bmatrix}$$
#
# As a note, the Hessian is the Jacobian of the gradient (the matrix of second derivatives).
#
# **9. Say we have weights $w \in R^{d \times m}$ and a mini-batch $x$ of $n$ elements, each element of shape $1 \times d$, so that $x \in R^{n \times d}$. We have the output $y=f(x;w)=xw$. What's the dimension of the Jacobian $\frac{\partial y}{\partial x}$?**
#
#
#
# **10. Given a very large symmetric matrix $A \in R^{1M \times 1M}$ that doesn't fit in memory, and a function $f$ that can quickly compute $f(x)=Ax$ for $x \in R^{1M}$, find the unit vector $x$ so that $x^T A x$ is minimal. Hint: can you frame it as an optimization problem and use gradient descent to find an approximate solution?**
# ## Linear regression
#
# **1. Derive the least squares solution.**
#
# $$\begin{align*}
# RSS &= (Y-X\beta)^T (Y-X\beta)\\
# &= (Y^T - \beta^T X^T)(Y-X\beta)\\
# &= Y^T Y
# - Y^T X \beta
# - \beta^T X^T Y
# + \beta^T X^T X \beta\\
# \end{align*}
# $$
#
# Differentiate wrt $\beta$ to minimize...
# $$\begin{align*}
# 0 &= - X^T Y
# - X^T Y
# + 2X^T X \beta\\
# &= -2 X^T Y + 2X^T X \beta\\
# &= - X^T Y + X^T X \beta\\
# &= X^T ( X\beta - Y )\\
# \end{align*}
# $$
#
# This is a common solution. But, to solve for $\beta$, we can backtrack a little...
#
# $$\begin{align*}
# 0 &= - X^T Y + X^T X \beta\\
# \beta &= (X^T X)^{-1} X^T Y\\
# \end{align*}
# $$
#
# **2. Prove that** $X$ and $\epsilon$ **are independent**
#
# We do this by proving orthogonality, $X \perp \epsilon$. In other words, $X^T \epsilon = 0$, where $X$ is $n\times p$ and $\epsilon$ is $n \times 1$.
#
# $$\begin{align*}
# X^T \epsilon &= X^T (I - H) y\\
# &= X^T y - X^T H y\\
# &= X^T y - X^T X (X^T X)^{-1} X^T y\\
# &= X^T y - X^T y\\
# &= 0\\
# \end{align*}$$
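# Both the closed-form estimator and the orthogonality of residuals can be checked on simulated data (a sketch; the coefficients are arbitrary):

```python
import numpy as np

rng = np.random.default_rng(42)
n, p = 200, 3
X = rng.normal(size=(n, p))
beta_true = np.array([2.0, -1.0, 0.5])
y = X @ beta_true + rng.normal(scale=0.1, size=n)

# Closed-form least squares: beta = (X^T X)^{-1} X^T y
beta_hat = np.linalg.solve(X.T @ X, X.T @ y)

# Residuals are orthogonal to the column space of X: X^T e = 0
e = y - X @ beta_hat
assert np.allclose(X.T @ e, 0.0, atol=1e-8)

# And the estimate recovers the true coefficients up to noise
assert np.allclose(beta_hat, beta_true, atol=0.05)
```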
#
# **While here, we should also prove that** $\epsilon$ and $\hat{y}$ **are independent**
#
# $$\begin{align*}
# Cov(\epsilon, \hat{y}) &= E[\epsilon \hat{y}^T]\\
# &= E[(I - H) y y^T H]\\
# &= (I - H)(X\beta \beta^T X^T + \sigma_\epsilon^2 I) H\\
# &= \sigma_\epsilon^2 (I - H) H \qquad \text{since } (I-H)X = 0\\
# &= \sigma_\epsilon^2 (H - HH) = 0\\
# \end{align*}$$
#
# assuming $E[\epsilon \epsilon^T] = \sigma_\epsilon^2 I$ and knowing that $H$ is idempotent, $HH = H$.
#
# **3. Prove ANOVA** $SST = SSE + SSR$
#
# $$\begin{align*}
# SST &= \sum_{i=1}^n (y_i - \bar{y})^2\\
# &= \sum_{i=1}^n (y_i - \hat{y}_i + \hat{y}_i - \bar{y})^2\\
# &= \sum_{i=1}^n (y_i - \hat{y}_i)^2 + 2 \sum_{i=1}^n (y_i - \hat{y}_i) (\hat{y}_i - \bar{y}) + \sum_{i=1}^n (\hat{y}_i - \bar{y})^2\\
# &= SSE + SSR + 2 \sum_{i=1}^n (y_i - \hat{y}_i) (\hat{y}_i - \bar{y})\\
# \end{align*}$$
#
# We know $2 \sum_{i=1}^n (y_i - \hat{y}_i) (\hat{y}_i - \bar{y}) = 0$ because
#
# $\sum_{i=1}^n (y_i - \hat{y}_i) (\hat{y}_i - \bar{y}) = \sum_{i=1}^n \hat{y}_i (y_i - \hat{y}_i) - \bar{y}_i \sum_{i=1}^n (y_i - \hat{y}_i) = 0 - 0 = 0$
#
# We know $$R^2 = \frac{SSR}{SST} = 1 - \frac{SSE}{SST}$$
#
# As a note, the adjusted $R^2$ is $$R^2_{adj} = 1 - \frac{SSE/(N-p-1)}{SST/(N-1)} = 1 - \frac{(1 - R^2)(N-1)}{(N-p-1)}$$
#
# **4. Given the variance of the residuals** ($\hat{\sigma}^2$), **find** $RSS$:
#
# $$RSE = \sqrt{\frac{RSS}{N-p-1}}, \qquad \hat{\sigma}^2 = RSE^2 \implies RSS = \hat{\sigma}^2 (N-p-1)$$
#
# **5. Given F, p, and n, find** $R^2$
#
# $$
# \begin{align*}
# F &= \frac{SSR/p}{SSE/(n-p-1)}\\
# &= \frac{(SST - SSE)/p}{SSE/(n-p-1)}\\
# F \frac{p}{n-p-1}&= \frac{SST-SSE}{SSE}\\
# 1 + F \frac{p}{n-p-1}&= \frac{SST}{SSE}\\
# 1 - (1 + F \frac{p}{n-p-1})^{-1}&= 1-\frac{SSE}{SST} = R^2\\
# \end{align*}
# $$
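# We can verify this identity on a simulated fit (a sketch; the data are made up):

```python
import numpy as np

rng = np.random.default_rng(0)
n, p = 100, 2
X = rng.normal(size=(n, p))
y = X @ np.array([1.0, -2.0]) + rng.normal(size=n)

# Fit OLS with an intercept
Xd = np.column_stack([np.ones(n), X])
beta = np.linalg.solve(Xd.T @ Xd, Xd.T @ y)
y_hat = Xd @ beta

SSE = np.sum((y - y_hat) ** 2)
SST = np.sum((y - y.mean()) ** 2)
SSR = SST - SSE
R2 = SSR / SST
F = (SSR / p) / (SSE / (n - p - 1))

# Recover R^2 from F, p, and n using the identity above
R2_from_F = 1 - 1 / (1 + F * p / (n - p - 1))
assert np.isclose(R2, R2_from_F)
```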
#
# **6. In the R output below, how are the terms calculated?**
from utils import disp
disp('example_OLS_output.png')
# The **estimate** is calculated through ordinary least squares (closed form derivation shown above).
#
# The **std. error** is $\sqrt{\widehat{Var}(\hat{\beta}_j)}$, where $\hat{\beta}_j$ is the LS estimator of $\beta_j$ and $Var(\hat{\beta}) = \sigma_\epsilon^2 (X^\prime X)^{-1}$ (proof below) is the variability of the coefficients (as new data points are added). We use $\widehat{Var}$ instead of $Var$ because we are estimating the sampling variability; quantities like the gaussian noise variance can be unknown and must therefore be estimated.
#
# The **t-value** is the **estimate** divided by the **std. error**
#
# The **p-value** $Pr(>|t|)$ is a table lookup; We find the p-value on the t distribution with DF $N-p-1$ and **t-value**.
#
# The **residual standard error** is $RSE = \sqrt{\frac{RSS}{N-p-1}}$. Note, we can find $RSS$ using the information on this line. Additionally, if we square this value, we receive the variance of the residuals according to $\hat{\sigma}^2 = RSE^2$.
#
# The **R-square** value is described as the total amount of variance explained by the model, or $SSR / SST$.
#
# The **adjusted R-square** is calculated as a function of the $R^2$: $R^2_{adj} = 1 - \frac{(1 - R^2)(N-1)}{N-p-1}$.
#
# The **F-statistic** is a "global" test that checks if at least one of your coefficients are nonzero.
#
# Because $F \sim F_{p, N - p - 1}$, the p-value is estimated as $Pr(F_{p, N - p - 1} \geq F)$.
#
# **7. Prove** $Var[\hat{\beta}] = \sigma_\epsilon^2 (X^\prime X)^{-1}$
#
# We know that
#
# $$\begin{align*}
# Var(X) = E[Var(X|Y)] + Var[E(X|Y)]
# \end{align*}$$
#
# $$Var(\hat{\beta}) = E[Var(\hat{\beta}|X)] + Var[E(\hat{\beta}|X)]$$
#
# Knowing OLS is unbiased, $E(\hat{\beta}|X) = \beta$, and therefore $Var[E(\hat{\beta}|X)] = 0$ and that $\beta$ is a constant so
#
# $$\begin{align*}
# Var(\hat{\beta}) &= E[Var(\hat{\beta}|X)]\\
# &= E[\sigma^2 (X^\prime X)^{-1}]
# \end{align*}$$
#
# To prove this last step,
#
# $$
# \textrm{Var}(\hat{\mathbf{\beta}}) =
# (\mathbf{X}^{\prime} \mathbf{X})^{-1} \mathbf{X}^{\prime}
# \; \sigma^2 \mathbf{I} \; \mathbf{X} (\mathbf{X}^{\prime} \mathbf{X})^{-1}
# = \sigma^2 (\mathbf{X}^{\prime} \mathbf{X})^{-1}
# $$
#
# Using this, Let $\mathbf{x}_j$ be the $j^{th}$ column of $\mathbf{X}$, and $\mathbf{X}_{-j}$ be the $\mathbf{X}$ matrix with the $j^{th}$ column removed.
#
# $$
# \textrm{Var}(\hat{\mathbf{\beta}}_j) =
# \sigma^2 [\mathbf{x}_j^{\prime} \mathbf{x}_j - \mathbf{x}_j^{\prime}
# \mathbf{X}_{-j} (\mathbf{X}_{-j}^{\prime} \mathbf{X}_{-j})^{-1}
# \mathbf{X}_{-j}^{\prime} \mathbf{x}_j]^{-1}
# $$
#
# From here, Let $\mathbf{x_1}$ be the $1$st column of $X$. Let $X_{-1}$ be the matrix $X$ with the $1$st column removed.
#
# Consider the matrices:
#
# $$
# \begin{align*}
# A &= \mathbf{x_1}'\mathbf{x_1}\quad \quad &\text{1 by 1 matrix}\\
# B &= \mathbf{x_1}'X_{-1} \quad &\text{1 by p-1 matrix}\\
# C &= X_{-1}'\mathbf{x_1} & \text{p-1 by 1 matrix} \\
# D &= X_{-1}'X_{-1} & \text{p-1 by p-1 matrix}
# \end{align*}
# $$
#
# Observe that:
#
# $$X'X = \begin{bmatrix}A & B \\C & D \end{bmatrix}$$
#
# By the matrix inversion lemma (and under some existence conditions):
#
# $$\left(X'X \right)^{-1} = \begin{bmatrix}\left(A - BD^{-1}C \right)^{-1} & \ldots \\ \ldots & \ldots \end{bmatrix}$$
#
# Notice the 1st row, 1st column of $(X'X)^{-1}$ is given by the [Schur complement][1] of block $D$ of the matrix $X'X$
#
# $$\left(A - BD^{-1}C \right)^{-1}$$
#
#
# [1]: https://en.wikipedia.org/wiki/Schur_complement
# **8. Derive the ridge regression beta in closed form**
#
# It suffices to modify the loss function by adding the penalty. In matrix terms, the initial quadratic loss function becomes
# $$ (Y - X\beta)^{T}(Y-X\beta) + \lambda \beta^T\beta$$
#
# Expanding the RSS
#
# $$\begin{align*}
# RSS &= (Y-X\beta)^T (Y-X\beta) + \lambda \beta^T\beta\\
# &= (Y^T - \beta^T X^T)(Y-X\beta) + \lambda \beta^T\beta\\
# &= Y^T Y
# - Y^T X \beta
# - \beta^T X^T Y
# + \beta^T X^T X \beta
# + \lambda \beta^T\beta\\
# \end{align*}
# $$
#
# Differentiate wrt $\beta$ to minimize...
# $$\begin{align*}
# 0 &= - X^T Y
# - X^T Y
# + 2X^T X \beta
# + 2 \lambda \beta\\
# &= -2 X^T Y + 2X^T X \beta + 2 \lambda \beta\\
# X^T Y &= (X^T X + \lambda I) \beta\\
# \end{align*}
# $$
#
# Therefore, the ridge estimator is
#
# $$\beta_{ridge} = (X^T X + \lambda I)^{-1} X^T Y$$
#
# As a note, assuming orthonormality of the design matrix implies $X^T X = I = (X^T X)^{-1}$. So, the ridge estimator can be defined as $\hat{\beta}(\lambda)_{ridge} = (1 + \lambda)^{-1} \hat{\beta}_{OLS}$.
#
# Also, bias increases with $\lambda$ (coefficients are shrunk toward zero) and variance decreases with $\lambda$. So, what happens to the MSE of ridge?
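# The closed form and the shrinkage behaviour can be checked directly (a sketch on simulated data):

```python
import numpy as np

rng = np.random.default_rng(1)
n, p = 100, 3
X = rng.normal(size=(n, p))
y = X @ np.array([1.0, -1.0, 2.0]) + rng.normal(size=n)

def ridge(X, y, lam):
    # beta_ridge = (X^T X + lambda I)^{-1} X^T y
    return np.linalg.solve(X.T @ X + lam * np.eye(X.shape[1]), X.T @ y)

beta_ols = ridge(X, y, 0.0)  # lambda = 0 recovers OLS
for lam in [0.1, 1.0, 10.0, 100.0]:
    # Coefficients shrink toward zero as lambda grows
    assert np.linalg.norm(ridge(X, y, lam)) < np.linalg.norm(beta_ols)
```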
#
# **9. Compare the MSE of ridge regression and OLS**
#
# OLS minimizes the in-sample (training) MSE, so ridge can never fit the training data better. For the estimator itself, though, MSE = bias$^2$ + variance: ridge trades a small increase in bias for a reduction in variance, and there always exists some $\lambda > 0$ for which ridge has lower estimator (and test) MSE than OLS.
# ## Dimensionality reduction
#
# **1. Why do we need dimensionality reduction?**
#
# Remove collinearity & multicollinearity, and save storage & computation time.
#
# **2. Eigendecomposition is a common factorization technique used for dimensionality reduction. Is the eigendecomposition of a matrix always unique?**
#
# No. Eigenvectors are only defined up to scale (and sign), and if an eigenvalue is repeated, the basis chosen for its eigenspace is not unique.
#
# **3. Name some applications of eigenvalues and eigenvectors.**
#
# Eigenvalues and eigenvectors appear in PCA (eigenvectors of the covariance matrix), spectral clustering, PageRank, and stability analysis. Singular value decomposition (SVD), $A = U D V^T$, is more general than eigendecomposition: every real matrix has an SVD
# +
# Singular-value decomposition
import numpy as np
from scipy.linalg import svd
# define a matrix
A = np.array([[1, 2], [3, 4], [5, 6]])
A = A - np.mean(A,0)
print("A\n",A)
# Eigendecomposition
co=np.cov(A.T)
[D,UI]=np.linalg.eigh(co)
print("UI",UI)
# SVD
U, s, VT = svd(A)
print("U, left-singular vectors of A\n", U)
print("Singular values of original matrix A\n", s)
print("V, right-singular vectors of A\n", VT)
# -
# **4. We want to do PCA on a dataset of multiple features in different ranges. For example, one is in the range 0-1 and one is in the range 10 - 1000. Will PCA work on this dataset?**
#
# Normalization is important in PCA since it is a variance maximizing exercise. On larger scales, the variance is naturally larger. So, the wrong feature combinations might be chosen.
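# A small sketch of this failure mode: with one feature in [0, 1] and one in [10, 1000], the unscaled first principal component is dominated by the large-range feature.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 500
X = np.column_stack([rng.uniform(0, 1, n),       # feature in [0, 1]
                     rng.uniform(10, 1000, n)])  # feature in [10, 1000]

def first_pc(X):
    # PCA via eigendecomposition of the covariance matrix
    Xc = X - X.mean(axis=0)
    vals, vecs = np.linalg.eigh(np.cov(Xc.T))
    return vecs[:, np.argmax(vals)]

# Without scaling, the first PC points almost entirely along feature 2
pc_raw = first_pc(X)
assert abs(pc_raw[1]) > 0.99

# After standardization each feature has unit variance, so neither dominates
X_std = (X - X.mean(axis=0)) / X.std(axis=0)
pc_std = first_pc(X_std)
```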
#
# **5. Under what conditions can one apply eigendecomposition? What about SVD?**
#
# https://math.stackexchange.com/a/365020/752105
#
# **i. What is the relationship between SVD and eigendecomposition?**
#
# **ii. What’s the relationship between PCA and SVD?**
#
#
# **6. How does t-SNE (T-distributed Stochastic Neighbor Embedding) work? Why do we need it?**
#
# https://towardsdatascience.com/t-distributed-stochastic-neighbor-embedding-t-sne-bb60ff109561
#
# * An unsupervised, randomized algorithm, used only for visualization.
# * Applies a non-linear dimensionality reduction technique where the focus is on keeping very similar data points close together in lower-dimensional space.
# * Preserves the local structure of the data.
# * Uses a heavy-tailed Student t-distribution (rather than a Gaussian) to compute the similarity between two points in the low-dimensional space, which helps address the crowding and optimization problems.
# * Outliers do not impact t-SNE.
#
# Step 1: Find the pairwise similarity between nearby points in a high dimensional space.
#
# Step 2: Map each point in high dimensional space to a low dimensional map based on the pairwise similarity of points in the high dimensional space.
#
# Step 3: Find a low-dimensional data representation that minimizes the mismatch between Pᵢⱼ and qᵢⱼ using gradient descent based on Kullback-Leibler divergence(KL Divergence)
#
# Step 4: Use Student-t distribution to compute the similarity between two points in the low-dimensional space.
#
# PCA is deterministic, whereas t-SNE is not deterministic and is randomized.
# t-SNE tries to map only local neighbors whereas PCA is just a diagonal rotation of our initial covariance matrix and the eigenvectors represent and preserve the global properties
#
# ## Statistics
#
# **1. Explain frequentist vs. Bayesian statistics.**
#
# I have misplaced my phone somewhere in the home. I can use the phone locator on the base of the instrument to locate the phone and when I press the phone locator the phone starts beeping.
#
# Problem: Which area of my home should I search?
#
# **Frequentist Reasoning**
#
# I can hear the phone beeping. I also have a mental model which helps me identify the area from which the sound is coming. Therefore, upon hearing the beep, I infer the area of my home I must search to locate the phone.
#
# **Bayesian Reasoning**
#
# I can hear the phone beeping. Now, apart from a mental model which helps me identify the area from which the sound is coming from, I also know the locations where I have misplaced the phone in the past. So, I combine my inferences using the beeps and my prior information about the locations I have misplaced the phone in the past to identify an area I must search to locate the phone.
#
# So, prior beliefs ($f(p)$) get updated with new data! This follows human thinking! However, it is sometimes hard to define the priors.
#
# **2. Given the array , find its mean, median, variance, and standard deviation.**
#
# mean $\bar{x} = \frac{1}{n} \sum_{i=1}^n x_i$
#
# variance $s^2 = \frac{1}{n-1} \sum_{i=1}^n (x - \bar{x})^2$
#
# **3. When should we use median instead of mean? When should we use mean instead of median?**
#
# The median is more robust to outliers, so prefer it for skewed data. The mean is tractable (it is differentiable and easy to work with analytically) and uses all of the data.
#
# **4. What is a moment of function? Explain the meanings of the zeroth to fourth moments.**
#
# The $n$th moment of a distribution about a number $c$ is the expected value of the $n$th power of the deviations about that number, $E((X-c)^n)$. The moment generating function $M_X(t) = E(e^{tX})$ is a good trick for calculating the moments: its $n$th derivative at $t=0$ gives the $n$th moment.
#
# n = 0, moment = 1 because the AUC of PDF must be 1.
#
# n = 1 and centered about origin, $E(X)$
#
# n = 2 and centered about mean, the variance $Var(X) = E((X-\mu)^2)$
#
# n = 3 and centered about mean, the skewness $E((X-\mu)^3)$
#
# n = 4 and centered about mean, the kurtosis $E((X-\mu)^4)$
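# These moments can be estimated from samples; for a normal distribution the standardized skewness is ~0 and the kurtosis is ~3 (a sketch):

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.normal(loc=2.0, scale=1.5, size=200_000)

mu = x.mean()                               # first moment about the origin
var = np.mean((x - mu) ** 2)                # second central moment
skew = np.mean((x - mu) ** 3) / var ** 1.5  # standardized third moment
kurt = np.mean((x - mu) ** 4) / var ** 2    # standardized fourth moment

assert abs(mu - 2.0) < 0.05
assert abs(var - 1.5 ** 2) < 0.1
assert abs(skew) < 0.05       # normal: skewness 0
assert abs(kurt - 3.0) < 0.2  # normal: kurtosis 3
```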
#
# **5. Are independence and zero covariance the same? Give a counterexample if not.**
#
# Zero covariance does not imply independence (independence, however, does imply zero covariance). For instance, let $X$ be a random variable that is $−1$ or $+1$ with probability $0.5$. Then let $Y$ be a random variable such that $Y=0$ if $X=-1$, and $Y$ is randomly $-1$ or $+1$ with probability $0.5$ if $X=1$. Clearly, $X$ and $Y$ are dependent (since knowing $Y$ allows me to perfectly know $X$), but their covariance is zero. They both have zero mean, and
#
# $$E[XY] = \begin{align*}
# & (-1) * 0 * P(X=-1)\\
# &+ 1 * 1 * P(X=1, Y=1)\\
# &+ 1 * (-1) * P(X=1, Y=-1)\\
# \end{align*} = 0
# $$
#
# Or more generally, take any distribution $P(X)$ and any $P(Y|X)$ such that $P(Y=a|X)=P(Y=−a|X)$ for all $X$ (i.e., a joint distribution that is symmetric around the $x$ axis), and you will always have zero covariance. But you will have non-independence whenever $P(Y|X)\neq P(Y)$; i.e., the conditionals are not all equal to the marginal. Or ditto for symmetry around the $y$ axis.
#
# Another example: take a random variable $X$ with $EX=0$ and $EX^3=0$, e.g. a normal random variable with zero mean, and take $Y=X^2$. It is clear that $X$ and $Y$ are related, but
#
# $$cov(X,Y) = E[XY] - E[X]E[Y] = E[X^3] = 0$$
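# This second counterexample is easy to confirm by simulation (a sketch):

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.normal(size=500_000)
y = x ** 2  # y is a deterministic function of x, so clearly dependent

# Yet the sample covariance is (numerically) zero, since cov(X, Y) = E[X^3] = 0
cov_xy = np.cov(x, y)[0, 1]
assert abs(cov_xy) < 0.05
```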
# Summary of ML implementations on Kaggle: https://www.kaggle.com/shivamb/data-science-glossary-on-kaggle
#
# **6. Bayes' rule**
#
# Let's say the response variable is either `low` or `high`. The population can have either `default (D)` or `not default (ND)`.
#
# We know $P(high|D) = 0.85$ and $P(high|ND) = 0.1$, with prior $P(D) = P(ND) = 0.5$.
#
# **What is the probability of default given high?**
#
# $$
# P_D(high) = P(D|high) = \frac{P(high|D) * P(D)}{P(high|D)P(D) + P(high|ND)P(ND)} = \frac{0.85 * 0.5}{0.85 * 0.5 + 0.1 * 0.5}
# $$
#
# More generally, we can write the posterior continuous probability as:
#
# $$
# Pr(Y=k|X=x) = \frac{P(X|k) P(k)}{\sum_{k'} P(k') P(X|k')}
# $$
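# Plugging in the numbers from the example above:

```python
# P(high|D) = 0.85, P(high|ND) = 0.10, equal priors P(D) = P(ND) = 0.5
p_high_given_d = 0.85
p_high_given_nd = 0.10
p_d = p_nd = 0.5

# Bayes' rule for the posterior probability of default given `high`
posterior = (p_high_given_d * p_d) / (p_high_given_d * p_d + p_high_given_nd * p_nd)
print(round(posterior, 4))  # 0.8947
```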
#
# **With prior of 1/2 what is posterior probability?**
#
# $P(k) = 1/2 = P(1) = P(2)$
#
# If the $P(k)$ are the same (as we see), then the posterior update is simply the ratio of the probabilities. So, you'd classify the point where the density is higher.
# In other words, instead of calculating the posterior $p_k(x)$, $k \in C$, we can simply compare them and select the class $k$ that maximizes $p_k(x)$
#
# $p_1(x) = \frac{\pi_1 f_1(x)}{f(x)}$
#
# $p_2(x) = \frac{\pi_2 f_2(x)}{f(x)}$
#
# Taking the ratio eliminates the $f(x)$ and makes the computation simpler.
#
# If you simplify this form (where $f_k(x) \sim N(\mu_k, \sigma^2)$),
#
# $$\ln (\frac{P_1(x)}{P_2(x)}) = \ln (\frac{\pi_1 f_1(x)}{\pi_2 f_2(x)}) = ...$$
#
# **Show the decision boundary is $x = \frac{\mu_1 + \mu_2}{2}$**
#
# $$\begin{align*}
# \delta_1(x) &= \delta_2(x)\\
# \frac{\mu_1 x}{\sigma^2} - \frac{\mu_1^2}{2 \sigma^2} &= \frac{\mu_2 x}{\sigma^2} - \frac{\mu_2^2}{2 \sigma^2}\\
# x &= \frac{\mu_1 + \mu_2}{2}\\
# \end{align*}$$
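# A quick check that the two discriminants cross exactly at the midpoint of the means (a sketch with arbitrary values):

```python
import numpy as np

mu1, mu2, sigma = -1.0, 3.0, 1.5

def delta(x, mu, sigma):
    # Linear discriminant for equal priors and shared variance
    return mu * x / sigma**2 - mu**2 / (2 * sigma**2)

x_star = (mu1 + mu2) / 2  # claimed decision boundary

# delta_1(x*) == delta_2(x*) exactly at the midpoint
assert np.isclose(delta(x_star, mu1, sigma), delta(x_star, mu2, sigma))
```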
#
# As a note, if $\sigma$ is the same across groups, this extends to $i = 2, ..., m$ groups.
|
HopML/Common Interview Questions.ipynb
|
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: amap_env
# language: python
# name: amap_env
# ---
# # Introduction to the `BrainGlobeAtlas` class
# ## 0. Creating a `BrainGlobeAtlas` object and listing available options
# To instantiate a `BrainGlobeAtlas` object, we need to pass it the atlas name. The first time we use a given atlas, a version of its files will be downloaded from the [remote GIN repository](http://gin.g-node.org/brainglobe/atlases) and stored on your local machine (by default, in .../Users/username/.brainglobe):
# +
from bg_atlasapi import BrainGlobeAtlas
from pprint import pprint
bg_atlas = BrainGlobeAtlas("allen_mouse_100um", check_latest=False)
# -
# To know what atlases are available through BrainGlobe, we can use the `show_atlases` function (we need to be online):
from bg_atlasapi import show_atlases
show_atlases()
# ## 1. Using a `BrainGlobe` atlas
# A BrainGlobe atlas is a convenient API for interacting with an anatomical atlas. BrainGlobe atlases contain:
# * Metadata
# * Reference anatomical stack
# * Region annotation stack
# * Hemisphere annotation stack
# * Description of the region hierarchy
# * Meshes for the regions
# ### 1.0 Metadata
# All atlases have a standard set of metadata describing their source, species, resolution, etc:
bg_atlas.metadata
# ### 1.1 Anatomical, annotation and hemispheres stack
from matplotlib import pyplot as plt
# Anatomical reference:
# +
space = bg_atlas.space
stack = bg_atlas.reference
f, axs = plt.subplots(1,3, figsize=(12, 3))
for i, (plane, labels) in enumerate(zip(space.sections, space.axis_labels)):
axs[i].imshow(stack.mean(i), cmap="gray")
axs[i].set_title(f"{plane.capitalize()} view")
axs[i].set_ylabel(labels[0])
axs[i].set_xlabel(labels[1])
# -
# Annotations stack:
# +
space = bg_atlas.space
stack = bg_atlas.annotation
f, axs = plt.subplots(1,3, figsize=(12, 3))
for i, (plane, labels) in enumerate(zip(space.sections, space.axis_labels)):
axs[i].imshow(stack.max(i), cmap="gray")
axs[i].set_title(f"{plane.capitalize()} view")
axs[i].set_ylabel(labels[0])
axs[i].set_xlabel(labels[1])
# +
space = bg_atlas.space
stack = bg_atlas.hemispheres
f, axs = plt.subplots(1,3, figsize=(12, 3))
for i, (plane, labels) in enumerate(zip(space.sections, space.axis_labels)):
axs[i].imshow(stack.max(i), cmap="gray")
axs[i].set_title(f"{plane.capitalize()} view")
axs[i].set_ylabel(labels[0])
axs[i].set_xlabel(labels[1])
# -
# ### 1.2 Regions hierarchy
# The atlas comes with the description of a hierarchy of brain structures. To have an overview:
bg_atlas.structures
# The structures attribute is a custom dictionary that can be queried by region number or acronym, and contains all the information for a given structure:
pprint(bg_atlas.structures["root"])
# In particular, the `structure_id_path` key contains a list description of the path in the hierarchy up to a particular region, and can be used for queries on the hierarchy.
bg_atlas.structures["CH"]["structure_id_path"]
# We can use the `bg_atlas.get_structure_descendants` and `bg_atlas.get_structure_ancestors` methods to explore the hierarchy:
bg_atlas.get_structure_descendants("VISC")
bg_atlas.get_structure_ancestors("VISC6a")
# ---
# **NOTE**:
# the levels of the hierarchy depend on the underlying atlas, so we cannot ensure the quality and consistency of the hierarchy tree.
# ---
# There is a higher-level description of the structures hierarchy, built using the [treelib](https://treelib.readthedocs.io/en/latest/) package, available as:
bg_atlas.structures.tree
# For most applications, though, the methods described above and the list path of each region should be enough to query the hierarchy without additional layers of complication.
# ### 1.3 Region masks
# Sometimes, we might want the mask for a region that is not labelled in the annotation stack because all of its voxels carry the id of some lower-level region of the hierarchy (concretely, if the brain is divided into hindbrain, midbrain, and forebrain, `annotation == root_id` will be all False).
#
# To get the mask for a region, simply:
stack = bg_atlas.get_structure_mask(997)
# +
space = bg_atlas.space
f, axs = plt.subplots(1,3, figsize=(12, 3))
for i, (plane, labels) in enumerate(zip(space.sections, space.axis_labels)):
axs[i].imshow(stack.max(i), cmap="gray")
axs[i].set_title(f"{plane.capitalize()} view")
axs[i].set_ylabel(labels[0])
axs[i].set_xlabel(labels[1])
# -
# ### 1.4 Region meshes
# If we need to access the structure meshes, we can either query for the file (e.g., if we need to load the file through some library like `vedo`):
bg_atlas.meshfile_from_structure("CH")
# Or directly obtain the mesh, as a mesh object of the `meshio` library:
bg_atlas.mesh_from_structure("CH")
# ## 2 Query the `BrainGlobeAtlas`
# ### 2.0 Query for structures:
# A very convenient feature of the `BrainGlobeAtlas` API is the simplicity of querying for the identity of the structure or the hemisphere at a given location, either from stack indexes or space coordinates, and even cutting the hierarchy at some higher level:
# +
# Ask for identity of some indexes in the stack:
print("By index:", bg_atlas.structure_from_coords((50, 40, 30),
as_acronym=True))
# Now give coordinates in microns
print("By coordinates:", bg_atlas.structure_from_coords((5000, 4000, 3000),
as_acronym=True,
microns=True))
# Now cut hierarchy at some level
print("Higher hierarchy level:", bg_atlas.structure_from_coords((5000, 4000, 3000),
as_acronym=True,
microns=True,
hierarchy_lev=2))
# -
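# Under the hood, micron coordinates are mapped to stack indexes by dividing by the atlas resolution (available as `bg_atlas.resolution`). A minimal sketch of that conversion, assuming a hypothetical 100 µm isotropic resolution (under that assumption, (5000, 4000, 3000) µm maps to indexes (50, 40, 30), matching the pairs used above):

# +
resolution_um = (100, 100, 100)  # hypothetical; the real value is bg_atlas.resolution
coords_um = (5000, 4000, 3000)

indexes = tuple(int(c / r) for c, r in zip(coords_um, resolution_um))
print(indexes)  # (50, 40, 30)
# -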
# ### 2.1 Query for hemispheres
# A very similar method can be used for hemispheres: 0 corresponds to outside the brain, and 1 and 2 to the left and right hemispheres - but we can also ask for the side name instead of the number:
# +
# Ask for identity of some indexes in the stack:
print("By index:", bg_atlas.hemisphere_from_coords((50, 40, 30)))
# Now give coordinates in microns
print("By coordinates:", bg_atlas.hemisphere_from_coords((5000, 4000, 3000), microns=True))
# Now ask for the side as a string
print("By string:", bg_atlas.hemisphere_from_coords((5000, 4000, 3000), microns=True, as_string=True))
# -
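# The integer-to-side convention described above can also be captured in a small lookup table, e.g. to label raw values read directly out of the hemispheres stack (the names here are our own convention, not part of the API):

# +
HEMISPHERE_NAMES = {0: "outside", 1: "left", 2: "right"}

print([HEMISPHERE_NAMES[v] for v in (0, 1, 2)])  # ['outside', 'left', 'right']
# -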